Computer and Information Sciences

Research Graph

Artificial Intelligence · Computer and Information Sciences
Published
Author Yao Chen

Introduction In April 2025, Meta AI released Llama 4, the latest iteration of its open-weight large language model (LLM) family. Building on the success of its predecessors, Llama 4 introduced groundbreaking features such as native multimodal capabilities, an innovative Mixture of Experts (MoE) architecture, and an unprecedented context window of up to 10 million tokens.

Artificial Intelligence · Computer and Information Sciences
Published
Author Yanran Luo

Introduction Large Language Models (LLMs) have transformed how we build intelligent applications, but implementing production-ready AI systems often requires navigating complex infrastructure, managing model deployments, and building custom interfaces. Many developers face a challenging choice: use simple but limiting no-code platforms that hide the complexity, or build everything from scratch with code-heavy frameworks.

Persistent Identifiers · Computer and Information Sciences
Published

Introduction In today’s digital research landscape, finding, accessing, and connecting scholarly information has become increasingly complex. How do we reliably identify research outputs, the people who create them, and the organisations where this work takes place? This is where persistent identifiers (PIDs) come into play.

Artificial Intelligence · Computer and Information Sciences
Published

Introduction The AI landscape is evolving at breakneck speed, and Google is firmly in the race with its Gemini family of models. The latest iteration, Gemini 2.5 Pro, recently became available in preview, promising significant advancements in reasoning, coding, multimodality, and an enormous context window.

Persistent Identifiers · Computer and Information Sciences
Published
Author Yanran Luo

Introduction In today’s digital research landscape, connecting the dots between various research outputs has become increasingly challenging. Each day, thousands of new papers are published, datasets are created, and research grants are awarded. Yet despite this increase in information, finding meaningful connections between these elements can feel like searching for specific stars in a vast galaxy without a map.

Artificial Intelligence · Computer and Information Sciences
Published
Author Yanran Luo

Introduction Large language models like GPT-4 have become remarkably capable at solving a wide range of problems, but they still face a fundamental limitation: they generate text token by token, making decisions linearly without the ability to explore multiple paths or backtrack when needed.
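The contrast between linear decoding and tree-style exploration can be sketched in a few lines. This is a toy illustration, not the article's method: `propose` and `score` are hypothetical stand-ins for LLM calls that would suggest and evaluate candidate reasoning steps, while the beam keeps only promising branches, effectively abandoning (backtracking from) weak ones.

```python
# Toy sketch of searching over "thoughts" instead of decoding linearly.
# `propose` and `score` are placeholders for LLM calls in a real system.

def propose(state: str) -> list[str]:
    # Placeholder: a real system would ask an LLM for candidate next steps.
    return [state + "a", state + "b"]

def score(state: str) -> int:
    # Placeholder heuristic: prefer states containing more 'a's.
    return state.count("a")

def tree_search(start: str, depth: int, beam: int = 2) -> str:
    frontier = [start]
    for _ in range(depth):
        # Expand every surviving state, then keep only the best `beam`
        # candidates, so weak branches are dropped rather than pursued.
        candidates = [c for s in frontier for c in propose(s)]
        frontier = sorted(candidates, key=score, reverse=True)[:beam]
    return frontier[0]

print(tree_search("", depth=3))  # best state found after 3 expansion rounds
```

A token-by-token decoder commits to each choice as it goes; the loop above instead holds several partial solutions at once and ranks them before committing.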

Artificial Intelligence · Computer and Information Sciences
Published

Introduction Generated Knowledge Prompting is a prompt engineering technique designed to enhance the performance of large language models (LLMs) by leveraging their ability to generate relevant knowledge dynamically. By first generating useful knowledge related to the task, the model can better understand the context and provide more accurate, adaptable, and contextually rich answers.
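The two-stage pattern described here can be sketched as follows. The `call_llm` function is a hypothetical stand-in for any chat-completion API; here it returns canned strings so the example runs standalone, but the structure (generate knowledge first, then answer with that knowledge prepended) is the technique itself.

```python
# Sketch of Generated Knowledge Prompting: elicit knowledge first,
# then answer with that knowledge as added context.

def call_llm(prompt: str) -> str:
    # Placeholder: a real implementation would call an LLM API here.
    if prompt.startswith("Generate"):
        return ("Penguins are flightless birds; their wings are "
                "adapted as flippers for swimming.")
    return "No. Penguins cannot fly; their wings are used for swimming."

def generated_knowledge_answer(question: str) -> str:
    # Step 1: ask the model to produce background knowledge for the question.
    knowledge = call_llm(f"Generate relevant knowledge for: {question}")
    # Step 2: answer the question with the generated knowledge prepended.
    prompt = f"Knowledge: {knowledge}\nQuestion: {question}\nAnswer:"
    return call_llm(prompt)

print(generated_knowledge_answer("Can penguins fly?"))
```

The key design choice is that the model grounds its final answer in knowledge it surfaced explicitly, rather than relying on whatever it recalls implicitly while decoding.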

Persistent Identifiers · Computer and Information Sciences
Published

Introduction Imagine a world where every piece of research—every article, dataset, researcher, and institution—is seamlessly connected, no matter where it resides or how the digital landscape shifts. This isn’t a distant dream; it’s the reality being forged by persistent identifiers (PIDs). These unassuming strings of characters are revolutionizing how we create, share, and build upon knowledge.

Artificial Intelligence · Computer and Information Sciences
Published

Introduction Large language models (LLMs) like ChatGPT have transformed how we use artificial intelligence, excelling at tasks like writing essays, answering questions, and even holding conversations. But when it comes to complex reasoning—think solving math problems, tackling commonsense puzzles, or working with symbols—these models often hit a wall.

Artificial Intelligence · Computer and Information Sciences
Published
Author Yao Chen

Introduction The rapid advancement of Large Language Models (LLMs) has revolutionised artificial intelligence, enabling powerful text understanding and generation capabilities. However, despite their strengths, LLMs often struggle with complex reasoning tasks that require deep abstraction. A new paradigm known as Meta Prompting (MP) has emerged as a promising technique to enhance LLM efficiency and cognitive depth.

Artificial Intelligence · Computer and Information Sciences
Published

The Evolution of Numerical Reasoning in AI: The Power of Program of Thoughts (PoT)

Introduction Artificial Intelligence has made significant strides in solving complex numerical reasoning tasks. Traditional approaches focused on training models from scratch or fine-tuning them to demonstrate reasoning step by step. However, these methods often required extensive datasets and manually crafted reasoning steps.
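The core idea of PoT, delegating computation to an interpreter instead of the model, can be sketched minimally. This is an illustrative assumption, not the article's implementation: `fake_model_generate` stands in for an LLM prompted to write Python, and the convention that the answer lands in a variable named `ans` is hypothetical.

```python
# Sketch of Program of Thoughts (PoT): the model emits executable code,
# and the interpreter (not the model) computes the final numeric answer.

def fake_model_generate(question: str) -> str:
    # Placeholder for an LLM call; a real system would prompt the model
    # to "write Python that computes the answer" to `question`.
    return (
        "principal = 1000\n"
        "rate = 0.05\n"
        "years = 3\n"
        "ans = principal * (1 + rate) ** years\n"
    )

def program_of_thoughts(question: str) -> float:
    code = fake_model_generate(question)
    namespace: dict = {}
    exec(code, namespace)    # run the generated program in a fresh namespace
    return namespace["ans"]  # assumed convention: answer stored in `ans`

result = program_of_thoughts(
    "What is $1000 at 5% compound interest after 3 years?"
)
print(result)
```

Because the arithmetic runs in Python rather than being predicted token by token, the numeric step is exact even when the model's own arithmetic would not be.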