
Mixture of Nested Experts: Adaptive Processing of Visual Tokens

In this post we dive into Mixture of Nested Experts, a new method from Google that can dramatically reduce the computational cost of processing visual tokens…

Introduction to Mixture-of-Experts | Original MoE Paper Explained

Diving into the original Google paper that introduced the Mixture-of-Experts (MoE) method, a technique that has been critical to AI progress…

Mixture-of-Agents Enhances Large Language Model Capabilities

In this post we explain the Mixture-of-Agents method, which shows a way to combine open-source LLMs to outperform GPT-4o on AlpacaEval 2.0…

Arithmetic Transformers with Abacus Positional Embeddings

In this post we dive into Abacus Embeddings, which dramatically enhance Transformers' arithmetic capabilities with strong logical extrapolation…

CLLMs: Consistency Large Language Models

In this post we dive into Consistency Large Language Models (CLLMs), a new family of models that can dramatically speed up LLM inference…

ReFT: Representation Finetuning for Language Models

Learn about Representation Finetuning (ReFT) by Stanford University, a method to fine-tune large language models (LLMs) efficiently…

Stealing Part of a Production Language Model

What if we could discover the internal weights of OpenAI's models? In this post we dive into a paper that presents an attack for stealing data from production LLMs…

How Meta AI's Human-Like V-JEPA Works

Explore V-JEPA, which stands for Video Joint Embedding Predictive Architecture. Another step in Meta AI's journey toward human-like AI…

The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits

In this post we dive into the era of 1-bit LLMs paper by Microsoft, which shows a promising direction for low-cost large language models…

Self-Rewarding Language Models by Meta AI

In this post we dive into the Self-Rewarding Language Models paper by Meta AI, which may be a step towards open-source AGI…

Fast Inference of Mixture-of-Experts Language Models with Offloading

Diving into a research paper that introduces an innovative method for efficient LLM inference using memory offloading…

TinyGPT-V: Efficient Multimodal Large Language Model via Small Backbones

In this post we dive into TinyGPT-V, a small but mighty multimodal LLM that brings Phi-2's success to vision-language tasks…

LLM in a flash: Efficient Large Language Model Inference with Limited Memory

In this post we dive into the LLM in a flash paper by Apple, which introduces a method to run LLMs on devices with limited memory…

Vision Transformers Explained | The ViT Paper

In this post we go back to the seminal vision transformers paper to understand how ViT adapted transformers to computer vision…

Orca 2: Teaching Small Language Models How to Reason

Dive into the Orca 2 research paper, the second version of Microsoft's successful Orca small language model…

From Diffusion Models to LCM-LoRA

Following LCM-LoRA release, in this post we explore the evolution of diffusion models up to latent consistency models with LoRA…

CODEFUSION: A Pre-trained Diffusion Model for Code Generation

In this post we dive into Microsoft's CODEFUSION, an approach that uses diffusion models for code generation and achieves remarkable results…

Table-GPT: Empower LLMs To Understand Tables

In this post we dive into Table-GPT, a novel research effort by Microsoft that empowers LLMs to understand tabular data…