NLP

[Image: Speculative experts loading]

Fast Inference of Mixture-of-Experts Language Models with Offloading

In this post, we dive into a new research paper titled "Fast Inference of Mixture-of-Experts Language Models with Offloading". In recent years, large language models have driven remarkable advances in AI, with closed-source models such as GPT-3 and GPT-4, and with open-source models such […]
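The core trick here is to keep only a small working set of recently used experts in fast GPU memory and load the rest on demand. As a rough, hypothetical illustration (not the paper's code), here is a minimal LRU expert cache in Python, where `load_expert` is a placeholder for whatever actually copies an expert's weights from host memory or disk:

```python
from collections import OrderedDict

class ExpertCache:
    """Toy LRU cache holding at most `capacity` experts in fast (GPU)
    memory; `load_expert` is a placeholder that fetches an expert's
    weights from slower host memory or disk."""
    def __init__(self, capacity, load_expert):
        self.capacity = capacity
        self.load_expert = load_expert
        self.cache = OrderedDict()  # expert_id -> expert weights

    def get(self, expert_id):
        if expert_id in self.cache:
            self.cache.move_to_end(expert_id)   # mark as most recently used
        else:
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)  # evict the least recently used expert
            self.cache[expert_id] = self.load_expert(expert_id)
        return self.cache[expert_id]
```

On top of caching, the paper also explores speculative expert loading: guessing which experts the next layer will select so they can be prefetched while the current layer is still computing.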

[Image: LLM_in_a_flash architecture]

LLM in a flash: Efficient Large Language Model Inference with Limited Memory

In this post we dive into a new research paper from Apple titled "LLM in a flash: Efficient Large Language Model Inference with Limited Memory". Before diving in, if you prefer a video format, check out our video review of this paper. In recent years, we've seen tremendous success of large language models […]

CODEFUSION: A Pre-trained Diffusion Model for Code Generation

CODEFUSION is a new code generation model introduced in a research paper from Microsoft, titled "CODEFUSION: A Pre-trained Diffusion Model for Code Generation". Recently, we've observed significant progress in AI-based code generation, mostly driven by large language models (LLMs), so we refer to them as code LLMs. With a […]

[Image: OPRO framework overview]

Large Language Models As Optimizers – OPRO by Google DeepMind

OPRO (Optimization by PROmpting) is a new approach for leveraging large language models as optimizers, introduced by Google DeepMind in a research paper titled "Large Language Models As Optimizers". Large language models are very good at taking a prompt, such as an instruction or a question, and yielding a useful response that matches […]
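In OPRO, the model is repeatedly shown its best solutions so far, each paired with its score, and asked to propose a better one. Here is a minimal sketch of that loop in Python; `llm` and `score_fn` are hypothetical placeholders for a text-generation call and a task-specific evaluator, not DeepMind's implementation:

```python
def opro_optimize(score_fn, llm, n_steps=20, top_k=8):
    """Sketch of the OPRO loop; `llm` and `score_fn` are placeholders."""
    trajectory = []  # (solution, score) pairs collected so far
    for _ in range(n_steps):
        # Build the meta-prompt: the top_k solutions so far, sorted
        # from worst to best, each shown with its score.
        best = sorted(trajectory, key=lambda p: p[1])[-top_k:]
        meta_prompt = "Here are previous solutions and their scores:\n"
        meta_prompt += "\n".join(f"text: {s}\nscore: {v}" for s, v in best)
        meta_prompt += "\nWrite a new solution that achieves a higher score:"
        # Ask the LLM for a new candidate, evaluate it, and record it.
        candidate = llm(meta_prompt)
        trajectory.append((candidate, score_fn(candidate)))
    return max(trajectory, key=lambda p: p[1])  # best (solution, score) pair
```

For prompt optimization, `score_fn` would be something like the accuracy a candidate instruction achieves on a small set of task examples.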

[Image: Active Evol-Instruct]

WizardMath – Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct

Welcome WizardMath, a new open-source large language model contributed by Microsoft. While top large language models such as GPT-4 have demonstrated remarkable capabilities in various tasks, including mathematical reasoning, they are not open-source. For open-source large language models such as LLaMA-2 the situation is different: until now, they have not demonstrated strong math […]

[Image: Dilated attention overview]

LongNet: Scaling Transformers to 1B Tokens with Dilated Attention

In this post we review LongNet, a new research paper by Microsoft titled "LongNet: Scaling Transformers to 1,000,000,000 Tokens". The paper opens with an amusing chart showing the trend of transformer sequence lengths over time on a non-logarithmic y-axis, where LongNet sits far above the rest with its one billion tokens.
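Dilated attention, LongNet's key mechanism, splits the sequence into segments and, within each segment, attends only over every r-th position, which keeps attention cost manageable as sequences grow. A minimal single-head NumPy sketch of one (segment length, dilation rate) configuration might look like this; the paper mixes several such configurations across heads so that every position is covered:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def dilated_attention(q, k, v, w=8, r=2):
    """Single-head dilated attention for one (w, r) configuration.
    q, k, v: arrays of shape (seq_len, d). Positions skipped by the
    dilation get zero output in this toy version; the paper combines
    several (w, r) pairs so every position is attended to."""
    n, d = q.shape
    out = np.zeros_like(v)
    for start in range(0, n, w):  # split the sequence into segments of length w
        idx = np.arange(start, min(start + w, n))[::r]  # keep every r-th position
        qi, ki, vi = q[idx], k[idx], v[idx]
        attn = softmax(qi @ ki.T / np.sqrt(d))  # dense attention on the sparsified segment
        out[idx] = attn @ vi                    # scatter results back to original positions
    return out
```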
