NLP

Orca 2: Teaching Small Language Models How to Reason

Several months ago, Microsoft released the first version of Orca, which achieved remarkable results, even surpassing ChatGPT on the BigBench-Hard dataset, and the ideas from Orca 1 have since helped shape better language models. The Orca 2 model, presented in the paper we review in this post, achieves significantly better […]

CODEFUSION: A Pre-trained Diffusion Model for Code Generation

CODEFUSION is a new code generation model introduced in a research paper from Microsoft titled “CODEFUSION: A Pre-trained Diffusion Model for Code Generation”. Recently, we’ve observed significant progress in AI-based code generation, mostly driven by large language models (LLMs), which we therefore refer to as code LLMs. With a

Large Language Models As Optimizers – OPRO by Google DeepMind

OPRO (Optimization by PROmpting) is a new approach that leverages large language models as optimizers, introduced by Google DeepMind in a research paper titled “Large Language Models As Optimizers”. Large language models are very good at taking a prompt, such as an instruction or a question, and yielding a useful response that matches
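
To make the idea more concrete, here is a rough, hypothetical sketch of an optimization-by-prompting loop, not the paper's code: an optimizer LLM is repeatedly shown previously scored solutions in a meta-prompt and asked to propose a better one. The query_llm stand-in and the toy score objective are assumptions for illustration only.

    import random
    import string

    def query_llm(prompt: str) -> str:
        """Stand-in for a real LLM API call (assumption); returns a random
        string so the example runs without an external service."""
        return "".join(random.choice(string.ascii_lowercase + " ") for _ in range(19))

    def score(candidate: str) -> float:
        """Toy objective: character overlap with a hidden target string."""
        target = "the quick brown fox"
        return sum(a == b for a, b in zip(candidate, target)) / len(target)

    def opro_loop(task_description: str, steps: int = 20) -> str:
        # Trajectory of (solution, score) pairs, kept sorted by score so the
        # best solutions appear last in the meta-prompt, as OPRO's meta-prompt does.
        trajectory: list[tuple[str, float]] = []
        for _ in range(steps):
            history = "\n".join(f"text: {s!r}, score: {v:.2f}" for s, v in trajectory)
            meta_prompt = (
                f"{task_description}\n\nPrevious solutions and their scores:\n"
                f"{history}\nPropose a new solution with a higher score."
            )
            candidate = query_llm(meta_prompt).strip()
            trajectory.append((candidate, score(candidate)))
            trajectory.sort(key=lambda pair: pair[1])
        return trajectory[-1][0]  # best solution found so far

    if __name__ == "__main__":
        print(opro_loop("Find a 19-character string that scores as high as possible."))

In the actual method, the scoring function is the downstream task metric (for example, accuracy of a prompt on a training set), and the LLM call is a real model rather than this random placeholder.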

WizardMath – Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct

Welcome WizardMath, a new open-source large language model contributed by Microsoft. While top large language models such as GPT-4 have demonstrated remarkable capabilities in various tasks, including mathematical reasoning, they are not open-source. For open-source large language models such as LLaMA-2 the situation is different, and until now they have not demonstrated strong math

LongNet: Scaling Transformers to 1B Tokens with Dilated Attention

In this post we review LongNet, a new research paper by Microsoft titled “LongNet: Scaling Transformers to 1,000,000,000 Tokens”. The paper opens with an amusing chart that shows the trend of transformer sequence lengths over time on a non-logarithmic y-axis, and we can see LongNet far above the rest with its one billion tokens.
