  • Code Llama Paper Explained

    Code Llama is a new family of open-source large language models for code by Meta AI that includes three types of models, each released with 7B, 13B, and 34B parameters. In this post, we’ll explain the research paper behind them, titled “Code Llama: Open Foundation Models for Code”, to understand how these models…

  • DINOv2 from Meta AI – Finally a Foundational Model in Computer Vision

    DINOv2 is a computer vision model from Meta AI that claims to finally provide a foundational model in computer vision, closing some of the gap with natural language processing, where foundational models have been common for a while now. In this post, we’ll explain what it means to be a foundational model in computer vision…

  • I-JEPA – A Human-Like Computer Vision Model

    I-JEPA, Image-based Joint-Embedding Predictive Architecture, is an open-source computer vision model from Meta AI, and the first AI model based on Yann LeCun’s vision for a more human-like AI, which he presented last year in a 62-page paper titled “A Path Towards Autonomous Machine Intelligence”. In this post we’ll dive into the research paper that…

  • What is YOLO-NAS and How it Was Created

    In this post we dive into YOLO-NAS, an improved model in the YOLO family for object detection, which was presented earlier this year by Deci. YOLO models have been around for a while now, first introduced in 2015 with the paper “You Only Look Once”, which is what the acronym YOLO stands for, and over…

  • The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits

    In this post, we dive into a new and exciting research paper by Microsoft, titled “The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits”. In recent years, we’ve seen tremendous success with large language models such as GPT, LLaMA, and more. As we move forward, we see that the…

  • Self-Rewarding Language Models by Meta AI

    On January 18, Mark Zuckerberg announced that the long-term goal of Meta AI is to build general intelligence and open-source it responsibly. So Meta AI is officially working on building an open-source AGI. On the same day, Meta AI released a new research paper titled “Self-Rewarding Language Models”, which can be a step that…

  • Fast Inference of Mixture-of-Experts Language Models with Offloading

    In this post, we dive into a new research paper titled “Fast Inference of Mixture-of-Experts Language Models with Offloading”. In recent years, large language models have driven remarkable advances in AI, with closed-source models such as GPT-3 and GPT-4, and with open-source models such…

  • TinyGPT-V: Efficient Multimodal Large Language Model via Small Backbones

    In this post we dive into TinyGPT-V, a new multimodal large language model which was introduced in a research paper titled “TinyGPT-V: Efficient Multimodal Large Language Model via Small Backbones”. Before diving in, if you prefer a video format, check out our video review for this paper. In recent years we’ve seen a…

  • LLM in a flash: Efficient Large Language Model Inference with Limited Memory

    In this post we dive into a new research paper from Apple titled “LLM in a flash: Efficient Large Language Model Inference with Limited Memory”. Before diving in, if you prefer a video format, check out our video review for this paper. In recent years, we’ve seen tremendous success of large language…

  • How Do Vision Transformers Work?

    Until vision transformers were invented, the dominant model architecture in computer vision was the convolutional neural network (CNN), invented in 1989 by famous researchers including Yann LeCun and Yoshua Bengio. In 2017, transformers were introduced by Google and took the natural language processing domain by storm, but were not adapted successfully to computer…

  • Orca 2: Teaching Small Language Models How to Reason

    Several months ago, Microsoft released the first version of Orca, which achieved remarkable results, even surpassing ChatGPT on the BigBench-Hard dataset, and the ideas from Orca 1 helped create better language models released since then. The Orca 2 model, presented in the paper we review in this post, achieves significantly better…

  • From Diffusion Models to LCM-LoRA

    Recently, a new research paper was released, titled “LCM-LoRA: A Universal Stable-Diffusion Acceleration Module”, which presents a method to generate high-quality images with large text-to-image generation models, specifically SDXL, dramatically faster. And not only can it run SDXL much faster, it can also do so for a fine-tuned SDXL, say for…

  • CODEFUSION: A Pre-trained Diffusion Model for Code Generation

    CODEFUSION is a new code generation model introduced in a research paper from Microsoft, titled “CODEFUSION: A Pre-trained Diffusion Model for Code Generation”. Recently, we’ve observed significant progress in code generation using AI, mostly based on large language models (LLMs), so we refer to them as code LLMs. With a…

  • Table-GPT: Empower LLMs To Understand Tables

    Nowadays, we are witnessing tremendous progress with large language models (LLMs) such as ChatGPT, Llama, and more, where we can feed an LLM a text instruction or question and most of the time get an accurate response from the model. However, if we try to feed the model with table data, in…
