
FACET: Fairness in Computer Vision Evaluation Benchmark

In this post we cover FACET, a new benchmark dataset created by Meta AI to evaluate the fairness of computer vision models…

Code Llama Paper Explained

Discover an in-depth review of the Code Llama paper, which presents a specialized version of the Llama 2 model designed for coding tasks…

WizardMath – Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct

Diving into WizardMath, an LLM for mathematical reasoning contributed by Microsoft, surpassing models such as WizardLM and LLaMA-2…

Orca Research Paper Explained

In this post we dive into the Orca paper, which shows how to do imitation tuning effectively, outperforming ChatGPT at about 7% of its size…

LongNet: Scaling Transformers to 1B Tokens with Dilated Attention

In this post we dive into the LongNet research paper, which introduced the Dilated Attention mechanism, and explain how it works…

DINOv2 from Meta AI – Finally a Foundational Model in Computer Vision

DINOv2 by Meta AI finally gives us a foundational model for computer vision. We explain what that means and why DINOv2 qualifies as such…

I-JEPA: The First Human-Like Computer Vision Model

Dive into I-JEPA, the Image-based Joint-Embedding Predictive Architecture, the first model based on Yann LeCun’s vision for a more human-like AI…

ImageBind: One Embedding Space To Bind Them All

ImageBind is a multimodal model by Meta AI. In this post, we dive into the ImageBind research paper to understand what it is and how it works…