Consistency models illustration

Consistency Models – Optimizing Diffusion Models Inference

Consistency models are a new type of generative model introduced by OpenAI, and in this post we dive into how they work…
LIMA overview

LIMA from Meta AI – Less Is More for Alignment of LLMs

In this post we explain LIMA, an LLM by Meta AI that was fine-tuned on only 1,000 samples, yet achieves results competitive with top LLMs…
Shepherd example

Shepherd: A Critic for Language Model Generation

Dive into Shepherd, an LLM from Meta AI designed to critique responses from other LLMs, a step toward resolving LLM hallucinations…
LLM attacks

Universal and Transferable Adversarial LLM Attacks

LLMs are aligned for safety to avoid generating harmful content. In this post we review a paper that successfully attacks aligned LLMs…
Meta-Transformer

Meta-Transformer: A Unified Framework for Multimodal Learning

In this post we dive into Meta-Transformer, a unified framework for multimodal learning that can process information from 12(!) modalities…
Soft MoE

From Sparse to Soft Mixture of Experts

In this post we review Google DeepMind’s paper introducing Soft Mixture of Experts, a fully-differentiable sparse Transformer…
YOLO-NAS

What Is YOLO-NAS and How It Was Created

YOLO-NAS is an object detection model that, at its release, achieved a leading accuracy-latency tradeoff. In this post we explain how it was created…