ImageBind: One Embedding Space To Bind Them All
ImageBind is a multimodal model by Meta AI. In this post, we dive into the ImageBind research paper to understand what it is and how it works.
Consistency models are a new type of generative model introduced by OpenAI. In this post we dive into how they work.
Consistency Models – Optimizing Diffusion Models Inference
In this post we explain LIMA, an LLM by Meta AI that was fine-tuned on only 1,000 samples, yet achieves results competitive with top LLMs.
LIMA from Meta AI – Less Is More for Alignment of LLMs
Dive into Shepherd, an LLM from Meta AI designed to critique responses from other LLMs, a step toward resolving LLM hallucinations.
Shepherd: A Critic for Language Model Generation
LLMs are aligned for safety to avoid generating harmful content. In this post we review a paper that successfully attacks aligned LLMs.
Universal and Transferable Adversarial LLM Attacks
In this post we dive into Meta-Transformer, a unified framework for multimodal learning that can process information from 12(!) modalities.
Meta-Transformer: A Unified Framework for Multimodal Learning
In this post we review Google DeepMind's paper introducing Soft Mixture of Experts, a fully differentiable sparse Transformer.
YOLO-NAS is an object detection model with the best accuracy-latency tradeoff to date. In this post we explain how it was created.