From Diffusion Models to LCM-LoRA
Following the LCM-LoRA release, in this post we explore the evolution of diffusion models up to latent consistency models with LoRA.
From Diffusion Models to LCM-LoRA Read More »
In this post we dive into Microsoft’s CODEFUSION, an approach that uses diffusion models for code generation and achieves remarkable results.
CODEFUSION: A Pre-trained Diffusion Model for Code Generation Read More »
In this post we dive into Table-GPT, novel research from Microsoft that empowers LLMs to understand tabular data.
Table-GPT: Empower LLMs To Understand Tables Read More »
In this post we explain the paper “Vision Transformers Need Registers” by Meta AI, which describes an interesting behavior in DINOv2 features.
Vision Transformers Need Registers – Fixing a Bug in DINOv2? Read More »
In this post we dive into Emu, a text-to-image generation model by Meta AI, which is quality-tuned to generate highly aesthetic images.
Emu: Enhancing Image Generation Models Using Photogenic Needles in a Haystack Read More »
In this post we dive into NExT-GPT, a multimodal large language model (MM-LLM) that can both understand and respond in multiple modalities.
NExT-GPT: Any-to-Any Multimodal LLM Read More »
In this post we dive into the Large Language Models As Optimizers paper by Google DeepMind, which introduces OPRO (Optimization by PROmpting).
Large Language Models As Optimizers – OPRO by Google DeepMind Read More »
In this post we cover FACET, a new benchmark dataset created by Meta AI to evaluate the fairness of computer vision models.
FACET: Fairness in Computer Vision Evaluation Benchmark Read More »
Discover an in-depth review of the Code Llama paper, a specialized version of the Llama 2 model designed for coding tasks.
Code Llama Paper Explained Read More »
Diving into WizardMath, an LLM for mathematical reasoning contributed by Microsoft, surpassing models such as WizardLM and LLaMA-2.