LLaMA-Mesh by Nvidia: LLM for 3D Mesh Generation
Dive into Nvidia’s LLaMA-Mesh: Unifying 3D Mesh Generation with Language Models, an LLM adapted to understand 3D objects.
LLaMA-Mesh by Nvidia: LLM for 3D Mesh Generation Read More »
Dive into Tokenformer, a novel architecture that improves Transformers to support incremental model growth without training from scratch
Tokenformer: Rethinking Transformer Scaling with Tokenized Model Parameters Read More »
In this post we dive into Stanford research presenting Generative Reward Models, a hybrid human-and-AI reinforcement learning approach to improve LLMs
Generative Reward Models: Merging the Power of RLHF and RLAIF for Smarter AI Read More »
In this post we dive into Sapiens, a new family of computer vision models by Meta AI that show remarkable advancement in human-centric tasks!
Sapiens by Meta AI: Foundation for Human Vision Models Read More »
In this post we dive into Mixture of Nested Experts, a new method presented by Google that can dramatically reduce AI computational cost
Mixture of Nested Experts: Adaptive Processing of Visual Tokens Read More »
Diving into the original Google paper that introduced the Mixture-of-Experts (MoE) method, which was critical to AI progress
Introduction to Mixture-of-Experts | Original MoE Paper Explained Read More »
In this post we explain the Mixture-of-Agents method, which shows how open-source LLMs can be combined to outperform GPT-4o on AlpacaEval 2.0
Mixture-of-Agents Enhances Large Language Model Capabilities Read More »
In this post we dive into Abacus Embeddings, which dramatically enhance Transformers’ arithmetic capabilities with strong logical extrapolation
Arithmetic Transformers with Abacus Positional Embeddings Read More »
In this post we dive into Consistency Large Language Models (CLLMs), a new family of models that can dramatically speed up LLM inference!
Learn about Representation Finetuning (ReFT) by Stanford University, a method to fine-tune large language models (LLMs) efficiently.
ReFT: Representation Finetuning for Language Models Read More »