An update from Adept

June 28, 2024 — Adept Team

Announcing some updates to our strategy and the company.


Adept is now SOC 2 Type 1 Compliant

May 1, 2024 — Adept Team

Adept is committed to keeping customer and company data secure. That is why we are thrilled to announce that, as of March 28, 2024, Adept is SOC 2 Type 1 compliant.


Adept Fuyu-Heavy: A new multimodal model

January 24, 2024 — Adept Team

Adept Fuyu-Heavy is a new multimodal model designed specifically for digital agents.


Introducing Adept Experiments

November 9, 2023 — Adept Team

We're opening access to Adept Experiments, a new way to explore the technology we are developing at Adept.


Fuyu-8B: A Multimodal Architecture for AI Agents

October 17, 2023 — Rohan Bavishi, Erich Elsen, Curtis Hawthorne, Maxwell Nye, Augustus Odena, Arushi Somani, Sağnak Taşırlar

We’re open-sourcing Fuyu-8B, a small version of the multimodal model that powers our product.


The Adventure of the Errant Hardware

September 19, 2023 — Erich Elsen, Curtis Hawthorne, Arushi Somani

A tale of mystery, intrigue, and derring-do. We recount our investigation into curious errors occurring during our large training runs: clues found, causes deciphered, and solutions implemented.


Releasing Persimmon-8B

September 7, 2023 — Erich Elsen, Augustus Odena, Maxwell Nye, Sağnak Taşırlar, Tri Dao, Curtis Hawthorne, Deepak Moparthi, Arushi Somani

We’re open-sourcing Persimmon-8B, the most powerful fully permissively licensed language model with fewer than 10 billion parameters.


Announcing our Series B

March 14, 2023 — Adept Team

We’ve raised $350M in new funding as part of our Series B led by General Catalyst and co-led by Spark Capital, with additional participation from existing investors, new financial partners, and some of the most iconic companies in tech.


FlashAttention: Fast Transformer training with long sequences

January 17, 2023 — Tri Dao

Transformers have grown deeper and wider, but training them on long sequences remains difficult. The attention layer at their heart is the compute and memory bottleneck: doubling the sequence length quadruples the runtime and memory requirements.
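The quadratic scaling mentioned above comes from standard attention materializing a full n-by-n score matrix. Below is a minimal NumPy sketch of naive attention that makes this cost visible; it is an illustration of the generic algorithm, not Adept's or FlashAttention's implementation, and the shapes and function name are our own.

```python
import numpy as np

def naive_attention(q, k, v):
    """Naive scaled dot-product attention over one head.

    q, k, v: arrays of shape (n, d). The (n, n) score matrix below
    is why memory and compute grow quadratically with sequence length n.
    """
    scores = q @ k.T / np.sqrt(q.shape[-1])                 # (n, n) matrix
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)          # softmax rows
    return weights @ v                                      # (n, d) output

# The score matrix alone holds n * n floats, so doubling n quadruples it.
rng = np.random.default_rng(0)
for n in (512, 1024):
    q = rng.standard_normal((n, 64))
    score_bytes = n * n * q.dtype.itemsize
    print(f"n={n}: score matrix uses {score_bytes} bytes")
```

FlashAttention avoids ever materializing the full score matrix by computing the softmax in tiles, which is what makes long-sequence training tractable.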


ACT-1: Transformer for Actions

September 14, 2022 — Adept Team

AI has moved at an incredible pace in the last few years. Scaling up Transformers has led to remarkable capabilities in language (e.g., GPT-3, PaLM, Chinchilla), code (e.g., Codex, AlphaCode), and image generation (e.g., DALL-E, Imagen).


Introducing Adept

April 26, 2022 — David Luan

Adept is an ML research and product lab building general intelligence by enabling people and computers to work together creatively.