
Introducing Matrix – The Interface to AGI

AI was supposed to change the way we work.

Instead, we got the world’s greatest chatbot.





Hebbia’s mission since 2020 has been to deliver on the promise of AI. 

In 2020, we deployed the first production RAG system. That year, we watched RAG fail on 84% of user queries.

  1. Tasks like “Evaluate this investment opportunity” weren’t prompts; they were processes.
  2. Questions like “What are the fastest growing revenue segments of the top gaming companies?” were never stated in the data; they were about the data, far more than could fit in a single context window.
  3. Even with citations to sources and the most capable models, users couldn’t trust generation without understanding a model’s thought process. Important decisions are discriminative, not only generative.

It turns out knowledge work didn’t boil down to a vector database.




This is not a technology problem. AI models are already smarter than humans on almost every measure of intelligence.

This is a product problem. 

We built Matrix to be AI that works the way you work:

  1. No question is too complex.
  2. No amount of data is too much.
  3. No response is a black box.



With world-class decomposition, no question is too complex.

The moment a user asks a question, Matrix breaks complex tasks down into single steps it can take with agents. This lets Matrix provide complete answers, analyze every document exhaustively, and execute end-to-end processes.

Over time, decomposition even improves based on the actions and processes you or the agent have carried out in the past, without any retraining.
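
In simplified form, the pattern looks like this: a planner turns a complex question into single-step actions, and an agent executes each step over its documents. The Python below is an illustration only, with hypothetical names, not our production system.

    # A planner breaks a complex question into single-step actions, and an
    # agent executes each step over its documents. All names are illustrative.
    from dataclasses import dataclass

    @dataclass
    class Step:
        description: str      # one single-step action an agent can take
        documents: list[str]  # the files that step reads

    def plan(question: str, files: list[str]) -> list[Step]:
        # In production an LLM would generate this plan; it is hard-coded
        # here to show the shape of the output.
        return [
            Step("find the top gaming companies by revenue", files),
            Step("extract revenue by segment for each company", files),
            Step("compute segment growth rates and rank them", []),
        ]

    def run(question: str, files: list[str]) -> list[str]:
        results = []
        for step in plan(question, files):
            # Each step is small enough for a single agent call.
            results.append(f"[agent output for: {step.description}]")
        return results

    print(run("What are the fastest growing revenue segments of the top gaming companies?",
              ["annual_report_a.pdf", "annual_report_b.pdf"]))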

With instant, parallelized data ingest, you can run analysis over all your files, not just the top results.

To drive immediate value for our customers, Hebbia built the fastest semantic indexing engine ever.
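
As a rough illustration of the idea, here is parallelized ingest in miniature: every file is chunked and embedded concurrently, so analysis can run over the full corpus rather than a handful of top results. The embed function is a placeholder for any embedding model; this sketch is not our indexing engine.

    # Every file is chunked and embedded concurrently, so downstream analysis
    # can run over the full corpus. embed() is a placeholder for any embedding
    # model; swap in a real model call in practice.
    from concurrent.futures import ThreadPoolExecutor

    def embed(chunk: str) -> list[float]:
        return [float(len(chunk))]  # placeholder embedding

    def ingest_file(path: str) -> list[tuple[str, list[float]]]:
        with open(path, encoding="utf-8", errors="ignore") as f:
            text = f.read()
        chunks = [text[i:i + 1000] for i in range(0, len(text), 1000)]
        return [(chunk, embed(chunk)) for chunk in chunks]

    def build_index(paths: list[str]) -> dict:
        # Ingest every file at once instead of one at a time.
        with ThreadPoolExecutor(max_workers=8) as pool:
            return dict(zip(paths, pool.map(ingest_file, paths)))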

Matrix is natively multimodal: it can reason over charts and tables and process any document type, whether PDFs, images, email chains, or decks. With dynamic routing between text-only LLMs and vision models for detection, parsing, and localization, we use the best model for every task.
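
In simplified form, that routing looks like this: pages with charts, tables, or images go to a vision model; plain prose goes to a text-only LLM. The rule and model labels below are illustrative stand-ins for the real detection logic.

    # Pages with visual content go to a vision model for detection, parsing,
    # and localization; plain prose goes to a text-only LLM.
    def needs_vision(page: dict) -> bool:
        return bool(page.get("images") or page.get("tables") or page.get("charts"))

    def route(page: dict) -> str:
        return "vision-model" if needs_vision(page) else "text-llm"

    pages = [
        {"id": 1, "text": "Plain prose page."},
        {"id": 2, "text": "Quarterly results.", "tables": ["revenue_by_segment"]},
    ]
    print([route(p) for p in pages])  # ['text-llm', 'vision-model']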

With the first collaborative interface to AI, users can see how AI is making decisions, and collaborate on those processes live. 

To build the first trustworthy AI, we made decision-making processes visual, breaking internal decisions out into a familiar data grid. Users collaborate, edit, update, and co-work with AI models in the Matrix interface instead of blindly trusting them.

Every company, and every person, works in a unique way. With Matrix, you can teach an AI how you make decisions.




The world’s leading financial, legal, consulting, and pharmaceutical firms are using Matrix today to drive value, cut costs, or do entirely new kinds of analysis.

There is reason to believe it is already significantly impacting the public markets. Work is about to change forever.

C-suite executives have called it “the first actually useful AI,” “what we’ve been looking for,” and “terrifying.”

We call it Matrix. And we’re sharing it with the world today.




The future of AI is a blank canvas.

We’re excited to see what you build.

Enter the Matrix.




Learn more: 

Join the waitlist

Watch the launch video

Product philosophy: “AGI is a product problem”
