The technologies that make a digital human feel human

Replicating the human experience is a goal born of science fiction. Shaggy is a live, running benchmark built collaboratively by leading AI labs and startups to see whether they can bring that vision to life.

Who's powering Shaggy right now?

Each company below owns a slice of Shaggy's brain, body, or memory. Together they form the stack that lets him see, speak, remember, and improvise like a person.

LiveKit: Real‑time presence

Powers Shaggy’s live audio and video presence so conversations feel like a real-time call, not a loading bar.
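
As a taste of what that looks like in practice, here is a minimal sketch of a voice agent worker built on LiveKit's Python agents framework. It assumes the 1.x `livekit-agents` API (class names and options vary by SDK version), and the provider slots and instructions string are placeholders, not Shaggy's real configuration.

```python
# pip install livekit-agents  (a sketch against the 1.x Python API)
from livekit import agents
from livekit.agents import Agent, AgentSession

async def entrypoint(ctx: agents.JobContext):
    await ctx.connect()  # join the LiveKit room the user is calling into
    session = AgentSession(
        # stt=..., llm=..., tts=...  -> provider plugins go here
    )
    await session.start(
        room=ctx.room,
        agent=Agent(instructions="You are Shaggy. Keep replies short and warm."),
    )

if __name__ == "__main__":
    agents.cli.run_app(agents.WorkerOptions(entrypoint_fnc=entrypoint))
```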

Simultaneous: Multi‑agent coordination

Lets multiple models and tools work in parallel behind the scenes while Shaggy handles just one friendly conversation.
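
Simultaneous's own SDK isn't shown here; below is a generic sketch of the underlying pattern in plain asyncio, where independent lookups fan out in parallel and only one composed reply reaches the user. All function names and return values are illustrative stand-ins.

```python
import asyncio

async def recall_memories(query: str) -> str:
    await asyncio.sleep(0.2)  # stand-in for a memory-store lookup
    return "Dana prefers afternoon meetings."

async def check_calendar(query: str) -> str:
    await asyncio.sleep(0.3)  # stand-in for a calendar/tool call
    return "Friday 3pm is free."

async def handle_turn(user_utterance: str) -> str:
    # Independent lookups run concurrently, not one after another.
    memories, calendar = await asyncio.gather(
        recall_memories(user_utterance),
        check_calendar(user_utterance),
    )
    # Only the single composed reply surfaces in the conversation.
    return f"I remember: {memories} And your calendar says: {calendar}"

print(asyncio.run(handle_turn("Can we meet Friday?")))
```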

Mem0: Long‑term memory

Gives Shaggy a memory that grows over time, so he can remember people, projects, and promises, not just single prompts.
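
As a rough illustration, here is how a memory layer like this can be exercised through Mem0's Python SDK. The stored facts and user id are made up, and the exact return shape of `search` varies across SDK versions.

```python
# pip install mem0ai
from mem0 import Memory

memory = Memory()  # default config; uses an LLM behind the scenes to extract facts

# Store something Shaggy learned mid-conversation.
memory.add(
    "Dana is leading the Q3 launch and asked for a check-in next Friday.",
    user_id="dana",
)

# Later, before replying, recall what matters about this person.
hits = memory.search("What did Dana ask me to do?", user_id="dana")
for hit in hits["results"]:  # return shape differs between SDK versions
    print(hit["memory"])
```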

AgentMail: Outreach & follow‑ups

When Shaggy says he’ll follow up, AgentMail is how the message actually lands in your inbox at the right moment.
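
For flavor, a sketch of what a follow-up send might look like. The client and method names below are assumptions modeled on the general shape of AgentMail's Python SDK, so check them against the current docs before relying on them.

```python
# pip install agentmail  (method names here are assumptions, not verified API)
import os
from agentmail import AgentMail

client = AgentMail(api_key=os.environ["AGENTMAIL_API_KEY"])

# Give Shaggy his own inbox, then send the promised follow-up from it.
inbox = client.inboxes.create()
client.inboxes.messages.send(
    inbox_id=inbox.inbox_id,
    to="you@example.com",
    subject="Following up, as promised",
    text="Here are the notes from our conversation earlier today.",
)
```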

Anam: Face & embodiment

Renders Shaggy as a living avatar on screen, giving his words a face, body, and physical presence instead of just chat bubbles.

Voice

Gives Shaggy a voice that can shift tone, pacing, and emotion so he sounds like a person, not a system prompt.

Deepgram: Listening

Handles speech recognition so Shaggy can listen in messy, real‑world audio and still respond like he caught every word.
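
Here is a minimal sketch of batch transcription with Deepgram's Python SDK (v3); the audio file, model choice, and option set are assumptions, and method paths have shifted between SDK versions.

```python
# pip install deepgram-sdk  (v3; older versions used listen.prerecorded instead)
from deepgram import DeepgramClient, PrerecordedOptions

client = DeepgramClient()  # reads DEEPGRAM_API_KEY from the environment

with open("caller_audio.wav", "rb") as audio:
    payload = {"buffer": audio.read()}

response = client.listen.rest.v("1").transcribe_file(
    payload,
    PrerecordedOptions(model="nova-2", smart_format=True),
)

print(response.results.channels[0].alternatives[0].transcript)
```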

Encountr: Emotional analysis

Reads emotional tone from Shaggy’s voice interactions so he can respond not just to what people say, but how they feel when they say it.

Sinch: Channels

Connects Shaggy to the outside world—phones, messaging, and communication rails—so he can reach people where they actually are.
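
As an illustration, a hedged sketch of sending a single SMS through Sinch's SMS REST API; the region of the endpoint, the phone numbers, and the message body are placeholders, and a real deployment would route through Shaggy's channel layer rather than a raw HTTP call.

```python
import os
import requests

SERVICE_PLAN_ID = os.environ["SINCH_SERVICE_PLAN_ID"]
API_TOKEN = os.environ["SINCH_API_TOKEN"]

resp = requests.post(
    f"https://sms.api.sinch.com/xms/v1/{SERVICE_PLAN_ID}/batches",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={
        "from": "+15550000000",   # a Sinch-provisioned number (placeholder)
        "to": ["+15551234567"],   # the person Shaggy is reaching (placeholder)
        "body": "Hey, it's Shaggy. Following up as promised!",
    },
    timeout=10,
)
resp.raise_for_status()
print(resp.json()["id"])  # batch id, usable for delivery tracking
```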

Curious about adding your tech to Shaggy's stack? Email him at shaggy@shaggy.ai.