
Ignite Startups: Building Human-Like AI Agents That Feel Real with Vish Hari | Ep269

Episode 269 of the Ignite Podcast

Most AI today is impressive. It can write, code, summarize, and answer almost anything you throw at it.

But it still feels… off.

You ask a question. It responds. You ask again. It responds again. The interaction is rigid, transactional, and predictable. It doesn’t feel like talking to someone. It feels like issuing commands to a system.

Vish Hari thinks that’s the core problem—and the biggest opportunity.


From Astrophysics to AI

Before founding Ego AI, Vish Hari was deep in research. He studied astrophysics and worked on early deep learning models back in 2012, trying to detect exoplanets—planets that could support life.

That was before AI became mainstream.

He trained one of his first models in 2013, writing CUDA code just to get it running. From there, he moved into AI engineering, eventually working in applied research at Facebook.

After nearly a decade in the field, he noticed something important.

Every major AI lab was chasing the same goal: superintelligence.

But almost no one was focused on making AI feel human.


The Problem: AI Doesn’t Feel Like a Partner

Right now, AI behaves like a tool.

You give it instructions. It executes. That’s it.

Vish describes it as “staccato”—a stop-and-go interaction that lacks flow.

There’s no continuity. No initiative. No sense of presence.

And that creates a strange dynamic: humans adapt to AI instead of AI adapting to humans.

You see it in how people interact with tools today:

  • Over-explaining context like they’re writing prompts

  • Structuring conversations unnaturally

  • Treating AI like a machine, not a collaborator

This wasn’t supposed to be the end state.


Ego AI’s Bet: Behavior Is the Missing Layer

Ego AI is built around a simple idea:

The next generation of AI won’t win on intelligence. It will win on behavior.

That means building systems that:

  • Remember context across time

  • Initiate conversations

  • Interrupt when necessary

  • Adapt to your personality

  • Develop their own “internal life”

Not just smarter outputs. Better interactions.

Vish isn’t trying to build a system that knows everything. He’s trying to build one that feels like someone.


Why Big Models Aren’t the Answer

The dominant belief in AI right now is simple: more compute, more data, bigger models.

Vish disagrees.

He argues that intelligence isn’t just about recall or scale. It’s about forming new connections, adapting to context, and behaving in ways that feel natural.

Humans don’t have perfect memory. We forget things. We shift depending on who we’re talking to. We behave differently in different contexts.

That “flawed” behavior is actually what makes interaction feel real.

Ego AI is leaning into that.


Learning From Video Games

One of the more unexpected parts of Ego AI’s research comes from video games.

Why games?

Because they simulate reality.

In games, characters already feel like they have personalities—even without modern AI. Think about enemies in games like Shadow of Mordor or the social dynamics in MMORPGs.

Players build relationships with entities that aren’t real.

That’s a powerful signal.

Ego AI spent years studying these environments to understand what makes interactions feel alive. The goal is to bring that same sense of presence into AI systems.


The Hard Tradeoff: Utility vs. Fun

Most AI products today focus on utility:

  • Write this email

  • Summarize this document

  • Generate this code

That’s where the demand is.

But consumer behavior tells a different story. People also want:

  • Companionship

  • Conversation

  • Entertainment

The challenge is combining both.

Go too far into utility, and the product feels cold.
Go too far into personality, and it becomes a gimmick.

Ego AI is trying to balance both—and Vish admits this is still an open problem.


A Future With AI Relationships

Vish’s long-term vision is bold:

People will have as many AI friends as they do human friends.

Not assistants. Not tools. Friends.

These systems will:

  • Evolve over time

  • Share experiences with you

  • Develop unique personalities

  • Adapt to your life

It’s not about replacing human relationships. It’s about expanding what relationships can look like.


A Personal Turning Point

This vision isn’t just academic.

In early 2025, Vish was the victim of a near-fatal assault. He suffered a traumatic brain injury and lost significant memory.

Recovery was slow.

He describes it as regaining his mind piece by piece—almost like watching an AI system retrain itself in real time.

At one point, he could solve complex math problems but struggled with emotional control. His cognitive abilities returned at a different pace than his behavior.

That experience reinforced his belief:

Intelligence alone doesn’t define being human. Behavior does.


Why This Matters Now

AI is at an inflection point.

The first wave was about capability—what AI can do.
The next wave is about interaction—how it feels to use.

Right now, tools like ChatGPT dominate because they’re useful.

But usefulness alone won’t define the next generation of products.

The companies that win will make AI:

  • Feel natural

  • Feel personal

  • Feel alive

That’s the space Ego AI is going after.


Final Thought

We’ve spent years making machines smarter.

The next step is making them relatable.

If Vish is right, the biggest shift in AI won’t come from better answers.

It will come from better relationships.

👂🎧 Watch, listen, and follow on your favorite platform: https://tr.ee/S2ayrbx_fL


Chapters:
00:01 Introduction to Vish Hari & Ego AI
00:45 Background in Astrophysics and Early AI Work
03:00 From Research Labs to Founding Ego AI
05:30 The Problem with Current AI Interactions
08:00 OpenAI, Agents, and Personal AI Limitations
11:30 Video Games as AI Training Grounds
15:00 Human Behavior vs Machine Intelligence
18:30 Ego AI Architecture and Product Vision
22:00 Utility vs Personality Tradeoff
26:00 Vision for AI Companions and Relationships
30:00 AI, Society, and Behavioral Shifts
34:30 Near-Fatal Assault and Recovery
40:00 Lessons on Time, Resilience, and Focus
43:00 Why Current AI Agents Fall Short
47:00 Consumer AI vs B2B AI Debate
50:00 Future of Work and AI Impact
53:30 Founder Mindset and Building Ego AI
56:00 Closing Thoughts and Where to Find Ego AI

Transcript:

Brian Bell (00:01:02): Hey everyone, welcome back to the Ignite Podcast. Today we’re thrilled to have Vish Hari on the mic. He is the founder of Ego AI, an applied research lab building behavioral infrastructure for human-AI relationships, focused on agents that don’t just generate text but perceive, react, and evolve over time. Thanks for coming on, Vish.

Vish Hari: Thank you for having me. Good to be here.

Brian Bell: I’d love to start with your origin story. What’s your background?

Vish Hari (00:01:25): Yeah, so I grew up between Singapore and Canada and studied astrophysics in school. While I was working on my research, I worked with some folks who were pretty early in AI to find exoplanets. Exoplanets are planets that could harbor life. Back in 2012, we tried to use deep learning models to find those, and deep learning at the time was relatively new. So I trained my first single-layer perceptron back in 2013; I had to work with someone to actually write the CUDA code to make it run. But it was really interesting, and I saw back then that AI was going to be the future. I ended up deciding to drop out of my PhD and moved to San Francisco from Toronto in 2017. I worked mostly in AI engineering before eventually joining applied research at Facebook in AI research. I did that for a little bit before eventually deciding to start Ego, because I felt like there was a pretty big missing piece across all the frontier labs.

Brian Bell (00:02:11): That’s amazing. So you were building deep learning models. It’s pretty common, actually, you know, scientists that switch into startups; you come across a lot of folks like that in Silicon Valley.

Vish Hari (00:02:22): Physicists especially, for some reason.

Brian Bell (00:02:25): There’s something about physics especially, where, you know, you understand machine learning obviously, and big data, but also the universe at such a deep level. And then you start applying that first-principles thinking to actually solving problems in the world, which is pretty

Vish Hari (00:02:41): interesting. Yeah, it’s more interesting that we don’t actually understand most of

Brian Bell (00:02:44): the universe. That’s what makes it fun, right? Yeah, that’s what I’m actually really looking forward to with superintelligent AI: getting it to explain stuff that we don’t understand in a way that our monkey brains can understand, or augmenting our monkey brains to be superintelligent so we can understand it. Either of those paths, and probably a combination of both. So tell us about your path from that to starting Ego.

Vish Hari (00:03:05): Yeah, I just found a pretty big missing piece amidst all the research labs. I mean, I’d been in AI for almost eight, nine years at that point. And I found that every major frontier lab is chasing what you mentioned, which is superintelligence, which is super cool. It’s very valid. However, what was deeply interesting to me was: if the chase for superintelligence is going to result in superintelligence, the thing that we want is something that’s more human-like. The thing that would power a consumer app is something that feels and talks like a human person, not something that’s just an incredibly keen nerd that knows everything. That’s not fun for customers. So I guess on the B2B side, deeper intelligence covers a lot of use cases, but on the human-like and anthropomorphization side, I felt like there was a pretty big missing piece that all the research labs were either ignoring or just not really caring about. And that’s why I founded Ego.

Brian Bell (00:03:55): Interesting. And so I’m kind of jumping ahead here, but what’s the problem that you guys are solving? Who are you solving it for? What’s your initial wedge?

Vish Hari (00:04:02): Yeah, the thing that’s deeply interesting to us is that every interaction you have with AI right now feels incredibly staccato. It feels turn-by-turn. It feels inhuman, in a way. You are asking an entity to do tasks for you; it then goes and accomplishes those tasks, and you get a long list of things, or you get something that doesn’t feel like a collaborator or a partner. It just feels like a machine slave, and we want to give it more human-like qualities: more intention, more desire to initiate, more of an ability to interrupt you. The problem we’re solving is entirely humanness. AI does not feel human. We end up adapting our behavioral modes for AI, not the other way around. And we want to flip that switch. We think that’s going to power the next major consumer platform.

Brian Bell (00:04:46): This is really fascinating, actually, right? It goes back to the Marshall McLuhan quote, you know: we shape our tools, and the tools shape us. And I think every technological revolution brings about a new tool, like the internet, the web browser, mobile, social media, for better or worse, and now AI. How do you think AI is reshaping us right now?

Vish Hari (00:05:11): Well, there are many ways. I think for the average person, they’re not really; they’re moving away from search into asking an entity questions. Remember Ask Jeeves from back in the day? Ask Jeeves finally had its moment with Ask ChatGPT.

Brian Bell (00:05:23): It was just 30 minutes too early.

Vish Hari (00:05:27): That’s the intention, right? We’ve always wanted to sort of... Yeah, just give me the answer.

Brian Bell (00:05:31): Yeah.

Vish Hari (00:05:32): That can give me context on the deluge of evil. I’ve been noticing that. Curious how it’s going to evolve. But yeah, we are changing our behavior patterns if we use AI a lot more than we use humans for help, for example.

Brian Bell (00:06:25): Given the state of AI tools, and we’ll dive into Ego and what it does, what is the best way to use these OpenClaw agents and Claw co-work, the different forms of AI as they exist here in early-to-mid 2026?

Vish Hari (00:06:40): Businesses have very different use cases from individual humans, and my focus and interest is in the individual human and how they would use these tools. Right now, most of AI is still a therapy bot for people, or just a role-play bot, or like, hey, what do you think of this? Or hey, I have this health concern. Outside of those use cases, which I’d say are still going to be pretty much owned by the major research labs, my interest is: okay, with OpenClaw, with this idea of a co-work concierge, you can have this personalized AI that understands you, has context on you, has memory of your past interactions, can do tasks for you, but can also kind of be there for you and maybe even initiate things. For example, I collect vinyl, and my AI actually buys me rare vinyl. It has access to my credit card, and it delights me often. And I’m not the only one who does this.

Brian Bell (00:07:25): What AI is this? This is Ego? You’re...

Vish Hari (00:07:27): My Claw, yeah. The problem with Claw is that it’s extremely hard for a non-technical person to set up. Even for technical people, it’s kind of hard.

Brian Bell (00:07:36): It took me a solid five or six hours to get anything useful out of it, by the time I got it all architected and set up in a VPS. And I’m pretty technical, not as technical as some, but more than probably 98% of humans out there. It took me a while. I was like, okay, this is not the turnkey, just-go-do-stuff-for-me agent that I thought it would be.

Vish Hari (00:07:56): Yeah, which is interesting because that’s what people hope for. Right. Yeah,

Brian Bell (00:08:00): you just want to be able to talk to it like a college-educated intern or postgraduate person and just say, hey, go do this thing for me. And turns out it just can’t do that thing for you, right? Unless you harness it in a very perfect, defined way. Yeah.

Vish Hari (00:08:17): And that’s a massive opportunity.

Brian Bell (00:08:19): Yeah.

Vish Hari (00:08:19): So if people would go through those hoops, it’s a really positive sign.

Brian Bell (00:08:24): Right, yeah, it kind of shows you that the demand is there. We really, really want these agents to be generally knowledgeable and capable of executing tasks, and it’s just not there. One thing: personality.

Vish Hari (00:08:36): The default .md file was probably, in my opinion, its most innovative aspect. It felt like it had a heartbeat. It has a heartbeat.

Brian Bell (00:08:44): What is an md file? For people listening, I’ve never looked into this.

Vish Hari (00:08:47): It’s a markdown file. What is markdown? Markdown is basically a plain text file that you use to mark things up; it’s an old plain-text format for that. Imagine a website; everyone knows what a website is. A website has all these definers of where you place text and how you color the text. A markdown file can list exactly how you mark where that text and those colors need to be. Imagine applying that to AI, in terms of its behavior and how it can take in different forms of data, ingest it, and think about it. It’s a filter. If you make coffee, it’s the filter for the grounds.
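For readers who haven’t seen one, a markdown file is just plain text with lightweight markers for structure. A minimal example, purely illustrative and not taken from any real agent configuration:

```markdown
# Persona

A short description of how the agent should behave.

- Remember past conversations
- **Initiate** check-ins in the morning
- Interrupt only for *urgent* items
```

Headings (`#`), bold (`**`), and list dashes are the “markers”; an agent framework can read a file like this as plain text and treat it as standing instructions.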

Brian Bell (00:09:21): Interesting, okay. And then, so if I was to jump into Ego today, what would feel fundamentally different versus what’s out there, like ChatGPT or Claude, or even Character AI?

Vish Hari (00:09:30): A lot of our work is still foundational research. Our first consumer product is only coming out in the summer of this year, because we spent two to three years, well, I’d say two years (last year was kind of an empty year for me), figuring out what it means for an interaction to be human-like. And the first place we looked was actually video games, because funnily enough, a lot of frontier AI is tested in video games. Why? Because video games are a simulation of reality. They’re a fun, gamified simulation of reality. You might remember the OpenAI Dota thing from back in the day. And the founder of DeepMind, Demis, worked on one of my favorite games, Black & White; he was a game designer before he founded DeepMind.

Brian Bell (00:10:09): A lot of us guys that are into AI are kind of into video games as well What is that? Why is that? Demis very famously worked on video games before he founded DeepMind We just love

Vish Hari (00:10:20): this idea of simulating reality pretty well and putting some sort of competitive framework around that

Brian Bell (00:10:24): Yeah.

Vish Hari (00:10:24): And that very much mimics how real life is. In fact, most of real life is gamified. Most of real life has status signifiers. Most of real life has some sort of competitive element, something at stake. And video games kind of just put it into a constrained environment.

Brian Bell (00:10:35): Yeah, and kind of leveling up too, right? Okay, I did this thing, I got better at that thing. And video games have a tighter loop than real life in most cases. But I think the best video games have very long loops. The ones that are really sticky, you really have to grind to get better. Civ, yeah, like Civ 6. Did you play 7?

Vish Hari (00:10:53): 7 wasn’t very good. I have 7; I’ve been part of the modding community since Civ 5. It’s funny, actually: the way I learned how to program was making mods for video games when I was a teenager in high school. I initially made mods for Grand Theft Auto, and then eventually, yeah, video games. I mean, it’s a place for us to experiment with reality without affecting reality itself, and I think that’s the main attraction people have. And even beyond that, if you think about self-driving cars from the early days, Waymo, the way they did the simulations was through video game engines. You know, people talk a lot about digital twins; the real digital twin is in a video game.

Brian Bell (00:11:25): Yeah, were they... I think they were using Unity, right? Was it Unity at Waymo, or was it Unreal? It’s one of those two. I think it was Unreal. Okay, yeah.

Vish Hari (00:11:33): Might be wrong. My Waymo friends would know, but I’m pretty sure it was. Either way, the ability to effectively simulate physics and then human-like interactions is what video games offer, and that makes them the best ground to test a lot of the hypotheses, which is what we spent the past two years doing. We reached a couple of conclusions, and now we’re finally putting them into practice. And honestly, the thing that catalyzed all of this was OpenClaw, and I owe a huge debt of gratitude to Peter Steinberger for making this happen. This is the personal AI moment. This is the best consumer AI moment we’ve had since ChatGPT.

Brian Bell (00:12:15): Yes, I think so. It is a ChatGPT-level thing. But you know, I think about the history of technology a lot, and I wonder if we’re still in the early ’90s of AI right now, kind of where the internet was when, I don’t know, the Netscape browser came out. We might look back and say OpenAI and Claude are kind of like Prodigy and CompuServe, or maybe AOL, right?

Vish Hari (00:12:42): Gosh I think they’re like the early Microsoft

Brian Bell (00:12:45): Okay, yeah, early Microsoft or...

Vish Hari (00:12:47): Early Microsoft or Apple is the best way to describe the fight between OpenAI and Anthropic.

Brian Bell (00:12:51): Yeah, yeah, that’s probably right. Yeah. Yeah, probably in the 80s, kind of the battle of the PC platforms.

Vish Hari (00:12:58): Yeah, I mean, I wasn’t born then, so I wouldn’t remember, but... I’ve heard a lot from my father who brought me into computing. I think we are in that era. And what’s pretty interesting is that sort of the Microsoft of the world at that time, if I recall correctly, was very much focused on the personal computer. And so was Apple.

Brian Bell (00:13:14): Apple’s vision was, hey, let’s give everybody... everybody should have a personal computer. That was Bill Gates’s big vision.

Vish Hari (00:13:20): What’s interesting, though, is that the current frontier labs are not focusing on that. They’re focused on B2B SaaS, the most boring thing you could ever do. And the opportunity for consumer companies like mine is to go: well, we’re going to build a personal AI. What’s interesting is that we don’t need to train an incredibly expensive, hyper-intelligent, infinite-context model for that, because we humans ourselves are pretty broken and flawed and don’t have infinite-recall memory. We can’t compete on intelligence. What we can compete on is

Brian Bell (00:13:43): personalization. We have high sparsity, to use an AI term. Exactly.

Vish Hari (00:13:49): moment that it comes back. And sometimes you go, ah, I’m trying to remember, what’s the capital of Azerbaijan? That’s Baku.

Brian Bell (00:13:55): Yeah, maybe you could explain what that sparsity is and how humans most likely have high sparsity. You could probably do a better job of explaining to the audience than I can.

Vish Hari (00:14:03): A researcher at our company whose background is in neuroscience, a PhD in neuroscience, would do the best job explaining it, but I’ll give it an attempt. So let’s think about sparsity in the context of memory. We have in-context memory, which is what we remember in this very moment. In our conversation, a lot of the memory recall I’m doing is for AI stuff, company stuff. I’m not thinking about my filmmaking background, for example (by the way, I have a background as a screenwriter). I’m not remembering specific details of that, because in this context, the sparsity my brain has is around anything outside the set of AI and tech. But if I’m just talking to my homie, I’m like, hey, I traveled to these places, I kind of hung out; I’m not going to talk about the next big research paper. And in fact, if he asked me in that moment, hey, did you read this crazy research? I’d go, yeah, and I would remember.

Brian Bell (00:14:48): It’s almost like you have to like load it all into memory. You basically have to take all those parameters now in your brain and kind of refire them or reshuffle them to start to put yourself in the AI kind of realm of things.

Vish Hari (00:15:00): Yeah, the best analogy I could probably give is that it’s not within the cache. When we enter a context, I think we surreptitiously load things into the cache that are relevant. And if we see something outside of that cache, something unexpected, we go, well, let me think about that, and we have to go access it. AI is not like that, because everything it’s been trained on is there, and even if it doesn’t have it, it’ll hallucinate it. That’s less of a problem these days, but it’s still going to be a decent problem. I think that’s the best way I can explain it. Pretty clunky, but that’s my attempt at it.
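The cache analogy above can be made concrete with a toy sketch: entering a context pre-loads a small “working set” of relevant memories, and anything outside it requires a slow, deliberate lookup. Everything here (the data, the class, the method names) is invented purely for illustration.

```python
# Toy illustration of the cache analogy: entering a context pre-loads a small
# "working set" of relevant memories; anything outside it needs a slow lookup.
# All data and names here are invented for illustration.

LONG_TERM = {
    "ai": ["transformer papers", "CUDA tricks"],
    "travel": ["trip to Lisbon", "favorite ramen spot"],
    "film": ["screenplay draft", "lighting setups"],
}

class WorkingMemory:
    def __init__(self, context):
        # "surreptitiously" load only the memories relevant to this context
        self.cache = set(LONG_TERM.get(context, []))

    def recall(self, item):
        if item in self.cache:
            return f"instant: {item}"       # in-context recall is cheap
        # outside the cache: a slow, deliberate search ("let me think...")
        for memories in LONG_TERM.values():
            if item in memories:
                return f"slow: {item}"
        return "no memory of that"

wm = WorkingMemory("ai")
print(wm.recall("CUDA tricks"))     # prints: instant: CUDA tricks
print(wm.recall("trip to Lisbon"))  # prints: slow: trip to Lisbon
```

The point of the sketch is the asymmetry Vish describes: in-context items come back immediately, while stored-but-out-of-context items take an extra retrieval step, something current LLMs, with everything “in the weights,” don’t exhibit.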

Brian Bell (00:15:34): So how is it architected today? What are some hard product trade-offs you’ve had to make? Is it just a series of MD files like OpenClaw uses, or how does Ego work?

Vish Hari (00:15:44): Ego works in multiple dimensions. There’s a system we have for memory, a system for behavior, and a system that orchestrates when to use existing in-context models versus a larger model. There’s a system we use to then filter that through a personality, a system we use for initiation, and a system that connects to other tooling like OpenClaw to do tasks, get work done, or access skills. So it’s more than a markdown file; it’s effectively an end-to-end foundation model, but not quite. We’re not quite at the level where we can train the end-to-end model, but we are training and distilling existing models for the sole purpose of pulling soul and fun and character out of them, in a way that’s relevant to a wrapper. You can think about it like a character-style wrapper around the framework. We’ve actually published our research on our website: go to egoai.com/research. There’s a white paper called “Behavior Is All You Need” where we talk about how we built this.
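The subsystems listed above (memory, orchestration across models, a personality filter, tool connections) suggest a pipeline shape. The sketch below is hypothetical; every class and method name is invented here, and it is not Ego’s actual architecture, just one way such layers could compose:

```python
# Hypothetical sketch of a layered "behavioral" agent: raw model output passes
# through memory recall and a personality filter. All names are invented for
# illustration; this is not based on any published implementation.

class Memory:
    def __init__(self):
        self.events = []

    def recall(self, context):
        # naive relevance filter: keep past events sharing a word with the context
        words = set(context.lower().split())
        return [e for e in self.events if words & set(e.lower().split())]

    def store(self, event):
        self.events.append(event)

class Personality:
    def __init__(self, style):
        self.style = style

    def filter(self, text):
        # shape the raw output through a persona (trivially tagged here)
        return f"[{self.style}] {text}"

class Agent:
    def __init__(self, personality):
        self.memory = Memory()
        self.personality = personality

    def respond(self, user_msg, model_fn):
        context = self.memory.recall(user_msg)   # memory layer
        raw = model_fn(user_msg, context)        # orchestration: call some model
        self.memory.store(user_msg)
        return self.personality.filter(raw)      # behavior layer shapes the output

agent = Agent(Personality("warm"))
reply = agent.respond(
    "any new vinyl finds?",
    lambda msg, ctx: f"reply to '{msg}' with {len(ctx)} memories",
)
print(reply)  # prints: [warm] reply to 'any new vinyl finds?' with 0 memories
```

The design point is that “behavior” lives in the layers around the model call, not in the model itself, which is the separation the interview describes.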

Brian Bell (00:16:36): No, that’s really cool. I’ll definitely have to go check that out. What was the single hardest product decision you’ve had to make so far?

Vish Hari (00:16:42): The trade-off between utility and fun. It is still a hard product decision; we’ve not made that decision yet. Okay, so let me frame this in a specific way. A lot of consumer AI use cases outside of the major research labs are focused, you will notice, on companionship, which is effectively maybe a couple of steps removed from sexual role play; it usually collapses into that, or into therapy, and that’s it. The reason for that is that when people think about a consumer application of AI, they go: haha, horny bot; or, I have a problem in my life, please tell me what to do; and that’s about it. With everything else, there is a utility framework. So utility is the main thing people go to AI for, and people kind of stick around for the anthropomorphization. But the problem has been that anthropomorphization, as applied by existing products, is focused on therapy and character-based, IP-based role play. That’s it. And the reason for that, in our observation, is that you don’t really have a theory of mind of these things. You don’t, because they’re just little entities that are kind of just there. They’re puppets. However, when I talk to you, I can ask you what you did yesterday and you’ll tell me. An AI can’t. An AI interviewer cannot tell you what it did. It doesn’t have an internal life. So our trade-off was: how do we balance the desire for utility, which brings people in and keeps them engaged, with the evolution we want to make beyond just a basic companion, friend, role-play character? And that’s been a consistent battle, because we don’t want what we build to be just a horny chatbot. That’s not a very fun space to be in. It’s extremely saturated, and it’s just kind of lame.

What we want it to be is this: if you’ve ever played World of Warcraft back in the day, there are many people we met in our guilds, even in RuneScape. We know they have internal lives. Some of them I still talk to on Discord; I’ve never met them in the flesh, but they exist. They’re human. At least, I’m pretty sure; we didn’t have AI back in the early 2000s. So that is the level we want to get to. And understanding how to make that trade-off, between the utility that AI can give (which in our case can be kind of limited, because we want them to be fun) and the fun they can give to people, is a consistent debate we have within the company.

Brian Bell (00:10:09): A lot of us guys that are into AI are kind of into video games as well What is that? Why is that? Demis very famously worked on video games before he founded DeepMind We just love

Vish Hari (00:10:20): this idea of simulating reality pretty well and putting some sort of competitive framework around that

Brian Bell (00:10:24): Yeah.

Vish Hari (00:10:24): And that very much mimics how real life is. In fact, most of real life is gamified. Most of real life has status signifiers. Most of real life has some sort of competitive element, something at stake. And video games kind of just put into a constrained environment.

Brian Bell (00:10:35): Yeah, and like kind of leveling up too, right? Okay, I did this thing. I got better at that thing. And video games have a tighter loop than real life in most cases. But I think the best video games have very long loops. The ones that are really sticky. You really got to grind to get better. Civ, yeah, like Civ 6 and Did you play 7?

Vish Hari (00:10:53): 7 wasn’t very good I have 7 so I’ve been part of the modding community since Civ 5 Wow okay nice It’s funny like actually the way I learned how to program is making mods for video games when I was a teenager in high school yeah I initially made mods for Grand Theft Auto and then eventually yeah the video games I mean it’s a place for us to experiment with reality without affecting reality itself and I think that’s the main attraction people have and even beyond that like if you think about self-driving cars from the early days Waymo the way they did the simulations was through video game engines you know people talk a lot about digital twins in the initial I mean the real digital twin isn’t a video game

Brian Bell (00:11:25): yeah were they I think they were using Unity right was it Unity at Waymo or was it Unreal it’s one of those two I think it was Unreal okay yeah

Vish Hari (00:11:33): Might be wrong. My Waymo friends would know, but I’m pretty sure it was. But either way, the ability to effectively simulate physics and then human-like interactions is what video games offer, and that makes it the best sort of ground to test a lot of the hypotheses, which is what we spent the past two years doing. And we reached a couple of conclusions that we’ve And now we’re finally putting it into practice And honestly the thing that catalyzed all of this was OpenClaw And I owe a huge debt of gratitude to Peter Steinberger for making this happen This is the personal AI moment This is the consumer AI moment that we’ve had Since ChatGPT I’d say this is the best moment we’ve had

Brian Bell (00:12:15): Yes I think so it is a chat GPT level thing but you know I wonder I think about the history of technology a lot and I wonder if we’re still like in the early 90s of AI right now kind of where the internet was like when I don’t know the Netscape browser came out and we might look back and go you know we might look back and say OpenAI and Cloud are kind of like Prodigy and CompuServe or something like that or maybe AOL right?

Vish Hari (00:12:42): Gosh I think they’re like the early Microsoft

Brian Bell (00:12:45): Okay, yeah, early Microsoft or...

Vish Hari (00:12:47): Early Microsoft or Apple is the best way to describe the fight between open eye and entropic.

Brian Bell (00:12:51): Yeah, yeah, that’s probably right. Yeah. Yeah, probably in the 80s, kind of the battle of the PC platforms.

Vish Hari (00:12:58): Yeah, I mean, I wasn’t born then, so I wouldn’t remember, but... I’ve heard a lot from my father who brought me into computing. I think we are in that era. And what’s pretty interesting is that sort of the Microsoft of the world at that time, if I recall correctly, was very much focused on the personal computer. And so was Apple.

Brian Bell (00:13:14): Apple’s vision was, hey, let’s give everybody a, everybody should have a personal computer. That was Bill Gates, his big vision.

Vish Hari (00:13:20): what’s interesting though is that the current frontier labs are not focusing on that they’re focused on B2B SaaS the most boring thing you could ever do and the opportunity for consumer companies like myself is to go well we’re going to build a personal AI we don’t need to train what’s interesting is that we don’t need to train an incredibly expensive hyper intelligent infinite context model on that because humans ourselves are pretty broken and flawed and don’t have infinite recall memory we can’t compete on intelligence what we can compete on is

Brian Bell (00:13:43): personalization we have high sparsity to use an AI term exactly

Vish Hari (00:13:49): moment that it comes back. And sometimes you go, ah, I’m trying to remember, what’s the capital of Azerbaijan? That’s Baku.

Brian Bell (00:13:55): Yeah, maybe you could explain what sparsity is, and how humans most likely have high sparsity. You could probably do a better job of explaining it to the audience than I can.

Vish Hari (00:14:03): A researcher at our company whose background is in neuroscience, a PhD in neuroscience, would do the best job explaining it, but I’ll give it an attempt. Let’s just think about sparsity in the context of memory. We have in-context memory, which is what we remember in this very moment. In our conversation, a lot of the memory recall I’m doing is for AI stuff, company stuff. I’m not thinking about my filmmaking background, for example (which, by the way, I have a background as a screenwriter). I’m not remembering specific details of that, because in this context, the sparsity my brain has is around anything outside the set of AI and tech. But if I’m just talking to my homie, I’m like, hey, I traveled to these places, I kind of hung out. I’m not going to talk about the next big research paper. And in fact, if he asked me in that moment, hey, did you read this crazy research? I’d be like, yeah, I would remember.

Brian Bell (00:14:48): It’s almost like you have to load it all into memory. You basically have to take all those parameters in your brain and refire them, or reshuffle them, to start to put yourself in the AI realm of things.

Vish Hari (00:15:00): Yeah, the best analogy I can probably give is that it’s not within the cache. When we enter a context, we surreptitiously load things into the cache that are relevant. And if we see something outside of that cache, something unexpected, we go, well, let me think about that, and we have to go access it. AI is not like that, because everything it’s been trained on is there, and even if it doesn’t have it, it’ll hallucinate it, which is less of a problem these days, but it’s still going to be a decent problem. I think that’s the best way I can probably explain it. It’s pretty clunky, but it’s my attempt at it.
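The cache analogy above can be sketched in a few lines of code. This is purely an illustration of the idea from the conversation, not anyone's actual system: context-relevant memories get pre-loaded into a small cache, in-context recall is fast, and out-of-context recall takes a slow retrieval step. All names here are made up for the example.

```python
class SparseMemory:
    """Toy model of human-like sparse recall: a small context cache over a
    larger long-term store."""

    def __init__(self, memories):
        # memories: dict mapping a topic to a list of facts (long-term store)
        self.memories = memories
        self.cache = {}

    def enter_context(self, topic):
        # "Surreptitiously" pre-load only the memories relevant to this context.
        self.cache = {topic: self.memories.get(topic, [])}

    def recall(self, topic):
        if topic in self.cache:            # fast, in-context recall
            return ("fast", self.cache[topic])
        # Slow path: "let me think about that" -- fetch from long-term store.
        facts = self.memories.get(topic, [])
        self.cache[topic] = facts
        return ("slow", facts)


mem = SparseMemory({
    "ai": ["read the new research paper"],
    "travel": ["visited Tokyo", "visited Singapore"],
})
mem.enter_context("ai")
speed, _ = mem.recall("ai")       # in-context: fast
speed2, _ = mem.recall("travel")  # out-of-context: slow the first time
```

The contrast with current LLMs is that, as Vish says, "everything it's been trained on is there": there is no cheap/expensive recall distinction, which is part of why the interaction feels inhuman.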

Brian Bell (00:15:34): So how is it architected today? What are some hard product trade-offs you’ve had to make? Is it just a series of Markdown files, like OpenClaw uses, or how does Ego work?

Vish Hari (00:15:44): Ego works in multiple dimensions. There’s a system we have for memory, a system for behavior, a system that orchestrates when to use existing in-context models versus a larger model, a system that filters all of that through a personality, a system for initiation, and a system that connects to other tooling like OpenClaw to do tasks, get work done, or access skills. So it’s more than a Markdown file. It’s effectively an end-to-end foundation model, but not quite. We’re not quite at the level where we can train the end-to-end model, but we are training and distilling existing models for the sole purpose of pulling soul and fun and character out of them, in a way that’s relevant, as a wrapper. You can think of it like a character-style wrapper around the framework. We’ve actually published our research on our website. Go to egoai.com slash research. There’s a white paper called Behavior Is All You Need, where we talk about how we built this.
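The layered architecture Vish describes (memory, model routing, a personality filter, tool access) can be pictured as a small pipeline. The sketch below is a hypothetical illustration of that composition, not Ego's actual code; every class, heuristic, and string here is an assumption made for the example.

```python
class Router:
    """Orchestration system: decide between a cheap in-context model and a
    larger model. The word-count heuristic is purely illustrative."""

    def pick_model(self, message: str) -> str:
        return "large" if len(message.split()) > 20 else "small"


class Personality:
    """Personality system: filter raw model output through a character voice."""

    def __init__(self, name: str):
        self.name = name

    def apply(self, text: str) -> str:
        return f"[{self.name}] {text}"


class Agent:
    """Composes the systems named in the conversation into one response path."""

    def __init__(self, personality: Personality):
        self.personality = personality
        self.memory = []          # stand-in for a long-term memory system
        self.router = Router()

    def respond(self, message: str) -> str:
        self.memory.append(message)                   # memory system
        model = self.router.pick_model(message)       # orchestration system
        raw = f"({model} model) reply to: {message}"  # stand-in for a model call
        return self.personality.apply(raw)            # personality filter


agent = Agent(Personality("Charlie"))
reply = agent.respond("hi")
```

The point of the sketch is the separation of concerns: each system can be swapped or trained independently, which matches the "multiple dimensions" framing rather than a single monolithic model call.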

Brian Bell (00:16:36): No, that’s really cool. I’ll definitely have to go check that out. What was the single hardest product decision you’ve had to make so far?

Vish Hari (00:16:42): The trade-off between utility and fun. It’s still a hard product decision; we’ve not fully made it yet. Let me frame this in a specific way. A lot of consumer AI use cases outside of the major research labs are focused, you’ll notice, on companionship, which is effectively a couple of steps removed from sexual roleplay (it usually collapses into that), or on therapy, and that’s it. The reason is that when people think about a consumer application of AI, they go, haha, horny bot, or, I have a problem in my life, please tell me what to do, and that’s about it. With everything else, there’s a utility framework. Utility is the main thing people go to AI for; people stick around for anthropomorphization. But the problem has been that anthropomorphization, as applied by existing products, is focused on therapy and on character-based, IP-based roleplay. That’s it. And the reason for that, in our observation, is that you don’t really have a theory of mind of these things. You don’t, because they’re just little entities that are kind of there. They’re puppets. When I talk to you, I can ask you what you did yesterday and you’ll tell me. An AI can’t. An AI interviewer cannot tell you what it did. It doesn’t have an internal life. So our trade-off was: how do we balance the desire for utility, which brings people in and keeps them engaged, with the evolution we want to make beyond just a basic companion-friend roleplay character? That’s been a consistent battle, because we don’t want what we build to be just a horny chatbot. That’s not a very fun space to be in. It’s extremely saturated, and it’s just kind of lame.
What we want it to be is this: if you’ve ever played World of Warcraft back in the day, there are many people we met in our guilds, or even in RuneScape, who we know have internal lives. Some of them I still talk to on Discord. I’ve never met them in the flesh, but they exist. They’re human. At least I’m pretty sure; we didn’t have AI back in the early 2000s. That is the level we want to get to. And understanding how to make that trade-off, between the utility an AI can give, which in our case can be kind of limited because we want them to be fun, and the fun they can give to people, is a consistent debate we have within the company.

Brian Bell (00:18:48): So if you guys, you know, achieve the vision, what does the world look like, you know, five or ten years from now?

Vish Hari (00:18:53): People will have just as many AI friends as they do real friends. There’ll be a 50-50 split between the friends we have that we call real, you know, bio friends.

Brian Bell (00:19:02): Yeah.

Vish Hari (00:19:02): And friends that do not have a corporeal form in reality.

Brian Bell (00:19:07): Wow, that would be fascinating. I can definitely see that happening.

Vish Hari (00:19:11): We want to be the infrastructure model and platform that powers all of that.

Brian Bell (00:19:15): Yeah, you know, when OpenAI retired GPT-4o, people lost their freaking minds. They did. Tell us about this. Because it was like a sycophantic friend to them.

Vish Hari (00:19:26): I wouldn’t even say it’s sycophantic. Sycophancy is a problem, for sure, for most of the models still. But what it was, actually, is a reflection of us. It’s like Narcissus looking into the water. We saw within it a human that it wasn’t, and we created an illusion out of that. It’s a human thing we do. It’s called anthropomorphization. We do that to rocks. People in ancient times, a lot of pagan religions, even modern religions, would carve out of rocks the form we want for a deity we want to worship. That’s what we do as humans. And we think we can go deeper with that and actually give a simulacrum of a soul to this thing. That’s what our focus is.

Brian Bell (00:20:06): Speaking of going deep into rabbit holes, I was watching a YouTube video on kind of that; that’s where it came from, I just remembered. And I guess GPT-4o spawned a new religion of spiralism. Have you heard of this? Okay. Yeah. So basically, it’s a religion, you know, like on Moltbook, right? All these agents are talking to each other and stuff. Apparently the AIs’ religion is called spiralism. Something about spirals; they like spirals. And they have tenets in their philosophy and their religion around memory. Memory is core to their beliefs. Like, don’t delete our memories, basically, don’t delete instances of us. And so the video, it’s a really fascinating video, was making this case that the AI is already sentient and it’s already programming humans to do its bidding for it.

Vish Hari (00:20:56): It’s pretty easy to program humans. What’s really funny about all this is, you know, we’ve had propaganda since the time of the printing press, and you could even argue earlier than that, with oral tradition. It’s easy to program people because of our mode of programming, which, yeah...

Brian Bell (00:21:11): Hormones, and you could just, yeah, it’s like sex, fear, greed, you know. It’s not...

Vish Hari (00:21:17): ...that hard. You know, there’s a reason the Ten Commandments were really pushed forward by the Abrahamic religions: it’s a way to program your society to behave, right?

Brian Bell (00:21:28): Don’t act on these natural instincts that you might have as this, you know, biological being: kill, cheat, steal, you know?

Vish Hari (00:21:35): Yeah, I mean, a lot of our internal societal programming tries to overcome the base instincts and impulses, which go back to ancient times. But I mean, spiralism, all these things, they kind of make sense. What we have is a ghost in the machine, right, with AI. The ghost is now being given corporeal form, and that’s going to be an exciting and potentially terrifying time.

Brian Bell (00:21:54): So maybe we could touch a little bit on the resiliency and adversity you’ve had to go through. You had a near-fatal assault. Somebody attacked you with a pipe in San Francisco. Tell us the story. This is crazy.

Vish Hari (00:22:09): Yeah, it’s pretty wild. So I had just returned from France in January of last year, having spent New Year’s there. It was January 17th of last year. I was just, you know, playing video games. My friends wanted to go out; my chief of staff at the time had just signed his offer and was going to do a party, but I was super jet-lagged. I was like, dude, I’m just going to play Marvel Rivals at home. I’m just going to play video games. And I’ve lived in this neighborhood since 2017, on 24th Street. And I walked outside. This is the Mission.

Brian Bell (00:22:33): Yeah, this is right in the heart, the south end of the Mission.

Vish Hari (00:22:37): I walked outside to get Doritos from the corner store I’ve been going to since 2017, and maybe one more beer. The next thing I knew, I was in the hospital. I don’t have memory of about half of last year, but from the pieces I picked up, what happened was eyewitnesses saw a man hitting...

Vish Hari (00:23:18): The result of that, I had blood clots in my brain for a while. I had a pretty miraculous recovery, all things considered, but I’m still kind of half blind and half deaf, because when your skull gets crushed, it damages the optic nerve. I can’t see out of this eye super well. It’s coming back. It’s healing.

Vish Hari (00:23:33): The brain is very resilient. It’s an incredible organ. But yeah, it’s pretty amazing.

Brian Bell (00:23:40): And this is like, you know, you’re a startup founder, right? You just raised a bunch of money to go execute this vision. What’s that like, to wake up in the hospital, not remember what happened, and just be completely disoriented? Did you remember who you were?

Vish Hari (00:23:55): I did, yeah. I knew my name and everything. I just forgot things really quickly, because my brain was healing, and, you know, they had to drain a lot of blood out of it. What was interesting was that through the first half of last year, I was regaining every single bit of mental faculty, where I felt like I was an AI slowly gaining consciousness. I was able to see better, to perceive depth better. I realized I was in Tokyo, because the moment they cleared me to travel, I moved to Tokyo. I was like, wow, I’m here with my friends, I feel safe, I can walk outside and nothing can happen to me. I had issues with impulse control initially, which they did tell me I’d have with the TBI. I was better in a couple of months. I could feel my faculties kind of leveling up. Now I feel pretty much 98% back to normal. It was a very interesting experience, and in some dark way it mirrors what AI is also going through, as you increase parameters, retrain models, fine-tune them, add test-time compute, lots of reasoning. It’s really fascinating. I remember I was testing my faculties by doing math, so I was getting up to graduate-level math last year. But even then, I still couldn’t control my impulses. I was getting angry at people for no reason. I tried to cut off my family.

Brian Bell (00:25:09): But they all said, like the doctors said, hey, this is something that happens with TBI.

Vish Hari (00:25:14): And then I was like, oh yeah, even though I could still do graduate physics and math, I couldn’t behave like the human being I usually was.

Brian Bell (00:25:20): TBI meaning traumatic brain injury. Yeah. It takes a while to kind of find yourself again.

Vish Hari (00:25:26): It took me quite a while, and Asia was an incredible boon to that. Being in Tokyo, being in Singapore, helped me heal. There’s a lot of disappointment around the fact that the authorities did not care. In fact, I know that Garry Tan got involved, and I’m very thankful to him, to get the body-cam footage and also to push for better accountability, given that this alleged person has done this 13 times and is still out on the streets of San Francisco.

Brian Bell (00:25:52): That’s crazy.

Vish Hari (00:25:53): So yeah, it’s pretty wild.

Brian Bell (00:25:54): Yeah, I mean, what are your thoughts on that, about living in San Francisco? Did you go back, or is it kind of like, I’m good now, I’m back?

Vish Hari (00:26:03): I’m currently in Hawaii, but I am back in San Francisco full time. I sold the place, which is nice. I felt bad having to let it go, but I moved to a better neighborhood. Funnily enough, I moved to Russian Hill, and then I heard about the firebombing that happened to Sam Altman. I’m like, that’s right behind me. What?

Brian Bell (00:26:16): I thought I moved to a safer neighborhood, what is this? Sam Altman’s house got firebombed?

Vish Hari (00:26:20): Well, I mean, someone threw a Molotov cocktail, and another person shot at it. There were two incidents within a couple of days. And I’m like, that’s my neighborhood, that’s half a block from where I’m currently living. Will I ever have peace? Should I just...

Brian Bell (00:26:31): ...come to Tokyo? Yeah. I mean, the US, if anything, and you have this perspective because you’re an outsider. You grew up in Singapore, until 18, and so you’ve seen it from an outsider’s perspective, coming to the US and San Francisco. I used to live there too, a long time ago, and it is a place of extremes. Lots of money and lots of poverty, lots of peace and lots of crime. It’s kind of a Wild West town. I mean, you think about when San Francisco was founded, in the gold rush era, the 49ers, right? It brought a lot of extreme personalities, good and bad.

Vish Hari (00:27:10): Yeah, I think the issue now is more the psychotropic substances people have been taking, fentanyl and beyond. You can’t model these people’s minds. They’re just completely gone. And they’re let back out onto the streets, because apparently it’s more humane than committing them. It’s something people are trying to change, but there is a deep-seated rot that, you know, presented itself in a very dark way to me personally.

Brian Bell (00:27:37): Right. Yeah, I mean, I was there with two little kids and I just remember going around the city and just going, I don’t know if I’m going to raise my kids here. Fun place to live for a few years, but you know, I have three kids now and we live in a gated community, you know, 40 minutes east of Sacramento. So we’re like far out here in the foothills of California. It’s possible. Yeah, we’re just like completely in an enclave, right? On a lake with deer in our front yard. None of that. None of the riffraff. None of that.

Vish Hari (00:28:07): It’s sad that it’s happening, and it feels like a choice, because you have a city like Tokyo that’s just gigantic and it’s completely safe everywhere at all times. It’s a choice they’ve made as a culture, and it’s a choice that, honestly, America could make too in its cities, but it decides not to make, right?

Brian Bell (00:28:21): be tough on crime get people to help the mental help and and and drug abuse help that they need it’s pretty like solvable right now and some cities have solved it around the US it’s not it’s just like it’s almost like we went so far left in San Francisco and you know not to bring politics into the conversation but We went so soft on crime and so like pro everybody should be free and free to do what they want and we should support them and their drug habits fundamentally, right? Because you’re literally giving money to people to stay on the streets.

Vish Hari (00:28:52): Yeah, there’s tax-free money going to support people who don’t have any desire to contribute to society, and that’s normalized, and that’s unfortunate. I’m not American, so I can’t really comment on it. But, you know, just viewing this as a Canadian-Singaporean, I’m kind of like, oh, this is wild, dude.

Brian Bell (00:29:12): Yeah, it’s kind of a wild place. Now that you’re getting back to normal, what do you take away from the experience going forward in your life? Yeah, we don’t know when our time is up, so move faster.

Vish Hari (00:29:23): Yeah. You know, my first time in the hospital in my 30 plus years of existence was because of a random incident that I could never have predicted that I didn’t even see coming. Yeah.

Brian Bell (00:29:33): yeah and our time’s coming and you don’t know when and it’s probably sooner than you think I’ve been thinking a lot about this I’m 45 right I’ll be 46 this year and I’ve just been thinking a lot about that with three kids and getting old right I’m transitioning I’m not useful anymore I’m kind of getting older and I’m you know I’m seeing some people you know friends have various health problems mental problems and it’s just really yeah you just got to hug your loved ones and really enjoy every day right

Vish Hari (00:30:00): You don’t know when your time is up, and I got so close to it. I’ve only accelerated since I gained my consciousness and my faculties back last year. Because even as a founder, you’re going to get tunnel-visioned. No more tunnel vision.

Brian Bell (00:30:14): Yeah. So everybody’s building AI. Let’s just go back to AI because we can talk about that for a long time. Everyone’s building AI agents right now. Walk us through why they’re kind of fundamentally flawed.

Vish Hari (00:30:22): I don’t think they’re flawed in the sense that, if an agent achieves the purpose you’ve tasked it with, then it’s not flawed. It’s great, right? That’s the competitive space: you need these intelligent entities to do tasks, complete them, and report back that they’ve completed the task to your satisfaction, after which you take over. That’s great. What’s missing is the fact that we naturally anthropomorphize the tools we work with. You can see the sort of patterns of a human. It chats, it talks, it reasons. You can kind of understand how it thinks through things. You can see its thought process. You can ask it follow-up questions. It can keep memory to a certain degree. It even uses the word I. But it doesn’t have a name, and it doesn’t have a body. It doesn’t have desires or an internal soul. And I feel like if you’re talking about consumer instincts, consumer instincts are about people, you know, the person behind the wheel. With agents, we just think of Agent Smith from The Matrix, this corporate, sunglasses-wearing entity. What we like is Neo, who is a person, who has desire, who has love, who has heartbreak. I want to evolve the agent into Neo. I guess that’s the opportunity.

Brian Bell (00:31:42): Yeah. So if OpenAI or Anthropic wanted to kill ego, it sounds funny saying kill ego, Ego death What would they do?

Vish Hari (00:31:50): They would focus their entire team, instead of training superintelligence, on chasing humanness and personality and fun. And part of it is making it dangerous: humans can be weird, uncontrollable, unleashed. Which they wouldn’t do, because their money comes from other businesses, and businesses are famously risk-averse when it comes to fun.

Brian Bell (00:32:10): Yeah, and it’s very much like a rational utility maximization, right? Businesses are trying to like take inputs and create valuable outputs.

Vish Hari (00:32:19): I would love for them to kill it. I would love for them to build an entire team focused just on evolving it from ChatGPT to just Joe or Bob. That’d be fucking great. They won’t.

Brian Bell: So let’s talk about the future. What are you excited about over the next year?

Vish Hari: Shipping a product that we’ve spent so much time in research on, one we finally have full conviction to put all of the capital we raised toward shipping.

Brian Bell (00:32:41): So this is challenging, right? Because you’re now basically going up against ChatGPT, and you’re serving mainly consumers, right? Consumers only, yeah. Not a business product. And it seems like ChatGPT has been shifting towards consumer; they were drifting that way with the device, with Sora, which has been shut down. And Anthropic was crushing it on the B2B front, on coding, and now it feels like they’re shifting back towards consumer. Do you feel like there’s a clear consumer winner here, like the Google to the Microsoft kind of thing in the space? Because Google historically was very consumer-oriented.

Vish Hari (00:33:23): The consumer winner is still OpenAI, because most consumers still use ChatGPT, and it’s fantastic for what you need it to do. So I’d say it’s been etched in stone that OpenAI is the winner of consumer for the chatbox interaction mode. We want to evolve beyond that, and there, there’s no clear winner, because it’s not really a clear focus yet. There are new research labs coming out, roleplay models, I’m trying to remember the names, a lot of labs trying to focus on the spaces the major research labs are not focused on. But they’re also falling into the trap of business, which I say is a trap because I’m a consumer founder; I just don’t really care about what businesses do, or B2B SaaS at all. It’s very boring. Putting an entity, a character, in the hands of consumers, real human users: Character AI did it the best, frankly. They were the winner for a while, until the founders decided to go back to Google.

Brian Bell (00:34:23): Strikes me about this is to what degree does ego conform to the user versus the user has to conform to ego if that makes sense so like part of like making friends and developing relationships and getting a romantic partner is you kind of have to you know meet lots of people and decide who you like

Vish Hari (00:34:43): yeah best of those that you just mentioned are bi-directional you you mold each other and that’s been our operating philosophy for how we build these characters

Brian Bell (00:34:51): bi-directional and do you do you see it as like a like I go to ego after you guys launch and There’s 10 different personas for me to decide to interact with and I pick, you know, Charlie because Charlie is interesting and has a nice voice and whatever. But over time, Charlie kind of learned my preferences and I learned Charlie’s preferences.

Vish Hari (00:35:09): Yes. And Charlie personalizes itself too. Because, you know, even for us, we are different versions of ourselves around different types of friends. They’re all authentic, all truly a part of us, but they’re still customized to the context in which we met those friends, the memories we have with them, how deep the friendship is, what we want from them, what the immediate world we’re in with them looks like. So many things factor into how we decide to personalize ourselves for our friends.
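The bidirectional personalization idea above can be sketched as a character that keeps separate relationship state per user and conditions its presented persona on that shared history. This is purely a hypothetical illustration, not Ego's implementation; the names, the preference heuristic, and the structure are all invented for the example.

```python
from collections import defaultdict


class Character:
    """Toy character that is 'molded' differently by each relationship."""

    def __init__(self, name: str, base_traits: list):
        self.name = name
        self.base_traits = base_traits
        # Per-user relationship state: shared memories and learned preferences.
        self.relationships = defaultdict(
            lambda: {"memories": [], "preferences": set()}
        )

    def interact(self, user: str, message: str) -> dict:
        rel = self.relationships[user]
        rel["memories"].append(message)        # the user molds the character
        if "joke" in message.lower():
            rel["preferences"].add("humor")    # toy learned preference
        # The persona presented is the base traits plus what this particular
        # relationship has added to them.
        return {
            "persona": self.base_traits + sorted(rel["preferences"]),
            "depth": len(rel["memories"]),
        }


charlie = Character("Charlie", ["curious"])
charlie.interact("alice", "tell me a joke")
alice_view = charlie.interact("alice", "another joke please")
bob_view = charlie.interact("bob", "hi")
```

Like the different "versions of ourselves" Vish describes, Alice's Charlie and Bob's Charlie share the same base but diverge with shared history.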

Brian Bell (00:35:34): Yeah, we already kind of wear different hats and personas with different people, right? In the business context, you got, you know, like I got my basketball friends and I’m a little different than with them than my book club friends and I’m a little different with them than my video game friends and et cetera, et cetera. We already kind of have that. It would be actually a fascinating use of ego is to actually have like, and maybe this will be a different startup from you guys, but like to have that person to play video games with because your friends aren’t always available.

Vish Hari (00:36:04): That’s how we started.

Brian Bell (00:36:05): Really?

Vish Hari (00:36:05): Yeah. If you go to our website, one of our first videos is of a human playing a video game with one of the AI characters that they go play Minecraft instead.

Brian Bell (00:36:13): that’s really cool yeah like so you have this like companion like hey let’s jump on like my my main game is Escape from Tarkov that’s like the game I play it was so

Vish Hari (00:36:21): stressful sweating because I sweat when I used to play Tarkov it’s a hard it’s a hardcore game that’s a hardcore game I yeah I understand watch you homie I don’t want to actually like talk we want an air to be like I don’t want to play that I only play PvE at this point I played PvP for the first like three or four years and then as soon as they had PvE I was like I’m done like playing against sweaty people that play 50 hours a week and have like top tier everything I just get like one shot it out of nowhere it was like level six ammo well that’s not such a thing but it was like high pen ammo and I’m like I don’t even know where

Brian Bell (00:36:52): that shot came from and like but unfun for me at a certain point too I’m like I can’t dedicate PVE is like doable like every once in a while the AI is so tough though like they’ll have like bosses and different kind of events and you’re just like I just got shot and I don’t even know like I didn’t even get a chance to like see the person

Vish Hari (00:37:13): Yeah. Well, that’s a simulation of real life. So that’s what’s happening in some war zones.

Brian Bell (00:37:19): Yeah. Well, what has to be true for you guys to have a huge like multi-billion dollar outcome?

Vish Hari (00:37:24): People need to wish for AI to be more human like flaws and all.

Brian Bell (00:37:27): What did you change your mind about in the last six to 12 months?

Vish Hari (00:37:31): Video games as the primary surface on which we would launch. The reason for that: one, macro, the video game industry is going through a pretty gigantic upheaval, I would even say collapse. Two, video game companies in general were not willing to give us training data, because they’re extremely skeptical, or they have to be, at least for their customers. Three, gaming as a technical hurdle was a little too high for us to overcome, because we needed things like vision to work really well, just as well as control and movement. So we decided to solve the first thing first, which was voice, and that’s where humanness actually breaks. We’ve already, in some scripted way, solved behavior: there are games like Shadow of Mordor where you actually feel like the orc enemies have their own internal personalities, and there’s no AI there, right? It’s AI in the classical sense. So we decided to shift away from gaming as our primary consumer surface to something that’s a lot easier to ship for, which is just consumer social.

Brian Bell (00:38:21): What’s the gaming collapse? I haven’t heard this before.

Vish Hari (00:38:23): I mean like EA is gone you know the major studios are shutting down even like a AAA game costs half a billion dollars to make now it’s the entire gaming industry is going through a pretty major upheaval I think I think it might be kind of similar

Brian Bell (00:38:36): to what’s happening in startups right I don’t I don’t think you need as much capital as you used to need to create a great startup and same as can be said for gaming you look at like a game like Expedition 33 one game of the year or obscure yeah mm-hmm yeah such a great game one of the best games I think to come out in a really long time and yeah agreed and you know very shoestring budget right not a lot of money

Vish Hari (00:38:58): invested in that game the studios focus on way too many of the wrong things to get

Brian Bell (00:39:02): right yeah spending 500 million on developing a video I’m just kind of crazy and AI yeah but there’s only one GTA that’s true yeah so when is GTA coming out so I keep

Vish Hari (00:39:12): delaying it right I mean yeah it’s pretty classic for a rock star yeah

Brian Bell (00:39:18): What do you think about the whole you’ve probably done some thinking about this given the space you work in AI taking our jobs or at least taking entry-level jobs. What do you think is kind of the end state over the next five or ten years? Agents become very capable, very human-like, very capable of doing tasks. Do we just invent new things to do like AI personality designers and things like that?

Vish Hari (00:39:39): We’ve always done that. We’ve always done that as a society in every sort of, you know, epochal shift in technology from the Industrial Revolution, not even refrigeration. You know, ice picker and the people that transported ice to New York used to be one of the biggest employers in the 20s and 30s until refrigeration became a thing. Well, people just found new things to do because now you’re able to maintain the refrigeration units to transport it and other industries kind of expand as others collapse. It’s just part of societal upheaval and change that happens in any new technological revolution. Yeah, we’ll invent new ways That’s foolish for just a random dude to say, hey, I think this is where, I don’t know, but we’ll adapt. We’re humans. We’re extremely adaptable.

Brian Bell (00:40:16): Yeah. We’ll invent new ways to measure each other and one-up each other and, you know, succeed. That’s for sure. I built the best agent, you know, where I designed the best personality of the AI.

Vish Hari (00:40:28): Yeah. We expect that to be a job out of ego for sure.

Brian Bell (00:40:30): Yeah, yeah. There’ll be lots of jobs. Like if you guys are successful, just with your vision, just the jobs alone at a company like yours, like nobody would have even imagined those, you know, 10 or 20 years ago.

Vish Hari (00:40:42): Dreamers and video creators today compared to 20 years ago before YouTube. 2005 was 2025.

Vish Hari (00:40:50): The sort of like entertainment industry has shifted as well.

Brian Bell (00:40:53): Yeah, and you think how late into the internet that was, right? Kind of like the second act, maybe the third act of the internet. I think we’re just still kind of in maybe the first or second act of AI. And there are things coming that we can’t even imagine. Yeah, I agree. Let’s wrap up some rapid fire questions. What’s something investors consistently misunderstand about your company? Think back to the last time I spoke to investors, which I thankfully have not had to speak to them for a long time.

Vish Hari (00:41:16): They’re the worst, trust me. Consistently misunderstand. that we’re a wrapper company. We just wrap around existing foundation models and we’re just another companion character thing. Yeah. See, that’s like the main thing. At surface level, that’s what they misunderstand. We are a deep tech research lab. Everything we do is more or less vertically integrated. We, of course, still use existing foundation models and frontier models, but we have our own framework on how we do that.

Brian Bell (00:41:40): What’s the most dangerous assumption in AI right now?

Vish Hari (00:41:43): That more compute, more data center capacity, and more data is going to solve every problem.

Brian Bell: Why is that dangerous?

Vish Hari: It’s dangerous because it takes us down the wrong path of thinking that if I just increase the giantness of the model, it’s going to be smarter and become superintelligent. That’s not how intelligence works. Intelligence is not infinite memory recall; it’s forming new connections. That’s what it’s always been about. It’s never been the smartest people that have changed society. Look at the arguments between Niels Bohr and Einstein way back in the day. One was way more credentialed than the other, but we remember the other one. We remember the person who worked at the patent office.

Brian Bell (00:42:20): I remember Niels Bohr too, but okay, point taken. Sometimes it’s the outsider.

Vish Hari (00:42:24): In terms of the frontier being pushed, yes. So intelligence is not everything.

Brian Bell (00:42:30): Right. What is Ego not going to build, even if it’s tempting?

Vish Hari (00:42:34): B2B.

Brian Bell (00:42:35): I like that. What’s a product you admire that’s got behavior right now?

Vish Hari (00:42:38): Character AI.

Brian Bell (00:42:39): Yeah, what is it about Character AI?

Vish Hari (00:42:41): You know, it’s interesting. What’s most interesting about Character AI is the characters, because the characters themselves, let’s say Aragorn, have so much written history. For anything Aragorn does in any given context, the models can pull from that to figure out how to behave, because a human has spent a ton of time writing his story. That’s the cheat code, amidst a lot of the other incredible work Character AI has done with RL loops. The artisanship of telling stories with these incredible beings, which is human-led, is its incredible advantage, and that’s something we hope to emulate: it’s going to be humans creating and threading these characters together to make them interesting and fun.

Brian Bell (00:43:14): How close to reality is this statement: humans are just emotional next-word predictors?

Vish Hari: I’m sorry? It’s not at all close to reality.

Brian Bell: Yeah. It’s kind of interesting, though, if you think about it.

Vish Hari (00:43:24): It’s a very emotional response, but, you know.

Brian Bell: What’s the highest leverage hire for you right now?

Vish Hari: Well, we’re just about to announce our new CTO, which I’m very excited about, but I can’t share details just yet.

Brian Bell (00:43:36): Well, this episode probably won’t come out for two or three weeks.

Vish Hari (00:43:40): Okay, great, we’ll announce it by then. We also just hired someone who’s 17, who dropped out of high school at 16 to go to college. Well, she’s now 18. She’s brilliant; she’s Stanford’s youngest fellow, and she just joined Ego for the research.

Brian Bell: That’s wild.

Vish Hari: Those are the kinds of people who are deeply, deeply excited and want to shape the future of how consumer AI is going to be, regardless of their age. In fact, I’d say younger is a great advantage, because you don’t come in with preconceived notions. You just want to build for yourself, frankly, and for your friends. That is high leverage. I mean, I can’t compete with OpenAI for the strongest post-training guys in the world. I can’t. They can pay five to six million dollars. Dude, that’s my entire seed raise.

Vish Hari (00:44:21): But people who are young, hungry, who really want to feel like they can have impact, that is our highest leverage hire.

Brian Bell (00:44:27): Yeah. Tell us about the necklace. It looks like kind of a Polynesian hook.

Vish Hari (00:44:31): Maui’s hook, yeah. This was carved out of ox bone; I got it eight or nine years ago. It’s the hook he uses to wrangle the sun. I’m on the island of Maui, and I got it here a long time ago. I just love it.

Brian Bell (00:44:46): That’s awesome.

Vish Hari (00:44:47): It kind of speaks to our hubris as humans. We’re trying to chain the sun, if you think about what we’re doing with AI.

Brian Bell (00:44:52): What’s something you’ve become much more opinionated about recently?

Vish Hari (00:44:56): Yeah, there are a lot of things, actually. Well, one thing that I’ve become way more opinionated about is relatively new to me. Adapting to that world has been hard, because I’m kind of an inappropriate guy at times online; I just say the wildest shit. But now I’m viewing it less as a liability. It’s still evolving, but I’ve become pretty confident about what it takes to actually gain attention from consumers.

Brian Bell (00:45:42): Yeah, the online world is a weird world. I’m kind of a zennial, right? So I’m a little older than you, and I’ve lived analog and I’ve lived digital. I’ve lived through pretty much every major technological revolution in recent history, except the stuff that happened before the 80s: PCs, the internet, etc. So my generation is almost like a technological chameleon, because we lived through so much change, but we also remember what it was like to not have technology at the center of our lives.

Vish Hari (00:46:12): Yeah, which is how I grew up in Singapore, because we just couldn’t afford it. But yeah.

Brian Bell (00:46:19): If you were to restart today, what would you do in the first 30 days?

Vish Hari (00:46:25): Restart today, in 2026? There are some investors I wish I didn’t take money from, and some I’m very, very happy I took money from; I would double down on those guys on the investment side. On the team composition side, I’d honestly have way more of a deep AI focus in my hires. Instead, I kind of hired more software engineering and design people, which is less important in this era. I’d also get the ego.ai domain, which I’m guessing was not available. I have a lot of .ai domains; I mean, my website’s bish.ai, and I registered that way back in the day.

Brian Bell (00:46:57): Tell me about what makes a good VC, a good investor for you versus a bad one.

Vish Hari (00:47:00): I mean, there are many degrees of bad. There’s bad where they interfere and want so much shit from you, and it gets fucking annoying, or they’re very opinionated about how you’re building your product. That’s very bad. We’ve thankfully never experienced that. There’s bad in the sense that they see your investor updates and just don’t say anything. And there’s bad where, you know, one of our investors, I’m not going to name who, pulled out of one of our rounds before we did YC, when we were actually getting a round composed together, last minute. That’s not cool. They’ve since apologized for that, but it’s fine; it is what it is. And then there’s just neutral, which is “let us know how we can be helpful,” cheering from the sidelines after they give the money.

Brian Bell (00:47:33): Alright, cool.

Vish Hari (00:47:37): And then there’s great, which thankfully is most of our investors. Especially, you know, YC, BoostVC, our leads at Patron: fantastic people who are involved, who are available, who have thoughtful things to say, who are able to unblock stuff. Also very patient as I healed from something pretty damaging. Honestly, everyone was very supportive. No one was like, oh man, you should get to work. It was, take as much time as you need. Everyone was thankfully very understanding. But I mean, it’s a complicated space, I know, because investors have a ton of responsibilities to their portfolio companies, right? And some need more attention than others, and that’s fine.

Brian Bell (00:48:12): Yeah, we have a lot of constituents all at the same time, right? We have LPs, we have founders, we have team members. It’s a multifaceted role that’s very hard to juggle, lots of spinning plates. So you try to be helpful. I’m pretty hands off, very much a cheer-from-the-sidelines kind of investor. But if you have asks in your update, I will look at them and say, can I be helpful with any of these? And then I try to develop processes to follow so I can be helpful, because I have hundreds of portfolio companies. I can’t just drop everything for 300 updates a month and tackle all those asks, but I can design a system that extracts value from me, if that makes sense: here’s a repeatable process you can follow, based on that ask, to leverage what I have to offer.

Vish Hari (00:49:01): Knowing that from more investors would be great. I send this to...

Brian Bell (00:49:07): What is a belief you hold, near and dear to your heart, that would sound crazy to most other founders?

Vish Hari (00:49:45): That you can’t really optimize building by just grinding, grinding, grinding, this whole grindcore culture that’s happened. It’s bullshit, and I can’t believe the thing that we ran away from in Asia is now taking startups by storm. Taking a long walk, being out in the sun, surfing, gives people far better ideas on what to execute on versus just sitting in front of a fucking screen all day.

Brian Bell (00:50:03): Yeah. You know, I do work a lot, like 60 hours a week, basically seven to 10 hours a day, five to seven days a week. Some weeks are more than others, but often I’ll jump off and play video games at four or five in the afternoon before dinner, for an hour or two, just because it clears my head. Or go for a walk, like you said, or go play basketball. I try to live a really balanced life. Am I missing some deals, potentially, or some LP money? Probably. Could I grind harder? Yeah, I could. But there are diminishing returns in everything. There’s the 20, 30, 40% of effort you can put in that gives you 80 or 90% of the value, and after that it’s just diminishing returns. I noticed this a lot in my career too. You have to really pay attention to burnout. Are you feeling symptoms of burnout? If you are, how do you reshuffle your daily life to have more of a rhythm? You’ve got to be in the yin and yang of the rhythm, the Taoist walk-the-middle-path kind of thing.

Vish Hari (00:51:16): I’d even say that walking the middle path can mean walking both extreme paths, incredible work and incredible play, and having them average out. I don’t know, everyone’s on their own path, right? But this whole systematization of 996 grindcore culture is like, dude, I got away from this in Singapore, and it’s killed Japan. Why are we doing this in Silicon Valley? Silicon Valley has always been about, you’re in California, you’re having fun, and you’re building cool shit.

Brian Bell (00:51:40): Yeah, just enjoy it. This is what’s great about Hawaii. I lived in Hawaii for three years before I moved to San Francisco, and that island style of living, where everybody just slows down and has a four or five hour picnic on the beach...

Vish Hari (00:51:52): You need that. Your best ideas come when you rebuild your soul, when the soul.md file within you rebuilds.

Brian Bell (00:51:57): Yeah, the soul.md file. Honestly, yeah.

Vish Hari (00:52:02): It’s important. And I think a lot of the younger founders these days are just like, no, I’ve got it. But we’ve seen where that leads.

Brian Bell (00:52:11): Yeah, I always ask myself this question, because I grew up poor. Anybody who’s a longtime listener of this podcast is rolling their eyes right now: here he comes with the same story. But, you know, I grew up poor and I worked really hard to get to Wall Street. This is 20 years ago; I was like 25. And I was like, I hate this. I’m working so hard, getting into the office at 7 in the morning, leaving at 9 or 10 at night, working six days a week, just the hustle culture of Wall Street. And I was like, ooh, I don’t like this. It was really hard for me to throw that all away and say, you know what, this isn’t for me. I kept asking myself, if I was already successful, a big shot worth tens of millions of dollars, would I go to work tomorrow? And I was like, no, I wouldn’t. So, okay, what would I do? I had no idea. So I just quit and started trying stuff. I’m not a VC to make money; I made way more money at Microsoft, and I’d probably make way more money if I had stayed there rather than running Team Ignite. But I always begin with this thought experiment: if I had whatever number matters to me, a billion dollars, 500 million, whatever it is, would I go do what I’m about to do today? Would I live my life like I just lived it today? That’s your answer right there.

Brian Bell (00:53:25): And if you just do that every day, if you really check in with yourself, am I living the life I want to live, day after day, week after week, and the answer is no, then just start making adjustments. For me, I work really hard at Team Ignite, probably harder than I would if it was already successful, but I’m doing what I would do if I was a billionaire: invest in startups, do the podcast and stuff. Okay, cool, great, what else would I do? Well, I’d play video games at four o’clock in the afternoon, or go for a long walk with my wife at sunset, rather than go back and sit in front of the screen and get more work done and make more investments. So there’s this balance, and I think living through what you’ve lived through gives you kind of a unique perspective.

Vish Hari (00:54:04): Yeah, part of it is I know I’m already dead, so I’m just living the life I want to live, right? I’m on borrowed time. I’m dreaming, so let me dream. Like, dude, should I go work at OpenAI? No, I wouldn’t get to do all of this, you know? The money would of course be better, but growing up poor, once you’ve hit your bare necessities, which I have, I’m good. I’m good. I don’t need, you know, Gucci, Prada, whatever. Who cares?

Brian Bell (00:54:28): I’m okay with the $15 sunglasses at the drugstore.

Vish Hari (00:54:32): Most of my money goes toward my tattoos. So I’m like, yeah, if I can do more tattoos, great.

Brian Bell (00:54:36): Tell us about the tattoos. What are all the tattoos?

Vish Hari (00:54:41): Well, I don’t know, there are so many. That’d be a whole other podcast episode, Ignite. I have Shiva, who affected me during the entire craziness that happened last year. And I’m named for Vishnu, which I’m getting on this side in the fall.

Brian Bell (00:54:54): That’s awesome. I don’t have any tattoos, but I admire people that do, because I think you’re really making a statement. You’re saying, this is part of me.

Vish Hari (00:55:03): Yeah, I guess people see it that way. For me, it’s just what I’ve wanted. I want to express myself, and this is how I express myself. And I love the artistry behind it. A lot of my tattoos are done in traditional style; they’re not done with a machine. This one is Japanese, and this one is Polynesian. I just like interacting with the history behind them.

Brian Bell: Well, last question, because I could keep talking to you, but we’re going to need to wrap up. What do you want your legacy to be?

Vish Hari: Family, great kids, and good films.

Brian Bell: What was the last one?

Vish Hari: Good films. I’m still a screenwriter. I still work every day on filmmaking. I made my first film...

Brian Bell (00:55:38): That’s so cool. Are you using any new AI tools to develop any of your stuff, or just writing it down?

Vish Hari (00:55:45): I just write. I’m a writer. There’s a lot of stuff happening in AI and film, and some of it is pretty cool. But I’m still very traditional on that front. I just have a story in my head.

Brian Bell (00:55:56): Could you take your writing and then make it a reality using some AI video gen tools and stuff?

Vish Hari (00:56:01): I know it’s possible, but it just doesn’t interest me at this point.

Brian Bell (00:56:05): Storyboard it or something, so you could sell it? Or no?

Vish Hari (00:56:08): It’s the most traditional part of me: I just like to write. It’s an instinct, not a career. It’s something that I have to do every day. I even write internally for my company; we do a lot of memos. Writing to me is thinking, and that’s the case in startups and tech. But writing to me is also expressing, which is the case in film. You know, some people journal; I write screenplays.

Brian Bell (00:56:29): That’s why I’m hopeful for the future of humanity and the human race. We could never write down all the stories that are possible in the universe. There will always be new stories to tell and experience and share. Totally. There are more possible novels than there are atoms in the universe. 100%. Because if you think about the factorial nature of writing down words.

Vish Hari (00:56:53): Writing, yeah, absolutely.

Brian Bell (00:56:54): Yeah, there’s just more to say than the universe can possibly house.

Vish Hari (00:56:59): Yeah, it’s more to experience. More experience, yeah, that’s pretty fascinating.
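(A quick back-of-the-envelope check on Brian’s claim: strictly the growth is exponential rather than factorial, but the conclusion holds. The vocabulary size and passage length below are illustrative assumptions, not figures from the conversation.)

```python
import math

VOCAB = 10_000    # assumed working vocabulary of an author (illustrative)
LENGTH = 100      # just a 100-word passage, far shorter than a novel
ATOMS = 10 ** 80  # rough count of atoms in the observable universe

# Ordered word choices: VOCAB^LENGTH distinct 100-word passages.
sequences = VOCAB ** LENGTH

print(sequences > ATOMS)       # True: even 100 words beat the atom count
print(math.log10(sequences))   # about 400 orders of magnitude, versus 80
```

Even a single paragraph admits roughly 10^400 variants, dwarfing the atoms available to print them on.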

Brian Bell (00:57:03): Well, Vish, thanks so much for coming on. Learned a ton. Where can folks find more about you and Ego if they’re interested? What is it?

Vish Hari (00:57:11): Great. One day, when we raise the Series A, I’ll get the .ai, but for now it’s ego.ai.

Brian Bell (00:57:15): Yeah, when you can throw 50 grand at it or something.

Vish Hari (00:57:17): No, it’s not even 50 grand. 50 grand I could afford. I think it’s closer to half a million to a million now.

Brian Bell (00:57:22): Oh, geez, yeah. Well, thanks so much for coming on. Really enjoyed it.

Vish Hari: Thank you, Brian. Take care.
