EB-3: Ego - Transcript - Part 3
👩‍🎤 Ate-A-Pi: Yeah, so, because, you know, my way of doing things is I figure out a social strategy to go viral. And that, you know, annoyed a lot of people on Twitter. Right, right. Yeah, so I have certain rules, like only annoying nerds, so nerd bait is okay.
🛡️ vish: Yeah. Hey, when you annoy someone, you know you're doing something right. Especially on X or Twitter, whatever.
👩‍🎤 Ate-A-Pi: So after a lifetime of dealing with nerds, I'm like, all right, let's see. So I was like, Feynman's IQ is 125. Oh my gosh, you should have seen the meltdown. A true factual statement, and there was an enormous meltdown, right? So I'm a little bit wary of LinkedIn, because I'm like...
🛡️ vish: You know how to trigger them.
🛡️ vish: Oh my god.
👩‍🎤 Ate-A-Pi: If I put something on there, I'm going to trigger someone and it's just going to be a meltdown. So I'm like, ah. So one thing that struck me was Mark Zuckerberg did an interview with Lex Fridman where they were both using scans. And he said they'd done the scan, and they'd gone to a shopping mall and took like,
🐱 Peggy: Yeah.
🛡️ vish: Mm-hmm.
👩‍🎤 Ate-A-Pi: two to four hours or something, and they did a full body scan, et cetera. And in most of Facebook's metaverse stuff, they always show characters which are very close to that person's identity. They don't display anything else, because once you have a 3D body scan and face scan, Mark could take Lex's image, right? But of course that's gonna be very sensitive,
🛡️ vish: Mm-hmm.
🛡️ vish: Right.
👩‍🎤 Ate-A-Pi: this whole thing about Facebook having your 3D scan, and then you just attach the rig to someone else and someone else can be you, right? So obviously, they're very sensitive about this aspect of switching roles. So how do you see that? Do you see social networks where you have to be closer to your true self versus social networks where you are allowed to be something else? Is that gonna be this point of divergence
🛡️ vish: Mm-hmm.
👩‍🎤 Ate-A-Pi: between the two. Yeah.
🛡️ vish: Yeah, possibly. I think that's what the larger platforms are interested in, but it's not what we're interested in. We're absolutely interested in however you see yourself manifesting in a 3D space. It can be whatever you want. And I think that's the beauty of what we're trying to do. It makes attribution hard, and there's obviously going to be problems with people pretending to be somebody else, but I think that's part of the trade-off. And how we deal with that, I have no idea, but we'll figure it out when we get there. When we cross...
When that's a real problem, that's going to be an exciting problem to want to solve. And I think the larger companies, they get caught in these what if thought loops. I think you've seen that most recently with Google Gemini. They try to over-engineer something to prevent anything bad from happening, which usually results in much worse outcomes actually coming to reality. We want to avoid any of that. We just want to be like, what's super cool and fun that no one else can do? All right, let's ship it.
👩‍🎤 Ate-A-Pi: Right, right, right.
🛡️ vish: Let's see how people actually use it. Let's see if all these concerns are truly legitimate concerns, right? I mean, I still remember back in the early days of the internet, people were like, oh, don't get into a stranger's car. Who's ever going to trust getting into a stranger's car and just summoning it off the internet? But we do that every day now, right? So people always tend to imagine the worst case scenario and go, oh, this could be dangerous and unsafe. But reality usually plays out a little differently.
👩‍🎤 Ate-A-Pi: Indeed, indeed.
🐱 Peggy: Yeah, I also think that we're kind of circling back. I remember, I mean, I was super young back then, but in the early days of the internet everybody was pretty much either pseudonymous or anonymous. And Facebook's big strategy was that they only allowed people on their platform who showed their real identity. And now I think people are kind of getting tired of that. And especially with
you know, Instagram and TikTok and their kind of extreme manifestations, like showing your best self, where it's almost unrealistic in a sense. You're always posting, like, the super best parts of you on Instagram. And I feel like kids today feel this intense social pressure to always appear perfect on social media. And so now we're kind of seeing, I guess, a divergence, a move back
👩‍🎤 Ate-A-Pi: Mm-hmm.
🛡️ vish: highlight reel.
🐱 Peggy: back into more pseudonymous spaces, right? I think this is why people are not on Facebook and Instagram as much. A lot of my friends have deleted their Instagram accounts, but there's not really another social media platform for them to go to. And so I think that's actually a super interesting space.
🛡️ vish: I mean, I'd say Discord is probably the closest we get to a pseudonymous chat app that a lot of people in the younger generations use. But you know, I wonder and I question: what is real then? What is your real self? If people feel this immense pressure to bring their highlight reels out on these current social platforms, is that truly real? Is that who you really are? I question that. And I think putting on a mask allows you to show your real self a lot more.
🐱 Peggy: This is more chat.
🛡️ vish: I think we lost connection.
🛡️ vish: Now Ate-A-Pi is frozen.
🐱 Peggy: Stick your tongue out, Vish.
🛡️ vish: Oh, the tongue doesn't work for the desktop one.
👩‍🎤 Ate-A-Pi: Okay. Yeah, yeah, it's okay. The recording would have caught up, so it's okay. No big deal. So, okay, a couple of quick questions. How did you guys decide to found together? Because it's a very sensitive decision, too, because you might be tied up together for like five to ten years. So how did you decide, okay, this is gonna be my co-founder,
🐱 Peggy: sudden.
🛡️ vish: Oh, you're back. Hello.
🐱 Peggy: and then.
🛡️ vish: Okay.
👩‍🎤 Ate-A-Pi: what drove that decision-making process.
🛡️ vish: Yeah, maybe we could... I'm gonna switch to my real camera so you can see the real Vish for a second, and see how not at all close my avatar self is. And then maybe we could do a quick introduction on our backgrounds, and that kind of leads to us figuring out why we wanted to work together. Peggy, you wanna get started while I switch my camera out?
🐱 Peggy: Sure. I'll stay as Smushy because my room is really messy. But yeah. So I was basically very interested in the space of AI and, like, deep tech and robotics for a very long time. I actually started out being super obsessed with robotics and autonomous driving. So I did a bunch of research at Stanford in behavior planning specifically,
👩‍🎤 Ate-A-Pi: Yeah.
🐱 Peggy: in Professor Mykel Kochenderfer's lab. He's awesome. Everybody should take his class. And I did a bunch of research on how autonomous cars and robots should drive when there are other pedestrians and cyclists around in the environment. I ended up doing a couple internships at Lyft Level 5, which is Lyft's autonomous driving division. And that was
👩‍🎤 Ate-A-Pi: Mm-hmm.
🐱 Peggy: super interesting, because I actually got to put code on a car that told the car how to drive at four-way stop sign intersections. So hopefully no humans were damaged by my code. And actually, in between my internships and my research, a Facebook recruiter came up to me and tried to convince me to join Facebook. And basically what I told her was, I'm very interested in, you know, deep tech.
👩‍🎤 Ate-A-Pi: Hahaha
🐱 Peggy: I would only join Facebook if you let me work on AR/VR and on Oculus. And so I actually did an internship at Oculus in ground truth depth sensing: basically taking 3D LiDAR point clouds and reprojecting them into 2D depth data, to use as ground truth to train a bunch of ML models for ML depth sensing on the Oculus Quest.
👩‍🎤 Ate-A-Pi: Mm-hmm.
👩‍🎤 Ate-A-Pi: Mm-hmm.
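The ground-truth reprojection Peggy describes, turning 3D LiDAR point clouds into 2D depth data, can be sketched with a simple pinhole camera model. This is a minimal, hypothetical illustration, not Meta's actual pipeline; the intrinsics (fx, fy, cx, cy) and the crude z-buffering are assumptions:

```python
import numpy as np

def points_to_depth_map(points_cam, fx, fy, cx, cy, width, height):
    """Reproject 3D points (already in camera coordinates) into a 2D depth map.

    points_cam: (N, 3) array of [x, y, z], with z the forward distance.
    Returns a (height, width) array where 0 means "no measurement".
    """
    x, y, z = points_cam[:, 0], points_cam[:, 1], points_cam[:, 2]
    keep = z > 0                                   # only points in front of the camera
    x, y, z = x[keep], y[keep], z[keep]

    # Pinhole projection: 3D point -> pixel coordinates
    u = np.round(fx * x / z + cx).astype(int)
    v = np.round(fy * y / z + cy).astype(int)

    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    u, v, z = u[inside], v[inside], z[inside]

    depth = np.zeros((height, width))
    # Write far-to-near so nearer points overwrite farther ones (a crude z-buffer)
    order = np.argsort(-z)
    depth[v[order], u[order]] = z[order]
    return depth
```

For example, two points on the optical axis at z = 2 and z = 1 both land on the principal pixel, and the nearer one (z = 1) wins.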
🐱 Peggy: And I liked my internship so much that I decided to switch from self-driving cars into the AR/VR space. So I went back to Facebook after graduation, joined an ML team, and then switched to the team I was on right before I left, which was face tracking at Meta Reality Labs. And we were basically the
applied research team that made all the algorithms for face tracking, like what you see on my avatar right here. And that's actually how I met Vish. We met at Facebook. I'll wait until he comes back to tell the whole story, but we've been
👩‍🎤 Ate-A-Pi: Right on.
👩‍🎤 Ate-A-Pi: So I have a random kind of question. I think there were a couple of Berkeley professors who identified all the muscles in the face, and from those muscles you can detect emotion. And I've seen, like... is that true? Is it really? I mean, because...
🐱 Peggy: Yeah.
👩‍🎤 Ate-A-Pi: You know, those guys, I remember, I think in the 70s, they used to take electrodes and zap each other's facial muscles and try to figure out which muscles were moving. And then they would look at video, spot the same muscles moving, and then identify: is that guy lying? Is that guy telling the truth?
🐱 Peggy: Interesting.
🐱 Peggy: Yeah.
👩‍🎤 Ate-A-Pi: Number one, is that actually possible? I mean, are you aware of the research? And number two, you have this facial tracking software, and I've seen some surveillance companies propose this: oh, you can tell if someone's lying from a facial video. Is that true? Does that work?
🐱 Peggy: I think you got cut out a bit. I'll wait till you come back. And hi, Vish.
🛡️ vish: Hello.
🛡️ vish: The challenges of losing connection.
👩‍🎤 Ate-A-Pi: All right, are we... can you hear me?
🐱 Peggy: Mission...sad.
👩‍🎤 Ate-A-Pi: Uh... Hello?
👩‍🎤 Ate-A-Pi: Can you guys hear me?
🛡️ vish: Yes, hello.
🐱 Peggy: Yep.
👩‍🎤 Ate-A-Pi: Okay, so let me just go back over that question. So in, I think, the 1970s, there were a couple of Berkeley professors who started off poking electrodes into each other's faces and zapping the muscles, identifying each group of muscles and basically what emotion each group of muscles would
🐱 Peggy: Yeah.
👩‍🎤 Ate-A-Pi: relate to. And then they would look at videos of people and they'd be able to tell: is that guy lying? Is that guy not lying? And I think there's even a taxonomy, and a book that was printed. So is that research well known? Have people used it to detect emotion before?
🐱 Peggy: Yeah, so I don't think I've seen that particular study, but there is a well-known system in the face tracking industry of what are called action units, also known as FACS, F-A-C-S, the Facial Action Coding System. Actually, we know one of the most prominent proponents of FACS, Melinda Ozel; she's a facial research scientist who specializes in the system,
where basically each muscle of the face is coded into one of, I think, 50 to 100 action units, or facial action coding units. And by a combination of how intensely the units are activated, you can tell whether a person has a specific emotion. So a concrete example is if you're very angry, like this:
you basically furrow your brows. Each brow has, I think, three or four action units, and the action of furrowing your brows is a specific action unit. And by telling how intense your furrow is, and how you kind of scrunch up your face, wrinkle your nose, and move your cheeks in a certain way, you can basically guess what emotion people have.
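As a toy illustration of the idea Peggy is describing (and not any production system), here is a rule-based sketch that maps FACS action-unit intensities to a coarse emotion guess. The AU numbers follow standard FACS naming (AU4 brow lowerer, AU6 cheek raiser, AU9 nose wrinkler, AU12 lip corner puller); the thresholds and rules are invented for illustration:

```python
# Toy rule-based sketch: map FACS action-unit (AU) intensities in [0, 1]
# to a coarse emotion guess. The thresholds are illustrative only.

def guess_emotion(aus):
    au = lambda k: aus.get(k, 0.0)
    # Anger: strongly furrowed (lowered) brows plus a wrinkled nose
    if au("AU4") > 0.5 and au("AU9") > 0.3:
        return "angry"
    # Happiness: lip corners pulled up, with raised cheeks
    if au("AU12") > 0.5 and au("AU6") > 0.3:
        return "happy"
    return "neutral"

print(guess_emotion({"AU4": 0.8, "AU9": 0.6}))   # angry
```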
👩‍🎤 Ate-A-Pi: Mm-hmm.
🐱 Peggy: Another example is a smile: how wide you smile, how you move your brows and your forehead and kind of ease them out a little bit. I'm not very good at this, but you can kind of tell that people are happy. So in the early days of face tracking, what you would do is track landmarks on your face. You would have landmarks around the brows,
👩‍🎤 Ate-A-Pi: Mm-hmm.
👩‍🎤 Ate-A-Pi: Mm-hmm.
🐱 Peggy: the shape of your eyes, your nose, your mouth. And based on how these landmarks move, you can try to tell what emotion that person has, just from that. So yeah, I think that's super interesting. Obviously today you have much better algorithms for tracking people's emotion, but this is just a simple, pre-deep-learning way of doing face tracking.
👩‍🎤 Ate-A-Pi: I see.
👩‍🎤 Ate-A-Pi: Right. And post-deep learning, is it just a form of image recognition? You know, you do a classification of the person's facial image or video or whatever.
🐱 Peggy: So yeah, basically you collect a bunch of training data of happy people, sad people, depressed people, angry people, and you label them. And then you train a neural network to classify a specific image as happy, sad, angry, depressed, frustrated, whatever; there are like seven or eight big emotions that people have. So yeah, I mean, neither of the methods is extremely complicated, but it's kind of interesting to see how they evolved.
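A minimal sketch of the classification framing Peggy describes: in practice you would train a CNN on labeled face images, but a toy softmax classifier over hand-picked landmark features shows the same idea. All features, labels, and hyperparameters here are illustrative:

```python
import numpy as np

EMOTIONS = ["happy", "sad", "angry"]   # real systems use ~7 basic emotions

def train_softmax(X, y, n_classes, lr=0.5, steps=2000):
    """Multinomial logistic regression by gradient descent: a toy
    stand-in for the neural networks used on real face images."""
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.01, size=(X.shape[1], n_classes))
    Y = np.eye(n_classes)[y]                        # one-hot labels
    for _ in range(steps):
        logits = X @ W
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)           # softmax probabilities
        W -= lr * X.T @ (p - Y) / len(X)            # cross-entropy gradient
    return W

def predict(W, x):
    return EMOTIONS[int(np.argmax(x @ W))]

# Toy features: [bias, mouth_corners_up, brow_furrow]
X = np.array([[1, 0.9, 0.1], [1, 0.8, 0.2],    # happy
              [1, 0.1, 0.1], [1, 0.2, 0.15],   # sad (neutral-ish face)
              [1, 0.1, 0.9], [1, 0.2, 0.8]])   # angry
y = np.array([0, 0, 1, 1, 2, 2])
W = train_softmax(X, y, len(EMOTIONS))
print(predict(W, np.array([1, 0.95, 0.05])))   # happy
```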
👩‍🎤 Ate-A-Pi: Right on. So Vish, going back to our question earlier: how did you guys meet? What was the background? And how did you make this decision to found together? What's the founder meet-cute, you know?
🛡️ vish: We'll be right back to our questions.
🛡️ vish: Yeah, so it's a very meet-cute. Yeah, my background's originally in astrophysics. I was doing... Oh, is it?
🐱 Peggy: It's a very big meet-cute.
🐱 Peggy: By the way, your audio is a little soft.
👩‍🎤 Ate-A-Pi: Maybe it's Vish, not me.
🐱 Peggy: Yeah, it's on Vish, yeah.
🛡️ vish: OK, let me just bring the mic a little closer. There we go. Yeah, so my background is actually in astrophysics. I was doing research on finding exoplanets with deep learning models way back in the day. This was before deep learning was cool. So the first model I actually worked on: I built a single-layer perceptron to classify whether something was potentially an Earth-sized habitable exoplanet or not. And kind of, yeah, you could say so. Yeah.
👩‍🎤 Ate-A-Pi: And would that be image recognition? Something, yeah.
🛡️ vish: It was actually a bunch of things combined with image recognition, but the way you'd figure out whether there's a planet in front of the star is you look for a dip in an observed light curve. So essentially, you have the sun, and then, you know, you have a shadow. And when things pass by the shadow, you can deduce a lot of things about the planet: its size, its chemical composition, its shape, how far away it is from its companion star. And a real problem in that day was, like...
👩‍🎤 Ate-A-Pi: Yeah.
🛡️ vish: figuring out if what you were resolving was actually a planet or just some random astrophysical phenomenon. The idea was that you could train a model on things that were bad astrophysical phenomena, like, for example, an eclipsing binary, and you could use that to say, oh, we're pretty confident this is actually an Earth-sized planet, because the smaller your resolution was, the more likely it was you weren't sure whether it was actually a planet or some artifact. So I kind of saw the power of deep learning. I was at the University of Toronto around the time, you know, Ilya and Alex were...
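The transit method Vish describes can be illustrated in a few lines (a toy sketch, not his actual research pipeline): a planet crossing its star blocks a fraction of the light equal to (R_planet / R_star)^2, so the depth of the dip in the light curve gives the planet-to-star radius ratio:

```python
import numpy as np

# Toy illustration of the transit method: a planet crossing its star
# blocks a fraction of light equal to (R_planet / R_star)**2, producing
# a box-shaped dip in the normalized light curve.

def transit_depth(flux, baseline=1.0):
    """Estimate the fractional dip depth from a normalized light curve."""
    return baseline - flux.min()

def radius_ratio(depth):
    """Depth = (Rp/Rs)**2, so the radius ratio is its square root."""
    return np.sqrt(depth)

# Simulate a light curve with a 1% dip (a Jupiter-ish planet; an
# Earth-Sun transit would dip only about (1/109)**2, roughly 0.0084%)
t = np.linspace(0, 10, 1000)
flux = np.ones_like(t)
flux[(t > 4) & (t < 6)] -= 0.01

print(radius_ratio(transit_depth(flux)))   # ≈ 0.1, i.e. Rp ≈ 0.1 R_star
```

The hard part Vish mentions (distinguishing a true planet from an eclipsing binary or instrument artifact) is exactly what the classifier was for; this sketch only covers the clean geometry.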
They'd shipped AlexNet, and then they were working with Hinton on their deep learning company, DNNresearch I think it was called, which was then sold to Google. So I was very excited about AI. And I decided to drop out of my higher studies, my PhD, my master's, to come out to the Bay Area. I held a couple of roles, mostly in engineering leadership, before moving to Facebook in AI applied research. There I worked on things like graph nets and ads. But as soon as
Facebook made the big switch to becoming Meta, I was super curious and really excited about it. So I decided to move to the metaverse org, where I was a PM in charge of scripting for Horizon Worlds. And that's around the time I'd met Peggy. So Peggy and I, we actually met at a crypto thing through a friend. I was going to these crypto events in...
🐱 Peggy: Yeah.
🛡️ vish: in SF, just to sort of figure out what was going on. I think I met a friend of mine at a Harmony blockchain event, and I was kind of just adjacent to the space. I'd mined a bunch of crypto, but I'd...
👩‍🎤 Ate-A-Pi: I think you guys have outlasted the Harmony blockchain at this point.
🐱 Peggy: Hahaha
🛡️ vish: Yeah, I guess so. We definitely have to thank Harmony for bringing us together. Kind of apt, given its name. But yeah, we were just casually curious about the space because a lot of people were making a lot of money in it. I'd mined some Dogecoin casually, and I actually ended up selling that to fund my condo purchase in SF. So I was like, this is kind of crazy that this is happening; I need to figure out what's going on in crypto. So I started going to all these events, and I met someone who introduced me to Peggy, because at that time I was actively looking for a co-founder
🐱 Peggy: Yeah.
🐱 Peggy: I'm sorry.
🛡️ vish: to found my company. And in the middle of that, I was also trying to figure out a bunch of green card issues. I'd actually been denied my green card three times. I ended up suing the US government to actually get it approved, which is another story altogether. But a lot of things kind of happened together at the same time: the green card situation was resolved, I had met Peggy, we were carpooling every day to work at Burlingame, which is where the Reality Labs office was, and we were talking about different ideas that we had. And I'd come up with this idea of like, hey man, I've...
I learned programming by literally modding video games, and I've always wanted to build a truly pseudonymous 3D space. And you're in face tracking; you know how avatars work really well. I built an earlier version of Ego when ARKit first came out, in 2018 I think. I was super excited about the promise of allowing anyone to become a 3D avatar, because I thought that was the first step you needed to solve to allow people to then hang out in 3D worlds. And we knew a lot of things were going on in the 2D space, like Midjourney starting to get some traction, so
it was reasonable to assume that the same was going to happen in 3D. So we thought, let's go ahead, let's make this happen. We spent a bunch of time figuring out whether we were a good fit as founders before raising our first pre-seed round. We were backed by Pear VC and Boost VC. Shout out to Pear and Boost, you guys are awesome. They were our first checks, and we began in January 2023. We raised about a million at that point, and we started working on Ego, which is the app you're seeing Peggy use right now.
And when we launched, we realized that even though a lot of people wanted to stream as avatars, they didn't really care about face tracking as much as we thought they did. They cared more about customization. And customization was a problem that we couldn't solve at that point, especially in terms of scaling. So we were kind of in the swamp of trying to figure out what to do. And we had a chance to have dinner with Emmett Shear, who's one of the co-founders of Twitch, who gave us some incredible advice. That session was revelatory for us. He told us, effectively what you're building is a game. And we're like,
are we really building a game? He's like, yeah, you're building a game. That's downstream of 3D social spaces: either you're building a game, or you're building a really sleazy chatbot app, which is what a lot of these 3D chat spaces end up becoming. So if you want to be more than just a chat app, you actually have to build a game. And we're like, oh yeah, you're right, that's probably true. And around that time, the Stanford generative agents paper had come out. We were like, let's implement this in 3D and see what happens. So we were kind of working in parallel to figure out...
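The Stanford generative agents paper Vish mentions centers on agents that keep a memory stream and retrieve recent, important memories before acting. A heavily stubbed sketch of that loop, where `ask_llm` is a placeholder standing in for a real language-model call, and the retrieval score omits the paper's relevance term:

```python
import time

def ask_llm(prompt):
    """Placeholder for a real language-model call."""
    return f"(llm response to: {prompt[:60]}...)"

class Agent:
    """Sketch of a generative agent: a memory stream plus retrieval."""
    def __init__(self, name):
        self.name = name
        self.memories = []                    # (timestamp, importance, text)

    def observe(self, text, importance=1.0):
        self.memories.append((time.time(), importance, text))

    def retrieve(self, k=3):
        # Score memories by recency and importance (the paper also
        # weighs relevance to the current situation, omitted here).
        now = time.time()
        scored = sorted(self.memories, key=lambda m: (now - m[0]) - m[1])
        return [text for _, _, text in scored[:k]]

    def act(self):
        context = "; ".join(self.retrieve())
        return ask_llm(f"{self.name} remembers: {context}. What next?")
```

In the real system each agent's `act` output feeds back into the world, so the agents observe one another and the simulation compounds; that feedback loop is what made the paper interesting for a 3D social space.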
🐱 Peggy: Yeah.
🛡️ vish: where are we going to pivot to that's a truly venture-scale problem to solve? There was this advice about figuring out what the game is at the end of the day, and there were these awesome AI agents coming out. So we were in that sort of swamp of pivoting when we met Emmett. And he encouraged us to apply to YC. So we did, and we got in. We were interviewing, I think, with Michael at that point, Michael Seibel, who asked us some really difficult questions. It was a pretty brutal interview, but I think we're very lucky to have been accepted into YC and to do the current batch.
And we thought it would be helpful for us to get that kind of high-quality advice, which we continue to get. Currently our group partners are Diana Hu and Amit Sher, obviously both in the AR and AI spaces. So they've been giving us valuable feedback as we experiment on this sort of question: what happens when you can become an avatar in a 3D space filled with AI agents? What do you want to do? What is fun? What are the problems to solve in the space? So that's kind of our entire path. Obviously, we're still experimenting; you know, we're way before product-market fit.
We're just trying to see if there's a fun loop we can generate, where people create these random characters, have them talk to one another, then talk to them, and see what happens from there. We have seen signs that the streaming approach is not the right approach. Obviously there was the Seinfeld one, but there have been a couple of other AI streaming channels: there's jars.ai, there's always break time. But we found that they've had difficulty retaining an audience. For people to consistently watch a stream, you need to have something more than just
people talking and weird generated stuff happening. You need to feel like you're immersed in the space. You need to feel like there are new and novel things they can do. You need to have more audience control. So again, it all goes back to: there is a game here. And as we explored the simulation space, we realized there was an opportunity to build both, which had been our struggle all the while: are we building a game engine, or are we building a game? And recently we had office hours with PG, which helped us figure out that you could actually do both.
The beauty of AI is that you can finally build a game that is a game engine. And when I heard that from PG, I was like, oh yeah, that makes a ton of sense. When I think back to why I started programming, I started programming because I wanted to customize the games that I was playing. I was always thinking, what if I could? And I would write code to make it happen. And then other people also wanted it, so people liked my mods. And then I was like, oh, maybe I should go be a game developer. I didn't want to do that because game developers make no money. But, um...
🛡️ vish: But I was thinking about it, and I was like, huh: game development is so hard, and people do it in spite of the fact that there's not a lot of money in it. If you reduce the amount of effort it takes to actually customize and build games, you're gonna have a lot more people building games and doing cooler things with them, and you basically increase the level of access, and then suddenly you've built an infinite game, which has always been my life goal: to build an infinite game, or to have a platform where people can infinitely exist as pseudonymous 3D characters. So...
👩‍🎤 Ate-A-Pi: and crack.
🛡️ vish: IronCrad, exactly, that's actually my LinkedIn profile, building avatars in IronCrad at Ego. So that's been the sort of overarching vision. That's what got us to build Ego, and it's what's keeping us going.