Transcript: Prophetic - Part 2
π Eric Wollberg: It's not really the job of venture capitalists to underwrite research, right? It's more for universities and governments. And I firmly felt it was still the "R" in R&D. So I kind of tabled it, but kept very close to both the neuroscience, the neurostimulation, and then later the machine learning architectures. And it wasn't until 2022, when I found transcranial focused ultrasound and the neural transformer, that I firmly felt we had entered "D." Now, let me just talk a little bit in particular about
π©βπ€ Ate-A-Pi: Right.
π©βπ€ Ate-A-Pi: Yep.
π©βπ€ Ate-A-Pi: Mm-hmm.
π Eric Wollberg: TFUS as it compares to the previous neurostimulation modalities, back to those three core previous limiting factors: depth is centimeters into the brain, non-invasively. Precision is millimeters, so you're going from no precision to millimeters. It's not just orders of magnitude; this is a paradigm shift. And then three is the ability to steer these millimeter pulses in three-dimensional space. This is critical, right, because your brain fires in three-dimensional neural firing patterns. And so,
π©βπ€ Ate-A-Pi: Yeah.
π©βπ€ Ate-A-Pi: Yeah.
π©βπ€ Ate-A-Pi: Mm-hmm.
π Eric Wollberg: That, paired with the neural transformer architecture, which we should definitely dive deeper into, and Wes can go deep on that; he built, we built that architecture from the ground up. Those two things paired together were what gave me the confidence that we had firmly entered "D" and to eventually start the company. And what was amazing is that I found Wes, who was already working on using neural transformers for a variety of different applications in neurotech.
π©βπ€ Ate-A-Pi: Right.
π Eric Wollberg: And had also, you know, just started going down the ultrasound rabbit hole. So lucky, lucky for us to have found each other.
π©βπ€ Ate-A-Pi: So just one more question on the hardware. When did all of this hardware start to kind of make sense in the package that it does? Because I imagine EEGs at one point were very, very big machines. And ultrasound was something a doctor uses on a patient, for a baby or whatever; it's something you hold in your hand. It's not really small enough to put on a headband. So when did these pieces of tech get small enough for this to start to make sense?
π Eric Wollberg: Yeah.
π Eric Wollberg: Yeah, great question. So, you know, TFUS for neuromodulation actually started around 2004. And through the mid-to-late 2000s and the 2010s, again, really in research institutions, etc., you saw them increasing the element count. Elements are piezoelectric material, right, a crystal, for example, where you run electrical current through it and it oscillates, creating an ultrasonic wave.
π©βπ€ Ate-A-Pi: Yeah.
π Eric Wollberg: The more elements you have, the more ability you have to both move that focus around and phase it, and the better the precision. And we note that there are actually these curves that we've observed. I named it Wes's Law, because Wes has too much humility to name it after himself. But you're seeing very similar kinds of curves, a la Moore's Law,
where the size of the transducers is shrinking and the number of elements on a given transducer is increasing over time. And so you always wanna be in a place, right, in hardware in particular, where you're riding some kind of curve, right? And you're seeing that happening in transducers, where the number of elements is going up while the cost, and so on and so forth, is decreasing over time. And so,
π©βπ€ Ate-A-Pi: All right.
π Eric Wollberg: That was a really critical thing that was done over the course of what's now probably 20 years, to where you now see this technology really capable of being commercialized.
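To make the steering idea concrete, here is a minimal sketch of the time-of-flight math a phased array can use to focus at a 3D point. Everything here (element layout, aperture, tissue sound speed) is an illustrative assumption, and real transcranial systems also have to correct for the skull:

```python
import numpy as np

SPEED_OF_SOUND = 1540.0  # m/s, an assumed average for soft tissue

def focusing_delays(element_positions, target, c=SPEED_OF_SOUND):
    """Per-element firing delays so all wavefronts arrive at `target` together."""
    distances = np.linalg.norm(element_positions - target, axis=1)
    travel_times = distances / c
    # Elements farther from the focus fire first; nearer ones wait.
    return travel_times.max() - travel_times

# Example: 16 elements in a 4x4 grid (3 cm aperture), focusing 5 cm deep
# and 1 cm off-axis. Changing `target` re-steers the focus in 3D.
xs, ys = np.meshgrid(np.linspace(-0.015, 0.015, 4), np.linspace(-0.015, 0.015, 4))
elements = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(16)])
print(focusing_delays(elements, np.array([0.01, 0.0, 0.05])))
```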
π©βπ€ Ate-A-Pi: Right.
π©βπ€ Ate-A-Pi: So increasing resolution is always nice, right? You get a boost, a tailwind, from the increasing resolution. Okay, so let's get to the build-out of the neural model. I mean, when we build machine learning models, we always start with a large, messy data set, hopefully, hopefully not a small one. So talk about the first
alpha version of the neural model. Like, where did the data come from? What were you trying to do? What was the test that you did that was like, okay, you know what, it's worth putting more time and effort into this?
π§ wes: Yeah. So V1, I'll start with the V1 of Morpheus. What we did is we basically sourced data from open source data sets, alongside some lucid dreaming data from the Donders Institute, but primarily the dataset is really built off of what we found open source. And so the idea...
π©βπ€ Ate-A-Pi: Okay.
π©βπ€ Ate-A-Pi: So maybe just to take a step back, what is the first thing that you're trying to detect there? Are you trying to detect the entry into REM? What's the first question that you have that you need to solve? Are you looking at the EEGs and then you're trying to classify, build a classifier to figure out whether it's entering REM? Is that the first thing? Or are you trying to build a classifier to detect the lucidity? What are you trying to detect from that data that you first have?
π§ wes: The goal of Morpheus-1 is to be given a particular brain state and to continuously output TFUS instructions that get a response in the brain. That's the number one goal. We're not doing a significant amount of classification; that's the number one goal of the model. The REM stuff, that's, you know, we leave that to other
π©βπ€ Ate-A-Pi: Right.
π©βπ€ Ate-A-Pi: Right.
π§ wes: techniques. It's really not something we spend a great deal of time on, because that's a solved problem; that's a problem that was solved a long time ago. The number one thing is getting
π©βπ€ Ate-A-Pi: Right, right, right. So you have a bunch of other tech, which basically detects the REM, which gets you into the lucid state. And what you're focused on is: you have an EEG coming in, and you need to produce a target transcranial focused ultrasound map that your transducers are gonna take and implement.
Is that an accurate characterization?
π§ wes: Yeah, yes.
π©βπ€ Ate-A-Pi: Right. So when you start off, do you do the actual work, as in, like, OK, I'm going to take this EEG, I'm going to produce a transcranial focused ultrasound, I'm going to actually deploy it, and I'm going to actually do the reading. Did you actually do the data collection, or did you start off with an open source data set where other people have done it, and you're just trying to produce the output first? Like, what is the start of this process for you?
π§ wes: Yeah. So the start is that we grab an open source dataset that is simultaneous EEG and fMRI. We do a number of pre-processing techniques on the fMRI to basically find targets of heightened activity. So, you know, we mask the prefrontal cortex and we look for which voxels, which 3D pixels, are in a heightened state, right?
π©βπ€ Ate-A-Pi: I see.
π©βπ€ Ate-A-Pi: Mm-hmm.
π§ wes: Intuitively, right, that heightened activity is in some way correlated with what's being read in the EEG. So when you have the simultaneous EEG/fMRI, what you're able to do is say, okay, at this timestamp, this fMRI scan was run, and at the same time, this sequence of EEG signals was collected, right?
π©βπ€ Ate-A-Pi: Mm-hmm.
π§ wes: And what you basically do is, well, you feed this to the model and you basically ask, okay, what are the patterns between this EEG data and the fMRI data, both spatially and temporally? And then from that, you're really approximating something like a model of what the prefrontal cortex looks like
π©βπ€ Ate-A-Pi: Right.
π©βπ€ Ate-A-Pi: Right.
π§ wes: in a state of heightened activity, right? And so the goal is, well, how do you get a transformer to output instructions to the TFUS to bring the prefrontal cortex into a heightened state? You know, the prefrontal cortex is the key in all of this, right? And if you think of your prefrontal cortex when you are asleep, in a deep sleep, everything's slowing down; there's very little activity. But right now, you know,
π©βπ€ Ate-A-Pi: Mm-hmm.
π©βπ€ Ate-A-Pi: Mm-hmm.
π©βπ€ Ate-A-Pi: Right.
π§ wes: As we're having this conversation, our prefrontal cortexes are quite active. And so the delta between those states, right, is what we attempt to model: how do you pull someone upward into that heightened state?
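As a rough picture of the pairing Wes describes, here is a minimal sketch, with assumed shapes, threshold, and mask, of aligning EEG windows to fMRI volumes and pulling out "heightened" prefrontal voxels; it is not Prophetic's actual pipeline:

```python
import numpy as np

TR = 2.1         # seconds per fMRI volume (the figure quoted later in the talk)
EEG_RATE = 1000  # Hz

def pair_eeg_with_fmri(eeg, bold, pfc_mask, z_thresh=1.5):
    """Build (EEG window, heightened PFC voxel coords) pairs per fMRI volume."""
    samples_per_tr = int(TR * EEG_RATE)  # 2,100 EEG samples per volume
    pairs = []
    for t in range(bold.shape[0]):
        window = eeg[:, t * samples_per_tr:(t + 1) * samples_per_tr]
        # "Heightened" voxels: PFC voxels whose (z-scored) activity is high.
        active = np.argwhere(pfc_mask & (bold[t] > z_thresh))
        pairs.append((window, active))
    return pairs

# Toy shapes: 10 volumes of 8x8x8 BOLD, 32-channel EEG covering the same 21 s.
bold = np.random.randn(10, 8, 8, 8)
pfc = np.zeros((8, 8, 8), dtype=bool)
pfc[5:, :, :] = True  # stand-in prefrontal mask
eeg = np.random.randn(32, 10 * 2100)
print(len(pair_eeg_with_fmri(eeg, bold, pfc)))  # 10 training pairs
```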
π©βπ€ Ate-A-Pi: Right. So when you have this data set, is it a data set of EEG/fMRI while transcranial ultrasound is being applied? No. But it is a data set of people experiencing lucidity in dreams? Yes? No.
π§ wes: No.
π§ wes: No, it's someone in a waking state; their prefrontal cortex is active, right? And the goal of the transformer is to model what that waking-state prefrontal cortex looks like, only the prefrontal cortex, and then, how do we build a transformer where the TFUS can bring someone to a state where that prefrontal cortex is active?
π©βπ€ Ate-A-Pi: Mm-hmm.
I see.
π©βπ€ Ate-A-Pi: Right.
π©βπ€ Ate-A-Pi: Right. So what is the second step of that? Yeah. Great.
π Eric Wollberg: Let me just jump in for one clarifying point. We have a collaboration with the Donders Institute, which is probably the top lucid dreaming lab in the world, led by a gentleman named Dr. Martin Dresler, whose work in 2012, 2014 was critical in establishing the neural correlates of lucid dreaming and so on.
π§ wes: Yeah.
π Eric Wollberg: We are doing the largest neuroimaging aggregation of lucid dreaming data ever done, and we do about four of these a week right now, okay? Simultaneous EEG/fMRI. We have some data from them, primarily EEG right now. We should be getting our first data set from them... Wes, you should probably mute, just because I think we're getting echo.
π§ wes: Thanks.
π©βπ€ Ate-A-Pi: Right.
π©βπ€ Ate-A-Pi: Right. Yeah.
π Eric Wollberg: We're getting the first data from them, probably maybe today or tomorrow, actually; I was just on the phone with Dr. Dresler earlier today. And so I wanna be clear: what Wes is talking about is that the model that we showcased a couple of weeks ago is trained on open source waking-state data, where that prefrontal cortex activation, stimulation, can be done. It's also supplemented with EEG data of lucidity, okay?
π©βπ€ Ate-A-Pi: Right.
π©βπ€ Ate-A-Pi: Yeah.
π©βπ€ Ate-A-Pi: All right.
π Eric Wollberg: But the simultaneous EEG/fMRI data set, we're only just starting to add into the training data now, in the coming weeks and months, and we'll continue to add to that. So that was just the one clarifying point that I wanted to make.
π©βπ€ Ate-A-Pi: All right. Right.
π©βπ€ Ate-A-Pi: No, I mean, and just to be, you know, what I always find is that, you know, whenever we try to implement machine learning or AI, we always end up with insufficient data for the actual thing that we're trying to do, right? And we always end up with like trying to find proxies because you need like a large amount of data at first to get like some initial result, which allows you to go forward and, you know, then obtain like specific data on certain things, right?
So I'm just trying to understand how that worked in the beginning stages, right? Which is where you need this initial signal to be like, oh, you know what, this is worth our time and effort, in order to put something out to actually collect and annotate much more granular and sophisticated data, which is a huge effort on its own, right? So I was just trying to capture
how that would have felt at the early stages of the company, which obviously you've progressed far beyond at this stage. So you have this EEG/fMRI dataset at the very beginning. You have the targets that are being generated. Then from those targets, you generate a transcranial ultrasound map. The next step of that is the generation of
the TFUS map, right? Like, the targets are identified in the fMRI, and then you have a build-out of your TFUS, and basically that TFUS is targeting those areas. Is that correct? Is that an accurate characterization?
π§ wes: Yeah, yeah.
π©βπ€ Ate-A-Pi: Okay, so then you basically have these small pulses that go in, and then you have another EEG reading, right? The next EEG reading comes out, and then you compare, right? You compare how close you got to where it needs to go, and then you modify the TFUS again, and then you send in another pulse. Is that accurate?
π§ wes: Yeah, the EEG that is read from the headband prompts the model and the output is then stimulated.
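A minimal sketch of that loop structure, with stand-in functions for the headband, model, and transducer; these interfaces are hypothetical, not an actual Prophetic API:

```python
import numpy as np

rng = np.random.default_rng(0)

def read_eeg():
    """Stand-in for a headband read: one second of 8-channel, 1 kHz EEG."""
    return rng.standard_normal((8, 1000))

def model_step(eeg_window):
    """Stand-in for the model: EEG in, stimulation instructions out."""
    return {"focus_xyz_mm": (10.0, 0.0, 50.0), "pulses_per_s": 40}

def emit_tfus(instructions):
    """Stand-in for driving the phased array with those instructions."""
    pass

# Read the brain, generate instructions, stimulate, listen again; the
# before/after delta is the feedback that shapes the next cycle's targets.
for _ in range(3):
    before = read_eeg()
    emit_tfus(model_step(before))
    after = read_eeg()
    response = np.abs(after - before).mean()  # crude "did the brain respond?"
```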
π©βπ€ Ate-A-Pi: Yeah.
π©βπ€ Ate-A-Pi: So how many times a second are you taking these readings, or what is the kind of resolution that you have on this? Are you taking a reading 10 times a second and then the ultrasound is at 60 hertz? What are the numbers that we're talking about here?
π§ wes: I mean, the EEG right now samples at 1,000 Hz. You know, do you need all that, right? You can downsample it; you could do that to reduce complexity if there's a lot of redundancy. So, I mean, it could be anywhere from 256 samples to 1,000, but right now it's 1,000.
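On the downsampling point, a small sketch using SciPy's polyphase resampler; going from 1,000 Hz to 256 Hz is an exact 32/125 rate change:

```python
import numpy as np
from scipy.signal import resample_poly

fs_in, fs_out = 1000, 256             # headband rate down to a leaner 256 Hz
eeg = np.random.randn(8, fs_in * 10)  # 10 s of synthetic 8-channel EEG

# resample_poly applies an anti-aliasing filter, then resamples by 32/125,
# since 1000 Hz * 32 / 125 = 256 Hz exactly.
eeg_ds = resample_poly(eeg, up=32, down=125, axis=1)
print(eeg_ds.shape)  # (8, 2560)
```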
π©βπ€ Ate-A-Pi: Right.
π©βπ€ Ate-A-Pi: Right.
π©βπ€ Ate-A-Pi: Right now it's 1,000. And then the ultrasound, what kind of frequency are we talking about there?
π§ wes: So, I mean, there's a few different frequencies, right? There's duty cycle, which is like a proxy for... so there's a few things. There's the frequency that the ultrasound operates at, call that 500 kilohertz, and then there's the pulses themselves. And our goal is gamma, and so that's probably gonna be around 40 pulses per second.
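The arithmetic behind those two timescales, with the duty cycle filled in as an assumed value since none is quoted:

```python
carrier_hz = 500_000  # the ultrasound operating frequency mentioned above
prf_hz = 40           # pulse repetition frequency, targeting the gamma band
duty_cycle = 0.3      # fraction of each period spent sonicating (assumed value)

period_s = 1 / prf_hz                           # 0.025 s between pulse onsets
on_time_s = duty_cycle * period_s               # 0.0075 s of sonication per pulse
cycles_per_pulse = int(on_time_s * carrier_hz)  # 3,750 carrier cycles per pulse
print(period_s, on_time_s, cycles_per_pulse)
```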
π©βπ€ Ate-A-Pi: Yeah.
π©βπ€ Ate-A-Pi: 40 pulses per second. So each of those 40 pulses could be slightly different, as in slightly modified, to get you to where you need to go. Is that correct?
π§ wes: Certainly, but I mean, I would say the primary adjustment that's made, you know, per loop cycle is really the spatial targets more than anything.
π©βπ€ Ate-A-Pi: The spatial, sorry.
π§ wes: the spatial targets as opposed to the temporal.
π©βπ€ Ate-A-Pi: I see, I see. So more on like which location in the brain is being targeted rather than the timing. I see, I see. And so, yeah.
π§ wes: Yeah, I mean, I think they're both critical, but the spatial is the most important. I would put that as a priority.
π©βπ€ Ate-A-Pi: Right. And that brings up the other thing, which is: even if the Halo is situated slightly differently each time, you still take a reading of the EEG in real time, so you can adjust the spatial targets accordingly. Is that correct?
So the next question I have is how much of it is personalized? Because how much difference is there from person to person that you have to adjust for, that the device or the model has to adjust for in real time?
π§ wes: Yeah. So the next big thing that we're working on right now is the reinforcement learning layer of all of this. So there's two types of feedback that the model will receive, right? One is the user's explicit feedback. For example, even if they're awake, they run a sequence and they give some feedback, like, I didn't feel anything, maybe on a scale of one to ten, or, I felt something very significant, you know, so on and so forth.
π©βπ€ Ate-A-Pi: Yeah.
π©βπ€ Ate-A-Pi: Mm-hmm.
π§ wes: They've explicitly ranked a given sequence, or they've explicitly ranked an experience after they woke up and given feedback. So that's number one, right? Number two is the neurofeedback, right? Given the TFUS pulses, what is the response in the brain?
π©βπ€ Ate-A-Pi: Mm.
π©βπ€ Ate-A-Pi: Right.
π§ wes: There's a number of ways to measure, but you can sort of think of it like you're pinging the brain and listening to the responses, right? And so, you know, spikes in gamma, general patterns between the different electrode placements, that's really informative for the model. Because what you can then do is measure on a continuous scale what
π©βπ€ Ate-A-Pi: Mm-hmm.
π§ wes: is working, right? You can do a sort of binary classification, in terms of did this work or not, on user feedback, but you can take this continuous feedback and provide it as rewards or penalties to the given sequences that the user experienced. And then from that, learn what's hitting with a particular person. In a similar way, you can think of a TikTok feed or Twitter feed:
π©βπ€ Ate-A-Pi: Yep.
π©βπ€ Ate-A-Pi: Mm-hmm.
π©βπ€ Ate-A-Pi: All right.
π§ wes: as the user gives feedback, maybe explicitly in the form of a like, or maybe through some continuous factor like the time they viewed a particular tweet or video, the model can learn what the preferences are, what the things are, in terms of content, which are just tokens, that work for a particular person, and from that, the model will learn over time.
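One way to picture blending those two feedback channels into a single reward per sequence; the weights and the 1-10 rescaling here are illustrative assumptions, not Prophetic's actual reward function:

```python
def sequence_reward(explicit_rating, gamma_before, gamma_after,
                    w_explicit=0.5, w_neuro=0.5):
    """Blend explicit user feedback and neurofeedback into one scalar reward.

    explicit_rating: the user's 1-10 rating, or None if they gave none.
    gamma_before/gamma_after: gamma-band EEG power around the TFUS sequence.
    """
    neuro = gamma_after - gamma_before        # did the "ping" move the brain?
    if explicit_rating is None:
        return w_neuro * neuro
    explicit = (explicit_rating - 5.5) / 4.5  # map 1-10 onto roughly [-1, 1]
    return w_explicit * explicit + w_neuro * neuro

print(sequence_reward(8, gamma_before=1.0, gamma_after=1.4))
```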
π©βπ€ Ate-A-Pi: Right on. Yeah, go ahead.
π Eric Wollberg: Let me just, because I always like to let Wes give the super technical version, and I know you have a very informed audience, but just in the sense of a layman explanation, right? First of all, we only focus on training models on brain states that are both discrete and universal. To define terms: when I say discrete, I mean it is one thing and not another. A counterexample being, for example, unfortunately,
π©βπ€ Ate-A-Pi: Mm-hmm.
π©βπ€ Ate-A-Pi: Mm-hmm.
π Eric Wollberg: flow states, right? So flow states are not discrete. You can be a surfer and enter a state of flow, and what triggered that is your motor cortex, right? Versus, like, a chess player can enter a state of flow and it's their spatial reasoning. So you would just need more data to create a generalized model. But lucid dreaming is discrete and universal: it is prefrontal cortex activation during REM, whether you're in a lucid dream or Wes is in a lucid dream or I'm in a lucid dream,
π©βπ€ Ate-A-Pi: Mm-hmm.
π©βπ€ Ate-A-Pi: Mm-hmm.
π©βπ€ Ate-A-Pi: Right.
π©βπ€ Ate-A-Pi: Right.
π Eric Wollberg: which makes it easier to make these generalizable models. And also I should mention, I mean, we already just covered the sampling rate of EEG, but some of these EEG/fMRI data sets are extraordinarily information dense, right? To give you a sense: a spatial reading in fMRI happens once every 2.1 seconds, and in that same period you'll have 2,100 EEG samples. So it's extremely information dense.
π©βπ€ Ate-A-Pi: All right.
π©βπ€ Ate-A-Pi: Yep.
π©βπ€ Ate-A-Pi: Right.
π Eric Wollberg: And so broadly, how you could think about this is: you're creating this kind of vector space of possible lucid dreaming sequences, right? And so on and so forth. But, you know, you might be over here, and your reinforcement learning will drive the model to really focus on targets around there, whereas Wes or I might be in other areas of that space. So that's just the important kind of, I don't know, layman explanation that I would give.
π©βπ€ Ate-A-Pi: Yeah.
π©βπ€ Ate-A-Pi: Mm-hmm.
π©βπ€ Ate-A-Pi: Oh, absolutely. So let's say, what is your target for the first run of the Halo? Is it 10,000 devices? Is it more? What is the rough number that you're coming up with?
π Eric Wollberg: Yeah, you know, we have a reservation program that I started because I talked to a lot of consumer hardware founders, and they said, listen: one, obviously, de-risk demand; but also, for the go-to-manufacturing motion, it's really great if you have an order book that you can use to approach better manufacturing partners, because
π©βπ€ Ate-A-Pi: Yeah.
π Eric Wollberg: you know, the bane of all consumer hardware companies, and really hardware companies broadly, but particularly consumer hardware companies, is these small-batch, medium-batch manufacturers. You order a thousand, five hundred of them don't work, you're like, hey, these don't work; no, you will send us more money; now 250 don't work, etc. It's like death by a thousand paper cuts. And so we've done $2.4 million of booked revenue through that reservation program. I want to be very clear:
π©βπ€ Ate-A-Pi: Right.
π©βπ€ Ate-A-Pi: Yeah.
π Eric Wollberg: that money's held in a separate bank account. We do not use it for development. It's fully refundable at any time. But I think it shows a good sense of demand. I think in terms of what we want the first run to look like, it's very much gonna be reflective of where that reservation list is at the time. Certainly anyone who puts down a reservation should be getting a device, because they took a bet on us when, in some of these cases, we were nothing but a website and a dream, not to be too, you know,
π©βπ€ Ate-A-Pi: Yep.
π©βπ€ Ate-A-Pi: Hmm.
I see.
π©βπ€ Ate-A-Pi: Right.
π©βπ€ Ate-A-Pi: Right, right, right.
π Eric Wollberg: on the nose. But, you know, that's really the critical thing: building up this reservation program. We've spent $0 on marketing so far. And so, you know, I think what this shows is there's enormous demand for this; we have to make it work. We're starting neurostimulation in the spring; we have a beta user program that we launched alongside showcasing the model, and we got like 3,500 people to sign up for that in five days or something like that.
π©βπ€ Ate-A-Pi: Right, right.
π©βπ€ Ate-A-Pi: Right.
π©βπ€ Ate-A-Pi: Yep.
π©βπ€ Ate-A-Pi: Yeah.
π Eric Wollberg: And so we'll go through the process of selecting those people for the beta user program. But anyway, in terms of what the initial run will look like, it'll be very dependent on that reservation list.
π©βπ€ Ate-A-Pi: Right.
π©βπ€ Ate-A-Pi: I mean, I just wanted to draw a metaphor. Let's say, I'm just gonna put a number on it, let's say 10K, right? Let's say you have 10K devices; how much would that expand your... because I imagine you can use the EEG data coming out to improve your models, right?
So how does that model feedback loop look from the initial devices? Like, if you have, let's say, 10,000 devices... because I see this all the time, there's this graph that people have of the day the iPhone went out, and like 10 years later, we probably create as many photos in a day as
in the entire history of humanity before the iPhone, right? And so I wonder if putting this device out there expands the data set of EEGs so dramatically, so, so dramatically, that you can have this enormously larger data set. Like, what would having 10,000 of these devices in the market do? Do you think that would expand your data set significantly for future improvements?
π§ wes: Yeah. So that goes back to the reinforcement layer, which really only works at scale; it works significantly better at scale, I should say, right? Because the Halo is a neuroimaging device, right, as well as a neurostimulation device. And so what we're able to do is look at, you know, basically,
π©βπ€ Ate-A-Pi: Right.
π©βπ€ Ate-A-Pi: Right.
π§ wes: what are the responses you're getting from this TFUS, and those responses are measured with EEG. And that's really informative, right? You can learn quite a bit from what works and what doesn't. And so while, as I was saying, starting with this open source data set, EEG and fMRI, you know, we can get terabytes of data that way, it's really when you start getting into hundreds of terabytes or petabytes of
π©βπ€ Ate-A-Pi: Mm-hmm.
π©βπ€ Ate-A-Pi: Mm-hmm.
π©βπ€ Ate-A-Pi: Right.
π©βπ€ Ate-A-Pi: Yeah.
Right.
π§ wes: data, and that's really going to come from people using this every night, where we're collecting, on a nightly basis, gigabytes' worth of neuroimaging data and the responses specifically from TFUS sequences. You're pinging the brain and basically seeing how it reverberates, and we're collecting that, and it's this continuous thing, over and over again. So it would be a very significant amount of data.
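Back-of-the-envelope math on that, with the per-device nightly volume as an assumed figure:

```python
devices = 10_000    # the hypothetical first-run figure floated above
gb_per_night = 2.0  # assumed nightly volume per device ("gigabytes worth")
nights = 365

petabytes_per_year = devices * gb_per_night * nights / 1_000_000
print(f"{petabytes_per_year:.1f} PB/year")  # ~7.3 PB at these assumptions
```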
π©βπ€ Ate-A-Pi: Yeah.
π©βπ€ Ate-A-Pi: All right.
π©βπ€ Ate-A-Pi: Right.
π©βπ€ Ate-A-Pi: Right. Yeah, yeah, I can imagine. Yeah, good.
π Eric Wollberg: And one thing I also want to talk about is not only scaling up the reinforcement side, but also the initial training data aggregation. So the day after we showcased the model, we released a piece that we had a good response to, but I don't think people really realize how profound it potentially is. We call it the Qualia Factory. Realistically, that's a really cool marketing name for what is a neuroimaging lab,
primarily set up with simultaneous EEG/fMRI neuroimaging setups. And so you can imagine where we can actually also expand our training dataset, where we bring in people who are extraordinary at, whether it's lucid dreaming or meditating or focus or positive mood, et cetera, the entire state space of potentially discrete, universal, but even potentially, because we have the scale now, non-discrete,
π©βπ€ Ate-A-Pi: Yeah.
π©βπ€ Ate-A-Pi: All right.
π Eric Wollberg: universal brain states, and aggregating more and more of this data to train successive models, where you're creating models that can do more and more different experiences with the same piece of hardware. We do aim to hopefully launch the device with more than just lucid dreaming as an experience: definitely focus, and hopefully also positive mood. So, you know, that's also a really critical way you scale up data, on the training data sets,
π©βπ€ Ate-A-Pi: Yep.
π Eric Wollberg: using this quality of factory, which is really a larger neuroimaging lab built for purpose.
π©βπ€ Ate-A-Pi: So let me, maybe I kind of missed that earlier. The core thing that you're looking at, kind of on your product roadmap, is discrete universal states, as you call them. And of the, again, I think perhaps not everyone is super familiar, but of the discrete universal states, you're saying focus is one of them,
and positive mindset is another? Is that positive mood? Is that another?
π Eric Wollberg: Positive mood, yeah. So TFUS has already been used to induce focus and positive mood; these things have already been validated in the context of utilization with TFUS. So in many ways, we're also just broadly recreating that. But yeah, I mean, listen, we think that lucid dreaming is the killer app for this technology from a consumer experience perspective. It is the most profound, extraordinary experience that we think we can give, and so that's why it's our focus.
π©βπ€ Ate-A-Pi: Yeah, positive mood.
π©βπ€ Ate-A-Pi: Mm-hmm.
π Eric Wollberg: But you want to create a system where, over time, you can make a more and more generalizable device, or generalized device, where the number of experiences the same device can give you increases over time. So we aim to have kind of three core experiences released when the device ships. It obviously also increases the TAM of the device; people like to be focused when they do work.
π©βπ€ Ate-A-Pi: Right.
π©βπ€ Ate-A-Pi: Yeah.
π Eric Wollberg: So maybe you can wear it while you're jamming out a spreadsheet or something, I don't know, whatever you're doing for work. And with positive mood, people want to feel happy, and so on. So that's kind of the impetus; that's what's really critical about the model. Not only does it create this closed-loop system that creates these targets and reliably creates these experiences,
π©βπ€ Ate-A-Pi: All right.
π Eric Wollberg: but also allows you to onboard more experiences to the system over time.
π©βπ€ Ate-A-Pi: All right.