EB-2: Prophetic - Transcript - Part 2

Transcript: Prophetic - Part 2

[29:52.702]🌌 Eric Wollberg: It's not really the job of venture capitalists to underwrite research, right? It's more the universities and governments. And I firmly felt it was an R. So I kind of tabled it, but kept very close to both the neuroscience, the neurostimulation, and then later the machine learning architectures. And it wasn't until 2022, when I found transcranial focused ultrasound and the neural transformer, that I firmly felt we had entered D. Now, let me just talk a little bit in particular about

[29:57.422]πŸ‘©β€πŸŽ€ Ate-A-Pi: Right.

[30:10.144]πŸ‘©β€πŸŽ€ Ate-A-Pi: Yep.

[30:16.687]πŸ‘©β€πŸŽ€ Ate-A-Pi: Mm-hmm.

[30:20.918]🌌 Eric Wollberg: TFUS, as it compares to the previous neurostimulation modalities, going back to those three core limiting factors. Depth is centimeters into the brain, non-invasively. Precision is millimeters, so you're going from no precision to millimeters. It's not orders of magnitude, this is a paradigm shift. And then three is the ability to steer these millimeter pulses in three-dimensional space. This is critical, right, because your brain fires in three-dimensional neural firing patterns. And so,

[30:27.149]πŸ‘©β€πŸŽ€ Ate-A-Pi: Yeah.

[30:33.053]πŸ‘©β€πŸŽ€ Ate-A-Pi: Yeah.

[30:37.939]πŸ‘©β€πŸŽ€ Ate-A-Pi: Mm-hmm.

[30:49.514]🌌 Eric Wollberg: That paired with the neural transformer architecture, which we should definitely dive deeper into and Wes can go deep on that. We built that architecture from the ground up. Those two things paired together were what gave me the confidence that we had firmly entered D and to eventually start the company. And what was amazing is that I found Wes, who was already working on using neural transformers for a variety of different applications in neurotech.

[30:49.772]πŸ‘©β€πŸŽ€ Ate-A-Pi: Right.

[31:20.191]🌌 Eric Wollberg: And was also, you know, just started going down the ultrasound rabbit hole. And so lucky, lucky for us to have found each other.

[31:30.243]πŸ‘©β€πŸŽ€ Ate-A-Pi: So just one more question on the hardware. When did all of this hardware start to kind of make sense in the package that it does? Because I imagine EEGs at one point were very, very big machines. And ultrasound was something that you have a doctor uses it on a patient for a baby or whatever. It's something you hold in your hand. It's not really small enough to put on a headband. So when did these pieces of tech get small enough for this to start to make sense?

[31:41.835]🌌 Eric Wollberg: Yeah.

[31:58.89]🌌 Eric Wollberg: Yeah, great question. So, you know, TFUS for neuromodulation actually started around 2004. And through the mid-to-late 2000s and the 2010s, you know, you just saw, again, this really happening in research institutions, etc., where they're increasing the element count. Elements are, you know, piezoelectric material, right, a crystal, for example, where you run electrical current through it and it oscillates, creating an ultrasonic wave.

[32:25.784]πŸ‘©β€πŸŽ€ Ate-A-Pi: Yeah.

[32:28.566]🌌 Eric Wollberg: The more elements you have, the more capable you are of both moving that focus around and phasing it, and the better the precision. And we kind of note that there are actually these curves that we observed. I named it Wes's Law because Wes has too much humility to name it after himself. But you're seeing very similar kinds of curves, a la Moore's Law,
where the number of elements on a given transducer is increasing over time while the size of the transducers comes down. And so, you always wanna be in a place, right, in hardware in particular, where you're riding some kind of curve, right? And you're seeing that happening in transducers, where the cost and so on and so forth are decreasing over time as the element counts go up. And so,
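
As a rough illustration of why element count matters for steering, here is a minimal phased-array sketch: each piezoelectric element fires with a delay chosen so the wavefronts arrive at the focal point in phase, and more elements give finer control of that focus in 3D. The geometry, function name, and speed-of-sound value are illustrative assumptions, not Prophetic's parameters.

```python
# Hypothetical phased-array focusing sketch: compute per-element firing delays so that
# all wavefronts arrive at the focal point at the same time (in phase).
import numpy as np

def focusing_delays(element_positions, focus_point, speed_of_sound=1500.0):
    """element_positions: (n, 3) in meters; focus_point: (3,); returns per-element delays in seconds."""
    distances = np.linalg.norm(element_positions - focus_point, axis=1)
    travel_times = distances / speed_of_sound     # ~1500 m/s in soft tissue (assumed)
    return travel_times.max() - travel_times      # farthest element fires first (zero delay)
```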

[33:09.167]πŸ‘©β€πŸŽ€ Ate-A-Pi: All right.

[33:16.815]πŸ‘©β€πŸŽ€ Ate-A-Pi: All right.

[33:24.03]🌌 Eric Wollberg: That was a really critical thing that was done, you know, over the course of what's now, you know, probably 20 years where you now see this technology really capable of being, you know, commercialized.

[33:30.432]πŸ‘©β€πŸŽ€ Ate-A-Pi: Right.

[33:35.871]πŸ‘©β€πŸŽ€ Ate-A-Pi: So increasing resolution is always nice, right? If you get a boost from a tailwind from the increasing resolution. Okay, so let's get to the build out of the neural model. I mean, when we build kind of like machine learning models, we always start with a messy, large messy data set, hopefully, hopefully not a small one. So talk about like the first kind of like,
alpha version of the neural model. Like, where did the data come from? What were you trying to do? What was the test that you did that was like, okay, you know what, it's worth putting more time and effort into this.

[34:21.117]🧠 wes: Yeah. So I'll start with the V1 of Morpheus. What we did is we basically sourced data from open source data sets, alongside some lucid dreaming data from Donders, but primarily the data set is really built off of what we found open source. And so the idea.

[34:36.577]πŸ‘©β€πŸŽ€ Ate-A-Pi: Okay.

[34:48.291]πŸ‘©β€πŸŽ€ Ate-A-Pi: So maybe just to take a step back, what is the first thing that you're trying to detect there? Are you trying to detect the entry into REM? What's the first question that you have that you need to solve? Are you looking at the EEGs and then you're trying to classify, build a classifier to figure out whether it's entering REM? Is that the first thing? Or are you trying to build a classifier to detect the lucidity? What are you trying to detect from that data that you first have?

[35:19.401]🧠 wes: The goal of Morpheus-1 is to be given a particular brain state and continuously output TFUS instructions to get a response in the brain. That's the number one goal. We're not doing a significant amount of classification. That's the number one goal of the model. The REM stuff, that's, you know, we leave that to other

[35:37.923]πŸ‘©β€πŸŽ€ Ate-A-Pi: Right.

[35:43.113]πŸ‘©β€πŸŽ€ Ate-A-Pi: Right.

[35:48.321]🧠 wes: techniques. It's really not something we spend a great deal of time on, because that's a solved problem. That's been solved a long time ago. The number one thing is getting

[35:59.495]πŸ‘©β€πŸŽ€ Ate-A-Pi: Right, right, right. So you have a bunch of other tech, which basically detects the RAM, which gets you into the lucid state. And what you're focused on is you have a EEG coming in, and you need to produce a target trans-cranial ultrasound, focus ultrasound kind of like map that your transducers are gonna take and implement. Is that?
Is that an accurate characterization?

[36:29.801]🧠 wes: Yeah, yes.

[36:31.263]πŸ‘©β€πŸŽ€ Ate-A-Pi: Right. So when you start off, do you do the actual work, as in, like, do you actually like, OK, I'm going to take this EEG. I'm going to produce a transcranial focus ultrasound. I'm going to actually deploy it. And I'm going to actually do the reading. Did you actually do the data collection, or do you start off with an open source data set, which where other people have done it, and you're just kind of trying to produce the output first? Like, what is the start of this process for you?

[37:00.765]🧠 wes: Yeah. So the start is that we grab an open source dataset that is simultaneous EEG and fMRI. We do a number of pre-processing techniques on the fMRI to basically find targets of heightened activity. So, you know, we mask the prefrontal cortex and we look for what voxels, what 3D pixels, exist that are in a heightened state, right?

[37:08.088]πŸ‘©β€πŸŽ€ Ate-A-Pi: I see.

[37:27.279]πŸ‘©β€πŸŽ€ Ate-A-Pi: Mm-hmm.

[37:29.049]🧠 wes: Intuitively, right, that heightened activity is in some way correlated with what's being read in the EEG. So when you have the simultaneous EEG-fMRI, what you're able to do is say, okay, at this timestamp, this fMRI scan was run, and at the same time, this sequence of EEG signals was collected, right?

[37:41.813]πŸ‘©β€πŸŽ€ Ate-A-Pi: Mm-hmm.

[37:56.785]🧠 wes: And what you basically say is, well, you feed this to the model and you basically say, okay, what are the patterns that exist between this EEG data and the fMRI data, both spatially and temporally? And then from that, you're really approximating something like, what does a model of the prefrontal cortex look like

[37:56.865]πŸ‘©β€πŸŽ€ Ate-A-Pi: Right.

[38:09.215]πŸ‘©β€πŸŽ€ Ate-A-Pi: Right.

[38:25.605]🧠 wes: of heightened activity, right? And so the goal being, well, how do you get a transformer to output instructions to the TFUS to bring the prefrontal cortex into a heightened state? You know, the prefrontal cortex is the key in all of this, right? And if you think of your prefrontal cortex when you are asleep, in a deep sleep, right? You know, everything's slowing down. There's very little activity. But right now, you know,

[38:27.727]πŸ‘©β€πŸŽ€ Ate-A-Pi: Mm-hmm.

[38:34.433]πŸ‘©β€πŸŽ€ Ate-A-Pi: Mm-hmm.
Right.

[38:40.822]πŸ‘©β€πŸŽ€ Ate-A-Pi: Right.

[38:53.493]🧠 wes: As we're having this conversation, our prefrontal cortexes are quite active. And so the delta between those states, right, is what we attempt to model, and how do you pull someone upward into a heightened state?
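
A rough sketch of the pairing Wes describes: align each fMRI volume with the EEG window recorded over the same interval, then keep only the prefrontal-cortex voxels whose activity is heightened as stimulation targets. The shapes, threshold, and names below are illustrative assumptions, not Prophetic's pipeline.

```python
# Hypothetical preprocessing sketch: pair each fMRI volume with its simultaneous EEG window
# and extract "heightened" prefrontal-cortex voxels as 3D targets.
import numpy as np

def build_training_pairs(eeg, fmri, pfc_mask, eeg_rate=1000, tr_seconds=2.1, z_thresh=1.5):
    """eeg: (channels, samples); fmri: (volumes, x, y, z); pfc_mask: (x, y, z) boolean."""
    samples_per_volume = int(eeg_rate * tr_seconds)   # e.g. 2,100 EEG samples per fMRI volume
    pairs = []
    for v in range(fmri.shape[0]):
        # EEG window recorded while this fMRI volume was acquired
        start = v * samples_per_volume
        eeg_window = eeg[:, start:start + samples_per_volume]
        # z-score the volume and keep prefrontal voxels in a heightened state
        vol = fmri[v]
        z = (vol - vol.mean()) / (vol.std() + 1e-8)
        target_voxels = np.argwhere((z > z_thresh) & pfc_mask)  # 3D coordinates of active voxels
        pairs.append((eeg_window, target_voxels))
    return pairs
```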

[39:06.239]πŸ‘©β€πŸŽ€ Ate-A-Pi: Right, so when you have this data set, is it a data set of EEG fMRI while transcranial ultrasound is being applied? No. But it is a data set of people experiencing lucidity in dreams. Yes? No.

[39:16.19]🧠 wes: No.

[39:21.609]🧠 wes: No, it's someone in waking state, their prefrontal cortex is active, right? And the goal of the transformer is to then model what does that waking state prefrontal cortex look like, only the prefrontal cortex. And then how do we build a transformer where the TFUS can bring someone to a state where that prefrontal cortex is active?

[39:27.215]πŸ‘©β€πŸŽ€ Ate-A-Pi: Mm-hmm.
I see.

[39:37.537]πŸ‘©β€πŸŽ€ Ate-A-Pi: Right.

[39:44.863]πŸ‘©β€πŸŽ€ Ate-A-Pi: Right. So what is the second step of that? Yeah. Great.

[39:48.152]🌌 Eric Wollberg: Let me just jump in for one, for a clarifying point. We have a collaboration with the Donders Institute, which is probably the top lucid dreaming lab in the world, led by a gentleman named Dr. Martin Dresler, whose work in 2012, 2014 was critical in establishing the neural correlates of lucid dreaming and so on.

[39:54.806]🧠 wes: Yeah.

[40:15.358]🌌 Eric Wollberg: We are doing the largest neuroimaging aggregation of lucid dreaming data ever done. And we do about four of these a week right now, okay? Simultaneous EEG-fMRI. We have some data from them, primarily EEG right now. We should be getting our first data set, for example, from them. Wes, you should probably mute, just because I think we're in echo.

[40:16.909]🧠 wes: Thanks.

[40:25.663]🧠 wes: Right now.

[40:27.511]πŸ‘©β€πŸŽ€ Ate-A-Pi: Right.

[40:31.847]πŸ‘©β€πŸŽ€ Ate-A-Pi: Right. Yeah.

[40:43.85]🌌 Eric Wollberg: We're getting the first data from them, probably maybe today or tomorrow, actually. I was just on the phone with Dr. Dresler earlier today. And so I wanna be clear, what Wes is talking about is that the model that we showcased a couple of weeks ago is trained on open source waking state data where that prefrontal cortex activation, stimulation can be done. It's also supplemented with EEG data of lucidity, okay?

[40:52.364]πŸ‘©β€πŸŽ€ Ate-A-Pi: Right.

[41:01.858]πŸ‘©β€πŸŽ€ Ate-A-Pi: Yeah.

[41:09.923]πŸ‘©β€πŸŽ€ Ate-A-Pi: All right.

[41:10.07]🌌 Eric Wollberg: But the simultaneous EEG-fMRI data set, we're only just starting to add into the data set now, in the coming weeks and months, and we'll continue to add to that. So that was just the one clarifying point that I wanted to make.

[41:15.343]πŸ‘©β€πŸŽ€ Ate-A-Pi: All right. Right.
All right.

[41:23.851]πŸ‘©β€πŸŽ€ Ate-A-Pi: No, I mean, and just to be, you know, what I always find is that, you know, whenever we try to implement machine learning or AI, we always end up with insufficient data for the actual thing that we're trying to do, right? And we always end up with like trying to find proxies because you need like a large amount of data at first to get like some initial result, which allows you to go forward and, you know, then obtain like specific data on certain things, right?
So I'm just trying to understand how that worked in the beginning stages, right? Which is where you need this initial signal to kind of like, oh, you know what? This is worth our time and effort in order to put something in order to actually collect and annotate much more granular and sophisticated data, which is a huge effort on its own, right? So I was just trying to capture
how that would have felt at the early stages of the company, though obviously you've progressed far beyond that at this stage. So you have this EEG-fMRI dataset at the very beginning. You have the targets that are being generated. Then from those targets, you generate a transcranial ultrasound map that you need to target. The next step of that is the generation of
the TFUS map, right? Like, the targets are identified in the fMRI, and then you have a build-out of your TFUS, and that TFUS is basically targeting those areas. Is that correct? Is that an accurate characterization?

[43:13.661]🧠 wes: Yeah, yeah.

[43:16.019]πŸ‘©β€πŸŽ€ Ate-A-Pi: Okay, so, and then you have, and then you basically have like small pulses that go in, and then you have another EEG reading, right? The next EEG reading comes out, and then you compare, right? And you compare how close you got it to where it needs to go, and then you modify the TFUS again, and then you send in another pulse. Is that accurate?

[43:39.398]🧠 wes: Yeah, the EEG that is read from the headband prompts the model and the output is then stimulated.
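
A minimal sketch of that loop as described: the headband's EEG prompts the model, the model emits TFUS instructions (primarily spatial targets), the transducers fire, and the next EEG reading starts the cycle again. All object and method names here are hypothetical placeholders, not Prophetic's actual API.

```python
# Hypothetical closed-loop sketch: EEG in, TFUS instructions out, repeat.
def closed_loop_step(headband, model, transducer, window_ms=250):
    eeg_window = headband.read(window_ms)        # latest EEG from the headband
    instructions = model.generate(eeg_window)    # e.g. 3D focal targets plus pulse timing
    transducer.fire(instructions)                # steer focused-ultrasound pulses to those targets
    return instructions

def run_session(headband, model, transducer, steps):
    history = []
    for _ in range(steps):
        history.append(closed_loop_step(headband, model, transducer))
    return history
```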

[43:43.243]πŸ‘©β€πŸŽ€ Ate-A-Pi: Yeah.

[43:47.895]πŸ‘©β€πŸŽ€ Ate-A-Pi: So how many times a second are you taking these readings or what is the kind of resolution that you have on this? Are you taking a reading every 10 times a second and then the ultrasound is at 60 hertz? Or what are the numbers that we're talking about here?

[44:08.685]🧠 wes: I mean, the EEG right now samples at 1,000 Hz. You know, do you need all that, right? You can downsample it. You know, you could do that to reduce complexity if there's a lot of redundancy. So, I mean, it could be anywhere from 256 samples to, you know, 1,000, but right now it's 1,000.

[44:14.562]πŸ‘©β€πŸŽ€ Ate-A-Pi: Right.

[44:23.192]πŸ‘©β€πŸŽ€ Ate-A-Pi: Right.

[44:33]πŸ‘©β€πŸŽ€ Ate-A-Pi: Right now it's 1000, and then the ultrasound is at what kind of frequency are we talking about here?

[44:42.369]🧠 wes: So, I mean, there's a few different frequencies, right? There's duty cycle, which is like a proxy for... so there's a few things. There's the frequency that the ultrasound operates at, call that 500 kilohertz, and then there's the pulses themselves. And our goal is gamma, and so that's probably gonna be around 40 pulses per second.
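
Putting those numbers together as a back-of-the-envelope sketch, using the figures mentioned (a roughly 500 kHz carrier, about 40 pulses per second for gamma, EEG at 1,000 Hz):

```python
# Rough arithmetic on the cited figures; the relationships, not the exact values, are the point.
carrier_hz = 500_000          # ultrasound operating frequency (~500 kHz)
pulse_rate_hz = 40            # pulses per second, aiming at gamma-band entrainment
eeg_rate_hz = 1000            # EEG sampling rate; could be downsampled to e.g. 256 Hz

cycles_per_pulse_window = carrier_hz / pulse_rate_hz   # 12,500 carrier cycles per pulse window
eeg_samples_per_pulse = eeg_rate_hz / pulse_rate_hz    # 25 EEG samples between pulses
print(cycles_per_pulse_window, eeg_samples_per_pulse)  # 12500.0 25.0
```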

[45:01.836]πŸ‘©β€πŸŽ€ Ate-A-Pi: Yeah.

[45:09.207]πŸ‘©β€πŸŽ€ Ate-A-Pi: 40 pulses per second. So, and each pulse, each of those, you know, 40 pulses could be a different, kind of slightly different as in slightly modified to, you know, get you to where you need to go. Is that correct?

[45:23.105]🧠 wes: Certainly, but I mean, I would say the primary adjustment that's made, you know, per loop cycle is really the spatial targets more than anything.

[45:34.999]πŸ‘©β€πŸŽ€ Ate-A-Pi: The spatial, sorry.

[45:37.877]🧠 wes: the spatial targets as opposed to the temporal.

[45:40.987]πŸ‘©β€πŸŽ€ Ate-A-Pi: I see, I see. So more on like which location in the brain is being targeted rather than the timing. I see, I see. And so, yeah.

[45:50.013]🧠 wes: Yeah, I mean, I think they're both critical, but the spatial is the most important. I would put that as a priority.

[45:54.923]πŸ‘©β€πŸŽ€ Ate-A-Pi: Right. And that brings the other thing, which is even if the halo is slightly situated slightly differently each time, you still kind of take a reading of the EEG in real time. So you can adjust the spatial targets accordingly. Is that correct?
So the next question I have is how much of it is personalized? Because how much difference is there from person to person that you have to adjust for, that the device or the model has to adjust for in real time?

[46:32.169]🧠 wes: Yeah. So the next big thing that we're working on right now is the reinforcement learning layer of all of this. So, you know, there's two types of feedback that the model will receive, right? There's the user's explicit feedback: you know, for example, even if they're awake, they run a sequence and they give some feedback, like, I didn't feel anything, maybe on a scale of one to 10, you know, or, I felt something very significant, you know, so on and so forth.

[46:49.741]πŸ‘©β€πŸŽ€ Ate-A-Pi: Yeah.

[46:59.215]πŸ‘©β€πŸŽ€ Ate-A-Pi: Mm-hmm.

[47:02.325]🧠 wes: They've explicitly ranked a given sequence, or they've explicitly ranked an experience after they woke up and given feedback. So that's number one, right? Number two is the neurofeedback, right? Given the TFUS pulses, the sequences, what is the response in the brain?

[47:19.072]πŸ‘©β€πŸŽ€ Ate-A-Pi: Mm.

[47:28.685]πŸ‘©β€πŸŽ€ Ate-A-Pi: Right.

[47:28.725]🧠 wes: You know, there's a number of ways to measure, but you can sort of think of it like, you're sort of pinging the brain and you're sort of listening to the responses, right? And so, you know, spikes in gamma, general patterns between the different electrode placements, you know, that's really informative for the model. Because what you can then do is measure on a continuous scale, you know, what...

[47:36.703]πŸ‘©β€πŸŽ€ Ate-A-Pi: Mm-hmm.

[47:56.013]🧠 wes: is working, right? You can do sort of binary classification in terms of like, did this work or didn't work on user feedback, but you can take this sort of continuous feedback and, you know, provide that as, you know, either rewards or penalties to the given sequences that the user experienced. And then from that, learn what's sort of hitting with a particular person, you know, in a similar way, you can think of, you know, a TikTok feed or Twitter feed.

[47:57.749]πŸ‘©β€πŸŽ€ Ate-A-Pi: Yep.

[48:02.722]πŸ‘©β€πŸŽ€ Ate-A-Pi: Mm-hmm.

[48:11.829]πŸ‘©β€πŸŽ€ Ate-A-Pi: All right.

[48:24.513]🧠 wes: as the user gives feedback, both, you know, maybe explicitly in the form of a like, or maybe it's, you know, some sort of, you know, continuous factor in like the time that they viewed a particular tweet or video, the model can then learn, you know, what are the preferences or what are sort of the things that in terms of content, which are just tokens that work with a particular person and then from that, you know, the model will learn over time.
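
A hedged sketch of how those two feedback channels could be combined: an explicit 1-to-10 user rating plus a continuous neurofeedback signal (for example, the change in gamma-band power after a TFUS sequence), blended into a reward for that sequence. The weights, band edges, and names are illustrative assumptions, not Prophetic's reward function.

```python
# Hypothetical reward sketch combining explicit user feedback and gamma-band neurofeedback.
import numpy as np

def gamma_power(eeg_window, rate_hz=1000, band=(30.0, 100.0)):
    """Mean power in the gamma band across channels for one EEG window (channels, samples)."""
    freqs = np.fft.rfftfreq(eeg_window.shape[-1], d=1.0 / rate_hz)
    spectrum = np.abs(np.fft.rfft(eeg_window, axis=-1)) ** 2
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[..., mask].mean()

def sequence_reward(eeg_before, eeg_after, user_rating=None, w_neuro=0.7, w_user=0.3):
    """Reward = weighted blend of gamma-power change and an optional 1-10 user rating."""
    neuro = gamma_power(eeg_after) - gamma_power(eeg_before)
    if user_rating is None:
        return w_neuro * neuro
    return w_neuro * neuro + w_user * (user_rating - 5.5) / 4.5  # center rating to roughly [-1, 1]
```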

[48:52.211]πŸ‘©β€πŸŽ€ Ate-A-Pi: Right on. Yeah, go ahead.

[48:53.883]🌌 Eric Wollberg: Let me just, because I always like to let Wes give the super technical version, and I know you have a very informed audience, but just to give a layman explanation, right? First of all, we only focus on training models on brain states that are both discrete and universal. To define terms, when I say discrete, I mean it is one thing and not another. A counter example being, for example, unfortunately,

[49:14.765]πŸ‘©β€πŸŽ€ Ate-A-Pi: Mm-hmm.

[49:19.008]πŸ‘©β€πŸŽ€ Ate-A-Pi: Mm-hmm.

[49:21.086]🌌 Eric Wollberg: flow states, right? So flow states are not discrete. You can be a surfer and enter a state of flow and what triggered that is your kind of motor cortex, right? Versus like a chess player can enter a state of flow and it's like their spatial reasoning. So you would just need more data to create a generalized model. But the way that lucid dreaming is discrete and universal, it is prefrontal cortex activation during REM, whether you're in a lucid dream or Wes is in a lucid dream or I'm in a lucid dream.

[49:22.808]πŸ‘©β€πŸŽ€ Ate-A-Pi: Mm-hmm.

[49:26.575]πŸ‘©β€πŸŽ€ Ate-A-Pi: Mm-hmm.

[49:32.419]πŸ‘©β€πŸŽ€ Ate-A-Pi: Right.

[49:37.079]πŸ‘©β€πŸŽ€ Ate-A-Pi: Right.

[49:50.754]🌌 Eric Wollberg: which makes it easier to make these generalizable models. And also I should mention, we already just touched on the sampling rate of EEG, but some of these EEG-fMRI data sets are extraordinarily information dense, right? To give you a sense, you get a spatial reading in fMRI once every 2.1 seconds; in that same period, you'll have 2,100 EEG samples, so it's extremely information dense.
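
The information-density arithmetic here is straightforward: one fMRI volume every 2.1 seconds against EEG sampled at 1,000 Hz works out to 2,100 EEG samples per volume.

```python
# Worked arithmetic from the figures quoted in the conversation.
fmri_tr_seconds = 2.1
eeg_rate_hz = 1000
eeg_samples_per_volume = int(fmri_tr_seconds * eeg_rate_hz)  # 2,100 EEG samples per fMRI volume
```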

[49:54.851]πŸ‘©β€πŸŽ€ Ate-A-Pi: All right.

[50:03.904]πŸ‘©β€πŸŽ€ Ate-A-Pi: Yep.
All right.

[50:18.151]πŸ‘©β€πŸŽ€ Ate-A-Pi: Right.

[50:20.17]🌌 Eric Wollberg: And so broadly, how you could think about this is like, you're creating this kind of vector space of possible, you know, lucid dreaming sequences, right? And so on and so forth. But, you know, you might be over here, and your reinforcement learning will drive the model, you know, to really focus on targets around here. Whereas Wes or I might be, you know, in other areas in that space. So that's just the important kind of, I don't know, layman explanation that I would give.

[50:27.297]πŸ‘©β€πŸŽ€ Ate-A-Pi: Yeah.

[50:36.975]πŸ‘©β€πŸŽ€ Ate-A-Pi: Mm-hmm.

[50:45.247]πŸ‘©β€πŸŽ€ Ate-A-Pi: Oh, absolutely. How much, like let's say you have, what is your target for like the first run of the Halo? Like is it 10,000 devices? Is it more? Like what is your rough number that you're coming up with?

[51:01.394]🌌 Eric Wollberg: Yeah, you know, we have a reservation program that I started because I talked to a lot of consumer hardware founders, and, you know, they said, listen, one, obviously, de-risk demand, but also, for the go-to-manufacturing motion, it's really great if you have an order book that you can kind of, you know, use to approach better manufacturing partners. Because

[51:10.241]πŸ‘©β€πŸŽ€ Ate-A-Pi: Yeah.

[51:23.91]🌌 Eric Wollberg: you know, the bane of all consumer hardware companies, and really hardware companies broadly, but particularly consumer hardware companies, is these small-batch, medium-batch manufacturers. Like, you order a thousand, five hundred of them don't work, you're like, hey, these don't work. No, well, send us more money. Now 250 don't work, etc. It's like death by a thousand paper cuts. And so, you know, we've done $2.4 million of booked revenue through that reservation program. I want to be very clear:

[51:33.24]πŸ‘©β€πŸŽ€ Ate-A-Pi: Right.

[51:40.64]πŸ‘©β€πŸŽ€ Ate-A-Pi: Yeah.

[51:51.042]🌌 Eric Wollberg: That money's held in a separate bank account. We do not use it for development. It's fully refundable at any time. But I think it shows a good sense of demand. I think in terms of what we want the first run to look like, it's very much gonna be reflective of where that reservation list is at the time. Certainly anyone who puts down a reservation should be getting a device because they took a bet on us when in some of these cases, when we were nothing but a website and a dream, not to be too, you know.

[51:54.603]πŸ‘©β€πŸŽ€ Ate-A-Pi: Yep.

[52:04.835]πŸ‘©β€πŸŽ€ Ate-A-Pi: Hmm.
I see.

[52:13.183]πŸ‘©β€πŸŽ€ Ate-A-Pi: Right.

[52:18.805]πŸ‘©β€πŸŽ€ Ate-A-Pi: Right, right, right.

[52:20.666]🌌 Eric Wollberg: on the nose. But, you know, that's really, you know, the critical thing is, you know, building up this reservation program. We've spent $0 on marketing so far. And so, you know, I think what this shows, right, is there's enormous demand for this. We have to make it work. We're starting neurostimulation, you know, in the spring. We have a beta user program that we launched alongside showcasing the model. We got like 3,500 people to sign up for that, you know, in five days or something like that.

[52:30.196]πŸ‘©β€πŸŽ€ Ate-A-Pi: Right, right.

[52:34.965]πŸ‘©β€πŸŽ€ Ate-A-Pi: Right.

[52:42.009]πŸ‘©β€πŸŽ€ Ate-A-Pi: Yep.

[52:47.748]πŸ‘©β€πŸŽ€ Ate-A-Pi: Yeah.

[52:49.81]🌌 Eric Wollberg: And so we'll go through the process of selecting those people for the beta user program. But anyway, in terms of what the initial run will look like, it'll be very dependent on that reservation list.

[52:50.2]πŸ‘©β€πŸŽ€ Ate-A-Pi: Right.

[53:04.787]πŸ‘©β€πŸŽ€ Ate-A-Pi: I mean, I just wanted to draw a metaphor. Let's say you had, let's say, I'm just gonna put a number, let's say 10K, right? Let's say you have 10K, and how much would that expand your, because I imagine you can use the EEG data coming out to kind of improve your models, right?
So how does that model feedback loop look from the initial devices? Like, if you have, let's say, 10,000 devices... because, and I see this all the time, there's this graph that people have of the day the iPhone came out, and like 10 years later, we probably create as many photos in a day as in the entire history of humanity before the iPhone. Right, like every single day, there are probably more photos taken
than in the entire history of humanity before the iPhone, right? And so I wonder if putting this device out there expands the data set of EEGs so dramatically, so, so dramatically, that you can have this enormously larger data set. Like, what would having 10,000 of these devices in the market do? Do you think that would expand your data set significantly for future improvements?

[54:24.829]🧠 wes: Yeah. So that goes back to the reinforcement layer, which really only works at scale. Um, it works significantly better at scale, I should say, right? Because the Halo is a neuroimaging device, right? It's also a stimulation device. Um, and so what we're able to do is look at, you know, basically,

[54:32.611]πŸ‘©β€πŸŽ€ Ate-A-Pi: Right.

[54:41.825]πŸ‘©β€πŸŽ€ Ate-A-Pi: Right.

[54:52.265]🧠 wes: what are the responses you're getting from this TFUS, and those responses are measured with EEG. And that's really informative, right? You can learn quite a bit from what works and what doesn't. And so while, I think, you know, starting with this open source data set, the simultaneous EEG and fMRI, we can get terabytes of data that way, but it's really when you start getting into, you know, hundreds of terabytes or petabytes of

[54:55.64]πŸ‘©β€πŸŽ€ Ate-A-Pi: Mm-hmm.

[54:59.375]πŸ‘©β€πŸŽ€ Ate-A-Pi: Mm-hmm.

[55:05.206]πŸ‘©β€πŸŽ€ Ate-A-Pi: Right.

[55:13.101]πŸ‘©β€πŸŽ€ Ate-A-Pi: Yeah.
Right.

[55:22.097]🧠 wes: data, that's really going to come from people using this every night, and we're collecting, just on a nightly basis, gigabytes worth of neuroimaging data and the responses specifically from TFUS sequences. You're pinging the brain and you're basically seeing how it's reverberating, and we're collecting that, and it's this continuous thing over and over again. So it would be a very significant amount of data.
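
As an illustrative scale-up only, taking the hypothetical 10,000-device figure floated above and assuming a couple of gigabytes of neuroimaging data per device per night (the exact per-night figure is an assumption):

```python
# Illustrative scale-up arithmetic only; per-night volume and device count are assumptions.
devices = 10_000
gb_per_device_per_night = 2          # assumed; "gigabytes worth" per night per the conversation
nights_per_year = 365

tb_per_night = devices * gb_per_device_per_night / 1_000   # ~20 TB each night
pb_per_year = tb_per_night * nights_per_year / 1_000        # ~7.3 PB per year
print(tb_per_night, round(pb_per_year, 1))                  # 20.0 7.3
```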

[55:27.054]πŸ‘©β€πŸŽ€ Ate-A-Pi: Yeah.

[55:33.615]πŸ‘©β€πŸŽ€ Ate-A-Pi: All right.

[55:39.149]πŸ‘©β€πŸŽ€ Ate-A-Pi: Right.

[55:43.527]πŸ‘©β€πŸŽ€ Ate-A-Pi: Right. Yeah, yeah, I can I can I can imagine. Yeah, good. Yeah.

[55:52.522]🌌 Eric Wollberg: And one thing I also want to talk about is not only scaling up the reinforcement side, but also the initial training data aggregation. So the day after we showcased the model, we released a piece that we had a good response to, but I don't think people really realize how profound it potentially is. We call it the Qualia Factory. Realistically, that's a really cool marketing name for what is a neuroimaging lab,
primarily set up with simultaneous EEG-fMRI neuroimaging setups. And so, you can imagine where we can actually expand our training dataset where we bring in people who are extraordinary at, whether it's lucid dreaming or meditating or focus or positive mood, et cetera, the entire state space of discrete, universal, but even potentially, because we have the scale now, non-discrete,

[56:17.055]πŸ‘©β€πŸŽ€ Ate-A-Pi: Yeah.

[56:22.831]πŸ‘©β€πŸŽ€ Ate-A-Pi: All right.

[56:45.618]🌌 Eric Wollberg: universal brain states, and aggregating more and more of this data to train, you know, successive models, where you're creating models that can do more and more different experiences with the same piece of hardware. We do aim to hopefully launch the device with more than just lucid dreaming as an experience, definitely focus and hopefully also positive mood. So, you know, that's also a really critical way you scale up data, on the training data sets,

[56:58.455]πŸ‘©β€πŸŽ€ Ate-A-Pi: Yep.

[57:13.53]🌌 Eric Wollberg: using this Qualia Factory, which is really a larger neuroimaging lab built for purpose.

[57:21.347]πŸ‘©β€πŸŽ€ Ate-A-Pi: So let me, maybe I kind of missed that earlier. So the core thing that you're looking at is kind of like on your product roadmap is discrete universal states, as you call them. And of the, again, I think perhaps not everyone is super familiar, but of the discrete universal states, you're saying focus is one of them.
and positive mindset is another? Is that positive mood? Is that another?

[57:54.774]🌌 Eric Wollberg: Positive mood, yeah. So TFUS has already been used to induce focus and positive mood. So these things have already been validated in the context of utilization with TFUS to do it. So in many ways, we're also just broadly recreating that. But yeah, I mean, listen, we think that Lucid Dreaming is the killer app for this technology from a consumer experience perspective. It is the most profound, extraordinary experience that we think we can give. And so that's why it's our focus.

[57:57.655]πŸ‘©β€πŸŽ€ Ate-A-Pi: Yeah, positive mood.

[58:20.399]πŸ‘©β€πŸŽ€ Ate-A-Pi: Mm-hmm.

[58:23.926]🌌 Eric Wollberg: But you want to create a system where, over time, you can make a more and more generalizable device, or generalized device, where the number of experiences the same device can give you increases over time. So, we aim to have kind of three core experiences released when the device ships. It obviously also increases the TAM of the device. People like to be focused when they do work.

[58:31.401]πŸ‘©β€πŸŽ€ Ate-A-Pi: Right.

[58:49.934]πŸ‘©β€πŸŽ€ Ate-A-Pi: Yeah.

[58:52.474]🌌 Eric Wollberg: So maybe you can wear it while you're jamming out a spreadsheet or something, I don't know, whatever you're doing for work. Same with positive mood: people want to feel happy and so on. So that's kind of the impetus. What's really critical about the model is that not only does it obviously create this closed-loop system that creates these targets and reliably creates these experiences,

[59:11.695]πŸ‘©β€πŸŽ€ Ate-A-Pi: All right.

[59:19.33]🌌 Eric Wollberg: but also allows you to onboard more experiences to the system over time.

[59:19.983]πŸ‘©β€πŸŽ€ Ate-A-Pi: All right.
