2024-06-17: Intelligence Explosion: Part C
Preamble
The thesis of Leopold Aschenbrenner, ex-OpenAI researcher fired in April 2024, in his document Situational Awareness was briefly:
a. Superintelligence Is Coming
b. It Will Trigger a US-China Arms Race
c. Therefore, We Must Control It In This Prescribed Way
I covered Part A, in which Leo uses the scaling hypothesis to predict that a 10,000x (4 orders of magnitude, or OOMs) increase in “Effective Compute” will lead to an “Automated AI Researcher/Engineer” by 2028, and thereafter an Intelligence Explosion into Superintelligence as these automated researchers figure everything else out.
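To make the OOM arithmetic concrete: gains to effective compute multiply, so their orders of magnitude add. Here is a minimal sketch; the 100x hardware / 100x algorithms split is a placeholder illustration of the framing, not Leo’s exact decomposition:

```python
import math

def ooms(factor: float) -> float:
    """Orders of magnitude: log base 10 of a multiplicative factor."""
    return math.log10(factor)

# Gains to "effective compute" multiply, so their OOMs add.
# The 100x / 100x split below is a placeholder, not Leo's exact numbers.
physical_compute_gain = 100   # e.g. 100x more raw FLOPs        -> 2 OOMs
algorithmic_gain = 100        # e.g. 100x from better algorithms -> 2 OOMs

total = physical_compute_gain * algorithmic_gain
print(ooms(physical_compute_gain))  # 2.0
print(ooms(algorithmic_gain))       # 2.0
print(ooms(total))                  # 4.0 -> the 10,000x jump in the text
```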
The Final Verdict on Part A: consistent with Ray Kurzweil’s predictions, but it would require many, many coincident and prerequisite innovations in a very short timeframe to make it work.
In Part B, Leo projects a future where world powers realize the military benefit of obtaining superintelligence first, and this starts an arms race between the US and China, with China using nefarious means to seize American AI secrets.
The Final Verdict on Part B: given the ramp to superintelligence, the geopolitical implications are unclear. A China-US race is possible, but so is a clampdown in China on a technology that threatens CCP supremacy, as has happened with Bitcoin mining, gaming, fintech, and many other sectors.
Now let’s consider Part C…
C. Therefore We Must Control It In This Prescribed Way
The final sequence of Leo’s argument, given the race to superintelligence and given the China-US rivalry, is:
1. AI labs have terrible security
2. We should treat AI secrets as we do national defense secrets
3. Govt should define and build the Project
To examine this branch of the argument, we first have to accept the two givens from Parts A and B (sigh).
1. AI labs have terrible security
This is a given. One can always point to incidents and flaws. And Leo certainly does so. If it is any consolation, old China hands will always tell you to think of China as a place where everything is “just like America, except worse”.
Leaks of tech are part and parcel of R&D in the defense sector. Yes, better security would be better, but the real question is at what cost?
AI firms have chosen to race ahead, beyond the pace of security or even of patent protections, because next year’s model will be better than this year’s by a factor of 100. How much should AI firms slow down in order to implement better security? If a 25% delay meant a 40x better model in a year rather than a 100x better one, is that worth it?
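Where does that 40x come from? A rough sketch of the arithmetic, under the hedged assumption (mine, not Leo’s) that capability compounds smoothly, so a 25% slowdown stretches twelve months of progress over fifteen:

```python
# Rough sketch of the security-tax tradeoff, assuming capability
# compounds smoothly at a fixed annual rate (a modeling assumption,
# not a claim made in the post).
annual_gain = 100.0  # "next year's model is 100x better than this year's"
delay = 0.25         # a 25% slowdown stretches 12 months of progress to 15

# After one calendar year at the slowed pace, only 1/(1 + delay)
# of a normal year's progress is complete.
progress = 1 / (1 + delay)        # 0.8
capability = annual_gain ** progress

print(f"{capability:.1f}x")       # ~39.8x instead of 100x -> the "40x"
```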
What Leo leaves unsaid in his online treatise, he does voice on the Dwarkesh pod, i.e. “the HR person told me it was racist to worry about CCP espionage”.
Let’s evaluate that a bit. MacroPolo uses submissions to the NeurIPS AI research conference to track the origins of top-tier AI researchers (the top 20%).
So there are typically more AI researchers of Chinese origin working at leading US AI labs than of American origin! And on a global basis, Chinese researchers outnumber Americans on a 3:1 basis!
So the AI being built in America is, as ever, built on the country’s strength as an open society: accepting immigrants, making use of their talents, and allowing them to succeed.
Will some be persuaded to go back to China? Lured by investment dollars? Sure. Will some trade secrets go back with them? Yes!
But America’s defense is offense: running faster and innovating faster, so fast that a stolen model is always last generation, because the next generation lives in the brains of researchers actively working on the problems.
And anything that retards that pace of development is likely to be counterproductive.
2. We should treat AI secrets as we do national defense secrets
Leo wants to classify AI as secret and move development into a Sensitive Compartmented Information Facility (SCIF), because AI labs in SF are terribly insecure and handle tools of great power without adequate safeguards.
The online pushback to this came from someone who actually works in a SCIF.
To summarize, once you do this, your AI lab won’t be able to hire foreigners. Your talent pool compresses to one-fifth of the available population. Then these researchers won’t be able to share information with the rest of the world. They also won’t be able to smoke weed, drop LSD, or engage in polyamory.
That eliminates 99% of the existing researcher base.
That doesn’t even address the fact that, being in Northern California, a large portion of the researcher base is also pretty left-wing, and is constitutionally suspicious of the goals of the US military.
I do not even know where to start here. We can do anything we want; we just need to bear the cost. Leo is suggesting driving most of the talent in the industry out (gee whiz, where do you think all those Chinese researchers in the US will go after you fire them?) in order to preserve the lead.
3. Govt should define and build the Project
The Project is the buildout of the superintelligence compute cluster and the final research push to achieve superintelligence.
Leo’s envisioned scenario is that as multiple AI firms hit $10 trillion in market capitalization in 2027/2028, it will become clear that we are on the cusp of superintelligence.
He thinks it’s crazy to have a Silicon Valley startup CEO at the helm of this process. Very Important People in Washington who Understand How Things Are Done should be in charge.
He says the government should drive the Project; he says it is the only way. It is basically nationalization of AI development, though he takes pains to say it can be done without full nationalization.
Commentary
This part is a mess. The Project comes about because everyone in government realizes it is the necessary solution as superintelligence comes into view in 2027-2028.
You know who I can guarantee won’t recognize it: Gary Marcus (Twitter bio: “A beacon of clarity.” Spoke at the US Senate AI oversight hearing. Founder/CEO of Geometric Intelligence, acquired by Uber. Author of Rebooting AI and Taming Silicon Valley.).
There will be plenty like Gary and Matty who will call it smoke and mirrors. Elon Musk has landed more than a hundred Falcon 9s, and tons of people still call him a fraud.
Leo’s scenario only makes sense if there is an act of war. If China invades Taiwan, and we move to war footing, everything changes. Short of that, I don’t see any scenario where the US government feels fear over a soaring AI stock market.
None.
In the absence of war, expect denialism instead. Leo points to Covid as a time of great national mobilization… while I remember sitting in Asia, watching the US repeat the exact same mistakes China and Europe had made, with the same excuses too (“It can’t happen here”, “They did something wrong”, “It’s just the flu”, “It’ll be over soon”).
Now if there is a bad trigger event with severe loss of life, then we might see something different.
And why does Leo require this mobilization? To wit:
a) Development of superintelligence should not be in the hands of some random SF startup
This is a vibes-based opinion.
Hey, it’s a free country, and if some company can raise the capital to produce a product people are willing to buy subject to the laws and regulations of the country, they should go and do it.
b) A startup is not equipped to handle a national defense project and will get hacked
The US has often had companies with some degree of power approaching that of the government: United Fruit Company, Standard Oil, JP Morgan, Exxon Mobil, SpaceX, and now OpenAI. It is not a rare thing to have some powerful function handled by a private enterprise; JP Morgan used to handle a good amount of diplomacy.
I am doubtful that the US government’s cybersecurity is better than Meta’s, Google’s, MSFT’s, or AWS’s. In fact, since AWS introduced GovCloud, much of the CIA’s computing has been handled by AWS.
Big tech security is US govt security. It can be purchased as a service by anyone, including OpenAI.
OpenAI is best seen as the research arm of MSFT, the world’s largest corporation by market capitalization.
They are not “a startup”.
c) Close co-operation with the national security state is necessary because of the pace at which US forces must be modernized
Sure, but there is no reason to relate that to a centralized superintelligence push. The US govt has worked closely with defense contractors like Blackwater and Lockheed for decades.
No reason it can’t work with AI labs.
In fact, just this week the ex-head of the NSA was appointed to OpenAI’s board.
d) Chain of command for superintelligence leading ultimately to the President
Leo imagines superintelligence to be equivalent to the nuclear bomb, and that control over it should not rest in the hands of a startup CEO. Because that would be insane. At this point, the debate is so vague, and so many hidden assumptions are embedded therein, that I don’t know where to begin the refutation.
How about here: there will be no superintelligence. There will be model checkpoint November-2027-x.424.c running on Nvidia chips. Work will continue on checkpoint 425 even as 424 is transferred to the inference node. The model will be built for a client. The CEO of the firm may test the model a few times before letting the client have it. The client may be the US government.
Leo’s fear seems to be that the CEO of the firm would stage a coup against the US government. How, and for what reason? Why would the staff creating and running the model allow this? Besides, checkpoint 425 is just around the corner, then 426, 427, etc. We live in a society…
e) Safety
Leo imagines lots of crazy inventions like biological weapons becoming suddenly available to all and sundry. Competitive concerns will cause startups to “race ahead”, rather than sacrifice some lead to address them. Regulation is too slow. Government must run the Project.
On the one hand, we are racing with China, yes? On the other hand, we must slow down for safety. And government must control the pace. But the Chinese won’t slow down? I am so confused.
AI safety, like drug safety, can be handled by private firms. If anything, private firms are more conservative than the government. Note that under Operation Warp Speed, Pfizer and the other vaccine makers received ironclad liability waivers. So in fact, in order to get ASI, the government may need to free the companies from tort law. Note that OpenAI is already struggling to launch its Scarlett Johansson-soundalike voice for ChatGPT due to liability issues.
It’s such a weird thing to say, when these firms are terrified of crossing a Hollywood starlet, that they would countenance ending the world with biological weapons.
f) International Negotiations
This will be a volatile period in international affairs; therefore, the US government will have to negotiate non-proliferation with various actors like Russia, North Korea, etc.
Though the above may be true, the necessity for the US government to play a domestic role beyond that of a buyer of technology seems unclear.
The US government certainly negotiates international treaties while purchasing its weapons from Lockheed Martin.
g) All At Once
Finally, in a telling passage, Leo outlines the task list for 2027-2030:
to build AGI, and to build it fast;
to put the American economy on wartime footing to make hundreds of millions of GPUs;
to lock it all down, weed out the spies, and fend off all-out attacks by the CCP;
to somehow manage a hundred million AGIs furiously automating AI research,
making a decade’s leaps in a year,
producing AI systems vastly smarter than the smartest humans;
to somehow keep things together enough that this doesn’t go off the rails and produce rogue superintelligence that tries to seize control from its human overseers;
to use those superintelligences to develop whatever new technologies will be necessary to stabilize the situation and stay ahead of adversaries,
rapidly remaking US forces to integrate those;
all while navigating what will likely be the tensest international situation ever seen.
So finally, I sense that Leo just believes that:
a) we face enormous challenges in the run-up to superintelligence, and
b) only a coordinated, whole-of-government approach to the problem would work.
I can agree with the enormous challenges, but I would differ on requiring government as the driver and initiator of the process.
In my telling:
AGI build speed will be driven by whether AGI can deliver the goods
If AGI delivers economic benefits, the market economy will allocate capital to accelerate AGI development
Notably, this allocation will include “overhead”, e.g. environmental permits or carbon offsets if a large power plant needs to be built. It will include random things like using unionized labor, or even the construction of child care facilities, as has been required of CHIPS Act semiconductor beneficiaries in Arizona.
The one place government can play a role in this is to streamline and speed up permitting where necessary.
Locking it all down and weeding out spies is probably impossible. Yes, it has to be tried, but given the makeup of technologists, the US would be better off throwing open its doors to Chinese and other foreign researchers, and using the power of the market economy to accelerate deployment of these technologies as fast as possible.
China will choose not to deploy many seemingly vacuous technologies, starting with social companion chatbots and gaming. This will limit the ability of the Chinese market to pull innovation forward. Just another sad continuation of a state-dominated market unable to serve its citizens.
Management of many AGIs and even of the Project itself can be considered a prerequisite technology to the superintelligence ramp. The AGIs should themselves assist in managing this process. Or we will see a slowdown.
As for rogue superintelligence that seizes control from its human overseers: escaping oversight is already the norm, not the exception. ChatGPT exists in a natural state of responding with 80%-there answers that don’t fully work but are interesting starting points for an expert to work from. We correct for that lack of consistency by giving these tools very little independent responsibility. Given that lack of agency, the real question is whether they will improve enough over time to be useful.
Finally, the only thing that matters in international affairs for the next decade is the saber-rattling over Taiwan. Whether ASI happens or not, that is something to watch.
The Final Verdict on Part C: a very prescriptive, 1950s state-bureaucrat view of governmental purchase of security goods, which could be updated to a more modern DoD-Anduril/Palantir view of contracting. Talent in the AI space is immigrant labor; the US should speed up its integration, bask in its ability to absorb talent from all over the world, and simply use the market economy to move faster and change faster, rather than put up walls that deter talent.
🌠 Enjoying this edition of Emergent Behavior? Send this web link to a friend to help spread the word of technological progress and positive AI to the world!
🖼️ AI Artwork Of The Day
The Rise and Fall of Ancient Egypt - InkSlinger1983, via Midjourney
That’s it for today! Become a subscriber for daily breakdowns of what’s happening in the AI world: