2024-03-24: Rebuttal to Dan Hendrycks
On power, and the delegation of it
🔷 Subscribe to get breakdowns of the most important developments in AI in your inbox every morning.
Here’s today at a glance:
Should We Fear Military AI?
Dan Hendrycks is an accomplished machine learning researcher who invented the Gaussian Error Linear Unit (GELU), an activation function that improved the ability of neural networks to model complex patterns in data. He then became an AI doomer and is now the Director of the Center for AI Safety, an Effective Altruism-linked think tank that aims to slow AI development.
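(For the curious: GELU weights each input by the standard normal CDF, GELU(x) = x · Φ(x). A minimal Python sketch of the exact form and the common tanh approximation from the original paper:)

```python
import math

def gelu(x: float) -> float:
    """Exact GELU: x * Phi(x), where Phi is the standard normal CDF."""
    return x * 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gelu_tanh(x: float) -> float:
    """Tanh approximation given in the original GELU paper."""
    return 0.5 * x * (1.0 + math.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * x ** 3)))

print(gelu(1.0))       # ~0.8413
print(gelu_tanh(1.0))  # ~0.8412
```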
Dan is concerned about military AI use:
People aren't thinking through the implications of the military controlling AI development. It's plausible AI companies won't be shaping AI development in a few years, and that would dramatically change AI risk management.
In a democracy, we jointly cede the monopoly on violence to the government. We then elect our leaders, retain the right to recall or fire them, and delegate to them the authority to make decisions jointly for all of us. This idea that “AI companies” should be shaping AI development is an elite cabal idea, one we would not countenance if the cabal consisted of people on the outs with the ethos of Northern California. If our polity ends up handing control of AI development to the military, then so be it. And note that the louder the doomers shout, the more likely this is to happen.
In the next part, he discusses the scary scenario in which the military buys AI tech:
Possible trigger: AI might suddenly become viewed as the top priority for national security. This perspective shift could happen when AIs gain the capability of hacking critical infrastructure (~a few years). In this case, the military would want exclusive access to the most powerful AI systems.
Defense Production Act, budget, data: The US military could compel AI organizations to make their AIs for them. It also could demand that NVIDIA's next GPUs go to their chosen organization. The military also has an enormous budget and could pay hundreds of billions for a GPU cluster. They also can get more training data from the NSA and many companies like Google.
The company that founded Silicon Valley, Fairchild Semiconductor, built its first transistor for IBM’s XB-70 avionics system. To unpack that a bit: Gordon Moore and Robert Noyce made their first Fairchild product for the XB-70, a nuclear missile-armed supersonic long-range bomber, specifically for the computers that calculated trajectories for the flight and for the nuclear missiles. Later, those same transistors were deployed on Minuteman ICBMs.
The Valley seems to be in the process of returning to its roots as an innovator in defense, so it seems like a bit of pearl-clutching to scaremonger about this. Yes, we will need to produce armaments to defend America, her allies, and the free world. That’s part of what society wants and is willing to pay for.
He goes on:
Military systems are more hazardous: Military systems are sanctioned to hack and use lethal force, so they will have capabilities that others will not. Moreover, some will explicitly be given ruthless propensities. In an anarchic international system, the main objective of states is to compete for power for self-preservation, according to neorealists. Later-stage AIs could be permitted to be explicitly power-seeking, deceptive, and so on since these propensities make the systems more competitive.
The deadliest object in the American home remains the swimming pool, having killed more kids than guns. Are military systems actually more hazardous? Are we comparing apples with armadillos here?
What is a hack? I once issued a TCP/IP command that checked the open ports of a newspaper in another country. I later learned that this constituted “hacking” under the laws of that country. Penetration testing, red teaming, and other civilian security practices all involve these kinds of “hacks” in order to make systems better, without any military involvement at all.
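(For illustration, a benign connectivity check of the sort described might look like the following minimal Python sketch; the hostname is a placeholder, and you should only probe systems you are authorized to test:)

```python
import socket

def port_is_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Attempt a TCP connection; success means the port accepts connections."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder host; probe only systems you are authorized to test.
print(port_is_open("example.com", 80))
```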
Selection and confirmation of targets by the military remains in the hands of human operators. If there is any organization that is paranoid about chains of command and authorization for the use of force, it is the military, and under firm civilian control, it is not likely to cede that authority to a calculator.
AI scientists, though, should perhaps remember, again, that we as citizens have ceded the monopoly on the use of violence to our government, as directed by our duly elected representatives. When we choose to deny the military the tools it needs to perform its tasks, we, a shadowy elite cabal, are denying our fellow citizens the force necessary to protect the joint polity.
Futility of AI weapon red lines: Some are trying to create "red lines" that would trigger a pause on AI development. These hoped-for "red lines" often relate to weaponization capabilities such as "when is an AI able to create a novel zero-day?" This red line strategy seems to assume no military involvement. Many of these red lines are actually progress indicators or checkpoints for a military and would not trigger a pause in AI development.
No self-respecting military will respect any “red lines” unless expressly directed by its civilian leaders. And civilian leaders are easily scared by grizzled Special Forces Colonel Jessup types in times of war.
Indeed, Dan acknowledges that regulation is for naught.
Regulation: When militaries get involved, competitive pressures become more intense. Racing dynamics can't be mitigated with corporate liability laws or various forms of regulation as they don't apply to the military. For example, the EU AI Act and White House Executive Order do not apply to the military. Militaries racing could result in a classic security dilemma, like with nuclear weapons. Much of the playbook for "making AI go well" is impotent.
Finally, Dan says he’s not against the military! It’s just that everyone else he knows thinks we’re in the valley of peace where corporations are autonomously developing the tech.
This is not to suggest that I am against the military. I'm pointing out that everyone is acting as though corporations will forever be allowed to autonomously develop what will become the most powerful technology ever.
I think there needs to be a little newsflash: no company, not even Midjourney, bootstrapped with no external investors, is “autonomous”.
Every company exists to serve customers. Those customers can be consumers, other corporations, the military, or the government. The capitalistic amoeba of the firm is but a convenient fiction of organization. It exists because it is the most efficient way of delivering the required goods. That is all. There is no morality to the corporation, either good or bad (this was the mistake of the communists… they just didn’t like the vibe, even though they liked the goods).
And so while Dan’s colleagues may think they have full 360 degrees of freedom of movement, they work within the constrained web of social relations that makes us human. There is freedom, but within a framework. And in that framework, once again to belabor the point, they have ceded the monopoly on violence to the State, as directed by their elected representatives.
— Dan Hendrycks (@DanHendrycks)
4:15 PM • Mar 20, 2024
🌠 Enjoying this edition of Emergent Behavior? Send this web link to a friend to help spread the word of technological progress and positive AI to the world!
🗞️ Things Happen
While Elon goes pretty right-wing to reduce the potential tax impact of a billionaire levy, immigration is probably boosting the US economy dramatically:
"JPMorgan: Immigration is boosting the U.S. economy and has been 'underestimated'" cnbc.com/2024/03/22/jpm…
"From everything that we have seen, the revenues that are generated exceed the expenses."— Scott Lincicome (@scottlincicome)
3:43 PM • Mar 22, 2024
🖼️ AI Artwork Of The Day
Disney’s Frozen Live Action - u/diibiazus from r/midjourney
That’s it for today! Become a subscriber for daily breakdowns of what’s happening in the AI world: