OpenAI has a genius for producing dystopian headlines. Their latest announcement is no exception: OpenAI appoints former NSA chief to their board.
It’s the kind of sentence to give you pause. And it’s easy to frame the news cynically. The internet commentariat certainly did.
In short, many people made the logical assumption that OpenAI plus the NSA equals creepy data collection and citizen monitoring. If true, the news is just the latest PR gaffe from a company that seems unable to ‘read the room’.
My understanding is a little different.
Situational Awareness
A few weeks ago, an essay called Situational Awareness rippled across AI research circles as the latest and best-written addition to the literature of “AI might doom everyone”. It’s 136 pages long, but the argument is pretty simple.
AGI (Artificial General Intelligence) could arrive as soon as 2027, with ASI (Artificial Super Intelligence) to follow within a year.
This technology will have such national security importance that sophisticated state actors (i.e. China, Russia, and the usual suspects) will make it a priority to steal the research and develop it themselves.
Therefore AGI research will necessarily become a National Security matter and move toward something resembling the Manhattan Project with significant government involvement.
And then, The Project will be on.
I skipped a lot of the paper’s detail (you can read a 20-page summary here), but it’s easy to keep the shape of the argument intact: AI models powerful enough to tip the balance of global power may be coming soon. Therefore governments will get involved in their development.
Scott Aaronson, a verifiably superior blogger to me, gives his own summary:
“We’re still treating [AI] as a business and technology story like personal computing or the Internet, rather than also a national security story like the birth of nuclear weapons. And we’re still indexing on LLMs’ current capabilities (“fine, so they can pass physics exams, but they still can’t do original physics research”), rather than looking at the difference between now and five years ago, and then trying our best to project forward an additional five years. […] The development of frontier AI models will inevitably be nationalized, once governments wake up to the implications, so we might as well start planning for that now.”
Of course, all this is conjecture. Not fact. But it’s one of those ‘the smartest people you know are taking it very seriously’ situations reminiscent of certain Internet forums circa 2020. The author, Leopold Aschenbrenner, is a genuine wunderkind and a former member of OpenAI’s Superalignment team (the department responsible for figuring out how to control super intelligent AI).
Experts are debating many points of the paper, especially the speed of AI development. I’d like to mega-stress that to any casual reader of this post.
But other points are finding little pushback. Such as:
China will want super powerful AI before anyone else.
The US will not want China to get super powerful AI before anyone else.
Quirky San Francisco startups aren’t really equipped to deter international espionage.
Private companies may seek odd bedfellows if those bedfellows can provide unlimited money and electricity.
Is the Project on?
So what’s the deal with appointing former NSA chief Paul Nakasone to the board?
Officially, OpenAI tapped Nakasone to address how AI can be used in cybersecurity. A wonderfully broad phrase that tells us absolutely nothing.
To me, the appointment gestures toward the scenario Aschenbrenner argues for. His arguments had circulated in idea form before they circulated in 136-page essay form. Which is to say, AI researchers are aware of them.
It’s worth mentioning that Leopold Aschenbrenner was fired from OpenAI’s Superalignment team earlier this month. Ostensibly, he was canned for ‘leaking information’, though he claims he was fired for circulating an internal memo about security concerns without proper approval from executives. He also did not sign the employee letter that called for Sam Altman’s reinstatement as CEO during the November board crisis. In any event, his arguments may have been appreciated internally even if he wasn’t personally.
So why did OpenAI appoint a former NSA Chief? We can make some guesses of increasing intrigue and existential consequence:
1. OpenAI wants to get juicy cybersecurity & government contracts, so hired a former NSA chief in a totally typical example of the revolving door between government and industry.
2. OpenAI wants to improve its security against state espionage, so hired a former NSA chief who knows about the state of the art.
3. OpenAI wants to help the government spy on people, so hired a former NSA chief who’s really good at that.
4. OpenAI wants to team up with the government to start an AGI Manhattan project.
My opinion is two, with a whiff of four.
In the short term, OpenAI appreciates that it’s in a global spotlight and recognizes the value of an ex-US intelligence officer who understands the threat landscape of foreign espionage.
In the long term, OpenAI appreciates that a former NSA chief on the board is a valuable asset in opening the door to government involvement.
For some observers, this looks like the first step of The Project.
In other words: watch this space.
Great reporting per usual
Why should we assume they have only one thing in mind? Instead, given the changes they made in January to their terms of service, and given their apparent ambitions, why not:
- improve security
  - of their products
  - out of patriotism
- partner with the government
  - to make themselves indispensable and limit effective regulation
  - to get military, intelligence, & security contracts
- marketing
- other reasons