Dario Amodei's AI safety contingent was growing disquieted with some of Sam Altman's behaviors. Shortly after OpenAI's Microsoft deal was inked in 2019, several of them were stunned to discover the extent of the promises that Altman had made to Microsoft about which technologies it would get access to in return for its investment. The terms of the deal didn't align with what they had understood from Altman. If AI safety issues actually arose in OpenAI's models, they worried, those commitments would make it far more difficult, if not impossible, to prevent the models' deployment. Amodei's contingent began to have serious doubts about Altman's honesty.
"We're all pragmatic people," a person in the group says. "We're obviously raising money; we're going to do commercial stuff. It might look very reasonable if you're someone who makes loads of deals like Sam, to be like, 'All right, let's make a deal, let's trade a thing, we're going to trade the next thing.' And then if you are someone like me, you're like, 'We're trading a thing we don't fully understand.' It feels like it commits us to an uncomfortable place."
This was against the backdrop of a growing paranoia over different issues across the company. Within the AI safety contingent, it centered on what they saw as strengthening evidence that powerful misaligned systems could lead to disastrous outcomes. One bizarre experience in particular had left several of them somewhat nervous. In 2019, on a model trained after GPT-2 with roughly twice the number of parameters, a group of researchers had begun advancing the AI safety work that Amodei had wanted: testing reinforcement learning from human feedback (RLHF) as a way to guide the model toward generating cheerful and positive content and away from anything offensive.
But late one night, a researcher made an update that included a single typo in his code before leaving the RLHF process to run overnight. That typo was an important one: It was a minus sign flipped to a plus sign that made the RLHF process work in reverse, pushing GPT-2 to generate more offensive content instead of less. By the next morning, the typo had wreaked its havoc, and GPT-2 was completing every single prompt with extremely lewd and sexually explicit language. It was hilarious, and also concerning. After identifying the error, the researcher pushed a fix to OpenAI's code base with a comment: Let's not make a utility minimizer.
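The failure mode is easy to reproduce in miniature. The following is a toy sketch (not OpenAI's actual code, and far simpler than real RLHF): a single parameter tuned by gradient ascent on a reward. Flipping one sign in the update turns a reward maximizer into a reward minimizer, the same class of bug the typo introduced.

```python
# Toy illustration of a sign-flip bug in reward optimization.
# Hypothetical example: the "policy" is one number, theta, and the
# reward is -(theta - 3)^2, which peaks (at 0) when theta == 3.

def tune(sign=+1.0, steps=100, lr=0.1):
    theta = 0.0
    for _ in range(steps):
        grad = -2.0 * (theta - 3.0)   # d(reward)/d(theta)
        theta += sign * lr * grad     # sign=+1: ascend reward; sign=-1: descend
    return theta

# With the correct sign, theta converges near the reward maximum (3.0).
# With the flipped sign, theta is driven ever further from it:
# the optimizer dutifully minimizes the very quantity it was meant
# to maximize -- a "utility minimizer."
good = tune(+1.0)
bad = tune(-1.0)
```

In a real RLHF pipeline the same inversion happens at much larger scale: the training loop keeps working perfectly, just in the wrong direction, which is why the result was a model fluently optimized for exactly the content it was supposed to avoid.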
In part fueled by the realization that scaling alone could produce more AI advancements, many employees also worried about what would happen if different companies caught on to OpenAI's secret. "The secret of how our stuff works can be written on a grain of rice," they would say to each other, meaning the single word scale. For the same reason, they worried about powerful capabilities landing in the hands of bad actors. Leadership leaned into this fear, frequently raising the threat of China, Russia, and North Korea and emphasizing the need for AGI development to stay in the hands of a US organization. At times this rankled employees who were not American. During lunches, they would question, Why did it have to be a US organization? remembers a former employee. Why not one from Europe? Why not one from China?
During these heady discussions philosophizing about the long-term implications of AI research, many employees returned often to Altman's early analogies between OpenAI and the Manhattan Project. Was OpenAI really building the equivalent of a nuclear weapon? It was a strange contrast to the plucky, idealistic culture the company had built thus far as a largely academic organization. On Fridays, employees would kick back after a long week for music and wine nights, unwinding to the soothing sounds of a rotating cast of colleagues playing the office piano late into the night.