OpenAI: The triumph of people power

Five days after being abruptly fired, Sam Altman is returning to his old job as boss of OpenAI, the developer of the chatbot ChatGPT.


The most talked-about emoji in my work chat so far today is the exploding head.


It is still unclear why he was fired in the first place. But OpenAI's board of directors discovered something that deeply troubled them, and they took the extraordinary step of ousting him, acting quickly and quietly and telling almost no one in advance.


In a statement, they implied that he had somehow not been honest with them and accused him of not being "consistently candid" in his communications.


Despite the gravity of all this, and despite the appointment of two new CEOs in roughly as many days, what followed was an explosion of support for Mr. Altman from within the company. And it worked.


Coded messages
Almost all of OpenAI's staff co-signed a letter saying they would consider resigning if Mr. Altman did not return. Chief scientist Ilya Sutskever, who sat on the board that took the initial decision, was among the signatories. He later wrote on X (formerly Twitter) that he regretted his part in Altman's removal.


“OpenAI is nothing without its people,” a number of employees posted on X, including then-interim CEO Mira Murati, and many colored hearts were shared alongside it. The whole atmosphere had the feel of last year's big Silicon Valley meltdown: the upheaval at Twitter (now X) that followed Elon Musk's takeover. Fired Twitter staff sent a coded message of their own at the time: a saluting emoji.


Sam Altman, on the other hand, didn't hang around. By Monday, he had accepted a new job at Microsoft, OpenAI's largest investor.


Several people have messaged me that this was effectively a Microsoft acquisition without an acquisition bid. Within two days, and via posts on the social network X, the Seattle giant had hired Altman and OpenAI co-founder Greg Brockman, and Microsoft's chief technology officer Kevin Scott said the company was open to anyone else who wished to leave OpenAI and join them.


But now he's back. Someone once said to me that Sam Altman could, Matrix-style, walk through the walls of big corporations if he wanted to. And that seems to be exactly what he has accomplished here.


The Root of It All
Why is this whole melodrama important?


We may be talking about the creators of pioneering technology, but there are two very human reasons why we should all care: money and power.


Let's start with money. OpenAI was valued at $80 billion last month. Investment is pouring in, but the overheads are high too. Someone once explained to me that every time a query is typed into ChatGPT, it costs the company "a few pence" in computing power.


Dutch researchers told me in October that if every Google search query cost as much as a chatbot query, even a company as wealthy as Google could not afford to run its search engine.
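To get a feel for why that claim is plausible, here is a back-of-envelope sketch in Python. The only figure taken from the article is the "a few pence" per query; the daily search volume and the exact pence value are illustrative assumptions, not reported numbers.

```python
# Back-of-envelope check of the researchers' claim.
# Assumptions (illustrative, not from the article):
#   - roughly 8.5 billion Google searches per day, a widely cited estimate
#   - 2 pence of computing power per query, taking the middle of "a few pence"

DAILY_SEARCHES = 8.5e9       # assumed daily Google search volume
COST_PER_QUERY_GBP = 0.02    # assumed compute cost per chatbot-style query, in pounds

daily_cost = DAILY_SEARCHES * COST_PER_QUERY_GBP
annual_cost = daily_cost * 365

print(f"Daily compute cost:  £{daily_cost / 1e6:,.0f} million")   # ~£170 million
print(f"Annual compute cost: £{annual_cost / 1e9:,.0f} billion")  # ~£62 billion
```

Even with these rough numbers, the annual figure lands in the tens of billions of pounds, on the order of Alphabet's entire yearly profit, which is exactly the researchers' point.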


So it's no wonder that OpenAI has set its sights on making profits and getting investors on its side.


And with money comes power. For all the popcorn-worthy drama, let's not forget what OpenAI is actually doing: shaping technology that has the potential to reshape the world.


The power to mess things up
AI is evolving rapidly and becoming ever more powerful, and the potential threat grows with it. In a very recent speech, Altman himself said that what comes out of OpenAI next year will make the current ChatGPT look like an "archaic relative" - and we already know that the existing model can pass the bar exam.


There aren't many people steering this unprecedented and disruptive technology, but Sam Altman is one of them. AI promises a smarter, more efficient future if things go right, and destruction if they go wrong. But as the past five days have shown, people remain at the heart of this innovation. And people still have the power to mess things up.


I'll leave you with my favorite comment about this whole debacle. “Those building [artificial general intelligence] cannot predict the outcome of their actions three days in advance,” quantum physics professor Andrzej Dragan wrote on X.