Guy Louzon - a blog about business
OpenAI: From a Dinner at Rosewood to the Most Consequential Company in the World

The saga of how a handful of idealists, a reluctant billionaire, and one very determined CEO built the company that lit the fuse on the AI revolution — and nearly burned themselves down in the process

Prologue: The Fear That Started Everything

It begins, as so many Silicon Valley origin stories do, with a dinner.

Sometime in 2015, Elon Musk sat across from Larry Page — co-founder of Google, and at the time, arguably the most influential figure in the technology world. The two had been friends for years. But this dinner, Musk later recounted, left him genuinely unsettled. He raised the alarm about artificial intelligence. About what it might do to humanity. About the need to slow down, to be careful, to think about who would be in control when these systems became truly powerful.

Page’s response, by Musk’s account, was essentially dismissive. He wasn’t worried. Progress was good. Intelligence, in whatever form it took, was worth pursuing. Musk recalled being called a “speciesist” for even suggesting that human consciousness deserved special protection.

Musk left that dinner scared. And when Elon Musk gets scared about something, he doesn’t just write op-eds. He builds companies.

The problem, as Musk saw it, was simple and terrifying: Google’s DeepMind was racing toward artificial general intelligence — the kind of AI that doesn’t just play chess or recognize cats in photos, but can outthink humans across every domain. And it was doing so inside a corporate structure beholden to shareholders, to ad revenue, to the quarterly clock. There was no one, in Musk’s view, building AGI with the explicit mandate of making it safe and beneficial for everyone. Google had absorbed DeepMind in 2014. Who would be the counterweight?


Part One: The Founders

Sam Altman — The Networker Who Became a Visionary

To understand OpenAI, you have to understand Sam Altman, because few founders are as inseparable from their companies as Altman is from OpenAI.

Altman grew up in St. Louis, Missouri, and was the kind of kid who taught himself to code at eight years old. He was gay in a conservative environment, and his experience of not fitting in gave him both a certain hunger and a detachment from conventional wisdom that would serve him well later. He enrolled at Stanford to study computer science, and lasted two years before dropping out to co-found a location-based social app called Loopt in 2005 — one of the very first apps on the iPhone.

Loopt didn’t exactly set the world on fire. It was acquired in 2012 by Green Dot Corporation for around $43 million — a solid but not spectacular exit. But what Altman gained from the experience wasn’t money. It was pattern recognition. He understood startups. He understood what it took to survive, to pivot, to hold a team together under pressure.

In 2011, Paul Graham invited Altman to become a part-time partner at Y Combinator, the legendary startup accelerator that had spawned Airbnb, Stripe, Dropbox, and Coinbase. Altman was a natural. By 2014, at just 28 years old, he was named President of YC — an almost absurdly young age to run the most important institution in startup-land.

What made Altman unusual wasn’t just his intelligence or his network, though both were formidable. It was his willingness to take ideas to their logical extreme and ask the questions that made other people uncomfortable. He had written blog posts warning about AI risk. He believed that the technology being developed in the labs of Silicon Valley had the potential to reshape civilization — for better or for worse. He wanted to be at the table when those decisions got made.

Ilya Sutskever — The Scientist’s Scientist

If Altman was the visionary-operator, Ilya Sutskever was the oracle.

Born in Russia and raised in Israel before immigrating to Canada, Sutskever landed at the University of Toronto, where he came under the tutelage of Geoffrey Hinton — the man often called the “Godfather of Deep Learning.” This was not an incidental relationship. Hinton’s group at Toronto was one of the small number of places in the world, in the early 2010s, that believed deep neural networks were the future of AI at a time when the rest of the field had largely written them off as impractical.

Sutskever co-authored the landmark 2012 paper introducing AlexNet — a deep convolutional neural network that shattered all previous records on the ImageNet image recognition benchmark. The win was so decisive, so far ahead of everything else, that it effectively ended a decade-long debate about the direction of AI research. Deep learning wasn’t a niche approach anymore. It was the approach.

Google hired Sutskever as part of the acquisition of Hinton’s startup DNNresearch, and he worked at Google Brain before Altman and Musk came calling. By 2015, Sutskever was one of the most sought-after researchers in the world. Getting him to leave Google for a brand-new nonprofit was, by any measure, a coup.

Greg Brockman — The Builder

Greg Brockman is the kind of person who studied mathematics at Harvard, transferred to MIT to keep studying mathematics, and then dropped out to become the first employee (and later CTO) of Stripe — a company that became one of the most valuable private tech companies in history. If you need infrastructure built, systems designed, and an organization held together while chaos reigns, you call Greg Brockman.

At OpenAI, he was the person who made things actually work. The org chart, the hiring, the technical architecture — the unglamorous but essential machinery of a research organization trying to do something unprecedented. His Stripe experience building scalable payment infrastructure translated directly into building resilient AI research infrastructure. He was the quiet engine under the hood.

The Supporting Cast

The founding team also included John Schulman, a researcher who would become one of the key architects of the reinforcement learning from human feedback (RLHF) technique that would eventually power ChatGPT. Wojciech Zaremba, a brilliant Polish-American researcher with a background in deep learning at Google and Facebook. Andrej Karpathy, who had just completed a PhD at Stanford and was already recognized as one of the most talented young AI researchers in the world. Durk Kingma, a co-inventor of the Variational Autoencoder. And Dario Amodei — more on him later.


Part Two: The Founding

The Rosewood Hotel Pitch

The dinner between Musk and Altman that eventually led to OpenAI’s founding reportedly took place at the Rosewood Hotel on Sand Hill Road — the symbolic epicenter of venture capital. Two of the most connected people in Silicon Valley, eating good food, talking about existential risk.

Their logic was straightforward, even if the execution would prove anything but. If AGI was coming — and both believed it was — then the question wasn’t whether to build it but who would. If Google was going to build it anyway, better for there to be an alternative: a well-funded, openly oriented research organization that wasn’t answerable to shareholders, that published its findings, and that had safety baked into its mandate from day one.

The name “OpenAI” reflected that founding ethos. Open. Transparent. Not locked behind corporate walls.

On December 11, 2015, OpenAI launched with a public blog post and a $1 billion funding pledge — a number that immediately generated enormous press. The donors included Musk himself, Altman, Reid Hoffman, Peter Thiel, Jessica Livingston, Amazon Web Services, and others. When Greg and Sam had initially planned to raise $100 million, Elon intervened via email: “We need to go with a much bigger number than $100M to avoid sounding hopeless… I think we should say that we are starting with a $1B funding commitment… I will cover whatever anyone else doesn’t provide.” The showmanship was vintage Musk.

Worth noting: only a fraction of that $1 billion was actually ever delivered. The pledge was more aspiration than commitment. But it worked as a signal — to researchers, to the media, and to the AI establishment — that this was a serious endeavor.

The Structure Question

From the beginning, OpenAI was constitutionally unusual. It was a nonprofit. Not a company looking to build products and return capital. This wasn’t just legal window dressing — it was a genuine statement about values. The board wasn’t accountable to investors. It was accountable, theoretically, to humanity.

This structure solved one problem and created a dozen others. It allowed OpenAI to recruit researchers who cared about mission over money. It gave it moral authority in a field increasingly anxious about its own implications. But it also meant that OpenAI couldn’t offer equity to employees, couldn’t easily raise the massive amounts of capital that cutting-edge AI research requires, and was perpetually vulnerable to being outrun by better-funded competitors.

The tension between the nonprofit ideals and the brutal financial realities of frontier AI research would define the company’s next decade.


Part Three: Early Days and First Breakthroughs

The Gym and the Games

OpenAI’s early work was genuinely exciting, even if its commercial implications were unclear. In 2016, the lab released OpenAI Gym — a toolkit for developing and comparing reinforcement learning algorithms. It was pure research infrastructure, the kind of thing that signals serious scientific ambition rather than product aspiration.

Around the same time, OpenAI’s team was using reinforcement learning to train agents to play video games — Atari games, eventually Dota 2 — at superhuman levels. The Dota 2 experiments were particularly dramatic: in 2017, an OpenAI bot defeated a professional Dota 2 player in a one-on-one match; by 2019, a team of five OpenAI bots called OpenAI Five defeated OG, the reigning world champion Dota 2 team, in a full five-on-five match.

These weren’t commercial products. They were proof of concept — demonstrations that reinforcement learning, scaled up, could produce genuinely superhuman performance in complex strategic environments. They were also extraordinary recruiting tools. If you were a young researcher who wanted to work on the hardest problems in AI, OpenAI was suddenly the most interesting place in the world.

GPT-1: A Language Model Learns to Read

In 2018, OpenAI published a paper introducing GPT — Generative Pre-trained Transformer. It was modest by later standards: 117 million parameters, trained on a large corpus of internet text. The core insight was that you could pre-train a language model on a massive amount of unsupervised text data — essentially teaching it to predict the next word in a sentence — and then fine-tune it on specific downstream tasks with relatively little labeled data.

The results were impressive for the time. GPT could answer questions, translate text, and summarize documents — without being explicitly trained on those tasks. It was a glimpse of something important: the idea that language models could develop general-purpose capabilities through scale.
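The pre-training objective itself — predict the next word given what came before — is simple enough to illustrate with a toy model. The sketch below uses a bigram counter as a deliberately tiny stand-in for a Transformer (the corpus and code are illustrative, not OpenAI’s):

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each word, which words follow it in the corpus."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent continuation of `word`, or None if unseen."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

corpus = "the model predicts the next word and the next word again"
counts = train_bigram(corpus)
print(predict_next(counts, "the"))   # "next" — the most common follower of "the"
print(predict_next(counts, "next"))  # "word"
```

A real GPT replaces the frequency table with a neural network conditioned on the whole preceding context, but the learning signal — next-token prediction over unlabeled text — is the same idea.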

But 2018 was also the year the internal fault lines began to crack open.


Part Four: Elon Departs — and the War Over Control

The Merger Ultimatum

By early 2017, the OpenAI leadership had come to a sobering realization. Building AGI wasn’t going to be a $100 million project, or even a $1 billion project. It would require vast quantities of compute — billions of dollars per year — far more than any of them, Elon included, believed a nonprofit could raise.

The conversation about how to solve this problem took an alarming turn. According to OpenAI’s own public account, Musk wanted to take control of the company — either to merge it with Tesla or to become its CEO himself. He believed, or at least argued, that he was the only person with the resources and the willpower to actually see the mission through.

Altman, Brockman, and Sutskever pushed back. Handing control of a nonprofit dedicated to beneficial AI to a single individual — even a brilliant and well-intentioned one — undermined the entire premise. A benevolent dictatorship is still a dictatorship.

Musk, unable to get what he wanted, departed from the OpenAI board in February 2018. The official explanation cited a potential conflict of interest with Tesla, which was developing AI for autonomous vehicles. The real explanation, according to people close to the situation, was the failed power struggle.

Musk later claimed he left because he believed OpenAI wasn’t making meaningful progress. OpenAI, in a remarkable 2024 public letter, painted a different picture entirely — sharing internal emails that showed Musk understood full well what the organization was doing and approved of its direction. As Ilya had told Elon: “As we get closer to building AI, it will make sense to start being less open. The Open in openAI means that everyone should benefit from the fruits of AI after its built, but it’s totally OK to not share the science…” to which Elon replied: “Yup.”

The departure was bitter, and it set the stage for a feud that would define the AI industry for years.

The Structural Pivot

With Musk gone and the funding problem unresolved, OpenAI made its most controversial move yet. In March 2019, it restructured into what it called a “capped profit” entity — a for-profit arm that could raise venture capital and offer equity to employees, but with investors’ returns capped at 100x their investment. The nonprofit OpenAI Inc. would remain the governing entity, with the for-profit as a subsidiary.

The reaction from the research community was swift and skeptical. A 100x return cap sounds restrictive until you realize that early Google investors made roughly 20x. Critics argued the cap was essentially meaningless — a fig leaf over a straightforward commercialization.

But the structure unlocked what came next: in July 2019, Microsoft announced a $1 billion investment in OpenAI. It was a landmark moment. One of the world’s largest technology companies was betting that this scrappy research lab — still mostly known for Dota 2 bots and language model papers — would be central to the future of computing.

The partnership included not just capital but access to Microsoft’s Azure cloud computing infrastructure. OpenAI needed compute like a fire needs oxygen. It had just secured a nearly unlimited supply.


Part Five: The Transformer Revolution and GPT-3

“Attention Is All You Need”

To understand what happened next, you need to understand the Transformer. In 2017, a team at Google Brain published a paper titled “Attention Is All You Need,” introducing a new neural network architecture that would change everything. Unlike previous architectures that processed text sequentially, Transformers processed all tokens in a sequence simultaneously, using a mechanism called “self-attention” to weigh the relevance of each word to every other word.

The Transformer was dramatically more parallelizable than what came before it, which meant it could be trained on much larger datasets using many more GPUs simultaneously. It was the architecture waiting for the moment when compute became cheap enough to exploit it fully. That moment was arriving fast.
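The self-attention step is compact enough to sketch in plain Python. This is a single attention head over toy vectors, without the learned query/key/value projections a real Transformer adds — a minimal sketch of the mechanism, not the full architecture:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def self_attention(tokens):
    """Each token's output is a weighted mix of ALL tokens, with weights
    given by scaled dot-product similarity — computed for every position
    at once, which is what makes the architecture so parallelizable."""
    d = len(tokens[0])
    outputs = []
    for q in tokens:  # the token being updated ("query")
        scores = [dot(q, k) / math.sqrt(d) for k in tokens]  # relevance to every token
        weights = softmax(scores)
        mixed = [sum(w * v[i] for w, v in zip(weights, tokens))
                 for i in range(d)]
        outputs.append(mixed)
    return outputs

# Three 2-d "token embeddings"; no sequential dependency between positions.
out = self_attention([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
```

Because each position attends to every other position independently, the whole sequence can be processed in one parallel pass — unlike recurrent models, which must walk the text token by token.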

OpenAI bet the company on the Transformer, and in 2020, it released GPT-3.

GPT-3: The Moment the World Noticed

GPT-3 contained 175 billion parameters — roughly 1,500 times more than GPT-1. The jump in capability was not linear. It was qualitative. GPT-3 could write poems, code in Python, answer questions about arcane historical facts, complete legal contracts, generate marketing copy, and hold something approximating coherent conversation — all without any task-specific training. You just described what you wanted in plain English, and it did it.

The demo videos spread like wildfire. A developer asked GPT-3 to generate a React component from a plain English description. Another asked it to explain quantum physics to a five-year-old. It could write convincing essays, plausible news articles, functional code. It could be, in other words, eerily useful.

OpenAI released GPT-3 not as a consumer product but through an API — a deliberate choice that reflected both caution about misuse and a bet on the B2B market. Developers could build applications on top of it. Companies could integrate it into their products. The API was OpenAI’s first real revenue stream.

But if GPT-3 suggested something transformative was coming, it also had obvious limits. It could confidently state false facts. It could be nudged into producing harmful content. It lacked consistent memory across a conversation. And it was expensive to run — OpenAI was losing money on every query.

The Departures Begin: Dario and Daniela Amodei

In 2021, something significant happened that barely registered publicly at the time but would later look like a pivotal moment in AI history. Dario Amodei, OpenAI’s VP of Research, and his sister Daniela Amodei, VP of Operations, left the company — along with approximately a dozen colleagues including Tom Brown, Chris Olah, Sam McCandlish, Jack Clark, and Jared Kaplan.

They didn’t go to Google or Microsoft. They founded Anthropic.

Their stated reason was a divergence in values: a growing concern that OpenAI was moving too fast, scaling too aggressively, and not investing sufficiently in safety research. Dario had been one of the most senior safety-minded voices at OpenAI. His departure was a signal — a very loud one — that not everyone inside the lab agreed with the direction.

Anthropic would go on to raise billions of dollars from Google and Amazon, build the Claude family of models, and become OpenAI’s most credible competitor on safety grounds. The fact that its founders came from OpenAI’s own ranks was a story the AI industry would tell for years.


Part Six: DALL-E, Codex, and the Road to ChatGPT

Image Generation and the Creative Revolution

While the language model work was progressing, OpenAI was also making breakthroughs in a different domain. In January 2021, the lab released DALL-E — a neural network that could generate images from text descriptions. Ask for “an astronaut riding a horse in photorealistic style” and get exactly that.

The name was a portmanteau of Salvador Dalí and WALL-E — charming and slightly surreal, which fit the technology perfectly. DALL-E demonstrated that the same large-scale generative model approach that worked for text could work for images. In 2022, DALL-E 2 arrived with dramatically higher resolution and more faithful image generation.

The creative implications were enormous and immediately controversial. Artists argued their work had been used without consent to train the models. Copyright lawyers began sharpening their pencils. The question of what it meant to be a “creator” in the age of generative AI suddenly became urgent and mainstream.

Codex: AI That Writes Code

Also in 2021, OpenAI released Codex — a model specifically fine-tuned for programming. Codex was the engine behind GitHub Copilot, a product launched in partnership with Microsoft (which owns GitHub) that could autocomplete entire functions, generate code from comments, and suggest fixes to bugs. GitHub Copilot became one of the most adopted developer tools in history, eventually reaching over a million paid subscribers.

Codex made the case that AI wasn’t just useful for creative writing and party tricks. It could be embedded into professional workflows in ways that genuinely improved productivity. Every engineer who used Copilot was, in some sense, living in an early version of the future OpenAI was trying to build.

The Alignment Breakthrough: RLHF

The technical breakthrough that made ChatGPT possible didn’t come from scale alone. It came from a training technique called Reinforcement Learning from Human Feedback, or RLHF — and it was pioneered, in significant part, by John Schulman, one of OpenAI’s co-founders.

The problem with pure language models, as GPT-3 demonstrated, was that they were good at predicting text but not at following instructions or behaving helpfully. They’d answer a harmful question just as readily as a benign one. They’d ramble when you wanted something concise.

RLHF solved this by introducing human judgment into the training loop. Human raters would evaluate model outputs, marking some responses as better than others. Those preferences were used to train a “reward model” — essentially teaching the AI what good responses look like from a human perspective. Then the language model was fine-tuned using reinforcement learning to maximize that reward signal.

It was unglamorous, labor-intensive work. But the results were dramatic. A model trained with RLHF was dramatically more helpful, more honest, and more aligned with human intentions than a base language model of equivalent size. InstructGPT, the direct predecessor to ChatGPT, was the first major demonstration of this technique at scale.
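The heart of the reward-model step is a pairwise preference loss: push the reward of the rater-preferred response above the rejected one. A toy version with a linear reward model follows — the feature vectors, learning rate, and training loop are illustrative, not OpenAI’s actual setup:

```python
import math

def reward(w, features):
    """Linear reward model: score = w · features."""
    return sum(wi * fi for wi, fi in zip(w, features))

def preference_step(w, chosen, rejected, lr=0.1):
    """One gradient-descent step on the pairwise loss
    -log(sigmoid(r_chosen - r_rejected))."""
    diff = reward(w, chosen) - reward(w, rejected)
    p = 1.0 / (1.0 + math.exp(-diff))  # model's probability the rater prefers `chosen`
    # dLoss/dw_i = (p - 1) * (chosen_i - rejected_i); descend the gradient.
    return [wi + lr * (1.0 - p) * (c - r)
            for wi, c, r in zip(w, chosen, rejected)]

# A rater preferred response A (features `chosen`) over response B.
w = [0.0, 0.0]
chosen, rejected = [1.0, 0.0], [0.0, 1.0]
for _ in range(100):
    w = preference_step(w, chosen, rejected)
print(reward(w, chosen) > reward(w, rejected))  # True
```

In full RLHF this learned reward function then drives a reinforcement-learning fine-tune of the language model itself — the part Schulman’s PPO algorithm handled in practice.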


Part Seven: ChatGPT — The Product That Changed Everything

November 30, 2022

On November 30, 2022, OpenAI launched ChatGPT as what it described as a “research preview.” The expectation, internally, was something like a few million users over several months — enough to gather useful feedback and improve the system.

What actually happened was the fastest consumer product adoption in history. ChatGPT reached 1 million users in five days. It hit 100 million users in two months — a milestone that had taken Instagram two and a half years, and TikTok nine months. By early 2023, it was generating more website traffic than Amazon.

The reason was simple: it worked. Not perfectly, not always reliably, but well enough, often enough, to be genuinely useful in a way that nothing had been before. Students used it to write essays. Programmers used it to debug code. Business people used it to draft emails. Therapists found it uncanny that their clients were confiding in it. Lawyers discovered it could summarize case law. Doctors found it could generate differential diagnoses.

More than the utility, there was something almost eerie about the experience. ChatGPT felt like talking to someone who had read everything. It was patient, articulate, and available at 2 AM. The experience cracked something open in the public imagination about what AI could actually be.

The pressure on the rest of the technology industry was immediate and immense. Google, which had been working on large language models for years, went into what insiders described as a code red. Microsoft moved to integrate OpenAI’s technology throughout its product suite, announcing a multi-billion dollar deepening of its investment. The AI race, which had been a concern mostly for researchers and a small community of observers, was suddenly front-page news.

The Business Model Takes Shape

ChatGPT launched free to the public, which created an immediate and obvious problem: the cost of running it was enormous. Microsoft Azure credits covered a significant portion of the compute, but the traffic was extraordinary. By some estimates, ChatGPT was losing somewhere between one and four cents per query — a number that looks manageable until you’re handling tens of millions of queries per day.

The solution was ChatGPT Plus: a $20/month subscription that offered access to faster response times, priority access during peak hours, and eventually, access to the most capable model versions. The subscription tier launched in February 2023 and was immediately popular — demonstrating that consumers would pay real money for AI assistance if the product was good enough.

This bifurcated model — free access to a capable baseline product, paid access to better and faster versions — became the template for how the AI industry would monetize. It was, in essence, the consumer SaaS playbook applied to AI: freemium at massive scale, with a premium tier that captures the most engaged users.

Enterprise followed. Companies could integrate OpenAI’s models through the API, paying per token. Large organizations could access custom enterprise agreements with data privacy guarantees, dedicated capacity, and fine-tuning options. By the end of 2023, OpenAI was generating over $1 billion in annualized revenue. By the end of 2024, that number had grown to $6 billion. In 2025, it hit $20 billion.
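Per-token billing makes API costs straightforward to reason about. A small sketch — the per-million-token prices here are hypothetical placeholders, not OpenAI’s actual rates, which vary by model and change over time:

```python
def query_cost(prompt_tokens, completion_tokens,
               price_in_per_m=2.50, price_out_per_m=10.00):
    """Cost in USD of one API call, given hypothetical per-million-token
    prices. Output tokens are typically billed higher than input tokens."""
    return (prompt_tokens * price_in_per_m
            + completion_tokens * price_out_per_m) / 1_000_000

# A 1,000-token prompt with a 500-token answer, at the placeholder rates:
cost = query_cost(1_000, 500)
print(f"${cost:.4f} per query")  # $0.0075 per query
```

Fractions of a cent per call sounds trivial — until, as with ChatGPT itself, the calls number in the tens of millions per day.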


Part Eight: GPT-4 and the Race to the Frontier

The Multimodal Leap

In March 2023, OpenAI released GPT-4 — and the capability jump from GPT-3.5 (the model powering the initial ChatGPT) was immediately apparent to anyone who used it. GPT-4 could reason more carefully, follow complex instructions more reliably, and handle much longer documents. It could also process images — the beginning of multimodal AI, where the same model handles text, pictures, code, and eventually audio and video.

GPT-4 scored in the 90th percentile on the bar exam. It aced Advanced Placement exams. It could interpret complex charts, describe photographs, and generate sophisticated code. OpenAI was careful not to claim it was “intelligent” in any philosophically rigorous sense, but the capabilities on display were hard to dismiss.

The benchmark that got the most attention was the bar exam score. Law professors tested it extensively. They found it could pass the exam, but also that it could hallucinate convincingly — producing authoritative-sounding legal citations that didn’t exist. The tension between impressive capability and dangerous unreliability would define the public conversation about AI for the next two years.

The Amodei Diaspora and the Safety Wars

As GPT-4 launched, the question of AI safety was moving from the fringes to the center of public debate. An open letter signed by more than a thousand technologists and researchers — including Elon Musk and Yoshua Bengio — called for a six-month pause on training AI systems more powerful than GPT-4. OpenAI declined to sign.

The safety debate was real, but it was also deeply intertwined with competitive strategy. Anthropic, Musk’s own xAI, Google DeepMind, and Meta’s AI research division were all racing to close the gap with OpenAI. The companies that argued loudest for safety were often the ones most motivated to slow down the market leader.

Inside OpenAI, the tension was no less acute. The “superalignment” team, co-led by Sutskever and Jan Leike, was tasked with solving the technical problem of aligning superintelligent AI systems. In May 2024, Chief Scientist Ilya Sutskever resigned and was succeeded by Jakub Pachocki. Co-leader Jan Leike also departed amid concerns over safety and trust. Leike’s departure statement was blunt: he believed OpenAI’s culture had shifted from safety-first to product-first.

In August 2024, John Schulman — who had spent nine years at OpenAI, co-invented RLHF, and was one of the company’s most respected researchers — left to join Anthropic. The defection was significant. This wasn’t an executive or a product manager. This was one of the intellectual architects of ChatGPT itself, choosing to work for a competitor. Greg Brockman went on extended leave around the same time, though he later returned to the company.


Part Nine: The Board Coup — Five Days That Shook Silicon Valley

Friday, November 17, 2023

Nothing in OpenAI’s history matched the sheer drama of five days in November 2023. It was the most extraordinary corporate governance crisis in Silicon Valley history — and that is saying something.

On Friday, November 17, four members of OpenAI’s board — Helen Toner, Ilya Sutskever, Adam D’Angelo, and Tasha McCauley — voted to fire Sam Altman. The statement was terse and vague: they had lost confidence in his leadership, citing a lack of “candor” in his communications with the board. No specific misconduct was alleged. No warning had been given.

Within hours, Greg Brockman resigned in solidarity.

The reaction was swift and chaotic. Microsoft, which had invested billions in OpenAI and deeply integrated its products with OpenAI’s models, was blindsided — reportedly informed just minutes before the public announcement. Satya Nadella, Microsoft’s CEO, moved quickly. He announced that Altman and Brockman would join Microsoft to lead a new AI research team.

Then something remarkable happened. Nearly 500 of OpenAI’s roughly 770 employees signed a letter threatening to follow Altman to Microsoft unless the board resigned and reinstated him. The signatories included, astonishingly, Ilya Sutskever — who had voted to fire Altman just days earlier and was now publicly expressing regret.

The board had fired a CEO and nearly destroyed a company. Within five days, it faced a stark choice: rehire Altman, or lose essentially everyone who made the organization worth anything. It rehired Altman.

On Day 5, Altman returned as CEO with a new board that included Bret Taylor (former Salesforce co-CEO) as chair and Larry Summers. The episode exposed, with brutal clarity, the fundamental contradiction at OpenAI’s core: a nonprofit board trying to govern one of the most valuable and commercially consequential companies in the world, without the institutional mechanisms to do so effectively.

The crisis cemented several things. It made Altman’s position within OpenAI essentially unassailable. It accelerated the move away from the nonprofit governance model. And it made the entire AI industry acutely aware that OpenAI was not a normal company — and that its abnormality was becoming untenable.


Part Ten: Musk’s Revenge and the Legal Wars

xAI and the Lawsuit

Elon Musk had not been quiet since his 2018 departure. He had tweeted skepticism about OpenAI’s direction, questioned its commitment to openness, and watched with growing frustration as the company he had co-founded became one of the most powerful in the world without him.

In 2023, he launched xAI — his own AI company — and in November of that year released Grok, a chatbot integrated with X (formerly Twitter). Then, in February 2024, he filed suit against OpenAI and Sam Altman, alleging breach of contract and violation of the company’s founding mission. His central argument: OpenAI had promised to be open and nonprofit, and had become closed and commercial. Musk claimed that, following its partnership with Microsoft, OpenAI had been transformed into a “closed-source de facto subsidiary” focused on maximizing profits rather than the public good.

In February 2025, Musk went further — submitting a $97.4 billion bid with a consortium of investors to effectively take control of OpenAI. The bid was publicly and immediately rejected by OpenAI’s board.

Altman’s response to Musk’s public provocations has ranged from terse to theatrical. When Musk accused him of fraud over the Stargate infrastructure deal, Altman responded with unusual sharpness: “You are mistaken, as you surely know. Do you want to come to visit the first site that is already underway? This is excellent for the country. I realize that what is good for the country is not always optimal for your companies, but in your new role, I hope that, for the most part, you will put America first.”

It was a pointed jab — Musk had just been appointed to a senior role in the Trump administration at the time. The subtext was hard to miss.


Part Eleven: The Restructuring — From Nonprofit to PBC

Shedding the Nonprofit Shell

The November 2023 board crisis made one thing unmistakably clear: the nonprofit governance structure that OpenAI had used since its founding was incompatible with the company it had become. You cannot govern a $90 billion commercial enterprise — one at the center of the most consequential technology race in history — with a small, unaccountable nonprofit board.

In October 2025, after months of negotiations with regulators and the California Attorney General, OpenAI completed its restructuring into a Public Benefit Corporation — a for-profit entity with a mandate to balance commercial interests against public benefit. The original nonprofit entity continues to exist and holds a minority stake in the new PBC.

The restructuring gave OpenAI the ability to raise capital at the scale its ambitions required. In 2025, SoftBank completed a $40 billion investment — the largest private funding round in history. OpenAI’s valuation reached $300 billion and continued climbing.

The move also cleared the runway for an eventual IPO. As of early 2026, OpenAI has not set a timeline, but the structural groundwork is in place.


Part Twelve: The Products, the Partners, and the Empire

The Ecosystem Expands

By 2025, OpenAI was no longer a research lab that had accidentally built a product. It was a platform company with an expanding universe of offerings.

ChatGPT had become a mass consumer product at a scale few could have predicted: as of December 2025, approximately 900 million weekly active users, up from 100 million in November 2023. The product had evolved from a simple text chatbot into a multimodal assistant capable of browsing the web, generating images, executing code, and holding real-time voice conversations.

GPT-5, released in 2025, represented another significant capability leap — more reliable, more capable of complex multi-step reasoning, and substantially faster than its predecessors. It became the backbone of the premium subscription tiers.

Sora, OpenAI’s text-to-video model, generated tremendous excitement and concern in equal measure when previewed in 2024. The ability to generate cinematic-quality video from a text description raised immediate questions about misinformation, deepfakes, and the future of the film industry. OpenAI took a measured rollout approach, limiting access and establishing usage policies.

DALL-E 3 became tightly integrated with ChatGPT, making image generation accessible to anyone with a conversation interface. Designers, marketers, and hobbyists found it transformative.

The OpenAI API served millions of developers and thousands of companies building on top of OpenAI’s models. It became one of the company’s most dependable and fastest-growing revenue streams, as enterprise customers integrated AI capabilities into their own products.

The Media Partnerships

Understanding the long-term game, OpenAI began aggressively pursuing content licensing deals — both to improve its training data and to build relationships with the media industry it was disrupting. Agreements with News Corp, Reddit, Vox Media, Axios, Condé Nast (covering Vogue, The New Yorker, Wired), Dotdash Meredith, and others gave OpenAI access to vast repositories of high-quality text. A landmark deal with The Walt Disney Company in December 2025 gave OpenAI access to Disney content for its Sora video platform.

These deals served a dual purpose: training data, yes, but also potential advertising relationships as OpenAI built out its commercial media footprint.

Stargate: The Infrastructure Bet

In January 2025, the Trump administration announced the Stargate Project — a $500 billion commitment to build AI infrastructure in the United States, with OpenAI as the central beneficiary. The announcement, made at the White House with Altman, SoftBank’s Masayoshi Son, and Oracle’s Larry Ellison present, signaled the extent to which OpenAI had become a matter of national strategic interest.

Just one day after Stargate was made public, Elon Musk called Altman a “fraudster,” claiming the project’s backers did not actually have the money to build it. Altman’s rejoinder was swift and public: the project, he insisted, was excellent for the country, and Musk, he pointedly noted, now held a government role.


Part Thirteen: The Advertising Gambit

From “I Hate Ads” to “Ads Are Here”

Few reversals in the history of technology have been quite as complete as Sam Altman’s position on advertising.

As recently as October 2024, Altman had been unambiguous: “I hate ads,” he said. “Ads plus artificial intelligence is something that is particularly unsettling to me. When ChatGPT writes me an answer, I don’t think I would like having to figure out who is paying for which part of it.” He had called advertising a “last resort” for OpenAI.

The last resort arrived about a year later.

The financial reality driving this reversal is stark. Despite achieving $12.7 billion in annual recurring revenue in 2025 — representing explosive growth from just $3.5 million in 2020 — the company posted cumulative losses exceeding $13.5 billion in the first half of 2025 alone. Deutsche Bank analysts characterized OpenAI’s situation bluntly: “No startup in history has operated with losses on anything approaching this scale. We are firmly in uncharted territory.”

Training frontier AI models is extraordinarily expensive. Serving hundreds of millions of users at scale is extraordinarily expensive. Competing with Google, Microsoft, Meta, and Amazon — all of which have dramatically larger cash flows — requires capital at a pace that subscriptions alone cannot provide.

In January 2026, OpenAI announced it would begin testing ads within ChatGPT in the coming weeks, initially for free-tier users in the U.S. and subscribers to its new low-cost $8/month “Go” plan. Subscribers on the Plus, Pro, Business, and Enterprise tiers would not see ads.

The format is deliberate. Ads will appear at the bottom of answers in ChatGPT when there’s a relevant sponsored product or service based on the current conversation. They will be clearly labeled and separated from the organic answer. Users can learn more about why they’re seeing a given ad, or dismiss it.

OpenAI was emphatic on the non-negotiables: ChatGPT’s responses will not be influenced by ads, and the company will “never” sell users’ data to advertisers. Ads will not be shown to users under 18, and will not appear near sensitive topics like health, politics, or mental health.

The competitive context for this move is significant. eMarketer projects AI-driven search advertising spending in the United States will surge from $1.1 billion in 2025 to $26 billion by 2029 — a market that didn’t meaningfully exist three years ago. OpenAI, with nearly a billion weekly users and an ad-free heritage, is positioned to capture a significant share of that market if it handles the transition carefully.

The irony of the moment was not lost on the industry. Anthropic ran Super Bowl ads parodying the idea that some AI companies would soon include advertising — showing glassy-eyed actors playing AI chatbots who would deliver advice alongside poorly targeted ads. Sam Altman called the ads “dishonest” and Anthropic an “authoritarian company.” The jabs landed because they contained a kernel of truth: OpenAI was doing exactly what Anthropic’s ads had predicted.

The tier separation — ads for free users, pristine experience for subscribers — mirrors the playbook of every platform that has successfully monetized at scale: Spotify, YouTube, Netflix (in its ad-supported era), LinkedIn. The question is whether users accustomed to an ad-free AI experience will accept sponsored content, or whether the backlash will drive them to competitors offering cleaner alternatives.


Epilogue: What OpenAI Became, and What It’s Becoming

In ten years, OpenAI went from a dinner at the Rosewood Hotel to a company worth hundreds of billions of dollars, serving nearly a billion users, and operating at the center of the most significant technological transformation since the internet.

The people who started it wanted to make AI safe and beneficial for humanity. Some of them still believe that’s what they’re doing. Others — Dario Amodei, John Schulman, Jan Leike — concluded that the commercial pressures had made that mission impossible to uphold at OpenAI, and left to try elsewhere. Elon Musk concluded that OpenAI had betrayed its founding principles and launched a competitor while suing his former collaborators.

Sam Altman, through a board coup, a near-collapse, and a complete corporate restructuring, is still running the company. He has been called a visionary, a con man, and every descriptor in between. He remains, by any measure, one of the most consequential figures in the history of technology — a man who, at 40, can claim more genuine influence on the direction of civilization than almost anyone alive.

The company he runs is no longer a nonprofit. It is no longer particularly “open.” The name has become a historical artifact — a reminder of the founding idealism that got transformed, by the brutal logic of competition and capital, into something more ambiguous and more powerful than its founders imagined.

That transformation is the story of AI itself. We wanted to build something beneficial, and we built something incomprehensible in its scale. We wanted it to be open, and it became a product. We wanted safety, and we got a race.

OpenAI is not the villain of that story, nor is it the hero. It is the protagonist — flawed, ambitious, brilliant, and deeply unsure of what it’s building or where it’s going.

As Altman himself said at a conference in 2024: “I think we’re at the beginning of something really extraordinary and really terrifying, and I don’t know how it ends.”

Neither does anyone else. But the dinner at the Rosewood has a lot to answer for.
