Earlier this year, founder-investor Sam Altman left his high-profile role as president of Y Combinator to become the CEO of OpenAI, an AI research outfit founded by some of the most prominent people in the tech industry in late 2015. The idea: to ensure that artificial intelligence is “developed in a way that is safe and is beneficial to mankind,” as one of those founders, Elon Musk, told the New York Times.
The move is intriguing for many reasons, including that artificial general intelligence, or the ability of machines to be as smart as humans, doesn’t exist yet, and even leading AI researchers are far from certain about when it will. Under Altman’s leadership, OpenAI, which was originally a nonprofit, has also restructured itself as a for-profit company with some caveats, saying that it “will need to invest billions of dollars in the coming years into large-scale cloud computing, attracting and retaining talented people, and building AI supercomputers.”
Whether OpenAI is capable of attracting so much funding is an open question, but our guess is that it will, if only on the strength of Altman himself, a force of nature who easily wowed the crowd during a lengthy sit-down with this editor Thursday night, in a conversation that covered everything from the evolution of YC to Altman’s current work at OpenAI.
At YC, for example, we discussed how moderation and “ramen profitability” were once the goals for graduates of the popular accelerator program, but that a newer goal seems to be to immediately raise venture capital, often tens of millions of dollars. (“If you could control the market (obviously, the free market is going to do its thing; it isn’t YC making companies increase the amount of money they raise or the valuations they command), I think it is, on net, bad for startups,” Altman told attendees at the small industry event.)
Altman was also candid when asked personal, at times corny, questions, even offering up a story about the strong relationship he has long enjoyed with his mother, who was in town for the event. Not only did he say that she remains one of the few people he “absolutely” trusts, but he acknowledged that over time it has become harder to get unvarnished feedback from people outside of that small circle. “You get to a point in your career where people are afraid of offending you or of saying something you might not want to hear. I’m definitely aware that things are filtered for me at this point, and I plan accordingly.”
Altman is certainly more sure of himself than most. That was evident not only in the way Altman ran Y Combinator for five years, essentially supersizing it over and over again, but also in the way OpenAI discusses its current thinking, which is no less bold. In fact, much of what Altman said Thursday night would be considered sheer insanity coming from someone else. Coming from Altman, it merely drew raised eyebrows.
Asked, for example, how OpenAI plans to make money (we wondered aloud whether it might eventually license some of its work), Altman replied that “the honest answer is that we have no idea. We have never made any revenue. We have no current plans to make revenue. We have no idea how we may one day generate revenue.”
Altman continued, “We’ve made a soft promise to investors that, ‘Once we build a generally intelligent system, we’ll basically ask it to figure out a way to make an investment return for you.’” When the crowd laughed (it wasn’t immediately obvious that he was serious), Altman himself offered that it sounds like an episode of “Silicon Valley,” but added, “You can laugh. It’s all right. But it really is what I actually believe.”
We also asked what it means that, under Altman’s leadership, OpenAI has become a “capped-profit” company, promising to give investors up to 100 times their return before handing over excess earnings to the rest of the world. We noted that 100x is a very high bar, so high that even most investors who back very young companies rarely see a 100x return. For example, Sequoia Capital, the only institutional investor in WhatsApp, reportedly saw a 50x return on the $60 million it had invested in the company when it was sold to Facebook for $22 billion, an exceptional outcome.
But Altman not only rejected the suggestion that “capped profit” is a bit of marketing brilliance, he doubled down on why the structure makes sense. Specifically, he said that the opportunity with artificial general intelligence is so incomprehensibly large that if OpenAI succeeds in cracking this particular nut, it could “maybe capture the light cone of all future value in the universe, and that’s not okay for one group of investors to have.”
He also said that future investors will see their returns capped at a lower multiple; OpenAI basically wanted to find a way to reward its earliest backers, given the risk they are taking.
Before we parted ways, we also shared with Altman several criticisms from AI researchers we had interviewed ahead of our sit-down, who complained that, among other things, OpenAI seeks attention for qualitative rather than foundational leaps over already-proven work, and that its mission of discovering a path to “safe” artificial general intelligence unnecessarily raises alarms and hinders their own research.
Altman absorbed and responded to each point. He didn’t reject them outright, either, saying of OpenAI’s alarmist bent, for example, that he has “some sympathy for that argument.”
However, Altman insisted that a better argument can be made for thinking, and for talking with the media, about the possible societal consequences of AI, no matter how naive that may strike some. “The same people who say OpenAI is scary or whatever are the same people who say, ‘Shouldn’t Facebook have thought about this before they did it?’ We’re trying to think about it before we do it.”
You can watch the full interview below. The first half of our chat focuses primarily on Altman’s career at YC, where he remains chairman. We begin discussing OpenAI in greater detail around the 26-minute mark.