OpenAI has been put back the way it was, but its governance will be very different, and the damage done to its ecosystem aspirations could greatly affect its long-term future.
- Altman and his crew have been reinstated at OpenAI and a new board has been formed, but how the non-profit part of OpenAI’s mission has changed remains unclear at this stage.
- OpenAI’s previous structure was a non-profit that controlled a for-profit subsidiary able to make money, and it is in this subsidiary that most of the outside investment has been made.
- The problem is that OpenAI’s mission to develop artificial general intelligence (AGI) has a rapacious appetite for compute resources, which is where the vast majority of the more than $11 billion that Microsoft has invested has been spent.
- This is where the non-profit and for-profit ideologies bump up against each other: Microsoft has a fiduciary duty to make money for its shareholders, and shoveling $11 billion into a black hole with no prospect of a return would be a breach of that duty.
- This is why there is an unusual arrangement in the for-profit subsidiary whereby Microsoft’s return is capped at 100x its investment; since 100x of more than $11 billion is over $1.1 trillion, the cap is, for all intents and purposes, no constraint at all.
- The problem is that the board that oversaw OpenAI (and the for-profit subsidiary) was supposed to care only about AI benefitting humanity, which also means reining in any AI that it thinks could trigger the machine takeover of the human race.
- I am pretty sure that this was the source of the conflict that led to the firing of Altman, but I suspect that it was rumors of a breakthrough in AI that were the catalyst for the recent events.
- This “breakthrough” in AI, which has been termed Q*, appears to have been enough to make the board nervous, and it may have something to do with reasoning.
- I suspect that this “breakthrough” will be an enhancement of GPT models that makes them appear to be better at reasoning.
- So far, I have seen no evidence whatsoever that any deep learning system is capable of reasoning.
- Instead, what they are very good at is learning from examples and then applying that learning in a controlled setting.
- The minute the setting becomes uncontrolled, deep learning systems go off the rails: chatbots start making things up (hallucinating), and self-driving systems make horrible errors on the road that force the humans to take over.
- This is because they have no causal understanding of the tasks that they are performing; they understand only the correlations.
- If this “breakthrough” involves reasoning and is real, then this would represent a step along the way to AGI.
- However, all of the evidence I have seen suggests that while these machines can simulate reasoning, they fall over the minute they are put to a real test on data that they have not seen before (the short sketch after this list illustrates the failure mode).
- This would also not be the first time that a heralded breakthrough from OpenAI turned out to be a red herring (the robotic Rubik’s Cube solver, for example).
- Hence, I suspect that all of the fuss about a robot apocalypse may have damaged OpenAI’s long-term outlook and greatly aided its competitors.
- OpenAI launched its play for the AI ecosystem just this month, and for it to succeed, developers need to have complete confidence in OpenAI as a going concern, since they will be basing their apps and services on its foundation models or on GPT itself.
- The recent antics have shattered that confidence, and OpenAI will now have to work much harder to convince developers that it will be around for the long term.
- To make matters worse, it will now be much easier for rivals to lure developers, meaning that the whole ecosystem proposition has taken a large hit.
- OpenAI is out of the woods and has a future, but its valuation and the prospect of dominating the AI ecosystem remain in disarray.
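To make the correlation-versus-causation point concrete, here is a minimal sketch of my own in Python; it has nothing to do with Q* or any OpenAI system, and the data and model are purely illustrative. A small model fitted to examples from a narrow range looks very capable on similar data and falls apart the moment it is asked about data it has never seen.

```python
# A minimal, hypothetical sketch: a model that learns only correlations from
# examples looks impressive on data resembling its training set, yet fails
# badly on data it has never seen before.
import numpy as np

rng = np.random.default_rng(0)

# "Controlled setting": training examples drawn from a narrow slice of the world.
x_train = rng.uniform(0.0, np.pi, 200)
y_train = np.sin(x_train) + rng.normal(0.0, 0.05, x_train.shape)

# The model memorises the correlation between x and y as a polynomial fit;
# it has no notion of *why* y follows x (no causal model of the sine wave).
coeffs = np.polyfit(x_train, y_train, deg=5)
model = np.poly1d(coeffs)

def report(label, x):
    error = np.abs(model(x) - np.sin(x)).mean()
    print(f"{label:>16}: mean absolute error = {error:.3f}")

# In-distribution: looks accurate.
report("seen-before data", rng.uniform(0.0, np.pi, 200))

# Out-of-distribution: the same model "falls over" on unseen territory.
report("never-seen data", rng.uniform(2 * np.pi, 3 * np.pi, 200))
```

The failure mode is the same in spirit as a chatbot hallucinating or a driving system misreading an unfamiliar scene: the learned correlations simply do not carry over to inputs outside the training distribution.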
(This guest post was written by Richard Windsor, our Research Director at Large. This first appeared on Radio Free Mobile. All views expressed are Richard’s own.)