OpenAI's commercial ambitions jeopardize the future of AI

Trust must be earned and is easily lost. Anybody working in the AI space knows we are still in trust-building mode, with little currency to date. Hard problems around explainability and bias remain unsolved, and large-scale misuse is possible and hard to contain (e.g. deepfakes, fake news, and auto-generated messages on social platforms). For these reasons, AI developers must be hyper-diligent custodians of any new innovation. Product decisions must favor building trust in AI, not putting it in jeopardy.

Jerome Pesenti, VP of AI at Facebook, summarized it well earlier this week on Twitter, saying:

It's in this light that I view OpenAI's commercialization of its GPT-3 language generation model with trepidation.

OpenAI is in the process of making the unique transition from a research-based AI non-profit to a "limited" for-profit enterprise. The MIT Technology Review did a fantastic job documenting many of the growing pains this transition has created.

OpenAI's shift in mentality can be seen clearly in the markedly different releases of the GPT-2 language generation model last year and of GPT-3 this year. Little more than a year ago, GPT-2 was deemed "AI that's too dangerous to release." This year, however, the far more capable GPT-3 is being put immediately into the hands of developers.

The statements around GPT-2 last year were sensational. Talk of technology that "poses a threat to humanity" prompted writers to declare that the singularity was nigh and that AI was a danger to society. None of this helped build trust in the technology.

Now, this year, OpenAI has again stoked public sentiment with another sensational display. By releasing GPT-3 to a limited set of developers, it gave the public a tantalizing glimpse of the future. Twitter was quickly flooded with incredible examples of what language generation can do, including:

Generating code based solely on a written description

Or creating a UI in Figma by, again, just describing what you want it to look like

Others showed the tech writing poems in the style of your favorite poet, or producing deep philosophical statements.

If you didn't know better, and most people don't, you'd see these examples and think AI can now reason like a human, or that it's only a matter of time before human jobs are automated away en masse. Neither is true.

Consequently, the hype reached a point where OpenAI's CEO needed to jump in and tell the public that:

Then, four days later, OpenAI tweeted the following:

Not anticipating the public's reaction, and then having to reassure it days later that you have everything under control, isn't a good sign. OpenAI intentionally released the GPT-3 API to see how it would be used, and it knew developers would build varied and interesting prototypes with it.

They knew this because GPT-3's significance is its ability to perform many different language tasks with almost no task-specific training. This is why developers have been able to produce so many different examples of its potential in such a short period of time. Previously, each specific task, such as translation or summarization, required a large dataset of examples (thousands to tens of thousands) to produce reasonable results; GPT-3 needs only a handful of examples written directly into the prompt. It is also trained on an enormous swath of the internet, which presumably gives it the ability to do everything from generating code to writing poetry.
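To make "almost no training" concrete, here is a minimal sketch of few-shot prompting, assuming access to the GPT-3 beta API through OpenAI's Python client of the time; the engine name, sampling parameters, and example prompt are illustrative assumptions, not a definitive recipe.

```python
# Sketch of few-shot prompting against the GPT-3 beta API (assumed access).
# The "training data" is just a handful of examples written into the prompt --
# no fine-tuning and no dataset of thousands of labeled pairs.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder credential

prompt = (
    "Translate English to French.\n"
    "English: The weather is nice today.\nFrench: Il fait beau aujourd'hui.\n"
    "English: Where is the train station?\nFrench: Où est la gare ?\n"
    "English: I would like a coffee, please.\nFrench:"
)

response = openai.Completion.create(
    engine="davinci",   # base GPT-3 engine exposed in the beta (illustrative)
    prompt=prompt,
    max_tokens=30,
    temperature=0.3,
    stop=["\n"],        # stop at the end of the generated translation
)

print(response.choices[0].text.strip())
```

Swapping the prompt is enough to switch tasks, from translation to summarization to code generation, which is exactly why so many varied demos appeared within days of the release.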

Productizing any model with the internet as its corpus is concerning because of the risk of overt bias and misuse.

With all of this in mind, is it responsible to commercialize technology, even if only in a limited and controlled way, that you know is flawed and can be weaponized? Is this being a diligent custodian of trust?

It's not.

OpenAI is pushing the bounds of NLP, which is great. It's also good that they're putting safeguards and guidelines in place, but these are not foolproof. There's no way OpenAI can envision all the ways people might use such a system. Furthermore, developers will continue to demonstrate and publicize its potential for bad outcomes, feeding the concerns the general public already has about AI.

If optimizing for trust, a model such as GPT-3 would remain in research until the proper controls were in place to release it more broadly. This is likely the approach the non-profit OpenAI would have taken. It would have known that it's too risky to put powerful AI on the market first and only then react and refine the processes for controlling it.

OpenAI needs to do a better job of ushering in its innovations and communicating with the public. If it doesn't, its commercial ambitions will jeopardize not only the potential of its research, but the very future of AI.