Microsoft’s interest in OpenAI became clear when it invested $1 billion in the startup in July 2019. OpenAI’s Generative Pre-trained Transformer 3 (GPT-3) is an autoregressive language model that uses deep learning to produce human-like text. The pressure to monetize AI research can, and likely will, lead to some serious human follies.
At that time, Microsoft and OpenAI announced a new partnership to build artificial general intelligence (AGI) capable of tackling more complex tasks than today’s narrow AI. OpenAI claims to want to help discover and enact the path to safe artificial general intelligence, which bodes well for Microsoft’s own AI for Good mandate.
However, GPT-3 is not without controversy. OpenAI started with a very idealistic vision that has quickly become a little suspect. The way this scales could be problematic for the written word online. The largest version of GPT-3 has 175 billion parameters, the learned numerical weights that help the model make more precise predictions about what comes next; GPT-2 had 1.5 billion.
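To make "autoregressive" concrete, here is a deliberately tiny sketch, not GPT-3's actual architecture: the model repeatedly predicts the most likely next word given the words it has generated so far. In the toy below, bigram counts over a made-up corpus stand in for the 175 billion learned parameters, purely for illustration.

```python
from collections import Counter, defaultdict

# Toy stand-in for an autoregressive language model: bigram counts
# play the role that GPT-3's 175 billion learned parameters play.
corpus = "the cat sat on the mat because the cat sat on the rug".split()

# "Training": count which word follows which.
next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def generate(prompt, steps=4):
    """Autoregressive decoding: each new word is conditioned on the
    previously generated words (here, just the last one)."""
    words = prompt.split()
    for _ in range(steps):
        candidates = next_counts.get(words[-1])
        if not candidates:
            break  # dead end: no observed continuation
        words.append(candidates.most_common(1)[0][0])  # greedy choice
    return " ".join(words)

print(generate("the"))  # -> the cat sat on the
```

GPT-3 does the same loop, except it conditions on the entire preceding context with a transformer over subword tokens, and samples rather than always picking the single most likely continuation.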
Microsoft also has a new supercomputer in the works to power OpenAI. Microsoft noted that the supercomputer was built “exclusively” for OpenAI. OpenAI’s stated mission is to ensure that artificial general intelligence (AGI), which it defines as highly autonomous systems that outperform humans at most economically valuable work, benefits all of humanity.
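The announcements never quantify why a dedicated supercomputer is needed, but back-of-the-envelope arithmetic makes it clear. Assuming 2 bytes per parameter (16-bit floats, an assumption for illustration), merely storing GPT-3's 175 billion weights takes roughly 350 GB, far beyond any single 2020-era GPU:

```python
# Back-of-the-envelope: why 175 billion parameters need a cluster.
# Assumes 2 bytes per parameter (16-bit floats); training needs several
# times more memory for gradients and optimizer state, so this is a floor.
params = 175e9
bytes_per_param = 2
weights_gb = params * bytes_per_param / 1e9    # gigabytes just for the weights
gpu_memory_gb = 32                             # a high-end 2020-era GPU
gpus_needed = -(-weights_gb // gpu_memory_gb)  # ceiling division

print(f"{weights_gb:.0f} GB of weights -> at least {gpus_needed:.0f} GPUs")
# -> 350 GB of weights -> at least 11 GPUs
```

And that is only inference; training multiplies the memory and adds the high-bandwidth networking between nodes that Microsoft's announcement alludes to.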
However, artificial intelligence has not been well regulated, as we can see in the current algorithmic internet of 2020. What happens when those algorithms continue to evolve beyond our legal, regulatory and human capacity to understand them? OpenAI has made a name for itself by training very large models, and the point at which this work turns from research into commercial application should concern us.
Microsoft announced yesterday that it will exclusively license GPT-3, one of the most powerful language-understanding models in the world, from AI startup OpenAI. What happens to the internet in a world of computer-generated, human-like text? We’ve already seen what happens when for-profit algorithms are let loose on social networks.
Microsoft is teaming up with OpenAI to exclusively license GPT-3, allowing it to leverage OpenAI’s technical innovations to develop and deliver advanced AI solutions for its customers (think Azure), as well as create new solutions that harness the power of advanced natural language generation. But with innovation come big responsibilities, and AI brings a great deal of uncertainty to the world that, without better regulation, will pose real risks.
Microsoft recognizes the enormous scope of commercial and creative potential that can be unlocked through the GPT-3 model, with genuinely novel capabilities, most of which we haven’t even imagined yet. Yet it fails to mention any of the dangers in what that technology could become. In an age where companies call data the new oil and AI the “invention of fire” moment in a technological society, I think we need to imagine how this technology will actually be used. We should not simply glorify the business aspects and praise AI blindly.
The OpenAI that Elon Musk helped found has more or less already gone astray. Such is the power of the profit motive to commercialize AI. In February 2020, Elon Musk criticized OpenAI, one of the world’s top AI research organizations, saying it lacks transparency and that his confidence in its technology’s safety is “not high.” Given huge funding by Microsoft, AI as a service is becoming one of the on-ramps for cloud customers.
When you consider the psycho-babble of the Singularity, OpenAI and DeepMind bear watching. OpenAI is racing to be the first to build a machine with the reasoning powers of a human mind. Alphabet’s pressure on DeepMind to “monetize” must be extreme. And what is this pushing companies like Baidu, Huawei and even Apple to do in order to keep up?
GPT will evolve. Transparency in AI research is not taking place. Global AI regulations that can protect humanity in the 21st century will need to be formed. Otherwise AI could become as existential a threat to humanity as climate change. Few in 2020 consider that possibility seriously yet. Every blog post by these companies reads like a triumphant PR piece, without any serious macro-level view of what these tools will become.
We need to be more honest about the technologies we are developing. Of course Google, Facebook and Microsoft want to tell us AI will be great for us. GPT-3 will hopefully directly aid human creativity and ingenuity in areas like writing and composition, describing and summarizing large blocks of long-form data (including code), and converting natural language to another language. The possibilities are limited only by the ideas and scenarios that we bring to the table. But what are the risks? What will the internet become? What of the journalists, the media, the content creators of this world?
AI has as much chance of being weaponized as it does of empowering the human condition. If the history of the internet has taught us anything, it’s that. We’ve created a world that’s more artificial but not necessarily more intelligent. GPT-3 is incredible, but the fact that Microsoft and OpenAI don’t even know what it could become is also frightening.
OpenAI is acting like it’s the pioneer of beneficial AGI. What could possibly go wrong? Microsoft has developed its own family of large AI models, the Microsoft Turing models, which it has used to improve many different language-understanding tasks across Bing, Office, Dynamics and other productivity products. If Microsoft eventually acquires OpenAI, it won’t just make Sam Altman richer; it will bring new products to market that could change the world as we know it.
When you say you have exclusive rights to a groundbreaking technology that just a few years ago was considered too dangerous to release, you aren’t just raising eyebrows. You are participating in an AI arms race that could end very badly. Clearly, Microsoft plans to leverage the capabilities of GPT-3 in its own products, services and experiences, and to continue to work with OpenAI to commercialize the firm’s AI research.
OpenAI has no idea what its technology will finally be used for. Silicon Valley having mission statements that contradict how it operates is not new. If you say you prioritize transparency and then don’t, how are we supposed to trust the AI that you have created? Does this sound like a transparent company to you? How much longer will we be duped by Silicon Valley propaganda? Something has to change, or else AI will eventually catch up with our lack of integrity.
Training massive AI models requires advanced supercomputing infrastructure, that is, clusters of state-of-the-art hardware connected by high-bandwidth networks. Microsoft is only too happy to enable OpenAI to complete its manifest destiny. However, it may be time we stop pretending AI will democratize opportunities, create a fairer version of capitalism or make a healthier world for people.