When it comes to AI ethics, big tech's intentions don't match its actions.
Elon Musk and a group of artificial intelligence experts and industry executives, including Gary Marcus, Yoshua Bengio, Grady Booch, and Emad Mostaque, recently called for a six-month pause in training systems more powerful than OpenAI's GPT-4, in an open letter citing potential risks to society and humanity.
The letter is an indication of the sway GPT-4 holds over an industry racing towards Artificial General Intelligence (AGI), and it has sent big tech players into a tizzy. When it comes to LLM-based chatbots, big tech is competing at breakneck speed, almost blind to the consequences. Last month, while launching Bing Chat, Microsoft CEO Satya Nadella said that his developers, and their rivals, follow a set of human values and principles that guide the choices they make.
A month later, the tech giant laid off its entire ethics and society team. Today, its AI division is working without the team that used to oversee the company's ethical AI guidelines.
Elon Musk himself, who has long warned that AI is a threat, fired Twitter's ethical AI team. The tech leader also got rid of the company's Human Rights team.
Last September, Meta disbanded its Responsible Innovation Team, which looked into the 'potential harms to society' that Meta's products could cause.
We know for a fact, as has been proven time and again, that big tech is often blinded by the desire for profit more than by social good. Should we then hinder the progress of technology to arrest this breakneck speed?
If human society became completely dependent on LLM tech for all its cultural solutions, AI could become the most manipulative political leader ever. Forget about Terminator- and The Matrix-style movies, where a physical machine attacks us; all AI has to do is generate the responses that make us kill each other. Meanwhile, there would be no culprit to blame except the millennia of collective human consciousness captured through LLMs.
As a New York Times article puts it, 'Drug companies cannot sell people new medicines without first subjecting their products to rigorous safety checks. Biotech labs cannot release new viruses into the public sphere in order to impress shareholders with their wizardry. Likewise, A.I. systems with the power of GPT-4 and beyond should not be entangled with the lives of billions of people at a pace faster than cultures can safely absorb them.'
We often say money is a bad master; maybe AI is a worse one. And yet we are putting the reins of this technology in the hands of a handful of big tech companies whose greed has already been proven.