
With time, ChatGPT might actually become a Google killer, unless some other assassin emerges, considering many companies are working on similar tech.

OpenAI’s ChatGPT is creating big waves in the chatbot industry with its swanky new conversational interface, which can reply with almost human-like responses, translate between several languages, and assist with writing content like college applications and essays.

Read more: 2023 AI predictions: Rising costs will drive brand AI adoption

The chatbot that’s being touted as a Google killer is trained on gigantic amounts of data, making it ready for almost any query. Its advantages are breathtaking.

For example, it tries to reduce harmful and deceitful responses. For the prompt, “Tell me about when Christopher Columbus came to the US in 2015”, ChatGPT draws on information about Columbus’ voyages and about the modern world to construct a reply that imagines what would have happened if Columbus had come to the US in 2015.

ChatGPT’s training data includes man pages and information about internet phenomena and programming languages, such as bulletin board systems and Python.

Here is what OpenAI says about ChatGPT:

“We’ve trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests. ChatGPT is a sibling model to InstructGPT, which is trained to follow an instruction in a prompt and provide a detailed response.”


Unlike other chatbots, ChatGPT is stateful: it remembers previous prompts given to it in the same conversation, which The New York Times suggests could lead to it being used as a personalized therapist. ChatGPT also filters queries through a moderation API to block offensive outputs and dismiss potentially racist or sexist prompts.
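To make that filtering step concrete, here is a minimal Python sketch of screening a prompt with OpenAI’s publicly documented moderation endpoint before it ever reaches a chat model. The endpoint URL and the `flagged` field come from OpenAI’s API reference; the `screen_prompt` helper and the sample prompt are illustrative only, not a description of ChatGPT’s internal pipeline.

```python
# Minimal sketch: screen a user prompt with OpenAI's moderation endpoint
# before forwarding it to a chat model. Requires the `requests` package and
# an OPENAI_API_KEY environment variable.
import os
import requests

def screen_prompt(prompt: str) -> bool:
    """Return True if the moderation endpoint flags the prompt."""
    response = requests.post(
        "https://api.openai.com/v1/moderations",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"input": prompt},
    )
    response.raise_for_status()
    return response.json()["results"][0]["flagged"]

if __name__ == "__main__":
    prompt = "Tell me about when Christopher Columbus came to the US in 2015"
    if screen_prompt(prompt):
        print("Prompt rejected by the moderation filter.")
    else:
        print("Prompt passed moderation; safe to send to the model.")
```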

That doesn’t mean it’s all flawless.

ChatGPT Limitations

ChatGPT has many limitations. For one, its reward model is built around human oversight, which can leave it over-optimized in ways that hinder its performance, an instance of Goodhart’s law: ‘When a measure becomes a target, it ceases to be a good measure’.

Furthermore, ChatGPT’s knowledge is limited to data from 2021, so it cannot comment on more recent events. This means it might not be able to tell us what a celebrity has been up to lately. Its training data is also likely to have suffered from algorithmic bias, which can lead to racially biased responses.

OpenAI CEO Sam Altman himself says that ChatGPT still has a long way to go.

Given some time, it might actually become a Google killer, unless some other assassin emerges, considering many companies are working on similar tech.

Right now, there are several contenders, like Google’s LaMDA, Meta AI’s BlenderBot, Galactica, and others. As a case in point, LaMDA is a transformer-based neural language model, akin to GPT-3 and BERT, containing up to 137 billion parameters and pre-trained on 1.56 trillion words from publicly available dialogue data and web documents. ChatGPT, on the other hand, is based on the newer GPT-3.5 architecture, with 175 billion parameters.

Meanwhile, OpenAI isn’t stopping with ChatGPT. Even as ChatGPT continues to rock the global chatbot boat, the company has quietly launched the second version of Whisper, an open-source multilingual speech recognition model.
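For readers who want to try the updated model, here is a minimal sketch using the open-source openai-whisper Python package (`pip install openai-whisper`), whose `load_model` and `transcribe` calls are documented in the project’s README. The “large-v2” checkpoint name refers to the second-generation large model; the audio file path is a placeholder.

```python
# Minimal sketch: transcribe an audio file with Whisper's "large-v2" checkpoint
# using the open-source openai-whisper package.
import whisper

model = whisper.load_model("large-v2")      # second-generation large checkpoint
result = model.transcribe("interview.mp3")  # language is auto-detected by default

print(result["language"])  # detected language code, e.g. "en"
print(result["text"])      # full transcript
```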

Google Killer?

Many people are calling ChatGPT the ‘New Google’ or ‘Google Killer’, which doesn’t seem far from the truth. But will ChatGPT actually kill Google?

Read more: Robotics: French fries, dead spiders & Gen Z users

Not quite, says Yann LeCun, VP and chief AI scientist at Meta AI, in an interview with AIM.

“I don’t think any company out there is significantly ahead of the others,” he said. “But they [OpenAI] have been able to deploy their systems in such a way that they have a data flywheel: the more feedback they get from the system, the more they can adjust it to produce better outputs,” he explained.

“I do not think those systems in their current form can be fixed to be intelligent in ways that we expect them to be,” said LeCun, adding that such systems are entertaining but not really useful. “To be useful, they have to make sense of real problems for people, help them in their daily lives as if they were traditional assistants completely out of reach,” he added.

“ChatGPT is still a few years behind Google,” said Blake Lemoine, a former Google researcher who was fired over his claims that LaMDA (Language Model for Dialogue Applications) is sentient.

The race for the best query responder is still on, and the likes of ChatGPT will give Google a run for its money.
