ChatGPT developers warn that AI could cause human extinction; specialist opinion is divided
Sam Altman, CEO of OpenAI (the creators of ChatGPT), Demis Hassabis, CEO of Google DeepMind, and Dario Amodei, CEO of the AI safety firm Anthropic, are among the 350 people who have signed the declaration. The statement says that "mitigating the risk of extinction from AI should be a global priority" alongside other societal-scale hazards such as pandemics and nuclear war.
STATEMENT IS UNCLEAR
Independent specialists and industry insiders have cautioned that the latest warning needs to be viewed in context, questioning the ambiguity of the abrupt and unusual message. Prof. Nello Cristianini, who teaches artificial intelligence at the University of Bath in the UK, said that while a statement like this is undoubtedly well-intended, it is not ideal because it makes no mention of the exact scenario that could lead to the extermination of 8 billion people. He also emphasised that the responsibility for clarity lies with the signatories, who should spell out the precise sequence of steps they foresee leading to such an event.

Other experts questioned the motivation behind the statement, noting that it is unusual for technology industry leaders to push for more government oversight of their own commercial endeavours; special pleading of an anti-competitive kind, they warned, should be watched out for.

Artificial intelligence research is advancing quickly, and as the technology develops it will find many possible applications. It also, however, has the potential to cause real harm to humanity.