The tech ethics group Center for Artificial Intelligence and Digital Policy is asking the U.S. Federal Trade Commission to stop OpenAI from issuing new commercial releases of GPT-4, which has wowed some users and caused distress for others with its quick and human-like responses to queries.
In a complaint filed with the agency on Thursday and posted on the group’s website, the Center for Artificial Intelligence and Digital Policy called GPT-4 “biased, deceptive, and a risk to privacy and public safety.”
OpenAI, which is based in California and backed by Microsoft Corp. (MSFT.O), unveiled the fourth iteration of its GPT (Generative Pre-trained Transformer) AI program in early March. The program has excited users by engaging them in human-like conversation, composing songs and summarizing lengthy documents.
The formal complaint to the FTC follows an open letter signed by Elon Musk, artificial intelligence experts and industry executives calling for a six-month pause in developing systems more powerful than GPT-4, citing potential risks to society.