Paper clips, parrots and safety vs. ethics
OpenAI CEO Sam Altman recently testified at a Senate hearing in Washington, D.C., to discuss the potential risks of artificial intelligence. Altman emphasized the importance of regulating so-called “frontier models,” which are advanced AI systems that analyze large amounts of data, such as OpenAI’s GPT-4. Altman and other industry leaders at companies like Google DeepMind and well-capitalized startups are mostly concerned about “AI safety,” or the possibility of building an unfriendly artificial general intelligence (AGI) with unimaginable powers. Others in the technology industry and politics, however, are more concerned with “AI ethics,” focusing on current harms and the need for transparency around how AI systems collect and use data. The debate around AI has given rise to its own lingo and concepts, including those related to explainability, guardrails, and emergent behavior.
FAQs:
What is AGI?
AGI stands for “artificial general intelligence,” a concept referring to an advanced AI that can do most things as well as or better than most humans, including improving itself.
What are frontier models?
Frontier models are advanced AI systems that analyze large amounts of data, such as OpenAI’s GPT-4.
What is AI safety?
AI safety refers to concerns among industry leaders about the possibility of building an unfriendly AGI with unimaginable powers, and the need for governments to regulate development and prevent an untimely end to humanity.
What is AI ethics?
AI ethics refers to concerns about current harms from AI systems, the need for transparency around how they collect and use data, and the importance of regulating AI in areas that are subject to anti-discrimination law like housing or employment.
What is explainability in AI results?
Explainability in AI results refers to the ability of researchers and practitioners to trace the exact numbers and sequence of operations that a large AI model uses to derive its output. This matters for determining whether AI systems have inherent biases.
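To illustrate what full explainability looks like, here is a minimal sketch of a toy linear scoring model in which every contribution to the output can be printed and inspected. By contrast, a frontier model such as GPT-4 spreads its decision across billions of parameters, which is why researchers struggle to point to an equivalent trace. The feature names, weights, and applicant values below are illustrative assumptions, not data from any real system.

```python
# Toy "explainable" model: a linear scorer whose output can be traced
# exactly to its inputs and weights. Feature names, weights, and values
# are illustrative assumptions, not data from any real system.
feature_names = ["income", "debt", "years_employed"]
weights = [0.4, -0.7, 0.2]
bias = 0.1

applicant = [0.8, 0.3, 0.5]  # normalized feature values for one applicant

# Each feature's exact contribution to the final score is visible.
contributions = [w * x for w, x in zip(weights, applicant)]
score = sum(contributions) + bias

for name, contrib in zip(feature_names, contributions):
    print(f"{name}: {contrib:+.3f}")
print(f"bias:  {bias:+.3f}")
print(f"score: {score:.3f}")
```

Because every number in the decision can be audited line by line, a reviewer can check directly whether, say, one feature is weighted in a discriminatory way; that kind of inspection is not currently possible for the largest models.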
What are guardrails in AI?
Guardrails in AI encompass software and policies that companies are building around AI models to ensure that they don’t leak data or produce disturbing content. The term can also refer to specific software layers that keep an AI application from going off-topic.
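As a rough illustration of how an application-level guardrail can sit around a model, the sketch below wraps a hypothetical call_model function with an input topic check and an output redaction step. The blocklist, the regular expression, and the function names are assumptions made for this example; real guardrail layers, whether vendor tools or in-house policy code, are considerably more elaborate.

```python
# Minimal guardrail sketch. `call_model` is a hypothetical placeholder for a
# real model call; the blocklist and redaction rule are illustrative only.
import re

BLOCKED_TOPICS = ["violence", "self-harm"]               # hypothetical policy list
EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")   # crude check for leaked contact data


def call_model(prompt: str) -> str:
    # Stand-in for an actual model or API request.
    return f"Model response to: {prompt}"


def guarded_reply(prompt: str) -> str:
    # Input guardrail: refuse prompts that touch blocked topics.
    lowered = prompt.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return "Sorry, I can't help with that topic."

    reply = call_model(prompt)

    # Output guardrail: redact anything that looks like an email address
    # so the reply does not leak contact data.
    return EMAIL_PATTERN.sub("[redacted]", reply)


if __name__ == "__main__":
    print(guarded_reply("Summarize this support ticket for me."))
```

The basic pattern is to check both directions: screen what goes into the model and filter what comes back out, since either side can produce the data leaks or off-topic content that guardrails are meant to prevent.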
The Conflict between Safety and Ethics: Examining the Relationship between Paper Clips, Parrots, and Workplace Practices
OpenAI CEO Sam Altman recently testified for nearly three hours at a Senate hearing in Washington, D.C., on the potential risks of artificial intelligence (AI). Politicians and industry leaders like Altman are weighing the possible benefits and challenges of AI products such as ChatGPT, which raise questions about the future of creative industries and society’s ability to distinguish fact from fiction.

After the hearing, Altman took to Twitter to lay out his stance on AI regulation, arguing that it should focus in particular on “artificial general intelligence safety” and “frontier models.” “AGI” refers to a more advanced AI capable of performing tasks as well as or better than the average human, while “frontier models” are the systems that analyze the most data and are the most costly to produce.

On the question of regulation, the AI industry is divided into two camps: those worried about “AI safety” and those concerned with “AI ethics.” The latter group seeks transparency around how AI systems collect and use data, along with restrictions to prevent discrimination. Congress and the White House have folded many AI ethics concerns into ongoing discussions on regulation, including IBM Chief Privacy Officer Christina Montgomery’s suggestion that companies designate an “AI ethics point of contact.”