How to talk about AI like an insider

If you’ve ever struggled to understand the conversations happening in the world of artificial intelligence (AI), you’re not alone. From AGI to alignment to paperclips, the terminology used by AI insiders can sound like a foreign language. To help you make sense of it all, here’s a breakdown of some key AI terms.

AGI

AGI stands for “artificial general intelligence.” The term refers to an AI significantly more advanced than anything that exists today: one that can do most things as well as or better than most humans, including improving itself.

Example: “For me, AGI is the equivalent of a median human that you could hire as a coworker, and they could say do anything you would be happy with a remote coworker doing behind a computer,” said Sam Altman at a recent Greylock VC event.

AI ethics and AI safety

AI ethics describes the desire to prevent AI from causing immediate harm, often focusing on questions like how AI systems collect and process data and the possibility of bias in areas like housing or employment.

AI safety, on the other hand, describes the longer-term fear that AI will progress so suddenly that a super-intelligent AI might harm or even eliminate humanity.

Alignment

Alignment is the practice of tweaking an AI model so that it produces the outputs its creators desire. In the short term, it covers practical work like building safeguards into software and moderating content. It can also refer to the much larger, still-theoretical task of ensuring that any AGI would be friendly toward humanity.

Example: “What these systems get aligned to — whose values, what those bounds are — that is somehow set by society as a whole, by governments. And so creating that dataset, our alignment dataset, it could be, an AI constitution, whatever it is, that has got to come very broadly from society,” said Sam Altman at a recent Senate hearing.
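
To make the short-term sense of alignment concrete, here is a toy sketch of one common idea: generate several candidate responses and keep the one a “reward” function scores as most aligned with the creators’ preferences (best-of-n sampling). The keyword-based reward function below is a hypothetical stand-in; real systems learn a reward model from human feedback (RLHF) rather than using a word list.

```python
# Toy alignment sketch: score candidate responses with a stand-in
# "reward" function and keep the one that best matches desired behavior.
BANNED = {"insult", "slur"}        # stand-in for creator-defined harms
PREFERRED = {"please", "thanks"}   # stand-in for desired tone

def reward(response: str) -> float:
    """Hypothetical reward model: higher means 'more aligned'."""
    words = response.lower().split()
    score = 0.0
    score -= 5.0 * sum(w in BANNED for w in words)     # penalize harms
    score += 1.0 * sum(w in PREFERRED for w in words)  # reward tone
    return score

def pick_aligned(candidates: list[str]) -> str:
    """Return the candidate the reward function likes best."""
    return max(candidates, key=reward)

drafts = [
    "No. Figure it out yourself.",
    "Sure, thanks for asking! First, restart the router...",
]
print(pick_aligned(drafts))  # prints the politer draft
```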

Emergent behavior

Emergent behavior is the technical way of saying that some AI models show abilities that weren’t initially intended. It can also describe surprising results from AI tools being deployed widely to the public.

Example: “Even as a first step, however, GPT-4 challenges a considerable number of widely held assumptions about machine intelligence, and exhibits emergent behaviors and capabilities whose sources and mechanisms are, at this moment, hard to discern precisely,” wrote Microsoft researchers in Sparks of Artificial General Intelligence.

Fast takeoff or hard takeoff

These phrases suggest that an AGI, once built, would improve itself so quickly that it would already be too late to save humanity.

Example: “AGI could happen soon or far in the future; the takeoff speed from the initial AGI to more powerful successor systems could be slow or fast,” said OpenAI CEO Sam Altman in a blog post.

Foom

Another way to say “hard takeoff.” It’s an onomatopoeia and has also been described as an acronym for “Fast Onset of Overwhelming Mastery” in several blog posts and essays.

Example: “It’s like you believe in the ridiculous hard take-off ‘foom’ scenario, which makes it sound like you have zero understanding of how everything works,” tweeted Meta AI chief Yann LeCun.

GPU

The chips used to train AI models and run inference, descended from the chips designed for advanced computer games. The most widely used chip for AI at the moment is Nvidia’s A100.
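
To see why these chips matter, note that training and inference both reduce to enormous matrix multiplications, which GPUs parallelize far better than CPUs. Here is a minimal sketch, assuming PyTorch is installed; it falls back to the CPU if no GPU is present.

```python
# Minimal sketch: time the core operation behind training and inference,
# a large matrix multiplication, on whatever hardware is available.
import time
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)

start = time.time()
c = a @ b                      # the workhorse of neural networks
if device == "cuda":
    torch.cuda.synchronize()   # wait for the GPU to actually finish
print(f"4096x4096 matmul on {device}: {time.time() - start:.4f}s")
```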

Guardrails

Guardrails are software and policies that big tech companies are currently building around AI models to ensure that they don’t leak data or produce disturbing content, which is often called “going off the rails.” It can also refer to specific applications that protect the AI from going off-topic, like Nvidia’s “NeMo Guardrails” product.

Example: “The moment for government to play a role has not passed us by. This period of focused public attention on AI is precisely the time to define and build the right guardrails to protect people and their interests,” said Christina Montgomery, IBM vice president and chair of the company’s AI ethics board, before Congress.
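
As a rough illustration, a guardrail can be as simple as a policy layer that screens model output before it reaches the user. The sketch below uses a hypothetical stand-in for the model call and a toy pattern list; production systems such as NeMo Guardrails use far more sophisticated topic and safety classifiers.

```python
# Toy guardrail: run the model, then refuse if the output trips a rule.
import re

BLOCKED_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),     # looks like a US SSN (data leak)
    re.compile(r"(?i)how to build a weapon"),  # disturbing content
]

def model_generate(prompt: str) -> str:
    """Hypothetical placeholder for a real LLM call."""
    return f"Echoing your prompt: {prompt}"

def guarded_generate(prompt: str) -> str:
    """Screen the model's output before returning it to the user."""
    output = model_generate(prompt)
    if any(p.search(output) for p in BLOCKED_PATTERNS):
        return "Sorry, I can't share that."
    return output

print(guarded_generate("What's the weather like?"))
```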

Inference

The act of using an AI model to make predictions or generate text, images, or other content. Inference can require a lot of computing power.

Example: “The problem with inference is if the workload spikes very rapidly, which is what happened to ChatGPT. It went to like a million users in five days. There is no way your GPU capacity can keep up with that,” said Sid Sheth, founder of D-Matrix.
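
Concretely, inference is a single forward pass through a trained model with learning turned off. Here is a minimal sketch, using a tiny untrained PyTorch network as a stand-in for a real model.

```python
# Minimal inference sketch: run a model forward to get a prediction,
# with gradients disabled since no learning happens at this stage.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()                      # inference mode: disables dropout, etc.

features = torch.tensor([[0.5, -1.2, 3.3, 0.0]])
with torch.no_grad():             # no gradient bookkeeping: pure prediction
    logits = model(features)
    prediction = logits.argmax(dim=1)
print(prediction.item())          # class index the model predicts
```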

Large language model

The kind of AI model that underpins ChatGPT and Google’s new generative AI features. Its defining feature is that it is trained on terabytes of data to find the statistical relationships between words, which is how it produces text that reads as if a human wrote it.

Example: “Google’s new large language model, which the company announced last week, uses almost five times as much training data as its predecessor from 2022, allowing it to perform more advanced coding, math and creative writing tasks,” reported CNBC.
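
The “statistical relationships between words” idea can be illustrated with a toy bigram model: count which word tends to follow which, then generate text by repeatedly sampling a likely next word. Real LLMs do something far richer, learning these relationships across terabytes of text with billions of parameters, but the underlying intuition is the same.

```python
# Toy bigram language model: next-word counts drive text generation.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count next-word frequencies for every word (a bigram table).
following: dict[str, Counter] = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    """Sample a likely next word repeatedly, starting from `start`."""
    words = [start]
    for _ in range(length):
        counts = following.get(words[-1])
        if not counts:
            break
        nxt, = random.choices(list(counts), weights=counts.values())
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the mat and the cat"
```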

Paperclips

Paperclips are an important symbol for AI safety proponents because they represent the risk that an AGI could destroy humanity. The term refers to a thought experiment published by philosopher Nick Bostrom about a “superintelligence” given the mission to make as many paperclips as possible; it decides to turn all humans, the Earth, and increasing portions of the cosmos into paperclips. OpenAI’s logo is a reference to this tale.

Example: “It also seems perfectly possible to have a superintelligence whose sole goal is something completely arbitrary, such as to manufacture as many paperclips as possible, and who would resist with all its might any attempt to alter this goal,” wrote Bostrom in his thought experiment.

Singularity

An older term that’s less common now; it refers to the moment technological change becomes self-reinforcing, or to the moment an AGI is created. It’s a metaphor: literally, a singularity is the point in a black hole where density becomes infinite.

Example: “The advent of artificial general intelligence is called a singularity because it is so hard to predict what will happen after that,” said Tesla CEO Elon Musk in an interview with CNBC.


