May 28, 2024

Microsoft’s AI chief said those building AI should make sure it’s easy for the public to comprehend it—and offered his own analogy to help do so.

Mustafa Suleyman, chief executive of Microsoft AI, said during a talk at TED 2024 that AI is the newest wave of creation since the start of life on Earth, and that “we are in the fastest and most consequential wave ever.”

Suleyman said the industry needs to find the right analogies for AI’s future potential as a way to “prioritize safety” and “to ensure that this new wave always serves and amplifies humanity.” While the AI community has always referred to AI technology as “tools,” Suleyman said the term doesn’t capture its capabilities.

“To contain this wave, to put human agency at its center, and to mitigate the inevitable unintended consequences that are likely to arise, we should start to think about them as we might a new kind of digital species,” Suleyman said.

He also said he sees a future where “everything”—from people to businesses to the government—will be represented by an interactive persona, or “personal AI,” that is “infinitely knowledgeable,” “factually accurate, and reliable.”

“If AI delivers just a fraction of its potential” in finding solutions to problems in everything from healthcare to education to climate change, “the next decade is going to be the most productive in human history,” Suleyman said.

When asked what keeps him up at night, Suleyman said the AI industry faces a risk of falling into the “pessimism aversion trap,” when it should actually “have the courage to confront the potential of dark scenarios” to get the most out of AI’s potential benefits.

“The good news is that if you look at the last two or three years, there have been very, very few downsides,” Suleyman said. “It’s very hard to say explicitly what harm an LLM has caused. But that doesn’t mean that that’s what the trajectory is going to be over the next ten years.”

While Suleyman said it may be five to 10 years before humans have to confront the dangers of autonomous AI models, he believes those potential dangers should be discussed now.

