May 30, 2024

Microsoft is reportedly training its own artificial intelligence model to compete with models from Google and OpenAI, the latter of which it has a multi-year, multibillion-dollar partnership with.

The tech giant’s new in-house model, internally referred to as MAI-1, is being led by ex-Google AI chief Mustafa Suleyman, The Information reported, citing people familiar with the matter. Suleyman co-founded the AI startups DeepMind (acquired by Google in 2014) and Inflection, where he served as chief executive. Microsoft hired him in March to lead its AI division, along with a majority of Inflection’s staff, and paid $650 million for Inflection’s intellectual property rights. The new model is separate from Inflection’s previously released models, people told The Information, though it may be built on Inflection’s training data and other technology.

Microsoft declined to comment on the report.

Kevin Scott, chief technology officer at Microsoft, wrote in a post on LinkedIn that the company builds “big supercomputers to train AI models,” and that OpenAI “uses these supercomputers to train frontier-defining models.”

“Each supercomputer we build for OpenAI is a lot bigger than the one that preceded it, and each frontier model they train is a lot more powerful than its predecessors,” Scott wrote. “We will continue to be on this path — building increasingly powerful supercomputers for OpenAI to train the models that will set the pace for the whole field — well into the future.” Scott added that Microsoft has built AI models for years, and that some of the “models have names like Turing, and MAI.”

MAI-1 will be expensive to develop because it will require large amounts of computing power and training data, as it will be “far larger” than the smaller, open source models Microsoft has trained, people told The Information. While open source models from Meta and Mistral have 70 billion parameters — the variables a model learns during training to make predictions — MAI-1 will reportedly have around 500 billion. OpenAI’s most powerful model, GPT-4, reportedly has over one trillion parameters.

Microsoft could preview the model at Build, its annual developer conference, later this month, The Information reported. The company has set aside a large cluster of servers equipped with Nvidia GPUs, or graphics processing units, as well as large amounts of data to train the model, it added.
