Microsoft has introduced the latest addition to its Phi family of generative AI models.
Called Phi-4, the model is improved in a number of areas over its predecessors, Microsoft claims, in particular math problem solving. That's partly the result of improved training data quality.
Phi-4 is available in very limited access as of Thursday night: only on Microsoft's recently launched Azure AI Foundry development platform, and only for research purposes under a Microsoft research license agreement.
This is Microsoft's latest small language model, coming in at 14 billion parameters, and it competes with other small models such as GPT-4o mini, Gemini 2.0 Flash, and Claude 3.5 Haiku. These AI models are often faster and cheaper to run, and the performance of small language models has gradually increased over the last several years.
In this case, Microsoft attributes Phi-4's jump in performance to the use of "high-quality synthetic datasets," alongside high-quality datasets of human-generated content and some unspecified post-training improvements.
Many AI labs are looking more closely at the innovations they can make around synthetic data and post-training these days. Scale AI CEO Alexandr Wang said in a tweet on Thursday that "we've reached a pre-training data wall," echoing several reports on the subject in recent weeks.
Notably, Phi-4 is the first Phi-series model to launch following the departure of Sébastien Bubeck. Previously an AI VP at Microsoft and a key figure in the company's Phi model development, Bubeck left Microsoft in October to join OpenAI.