1 min read

Link: Microsoft releases powerful new Phi-3.5 models | VentureBeat

Microsoft has released three new models in its Phi series of language and multimodal AI: Phi-3.5-mini-instruct, Phi-3.5-MoE-instruct, and Phi-3.5-vision-instruct. Each targets a different balance of reasoning and vision capabilities.

The models demonstrate near state-of-the-art performance across several benchmarks and are available to developers on Hugging Face under an MIT license.
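Since the weights are on Hugging Face, trying one of the models takes only a few lines. A minimal sketch using the `transformers` library (the model id is the one Microsoft publishes on the Hub; the generation settings are illustrative, not a recommended configuration):

```python
MODEL_ID = "microsoft/Phi-3.5-mini-instruct"  # as published on the Hugging Face Hub

def generate_reply(prompt: str, max_new_tokens: int = 64) -> str:
    """Load Phi-3.5-mini-instruct and answer a single user prompt.

    The import is deferred so this module stays importable without
    `transformers` installed; the first call downloads several GB of weights.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    messages = [{"role": "user", "content": prompt}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    )
    outputs = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the echoed prompt.
    return tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
```

The same pattern works for the MoE and vision variants by swapping the model id, though the vision model also takes image inputs through its processor.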

The 3.8B-parameter Phi-3.5 Mini Instruct is optimized for memory- and compute-constrained environments. Despite its compact size, it handles multilingual and multi-turn conversational tasks well, and it outperforms similarly sized models on the RepoQA benchmark.

The 42B-parameter Phi-3.5 MoE (Mixture of Experts) model combines multiple specialized expert sub-models, only a subset of which activate for any given input. Its strengths span code, math, and language understanding, and it often outperforms larger models; on 5-shot MMLU (Massive Multitask Language Understanding) it even surpasses GPT-4o mini.
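The efficiency win of a mixture-of-experts design is that only a few experts run per token, so the compute per input is far below the total parameter count. A toy sketch of top-k expert routing (dimensions and gating scheme are illustrative, not Phi-3.5's actual architecture):

```python
import numpy as np

def moe_layer(x, experts, gate, top_k=2):
    """Route input x through only the top_k highest-scoring experts."""
    scores = x @ gate                          # one gating score per expert
    top = np.argsort(scores)[-top_k:]          # indices of the top_k experts
    weights = np.exp(scores[top] - scores[top].max())
    weights /= weights.sum()                   # softmax over the selected experts
    # Only top_k expert matrices are ever multiplied; the rest stay idle.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

rng = np.random.default_rng(0)
d, num_experts = 8, 16                         # Phi-3.5 MoE uses 16 experts, 2 active
experts = [rng.normal(size=(d, d)) for _ in range(num_experts)]
gate = rng.normal(size=(d, num_experts))
out = moe_layer(rng.normal(size=d), experts, gate)
print(out.shape)
```

With 16 experts and 2 active, each token touches roughly an eighth of the expert parameters, which is how a 42B-parameter model can run with the footprint of a much smaller dense one.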

The Phi-3.5 Vision Instruct model rounds out the trio with combined text and image processing. Suited to general image understanding and other visual tasks, it was trained on a mix of synthetic and publicly available datasets.

Open-sourced under an MIT license, the series marks a notable step in accessible AI development. By giving developers direct access to these models, Microsoft hopes to spur innovation in both commercial and research settings.

--

Yoooo, this is a quick note on a link that made me go, WTF? Find all past links here.