Nvidia, which builds some of the most sought-after GPUs in the AI industry, has released an open-source large language model that reportedly performs on par with leading proprietary models from OpenAI, Anthropic, Meta, and Google.
The company introduced its new NVLM 1.0 family in a recently released white paper, and it’s spearheaded by the 72 billion-parameter NVLM-D-72B model. “We introduce NVLM 1.0, a family of frontier-class multimodal large language models that achieve state-of-the-art results on vision-language tasks, rivaling the leading proprietary models (e.g., GPT-4o) and open-access models,” the researchers wrote.
"Introducing NVLM 1.0, a family of frontier-class multimodal LLMs that achieve state-of-the-art results on vision-language tasks, rivaling the leading proprietary models (e.g., GPT-4o) and open-access models (e.g., InternVL 2). Remarkably, NVLM 1.0 shows improved text-only…" — Wei Ping (@_weiping), September 18, 2024
The new model family is reportedly already capable of "production-grade multimodality," with exceptional performance across a variety of vision and language tasks, as well as improved text-based responses compared to the backbone LLM on which the NVLM family is built. "To achieve this, we craft and integrate a high-quality text-only dataset into multimodal training, alongside a substantial amount of multimodal math and reasoning data, leading to enhanced math and coding capabilities across modalities," the researchers explained.
The result is an LLM that can just as easily explain why a meme is funny as it can work through a complex math problem step by step. Nvidia also managed to increase the model's text-only accuracy by an average of 4.3 points across common industry benchmarks, thanks to its multimodal training approach.
Nvidia appears serious about ensuring that this model meets the Open Source Initiative's newest definition of "open source," not only making its model weights available for public review but also promising to release the model's source code in the near future. This is a marked departure from the approach of rivals like OpenAI and Google, who closely guard their LLMs' weights and source code. In doing so, Nvidia has positioned the NVLM family not necessarily to compete directly against GPT-4o and Gemini 1.5 Pro, but rather to serve as a foundation for third-party developers to build their own chatbots and AI applications.