Nvidia, a leading chip designer, made DeepSeek's R1 model available to users of its NIM microservice on Thursday. The company says the model offers "state-of-the-art reasoning capabilities," "high inference efficiency," and "leading accuracy" for tasks involving logical reasoning, mathematics, coding, and language comprehension.
This development comes after DeepSeek’s quick rise raised alarms that major US tech firms might be overspending on Nvidia’s advanced graphics processing units, leading to a sharp drop in Nvidia’s stock price.
DeepSeek's open-source R1 model, released on January 20, has demonstrated abilities that rival OpenAI's closed-source models in some areas, at a fraction of the training cost. Earlier this week, Microsoft, an investor in OpenAI, announced support for R1 on its Azure cloud platform and GitHub, enabling clients to build AI applications that can run locally on Copilot+ PCs. Meanwhile, Amazon.com has allowed developers to build applications with the "powerful, cost-efficient" R1 through Amazon Web Services.