These two diverging camps – the open and the closed –
disagree about whether to build AI in a way that makes the underlying
technology widely accessible. Safety is at the heart of the debate, but so is
who gets to profit from AI’s advances.
Open advocates favor an approach that is “not proprietary
and closed”, said Darío Gil, a senior vice-president at IBM who directs its
research division. “So it’s not like a thing that is locked in a barrel and no
one knows what they are.”
The AI Alliance – led by IBM and Meta and including Dell,
Sony, the chipmakers AMD and Intel and several universities and AI startups –
is “coming together to articulate, simply put, that the future of AI is going
to be built fundamentally on top of the open scientific exchange of ideas and
on open innovation, including open source and open technologies”, Gil said in
an interview with the Associated Press before its unveiling. The alliance is
likely to lobby regulators to ensure new legislation works in its favor.
Meta’s chief AI scientist, Yann LeCun, this autumn took aim
on social media at OpenAI, Google and the startup Anthropic for what he
described as “massive corporate lobbying” to write the rules in a way that
benefits their high-performing AI models and could concentrate their power over
the technology’s development. The three companies, along with OpenAI’s key
partner Microsoft, have formed their own industry group called the Frontier
Model Forum.
LeCun said on X, formerly Twitter, that he worried that
fearmongering from fellow scientists about AI “doomsday scenarios” was giving
ammunition to those who want to ban open-source research and development.
“In a future where AI systems are poised to constitute the
repository of all human knowledge and culture, we need the platforms to be open
source and freely available so that everyone can contribute to them,” LeCun
wrote. “Openness is the only way to make AI platforms reflect the entirety of
human knowledge and culture.”
For IBM, an early supporter of the open-source Linux
operating system in the 1990s, the dispute feeds into a much longer competition
that precedes the AI boom.
“It’s sort of a classic regulatory capture approach of
trying to raise fears about open-source innovation,” said Chris Padilla, who
leads IBM’s global government affairs team. “I mean, this has been the
Microsoft model for decades, right? They always opposed open-source programs
that could compete with Windows or Office. They’re taking a similar approach
here.”
The term “open-source” comes from a decades-old practice of
building software in which the code is free or widely accessible for anyone to
examine, modify and build upon.
Open-source AI involves more than just code, and computer scientists differ on how to define it depending on which components of the technology are publicly available and whether there are restrictions limiting its use. Some use the term open science to describe the broader philosophy.
Part of the confusion around open-source AI is that despite
its name, OpenAI – the company behind ChatGPT and the image-generator Dall-E –
builds AI systems that are decidedly closed.
“To state the obvious, there are near-term and commercial
incentives against open source,” said Ilya Sutskever, OpenAI’s chief scientist
and co-founder, in a video interview hosted by Stanford University in April.
But there is also a longer-term worry involving the potential for an AI system
with “mind-bendingly powerful” capabilities that would be too dangerous to make
publicly accessible, he said.
To make his case about the dangers of open source, Sutskever posited an AI system that had learned how to start its own biological laboratory.
Even current AI models pose risks and could be used, for instance, to ramp up disinformation campaigns to disrupt democratic elections, said David Evan Harris of the University of California, Berkeley.
“Open source is really great in so many dimensions of
technology,” but AI is different, Harris said.
“Anyone who watched the movie Oppenheimer knows this, that
when big scientific discoveries are being made, there are lots of reasons to
think twice about how broadly to share the details of all of that information
in ways that could get into the wrong hands,” he said.
The Center for Humane Technology, a longtime critic of
Meta’s social media practices, is among the groups drawing attention to the
risks of open-source or leaked AI models.
“As long as there are no guardrails in place right now, it’s
just completely irresponsible to be deploying these models to the public,” said
the group’s Camille Carlton.
The increasingly public debate over the benefits and dangers of an open-source approach to AI development was easy to miss in the discussion around Joe Biden’s sweeping executive order on AI.
The US president’s order referred to open models by the technical name of “dual-use foundation models with widely available weights” and said they needed further study. Weights are the numerical parameters, learned during training, that determine how an AI model behaves.
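In practical terms, widely available weights mean anyone can download a model’s parameters and run or modify it on their own hardware. A minimal sketch of what that looks like, assuming the Hugging Face transformers library and a hypothetical model name used purely for illustration:

    # Minimal sketch: loading openly published model weights on a local machine.
    # "some-org/open-weights-model" is a hypothetical placeholder, not a real release.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Download the weights (the model's numerical parameters) from a public repository.
    tokenizer = AutoTokenizer.from_pretrained("some-org/open-weights-model")
    model = AutoModelForCausalLM.from_pretrained("some-org/open-weights-model")

    # Once the weights are local, the user controls the model: they can run it,
    # fine-tune it, or strip out any built-in safeguards.
    prompt = "The future of open-source AI is"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=20)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))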
When those weights are publicly posted on the internet,
“there can be substantial benefits to innovation, but also substantial security
risks, such as the removal of safeguards within the model,” Biden’s order said.
He gave the commerce secretary, Gina Raimondo, until July to talk to experts
and come back with recommendations on how to manage the potential benefits and
risks.
The European Union has less time to figure it out. In
negotiations coming to a head on Wednesday, officials working to finalize
passage of world-leading AI regulation are still debating a number of
provisions, including one that could exempt certain “free and open-source AI
components” from rules affecting commercial models.