Chinese research institutions associated with the People's Liberation Army (PLA) have reportedly utilized Meta's publicly accessible Llama model to create an AI tool aimed at potential military applications, as indicated by academic publications and expert analysis. 

A paper published in June, which was reviewed by Reuters, featured six researchers from three different institutions, including two affiliated with the PLA's primary research organization, the Academy of Military Science (AMS). The researchers described how they used an early version of Meta's Llama as the basis for an AI tool they named "ChatBIT."

They took an earlier iteration of Meta's Llama 2 13B large language model (LLM) and incorporated their own parameters to build a military-focused AI tool designed to gather and process intelligence and to provide accurate, reliable information for operational decision-making.

According to the paper, ChatBIT was fine-tuned and optimized specifically for dialogue and question-answering tasks within the military context. The researchers found that it outperformed some other AI models and reached roughly 90% of the capability of OpenAI's ChatGPT-4. However, they did not specify how they measured performance or confirm whether the model has been deployed in practice.

Sunny Cheung, an associate fellow at the Jamestown Foundation specializing in China's emerging dual-use technologies, noted that this represents the first substantial evidence of PLA military experts systematically researching and attempting to harness the capabilities of open-source LLMs, particularly those developed by Meta, for military applications.

Meta has actively promoted the open release of many of its AI models, including Llama, while imposing certain restrictions on their usage. These include a stipulation that services with over 700 million users must obtain a license from the company. 

Additionally, Meta's terms prohibit the use of its models for military purposes, warfare, nuclear industries, espionage, and other activities that fall under U.S. defense export controls, as well as for the creation of weapons or content intended to incite violence.

Nevertheless, given the public nature of Meta's models, the organization faces constraints in enforcing these stipulations.

In response to inquiries, Meta referenced its acceptable use policy and outlined the measures it has taken to mitigate the potential for misuse.

"Any use of our models by the People's Liberation Army is unauthorized and contrary to our acceptable use policy," Molly Montgomery, Meta's director of public policy, said in a phone interview.

Meta added that the United States must embrace open innovation.

"In the global competition on AI, the alleged role of a single, and outdated, version of an American open-source model is irrelevant when we know China is already investing more than a trillion dollars to surpass the U.S. on AI," a Meta spokesperson said in a statement.

Chinese researchers involved in this study include Geng Guotong and Li Weiwei from the AMS's Military Science Information Research Center and the National Innovation Institute of Defense Technology, along with colleagues from the Beijing Institute of Technology and Minzu University.

The paper states that, with further technological refinement, ChatBIT could be applied not only to intelligence analysis but also to strategic planning, simulation training, and command decision-making.

The Chinese Defense Ministry did not respond to requests for comment, nor did any of the associated institutions or researchers.

Journalists were unable to verify the capabilities and computing power of ChatBIT; however, the researchers noted that the model was trained on just 100,000 military dialogue records, a relatively modest dataset compared with those used for other LLMs.

Joelle Pineau, a vice president of AI Research at Meta and a professor of computer science at McGill University in Canada, expressed skepticism, stating, "That's a drop in the ocean compared to most of these models that are trained with trillions of tokens, so it really makes me question what they actually achieve here in terms of different capabilities."

This research emerges amidst ongoing discussions in U.S. national security and technology sectors regarding the implications of making models from companies like Meta publicly accessible.

In October 2023, U.S. President Joe Biden signed an executive order aimed at regulating AI advancements, noting that while innovation can offer significant benefits, it also poses "substantial security risks, such as the removal of safeguards within the model."

Recently, Washington announced it was finalizing regulations to limit U.S. investments in artificial intelligence and other technology sectors in China that could pose a threat to national security.

Pentagon spokesperson John Supple stated that the Department of Defense acknowledges the advantages and disadvantages of open-source models and emphasized that "we will continue to closely monitor and assess competitors' capabilities."

Some analysts argue that China's advancements in developing homegrown AI, including the establishment of numerous research laboratories, have already made it challenging to prevent the country from closing the technological gap with the United States.

In a separate academic study reviewed by Reuters, two researchers from the Aviation Industry Corporation of China (AVIC)—a company identified by the United States as having connections to the People's Liberation Army (PLA)—discussed the application of Llama 2 for "training airborne electronic warfare interference strategies."

China's adoption of Western-developed AI technologies has also permeated its domestic security efforts. A paper published in June detailed how Llama was utilized for "intelligence policing," enabling the processing of vast data sets to improve police decision-making.

In April, the state-run PLA Daily featured commentary on the potential of AI to "accelerate the research and development of weapons and equipment," enhance combat simulations, and boost military training effectiveness.

"Is it possible to keep them (China) out of the cookie jar? I don't believe that's feasible," stated William Hannas, lead analyst at Georgetown University's Center for Security and Emerging Technology (CSET). A 2023 report by CSET identified 370 Chinese institutions whose researchers had published work related to General Artificial Intelligence, contributing to China's national strategy to become a global leader in AI by 2030.

Hannas further noted, "There is too much collaboration occurring between China's top scientists and the leading AI researchers in the U.S. for them to be excluded from these advancements."