Generative AI Model Training Regulations: China’s New Rules




China has published draft security regulations for companies providing generative artificial intelligence (AI) services, including restrictions on the data sources that may be used to train AI models.

The proposed regulations were released on October 11 by the National Information Security Standardization Committee, which includes officials from the Cyberspace Administration of China (CAC), the Ministry of Industry and Information Technology, and law enforcement agencies. Generative AI, exemplified by OpenAI’s ChatGPT, learns from historical data and uses that training to produce new content such as text and images.

The committee recommends a security assessment of the content used to train publicly accessible generative AI models. Content containing more than “5% in the form of unlawful and detrimental information” would be blacklisted. This category includes content that advocates terrorism or violence, subverts the socialist system, damages the country’s reputation, or undermines national unity and social stability.
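The 5% threshold described above amounts to a simple proportion check over a corpus. The sketch below is a hypothetical illustration, not an official implementation: the function name `should_blacklist` and the predicate interface are assumptions, and real screening would depend on how regulators define and detect “unlawful and detrimental information.”

```python
# Hypothetical sketch of the draft's "5% threshold" screening step.
# The function name and the flagging predicate are illustrative
# assumptions, not part of the published regulations.

def should_blacklist(samples, is_flagged, threshold=0.05):
    """Return True if the share of flagged samples in a corpus
    exceeds the threshold (5% under the draft rules)."""
    if not samples:
        return False
    flagged = sum(1 for s in samples if is_flagged(s))
    return flagged / len(samples) > threshold

# Toy corpus: 2 of 10 samples flagged (20% > 5%), so it is blacklisted.
corpus = ["ok"] * 8 + ["bad"] * 2
print(should_blacklist(corpus, lambda s: s == "bad"))  # True
```

Note that the check is strictly greater-than: a corpus at exactly 5% flagged content would pass under this reading, though the draft's exact boundary behavior is not specified in the source.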

The draft also states that data censored on the Chinese internet must not be used to train these models. The proposal comes just over a month after regulators allowed several Chinese tech companies, including search-engine giant Baidu, to launch their generative AI-powered chatbots to the public.

Since April, the CAC has required companies to submit security assessments to regulators before launching generative AI-powered services to the public. In July, the cyberspace regulator issued a set of measures governing these services, which industry analysts noted were considerably less onerous than those proposed in the initial April draft.

The newly released draft security requirements also oblige organizations training these AI models to obtain explicit consent from individuals whose personal data, including biometric information, is used for training. They additionally provide detailed guidance on avoiding intellectual-property infringement.

