AI Regulation in Ukraine: A Comprehensive Plan

Ukraine’s Ministry of Digital Transformation unveiled its strategic plan for regulating artificial intelligence (AI) on October 7. The plan, published on the ministry’s official website, aims to give local businesses the tools to prepare for the eventual adoption of AI legislation modeled on the European Union’s AI Act. It also seeks to raise citizens’ awareness of how to protect themselves against potential AI-related risks.

The plan rests on a bottom-up approach, advocating a gradual progression from minimal regulation to more robust oversight. Its primary objective is to equip businesses with the resources to address forthcoming regulatory requirements before they are officially enacted.

The roadmap also gives companies a preliminary window of two to three years to adapt to the anticipated legal changes. Deputy Minister of Digital Transformation Oleksandr Borniakov elaborates:

“Our strategy entails cultivating a culture of self-regulation within the business community. This will be achieved through mechanisms like voluntary codes of conduct, which will serve as evidence of companies’ ethical use of AI. Another valuable tool in our arsenal is a White Paper that will acquaint businesses with the approach, timeline, and phases of regulatory implementation.”

According to the roadmap’s projections, a draft of Ukraine’s AI legislation is anticipated in 2024, aligned with the EU’s AI Act but not enacted ahead of it. This sequencing is intended to ensure that Ukraine’s national AI laws take European standards into account.

In a significant development earlier in June, the EU AI Act received approval from the European Parliament. Once enacted, the legislation will ban certain AI services and products outright while placing constraints on others.

Technologies banned outright include biometric surveillance, social scoring systems, predictive policing, “emotion recognition,” and untargeted facial recognition systems. Generative AI models such as OpenAI’s ChatGPT and Google’s Bard, by contrast, will be permitted to operate, provided their outputs are clearly labeled as AI-generated.
