OpenAI, the developer of the artificial intelligence (AI) chatbot ChatGPT, has worked with its largest investor, Microsoft, to disrupt five cyberattack campaigns linked to state-affiliated malicious actors.
We disrupted five state-affiliated malicious cyber actors’ use of our platform.
Work done in collaboration with Microsoft Threat Intelligence Center. https://t.co/xpEeQDYjrQ
— OpenAI (@OpenAI) February 14, 2024
According to a report released on Wednesday, Microsoft has been monitoring hacking groups tied to Russian military intelligence, Iran’s Revolutionary Guard, and the Chinese and North Korean governments, which it says have been exploring the use of AI large language models (LLMs) in their hacking strategies.
LLMs utilize vast text data sets to create responses that sound human-like.
OpenAI reported that two of the five malicious actors were groups associated with China: Charcoal Typhoon and Salmon Typhoon. The remaining attacks were linked to Iran through Crimson Sandstorm, North Korea through Emerald Sleet and Russia through Forest Blizzard.
According to OpenAI, the groups attempted to use GPT-4 to research companies and cybersecurity tools, debug code, generate scripts, conduct phishing campaigns, translate technical papers, evade malware detection, and study satellite communication and radar technology.
The accounts were deactivated upon detection.
The company disclosed the findings while rolling out a blanket ban on state-backed hacking groups’ use of its AI products.
While OpenAI successfully disrupted these operations, it acknowledged the difficulty of preventing every malicious use of its programs.
Following a surge of AI-generated deepfakes and scams after the launch of ChatGPT, policymakers stepped up scrutiny of generative AI developers.
In June 2023, OpenAI announced a $1 million cybersecurity grant program to enhance and measure the impact of AI-driven cybersecurity technologies.
Despite OpenAI’s efforts in cybersecurity and implementing safeguards to prevent ChatGPT from generating harmful or inappropriate responses, hackers have found methods to bypass these measures and manipulate the chatbot to produce such content.
More than 200 entities, including OpenAI, Microsoft, Anthropic and Google, recently collaborated with the Biden Administration to establish the AI Safety Institute and the United States AI Safety Institute Consortium (AISIC).
The groups were established as a result of President Joe Biden’s executive order on AI safety in late October 2023, which aims to promote the safe development of AI, combat AI-generated deepfakes and address cybersecurity issues.