An employee of a large multinational corporation receives an email asking him to send money. The email appears to come from the company's CFO. The employee becomes suspicious, but this changes when he is invited to a video call. The meeting is attended by the CFO, colleagues, and some external participants – or so it seems. In reality, the seemingly real people are AI-generated fakes. The fraudsters had presumably downloaded videos of the participants in advance and then used artificial intelligence to add fake voices for the video conference. They convince the employee to carry out 15 transactions totaling HK$200 million (around US$25.6 million) to local bank accounts over the course of a week. Sounds like science fiction? No, this really happened in early 2024 in Hong Kong, China.

Artificial intelligence (AI)-based cyberattacks such as this one, which involved deepfakes, are on the rise. According to a 2023 study from Deep Instinct, 75% of security professionals witnessed an increase in attacks over the past 12 months, with 85% attributing this rise to bad actors using generative AI. The Microsoft Digital Defense Report 2024 highlights AI as one of the current and emerging threats to IT security.

The dangers of AI-powered cyberattacks are clear. AI-based attacks are likely to be more sophisticated, targeted, and persistent than human-driven attacks to date, making them much harder to detect and defend against and making successful infiltration of corporate networks more common. With greater automation and reliance on AI in critical infrastructures, the risk increases even further, and as more and more devices are connected to the internet (the Internet of Things), the attack surfaces that cybercriminals can target keep growing. AI systems can be used for a range of dangerous attacks, including AI-based ransomware and malware, password cracking, phishing, vishing, and deepfakes.

It is vital that all organizations understand the potential forms of AI-powered attack and ensure they have the right expert support and systems in place to manage them effectively.

Fighting AI with AI: why not turn the enemy's own weapons against it? Companies can also deploy AI-powered security systems to defend against these threats, using powerful automation to anticipate and fight cyberattacks effectively.

Key AI threats from cybercriminals

Attacks that target the IT infrastructure

  1. AI-driven malware and ransomware

    AI can be used by cybercriminals to quickly and continuously auto-adapt their malware and ransomware so that it is no longer recognized as malicious by security software. Security researchers recently demonstrated ‘BlackMamba’, a proof-of-concept AI-powered malware strain that uses machine learning to evade detection. This keylogging attack has the potential to completely evade most existing endpoint detection and response (EDR) security solutions. 

  2. Automated Vulnerability Discovery and Exploitation

    AI systems can scan networks and applications for vulnerabilities and security gaps much faster and more thoroughly than conventional methods, and can then automatically run exploits – code that takes advantage of those flaws – potentially compromising systems before human defenders can react. The greatest danger for users is posed by ‘zero-day exploits’, i.e. attacks on vulnerabilities that were previously unknown to the software manufacturer. 

  3. Evasion AI Attacks on Machine Learning Systems

    As businesses increasingly depend on AI for security, attackers are also finding ways to deceive these systems. By altering input data, they can make AI security systems misclassify threats or overlook them completely. Examples include altering malware signatures to bypass machine-learning-based antivirus software or modifying network traffic patterns to evade intrusion detection systems (a minimal sketch of this technique follows this list). 

  4. Password cracking

    Cybercriminals are constantly developing ever more sophisticated methods for cracking passwords, and it is now possible to crack simple passwords with little effort using artificial intelligence. The AI tool PassGAN cracked over half of the common passwords entered into its system in less than 60 seconds, and advanced AI password-cracking tools can crack 81% of common passwords within a month – most of them within a single day. Attackers can carry out a range of criminal activities with the information gained from cracking passwords, for example stealing bank details or using the information for identity theft and fraud. The second sketch below shows why short, simple passwords fall so quickly. 
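
As a minimal illustration of point 3 above, the sketch below trains a toy ‘traffic’ classifier and then nudges a malicious sample against the model's weight vector until it is misclassified as benign. Everything here – the two-feature data, the linear model, the step size – is an illustrative assumption, not a real attack tool.

```python
# Minimal evasion-attack sketch against a linear ML classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
benign = rng.normal(0.0, 0.5, (200, 2))     # class 0: benign traffic
malicious = rng.normal(3.0, 0.5, (200, 2))  # class 1: malicious traffic
X = np.vstack([benign, malicious])
y = np.array([0] * 200 + [1] * 200)

clf = LogisticRegression().fit(X, y)

sample = malicious[0].copy()
print("before:", clf.predict([sample])[0])  # 1 -> detected

# Step against the weight vector until the sample crosses the
# decision boundary -- the essence of an evasion attack.
step = -0.1 * clf.coef_[0] / np.linalg.norm(clf.coef_[0])
while clf.predict([sample])[0] == 1:
    sample += step

print("after:", clf.predict([sample])[0])   # 0 -> evaded detection
print("total perturbation:", np.round(sample - malicious[0], 2))
```

The defensive measures discussed later in this article – in particular continuous retraining and adversarial training – target exactly this kind of manipulation.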
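
To put the password-cracking figures from point 4 in perspective, here is a back-of-the-envelope calculation of brute-force search spaces. The assumed guess rate is an illustrative figure for a GPU rig attacking a fast hash; real-world rates vary widely with hardware and hash algorithm.

```python
# Back-of-the-envelope: why short, simple passwords fall in seconds.
GUESSES_PER_SECOND = 10_000_000_000  # assumption: 10 billion guesses/s

CHARSETS = {
    "digits only": 10,
    "lowercase letters": 26,
    "lower + upper + digits": 62,
    "full printable ASCII": 95,
}

for name, size in CHARSETS.items():
    for length in (6, 8, 12):
        keyspace = size ** length                # possible passwords
        seconds = keyspace / GUESSES_PER_SECOND  # worst-case search time
        print(f"{name:>22}, length {length:2}: "
              f"{seconds:.3g} s (~{seconds / 86_400:.3g} days)")
```

Each extra character multiplies the keyspace by the size of the character set, which is why password length and character diversity matter far more than swapping in the odd symbol.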

Attacks that target human beings

  1. Phishing

    Phishing is a form of social engineering in which the hacker poses as a trusted contact and tricks the victim into disclosing confidential information such as passwords or bank account details. The messages may arrive via email, SMS, or other channels. In the past, phishing emails were often poorly written, but the rise of tools like ChatGPT has made it easier for hackers to send convincing scam emails that closely mimic the tone, language, and appearance of legitimate messages.

    Cybercriminals can also use AI to personalize malicious emails based on publicly available data from across the internet, making them appear more realistic and harder to detect. Research by SoSafe shows that AI-generated cyberattacks are extremely successful: almost 80% of people open phishing emails written by AI. In December 2022, hackers orchestrated a targeted phishing campaign against Activision, the gaming company that created the Call of Duty series. Using AI to craft targeted phishing SMS messages, the attackers breached Activision, ultimately deceiving an HR staff member who fell for the scam. 

  2. Vishing 

    Vishing (voice phishing) is a sub-form of phishing conducted via phone calls, in which threat actors try to gain their victims’ trust and persuade them to take an action or disclose confidential information. Vishing attacks are potentially more dangerous than regular phishing attacks as they establish a personal connection with the target victim, making the scenario much more believable.  

    Here too, the evolution of AI is leading to ever more insidious and credible fraud methods. Traditional vishing attacks rely on automated voice recordings and robocalls, but an attacker using AI can clone voices and communicate live with victims. For example, OpenAI's Voice Engine can clone a person's voice from an audio clip of just 15 seconds. The cyberattack on MGM Resorts, which caused around $100 million in damages, was carried out through a vishing call in which the attacker posed as an ordinary employee and called the MGM helpdesk to obtain credentials. 

  3. Deepfakes 

    A deepfake is an image, video, or audio recording that has been generated or manipulated with AI. Generative AI has made it possible for malicious actors to create deepfakes with relatively little effort. Because they look so convincingly real, they can tempt people to reveal sensitive information or make money transfers, as in the example at the beginning of this article. 


    The total number of deepfake videos on the internet reached 95,820 in 2023, an increase of 550% compared to 2019. Vishing fraudsters are also increasingly working with deepfakes in the form of fake audio content. 

    Deepfakes can also be used for disinformation, aimed at spreading misinformation, fake endorsements, fake provider approvals, or propaganda. For example, a deepfake image of Hollywood actor Tom Hanks was used to promote a dental products scam in 2023.  

  4. Impersonation (Identity theft)

    Fraudsters frequently work with stolen identities, which they recreate with deceptive realism using AI to produce eerily convincing video and audio content; these are then used, for example, in vishing attacks to trick victims into handing over sensitive data. The impersonated identities can be based on people from the victim's immediate environment or on trustworthy organizations such as banks or government agencies. The enhanced realism of these identity thefts makes it increasingly difficult for victims to distinguish between real and fake content, leaving them vulnerable to various forms of fraud and manipulation. 

Evolving legislation to deal with threats

The European Union is taking these potential threats very seriously and has reacted with several regulations. 


It introduced the European Artificial Intelligence Act (AI Act) on 1st August 2024. The act is designed to foster responsible artificial intelligence development in the EU and addresses potential risks to citizens’ health, safety, and fundamental rights.  


The EU also adopted the Cyber Resilience Act on 10th October 2024, which concerns the protection of products with digital elements (including home cameras, kitchen appliances, televisions, and toys) and ensures they are safe from cyberattacks before they go on sale.  

Furthermore, the EU directive NIS2 came into force on 16th January 2023 for industries with critical infrastructures. The importance of this has been highlighted by several spectacular cyberattacks targeting critical infrastructure such as power grids, water systems, and transport networks.

The AI Arms Race

With both security providers and cybercriminals having access to AI systems, an ongoing ‘arms race’ has developed between the two. 

Forbes recently discussed how AI-powered attacks are more dangerous than traditional cyberattacks in a number of key ways. AI enables the automation of intricate attack processes, allowing cybercriminals to initiate and manage attacks on an unparalleled scale. AI systems can also learn from previous attempts, continually enhancing their efficiency and ability to avoid detection. At the same time, advanced AI can process large volumes of data to detect vulnerabilities and patterns in target systems more effectively than human attackers, and natural language processing enables cybercriminals to launch more convincing phishing and social engineering attacks that are harder for victims to spot. Hackers will continue to develop even more sophisticated and automated techniques, capable of analyzing software and organizations to identify the most vulnerable entry points. A recent study by Forrester found that 88% of security experts expect AI-driven attacks to become mainstream – it is only a matter of time. Cyber analysts and IT experts can therefore only defeat bad actors by understanding how AI will be weaponized, enabling them to confront cybercriminals head-on and to develop and deploy suitable security measures. 

Looking further ahead, the EU is also focusing on quantum-safe encryption to safeguard against future quantum computing risks. The European Commission recently published a Recommendation on Post-Quantum Cryptography to encourage Member States to develop and implement a harmonized approach as the EU transitions to post-quantum cryptography. This is designed to help ensure that the EU's digital infrastructures and services remain secure in the next digital era. 

Fighting AI with AI

Different security measures must be taken to defend against attacks depending on which targets – technologies or humans – the AI-based attacks are aimed at. 

Conventional cyber security measures are often inadequate when it comes to detecting and defending against AI attacks. They rely heavily on signature-based detection and rule-based systems that are not equipped for the dynamic and evolving nature of AI cyberattacks. 

It therefore makes sense to beat the enemy with its own weapons: AI can serve as a powerful ally in protecting IT assets. Advanced detection mechanisms that utilize machine learning (ML) and AI can spot nuanced anomalies and patterns that traditional methods miss, and tools like Natural Language Processing (NLP) and image recognition can detect threats across different languages and formats.  
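
To make this concrete, the sketch below trains an unsupervised anomaly detector on ‘normal’ network telemetry and flags an outlier. It is a minimal sketch built on scikit-learn's IsolationForest; the feature set (bytes sent, connection duration, failed logins) and the contamination rate are illustrative assumptions rather than a production configuration.

```python
# Minimal anomaly-detection sketch on synthetic network telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Columns: bytes sent, connection duration (s), failed-login count.
normal = np.column_stack([
    rng.normal(5_000, 1_000, 1_000),
    rng.normal(30, 10, 1_000),
    rng.poisson(0.1, 1_000),
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# An exfiltration-like burst: huge transfer, long session, failed logins.
suspicious = np.array([[250_000, 600, 25]])
print(model.predict(suspicious))  # [-1] -> flagged as an anomaly
print(model.predict(normal[:3]))  # mostly [1] -> treated as normal
```

In practice such a detector runs on streaming telemetry and hands flagged events to analysts or automated response – the same pattern the suggestions below follow.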

Here’s a set of suggestions that every company can utilize to help plan its defenses and make its IT more secure: 

  1. Implementing AI-Enhanced Defense Systems – Combat threats by deploying AI-enhanced security solutions. These include advanced threat detection systems, automated patch management, and AI-driven network monitoring tools that detect and respond to anomalies in real time. 
  2. Conducting Regular Security Audits and Penetration Testing – Perform frequent system assessments, including AI-powered penetration tests. These can uncover vulnerabilities that traditional methods might overlook and provide insights into how AI could be used to exploit systems. 
  3. Regular updates – To make evasion attacks more difficult, continuous updates and improvements to the AI algorithms are necessary to adapt to new evasion techniques. 
  4. Adversarial training – AI-based defense systems are trained with adversarial examples to improve their resilience against attacks; a minimal sketch of this technique follows this list. 
  5. Implementing Deception Technologies – Deploy decoy systems and fake data to confuse and mislead AI-powered attacks. These tools can detect threats early and gather valuable intelligence on attacker tactics. 
  6. Developing an AI-Specific Incident Response Plan – Revise your incident response strategies to address the speed and complexity of AI-driven attacks. This may involve automated response protocols and using the services of specialized teams trained in AI forensics. 
  7. Implementing a Zero Trust security framework – This operates on the principle of "never trust, always verify" and is an excellent approach to enhancing security. Instead of assuming everything inside an organization's network is safe, Zero Trust requires continuous verification of all users, devices, and connections, regardless of their location.  
  8. Use of two-factor authentication – The introduction of two-factor authentication increases security, as employees have to confirm their identity in two different ways in order to gain access to resources and data; a sketch of the TOTP algorithm used by most authenticator apps also follows the list.
  9. Collaborative Threat Intelligence Sharing – Join industry-wide threat-sharing networks. By pooling knowledge on AI-driven attacks, businesses can stay ahead of emerging threats and benefit from collective defense strategies.  
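
To make adversarial training (point 4 above) concrete, here is a minimal sketch on toy data: ‘malicious’ samples are perturbed FGSM-style until a linear classifier misses them, then the model is retrained with those perturbed samples correctly labeled. The two-feature data, the logistic regression model, and the epsilon value are illustrative assumptions, not a production setup.

```python
# Minimal adversarial-training sketch: evade, then retrain.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
benign = rng.normal(0.0, 0.7, (300, 2))     # class 0
malicious = rng.normal(3.0, 0.7, (300, 2))  # class 1
X = np.vstack([benign, malicious])
y = np.array([0] * 300 + [1] * 300)

clf = LogisticRegression().fit(X, y)

# FGSM-style evasion: shift malicious samples against the weight
# vector so that many of them look benign to the clean model.
eps = 1.5
malicious_adv = malicious - eps * np.sign(clf.coef_[0])
print("clean model, evasive malware detected:",
      clf.predict(malicious_adv).mean())

# Adversarial training: add the perturbed samples, correctly
# labeled as malicious, and retrain.
X_aug = np.vstack([X, malicious_adv])
y_aug = np.concatenate([y, np.ones(300, dtype=int)])
robust = LogisticRegression().fit(X_aug, y_aug)

print("robust model, evasive malware detected:",
      robust.predict(malicious_adv).mean())
print("robust model, benign still correct:",
      1 - robust.predict(benign).mean())
```

Retraining shifts the decision boundary toward the benign class, so the evasive samples are caught again at the cost of a few more false positives – the core trade-off of adversarial training.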
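
And to show what sits behind point 8, the following standard-library-only sketch implements the TOTP algorithm (RFC 6238) used by most authenticator apps. The base32 secret is a demo placeholder – never hard-code real secrets.

```python
# Minimal TOTP (RFC 6238) sketch using only the standard library.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval  # 30-second time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F              # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # demo secret; prints a 6-digit code
```

In a real deployment the secret is provisioned once (typically via a QR code), and the server verifies a submitted code by computing TOTP itself, usually allowing one time step of clock drift.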

Improving Employee Training and Awareness

No matter how much technology is used to protect against AI-based attacks, the biggest security risk is still the human element. Where emotions and deception are used to gain trust and trick people into handing over sensitive information, even the most sophisticated technologies can only help to a limited extent. 

It is therefore all the more important that employees are made aware of the various forms of cyberattack and receive regular training to help them recognize and report potential threats. Employees should be trained to carefully verify all contact attempts (email, SMS, phone calls, etc.), to make sure they know the sender, and to question the context of the messages they receive. This is especially important for money transactions, where a face-to-face conversation or telephone call can be used to double-check the request. 

Combining AI tools with human teams ensures more comprehensive protection against sophisticated cyber-attacks. Further measures can include: 

  • Adapting existing email and URL filters, 
  • Submitting takedown requests for websites and web hosts, 
  • Reporting telephone numbers to mobile phone providers so they can be blocked, 
  • Reporting cybercriminals’ bank details, money mule accounts, or hacked legitimate email accounts to the respective providers.  

How Konica Minolta can help

Because many businesses and organizations lack cybersecurity knowledge and expertise in-house and may struggle to adequately fill these roles themselves, Konica Minolta’s expert team can deliver support to close this skills gap. 

At a technological level, Konica Minolta offers Workplace Intrusion Patrol, a managed security service that recognizes and combats cyber security threats. Fully managed by Konica Minolta experts, this service doesn't require in-house cybersecurity skills or hardware. Workplace Intrusion Patrol secures a wide range of areas: endpoints like servers, computers, and mobile devices, and also eliminates risks originating from phishing emails, malicious links and attachments, leaked user accounts, etc. Advanced AI algorithms continuously analyze data streams to detect threats or anomalies, automatically taking action or engaging the Konica Minolta Security Operations Centre (SOC) for further analysis. If a stealthy attack spreads, the SOC team collaborates with other security specialists to mitigate the incident. 

Konica Minolta also offers security training to build awareness and understanding of information security issues among your team. Training is geared specifically to the needs of your business and could involve traditional training sessions about data security, live hacking talks, awareness campaigns, and other initiatives.
