Why cybersecurity and AI expertise belong together

Pascal Rieboldt
25. July 2025
Artificial intelligence is already fundamentally changing cyber security. And at a speed that is presenting companies with new challenges. We show how companies can protect themselves against new types of attacks.
“The cause was human error” – this phrase is often used when it comes to the interaction between man and machine. And indeed, this also applies to cyber security: a study from May 2025 shows that social engineering and phishing, followed by human error and sabotage, are the most common threats to companies.
Most attacks on a company’s IT are successful because people have weak passwords, carelessly click on insecure links or even give unauthorized people access to the building. Something we at ML Gruppe regularly demonstrate at our live hacking events.
And now a new player is entering the field: artificial intelligence. In combination with cyber security, it is both a curse and a blessing.
Let’s take a look together at the opportunities and risks that arise for companies as a result.
What role does AI play in cyber security?
AI is powerful. Attackers therefore use it skillfully to design new attacks. Cyber attacks are becoming increasingly intelligent, faster and harder to detect.
Companies therefore need technologies that not only react, but also act and protect with foresight. Used correctly, AI can also do this.
However, AI is not only changing the technologies we use to protect ourselves, but also the demands placed on people in companies and public authorities.
AI as a potential risk
The great potential of this technology makes it a potential weapon. Attackers use it to create deceptively real deepfakes, to automate phishing emails or to develop malware that adapts to protection mechanisms within seconds.
Here are some examples that show how adept attackers already are at using AI tools:
- Spear phishing with AI: The aim here is to convince certain people in an organization to do something – such as send sensitive data – with a personalized campaign. Large Language Models (LLMs) such as ChatGPT help to personalize these attacks.
- Voice & video cloning: In 2024, cyber criminals imitated the voice of a CEO using AI. With one phone call, they were almost able to get an employee to transfer a quarter of a million dollars to them. In another case, an entire video conference was even faked in order to gain trust.
- Prompt injection: A concept from July 10 shows that prompts can be hidden in emails with white text on a white background and sent to Google Gemini. If the recipient has the mail summarized by the AI, Gemini executes the hidden prompt and displays a message, for example.

AI as a tool for greater safety
But artificial intelligence can also help to fend off attacks. It is capable of sifting through and analyzing huge amounts of data in real time. It recognizes patterns and anomalies faster than any human and thus detects potential threats in seconds. Many modern security systems such as firewalls and email filters are therefore already based on AI.
Taking the spear phishing campaign mentioned above as an example, the AI recognizes even small, unusual speech patterns and sounds the alarm. Or it creates employee behavior patterns and detects deviations such as unusual download behavior. It can also search for typical attack patterns and directly initiate countermeasures itself – all in a matter of seconds.
Strategies for companies: Building AI competence in a targeted manner
In order to protect themselves effectively and at the same time ensure the necessary compliance within the framework of the EU AI Regulation, companies must now invest in the knowledge and expertise of their employees.
AI expertise is an elementary component of modern security strategies, regardless of the industry or department. After all, computers can be found in almost all areas of the working world, which is why the topic is relevant to everyone.
Cybersecurity must therefore be part of the corporate culture and not just the task of the IT department.
Only those who understand how AI works can recognize potential dangers.
There are many workshops, online courses and e-learning coursescirculating on the internet . However, a few training documents do not build up real skills. Competence-oriented training teaches theory and practice in equal measure. It is tailored to the respective company and its security situation.
How companies that strengthen the AI skills of their employees benefit
These are the most important advantages:
1. early detection of threats
Trained team members recognize suspicious activity faster and know how to deal with it. They don’t just click on every link or open every file, even if they are personalized with AI support. This means fewer incidents, less downtime and lower costs.
2. stronger safety culture
When cyber security is not just a “matter for the IT department”, but is thought about by everyone from the accounting department to the clerks to the boardroom, an all-encompassing security network is created. Cyber security is thus understood as a shared responsibility. For example, a team member openly discusses a conspicuous system message in a meeting. Instead of ignoring it, the incident is investigated – and major damage is prevented.
3. better understanding of how to use AI tools
Many companies are already using AI-supported tools, such as chatbots or data analysis applications. Those who understand how these tools work also know where they have potential weaknesses and can better protect sensitive data. For example, a sales employee can recognize early on whether their AI-supported CRM system is integrating incorrect data from external sources and report this.
4. competitive advantage through proven expertise
Companies that demonstrate to the outside world that their employees have not only completed AI training courses to fulfill their duties, but have also developed a genuine understanding of the risks of modern technologies, come across as professional and trustworthy. Responsible handling of sensitive data is a real plus for customers, partners and investors.
5. compliance with legal requirements (EU AI Act)
With targeted training, you can ensure that your company remains on the safe side legally and complies with the new regulatory requirements.
Conclusion: understanding AI is crucial for digital security
Artificial intelligence in itself is neither a threat nor a miracle cure, but simply a tool. Whether AI protects or harms depends on how it is used.
Those who make targeted investments in digital training enable their employees to use AI safely, reduce security risks, meet legal and regulatory requirements, strengthen trust and competitiveness – and increase the overall attractiveness of the company.
Questions and answers on cyber security
Especially industries with sensitive data or critical infrastructure:
- Healthcare
- Finance and insurance
- Industry & Energy
- Public sector
Both. The decisive factor is how companies deal with the technology. Those who understand AI can use it responsibly and safely.
Through targeted, practical training courses that are tailored to the realities of employees’ work – from the basics to industry-specific scenarios.
IT security refers to technical protective measures. Cyber security is more comprehensive. It also takes human behaviour, organization and processes into account.
No. AI can automate processes and identify risks, but the strategic assessment, ethical classification and decision-making responsibility remain human.
- Phishing (e.g. fake emails)
- Ransomware (data is encrypted and a ransom is demanded)
- DDoS attacks (server overload)
- Malware (malicious software of any kind)
One example: During the attack on the German Bundestag in 2015, malware was infiltrated via phishing emails – the damage was enormous.
Cybersecurity refers to all measures that serve to protect networks, systems and data from digital attacks.



