

The Artificial Intelligence Boom and Its Consequences in the Cyber World
Technological advancement is happening at a rapid pace, and thanks to smartphones and the internet it reaches people across the world almost immediately. In early 2021 the big shift was towards cloud technologies, with many companies migrating their workloads to the cloud. Since 2023, the tech world has been buzzing about generative AI, which has gained wide traction. Each new technology brings significant advances as well as significant risks.
Generative AI has gained huge traction among legitimate users, but it has also gained traction among attackers: ChatGPT, for example, is freely available to anyone across the world, making attacks easier to carry out than ever before. Artificial intelligence revolves around feeding data to models, testing the models with that data, and training them, and some organisations use the public access or public application programming interface (API) offerings of generative AI providers, leading to potential sensitive data exposure. In this article, let us discuss cyber threats from AI and cyber threats to AI systems.
CYBER THREATS FROM AI:
Cyber threats from AI have become a significant challenge for organisations worldwide, as attackers use artificial intelligence to carry out attacks more efficiently. Below are some of the ways attackers use AI:
Social Engineering:
Social engineering often involves impersonating an individual or convincing an individual to carry out a malicious action, and phishing ranks first among social engineering attacks. Attackers use generative AI tools such as ChatGPT, which can turn a short text or voice prompt into a convincing, well-written phishing email targeting a specific organisation or individual. Although attackers now use generative AI to craft the lure, the underlying concept of a phishing attack remains the same, so organisations can still follow their existing SOPs to detect and mitigate phishing, as in the simple sketch below.
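To make the SOP idea concrete, here is a minimal, hypothetical sketch of the kind of rule-based triage a mail-security team might run over an inbound message. The keyword list, the brand/domain pair, and the indicators checked are illustrative assumptions, not a vetted detection rule set.

```python
# Minimal sketch (not a production detector): flag common phishing indicators
# in an inbound email. Keywords, brand names, and checks are illustrative only.
import re

URGENCY_KEYWORDS = {"urgent", "immediately", "verify your account", "password expires"}

def phishing_indicators(sender_domain: str, display_name: str, body: str, links: list[str]) -> list[str]:
    """Return a list of simple phishing indicators found in an email."""
    findings = []
    # Display name claims a trusted brand while the sending domain does not match it.
    if "paypal" in display_name.lower() and "paypal.com" not in sender_domain.lower():
        findings.append("display-name/domain mismatch")
    # Urgent language is a classic social-engineering cue.
    if any(keyword in body.lower() for keyword in URGENCY_KEYWORDS):
        findings.append("urgency language")
    # Links pointing at raw IP addresses instead of named hosts.
    for url in links:
        if re.match(r"https?://\d{1,3}(\.\d{1,3}){3}", url):
            findings.append(f"raw IP URL: {url}")
    return findings

if __name__ == "__main__":
    print(phishing_indicators(
        sender_domain="paypa1-support.example",
        display_name="PayPal Security",
        body="Your account will be suspended. Verify your account immediately.",
        links=["http://192.0.2.10/login"],
    ))
```

Real SOPs layer many more signals (SPF/DKIM/DMARC results, attachment analysis, reputation feeds); the point is only that the detection workflow does not change because the email text was machine-generated.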
Malware/Code Generation:
Attackers can use generative AI by feeding the model prompts that produce sophisticated malware, which is then delivered via social engineering. For example, an attacker can build the attack in multiple stages: first prompting the model to set up a command-and-control (C&C) server, then prompting it to create malware to be hosted on that server, and finally asking it to generate a customised URL that can be delivered as part of a phishing email. The point is that the attacker no longer needs to put in manual effort for each of these steps; by providing the right prompts, the AI model can create them itself.
Vulnerability Discovery:
Attackers can use AI tools to discover vulnerabilities and learn exploitation methods for them within a short time frame. For example, an attacker can pick a high-severity vulnerability, use AI to query the CVE details, and ask for exploitation scenarios. Organisations have to ensure that AI tools are used only by legitimate individuals, and access to open-source chatbot models must be strictly restricted.
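As a defensive counterpart, defenders can pull the same CVE details themselves and feed them into patch prioritisation. The sketch below queries the public NVD REST API; the v2 endpoint, the `cveId` parameter, and the JSON layout used here are assumptions based on NVD's documented interface, so verify them against the current documentation before relying on this.

```python
# Minimal sketch: fetch a CVE record from the public NVD REST API (v2 layout assumed).
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def fetch_cve_summary(cve_id: str) -> dict:
    """Fetch a CVE record and return its id and English description."""
    resp = requests.get(NVD_URL, params={"cveId": cve_id}, timeout=30)
    resp.raise_for_status()
    items = resp.json().get("vulnerabilities", [])
    if not items:
        return {"id": cve_id, "description": "not found"}
    cve = items[0]["cve"]
    description = next(
        (d["value"] for d in cve.get("descriptions", []) if d.get("lang") == "en"),
        "no English description",
    )
    return {"id": cve["id"], "description": description}

if __name__ == "__main__":
    print(fetch_cve_summary("CVE-2021-44228"))  # Log4Shell, used purely as an example id
```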
CYBER THREATS TO AI:
While the previous section covered how threat actors can use AI for malicious ends, this section delves into potential threats to AI systems themselves.
Data Poisoning:
Attackers can poison the data that is fed to AI models. For example, suppose a financial institution feeds transaction data into an anti-fraud anomaly-detection model. If an attacker poisons that data, behaviour that should be flagged as suspicious is instead learned as normal, leading to false negatives. Organisations should enforce strict access control over who can touch the training data and the model; doing so helps prevent poisoning that would otherwise produce false output.
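A toy sketch of the effect, using synthetic data and scikit-learn rather than any real fraud pipeline: relabelling most of the "fraud" training rows as normal makes a simple classifier stop flagging similar fraud at test time.

```python
# Toy illustration of label-flipping data poisoning; all data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic transactions: two features, with fraud clustered away from normal traffic.
normal = rng.normal(loc=0.0, scale=1.0, size=(950, 2))
fraud = rng.normal(loc=4.0, scale=1.0, size=(50, 2))
X = np.vstack([normal, fraud])
y = np.array([0] * 950 + [1] * 50)

def detection_rate(labels: np.ndarray) -> float:
    """Train on the given labels and return the fraction of fresh fraud that gets flagged."""
    model = LogisticRegression(max_iter=1000).fit(X, labels)
    test_fraud = rng.normal(loc=4.0, scale=1.0, size=(200, 2))
    return model.predict(test_fraud).mean()

print("fraud detection rate, clean labels:   ", detection_rate(y))

# Poisoning: an attacker relabels 40 of the 50 fraud rows as normal before training.
poisoned = y.copy()
poisoned[-40:] = 0
print("fraud detection rate, poisoned labels:", detection_rate(poisoned))
```

With clean labels the classifier flags essentially all of the held-out fraud; after the labels are flipped, the same fraud pattern is learned as normal and the detection rate collapses, which is exactly the false-negative outcome described above.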
Data Leakage during Inference:
Inference is the stage where a trained machine-learning model applies what it has learned to new data, and leakage problems arise when the data fed during training differs from the real-time data the model sees in production. For example, the music streaming platform Spotify has reportedly struggled to pin down the root cause of data leakage in its systems. Organisations can use a DLP (data loss prevention) solution to monitor for unusual activity such as data tampering.
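One simple way to monitor for this mismatch is to compare the distribution a feature had at training time with what the model is actually receiving in production. The sketch below uses a two-sample Kolmogorov-Smirnov test on synthetic data; the feature, sample sizes, and the 0.05 alert threshold are illustrative assumptions.

```python
# Minimal sketch: alert when inference-time data drifts away from the training distribution.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

training_feature = rng.normal(loc=0.0, scale=1.0, size=5000)    # what the model was trained on
production_feature = rng.normal(loc=0.6, scale=1.3, size=1000)  # what arrives at inference time

stat, p_value = ks_2samp(training_feature, production_feature)
if p_value < 0.05:
    print(f"Drift detected (KS statistic={stat:.3f}, p={p_value:.4f}): "
          "inference-time data no longer matches the training distribution.")
else:
    print("No significant drift between training and production data.")
```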
Evasion:
Evasion means altering the model's input or behaviour so that its output favours the attacker. For example, against an EDR solution that uses AI/ML to detect malicious activity, an attacker could strip or obfuscate the malicious Windows API calls the model relies on as features, producing false negatives and allowing the attacker to maintain persistence. Data integrity checks, together with robust intrusion detection systems and firewalls, can be deployed to monitor for unusual activity.
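As one concrete form of the integrity checks mentioned above, the sketch below hashes the model and training-data artifacts a detection pipeline depends on and compares them against known-good digests recorded at deployment time. The file paths and baseline values are placeholders.

```python
# Minimal sketch: verify model/data artifacts against known-good SHA-256 digests.
import hashlib
from pathlib import Path

# Placeholder paths and digests; record the real values when the pipeline is deployed.
KNOWN_GOOD = {
    "models/edr_classifier.bin": "<sha256 recorded at deployment time>",
    "data/training_set.parquet": "<sha256 recorded at deployment time>",
}

def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifacts(baseline: dict[str, str]) -> list[str]:
    """Return the artifacts whose current hash no longer matches the baseline."""
    tampered = []
    for path, expected in baseline.items():
        if not Path(path).exists() or sha256_of(path) != expected:
            tampered.append(path)
    return tampered

if __name__ == "__main__":
    for artifact in verify_artifacts(KNOWN_GOOD):
        print(f"ALERT: integrity check failed for {artifact}")
```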
Model Extraction:
Model extraction is a method where an attacker collects input/output data from a victim model and uses it to train a substitute model, stealing the functionality of the target. For example, a financial company could harvest a competitor's AI model through repeated queries and use the responses to train its own model to replicate the original's functionality.
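A purely illustrative sketch of the idea, run entirely against a toy model trained locally: the "attacker" never sees the victim's training data, only the labels returned for its queries, yet a substitute fitted on those query/response pairs ends up agreeing with the victim on most inputs. All models and data here are synthetic.

```python
# Toy illustration of model extraction via query/response pairs (local models only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# The "victim": a model its owner exposes only through a prediction interface.
X, y = make_classification(n_samples=2000, n_features=10, random_state=1)
victim = DecisionTreeClassifier(max_depth=5, random_state=1).fit(X, y)

# The extractor only sends queries and records the returned labels.
queries = rng.normal(size=(5000, 10))
responses = victim.predict(queries)

# Train a substitute on the harvested query/response pairs.
substitute = LogisticRegression(max_iter=1000).fit(queries, responses)

# How closely does the substitute reproduce the victim on fresh inputs?
probe = rng.normal(size=(1000, 10))
agreement = (substitute.predict(probe) == victim.predict(probe)).mean()
print(f"substitute agrees with victim on {agreement:.0%} of probe queries")
```

Rate limiting, query monitoring, and watermarking of model outputs are among the controls discussed for detecting this kind of harvesting.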
Conclusion:
While AI models can aid humans in solving complex tasks, they can also behave very differently when tampered with, leading to significant damage. Organisations should understand the scope of the model they are training and ensure the data is not tampered with before it is fed to the model, since data is the main input source. AI cyber security is still evolving and will keep growing in the coming years.
In today’s rapidly evolving tech landscape, advancements in cloud technologies and generative AI have transformed industries. However, these innovations also introduce significant cyber threats. This article explores how attackers leverage AI for social engineering, malware creation, and vulnerability discovery, and highlights potential threats to AI systems such as data poisoning and model extraction. As AI cyber security continues to evolve, organisations must stay vigilant to protect their data and systems.
- Tags:
- Technology
- Web