Data theft is a primary objective of cyberattacks. Hackers target sensitive data—social security numbers, passwords that access bank accounts, hospital records—with plans to either steal or encrypt it to obtain a ransom.
However, the goal is usually different when artificial intelligence models are targeted. Rather than theft, hackers are typically focused on poisoning, which involves injecting false or biased data to compromise the performance of the AI model.
Yet, the focus of attacks on AI systems could soon change. As more companies leverage AI-driven systems to boost operational efficiency and achieve productivity gains, those systems create new databases that are valuable to cybercriminals. Consequently, stealing AI data, rather than poisoning it, could soon become the aim of cybercriminals.
Shifting from generic to specific data
The models that drive artificial intelligence are typically trained on generic data. The companies that build foundation models scrape the internet or other publicly available datasets, pursuing broad training that allows the AI to develop an understanding of general patterns.
As the models evolve, developers evaluate their output and refine it. The goal of that refinement is often to give companies a blank slate to work with, one that won’t introduce bias or other preconceptions into the work the AI model will be doing.
Once models are made operational, they begin learning from the specific data they receive to optimize their performance. This can include proprietary data provided by the company—internal documents, product information, customer records—that customizes the model and directs it toward company-specific goals.
As users begin interacting with the model, more specific data is generated. This can include usage patterns, user feedback, and sensitive data unique to specific users. For example, a financial institution using AI to drive personalized experiences for its users might collect data on the type of transactions an account holder makes, how often, with whom, and for how much. That type of data, which trains the AI model on how to serve and guide each account holder, is a favorite target of hackers.
Shifting from human-driven to AI-driven processes
As AI began to make inroads in the business world, it promised to increase operational efficiency and lower costs by reducing the need for human activity. A few years in, it appears that the promise is coming true as more and more companies report AI-driven productivity gains.
Hiring trends, RFPs, and Fortune 500 strategies show that AI is becoming interwoven into production systems across a wide variety of industries, streamlining operations and reducing dependence on human staffing. Over the next 16 to 24 months, the most profitable businesses will lean even more heavily on AI, weaving AI tooling ever more deeply into their production systems, and with more usage comes more data.
Companies should anticipate that as they shift their systems from human-centered to AI-centered, cyberattackers will shift their interest to the databases collecting AI-related data. To keep that data secure, cybersecurity strategies must also shift.
Shifting security systems to repel AI-focused attacks
To keep data secure, companies must recognize that each shift in their digital operations triggers another shift in the threat landscape. The value of previous strategies will need to be continually assessed against emerging attack vectors, as cybercriminals are as creative as they are relentless.
The most secure systems will be those that strive to reduce potential attack vectors outright rather than bolstering defenses at every possible point of entry. In a rapidly evolving threat landscape, one in which companies should expect hackers to shift their focus to extracting AI-related data, libraries of known threat signatures and point-by-point risk assessments often fail to shield businesses from breaches. Maintaining such a defense framework is also expensive and tedious.
Adopting a zero-trust approach can dramatically improve a company’s ability to detect and prevent new threats. Hackers consider every trusted user a prime target: gaining a user’s login credentials through social engineering or another type of attack facilitates a breach, often giving attackers access that goes unnoticed.
Zero trust assumes that no user is trusted by default. It builds security on the principle that systems should “never trust, always verify.” Protocols such as multi-factor authentication and “least privilege access” create a system in which access is difficult to gain and movement within systems is limited.
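As a concrete illustration, the sketch below shows what per-request, least-privilege authorization might look like in Python. The roles, scopes, and MFA flag are hypothetical stand-ins for whatever identity provider and policy engine a company actually uses.

```python
# A minimal sketch of "never trust, always verify" request handling.
# Roles, scopes, and the MFA flag are illustrative assumptions, not a
# reference to any specific vendor's product.
from dataclasses import dataclass

# Least-privilege scopes: each role is granted only the data it needs.
ROLE_SCOPES = {
    "support-agent": {"read:customer-profile"},
    "ml-pipeline": {"read:usage-events", "write:model-features"},
}

@dataclass
class Request:
    user_id: str
    role: str
    mfa_verified: bool   # multi-factor check performed for this session
    scope: str           # the single permission this request needs

def authorize(req: Request) -> bool:
    """Re-verify every request; nothing is trusted by default."""
    if not req.mfa_verified:
        return False
    return req.scope in ROLE_SCOPES.get(req.role, set())

# A stolen support credential cannot reach AI usage data:
print(authorize(Request("u42", "support-agent", True, "read:usage-events")))      # False
print(authorize(Request("u42", "support-agent", True, "read:customer-profile")))  # True
```

Because every call re-checks identity and scope, a compromised account yields only the narrow slice of data its role was explicitly granted.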
Incorporating zero-trust features at the engineering level of AI-driven systems takes the approach to the next level. Using Infrastructure-as-Code, for example, to automate the processes that drive system maintenance reduces attack vectors by limiting the amount of access companies need to grant to people.
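As one illustration of that idea, a short Infrastructure-as-Code program can declare a private AI data store and a narrowly scoped, machine-only role. The sketch below uses Pulumi’s Python SDK and is meant to run inside a Pulumi project via the CLI; the bucket, role, and service principal are assumptions made for the example, not a prescribed architecture.

```python
# A minimal Infrastructure-as-Code sketch using Pulumi's Python SDK.
# The point is that access is declared in code and reviewed like code,
# rather than handed out to people by hand.
import json
import pulumi_aws as aws

# Private bucket holding AI usage and fine-tuning data.
data_bucket = aws.s3.Bucket("ai-usage-data", acl="private")

aws.s3.BucketPublicAccessBlock(
    "ai-usage-data-block",
    bucket=data_bucket.id,
    block_public_acls=True,
    block_public_policy=True,
    ignore_public_acls=True,
    restrict_public_buckets=True,
)

# Role assumed by the inference service; no human users hold these credentials.
inference_role = aws.iam.Role(
    "inference-reader",
    assume_role_policy=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "ecs-tasks.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }],
    }),
)

# Least privilege: read-only access to this one bucket and nothing else.
aws.iam.RolePolicy(
    "inference-reader-policy",
    role=inference_role.id,
    policy=data_bucket.arn.apply(lambda arn: json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": f"{arn}/*",
        }],
    })),
)
```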
As companies embrace AI integration, they create a situation in which shared vulnerabilities increase security risks. The best way to manage those risks is to build security in proactively as systems are being designed, rather than bolting it on as a reactionary afterthought. Acknowledging that attacks on AI data are inevitable is essential to ensuring that breaches are not.
Yashin Manraj is the CEO of Pvotal Technologies. He has served as a computational chemist in academia, an engineer working on novel challenges at the nanoscale, and a thought leader building more secure systems at the world’s best engineering firms. His deep technical knowledge, spanning product development, design, business insight, and coding, gives him a unique vantage point for identifying and solving gaps in the product pipeline.
Pvotal’s mission is to build sophisticated enterprises with no limits, enterprises designed for rapid change, seamless communication, top-notch security, and scalability to infinity. Pvotal’s products and services create Infinite Enterprises that give business leaders total control over their technology systems and their businesses, along with peace of mind.
Photo by Philip Oroni via Unsplash