laitimes

How evolving AI regulation is impacting cybersecurity

author:Cloud and cloud sentient beings
How evolving AI regulation is impacting cybersecurity

Understanding evolving regulations is key to addressing AI risks. What this means for cybersecurity leaders.

译自 How evolving AI regulations impact cybersecurity,作者 Ram Movva; Aviral Verma。

While their business and technical colleagues are busy trying and developing new applications, cybersecurity leaders are looking for ways to anticipate and respond to new, AI-powered threats.

The impact of AI on cybersecurity has always been clear, but it's a two-way process. AI is increasingly being used to predict and mitigate attacks, but these applications are also inherently vulnerable. The automation, scale, and speed that everyone is excited about also apply equally to cybercriminals and threat actors. While far from mainstream applications, the malicious use of AI has been growing. From generative adversarial networks to large botnets and automated DDoS attacks, AI has the potential to generate a new type of cyberattack that can adapt and learn to evade detection and mitigation.

In this environment, how can we defend AI systems from attacks? What form will offensive AI take? What will the AI model look like for threat actors? Can we do penetration testing on AI? When should we start and why? As businesses and governments expand their AI pipelines, how will we protect the vast amounts of data they rely on?

It is these issues that have prompted the U.S. government and the European Union to prioritize cybersecurity, as they both work to develop guidelines, rules, and regulations to identify and mitigate new risk environments. This isn't the first time that a distinctly different approach has emerged, but that doesn't mean there isn't overlap.

Let's take a brief look at what's involved before moving on to what this means for cybersecurity leaders and CISOs.

An overview of the U.S. approach to AI regulation

In addition to executive orders, the decentralized approach to AI regulation in the U.S. is also reflected in states such as California developing their own legal guidelines. As home to Silicon Valley, California's decision could dramatically impact the way tech companies develop and implement AI, all the way down to the datasets used to train applications. While this definitely affects everyone involved in developing new technologies and applications, from the perspective of a pure CISO or cybersecurity leader, it's important to note that while the U.S. environment emphasizes innovation and self-regulation, the overall approach is risk-based. The regulatory environment in the U.S. emphasizes innovation while addressing the potential risks associated with AI technologies. Regulations focus on promoting responsible AI development and deployment, with a focus on industry self-regulation and voluntary compliance.

For CISOs and other cybersecurity leaders, it's important to note that the executive order directs the National Institute of Standards and Technology (NIST) to develop standards for red-team testing of AI systems. There is also a call for "the most powerful AI system" to conduct penetration tests and share the results with the government.

Overview of the EU Artificial Intelligence Act

The EU's more cautious approach included cybersecurity and data privacy from the outset, with mandatory standards and enforcement mechanisms in place. Like other EU laws, the AI Act is principle-based: organizations have a responsibility to demonstrate compliance through documentation that supports their practices.

For CISOs and other cybersecurity leaders, Section 9.1 raises a lot of attention. It noted

High-risk AI systems should be designed and developed with security design and security by default in accordance with the following principles. Given their intended use, they should achieve appropriate levels of accuracy, robustness, security, cybersecurity, and maintain these levels consistently throughout their lifecycle. Compliance with these requirements should include the implementation of state-of-the-art measures based on a specific market segment or scope of application.

At the most fundamental level, Section 9.1 means that cybersecurity leaders in critical infrastructure and other high-risk organizations will need to conduct AI risk assessments and comply with cybersecurity standards. Section 15 of the Act covers cybersecurity measures that can be taken to protect, mitigate, and contain attacks, including attacks that attempt to manipulate training datasets ("data poisoning") or models. For CISOs, cybersecurity leaders, and AI developers, this means that anyone building high-risk systems needs to consider the impact of cybersecurity from the start.

Key differences between the EU AI Act and the US approach to AI regulation

features EU Artificial Intelligence Act The American approach
Regulatory approach Principle-based, mandatory standards and enforcement mechanisms Risk-based, with an emphasis on innovation and self-regulation
important point Cybersecurity and data privacy Innovative and responsible AI development
Compliance The onus is on the organization to demonstrate compliance Voluntary compliance
Penetration testing Requires penetration testing of high-risk AI systems Penetration testing of the "most powerful AI systems" is encouraged
Data Privacy Strict data privacy regulations Less data privacy protection
implement has been implemented It is under development
The general idea Preventive, risk-based Market-driven, innovation-centric
--- --- ---
statute Specific rules for "high-risk" AI, including cybersecurity Broad principles, sectoral guidelines, with a focus on self-regulation
Data Privacy GDPR applies, strict user rights and transparency There is no comprehensive federal law, and state regulations are cobbled together
Cybersecurity standards Mandatory technical standards for high-risk AI Encourage voluntary best practices, industry standards
execute Fines, injunctions and other sanctions are imposed for non-compliance Agency investigations, potential trade restrictions
transparency Explainability requirements for high-risk AI Requirements are limited and there is a strong focus on consumer protection
Accountability A clear framework of liability for damage caused by AI Responsibility is unclear and is usually attributed to the user or developer

What AI regulations mean for CISOs and other cybersecurity leaders

Although the approaches are different, both the EU and the US advocate a risk-based approach. And, as we've seen with GDPR, there's a lot of room for coordination as we move toward collaboration and consensus on global standards.

From the perspective of a cybersecurity leader, it's clear that AI regulations and standards are in the early stages of maturity and will almost certainly change as we continue to understand the technology and applications. As highlighted by the regulatory approach in the US and EU, cybersecurity and governance regulations are much more mature, at least because the cybersecurity community has invested significant resources, expertise, and effort to raise awareness and knowledge.

The overlap and interdependence between AI and cybersecurity means that cybersecurity leaders are more acutely aware of the emerging consequences. After all, many people have been using artificial intelligence and machine learning for malware detection and mitigation, malicious IP blocking, and threat classification. Currently, the CISO will be responsible for developing a comprehensive AI strategy to ensure privacy, security, and compliance across the business, including the following steps:

  • Identify the use cases where AI will bring the most benefits.
  • Identify the resources needed to successfully implement AI.
  • Establish a governance framework to manage and protect customer/sensitive data and ensure compliance with regulations in each country where your organization operates.
  • Clearly assess and evaluate the impact of AI implementations across your business, including your customers.

Stay up-to-date with the AI threat landscape

As AI regulations continue to evolve, the only thing that is truly certain at the moment is that the United States and the European Union will play a key role in setting standards. Rapid change means we're sure to see changes in regulations, principles, and guidelines. Whether it's autonomous weapons or self-driving cars, cybersecurity will play a central role in the way these challenges are addressed.

Both speed and complexity suggest that we are likely to evolve from country-specific rules to a broader global consensus around key challenges and threats. In the light of the U.S.-EU cooperation to date, there is clear common ground to draw upon. The GDPR (General Data Protection Regulation) shows that the EU's approach ends up having a significant impact on the laws of other jurisdictions. Some degree of coordination seems inevitable, at least because of the gravity of the challenge.

As with GDPR, it's more of a matter of time and cooperation. Again, GDPR provides a useful case history. In this case, cybersecurity has been elevated from a technical requirement to a requirement. Security will become an indispensable requirement in AI applications. In situations where developers or businesses may be held accountable for their products, it's critical that cybersecurity leaders stay up-to-date on the architectures and technologies used in their organizations.

Over the next few months, we will see how EU and US regulations affect organizations that are building AI applications and products, and how the emerging AI threat landscape evolves.

Read on