Since the emergence of OpenAI’s ChatGPT, alongside a slew of other consumer- and enterprise-facing artificial intelligence (AI) products, AI has enjoyed a wave of publicity and visibility as a technology trend.
Because of the current prominence of AI as a buzzword, many commentators pay little heed to its technological details and definitions. But there are differences between true AI, neural networks and what simply amounts to advanced decision-making tools, in the same way that there are differences between generative AI (e.g., ChatGPT) and interpretative AI. For simplicity’s sake, the authors consider AI to encompass all such technologies that go beyond the scope of conventional software tools in terms of the level of sophistication they add to any business or technology processes.
The technology community has a long way to go in terms of understanding the full potential and limitations of AI. AI will require ongoing thinking—and rethinking—not only about methods of doing business and using the technology but also about ensuring the security and trustworthiness of everything that AI touches.
It is worth exploring how select cybersecurity risk management practices and current regulations aim to preempt and limit risk to users and stakeholders, and how they meet—or fail to meet—this required rethinking of strategy. There are fundamental considerations necessary to create a security and privacy risk management baseline for any AI implementation or deployment.1 When this baseline is understood, users can take advantage of the opportunities AI presents for governance, risk and compliance (GRC).2
Existing and Evolving Regulatory and Risk Management Structures
Cybersecurity and risk management methodologies fall into two main categories: government-issued regulations and voluntary best practices. Of course, there is overlap between the two. For example, government agencies or their suppliers may be required to adhere to existing frameworks, to best practice catalogs issued by government agencies, or to government-issued recommended principles and questionnaires that can be interpreted as de facto rules.
Government-issued regulations are broadly intended to ensure that the rights of individuals are respected and that business practices or failures cannot endanger social or economic stability. This includes reducing the risk that cyberincidents cause business failures whose impact extends far beyond the organization’s own survival. In general, regulations take a high-level approach, providing guardrails that aim to prevent negligent or abusive behavior.
In turn, commonly used operational cybersecurity best practices, risk management frameworks and control catalogs provide more detailed structures that developers and users of technologies, such as AI solutions, can refer to.
Cybersecurity risk management frameworks and control catalogs, including the US National Institute of Standards and Technology (NIST) Risk Management Framework (RMF)3 and the International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC) standards ISO/IEC 27001,4 and ISO/IEC 27002:2022,5 have been complemented by updates such as the NIST AI RMF6 and ISO/IEC 23894:2023,7 respectively; the latter builds on the general foundation of ISO 31000:2018.8
Although numerous jurisdictions around the world have implemented or are in the process of developing both mandatory rulings and voluntary guidance,9 the European Union’s draft AI Act is currently among the most prominent, advanced and comprehensive.10
The EU AI Act complements various existing and proposed EU cybersecurity and privacy regulations, including:
- EU General Data Protection Regulation (GDPR)—Mandates fundamental rights of privacy and anonymity for individuals, binding any data controller with any of a wide range of connections to the European Union11
- EU Network and Information Security (NIS2) Directive—Creates a common baseline for cybersecurity and risk management across the European Union, specifically in critical economic sectors12
- EU Digital Operational Resilience Act—Similar in intent to NIS2 but focused on the financial sector13
- EU Cyber Resilience Act—Creates incident reporting and security certification requirements for information and communication technology (ICT) products14
Focal Areas of Existing AI Regulations
The state of AI rulings is constantly shifting as regulators seek to adapt to fast technological innovation (and, unfortunately, to abusive and risky behavior). Nonetheless, several core principles related to cybersecurity, data privacy and technology risk management can be identified in most major existing AI regulations. Although not all regulations address every principle, most touch on:
- Transparency and traceability of AI decision making—These principles ensure that decisions made by AI systems can be explained and retraced, promoting openness and understanding for users and regulators alike and fostering trust in the technology.
- Recourse to human decision making in case of error—This ensures that there is a mechanism in place for human intervention to correct or override AI decisions when they are wrong or lead to unintended consequences, providing a safety net that mitigates risk.
- Accountability by qualified people—This underscores the need for professionals with the right skills and knowledge to be responsible for the actions and decisions made by AI, ensuring that there is always someone to answer for any issues or malfunctions that arise.
- Proportionality, clear rules and informed decision making when involving AI in high-risk information and systems—These principles ensure that the involvement of AI is measured and appropriate, particularly in sensitive areas, and that decisions are made based on clear, predefined criteria and comprehensive information to minimize risk.
Combined, these principles are entirely congruent with the fundamental objective of regulation: to limit the risk that a person, tool or organization poses to industry and society.
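To make the first two of these principles concrete, the following is a minimal sketch in Python of how a deployment team might wrap a model so that every decision is logged for later retracing (transparency and traceability) and low-confidence decisions are routed to a human reviewer (recourse to human decision making). The model interface, the 0.85 confidence threshold and the logging destination are illustrative assumptions, not requirements drawn from any specific regulation.

```python
import json
import logging
import uuid
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_decisions")

# Illustrative threshold: decisions below this confidence go to a human.
HUMAN_REVIEW_THRESHOLD = 0.85

def request_human_review(record: dict) -> dict:
    # Placeholder for a real review queue; here the decision is only flagged.
    return {"status": "pending_human_review", "decision_id": record["decision_id"]}

def decide(model, features: dict) -> dict:
    """Run one model decision with an audit trail and human-recourse routing.

    `model` is assumed to expose a predict_with_confidence() helper that
    returns (label, confidence); any classifier exposing a confidence score
    could be adapted to this pattern.
    """
    record = {
        "decision_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": getattr(model, "version", "unknown"),
        "inputs": features,  # retained so the decision can be retraced later
    }
    label, confidence = model.predict_with_confidence(features)  # assumed API
    record.update(output=label, confidence=confidence,
                  routed_to_human=confidence < HUMAN_REVIEW_THRESHOLD)
    log.info(json.dumps(record))  # in practice: an append-only audit store

    if record["routed_to_human"]:
        # Recourse: a qualified person confirms or overrides the AI output.
        record["output"] = request_human_review(record)
    return record
```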
Applicability of Existing Practices and Rulings
AI is software, regardless of whether it is defined as a product or a service. Thus, many existing principles, rules and practices can also be applied to AI.
However, many of today’s cybersecurity practices and laws were not designed with AI in mind, which can lead to significant challenges. For instance, the EU GDPR is widely interpreted as mandating a right to explanation, requiring organizations to explain how they make automated decisions about individuals. Yet AI algorithms, especially those based on deep learning, often function as black boxes whose complex, hidden internal workings are difficult to interpret. This opacity can make it hard to provide a clear explanation of how an AI system reached a particular decision.
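By way of contrast, the sketch below shows the kind of per-decision explanation that is straightforward for an inherently interpretable model. For a logistic regression, the log-odds of a decision decompose exactly into per-feature contributions (coefficient times feature value), something deep learning models do not offer directly. The scikit-learn usage and the feature names are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for, e.g., a credit-scoring dataset.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["income", "debt_ratio", "tenure", "late_payments"]  # illustrative

model = LogisticRegression().fit(X, y)

# Explain one decision: for a linear model, the log-odds decompose exactly
# into per-feature contributions (coefficient * feature value) plus intercept.
x = X[0]
contributions = model.coef_[0] * x
prob = model.predict_proba(x.reshape(1, -1))[0, 1]

print(f"P(favorable outcome) = {prob:.3f}")
for name, c in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
    print(f"  {name:>14}: {c:+.3f} toward the log-odds")
print(f"  {'intercept':>14}: {model.intercept_[0]:+.3f}")
```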
Another example lies in the realm of adversarial attacks unique to machine learning (ML) systems, wherein small, carefully crafted changes to input data can lead to drastically incorrect outputs. Traditional risk management practices are not prepared for this new class of threats and fail to capture the dynamic and complex risk associated with AI.
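The following self-contained sketch illustrates the mechanics of such an attack in its simplest form, the fast gradient sign method (FGSM), against a toy logistic regression. The weights and input values are invented for illustration; note that no feature changes by more than 0.2, yet the predicted class flips.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy, fixed logistic-regression model (weights chosen for illustration).
w = np.array([0.5, -0.3, 0.8])
x = np.array([0.2, -0.1, 0.1])  # a legitimate input, true class y = 1
y = 1.0

p = sigmoid(w @ x)
print(f"original input:    P(class 1) = {p:.3f} -> predicted class {int(p > 0.5)}")

# FGSM: perturb the input in the direction that most increases the loss.
# For logistic regression, d(loss)/dx = (p - y) * w.
epsilon = 0.2  # maximum per-feature change
grad = (p - y) * w
x_adv = x + epsilon * np.sign(grad)

p_adv = sigmoid(w @ x_adv)
print(f"adversarial input: P(class 1) = {p_adv:.3f} -> predicted class {int(p_adv > 0.5)}")
print(f"largest per-feature change: {np.max(np.abs(x_adv - x)):.2f}")
```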
Moreover, the speed at which AI technology evolves outpaces the ability of lawmakers to create, enact and enforce suitable regulations, leaving areas of AI use effectively unregulated. For example, the use of AI for deepfakes, autonomous weapons or predictive policing may not be fully covered by current legislation.
Regulation is often most effective when it is based on principles rather than on specific, currently high-visibility issues. This is why legislative frameworks such as Canada’s Artificial Intelligence and Data Act (AIDA)15 and the EU AI Act have been developed to address fundamental matters such as the criticality of the systems and applications that AI affects. Interestingly, by contrast, the proposed US AI Bill of Rights includes more specific elements, including a contentious carve-out for law enforcement applications, that could bear the risk of incomplete or inconsistent interpretation and application of the law.16
Gaps Needing Attention
Current gaps in AI cybersecurity highlight areas requiring immediate attention. One such gap is the lack of a comprehensive risk management framework that addresses AI-specific threats, including adversarial attacks, bias in AI decision making and the risk of overreliance on AI systems.
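As a hypothetical starting point only, a risk register can at least name these AI-specific threat classes alongside conventional ones so that they are assessed rather than overlooked. The categories and the scoring scale below are illustrative and are not drawn from any published framework.

```python
from dataclasses import dataclass, field
from enum import Enum

class AIThreat(Enum):
    ADVERSARIAL_INPUT = "adversarial manipulation of model inputs"
    TRAINING_DATA_POISONING = "tampering with training data"
    MODEL_BIAS = "discriminatory or skewed decision making"
    OVERRELIANCE = "uncritical acceptance of AI output"

@dataclass
class AIRiskEntry:
    system: str
    threat: AIThreat
    likelihood: int  # 1 (rare) .. 5 (almost certain), illustrative scale
    impact: int      # 1 (negligible) .. 5 (severe)
    mitigations: list[str] = field(default_factory=list)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

register = [
    AIRiskEntry("loan-approval model", AIThreat.MODEL_BIAS, 4, 5,
                ["periodic fairness audits", "human review of denials"]),
    AIRiskEntry("malware classifier", AIThreat.ADVERSARIAL_INPUT, 3, 4,
                ["adversarial training", "input sanity checks"]),
]
for entry in sorted(register, key=lambda e: -e.score):
    print(f"{entry.score:>2}  {entry.system}: {entry.threat.value}")
```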
Governance and oversight mechanisms must also be updated to effectively monitor AI systems. For example, traditional IT audits may not be sufficient to identify biases in AI algorithms or to detect potential vulnerabilities to adversarial attacks.
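One such AI-specific audit check is easy to express in code. The sketch below computes the demographic parity difference, the gap in favorable-outcome rates between groups, over a sample of model decisions; the data and the 0.1 alert threshold are invented for illustration, and a real audit would combine several complementary fairness metrics.

```python
import numpy as np

# Hypothetical audit sample: model decisions (1 = favorable outcome) and a
# protected attribute recorded for audit purposes only.
decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1])
group = np.array(["a", "a", "a", "a", "a", "a", "b", "b", "b", "b", "b", "b"])

rates = {str(g): float(decisions[group == g].mean()) for g in np.unique(group)}
parity_gap = max(rates.values()) - min(rates.values())

print(f"favorable-outcome rate by group: {rates}")
print(f"demographic parity difference:   {parity_gap:.2f}")

# Illustrative audit threshold: flag the model for investigation if the gap
# between groups exceeds 0.1 (10 percentage points).
if parity_gap > 0.1:
    print("ALERT: disparity exceeds audit threshold; investigate model and data")
```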
Both regulations and commonly accepted best practices rely strongly on ethical guidelines. Many of these are still in their infancy and require further definition and consensus. Issues such as bias, transparency and accountability are particularly complex in the context of AI. For instance, how can it be ensured that AI systems are not unintentionally discriminatory? How can AI systems be made transparent enough to support audits and oversight while still protecting proprietary information? Who is accountable when an AI system makes a decision that leads to harm?
Equally important, how can the speed of innovation be balanced with controls that prevent or mitigate negative impact? In cybersecurity, military, law enforcement and related public safety applications especially, the ability to respond quickly to new and innovative attacks and other threats that respect no legal or ethical restrictions is a key consideration when deploying security technology.
AI has enormous potential as a tool for countering abusive activities, such as financial or market manipulation, and also for situations wherein a fast, technologically powerful solution can prevent widespread harm even when there is no underlying abusive motivation or actor (e.g., analyzing climate patterns, developing medicines or any other use case that relies on rapid processing of large datasets). What is an acceptable delay for reviewing AI solutions in these scenarios to ensure that their potential negative impact is minimal?
The consistency of ethical considerations is also strongly affected by cultural norms, and AI is software that is increasingly used across borders. Regulators and users of AI applications must keep in mind that Australia, Brazil, China, the European Union, Japan and the United States have vastly differing attitudes toward privacy, individual rights, stability, transparency, the responsibility of providers, regulatory scope and other factors that affect how AI is governed. Therefore, international bodies are unlikely to produce broadly accepted, shared fundamental principles that would lead to more consistent AI rulings in the near term.
Creating Future-Proof Regulations and Voluntary Best Practices
Future-proofing regulations and best practices in the rapidly evolving field of AI is a challenging task. It necessitates continuous learning, adaptation and agility in responding to emerging threats and challenges.
To keep pace with technological evolution, organizations and regulatory bodies must adopt a proactive and forward-looking approach. This may involve adopting anticipatory regulations, wherein potential impacts and risk of new technologies are considered before they are fully developed.
Regular revisions of frameworks and regulations are necessary to accommodate advancements in technology. This can be achieved by integrating flexibility into regulations and adopting outcome-based rather than prescriptive regulations, allowing for diverse methods of compliance as technology evolves.
Fostering a culture of education and understanding of AI’s risk and benefits among stakeholders is also essential. This could involve providing regular training updates for employees and ensuring that upper management is informed and aware of the latest developments in AI and cybersecurity.
Lastly, because cyberthreats know no borders, international collaboration and standardized approaches are crucial for consistency and efficiency. International standards can help ensure that AI technologies are secure, ethical and interoperable across jurisdictions, and they can facilitate cooperation and intelligence sharing in response to cyberthreats.
Conclusion
AI of all types has proven to be both a major benefit for and a major threat to a wide range of social and economic activities. AI will profoundly disrupt privacy, security, resilience and integrity; in this respect, it is no different from past revolutionary, disruptive technological innovations such as the Internet, the transistor, the airplane or the automobile.
Regulations, rules and the increasingly prevalent best practices used by industries to define acceptable applications of technology are fundamentally designed to limit the risk of abusive or irresponsible use of new tools. To do so effectively, they must be clear, consistent, actionable and adaptable to unforeseen developments (to a certain degree). Many rule-making bodies already understand this need and have implemented, or are in the process of implementing, guidance to ensure the least possible harm from AI as it evolves. Based on current trends, there is cause to be optimistic that, at least in the cybersecurity and privacy space, this process will continue in the years to come.
Endnotes
1 Cozzupoli, J.; “Balancing Privacy and Security in AI Systems: Navigating the Cybersecurity Conundrum,” Cybersecurity Advisors Network, 3 May 2023, http://cybersecurityadvisors.network/2023/05/03/balancing-privacy-and-security-in-ai-systems-navigating-the-cybersecurity-conundrum
2 Cozzupoli, J.; “Embracing AI for Enhanced GRC Strategy and Implementation,” Cybersecurity Advisors Network, 31 May 2023, http://cybersecurityadvisors.network/2023/05/31/embracing-ai-for-enhanced-grc-strategy-and-implementation
3 US National Institute of Standards and Technology (NIST), NIST Risk Management Framework, USA, 2016, http://csrc.nist.gov/Projects/risk-management/sp800-53-controls/downloads
4 International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC), ISO/IEC 27001 Information Security, Cybersecurity and Privacy Protection, Switzerland, 2022, http://www.iso.org/standard/27001
5 International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC), ISO/IEC 27002:2022 Information Security, Cybersecurity and Privacy Protection—Information Security Controls, Switzerland, 2022, http://www.iso.org/standard/75652.html
6 US National Institute of Standards and Technology (NIST), Artificial Intelligence Risk Management Framework (AI RMF 1.0), USA, January 2023, http://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf
7 International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC), ISO/IEC 23894:2023 Information Technology–Artificial Intelligence–Guidance on Risk Management, Switzerland, 2023, http://www.iso.org/standard/77304.html
8 International Organization for Standardization (ISO), ISO 31000:2018 Risk Management–Guidelines, Switzerland, 2018, http://www.iso.org/standard/65694.html
9 Kohn, B.; F. U. Pieper; “AI Regulation Around the World,” TaylorWessing, May 2023, http://www.taylorwessing.com/en/interface/2023/ai---are-we-getting-the-balance-between-regulation-and-innovation-right/ai-regulation-around-the-world
10 European Commission, “A European Approach to Artificial Intelligence,” http://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence
11 GDPR.eu, “Complete Guide to GDPR Compliance,” http://gdpr.eu/
12 European Parliament, “The NIS2 Directive: A High Common Level of Cybersecurity in the EU,” 8 February 2023, http://www.europarl.europa.eu/thinktank/en/document/EPRS_BRI(2021)689333
13 Council of the European Union, “Digital Finance: Council Adopts Digital Operational Resilience Act,” 28 November 2022, http://www.consilium.europa.eu/en/press/press-releases/2022/11/28/digital-finance-council-adopts-digital-operational-resilience-act/
14 European Commission, “Cyber Resilience Act,” 15 September 2022, http://digital-strategy.ec.europa.eu/en/library/cyber-resilience-act
15 Government of Canada, The Artificial Intelligence and Data Act (AIDA)—Companion Document, Canada, 13 March 2023, http://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act-aida-companion-document
16 The US White House, “Blueprint for an AI Bill of Rights,” http://www.whitehouse.gov/ostp/ai-bill-of-rights/; Lee, N. T.; J. Malamud; “Opportunities and Blind Spots in the White House’s Blueprint for an AI Bill of Rights,” Brookings, 19 December 2022, http://www.brookings.edu/articles/opportunities-and-blind-spots-in-the-white-houses-blueprint-for-an-ai-bill-of-rights/
JOE COZZUPOLI | CISM, CISSP
Is a seasoned security expert with more than 20 years of experience advising chief information security officers (CISOs) and top executives on technology-driven business goals. He is a respected voice in cybersecurity, often keynoting at leading events and consulting for Fortune 500 companies worldwide, guiding top-tier boards on cyberrisk management. He proactively protects organizations against ransomware and fraud threats, and he nurtures future cybersecurity talent via workshops with nonprofits on topics ranging from cloud security to CISO mentoring.
JOHN SALOMON
Has spent more than 25 years working in cybersecurity, risk management, operational resilience, technology and strategy around the world. He has extensive experience in financial services and numerous other critical sectors, in roles ranging from keyboard jockey to senior leadership. He is a board advisor, consultant, mentor and investor based in Spain.