Introduction
The cyber domain presents unique and daunting challenges for policymakers seeking to reduce threats and avoid harms caused by malicious cyber activity. Cyber threats are both persistent and omnipresent, as the technical capabilities and tools required to generate them are widely available, and access to them is difficult (if not impossible) to strictly control. Malicious cyber activities have thus proven difficult to manage through traditional approaches to deterrence, arms control, or law enforcement. Despite the difficulty of addressing cyber threats with traditional security tools, malicious cyber activities can produce significant real-world harms for private individuals, businesses, organizations, and governments alike. In extreme cases, they present risks of severe disruptions to critical infrastructure or security systems that have the potential to escalate into major international crises, and even armed conflict.
Unfortunately, progress in implementing behavioral norms and understanding how States interpret existing international law as applying to cyberspace has been slow and uneven. As a result, calls for accountability are growing, most notably from UN Secretary-General António Guterres. In 2023 he recommended the establishment of "…an independent multilateral accountability mechanism for malicious use of cyberspace by States to reduce incentives for such conduct. This mechanism could enhance compliance with agreed norms and principles of responsible State behaviour."1António Guterres, A New Agenda for Peace, Policy Brief 9, United Nations, July 2023, p. 27.
Since 2022, Stimson's Cyber Program has been researching the implementation of international norms and law in other international threat areas: for example, conventional arms control; the regulation of chemical, biological, radiological, and nuclear materials; management of the private military and security industry; and military uses of outer space. It has also examined various dimensions of cyber accountability through a webinar series, in-person convenings and events, and a collaboration with the EU Institute for Security Studies (EU ISS).
The primary purpose of our work over the past two years has been to draw out lessons about how transnational threats have been managed in these other issue areas (especially through the establishment of accountability mechanisms that help shape incentive structures to encourage good practices and discourage harmful or proscribed behaviors) that may be appropriately adapted or applied to the cyber domain. This report is thus intended to provide policymakers and other stakeholders in the cyber community with a "toolkit" of policy approaches that have already been tried and tested in other contexts, and to offer pragmatic paths forward to enhance accountability in the cyber domain, thereby improving deterrence of malicious cyber activities.
Understanding Accountability and Deterrence
This report is focused on the concept of accountability and how it might be applied to malicious activity in cyberspace to create more positive incentive structures and better deter dangerous, disruptive, and destructive behaviors.2Why the emphasis on accountability? According to James A. Lewis, a senior researcher with the Center for Strategic and International Studies, an effective international cyber strategy must focus upon the following three elements: "how to build resilience, how to create a collaborative defense, and how to produce accountability in cyberspace (and this should include a discussion of when and how to disrupt opponent operations)." James Andrew Lewis, "Deterrence and Cyber Strategy," Center for Strategic and International Studies, November 15, 2023, https://www.csis.org/analysis/deterrence-and-cyber-strategy.
Interest in the concept of cyber accountability, and in how accountability might be strengthened in the cyber policy domain, has been steadily growing in recent years. Various scholars and policy experts have contributed to a growing body of literature on the topic, at times in relation to concepts of transparency and responsible behavior.3See, for example: Jason Healey, Creating Accountability for Global Cyber Norms, Center for Strategic and International Studies, February 23, 2022; and Patryk Pawlak, Accountability in Cyberspace: The Holy Grail of Cyber Stability?, EU Cyber Direct, March 2024. Relevant research and events have also been produced by The Hague Program on International Cyber Security at Leiden University, the EU Institute for Security Studies' EU Cyber Direct Program, and the Royal United Services Institute's project on Responsible Cyber Behavior. Much of this research builds on other efforts to strengthen awareness and understanding of agreed behavioral norms and the applicability of international law to cyberspace. Understandably, research and dialogue about cyber accountability have often included consideration of cyber attribution, whether technical, legal, or political.
In cyberspace, behavioral norms stem from existing state obligations under international law, as well as from collective and national interpretations of how international law applies. In 2013, a UN Group of Governmental Experts (GGE) agreed that international law is applicable to state conduct in the use of information and communications technology (ICT).4UN General Assembly, Group of Governmental Experts on Developments in the Field of Information and Telecommunications in the Context of International Security, A/68/98*, June 24, 2013. In 2015, a subsequent GGE outlined and adopted a set of 11 non-binding, voluntary norms for state behavior focused on peacetime use of ICTs.5UN General Assembly, Group of Governmental Experts on Developments in the Field of Information and Telecommunications in the Context of International Security, A/70/174, July 23, 2015. Eight norms describe actions that states want to encourage, while the other three involve actions that countries should avoid. The framework is primarily about promoting interstate cooperation, respecting human rights and privacy, protecting critical infrastructure, safeguarding global supply chains, providing assistance when required, and preventing the malicious use of digital technologies on states' national territories. See https://www.aspi.org.au/cybernorms for more detail. The applicability of international law and the norms have been repeatedly endorsed by the entirety of the UN membership through UN General Assembly resolutions. International law and norms, coupled with confidence-building measures and underpinned by cyber capacity-building, are increasingly referred to by diplomats, policymakers, and others as the UN Framework for Responsible Behavior in Cyberspace (hereafter, the Framework). Yet there are other relevant foundations for assessing accountability, including laws on cybercrime, national cyber security strategies, and regional policies or frameworks.
In international relations, accountability is usually viewed through a state-centric lens, as the ability to enforce norms, standards, or rules among and within states by attributing responsibility for undesirable behaviors. It is perhaps most often interpreted in this negative form: enforcing rules by imposing costs, seeking to prevent "bad" behavior through the threat of consequences.
This type of negative accountability has generally been the preferred model in cyberspace.6R. Crootof, "International Cybertorts: Expanding State Accountability in Cyberspace," Cornell Law Review, 103(3), 2018, 565–644; Elena Chernenko, Oleg Demidov, and Fyodor Lukyanov, Increasing International Cooperation in Cybersecurity and Adapting Cyber Norms; Martha Finnemore and Duncan B. Hollis, "Beyond Naming and Shaming: Accusations and International Law in Cybersecurity," European Journal of International Law, Volume 31, Issue 3, August 2020, Pages 969–1003, https://doi.org/10.1093/ejil/chaa056. Negative accountability in the cyber context refers to obligations by actors (usually states) to refrain from malicious actions that would violate agreed norms, and to collectively punish such violations by imposing costs and consequences on the perpetrator(s). This definition also features in the way state actors understand accountability and implement it through national cyber strategies.7The White House, National Cybersecurity Strategy of the United States, March 2023, https://www.whitehouse.gov/wp-content/uploads/2023/03/National-Cybersecurity-Strategy-2023.pdf. This conception of accountability is closely related to the concept of deterrence, which relies on the credible threat of imposing costs on those who engage in proscribed behaviors, such that the anticipated costs of the action outweigh the potential benefits.
However, for a variety of reasons, malicious cyber activities have proven especially difficult to deter. Firstly, it is sometimes difficult to determine exactly who is responsible for carrying out malicious cyber actions due to the technical sophistication of the malign actor, or the limited technical capabilities of their intended target(s). Secondly, even in cases where strong technical attribution is possible, the victim(s) may not have the capabilities necessary to impose sufficient costs on the perpetrator(s), or they may be unwilling to do so (or even to make an attribution) based on political considerations. Further, there may not be adequate international agreement on the legal process by which criminal charges or civil suits can be brought, or the standards of evidence to be presented against the alleged perpetrator(s), making it difficult to pursue legal remedy reliably or effectively for harms caused by malicious cyber activities.
This helps give credence to understanding accountability in its positive sense: being answerable for fulfilling an obligation, such as actions or policies that must be enacted and upheld to maintain cyber resilience.
Strong defenses against harmful or destabilizing actions can reduce the benefits a malicious actor can expect to gain from such an act and make it costlier for malicious actors to conduct harmful cyber operations.
Broader acceptance of a positive accountability lens might offer a promising direction for deterrence through mechanisms for accountability. By building resilience through positive obligations that actors can voluntarily abide by, supported by a set of incentive structures, and developed with multiple stakeholders in mind, the international community as a whole can deter undesirable or disruptive outcomes in specific contexts or use cases in the cyber domain. Cyberspace is a domain with multiple stakeholders, including states, intergovernmental organizations, multinational corporations, non-governmental organizations, and even private individuals and groups that operate internationally, and is thus inherently complex. It therefore requires a pluralistic and multi-dimensional approach that goes beyond accountability in the negative sense of enforcing rules by imposing costs.8Ruth W. Grant and Robert O. Keohane, "Accountability and Abuses of Power in World Politics," American Political Science Review, 99(1), 2005, 29–40.
Approaches to Cyber Accountability
This leads back to the initial premise of this report in which cyber deterrence is reimagined and repurposed through an accountability lens. Our research considers both forms of accountability in a broader context of global cyber governance and the full ecosystem of actors involved: national governments, international and intergovernmental organizations, the private sector, law enforcement, civil society, and researchers.
While cyberspace possesses unique characteristics, the malicious and strategic use of ICT can also be understood as yet another threat domain posing security challenges and risks. Approaches to managing its threats and risks can therefore be informed by experiences from other transnational threats, from conventional weapons to weapons of mass destruction (WMD) to chlorofluorocarbons. Formal and informal mechanisms, ranging from diplomatic pressure to sanctions, prosecutions, and military action, exist for holding states and individuals accountable for actions that cause or could cause harm, and for developing the networks and expertise needed to continually improve these mechanisms. Although some work has been undertaken to promote the implementation of cyber norms and to advance accountability, there are very few examinations of lessons learned in other domains, or of how those lessons might be applied to cyberspace.
Research Methodology and Report Overview
The initial phase of this research project sought to identify areas that have some level of international cooperation around managing risk through established norms of behavior and models of accountability, and that might offer useful analogies to the challenges faced in the cyber domain. A critical consideration for case selection was whether the area under consideration faced similar governance challenges to cyber, such as difficulties in attribution, accountability, liability, and reliable consequences, or whether it shared characteristics such as a diversity of actors, governmental and non-governmental, with disparate interests, varied risks, and cross-domain intersectionality. After an initial review of more than 30 different transnational security challenges and issue areas, the following issues were selected for further research through case studies designed to identify potentially useful analogies, lessons, and models that may be instructive for advancing accountability and addressing challenges in cyber diplomacy and governance.
In Chapter 2, Allison Pytlak outlines the extant literature on arms control and nonproliferation. She highlights the main contours of a longstanding debate amongst academics and policymakers on the feasibility of applying traditional arms control and non-proliferation approaches to international cyber security.
Pytlak offers two case studies of conventional arms control mechanisms, the Arms Trade Treaty and the Wassenaar Arrangement, and presents lessons from both. A key point from the analysis is the importance of focusing on the regulation of behavior and activity rather than on specific technologies. In both instances, Pytlak highlights the importance of incentives for compliance, and points to the need to revisit the concept of dual-use for a cyber and digital context. She concludes that a toolbox of formal and informal mechanisms and tools will enable policy and regulatory responses that are more focused on specific cyber threats and challenges.
Continuing with the theme of arms control and non-proliferation, in Chapter 3 Debra Decker and Kathryn Rauhut examine UN Security Council Resolution 1540 and its related Matrices. While WMD proliferation risks are addressed in other treaties and agreements, those instruments were not universally adopted and did not cover all security risks; UNSCR 1540 closed those gaps. Decker and Rauhut explain how the 1540 Matrices have helped countries better understand their obligations, and posit that comprehensive and independent risk assessments would help provide consistent metrics and guidance to policymakers. To that end, they argue that a matrix of obligations could help states and stakeholders demonstrate compliance with cyber norms and international law and, if risk assessments and prioritizations were done, could help guide action plans for implementation.
In Chapter 4, Zhanna L. Malekos Smith examines which governance mechanisms and capacity building initiatives are functioning well in outer space, and makes the case for how these measures might be applied towards promoting accountability in cyberspace. In exploring this question, Malekos Smith examines the Artemis Accords and capacity building mechanisms for the long-term sustainability of space activities. She further highlights the deepening relationship between outer space and cyberspace, an intersection which may require its own governance and accountability mechanisms, or better integration and application of existing norms and law for this new context.
In Chapter 5, Decker and Rauhut discuss the Montreal Protocol on Substances that Deplete the Ozone Layer. One of the lessons they draw from analyzing the Protocol is that different capacities lead to different risk appetites, but that these differences can be balanced by incentive structures. This was a core aspect of the process leading to the adoption of the Protocol. The same holds in cyberspace: many countries see low-cost and potentially less secure ICT as worth the gains from quicker and cheaper digitization. Under the Protocol, professional panels provide assessments of technology and economic alternatives to help customize approaches for reducing risks within each country as part of reducing overall societal risks.
In Chapter 6, Rosa Celorio outlines the applicability of international human rights law and related peer review mechanisms to cyber and digital security. Celorio argues that there is scope to better integrate concerns about cyber security and cybercrime within existing human rights peer review mechanisms such as the Universal Periodic Review (UPR). Current initiatives like the UN Global Digital Compact can support the identification of the human rights that are at risk in these areas and outline responsibilities for state and nonstate actors. Celorio stresses that UN Charter- and treaty-based bodies can also make critical contributions in specifying and defining terminologies when addressing digital threats to human rights.
In Chapter 7, Reece Iriye looks at the European Union's General Data Protection Regulation (GDPR). He underscores the importance of clear and precise yet adaptable terminology in regulation, as well as centralized and uniform enforcement processes with oversight measures for managing major policy violations. Iriye notes that this uniform approach simplifies the regulatory landscape for companies operating in multiple countries. He argues that regional organizations can play a significant role in shaping global policies in the digital and cyber field, particularly in areas where private actors frequently operate and cross-border interactions regularly occur. Iriye also stresses the importance of regulating activities and roles rather than items and technologies, and of building in opportunities for review.
In Chapter 8, James Siebens and Anne-Marie Buzatu discuss the approaches to "co-regulation" of the private military and security industry undertaken by states and private companies through The Montreux Document on pertinent international legal obligations and good practices for states related to operations of private military and security companies (PMSCs) during armed conflict, and the subsequent establishment of the International Code of Conduct for Private Security Service Providers. The authors argue that these examples may provide a useful model for establishing common interpretations of how international law applies in cyberspace, as well as for clarifying and fostering accountability around the legal obligations and normative commitments of both states and private companies engaged in relevant cyber activities.
In Chapter 9, Moliehi Makumane explains that the experience of the African Peer Review Mechanism has demonstrated that there is a will to participate in regional peer review processes. To integrate cyber security into regional mechanisms at scale, it will be crucial to establish a relationship between regional peer review mechanisms and UN cyber processes, which should encourage states to be transparent about their activities in peer review self-assessments. As with any accountability measure, Makumane notes, a key to success is ambition.
In Chapter 10, Debra Decker argues that market mechanisms can play a pivotal role in promoting accountability for securing cyberspace. Mechanisms including insurance, credit ratings, product/service security ratings, liability adjustments, tax adjustments, and grants remain relatively untapped incentives. Of these, Decker explores the expanding interest in the role of the insurance industry in driving and defining accountability.
Finally, in Chapter 11, Allison Pytlak and Shreya Lad examine activities and initiatives being implemented by the International Telecommunication Union (ITU) and argue that these could be better leveraged in the pursuit of cyber accountability. These include the Global Cybersecurity Index, national cyber security strategies, and activities that support national Computer Incident Response Teams (CIRTs) in helping states assess threats, build resilience, and promote accountability.
The above contributors were invited to analyze these specific issue areas to assess their potential relevance to cyber accountability, and the legal, normative, or informal and market-based mechanisms that were developed to address accountability gaps therein. While the topics of the case studies are quite diverse, each addressed a consistent set of research questions: how the mechanism was developed; how it is intended to function; what enforcement mechanisms exist; and the degree to which it has been effective in structuring incentives and shaping behavior in its issue area. The experts also offer views on how the mechanism in question is relevant to cyber. In some instances there is a direct relationship, while in others the relevance lies in lessons learned that are instructive for a particular cyber governance challenge.
Some experts offer reflections about gender dynamics within the negotiation of these instruments, or about how gender equality and gender-responsiveness are relevant.
This collective analysis has produced a diverse toolkit of accountability models and mechanisms, ranging from legally binding treaties to regional-level peer review processes to insurance standards. Based on these structured analyses, contributors each proposed a set of recommendations linking the case studies to emerging cyber issues, or identifying the unique benefits, successes, and failures of the mechanisms they studied in order to inform similar efforts in cyberspace.
The research and our collective findings are largely predicated on existing global frameworks and on cybersecurity and cybercrime efforts such as international law, the UN Framework, the UN Cybercrime Treaty process, and multilateral initiatives like the Counter Ransomware Initiative and the Pall Mall Process, among others.
Importantly, this report does not seek to identify or recommend one particular pathway or model as the best option for creating accountability for, or deterring, malicious activities in cyberspace. Instead, the report sets out a range of policy approaches and considerations for policymakers, drawing on both the analyses of the case study authors and the input collected during multistakeholder expert workshops. As such, the report seeks to identify the accountability mechanisms with the greatest potential value and relevance to addressing cyber gaps and challenges, while deliberately leaving the assessment of which approaches are most feasible or desirable to those responsible for preserving international peace and security.
Findings

Drawing on the case study-specific conclusions and recommendations, the report offers six cross-cutting and overarching conclusions for cyber accountability and four other pertinent observations. We also list several possible models (frameworks and mechanisms) that we believe have value for addressing particular accountability cyber gaps and challenges. Finally, we identify other areas for future research.
1. There is no "one size fits all" solution for cyber accountability gaps. An integrated "regime approach" composed of mutually reinforcing mechanisms, tools, and levers is promising for cyber governance given the diverse needs, threats, and gaps that exist.
Based on multiple case studies, it is evident that mixed and "integrated regime" approaches offer unique value for enhancing accountability and reducing harm. They do so by designing and incorporating mechanisms that can be focused on particular concerns or threats, and that account for a landscape with diverse threat actors as well as diverse stakeholders and participants. For example, such an approach has been useful in arms control and non-proliferation efforts, which are similarly composed of formal and informal mechanisms, including legally binding agreements as well as informal trust-building measures and information-sharing, among other activities and agreements. Outer space governance takes a similar approach.
To a large extent, a loose regime approach is what already exists in cyber governance. While international law and the UN norms provide a universal baseline for states, there are a multitude of other mechanisms which variously focus on particular threats and concerns or are open to particular types of constituents. This is also reflected in the approaches of certain regional actors, such as the European Union and its Cyber Diplomacy Toolbox, or the African Unionās shared interpretation about the applicability of international law to state use of ICTs. The emerging integrated regime approach reflects that States and stakeholders have differing priorities and capacities in relation to accountability initiatives.
There is a very real risk, however, of having too many tools in the box: either not employing any of them thoroughly enough, or creating loopholes and contradictions between their provisions that might be exploited. Too many tools may also send mixed messages about what is and is not acceptable behavior, further exacerbating accountability gaps and risking unanticipated or unintentional escalation. Some of the case studies demonstrated where so-called harmonizing instruments have added value by clarifying and streamlining expectations and closing loopholes.
2. Focusing on activities and behavior offers more potential than focusing on the means and technologies.
Several case studies illustrated the benefits of policy and regulation that focus on activities and behavior rather than on items and technologies. This could be impactful for improving cyber accountability because it helps to mitigate the loopholes and challenges posed by constant technological change, in which policymaking to prevent or address harms posed by a particular technology will always lag behind the technology itself. Such a focus also aids in addressing the dilemma posed by dual-use items, since the latter can seamlessly alternate between "military" and "civilian" applications, and "wartime" and "peacetime" uses. In this regard, an updated understanding of "dual-use" for the digital era is warranted.
This approach may also be instructive for determining what is unacceptable based on impact or harm, such as through a human rights or peace and security lens, and it offers applications for both positive and negative accountability. Yet, despite this finding, it bears mention that views about tech neutrality are not universal. A significant amount of research has illustrated the racial, gender, and other biases implicit in some technologies or the datasets which underpin them, or in unequal access to technologies.9See, for example: Ray Acheson, Gender and Bias, Women's International League for Peace and Freedom, 2021, https://www.stopkillerrobots.org/wp-content/uploads/2021/09/Gender-and-Bias.pdf; Ardra Manasi, Subadra Panchanadeswaran, and Emily Sours, "Addressing Gender Bias to Achieve Ethical AI," IPI Global Observatory, March 17, 2023, https://theglobalobservatory.org/2023/03/gender-bias-ethical-artificial-intelligence/; M. Broussard, More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech, MIT Press, 2023. Calls from UN experts for moratoria10UN Office of the High Commissioner for Human Rights, "Spyware scandal: UN experts call for moratorium on sale of 'life threatening' surveillance tech," Press release, August 12, 2021, https://www.ohchr.org/en/press-releases/2021/08/spyware-scandal-un-experts-call-moratorium-sale-life-threatening. on items like surveillance spyware have also made the point that some software has only malicious and harmful uses and should not be seen as neutral.
3. Cross-domain governance could be better leveraged to advance accountability.
From human rights to outer space and the use of private security companies, cyber- and digital-related risks are inherent in many of the case studies explored. To this end, we propose that cross-domain governance could be better leveraged to advance accountability. For example, better integration of digital and cyber threats within human rights peer review processes such as the Universal Periodic Review (UPR) could aid in the meaningful operationalization of UN cyber norm (E) on human rights and the broader applicability of international human rights law (IHRL) to cyber.
Stimson's research has further affirmed that States weigh their priorities as a whole, meaning that considerations in cyberspace are often a result of risks faced in other domains or in relation to other objectives and priorities. As such, incentivizing responsible behavior across domains could help fill cyber accountability gaps.
One potential risk that emerged through our research is that greater cross-domain governance will multiply the number of actors involved, which could make it harder to achieve focused results on cyber accountability and to reach the actors most involved in cyber activities of concern.
4. Inclusivity is imperative when developing and implementing cyber accountability mechanisms and for ensuring effective governance more broadly.
Across the board, the cases studied demonstrated successful engagement with a breadth of non-governmental stakeholders, through a diversity of methods and on issues as sensitive as cyber peace and security. In most of these cases the engagement was not merely successful in a superficial or tokenistic way; rather, it added real value and, in many instances, aided in fostering accountability.
What does this mean for cyber accountability? One of the first requirements is to identify and understand the differing roles that diverse stakeholders play in fostering accountability, including in building awareness of, and supporting the implementation of, agreed norms and law. Some actors can be held accountable through peer reviews and other mechanisms, which may be initiated by, or involve, non-governmental stakeholders. Tech providers can develop transparent industry performance principles and best practices, such as making security by design an integral part of development rather than an add-on. Insurers can develop more standardized processes to hold their clients accountable, for example through compliance with certain standards and vulnerability risk assessments. More broadly, the enterprises that comprise the layers of the internet, from IP addresses to routing, should also be part of accountability. Other stakeholders have a role to play in reporting on cyber harm and raising awareness about responsible behavior and legal obligations.
A particular role that non-governmental actors played in some of the mechanisms studied was providing independent validation and assessment. In cyber, such independent bodies could be useful for depoliticizing attribution processes. That said, there is a risk that independent bodies may not be provided with accurate or complete information needed to perform an attribution, since such information often derives from classified sources and intelligence. Should such bodies arrive at incorrect or false conclusions, accountability and trust would be negatively impacted.
5. Both carrots and sticks are needed.
Several case studies illustrate the need for a range of "carrots and sticks" to incentivize compliance with commitments and obligations. These range from market incentives and other rewards, such as the prospect of capacity-building or other support and reputational benefits, to penalties and costs.
Successful accountability in cyber governance likewise requires consideration of incentives and repercussions, not far removed from the concepts of positive and negative accountability. Such carrots and sticks should be designed with a view towards the specific community of actors involved or targeted.
It will be necessary to find ways to elevate cyber security investment as a key priority for states and stakeholders.
Positive incentives are needed for adopting and operationalizing norms, and for interpreting and applying international law. States can be supported in prioritizing security investments through explicit means such as technology transfers, infrastructure development assistance, or sanctions relief. The true cost of malicious cyber incidents, including losses in time, money, and reputation, is not well appreciated or documented. Sustained investment in the cyber security of states and other organizations can pay dividends, including for economic development. With adjustments in coverage, insurance could also play a role, as underwriters develop clearer understandings of risks and collaborate among themselves.
6. Political will is a must.
Ultimately, the key variable in all such efforts is political will: not only to constrain the behavior of others through regulation and a compelling combination of carrots and sticks, but also for capable actors (governments and private actors alike) to exercise mutual restraint in their uses of such technologies.
We have observed that states have a greater appetite to discuss issues and uphold commitments that do not directly implicate themselves. In cyber, such topics have tended to include cybercrime, commercially available intrusion capabilities, and commitments not to attack critical infrastructure. While these threats and threat actors do not represent the full spectrum of cyber challenges, they might constitute a starting point for cooperation among states and other stakeholders, one where willingness to hold one another to account is greater because political will already exists.
There does not appear to be a perfect formula for generating political will; indeed, this is a problem in all of the issue areas researched. But the research does point to good practices, such as ensuring buy-in and ownership when developing policies, regulation, or law; considering the barriers to entering frameworks or mechanisms; and understanding that states and stakeholders will have differing priorities and capacities in relation to accountability initiatives.
Other Pertinent Observations
Capacity Building
The research process affirmed multiple times that capacity building is foundational for all aspects of cyber security and resilience, and therefore has a role in accountability, especially positive accountability and "deterrence by denial."
Adaptability
Adaptability is a fundamental principle of a successful and dynamic governing mechanism which can aid in accountability. Based on our research, mechanisms require clear definitions and an overarching structure to ensure accountability. However, mechanisms also need to consider review processes or other ways to be adaptable in step with new information or technologies. In some ways this also mitigates the challenge of how policy can keep up with technological development, although it bears mention that even with adaptable mechanisms, new technologies can still be misused.
Likeminded Coalitions and Regional Initiatives
A theme that surfaced repeatedly throughout the research process, with conflicting findings, is the value of regional approaches and of working in smaller, like-minded coalitions. Such approaches are often relatively straightforward to establish because they are based on common values, interests, and concerns, or on pre-existing channels of communication that aid trust and confidence. Questions remain, however, about their impact on accountability beyond their membership.
Definitions and Clarifications
Throughout our research process, the topic of definitions surfaced often. Agreed definitions of certain terms and types of operations could bring clarity in some contexts, though some noted this might only further encourage malicious actors to pursue operations in a grey zone. At present, strategic ambiguity appears to be the favored approach. These concerns raised questions about whether and how to shift toward an incentive framework that rewards clarity and commitment. More detailed principles and guidance are needed for states to understand how to demonstrate positive behavior, not just acceptable behavior. More consistent and widely understood definitions of cyber harm or impact were also identified as aiding accountability, because such understanding could help reinforce thresholds for unacceptable behavior. When defining harm, it is important to recognize its various types and levels. It is also necessary to identify incidents that have a significant impact as well as those that have relatively lower-level effects but cumulatively affect a large number of individuals.
Possible Models
As noted earlier, this report does not recommend any single mechanism or instrument to "solve" cyber accountability, nor present any one as offering greater value than others. In fact, in keeping with the finding about an integrated regime approach, we believe that different responses and tools are needed for different gaps and challenges. In that spirit, our research process over the last fifteen months has surfaced certain activities or models that could be especially valuable for accountability and transparency.
- Much has been written and said about attribution in relation to accountability. Political attribution statements, building on technical and legal analysis, will continue to be an important aspect of accountability and deterrence efforts, as will technical characterizations and information-sharing mechanisms or repositories about capabilities, threat actors and groups, tactics, and attributive techniques and tools. But more can be done to increase their impact: many agree that "naming and shaming" has not been sufficient to deter malicious cyber activity.
- While advances have been made in the rapidity and quality of attribution methods, they can still be improved upon, including through greater consistency, predictability, and uniformity.
- The possibility of establishing a UN or other multilateral mechanism to aid in attribution has already been explored and largely dismissed as too challenging to create and maintain, although some see value in continuing to explore options of this nature. As attribution capabilities continue to improve, there is merit in revisiting this proposal or approach.
- Leveraging official public political attributions (OPPAs) as a diplomatic tool, through responsible, professional, and verifiable attribution, and through mechanisms to negotiate retaliatory measures and resolve disputes.
- There is a need to build capacity to understand technical attribution reports, and to issue or join political attribution statements.
- Cyber sanctions did not surface often or explicitly in the research process, which is itself an interesting finding. Yet they are increasingly a companion to attribution statements among the small but growing number of countries and regional organizations with relevant policies, and they are one of the few accountability "sticks" employed to deter malicious cyber activity.
- While a new universal legal instrument in this area seems unlikely due to geopolitics, and undesirable due to the diverse issues it would need to encompass, a framework convention could be an ideal model for addressing the malicious use of ICT, should the context change. Framework conventions are premised on an initial agreement that is then built out with protocols focused on related challenges or sub-issues, which states are invited to adopt and ratify as the protocols are developed. The United Nations Framework Convention on Climate Change, the Vienna Convention for the Protection of the Ozone Layer, and the Convention on Certain Conventional Weapons are examples.
- Politically binding agreements, including declarations and codes of conduct, will continue to be useful for signaling what is and is not acceptable behavior, and for designing responses tailored to particular challenges or contexts. In the current geopolitical context, politically binding agreements may offer the best way forward short of new binding law. Often, signatories and supporters of political declarations or instruments work together to outline practical actions to operationalize and enforce the commitments they have signed up to, helping to generate communities of practice. The Programme of Action (PoA) to Prevent, Combat and Eradicate the Illicit Trade in Small Arms and Light Weapons in All Its Aspects (as well as other UN PoAs) and the Montreux Document on Private Military and Security Companies are examples.
- Within the basket of politically binding agreements are regulatory frameworks on peacetime and dual-use cyber/ICT focused on restricting use and activity (development, transfer, etc.). These could be accompanied by moratoria or pledges at the unilateral or bilateral level, or among likeminded actors.
- What constitutes an appropriate and lawful countermeasure in the cyber context is a complex topic worthy of further discussion and analysis. Cyber countermeasures are typically framed in ways that reinforce traditional understandings of deterrence and negative accountability, in which the potential for retaliatory cyber activity from an injured state deters an adversary from "attacking."
- Yet the effectiveness of countermeasures in deterring malicious offensive cyber operations is unclear. Understood differently, our research highlights a range of policy and diplomatic responses that an injured state or a third party may employ as a countermeasure in response to an "attack," including attribution, sanctions, or retorsion. Such countermeasures, even if not often seen as such, offer the potential to improve accountability while also demonstrating voluntary restraint and compliance with international law, and reducing the potential for escalation.
- Peer review mechanisms have been largely underexplored in cyber security and cybercrime but could be valuable for assessing the implementation of commitments, such as progress on norms, rules, and principles of responsible state behavior. They might also be used to assess and ensure conformity with international law in, for example, the use and development of so-called offensive cyber capabilities. Through our research process it has been observed that peer review mechanisms tend to have good levels of participation and buy-in, and the "state to state" format of the Universal Periodic Review, as one example, is appealing to many Member States. There is growing interest within the UN system in developing peer review processes for other agencies or issue areas (e.g., the World Health Organization). Yet it is important to assess the impact such reviews have had in changing or improving policy and practice, and whether there are other lessons learned, such as for stakeholder participation, that would be valuable for a potential cyber peer review.
- Better leverage existing accountability mechanisms and relevant fora. The cyber community is not starting from scratch, and there are many existing mechanisms, networks, and practices that contribute to accountability, even if they are not always described or viewed as such. These exist at national, regional, and international levels and can include, inter alia: national cyber security strategies; relevant national legislation; national processes for attribution; information-sharing practices and forums that foster trust, including across and within global regions; private sector public reporting; existing cyber confidence-building measures (CCBMs); points of contact directories; initiatives such as the ITU's Global Cybersecurity Index; and UN-level initiatives such as the National Survey of Implementation.
- Yet some of these can be improved upon or better enforced. For example, can more be done through parliamentary or legislative oversight of national cyber security strategies? Would triangulating cyber accountability goals into existing regional and national development indicators, or into commitments under non-cyber frameworks, aid accountability? Could new measurement tools, such as matrices or national reporting, support operationalization of the UN cyber norms and help identify needs and gaps? Should the UN Security Council be a stronger voice for cyber accountability?
- Activities that provide clarity about the applicability of international law, and about states' interpretations of how the law applies to their use of ICTs, will improve the prospects of the law being upheld and adhered to, which should form the basis of all accountability efforts.
- Promoting vulnerability disclosures at every level, while ensuring that smaller firms have the capacity to detect and report intrusions without being penalized for doing so.
- There are many existing toeholds within the human rights community and international human rights law (IHRL) that can be leveraged to enhance accountability, particularly with respect to ensuring respect for IHRL in cybersecurity and cybercrime law and policy, and to preventing harm or supporting those negatively impacted. For example, UN Charter-based and treaty-based organs should continue urging states to prioritize cybersecurity and protect critical rights. Individual petition mechanisms for human rights violations that take place in cyberspace can provide a second avenue of justice when domestic judicial institutions fail to respond adequately.
- As noted in the Findings, enhanced cross-domain governance could aid accountability and deterrence. Possible suggestions from our research include:
- Updating the UNSCR 1540 Matrices to account for ICT-related threats to non-proliferation.
- Better leveraging the Montreux Document and ICoCA to address so-called cyber mercenaries, or the use of ICT by private security contractors.
- Consideration of cybersecurity risks and threats within outer space security policies and law.
- Better integrating concerns about cyber security and cybercrime within existing human rights peer review mechanisms such as the UPR.
- Ongoing work to update Wassenaar Arrangement control lists, with due involvement of relevant technology experts and communities.
- Some non-governmental stakeholders face unique risks from surveillance operations and spyware; yet much of the available evidence supporting accountability for spyware and other intrusive capabilities has emerged from the work of open-source and counterintelligence researchers rather than from regulation. Accountability efforts must therefore include consistent support for the monitoring and reporting initiatives of civil society, industry, and other non-governmental stakeholders, as well as protections and safeguards for open-source researchers, counterintelligence specialists, journalists, activists, and whistleblowers.
- Market mechanisms such as insurance, credit ratings, product/service security ratings and liability adjustments, as well as tax adjustments and grants, are relatively untapped incentives that can play a pivotal role in promoting accountability for securing cyberspace. Market incentives to promote cybersecurity would help shift the perception of security from a burdensome requirement or regulation to a value-added effort. One market incentive that could benefit from more coordinated support and integration into policy discussions is insurance: that industry's potential to affect cybersecurity goes beyond insurers adjusting policies as underwriters develop clearer understandings of risks.
- The interconnected nature of the internet and the rapid pace of technological change require public-private collaboration between governments and industry. A successful model for accountability in cyberspace should blend the agility and innovation of industry with the regulatory authority and threat intelligence capabilities of governments. Combined with appropriate market incentives, this synergy can create an ecosystem that fosters agreement on risk prioritization, mutual trust, and shared leadership.
Notes
- 1. Antonio Guterres, A New Agenda for Peace, Policy Brief 9, United Nations, July 2023, p. 27.
- 2. Why the emphasis on accountability? According to James A. Lewis, a senior researcher with the Center for Strategic and International Studies, an effective international cyber strategy must focus on three elements: "how to build resilience, to create a collaborative defense, and how to produce accountability in cyberspace (and this should include a discussion of when and how to disrupt opponent operations)." James Andrew Lewis, "Deterrence and Cyber Strategy," Center for Strategic and International Studies, November 15, 2023, https://www.csis.org/analysis/deterrence-and-cyber-strategy.
- 3. See, for example: Jason Healey, Creating Accountability for Global Cyber Norms, Center for Strategic and International Studies, February 23, 2022; and Patryk Pawlak, Accountability in Cyberspace: The Holy Grail of Cyber Stability?, EU Cyber Direct, March 2024. Relevant research and events have also been produced by The Hague Program on International Cyber Security at Leiden University, the EU Institute for Security Studies' EU Cyber Direct Program, and the Royal United Services Institute's project on Responsible Cyber Behavior.
- 4. UN General Assembly, Group of Governmental Experts on Developments in the Field of Information and Telecommunications in the Context of International Security, A/68/98*, June 24, 2013.
- 5. UN General Assembly, Group of Governmental Experts on Developments in the Field of Information and Telecommunications in the Context of International Security, A/70/174, July 23, 2015. Eight norms are actions that states want to encourage, while the other three involve actions that countries should avoid. The framework is primarily about promoting interstate cooperation, respecting human rights and privacy, protecting critical infrastructure, safeguarding global supply chains, providing assistance when required, and preventing the malicious use of digital technologies on states' national territories. See https://www.aspi.org.au/cybernorms for more detail.
- 6. R. Crootof, "International Cybertorts: Expanding State Accountability in Cyberspace," Cornell Law Review 103, no. 3 (2018): 565–644; Elena Chernenko, Oleg Demidov, and Fyodor Lukyanov, Increasing International Cooperation in Cybersecurity and Adapting Cyber Norms; Martha Finnemore and Duncan B. Hollis, "Beyond Naming and Shaming: Accusations and International Law in Cybersecurity," European Journal of International Law 31, no. 3 (August 2020): 969–1003, https://doi.org/10.1093/ejil/chaa056.
- 7. The White House, National Cybersecurity Strategy of the United States, March 2023, https://www.whitehouse.gov/wp-content/uploads/2023/03/National-Cybersecurity-Strategy-2023.pdf.
- 8. Ruth W. Grant and Robert O. Keohane, "Accountability and Abuses of Power in World Politics," American Political Science Review 99, no. 1 (2005): 29–40.
- 9. See, for example: Ray Acheson, Gender and Bias, Women's International League for Peace and Freedom, 2021, https://www.stopkillerrobots.org/wp-content/uploads/2021/09/Gender-and-Bias.pdf; Ardra Manasi, Subadra Panchanadeswaran, and Emily Sours, "Addressing Gender Bias to Achieve Ethical AI," IPI Global Observatory, March 17, 2023, https://theglobalobservatory.org/2023/03/gender-bias-ethical-artificial-intelligence/; and M. Broussard, More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech, MIT Press, 2023.
- 10. UN Office of the High Commissioner for Human Rights, "Spyware scandal: UN experts call for moratorium on sale of 'life threatening' surveillance tech," press release, August 12, 2021, https://www.ohchr.org/en/press-releases/2021/08/spyware-scandal-un-experts-call-moratorium-sale-life-threatening.