When Indicators of Compromise Become Indicators of Counterterrorism

First thoughts on disclosure ethics for counterterrorism- and law enforcement-focused hacking operations.

Somewhere in India is a hacker-for-hire operation that has become my recurrent foil over the past year. Owing to personal networks, the first publications on the threat actor, which we dubbed "Bahamut," focused on attempts to surveil labor rights activists working on Qatar and human rights defenders elsewhere in the Middle East. From the outset, Bahamut was perplexing because its interests did not cohere around any particular country or theme. As Claudio Guarnieri and I peeled back the layers, digging deeper into their history and expanding our perspective into their concurrent operations, we found those targeting civil society were also engaged in espionage against terrorist organizations and financial institutions (a common theme elsewhere).

Despite the clearly objectionable stalking of human rights organizations, the prospect of publishing on potential counterterrorism operations posed a difficult ethical challenge. I was surprised to find little public deliberation about the responsibilities of researchers who encounter criminal investigations or counterterrorism efforts. Even respected cyber security companies often publish on operations that exclusively target criminal activity. While companies may have internal guidance, certain extreme cases suggest that these frameworks vary substantially, if they exist at all.

Relying on public disclosures or detections as ethical instruction also presents a selection bias that does not help outsiders or facilitate the creation of professional norms. How often do researchers encounter a counterterrorism operation and drop their investigation? If researchers are quietly omitting details about certain operations, then are they sharing their reasons for doing so with their peers?

Academic communities have developed formal practices around institutional review boards and peer review requirements that have built norms and procedures for research and publication. While there are selection committees for cyber security publications and conferences, that level of scrutiny and expectation-setting is largely absent or inconsistent. As with other technology communities, structural consideration of ethics and responsibilities within the cyber security profession has not always kept pace with the rapid changes of the field. This is not to suggest that hard restrictions or formal review boards are needed – only that the strange position of researchers as de facto private intelligence agents in a domain with real-world impacts often requires more support on difficult social and ethical questions that will only become more pressing over time.

This post is a collection of preliminary thoughts based on my own (limited) experiences, conversations with more experienced practitioners, principles borrowed from other communities, and a review of existing work. Its primary purpose is to catalyze further discussion, and I invite critique of the scope, assertions, and content written here.

Assertions

At its foundation, two principles undergird this discussion –

Is the public's "right to know" absolute and unconstrained? While full disclosure appeals to many in the cyber security community, I suspect the vast majority of researchers might be willing to delay or redact for a compelling public interest, such as allowing police to dismantle a child abuse ring. Just because someone can do something doesn't mean they should. An outside equivalent might be publicizing police positions and actions in a hostage situation – there is little public interest compared to the potential harm posed to hostages or police. Same for publishing pictures of dead bodies. No matter where the line is drawn, any consideration of disclosure and detection quickly necessitates some deliberative process, particularly if those decisions might later need to be defended.

The primary responsibility of a researcher is toward harm reduction. Disrupting a law enforcement or intelligence operation is not a passive act. While an individual may have personal beliefs about whether there is a legitimate place for intrusive surveillance, or even whether those operations may be illicit, the decision to interfere has potential ramifications for public safety. Researchers are morally responsible for their actions, even if they are opposed to the original act and even if other calculations legitimize an end decision to intervene. Such actions should be taken only after dispassionate consideration of potential impacts, and the researcher has a responsibility to take any reasonable measure to reduce adverse outcomes.

How these principles are weighed will vary widely and will shape a deliberative process that differs from one individual to the next.

Context

The type of research and remediation routinely conducted within the cyber security community poses profound complications that individuals in other fields have rarely had to address on their own. Cyber security professionals are some of the most curiously-minded people I have ever met and are often transparency advocates – instincts that contribute to a strong urge to disclose and detect malicious incidents. Moreover, technical researchers in companies may not be ultimately responsible for decisions – other motivations, such as a desire for publicity, may shape incentives, almost inevitably toward greater disclosure.

Researchers now frequently engage in the types of counter-intelligence activities that were previously the domain of states, and make decisions that could impact lives. In "The ethics and perils of APT research," Juan Andrés Guerrero-Saade described these as challenges facing an industry that has shifted into an "intelligence broker" role that is "increasingly involved in investigating state-sponsored or geopolitically significant threats."

While publications typically focus on espionage against governments or commercial infrastructure, campaigns involving counterterrorism and law enforcement operations still arise across disclosures from nearly every major vendor. For example, this is extremely common in reports on Russian state-aligned actors. In Kaspersky's "Miniduke is back," the company describes targets of APT29/Cozy Bear as "individuals involved in the traffic and selling of illegal and controlled substances." FireEye notes that APT28/Fancy Bear had targeted the Chechen-focused Kavkaz Center, which the Russian government has claimed was extremist and incited ethnic hatred (a definitional problem noted later).

More compellingly, companies have even disclosed operations that solely targeted violent extremist organizations. This is particularly pronounced with ISIS, a terrorist group that has provoked nearly every state known to engage in cyber espionage. McAfee published on a malicious Android agent posing as jihadist material that was distributed through social media by accounts claiming to be ISIS affiliates. While McAfee did not publish hashes of the malware or other indicators of compromise, it did clearly identify the impersonated application and the Twitter accounts used to distribute the app (which were then disabled). This would have been sufficient to tip off targets.

The potential repercussions are self-evident. Public disclosure of toolkits tends to disrupt operations, even if victims are never notified or made aware – C2s are identified and taken down, malware is detected by antivirus, vulnerabilities are patched, bait social media accounts are disabled, and other TTPs are addressed. Disclosure not only leads to the disruption of active investigations, but could also chill further operations and decrease their effectiveness. As an indicator of such dangers, ISIS forums have informed members about malware attempts, and ISIS-affiliated organizations (Amaq News) have self-disclosed malware operations to warn adherents. There are also ample second-order effects – Guerrero-Saade notes "repercussions that threaten the standing of the perpetrating agency" – which in the case of abusive operations may be the goal of the researcher.

These dilemmas are not unprecedented – other professions have had to grapple with similar challenges and develop frameworks for evaluating such issues. Namely, the question of how to balance the public's right to know against national security interests is a common and well-explored issue for the press. Professional associations such as the Society of Professional Journalists (SPJ) provide ethics codes and other resources that mirror the problems in cyber security, starting with a "special obligation to serve as watchdogs over public affairs and government" (a common statement from tech companies).

Importantly, the SPJ notes that access to information differs from an ethical justification to publish, and that journalists must "balance the public's need for information against potential harm or discomfort" – including avoiding pandering to lurid curiosity. As the national security journalist Walter Pincus noted, there is a risk that some "confuse their own personal interests, as well as their employers' interests, with the public interest and cloak them with First Amendment claims."

Most importantly, the SPJ Ethics Code encourages journalists to explain the ethical choices and processes they encounter to audiences to foster a "civil dialogue with the public about journalistic practices, coverage and news content." Companies describing why they disclosed or detected certain operations (such as ISIS-targeted malware) would be an important starting point toward developing better norms on how to respond to sensitive professional matters.

Dilemmas and Questions

The core question is what a researcher should do when they believe they have come across a law enforcement or counterterrorism operation. The short version is that there are no easy answers, and any decision is bound to be subjective to the researcher's position and principles. Any treatment of the issue must start with a recognition that these are always tricky questions and no one is prepared to confidently answer every potential dilemma. Cyber security professionals are not regional studies experts, and may not have the contacts to understand the context of every environment they encounter. Nor are all the facts always well established. Moreover, there are ample stakeholders invested in decisions to detect or disclose, and those parties will not always have the (subjective) public interest at heart.

There are a number of potential triggers that would suggest that a campaign is sensitive, the most obvious being through the disclosure of personally-identifiable information. In other cases, filenames of samples or malicious hostnames could contain sensitive terms such as references to terrorist organizations. Bait social media accounts, watering holes, or other pretextual strategies might portray ideological affiliations, e.g. the jihadist Twitter accounts in the aforementioned ISIS-targeted malware. Or beacons from infections could originate from regions associated with political instability (e.g. Kashmir, Balochistan, or Western Sahara). Alternatively, the threat actor behind an operation could be known to be associated with a government agency that is well-regarded. These all provide potential red flags about the nature of the operation that should trigger a deliberative process.
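To make the idea of a "trigger" more concrete, here is a minimal sketch – written in Python, with invented watchlists and field names that are purely hypothetical – of how such red flags might be surfaced for human review. It is meant only to illustrate the shape of a deliberative checkpoint, not an actual triage workflow, and any real decision would rest with people rather than a script.

```python
# Illustrative sketch only: flag indicators that *might* belong to a sensitive
# (counterterrorism or law enforcement) campaign so that a human deliberative
# process is triggered. Term lists, regions, and field names are hypothetical.

SENSITIVE_TERMS = {"isis", "jihad", "amaq", "mujahid"}
SENSITIVE_REGIONS = {"kashmir", "balochistan", "western sahara"}

def reasons_for_review(indicator: dict) -> list:
    """Return human-readable reasons this indicator warrants ethical review."""
    reasons = []
    text = " ".join([
        indicator.get("filename", ""),
        indicator.get("hostname", ""),
        indicator.get("lure_account", ""),
    ]).lower()
    if any(term in text for term in SENSITIVE_TERMS):
        reasons.append("sensitive term in filename, hostname, or lure account")
    if indicator.get("beacon_region", "").lower() in SENSITIVE_REGIONS:
        reasons.append("beacons originate from a politically unstable region")
    if indicator.get("linked_to_government_agency"):
        reasons.append("actor publicly associated with a government agency")
    return reasons

# Example: a lure named after a jihadist outlet, beaconing from Kashmir.
sample = {
    "filename": "amaq_statement.docx.exe",
    "hostname": "news-updates.example.com",
    "lure_account": "@example_media",
    "beacon_region": "Kashmir",
    "linked_to_government_agency": True,
}
print(reasons_for_review(sample))
```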

From this scenario, one might begin to consider several questions and potential courses of action on a case-by-case basis.

Is the target generally understood to be a legitimate interest of law enforcement operations?

Remediation could interfere with legitimate and properly-executed law enforcement investigations. However, researchers should not be entirely deferential to governmental determinations of legitimacy – they have to have their own barometer. Countries such as Iran and China often claim to have rule of law and carry out judicial processes. This does not mean that those institutions respect international human rights or are free from interference. Designations of ‘terrorist' or ‘criminal' organizations are relative to political and social circumstances, e.g. the Gülen organization in Turkey, Catalonian separatists in Spain, Uighur separatists in China, or Viet Tan in Vietnam. When Ethiopia targeted nonviolent dissidents with malware purchased from FinFisher and Hacking Team, it claimed the organizations were terrorists to discredit them to the international community and justify their repression. Consider precedent but be skeptical.

This critical engagement works both ways. While one might disagree with the goals or strategy of an organization, researchers must operate from a politically-neutral position based on the principle of harm reduction. A personal example is the Mujahadeen-e-Khalq (MEK), a group that Iran calls a terrorist organization and that was similarly designated in the West until recently. While I might have strongly-held (negative) opinions about the MEK, it is no longer listed as a terrorist organization in the United States and Europe due to claims that it has abandoned violence. Moreover, if the MEK is compromised by Iranian threat actors (which it often is), its leadership based outside of Iran is unlikely to bear the costs. Instead, those who will be subject to the Iranian regime's brutality will be those inside the country – ordinary people whose opposition to the regime is understandable. Were I to ignore those attacks, I would be leaving them in the hands of a government known to have committed mass executions of MEK members. Disdain for leadership or politics can never justify such outcomes.

The space where researchers are willing to ignore operations must be narrowly circumscribed. Keeping in mind the prior warnings, a starting point of reference for legitimate targets might be the terrorist designation lists maintained by governments and intergovernmental bodies (for example, the UN Security Council sanctions lists and the U.S. State Department's list of Foreign Terrorist Organizations).

Such governmental lists naturally have their own biases and may not be in vogue, but my experience has been that those listed have committed unjustifiable acts of violence and other widely-recognizable crimes. After all, the United States designated the MEK for its violent insurrection against the Islamic Republic, despite the American opposition to the regime. Perhaps the larger issue is who is not on the list – cast a wide net of sources. (I would be happy to extend this list to non-governmental and non-Western sources.)

If in doubt, seek the assistance of subject matter experts. Digital rights organizations such as the EFF and Access Now can often be helpful in identifying potential contacts and resources, as are universities. Most people will find your work fascinating and will be willing to provide advice (but remain skeptical about their biases – especially with diaspora communities).

Can counterterrorist operations be omitted or differentiated from public interest disclosures?

Intrusion kit is not necessarily partitioned across functions. Quite often the same malware is used in both foreign espionage and domestic counterterrorism operations. Guerrero-Saade describes this as a "mixed use" problem, "where campaign operators will deploy the same malware to infect radical targets along with a diverse swathe of questionable targets," which in turn forces the question "to detect or not to detect?" Is there information that should be omitted from disclosure because there is a compelling public interest in the operation? Consider minimization where there is such a need.

This is evident in the aforementioned cases of Russian groups targeting narcotics traffickers and hate groups, as well as Iran targeting ISIS and al Qaeda members. Even repressive states have a real need to engage in surveillance against threats to the public. Where the mixed use problem involves the repurposing of these tools against political opponents and minority groups, that should never stifle acting on the information. Dissidents should not be further neglected because of their government's reckless misuse of surveillance. This is a burden created by the government's decision, not by the researcher.

Can the threat actor be clearly identified and notified prior to disclosure?

One notable exploration of these disclosure and detection issues is Georgetown Professor Catherine Lotrionte's presentation "Threat Intelligence Threatening Intel Operations" last year at Kaspersky SAS. The presentation uses national security journalism as a precedent for how researchers should handle cases that involve government actors, and focuses on how cyber security researchers decide whether to publish or to omit information.

Lotrionte describes the process of negotiation between journalists and the intelligence community on identifying sensitive issues, where agencies may privately provide context on why leaks might compromise national security. Based on those discussions, the media may decide to withhold or delay release of particular information to avert risk. This doesn't make the media subservient to government agencies – journalists frequently decide there is a more substantial public interest in publication. This contentious relationship between the press and government over intelligence disclosures is well documented, as James Risen has recently written based on his experiences reporting on CIA operations in Iran and elsewhere.

Any potential cooperation over disclosure and detection is a difficult proposition. There remains a tendency within the community to keep governments at arm's length (and a suspicion toward those who do not shun such ties). Many have first- or second-hand stories about negative experiences with law enforcement, and folklore is pervasive about threats to those who come too close to certain operations. The frenzied reaction to the arrest of Marcus Hutchins over alleged involvement in banking-focused malware is demonstrative of fraught relationships with authorities: researchers threatened to cut off information sharing with the U.S. government.

Lotrionte's recommendation is best tailored for a limited set of law enforcement and intelligence agencies, where the operators may be reachable by the researchers, the motivations of the operation are clear, and there is some trust in their purpose. Cases where the operators can be identified and perhaps trusted are rare. Forewarning a government agency effectively opens a negotiation over which requests to accommodate. Journalistic accounts suggest that even trusted agencies often conflate situations that present risk with those that are merely an embarrassment. It's not the responsibility of researchers to withhold information that might expose politically problematic operations, such as the British GCHQ's compromise of a Belgium-based telco. For that matter, this post is not intended to cover broader intelligence operations (e.g. Stuxnet), which open more subjective questions about loyalties and law.

Despite the formidable challenges, Lotrionte's argument has not received the attention it deserves. There are plenty of situations where it would be reasonable to forewarn a government agency prior to disclosure or detection – for example, in "mixed use" scenarios involving privately-developed malware sold to different countries, where the abuses of one client necessitate action. Since that action could affect all clients, it may be reasonable to warn some customers so that they can shift their operations, if such parties can be trusted to keep the information confidential. Even if the premise of omitting information or consulting governments is unappealing, discussions within the national security field are still a valuable resource for the cyber security community.

Can intent be confidently discerned?

The true intent of an attack may not be obvious from the tactics or targets of a campaign. Guerrero-Saade raises the case of the Gauss malware, a Western intelligence operation that had targeted Middle Eastern banks, likely for espionage-related purposes (probably related to terrorism financing and sanctions enforcement). The focus on banks was grounds for significant concern: malware targeting the financial sector could have nefarious outcomes, as demonstrated by recent SWIFT thefts. A decision not to publish on Gauss would require confidence in the attribution of the operation and in the intent of the attacker, with high costs for incorrectly handling the incident.

Bahamut poses a complementary example. Among the Android agent samples identified, one, "Devoted to Humanity," impersonated the "Falah-e-Insaniat Foundation" (FIF), notable for its links to the Lashkar-e-Taiba (LeT) terrorist organization. It may be enough to say that LeT affiliates are legitimate surveillance interests – even if we don't know who is behind Bahamut – and that there is accordingly little public interest in disclosure (mea culpa).

Can remedial action be taken apart from publication?

If the desire is to mitigate abuses, then is disclosure of an operation necessary? Is there a middle ground or alternative action that can be taken? If a sufficient portion of the targets can be identified, can they be personally contacted? Can the attacks be addressed through private notification of application markets, communications platforms, or hosting providers? An example might be the use of exploits against Tor Browser to identify those accessing abusive content on Onion Services. There is a strong argument for patching such vulnerabilities, given the browser's wider use by dissidents, but a less compelling argument for disclosing which dark web sites were targeted (an action that may allow suspects to destroy evidence or flee).

Can disclosure occur without affecting operations?

Likewise, if an incident necessitates public disclosure but contains sensitive issues, it may be useful to consider minimizing the information that is made public. For example, a malware sample could contain an exploit for popular consumer software that should be addressed. Is it necessary to make public the specific target, sample hashes, C2 infrastructure, or TTPs of the operator? To reuse the Tor Browser exploit example, is it necessary to describe which Onion Services were targeted?
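As a rough illustration of what minimization might look like, the sketch below (with hypothetical field names and placeholder values) separates details that serve defenders – the vulnerable software and remediation guidance – from details that identify the operation or its targets. It is a sketch of the principle only; the judgment of what serves the public interest belongs to people, not a script.

```python
# Illustrative sketch of minimization before publication: retain what defenders
# need, withhold what identifies the operation or its targets. Field names and
# values are hypothetical placeholders.

full_record = {
    "vulnerable_software": "ExampleBrowser 52.x",    # helps defenders -> publish
    "remediation": "update to the patched release",  # helps defenders -> publish
    "sample_sha256": "<withheld>",                    # identifies the specific operation
    "c2_domains": ["<withheld>"],                     # identifies the specific operation
    "targeted_onion_services": ["<withheld>"],        # identifies targets or suspects
}

PUBLISHABLE_FIELDS = {"vulnerable_software", "remediation"}

def minimize(record: dict) -> dict:
    """Keep only the fields judged to serve a public interest."""
    return {key: value for key, value in record.items() if key in PUBLISHABLE_FIELDS}

print(minimize(full_record))
```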

What is the harm of disruption or of non-disruption?

What are the likely and worst-case outcomes of disclosure or detection? This might include consideration of the nature of the law enforcement operation – differentiating those addressing economic fraud from those against violent extremist organizations. For example, were Belgian authorities hypothetically using FinFisher, I would feel more comfortable publishing on operations targeting tax evasion than on those against domestic affiliates of ISIS.

What are the costs of taking action?

Is the agent no longer in use or has the campaign come to an end?

All campaigns and operations eventually come to an end – malware licenses expire, victims take notice, priorities change, objectives are accomplished, or operators change tactics. Publication of an old operation does not pose the same immediate threat to direct access. Instead, the most obvious harm would be putting potential targets on notice and bringing heightened awareness to certain TTPs. In many cases, the premise that a particular demographic is of interest to intelligence agencies is self-evident to the targets themselves – ISIS and al Qaeda are well aware they are a surveillance interest. Disclosure is likely less harmful in such cases.

For example, Jundallah – a separatist group labeled as a terrorist organization by both Iran and the U.S. – was targeted by a watering hole attack conducted by the Infy group in 2010. While I have refrained from disclosing recent attacks against Jundallah, we could comfortably talk about the Infy case at Black Hat because the primary C2 appeared to be down and the malware staging server was no longer active.

Is it technically and operationally necessary to be neutral or consistent in remediation?

While Claudio and I often debate the ethics of certain cases and lament others' disinterest in activists, at the end of the day our core constituency is human rights defenders. I may be more proactive about attacks against foreign policy institutions and Western government targets, but we have no commercial clients nor any interest in economic espionage. I am not even sure where I would start if I wanted to notify a Saudi government ministry, nor is it clear how that engagement might put me at risk. For that matter, researchers may be constrained by a legal or security environment that limits what they can publish – while an American company can publish on American operations, an Iranian will find themselves in a difficult position for disclosing an IRGC spearphishing campaign.

This is a privileged position, and one that shouldn't be adopted by others where it isn't necessary.

Instead, a common position tends to be to act on all attacks irrespective of origin or target, with restraint coming into play on the specifics of the disclosure. Guerrero-Saade frames this as maintaining system integrity at any cost, that "regardless of the attacker's intention, malicious code abuses the essential trust that enables technological progress and cannot be evaluated on the basis of case-by-case intentions." It is the primary responsibility of ICT companies or researchers to uphold the integrity of their systems, and not facilitate the breach of their platform – particularly where it may inevitably lead to further infringements by others. An example of this is Google's removal of the Equus Technologies malware from the Play Store – despite no public indication about its use or misuse.

Technological and political trends further reinforce the untenable nature of attempting to accommodate surveillance operations. Security platforms increasingly rely on machine learning, virus definition sharing, and other rapidly changing sources to flag malicious behavior. There is little practical way to white-list certain government malware, let alone to do so discreetly. Further down the rabbit hole, even if there were coordination on refraining from disclosure or detection of certain operations, the creation of a class of ‘exempted malware' could lead to cheating – states using "authorized" malware agents against highly-valuable espionage targets, or re-appropriation by other actors.

As recent events demonstrate, companies are increasingly sensitive to the prospective balkanization of markets along geopolitical axes, based on fears of products being coopted by national interests. Politics might even incentivize publication on domestic threat actors to prove neutrality – Kaspersky has attempted to defend its reputation against accusations of collaboration with Russian security services by pointing to its history of disclosing Russian operations. Any position other than neutrality is a Congressional (or Duma) investigation waiting to happen.

Moving Forward and Acknowledgements

As mentioned in the introduction, this post is meant simply to prompt a discussion that has not prominently occurred in public. There are recommendations and questions in this post that are incomplete, untenable, unworkable, and perhaps even undesirable. However, in the absence of open deliberation from those who face such questions on a day-to-day basis, there is little for others to start from as they develop their own ethical frameworks and encounter the same problems. I would encourage more companies and researchers to work through these problems in public, such as by justifying their decisions to publish or providing insight into where they omit details. These are hard questions, and there is an open space for shared learning and leadership.

Lastly, thank you to Timo Steffens at CERT-Bund for seriously engaging what was at first only a Twitter comment and providing a starting point to build from. All nonsensical statements are my own responsibility.