“Security by Obscurity”: Journalists’ Mental Models of Information Security

By Susan E. McGregor and Elizabeth Anne Watkins

Despite wide-ranging threats and tangible risks, journalists have not done much to change their information or communications security practices in recent years. Through in-depth interviews, we provide insight into how journalists conceptualize security risk. By applying a mental models framework, we identify a model of “security by obscurity”—one that persists across participants despite varying levels of investigative experience, information security expertise and job responsibilities. We find that the prevalence of this model is attributable at least in part to poor understandings of technological communication systems, and recommend future research directions in developing educational materials focused on these concepts.

 

Introduction

Among the first and most shocking of the Snowden revelations of 2013 was the public disclosure of the U.S. government’s large-scale collection of communications metadata, including the information of U.S. citizens (Greenwald, 2013). While there was nothing to suggest that journalists were particular targets of this effort, the revelations were nonetheless a shock to the U.S. journalism community in particular, which for decades had operated with the understanding that its communications with sources were effectively protected from government interference by the network of so-called “shield laws” preventing law enforcement from using the legal system to compel journalists to reveal their sources. That the Snowden documents also implied government infiltration of the systems of companies such as Google (Greenwald & MacAskill, 2013), to which many journalistic organizations had recently turned over their own email services, was even more unsettling (Wagstaff, 2014).

Moreover, the Snowden revelations came at a time when the journalism industry was feeling particularly sensitive about the government’s collection and use of metadata: just weeks prior, the U.S. Department of Justice had notified the Associated Press that it had been secretly monitoring both the office and mobile phone lines of several AP journalists as part of a leak investigation (Horwitz, 2013). Though industry outcry over this activity eventually resulted in promises from the attorney general that orders for journalists’ information would be reviewed more closely (Savage, 2013), these were only good-faith assurances. Should they be contravened, a news organization could not, as Senior Vice President and General Counsel of Hearst Corporation Eve Burton put it, “march into court and sue the DOJ” (McGregor, 2013, para. 7).

At the same time, the U.S. government’s willingness to treat metadata as legally dispositive was playing a role in multiple high-profile journalistic leak investigations. Though another 18 months remained in the standoff between the Department of Justice and The New York Times reporter James Risen over Risen’s refusal to identify the source of classified information included in his 2006 book State of War, District Judge Leonie Brinkema had already quashed the subpoena for Risen’s testimony on the basis of the “numerous telephone records, e-mail messages, computer files and testimony that strongly indicates that Sterling was Risen’s source” (Brinkema, 2011, p. 23). In 2015, Jeffrey Sterling was convicted and sentenced under the Espionage Act, though Risen never testified (Maass, 2015). Similarly, just a few weeks before the Snowden revelations, The Washington Post reported on the DOJ’s use of Fox News reporter James Rosen’s telephone and other metadata to build a case against Stephen Jin-Woo Kim in 2010 (Marimow, 2013).

The security risks to journalists and journalistic organizations in recent years have not been confined to legal mechanisms and leak prosecutions, however. During 2013 alone a host of major news organizations—including The New York Times, The Wall Street Journal, Bloomberg, and The Washington Post—revealed that their digital communications systems had been the target of state-sponsored digital attacks (Perlroth, 2013), a pattern that was corroborated by independent security researchers in the spring of 2014 (Marquis-Boire & Huntley, 2014). In at least some cases, the objective of the attacks seemed to be the identification of journalists’ sources. In the case of The New York Times, for example, the timing and pattern of the attack suggested that the motivation was to uncover the identities of sources for a range of embarrassing stories about Chinese government officials (Perlroth, 2013). In other cases, hacking efforts appeared more ad-hoc and retaliatory, as when the Syrian Electronic Army (SEA) defaced the VICE website following a story that allegedly revealed the real identity of SEA member “Th3 Pr0” (Greenberg, 2013), or when the Associated Press’ Twitter account was hacked, leading to false reports of a bomb detonating near the White House (Blake, 2013).

Despite the wide range of threats and the tangible consequences of these events (for example, both Sterling and Kim were convicted and sentenced to prison time as a result of their implication as journalistic sources), research shows that in the roughly 30 months since the Snowden revelations, even investigative journalists have done little to change their practices with respect to information or communications security. A Pew Research Center survey of investigative journalists conducted in late 2014, for example, found that fully half of these practitioners reported not using information security tools in their work, and less than 40% reported changing their methods of communicating with sources since the Snowden revelations (Mitchell, Holcomb & Purcell, 2015a). Yet the same research indicates that the majority of investigative journalists believe the government has collected data about their communications (Mitchell, Holcomb & Purcell, 2015a). And while the Pew survey found that 88% of respondents ranked “decreasing resources in newsrooms” as the top challenge facing journalists today, more than half (56%) named legal action against journalists as the second.

On the surface, these results present an apparent contradiction: roughly the same majority proportion of investigative journalists (62%) had not changed the way they communicate with sources in the 18 months after the Snowden revelations, despite believing both that the government is collecting data about their communications (Mitchell, Holcomb & Purcell, 2015a) and that legal action against journalists is the second-biggest challenge facing the profession today. And, as noted above, these concerns are well founded given law enforcement’s significant reliance on communications metadata to prosecute journalistic sources.

Literature Review

Mental Models and Journalists’ Security Practices

Discrepancies between belief and practice are hardly unique to journalists, and a range of frameworks is used in the behavioral sciences both to describe these gaps and to design mechanisms for change (Godin & Kok, 1996; Festinger, 1962). Of these, however, only a mental models framework captures both the systemic and technological nature of journalists’ information security space.

While there are many definitions of the term mental model across fields (Doyle & Ford, 1998), one useful definition comes from Norman, who characterizes a mental model as a construct that a person or group uses to represent a system and make decisions about it (1983, p. 7). Based on our research and the fact that journalists’ security understandings and practices exist at the intersection of multiple technological and human systems of which journalists themselves may have varying levels of understanding (Mitchell, Holcomb & Purcell, 2015a), we find that exploring and characterizing journalists’ mental models of information security helps illuminate how and why journalists make the information security choices that they do.

Growing Digital Risk

The majority of both legal and technological security risks to journalists and sources in recent years have centered on digital communications technology. In the United States, the most high-profile of these were leak prosecutions that relied on digital communications metadata (Horwitz, 2013; Brinkema, 2011), and technical attacks by state actors on U.S. news organizations (Perlroth, 2013a; 2013b).

While such incidents are becoming unsettlingly common, however, this does not mean that they constitute an appropriate proxy for the breadth of security risk actually faced by journalists and journalistic organizations, even in a solely U.S. context. By 2013, the Obama administration had brought a total of seven cases against journalists’ sources under the Espionage Act (Currier, 2013), more than twice as many as all previous administrations combined. Yet this record is the result not of a particular policy decision or a greater absolute number of leaks, but of more general policies and the greater feasibility of tracking disclosures (Shane & Savage, 2012). As one Justice Department official put it:

As a general matter, prosecutions of those who leaked classified information to reporters have been rare, due, in part, to the inherent challenges involved in identifying the person responsible for the illegal disclosure and in compiling the evidence necessary to prove it beyond a reasonable doubt. (Liptak, 2012, p. 1)

In other words, the recent flurry of leak prosecutions is the result not of the administration working harder, but of the process getting easier, thanks in part to “a proliferation of e-mail and computer audit trails that increasingly can pinpoint reporters’ sources” (Shane & Savage, 2012, para. 3).

Similarly, while sophisticated technical attacks by nation-states like China (Perlroth, 2013a) and North Korea (Grisham, 2015) have been prominently reported, more commonplace attacks, such as generalized phishing (Greenberg, 2014) and ad-network exploitation (Mattise, 2014), have also been on the rise.

Thus, while industry consciousness has focused on leak prosecutions and technical attacks related to national-security beats, the reality is that the general security risk for journalists has been growing in recent years across the board. From SEC investigations (Coronel, 2014; Hurtado, 2016) to phishing attacks (Associated Press, 2013; Greenberg, 2014), evidence suggests that while the consequences of national-security-related threats have thus far been more severe, the risks journalists face are far more widespread.

Despite both the severity and pervasiveness of these attacks, however, research indicates that journalists view information security “as a serious concern mainly for journalists who cover national security, foreign affairs or the federal government” (Mitchell, Holcomb & Purcell, 2015a, p. 13). Reflecting this attitude, more than 60% of investigative journalists had never participated in any type of information security training (Mitchell, Holcomb & Purcell, 2015a).

Mental Models

Journalists’ failure to engage with information security topics and tools can be explained in a number of ways; indeed, the failure to adopt secure tools and practices has been the subject of substantial research within the security community, especially since Alma Whitten and J. D. Tygar’s seminal paper on the topic, “Why Johnny can’t encrypt” (1999). Like Whitten and Tygar, computer security researchers have tended either to focus on the usability of available security tools (Renaud, Volkamer & Renkema-Padmos, 2014) or to uncritically label information security failures as user error (as discussed in Sasse et al., 2001). Even if accurate, however, these explanations do little to explain why journalists may not see information security practices as essential in the first place.

By contrast, understanding journalists’ mental models of information security can provide valuable insight into how they interact with security-related systems and processes. Because mental models comprise “what people really have in their heads and guide their use of things” (Norman, 1983, p. 12), they can offer both “explanatory and predictive power” (Rook & Donnell, 1993, p. 1650) for journalists’ decisions about systems and situations like digital communications and information security.

A complete mental model usually comprises one or more system models along with related knowledge and concepts about how that system behaves in particular domains (Brandt & Uden, 2003). For example, a mental model of using a search engine to locate information on the Internet might comprise a system model of how the search engine retrieves and ranks information, along with conceptual models about what types of search terms will yield the preferred results. Taken together, these models would constitute the particular user’s mental model of Internet searching.

Importantly, however, the system models that help make up a given mental model are not always complete or accurate; while this may reduce the efficacy of the mental model, it does not necessarily render it useless. For example, many of us are able to employ sufficiently useful mental models of searching with Google that we can use it to find the information we are looking for; given that Google’s search algorithm is both complex and proprietary, however, we do not have a complete system model of how the search engine actually functions. As such, it is possible for users to hold mental models based upon inaccurate or missing system models that are still sufficient for use.

Moreover, experience with a system does not necessarily translate to an accurate system or mental model of it. For example, early research on users’ mental models of the Internet found that only a small number of the users surveyed, many of whom used it quite extensively and effectively for their desired purposes, possessed a complete and detailed mental model of how the Internet functioned. This led the researchers to conclude that “frequent use of the Internet appears to be more of a necessary than a sufficient condition for detailed and complete mental models of the Internet” (Thatcher & Greyling, 1998, p. 304). This finding has been echoed in related findings about users’ mental models of search engines (Brandt & Uden, 2003), email (Renaud et al., 2014) and credential management (Wästlund, Angulo, & Fischer-Hübner, 2012). In the case of encrypted email in particular, even a computer-science background, which might be presumed to improve participants’ understanding of technical systems, had no apparent impact on the completeness or accuracy of participants’ mental models of email communication (Renaud et al., 2014). These smaller experimental results are also supported by broader, more recent findings: a significant percentage of social network users worldwide are unaware that services like Facebook run on the Internet at all (Mirani, 2015).

Methodology

In order to learn more about how journalists’ mental models of information security might influence their related attitudes and behaviors, we conducted in-depth, semi-structured interviews with journalists (N = 15) and editors (N = 7) about their security preferences, practices and concerns. Although there is no single methodology for working with or identifying mental models (Stevens & Gentner, 1983; Renaud et al., 2014), we determined that in-depth interviews would offer the most comprehensive view of “what people really have in their heads and guide their use of things” (Norman, 1983, p. 12). To capture how the interplay between journalists’ individual work with sources and their other professional responsibilities, such as editing for and organizing other reporters, shaped their needs and practices with respect to information security, we varied the interview script according to each participant’s primary role as a reporter or editor. Thus, while both sets of interview questions focused on security attitudes and behaviors, the “reporter” script centered on individual attitudes and practices while the “editor” script included broader policy questions. We made this distinction based on our understanding of the differing scope of responsibility and awareness between these two roles in journalistic organizations, differences that had some impact on our findings, as discussed below.

Participants

All of the interview subjects were full-time employees at well-respected media organizations, ranging in size and focus from small, U.S.- or issue-focused news outlets to large, international media services with bureaus around the world. While the majority of participants were located in the United States, some (n = 8) were based in Europe and were interviewed in their native language, with their responses translated to English during transcription. Ten participants were men and twelve were women.

Ethical Considerations

The entire protocol for this research was conducted under the auspices of the Columbia University IRB, and special care was taken to limit the creation or exposure of any sensitive information during the course of the research process. To this end, participants were often recruited through existing professional networks via person-to-person conversations; as such, the identity of particular interview subjects was often unknown to the researcher prior to the interview itself.

Similarly, we were careful during the interviews to discourage participants from sharing identifying information or sensitive details about particular sources, stories or incidents, in order to limit the risk of compromising any individuals or the efficacy of particular practices.

Participants were also given the option to decline recording of the interview, and to decline to answer any individual questions, though all participants agreed to recording and responded fully. All audio recordings were kept encrypted and labeled only in coded form, both in storage and in transit.
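As an illustration of this kind of data handling, a minimal sketch in Python might look like the following. This is our hypothetical reconstruction, not the study’s actual pipeline: it assumes the `cryptography` package, and the participant code and filenames are invented.

```python
# Hypothetical sketch of keeping recordings "encrypted and labeled only in
# coded form": symmetric encryption via the `cryptography` package, with an
# opaque participant code standing in for any identifying label.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice, stored separately from the recordings
fernet = Fernet(key)

def store_encrypted(audio_path: str, participant_code: str) -> str:
    """Encrypt a raw recording and write it under a coded filename."""
    with open(audio_path, "rb") as f:
        ciphertext = fernet.encrypt(f.read())
    out_path = f"{participant_code}.enc"  # e.g. "P07.enc": no names, outlets, or dates
    with open(out_path, "wb") as f:
        f.write(ciphertext)
    return out_path

# store_encrypted("interview.wav", "P07")
```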

Grounded theory

Once all interviews were complete, the audio recordings were translated where necessary, transcribed in English, and coded by the researchers using a grounded theory approach (Glaser & Strauss, 1967). The grounded theory method is designed to help identify authentic themes in qualitative interview material through successive iterations of coding and synthesis. By beginning with an initial coding process that relies heavily on the actual language used by participants, a grounded theory method helps minimize the influence of researcher expectation and bias when evaluating qualitative results, drawing topic classifications directly from the participants’ interview material rather than bucketing responses according to a predetermined rubric. Once a set of themes is identified via the initial coding, these are then synthesized and refined—a process known as “focused coding”—for application across the wider data set.
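To make the two-pass structure of this method concrete, consider the following sketch. The excerpts, codes, and themes here are hypothetical stand-ins, not our actual codebook.

```python
# Hypothetical illustration of grounded theory's two coding passes: initial
# codes stay close to participants' own language; focused codes consolidate
# them into synthesized themes applied across the full data set.
from collections import Counter

# Pass 1 (initial coding): near-verbatim codes, one list per interview excerpt.
initial_codes = {
    "excerpt_1": ["not everyone has sensitive information", "open sources"],
    "excerpt_2": ["depends on the beat", "national security is different"],
    "excerpt_3": ["I'll come and see you", "all done verbally"],
}

# Pass 2 (focused coding): map each initial code onto a refined theme.
focused_map = {
    "not everyone has sensitive information": "sensitivity as proxy for risk",
    "open sources": "sensitivity as proxy for risk",
    "depends on the beat": "sensitivity as proxy for risk",
    "national security is different": "sensitivity as proxy for risk",
    "I'll come and see you": "face-to-face as mitigation",
    "all done verbally": "face-to-face as mitigation",
}

theme_counts = Counter(
    focused_map[code]
    for codes in initial_codes.values()
    for code in codes
)
print(theme_counts.most_common())
# [('sensitivity as proxy for risk', 4), ('face-to-face as mitigation', 2)]
```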

Participant roles and expertise

In addition to the themes identified through our grounded theory analysis, we also evaluated our results in the context of participants’ primary role as a reporter or editor, and of our own assessment of their emergent expertise in information security. As we discuss below, however, neither of these factors had a significant interaction with participants’ mental models of security.

Results

Overall, our results indicate that journalists’ mental model of information security can best be characterized as a type of “security by obscurity”: the belief that one need not take particular security precautions unless one is involved in work that is sensitive enough to attract the attention of government actors. While we intentionally use this term in a way that deviates from the typical computer-security definition (Anderson, 2001; Mercuri & Neumann, 2003), we do so in part to acknowledge the tangible security benefits that obscure solutions can offer organizations in terms of slowing down or reducing the severity of an attack. As we discuss below, however, we find that there is little actual “obscurity” available to journalists, making this conceptually attractive characterization of security risk of little practical value.

“Sensitivity” as a proxy for risk exposure

In line with previous findings (Mitchell, Holcomb & Purcell, 2015a), a recurring theme in our work was participants’ use of the “sensitivity” of particular stories, subjects, sources, or geographies as a proxy for security risk exposure, with more than half of our subjects indicating that the need for security precautions depended on the presence of one of these features. As one participant put it:

It depends on the sector, but not everyone has sensitive information. We have many open sources that don’t require any particular protection…It’s just in certain cases that one really needs to be careful.

This characterization of security risk held on both sides of the divide: journalists on national-security beats and those on other beats alike suggested that the need for security depended on one’s coverage area. As another participant commented:

If you were on the national security beat [security technology] would be really useful. But I write about domestic social problems, education, crime, poverty.

When asked about the need for specifically information security-related practices, one participant put it even more simply:

I feel like it depends on how much you think someone is actively spying on you.

Overall, these comments indicate that participants perceived security risk to be primarily related to how sensitive or visible one’s subject of reporting may be to powerful actors, rather than the particular vulnerabilities of the collaboration, sharing, recording and transcribing mechanisms through which that reporting is done. Participants who did not consider their coverage areas controversial, then, tended to minimize or dismiss the existence of information security risks to themselves and their sources. Participants who did cover “sensitive” beats, likewise, distinguished their own needs from those of other colleagues who did not do this type of work.

This pattern was pervasive across both reporters and editors, despite the fact that editors knew details of specific security incidents that did not necessarily support a relationship between particular beats and security risk. While both groups adhered to this model of security risk, our research suggests that the two groups rationalized it differently. Many reporters expressed a lack of first-hand experience with security incidents or concerns. As one reporter described it:

I haven’t really dealt with something that was life or death. An extra level of security just didn’t seem necessary.

For editors, however, information security was beat-dependent enough that other, more universal newsroom concerns were a higher priority. As one editor said:

[Information security is] handled kind of on an ad-hoc basis by different reporters and teams depending on the sensitivity of the kind of stories they’re working on … it’s just not a big enough priority for the kind of journalism we do for it to be anywhere near the top of my tech wish list.

In addition to the above, we also evaluated the results for an interaction between information-security expertise, investigative experience, and the use of subject “sensitivity” as a proxy for security risk, but found no effect for these characteristics. In other words, participants described security risk in terms of subject sensitivity regardless of their information-security expertise or investigative experience.

Face-to-face conversation as risk mitigation

In keeping with their view of security risk as contingent on the sensitivity of coverage, our participants reported using a wide variety of security-enhancing tools and techniques in particular situations, some of which are discussed below. The strategy referenced by the vast majority of participants, however, was face-to-face conversation. One participant described this in the context of working with a sensitive source:

If something is sensitive, I say to that person, I’ll come and see you.

However, this strategy also extended to communications with colleagues when dealing with sensitive sources or topics. As another participant explained:

We don’t put anonymous sources in the emails, we don’t memorialize them in the reporter’s notes—it’s all done verbally.

This strategy of avoiding the use of technology as a privacy or security measure has previously been categorized as a privacy-enhancing avoidance behavior (Caine, 2009, p. 3146). In this framework, individuals make behavioral choices explicitly intended to avoid situations where privacy could be compromised or violated.

As in previous research (Mitchell, Holcomb & Purcell, 2015a), the majority of our participants spoke of in-person conversation as a go-to security strategy. This was true irrespective of participants’ role, information-security expertise, or experience with investigative journalism. As we will discuss in more detail below, this may be at least in part because this method is guaranteed to be understood by, and accessible to, all parties. As one editor described it:

I tried to send an encrypted email to a manager, and she doesn’t have [encrypted] email. So, it’s available to our company…but it hasn’t been a priority for that manager. So I sent a note to her reporter…who was encrypted but was not in the office. So I said, “I’ll walk over and have a conversation with you, because I can’t send you what I would like to send you. I don’t want to put this in writing.”
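The dependency this editor describes is structural: PGP-style encryption cannot even be attempted without the recipient’s public key. A hedged sketch of that failure mode, using the python-gnupg wrapper with an invented address, might look like this:

```python
# Hypothetical sketch (invented address; requires GnuPG and the python-gnupg
# package) of why one unprepared colleague breaks the encrypted channel:
# without the recipient's public key, encryption simply fails.
import gnupg

gpg = gnupg.GPG()
result = gpg.encrypt(
    "Draft memo: do not put this in writing.",
    recipients=["manager@example-news.org"],  # no public key on file
)
if not result.ok:
    print(result.status)  # e.g. "invalid recipient"; the fallback is to walk over and talk
```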

Discussion

Though technically a misappropriation of the computer-science term, we describe journalists’ mental models of information security as “security by obscurity” to reflect the two most salient and common features of journalists’ thinking about security risk and avoidance in relation to digital communications technology. Specifically, this mental model treats as “secure” any type of journalism that is sufficiently “obscure” not to be of interest to powerful actors, such as nation-states. We also note, however, that while “security by obscurity” is largely dismissed in the computer science community as a false promise (Anderson, 2001; Mercuri & Neumann, 2003), it has been argued that in real-world applications, “obscure” solutions can help delay the onset or mitigate the severity of an actual attack (Stuttard, 2005). Given the large proportion of our participants, and those in previous studies, whose mental model of security appears to fit this characterization, we examine the ways in which the model both fits and fails journalists’ actual information-security needs.

The appropriateness of “security by obscurity” as a mental model for journalists’ information-security risk lies in its ability to reflect or predict actual information security risk. Accepting this model as accurate would require two things: first, an indication that being “obscure” as a journalist or journalistic organization is possible, and second, that being lower profile in this way offers a measure of security. If this is so, then it may be that “security by obscurity” is a sensible, if imperfect, mental model of journalists’ information-security risk.

If not, however, it is worth looking deeper into the possible reasons why journalists continue to use this mental model, to appreciate what might replace it, and how.

Are journalists “obscure”?

While research confirms that large news organizations are under regular attack (Marquis-Boire & Huntley, 2014), it is difficult to ascertain the extent to which smaller news organizations face similar threats. That said, at least one type of attack is known to affect media organizations of every size: third-party malvertising. Small and large news organizations alike tend to rely on third-party platforms to serve ads, and the organizations affected when an ad platform is breached often number in the hundreds (Brandom, 2014; Cox, 2015; Whitwam, 2016). Since employees of a news organization are also likely to be among its “readers,” their potential exposure to such risks is arguably higher than the average reader’s.

Are “obscure” journalists more secure?

Given that all of our participants came from well-recognized media organizations, their assessment of security risk tended to relate to individual topics, beats, regions or stories, rather than applying to the media organization as a whole. As noted above, the vast majority of our participants felt that security was a concern primarily for reporters covering national security-related beats, rather than those covering local or social topics. Under this rubric, do non-national security journalists face fewer security risks?

In this case, the evidence is less equivocal: many high-profile breaches and hacks are actually perpetrated through spearphishing campaigns, in which “targets” receive emails crafted to look as though they came from a friend or colleague, often addressed to the target by name with a personal-sounding salutation. Virtually anyone with an organizational email address is an equally likely “target”; one need not even be a journalist. Such campaigns have been a documented or posited part of several high-profile media breaches, including the hack of the Associated Press’ Twitter account (Oremus, 2013) and the hacks of VICE (Greenberg, 2013) and Forbes (Greenberg, 2014).
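To see why beat “obscurity” offers no protection against this mechanism, consider the sketch below. It is ours, with hypothetical names and domains: the forged display name is what most mail clients show, while the underlying sending address can be anything.

```python
# Hypothetical illustration of the spearphishing mechanism described above:
# the attacker controls the display name shown by the mail client, so a
# simple check against a known directory is one (imperfect) defense.
from email.utils import parseaddr

# Invented directory of colleagues' real organizational addresses.
KNOWN_COLLEAGUES = {
    "Jane Editor": "jane.editor@example-news.org",
}

def looks_like_spearphish(from_header: str) -> bool:
    """Flag mail whose display name impersonates a known colleague but
    whose actual sending address does not match the directory."""
    display_name, address = parseaddr(from_header)
    expected = KNOWN_COLLEAGUES.get(display_name)
    if expected is None:
        return False  # not impersonating anyone in the directory
    return address.lower() != expected

print(looks_like_spearphish('"Jane Editor" <jane.editor@example-news.org>'))  # False
print(looks_like_spearphish('"Jane Editor" <j.editor@examp1e-news.net>'))     # True
```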

Understanding the “security by obscurity” mental model 

Given the mechanisms through which security breaches at journalistic institutions have been carried out, as well as the targeting of journalistic institutions in general, “security by obscurity” appears to be a poor fit for journalists’ actual level of information security risk. Yet while all of the above-cited evidence was publicly reported (much of it before this study began), this mental model of information security risk persists across both our study population and those of other researchers. To understand the potential sources of this incongruity, we examined our results for themes that might illuminate why this mental model persists in the face of such limitations.

Insufficient system models

As we noted above, mental models are typically composed of one or more “system models” along with domain-specific knowledge and concepts (Brandt & Uden, 2003). There is, however, no requirement that a given system model be complete or even accurate in order to serve as part of a useful mental model. Of the 22 participants in this study, only a handful demonstrated what could be described as coherent and complete system models of digital communications (an assessment based on comments made throughout the interview regarding both the ownership and operation of various systems, as well as their specific functions).

Indeed, even participants who expressed an interest in greater information security were aware of the challenge presented by their own limited understanding of the systems with which they were dealing. As one participant put it:

I’ve been trying to reduce my Dropbox usage, and so I’ve been using just a USB stick or something. Which, I actually have no idea how safe that is. It seems more safe.

Another participant described information security risk as about as predictable (and, presumably, as comprehensible) as a natural disaster:

It’s one of those things, like worrying about earthquakes or hurricanes … It’s the sort of thing where a terrible incident could be catastrophic, and that’s something that you worry about. However, there are lots of other fires to put out every day.

Comments like these also illuminate another aspect of our findings: that the most common security measure mentioned by participants was meeting in person. When contrasted with the opacity and uncertainty of technological systems, meeting face-to-face offers clarity and assurance.

This tendency to rely on well-understood security strategies was underscored by one participant, who shared that when a clear case for security measures was made, those measures were readily accepted:

There’s many ways to roll out security tweaks, and doing them where you make a clear and lucid case for what you’re doing and why—there was just no pushback whatsoever. Everyone was just like, “Okay, great. We’ll do that.”

“Good enough” is good enough

Particularly in complex or ill-defined subject areas such as information security, it is typical for individuals to build mental models around simple explanations that capture the features of a system or situation that are most readily apparent (Feltovich et al., 1996). While these models can be useful insofar as they provide initial support for reasoning about complex situations, they can also hinder more complete understandings (Feltovich et al., 1996). Once established, moreover, a given mental model is rarely amended. Instead, contradictory evidence is either dismissed or interpreted in a way that is congruent with the existing mental model.

It is possible, then, to understand journalists’ “security by obscurity” mental model as a way of reasoning about information security risk that is congruent with the most salient and accessible features of high-profile security incidents. For example, while there have been repeated reports of aggressive leak investigations by the SEC (Coronel, 2014; Hurtado, 2016), most recent leak prosecutions have related to national security reporting (e.g., those of Jeffrey Sterling and Stephen Jin-Woo Kim), and such cases are often reported in great detail. By contrast, news organizations only rarely share details of technical or spearphishing attacks, making such events far less memorable. For most journalists, then, there is a naturally dominant association between national security and other “sensitive” beats and security risk, despite the greater frequency, and arguably the greater threat, of simple phishing campaigns.

Conclusions and future research

By applying a mental models framework to journalists’ information security attitudes and behaviors, we identify an approach to information security risk that can best be described as “security by obscurity”: the belief that journalists do not need to concern themselves with information security unless they are working on topics of perceived interest to nation-state actors. Although this model is a demonstrably poor fit for the actual security risk faced by our participants (all of whom belong to well-recognized media organizations), the “security by obscurity” model may persist because it is congruent with the most high-profile security incidents of recent years, and because journalists have poor system models of digital communications technology.

At the same time, given that one’s actual security risk is more likely to stem from working as a journalist at all, in whatever capacity, than from any particular beat, the question remains of how journalists’ mental models of information security risk can be updated to reflect their actual threat landscape. Based on our findings, we recommend further research focused on developing training modules and educational interventions designed to improve journalists’ system models of digital communications and their understanding of threats.

References

Anderson, R. (2001). Why information security is hard: An economic perspective. Proceedings of the 17th Annual Computer Security Applications Conference, 358. Retrieved from http://dl.acm.org/citation.cfm?id=872016.872155

Associated Press (2013, April 23). Hackers compromise AP Twitter account. Associated Press. Retrieved from http://bigstory.ap.org/article/hackers-compromise-ap-twitter-account

Blake, A. (2013, April 23). AP Twitter account hacked; hacker tweets of ‘explosions in the White House’. The Washington Post. Retrieved from https://www.washingtonpost.com/news/post-politics/wp/2013/04/23/ap-twitter-account-hacked-hacker-tweets-of-explosions-in-the-white-house/

Brandom, R. (2014, September 19). Google’s doubleclick ad servers exposed millions of computers to malware. The Verge. Retrieved from http://www.theverge.com/2014/9/19/6537511/google-ad-network-exposed-millions-of-computers-to-malware

Brandt, D. S., & Uden, L. (2003, July). Insight into the mental models of novice Internet searchers. Communications of the ACM, 46(7), 133-136.

Brinkema, J. L. (2011). U.S. v. Sterling, Fourth Circuit. Retrieved from http://www.documentcloud.org/documents/229733-judge-leonie-brinkemas-ruling-quashing-subpoena.html

Caine, K. E. (2009). Supporting privacy by preventing misclosure. Extended abstracts of the ACM conference on human factors in computing systems. (Doctoral Consortium).

Coronel, S. S. (2014, August 13). SEC aggressively investigates media leaks. Columbia Journalism Review. Retrieved from http://www.cjr.org/the_kicker/sec_investigation_media_leaks_reuters.php

Cox, J. (2015, October 13). Malvertising hits ‘The Daily Mail,’ one of the biggest news sites on the Web. Motherboard. Retrieved from http://motherboard.vice.com/read/malvertising-hits-the-daily-mail-one-of-the-biggest-news-sites-on-the-web

Currier, C. (2013, July 30). Charting Obama’s crackdown on national security leaks. ProPublica. Retrieved from https://www.propublica.org/special/sealing-loose-lips-charting-obamas-crackdown-on-national-security-leaks

Doyle, J. K., & Ford, D. N. (1998). Mental models concepts for system dynamics research. System Dynamics Review, 14, 3-29.

Feltovich, P. J., Spiro, R. J., Coulson, R. L. & Feltovich, J. (1996). Collaboration within and among minds: Mastering complexity, within and among groups. In T. Koschmann (Ed.), CSCL: Theory and Practice of an Emerging Paradigm (pp. 27-34). Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.

Festinger, L. (1962). Cognitive dissonance. Scientific American, 207(4), 93-107. http://dx.doi.org/10.1038/scientificamerican1062-93

Godin, G., & Kok, G. (1996). The theory of planned behavior: A review of its applications to health-related behaviors. American Journal of Health Promotion, 11(2), 87-98. doi: http://dx.doi.org/10.4278/0890-1171-11.2.87

Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative research. Chicago, IL: Aldine Publishing Company.

Greenberg, A. (2014, February 20). How the Syrian electronic army hacked us: A detailed timeline. Forbes. Retrieved from http://www.forbes.com/sites/andygreenberg/2014/02/20/how-the-syrian-electronic-army-hacked-us-a-detailed-timeline/

Greenberg, A. (2013, November 11). Vice.com hacked by Syrian Electronic Army. SC Magazine. Retrieved from http://www.scmagazine.com/vicecom-hacked-by-syrian-electronic-army/article/320466/

Greenwald, G. (2013, June 6). NSA collecting phone records of millions of Verizon customers daily. The Guardian. Retrieved from http://www.theguardian.com/world/2013/jun/06/nsa-phone-records-verizon-court-order

Greenwald, G. & MacAskill, E. (2013, June 7). NSA Prism program taps in to user data of Apple, Google and others. The Guardian. Retrieved from http://www.theguardian.com/world/2013/jun/06/us-tech-giants-nsa-data

Grisham, L. (2015, January 5). Timeline: North Korea and the Sony Pictures hack. USA Today. Retrieved from http://www.usatoday.com/story/news/nation-now/2014/12/18/sony-hack-timeline-interview-north-korea/20601645/

Gross, J. B., & Rosson, M. B. (2007). Looking for trouble: Understanding end-user security management. Proceedings of the 2007 Symposium on Computer Human Interaction for the Management of Information Technology. doi: 10.1145/1234772.1234786

Holmes, H., & McGregor, S. E. (2015, February 5). Making online chats really ‘off the record’. Tow Center. Retrieved from http://towcenter.org/making-online-chats-really-off-the-record/

Horwitz, S. (2013, May 13). Under sweeping subpoenas, Justice Department obtained AP phone records in leak investigation. The Washington Post. Retrieved from https://www.washingtonpost.com/world/national-security/under-sweeping-subpoenas-justice-department-obtained-ap-phone-records-in-leak-investigation/2013/05/13/11d1bb82-bc11-11e2-89c9-3be8095fe767_story.html

Hurtado, P. (2016, February 23). The London whale. Bloomberg. Retrieved from http://www.bloombergview.com/quicktake/the-london-whale

Kerr, J. C. (2013, June 19). AP president Pruitt accuses DOJ of rule violations in phone records case; source intimidation. The Associated Press. Retrieved from http://www.ap.org/Content/AP-In-The-News/2013/AP-President-Pruitt-accuses-DOJ-of-rule-violations-in-phone-records-case-source-intimidation

Kulwin, N. (2015, May 13). Encrypting your email: What is PGP? Why is it important? And how do I use it? re/code. Retrieved from http://recode.net/2015/05/13/encrypting-your-email-what-is-pgp-why-is-it-important-and-how-do-i-use-it/

Liptak, A. (2012, February 11). A high-tech war on leaks. The New York Times. Retrieved from http://www.nytimes.com/2012/02/12/sunday-review/a-high-tech-war-on-leaks.html

Marimow, A. E. (2013, May 20). Justice Department’s scrutiny of Fox News reporter James Rosen in leak case draws fire. The Washington Post. Retrieved from https://www.washingtonpost.com/local/justice-departments-scrutiny-of-fox-news-reporter-james-rosen-in-leak-case-draws-fire/2013/05/20/c6289eba-c162-11e2-8bd8-2788030e6b44_story.html

Marquis-Boire, M., & Huntley, S. (2014, March). Tomorrow’s news is today’s Intel: Journalists as targets and compromise vectors. Black Hat Asia 2014. Retrieved from https://www.blackhat.com/docs/asia-14/materials/Huntley/BH_Asia_2014_Boire_Huntley.pdf

Maass, P. (2015, May 11). CIA’s Jeffrey Sterling sentenced to 42 months for leaking to New York Times journalist. The Intercept. Retrieved from https://theintercept.com/2015/05/11/sterling-sentenced-for-cia-leak-to-nyt/

Mattise, N. (2014, June 22). Syrian electronic army targets Reuters again but ad network provided the leak. ArsTechnica. Retrieved from http://arstechnica.com/security/2014/06/syrian-electronic-army-targets-reuters-again-but-ad-network-provided-the-leak/

McGregor, S. (2013, May 15). AP phone records seizure reveals telecoms risks for journalists. Columbia Journalism Review. Retrieved from http://www.cjr.org/cloud_control/ap_phone_records_seizure_revea.php

McGregor, S. E., Charters, P., Holliday, T., & Roesner, F. (2015). Investigating the computer security practices and needs of journalists. Proceedings of the 24th USENIX Security Symposium.

Mercuri, R. T. & Neumann, P. G. (2003) Security by obscurity. Communications of the ACM, 46(1).

Mirani, L. (2015, February 9). Millions of Facebook users have no idea they’re using the Internet. Quartz. Retrieved from http://qz.com/333313/milliions-of-facebook-users-have-no-idea-theyre-using-the-internet/

Mitchell, A., Holcomb, J., & Purcell, K. (2015a, February). Investigative journalists and digital security: Perceptions of vulnerability and changes in behavior. Pew Research Center. Retrieved from http://www.journalism.org/files/2015/02/PJ_InvestigativeJournalists_0205152.pdf

Mitchell, A., Holcomb, J., & Purcell, K. (2015b, February). Journalist training and knowledge about digital security. Pew Research Center. Retrieved from http://www.journalism.org/2015/02/05/journalist-training-and-knowledge-about-digital-security/

Norman, D. A. (1983). Some observations on mental models. In A. L. Stevens & D. Gentner (Eds.), Mental models (pp. 7-14). Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.

Oremus, W. (2013, April 23). Would you click the link in this email that apparently tricked the AP? Slate. Retrieved from http://www.slate.com/blogs/future_tense/2013/04/23/ap_twitter_hack_would_you_click_the_link_in_this_phishing_email.html

Perlroth, N. (2013a, January 31). Hackers in China attacked The Times for last 4 months. The New York Times. Retrieved from http://www.nytimes.com/2013/01/31/technology/chinese-hackers-infiltrate-new-york-times-computers.html

Perlroth, N. (2013b, February 1). Washington Post joins list of news media hacked by the Chinese. The New York Times. Retrieved from http://www.nytimes.com/2013/02/02/technology/washington-posts-joins-list-of-media-hacked-by-the-chinese.html?_r=0

Renaud, K., Volkamer, M., & Renkema-Padmos, A. (2014). Why doesn’t Jane protect her privacy? Proceedings of the 2014 Privacy Enhancing Technologies Symposium. (Amsterdam, Netherlands).

Rook, F. W., & Donnell, M. L. (1993). Human cognition and the expert system interface: Mental models and inference explanations. IEEE Transactions on Systems, Man, and Cybernetics, 23(6), 1649-1661.

Ruane, K. A. (2011). Journalists’ privilege: Overview of the law and legislation in recent Congresses. Congressional Research Service.

Sasse, M. A., Brostoff, S., & Weirich, D. (2001). Transforming the ‘weakest link’: A human/computer interaction approach to usable and effective security. BT Technology Journal, 19(3), 122-131.

Savage, C. (2013, July 12). Holder tightens rules on getting reporters’ data. The New York Times. Retrieved from http://www.nytimes.com/2013/07/13/us/holder-to-tighten-rules-for-obtaining-reporters-data.html

Shane, S. & Savage, C. (2012, June 19). Administration took accidental path to setting record for leak cases. The New York Times. Retrieved from http://www.nytimes.com/2012/06/20/us/politics/accidental-path-to-record-leak-cases-under-obama.html?_r=0

Staggers, N., & Norcio, A. F. (1993). Mental models: Concepts for human-computer interaction research. International Journal of Man-Machine Studies 38(4) 587-605. doi:10.1006/imms.1993.1028

Stevens, A. L., & Gentner, D. (1983). Introduction. In A. L. Stevens & D. Gentner (Eds.), Mental models (pp. 1-6). Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.

Stuttard, D. (2005). Security & obscurity. Network Security, 2005 (7), 10-12. doi:10.1016/S1353-4858(05)70259-2

Thatcher, A., & Greyling, M. (1998). Mental models of the Internet. International Journal of Industrial Ergonomics, 22(4-5), 299-305. doi:10.1016/S0169-8141(97)00081-4

Wagstaff, J. (2014, March 28). Journalists, media under attack from hackers: Google researchers. Reuters. Retrieved from http://www.reuters.com/article/us-media-cybercrime-idUSBREA2R0EU20140328

Wästlund, E., Angulo, J., & Fischer-Hübner, S. (2012). Evoking comprehensive mental models of anonymous credentials. iNetSec 2011, 1-14. doi: 10.1007/978-3-642-27585-2_1

Whitten, A., & Tygar, J. D. (1999). Why Johnny can’t encrypt: A usability evaluation of PGP 5.0. Proceedings of the 8th USENIX Security Symposium.

Whitwam, R. (2016, January 10) Forbes forced readers to disable ad-blocking, then served them malware ads. Geek.com. Retrieved from http://www.geek.com/news/forbes-forced-readers-to-disable-ad-blocking-then-served-them-malware-ads-1644231/

 

Susan E. McGregor is assistant director of the Tow Center for Digital Journalism and assistant professor at Columbia Journalism School, where she helps supervise the dual-degree program in Journalism & Computer Science. She teaches primarily in areas of data journalism & information visualization, with research interests in information security, knowledge management and alternative forms of digital distribution. McGregor was the Senior Programmer on the News Graphics team at the Wall Street Journal Online for four years before joining Columbia Journalism School in 2011. In 2012, McGregor received a Magic Grant from the Brown Institute for Media Innovation for her work on Dispatch, a mobile app for secure source communication. In June of 2014 she published the Tow/Knight report “Digital Security and Source Protection for Journalists,” which explores the legal and technical underpinnings of the challenges journalists face in protecting sources while using digital tools to report. In the fall of 2015, the National Science Foundation funded McGregor and collaborators Drs. Kelly Caine and Franzi Roesner to research and develop secure, usable communications tools for journalists and others. She conducts regular trainings with journalists and academics on practical strategies for protecting sources and research subjects.

  

Elizabeth Anne Watkins is a maker, writer, and researcher interested in the future of collaborative, meaningful work in digital ecosystems. Using a mixed-methods approach, she stitches research in knowledge management and organizational behavior together with insights gleaned from innovative community practices in art-making and storytelling. Her written case studies have been published by Harvard Business School, where she also worked with startups at the Harvard Innovation Lab and the Berkman Center for Internet and Society. She studied video art at the University of California at Irvine and received a Master of Science degree in Art, Culture, and Technology at MIT. She’s currently pursuing a PhD in Communications at Columbia University in the city of New York, where she’s a Research Assistant affiliated with the Tow Center for Digital Journalism contributing to studies in information security.