Resources and readings: General cybersecurity

Image by Giammarco Boscaro via Unsplash

This is a basic collection of general cybersecurity readings, each with an abstract or preview and very lazy APA formatting. I’ll be adding to this as time goes by and creating more specific reading lists as well.


Al-khateeb, S., Hussain, M. N., & Agarwal, N. (2017). Social Cyber Forensics Approach to Study Twitter’s and Blogs’ Influence on Propaganda Campaigns. In D. Lee, Y.-R. Lin, N. Osgood, & R. Thomson (Eds.), Social, Cultural, and Behavioral Modeling (Vol. 10354, pp. 108–113). Springer International Publishing. https://doi.org/10.1007/978-3-319-60240-0_13

In today’s information technology age our political discourse is shrinking to fit our smartphone screens. Online Deviant Groups (ODGs) use social media to coordinate cyber propaganda campaigns to achieve strategic and political goals, influence mass thinking, and steer behaviors. In this research, we study the ODGs who conducted cyber propaganda campaigns against NATO’s Trident Juncture Exercise 2015 (TRJE 2015) and how they used Twitter and blogs to drive the campaigns. Using blended Social Network Analysis (SNA) and Social Cyber Forensics (SCF) approaches, we identified “anti-NATO” narratives on blogs. The narratives intensified as TRJE 2015 approached. The most influential narrative identified by the proposed methodology called for civil disobedience and direct action against TRJE 2015 specifically and NATO in general. We use SCF analysis to extract metadata associated with propaganda-riddled websites. The metadata helps in the collection of social and communication network information. By applying SNA to the data, we identify influential users and powerful groups (or focal structures) coordinating the propaganda campaigns. Data for this research (including blogs and metadata) is accessible through our in-house developed Blogtrackers tool.
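
To make the SNA step concrete, here is a minimal sketch (my own illustration, not the authors’ Blogtrackers pipeline) of ranking accounts in a communication network by centrality to surface likely campaign influencers; the edge list and account names are hypothetical.

```python
# Minimal SNA sketch: rank accounts in a hypothetical directed
# mention/link network by PageRank to surface likely influencers.
import networkx as nx

edges = [("a", "b"), ("c", "b"), ("d", "b"), ("b", "e"), ("c", "e")]
G = nx.DiGraph(edges)

ranked = sorted(nx.pagerank(G).items(), key=lambda kv: -kv[1])
print(ranked[:3])  # highest-centrality accounts first
```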

Al-Shaer, E., Wei, J., Hamlen, K. W., & Wang, C. (Eds.). (2019). Autonomous Cyber Deception: Reasoning, Adaptive Planning, and Evaluation of HoneyThings. Springer International Publishing. https://doi.org/10.1007/978-3-030-02110-8

Why Cyber Deception? Cyberattacks have evolved to be highly evasive against traditional prevention and detection techniques, such as antivirus, perimeter firewalls, and intrusion detection systems. At least 360,000 new malicious files were detected every day, and one ransomware attack was reported every 40 seconds in 2017 (Chap. 10). An estimated 69% of breaches go undetected by victims but are spotted by an external party, and 66% of breaches remained undiscovered for more than 5 months (Chap. 10). Asymmetries between attacker and defender information and resources are often identified as root causes behind many of these alarming statistics. Cybercriminals frequently reconnoiter and probe victim defenses for days or years prior to mounting attacks, whereas defenders may only have minutes or seconds to respond to each newly emerging threat. Defenders seek to protect infrastructures consisting of thousands or millions of assets, whereas attackers can often leak sensitive information or conduct sabotage by penetrating just one critical asset. Finding ways to level these ubiquitous asymmetries has therefore become one of the central challenges of the digital age.

What Is Cyber Deception? Cyber deception has emerged as an effective and complementary defense technique to overcome asymmetry challenges faced by traditional detection and prevention strategies. Approaches in this domain deliberately introduce misinformation or misleading functionality into cyberspace in order to trick adversaries in ways that render attacks ineffective or infeasible. These reciprocal asymmetries pose scalability problems for attackers similar to the ones traditionally faced by defenders, thereby leveling the battlefield.

Cyber Deception Models Cyber deception can be accomplished in two major ways: (1) mutation, to frequently change the ground truth (i.e., the real value) of cyber parameters such as cyber configuration, IP addresses, file names, and URLs, and (2) misrepresentation, to change or corrupt only the value of cyber parameters returned to the attacker, without changing the ground truth, such as false fingerprinting, files, and decoy services. We therefore call the cyber parameters used for deceiving attackers HoneyThings. Using the concept of HoneyThings in both approaches expands the exploration space adversaries must cover to launch effective attacks.
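
A toy contrast between the two models, written as a sketch of my own (the class, addresses, and methods are hypothetical, not from the book): mutation rewrites the ground truth itself, while misrepresentation only lies about it to untrusted queriers.

```python
# Sketch of the two deception models. Mutation changes the real value;
# misrepresentation changes only what an untrusted querier is told.
import random

class HoneyHost:
    def __init__(self, real_ip):
        self.real_ip = real_ip            # ground truth
        self.decoy_ip = "10.0.0.99"       # hypothetical decoy address

    def mutate(self):
        """Mutation: actually re-assign the host's IP (ground truth changes)."""
        self.real_ip = f"10.0.0.{random.randint(2, 254)}"

    def report_ip(self, trusted):
        """Misrepresentation: lie to untrusted queriers; ground truth intact."""
        return self.real_ip if trusted else self.decoy_ip

host = HoneyHost("10.0.0.5")
host.mutate()
print(host.report_ip(trusted=True), host.report_ip(trusted=False))
```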

Brady, W. J., Wills, J. A., Burkart, D., Jost, J. T., & Van Bavel, J. J. (2019). An ideological asymmetry in the diffusion of moralized content on social media among political leaders. Journal of Experimental Psychology: General, 148(10), 1802–1813. https://doi.org/10.1037/xge0000532

Online social networks constitute a major platform for the exchange of moral and political ideas, and political elites increasingly rely on social media platforms to communicate directly with the public. However, little is known about the processes that render some political elites more influential than others when it comes to online communication. Here, we gauge the influence of political elites on social media by examining how message factors (characteristics of the communication) interact with source factors (characteristics of elites) to impact the diffusion of elites’ messages through Twitter. We analyzed messages (N = 286,255) sent from federal politicians (presidential candidates, members of the Senate and House of Representatives) in the year leading up to the 2016 U.S. presidential election—a period in which Democrats and Republicans sought to maximize their influence over potential voters. Across all types of elites, we found a “moral contagion” effect: elites’ use of moral-emotional language was robustly associated with increases in message diffusion. We also discovered an ideological asymmetry: conservative elites gained greater diffusion when using moral-emotional language compared to liberal elites, even when accounting for extremity of ideology and other source cues. Specific moral emotion expressions related to moral outrage—namely, moral anger and disgust—were impactful for elites across the political spectrum, whereas moral emotion expressions related to religion and patriotism were more impactful for conservative elites. These findings help inform the scientific understanding of political propaganda in the digital age, and the antecedents of political polarization in American politics.

Brady, W. J., Wills, J. A., Jost, J. T., Tucker, J. A., & Van Bavel, J. J. (2017). Emotion shapes the diffusion of moralized content in social networks. Proceedings of the National Academy of Sciences, 114(28), 7313–7318. https://doi.org/10.1073/pnas.1618923114

Political debate concerning moralized issues is increasingly common in online social networks. However, moral psychology has yet to incorporate the study of social networks to investigate processes by which some moral ideas spread more rapidly or broadly than others. Here, we show that the expression of moral emotion is key for the spread of moral and political ideas in online social networks, a process we call “moral contagion.” Using a large sample of social media communications about three polarizing moral/political issues (n = 563,312), we observed that the presence of moral-emotional words in messages increased their diffusion by a factor of 20% for each additional word. Furthermore, we found that moral contagion was bounded by group membership; moral-emotional language increased diffusion more strongly within liberal and conservative networks, and less between them. Our results highlight the importance of emotion in the social transmission of moral ideas and also demonstrate the utility of social network methods for studying morality. These findings offer insights into how people are exposed to moral and political ideas through social networks, thus expanding models of social influence and group polarization as people become increasingly immersed in social media networks.
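
To make the reported effect size concrete: treated multiplicatively (my reading of a count-regression rate ratio, not the paper’s exact specification), a 20% boost per moral-emotional word compounds quickly.

```python
# Toy arithmetic for the ~20% diffusion boost per moral-emotional word.
# The multiplicative reading and the base retweet count are assumptions
# for illustration, not figures from the paper.
base_retweets = 100      # hypothetical expected retweets with 0 such words
rate_ratio = 1.20        # +20% per additional moral-emotional word
for k in range(4):
    print(k, "words ->", round(base_retweets * rate_ratio**k, 1), "retweets")
# 0 -> 100.0, 1 -> 120.0, 2 -> 144.0, 3 -> 172.8
```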

Dutta, S., Chakraborty, T., & Das, D. (2019). How Did the Discussion Go: Discourse Act Classification in Social Media Conversations. In D. P & A. Jurek-Loughrey (Eds.), Linking and Mining Heterogeneous and Multi-view Data (pp. 137–160). Springer International Publishing. https://doi.org/10.1007/978-3-030-01872-6_6

Over the last two decades, social media has emerged as almost an alternate world where people communicate with each other and express opinions about almost anything. This makes platforms like Facebook, Reddit, Twitter, Myspace, etc., a rich bank of heterogeneous data, primarily expressed via text but reflecting all textual and non-textual data that human interaction can produce. We propose a novel attention-based hierarchical LSTM model to classify discourse act sequences in social media conversations, aimed at mining data from online discussions using textual meaning beyond the sentence level. The uniqueness of the task lies in the complete categorization of possible pragmatic roles in informal textual discussions, in contrast to role-specific tasks such as question–answer extraction, stance detection, or sarcasm identification. An early attempt was made on a Reddit discussion dataset. We train our model on the same data and present test results on two different datasets, one from Reddit and one from Facebook. Our proposed model outperformed the previous one in terms of domain independence; without using platform-dependent structural features, our hierarchical LSTM with word relevance attention achieved F1-scores of 71% and 66%, respectively, in predicting the discourse roles of comments in Reddit and Facebook discussions. We also present and analyze the efficiency of recurrent and convolutional architectures in learning discursive representations on the same task, with different word and comment embedding schemes. Our attention mechanism lets us inquire into the relevance ordering of text segments according to their roles in discourse. We present a human annotator experiment that unveils important observations about modeling and data annotation. Equipped with our text-based discourse identification model, we examine how heterogeneous non-textual features like location, time, and leaning of information play their roles in characterizing online discussions on Facebook.
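
The architecture the abstract describes (a word-level encoder with relevance attention feeding a comment-level LSTM) is easy to sketch. Here is a minimal, hypothetical PyTorch version; layer sizes, the act count, and all names are my own placeholders, not the authors’ implementation.

```python
# Sketch of a hierarchical LSTM with word-level attention for
# discourse act classification (illustrative, not the paper's code).
import torch
import torch.nn as nn

class HierDiscourseClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim=100, word_hidden=64,
                 comment_hidden=64, n_acts=10):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # Word-level encoder: builds a vector for each comment from its words.
        self.word_lstm = nn.LSTM(embed_dim, word_hidden, batch_first=True,
                                 bidirectional=True)
        # Attention scores each word's relevance to the comment's discourse role.
        self.attn = nn.Linear(2 * word_hidden, 1)
        # Comment-level encoder: models the sequence of comments in a thread.
        self.comment_lstm = nn.LSTM(2 * word_hidden, comment_hidden,
                                    batch_first=True)
        self.out = nn.Linear(comment_hidden, n_acts)

    def forward(self, thread):            # thread: (comments, words) token ids
        w = self.embed(thread)                           # (C, W, E)
        h, _ = self.word_lstm(w)                         # (C, W, 2H)
        a = torch.softmax(self.attn(h).squeeze(-1), -1)  # (C, W) word weights
        comment_vecs = (a.unsqueeze(-1) * h).sum(1)      # (C, 2H)
        seq, _ = self.comment_lstm(comment_vecs.unsqueeze(0))
        return self.out(seq.squeeze(0))                  # (C, n_acts) logits

# Toy usage: a 3-comment thread, 12 tokens per comment, 10 discourse acts.
model = HierDiscourseClassifier(vocab_size=5000)
logits = model(torch.randint(1, 5000, (3, 12)))
print(logits.shape)  # torch.Size([3, 10])
```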

Farkas, J., & Neumayer, C. (2020). Disguised Propaganda from Digital to Social Media. In J. Hunsinger, M. M. Allen, & L. Klastrup (Eds.), Second International Handbook of Internet Research (pp. 707–723). Springer Netherlands. https://doi.org/10.1007/978-94-024-1555-1_33

Disguised propaganda and political deception in digital media have been studied since the early days of the World Wide Web. At the intersection of internet research and propaganda studies, this chapter explores disguised propaganda on websites and social media platforms. Based on a discussion of key concepts and terminology, this chapter outlines how new modes of deception and source obfuscation emerge in digital and social media environments, and how this development complicates existing conceptual and epistemological frameworks in propaganda studies. The chapter concludes by arguing that contemporary challenges of detecting and countering disguised propaganda can only be resolved if social media companies are held accountable and provide the necessary support for user contestation.

Jajodia, S., Subrahmanian, V. S., Swarup, V., & Wang, C. (Eds.). (2016). Cyber Deception: Building the Scientific Foundation. Springer International Publishing. https://doi.org/10.1007/978-3-319-32699-3

This volume is designed to take a step toward establishing scientific foundations for cyber deception. Here we present a collection of the latest basic research results toward establishing such a foundation from several top researchers around the world. This volume includes papers that rigorously analyze many important aspects of cyber deception including the incorporation of effective cyber denial and deception for cyber defense, cyber deception tools and techniques, identification and detection of attacker cyber deception, quantification of deceptive cyber operations, deception strategies in wireless networks, positioning of honeypots, human factors, anonymity, and the attribution problem. Further, we have made an effort to not only sample different aspects of cyber deception, but also highlight a wide variety of scientific techniques that can be used to study these problems. It is our sincere hope that this volume inspires researchers to build upon the knowledge we present to further establish scientific foundations for cyber deception and ultimately bring about a more secure and reliable Internet.

Kwon, S., Cha, M., Jung, K., Chen, W., & Wang, Y. (2013). Prominent Features of Rumor Propagation in Online Social Media. 2013 IEEE 13th International Conference on Data Mining, 1103–1108. https://doi.org/10.1109/ICDM.2013.61

The problem of identifying rumors is of practical importance, especially in online social networks, since information can diffuse more rapidly and widely than in its offline counterpart. In this paper, we identify characteristics of rumors by examining three aspects of diffusion: temporal, structural, and linguistic. For the temporal characteristics, we propose a new periodic time series model that considers daily and external shock cycles; the model demonstrates that rumors are likely to fluctuate over time. We also identify key structural and linguistic differences in the spread of rumors and non-rumors. Our selected features classify rumors with precision and recall in the range of 87% to 92%, higher than the prior state of the art in rumor classification.
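
As a rough illustration of the feature-based classification step (not the paper’s exact feature set or its periodic time series model), a standard classifier over temporal, structural, and linguistic features might look like the sketch below; the features and labels are synthetic stand-ins.

```python
# Sketch of feature-based rumor classification: temporal, structural,
# and linguistic features feeding a standard classifier. All data here
# is synthetic; the feature names are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    rng.exponential(1.0, n),   # temporal: e.g., external-shock decay rate
    rng.poisson(3, n),         # structural: e.g., number of diffusion fragments
    rng.random(n),             # linguistic: e.g., fraction of skeptical words
])
y = (X[:, 0] * X[:, 2] > 0.5).astype(int)  # synthetic rumor/non-rumor labels

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, scoring="precision").mean())
```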

Mazarr, M., Bauer, R., Casey, A., Heintz, S., & Matthews, L. (2019). The Emerging Risk of Virtual Societal Warfare: Social Manipulation in a Changing Information Environment. RAND Corporation. https://doi.org/10.7249/RR2714

This analysis is part of a larger study on techniques of social manipulation and was motivated by recent Russian efforts to manipulate Western information environments. This study focuses on the future of social manipulation efforts and involved a survey of multiple, overlapping information-related technologies and their potential for manipulation. It describes the emerging phenomenon of virtual societal warfare and suggests avenues for Western democracies to respond.

Park, J., Mohaisen, A., Kamhoua, C. A., Weisman, M. J., Leslie, N. O., & Njilla, L. (2020). Cyber Deception in the Internet of Battlefield Things: Techniques, Instances, and Assessments. In I. You (Ed.), Information Security Applications (Vol. 11897, pp. 299–312). Springer International Publishing. https://doi.org/10.1007/978-3-030-39303-8_23

The Internet of Battlefield Things (IoBT) is an emerging application area for improving operational effectiveness in military settings. Security is one of IoBT’s more challenging aspects: adversaries can exploit vulnerabilities in IoBT software and deployment conditions to gain insight into its state. In this work, we look at the security of IoBT through the lens of cyber deception. First, we formulate the IoBT domain as a graph learning problem from an adversarial point of view and introduce various tools through which an adversary can learn the graph starting from partial prior knowledge. Second, we use this model to show that an adversary can learn high-level information from low-level graph structures, including the number of soldiers and their proximity. For that, we use a powerful n-gram based algorithm to obtain features from random walks on the underlying graph representation of the IoBT. Third, we provide microscopic and macroscopic approaches that manipulate the underlying IoBT graph structure to introduce uncertainty into the adversary’s learning. Finally, we show our approach’s effectiveness through analyses and evaluations.
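
One way to realize the random-walk n-gram idea, purely as a sketch and not the paper’s algorithm: sample walks from each node, describe each walk by a label sequence (here, node degrees), and count n-grams of those sequences as features. Names and parameters below are illustrative.

```python
# Sketch: n-gram features from random walks on a graph. The generated
# Barabasi-Albert graph is just a stand-in for an IoBT topology.
import random
from collections import Counter
import networkx as nx

def random_walk(G, start, length, rng):
    walk = [start]
    for _ in range(length - 1):
        nbrs = list(G.neighbors(walk[-1]))
        if not nbrs:
            break
        walk.append(rng.choice(nbrs))
    return walk

def walk_ngram_features(G, n=3, walks_per_node=10, walk_len=8, seed=0):
    rng = random.Random(seed)
    counts = Counter()
    for node in G.nodes:
        for _ in range(walks_per_node):
            walk = random_walk(G, node, walk_len, rng)
            # Use degree sequences so features generalize across node labels.
            degs = tuple(G.degree(v) for v in walk)
            for i in range(len(degs) - n + 1):
                counts[degs[i:i + n]] += 1
    return counts

G = nx.barabasi_albert_graph(50, 2, seed=1)
print(walk_ngram_features(G).most_common(5))
```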

Pond, P., & Lewis, J. (2019). Riots and Twitter: Connective politics, social media and framing discourses in the digital public sphere. Information, Communication & Society, 22(2), 213–231. https://doi.org/10.1080/1369118X.2017.1366539

Social media technologies like Twitter are credited with enabling a new form of connective action, in which political movements coalesce and mobilise around hashtags, memes and personalised action frames. After the UK riots in 2011, citizen ‘broom armies’ took to the streets to clear up and repair damage. Different hashtags, including #RiotCleanUp and #OperationCupOfTea, were implicated in these movements.

Shu, K., Sliva, A., Wang, S., Tang, J., & Liu, H. (2017). Fake News Detection on Social Media: A Data Mining Perspective. ACM SIGKDD Explorations Newsletter, 19(1), 22–36. https://doi.org/10.1145/3137597.3137600

Social media for news consumption is a double-edged sword. On the one hand, its low cost, easy access, and rapid dissemination of information lead people to seek out and consume news from social media. On the other hand, it enables the wide spread of “fake news”, i.e., low-quality news with intentionally false information. The extensive spread of fake news has the potential for extremely negative impacts on individuals and society. Therefore, fake news detection on social media has recently become an emerging research topic attracting tremendous attention. Fake news detection on social media presents unique characteristics and challenges that make existing detection algorithms from traditional news media ineffective or not applicable. First, fake news is intentionally written to mislead readers to believe false information, which makes it difficult and nontrivial to detect based on news content; therefore, we need to include auxiliary information, such as user social engagements on social media, to help make a determination. Second, exploiting this auxiliary information is challenging in and of itself as users’ social engagements with fake news produce data that is big, incomplete, unstructured, and noisy. Because the issue of fake news detection on social media is both challenging and relevant, we conducted this survey to further facilitate research on the problem. In this survey, we present a comprehensive review of detecting fake news on social media, including fake news characterizations on psychology and social theories, existing algorithms from a data mining perspective, evaluation metrics and representative datasets. We also discuss related research areas, open problems, and future research directions for fake news detection on social media.

Tang, Z., & Liu, X. (2019). Research on the Status Quo and Countermeasures of the Discourse Power of Cyberspace Ideology in the New Era. Proceedings of the 2nd International Conference on Contemporary Education, Social Sciences and Ecological Studies (CESSES 2019), Moscow, Russia. https://doi.org/10.2991/cesses-19.2019.191

Cyberspace ideological discourse in the new era presents new features, such as the diversification of discourse subjects, the diversification of discourse carriers, and the complexity of discourse content. At the same time, cyberspace ideological discourse power faces the impact of Western cultural penetration, multiculturalism, and the pressure of Western discourse power. To further construct cyberspace ideological discourse power in the new era, it is necessary to adhere to Marxism as the ideological guide, strengthen discourse subjects and bring their guiding capacity into play, expand discourse carriers and enhance the influence of discourse, and innovate discourse content to improve its quality, thereby strengthening the international voice of cyberspace.

Vartanova, I., Eriksson, K., & Strimling, P. (2019). Country-Independence of Moral Arguments and Moral Opinion Dynamics. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3463009

How do opinion dynamics compare across different countries? The moral argument theory of opinion dynamics says that public opinion on moral issues should generally become more liberal, and the rate of change should vary across issues according to how the different issue positions connect with moral arguments that are acceptable to liberals. We conducted a cross-cultural test of this theory, using new measures of the connections between moral arguments and positions together with available opinion data on 98 issues in the United States, 108 issues in the United Kingdom, and 26 issues in Sweden. In each country the theory successfully predicted the pattern of long-term opinion trends. Moreover, on those issues that were covered both in the US and in the UK, argument measures and opinion trends were highly similar between the two countries. In sum, moral arguments seem to have a large explanatory power transcending different political cultures.

Warner-Søderholm, G., Bertsch, A., Sawe, E., Lee, D., Wolfe, T., Meyer, J., Engel, J., & Fatilua, U. N. (2018). Who trusts social media? Computers in Human Behavior, 81, 303–315. https://doi.org/10.1016/j.chb.2017.12.026

Trust is the foundation of all communication, yet a profound question in business today is how we can psychologically understand trust behaviors in our new digital landscape. Earlier studies of internet and human behavior have shown a significant connection between social media use and user personality (Hughes, Rowe, Batey, & Lee, 2012). Still, the connection between the type of online user and their trust values is an under-researched area. Today, millions of people globally read newsfeeds and information via their digital networks, but we do not know enough about which users of social media actually trust the news they read online. In this study we apply items from five validated trust scales to investigate to what degree a user’s perception of trust varies with their gender, age, and amount of time spent using social media. Using a convenience population sample (n = 214), we found significant differences in levels of trusting behavior across gender, age, social media newsfeed preferences, and extent of social media use. The findings suggest that women and younger users have the highest expectations for integrity, trusting others and expecting others to show empathy and goodwill. Implications of the results are discussed.

Westaby, J. D., Pfaff, D. L., & Redding, N. (2014). Psychology and social networks: A dynamic network theory perspective. American Psychologist, 69(3), 269–284. https://doi.org/10.1037/a0036106

Research on social networks has grown exponentially in recent years. However, despite its relevance, the field of psychology has been relatively slow to explain the underlying goal pursuit and resistance processes influencing social networks in the first place. In this vein, this article aims to demonstrate how a dynamic network theory perspective explains the way in which social networks influence these processes and related outcomes, such as goal achievement, performance, learning, and emotional contagion at the interpersonal level of analysis. The theory integrates goal pursuit, motivation, and conflict conceptualizations from psychology with social network concepts from sociology and organizational science to provide a taxonomy of social network role behaviors, such as goal striving, system supporting, goal preventing, system negating, and observing. This theoretical perspective provides psychologists with new tools to map social networks (e.g., dynamic network charts), which can help inform the development of change interventions. Implications for social, industrial-organizational, and counseling psychology as well as conflict resolution are discussed, and new opportunities for research are highlighted, such as those related to dynamic network intelligence (also known as cognitive accuracy), levels of analysis, methodological/ethical issues, and the need to theoretically broaden the study of social networking and social media behavior.
