Cyber threats are ever increasing. Adversaries are growing more sophisticated, and cyber criminals are infiltrating companies in a variety of sectors. In today's landscape, organizations need to acquire and develop effective security tools and mechanisms, not only to keep up with cyber criminals, but also to stay one step ahead. Cyber-Vigilance and Digital Trust develops cyber security disciplines that serve this double objective, dealing with cyber security threats in a unique way. Specifically, the book reviews recent advances in cyber threat intelligence, trust management and risk analysis, and gives a formal and technical approach, based on a data tainting mechanism, to avoid data leakage in Android systems.
1 What is Cyber Threat Intelligence and How is it Evolving?
1.3. Cyber threat intelligence
1.4. Related work
1.5. Technical threat intelligence sharing problems
1.6. Technical threat intelligence limitations
1.7. Cyber threat intelligence libraries or platforms
1.9. Evaluation of technical threat intelligence tools
1.10. Conclusion and future work
2 Trust Management Systems: a Retrospective Study on Digital Trust
2.2. What is trust?
2.3. Genesis of trust management systems
2.4. Trust management
2.5. Classification of trust management systems
2.6. Trust management in cloud infrastructures
3 Risk Analysis Linked to Network Attacks
3.2. Risk theory
3.3. Analysis of IS risk in the context of IT networks
4 Analytical Overview on Secure Information Flow in Android Systems: Protecting Private Data Used by Smartphone Applications
4.2. Information flow
4.3. Data tainting
4.4. Protecting private data in Android systems
4.5. Detecting control flow
4.6. Handling explicit and control flows in Java and native Android appsʼ code
4.7. Protection against code obfuscation attacks based on control dependencies in Android systems
4.8. Detection of side channel attacks based on data tainting in Android systems
4.9. Tracking information flow in Android systems approaches comparison: summary
4.10. Conclusion and highlights
List of Authors
End User License Agreement
Table 1.1. Threat intelligence sources
Table 1.2. Threat intelligence sub-domains
Table 1.3. Reasons for not sharing
Table 1.4. Evaluation of threat intelligence tools
Table 2.1. Example of an access control list
Table 2.2. Example of policies used in ATNAC (adapted from Ryutov et al. (2005))
Table 3.1. Comparative table of risk analysis methods (ANSI 2014)
Table 4.1. Basic rules for derivations
Table 4.2. Formal proof of the first rule
Table 4.3. Formal proof of the second rule
Table 4.4. Explicit flow propagation logic
Table 4.5. Third-party analyzed applications
Table 4.6. Third-party applications leaking sensitive data (L: location, Ca: cam...
Table 4.7. Tracking information flow in Android systems approaches comparison
Figure 1.1. Typical steps of multi-vector and multi-stage attacks by Lockheed Ma...
Figure 1.2. The Diamond model of intrusion analysis
Figure 1.3. Trend of “threat intelligence” and “indicators of compromise” in cyb...
Figure 1.4. Count of indicators by days as observed in Verizon (2015) in at leas...
Figure 2.1. A basic access control model (adapted from Genovese)
Figure 2.2. An abstract IBAC model
Figure 2.3. Abstract lattice-based access control model
Figure 2.4. Basic role-based access control model (adapted from Genovese)
Figure 2.5. The OrBAC model
Figure 2.6. Abstract attribute-based access control model
Figure 2.7. Illustration of the functioning of a trust management system
Figure 2.8. Functioning modes of a trust management system
Figure 2.9. Architecture of the TrustBuilder TMS
Figure 3.1. Risk terminology
Figure 3.2. Overall EBIOS approach (Culioli et al. 2009)
Figure 3.3. General outline of the Mehari method (Ghazouani et al. 2014)
Figure 3.4. Phases of the OCTAVE method (Bou Nassar 2012)
Figure 3.5. Different classes of attacks on a web application of an IS
Figure 3.6. Models of the Impact and Probability of occurrence parameters
Figure 3.7. Proposed risk classification
Figure 3.8. Classification of Impact and Probability of occurrence
Figure 3.9. Risk classification
Figure 3.10. Bit alternation. For a color version of this figure, see www.iste.c...
Figure 3.11. Network architecture of the e-commerce company network
Figure 3.12. Secure network architecture
Figure 3.13. Risk analysis process
Figure 4.1. Example of an explicit flow
Figure 4.2. Example of an implicit flow
Figure 4.3. Attack using indirect control dependency
Figure 4.4. Implicit flow example
Figure 4.5. Modified architecture to handle control flow in native code
Figure 4.6. Our approach architecture
Figure 4.7. CF-Bench results of our taint tracking approach overhead
Figure 4.8. Dynamic taint analysis process without obfuscation attack
Figure 4.9. The attack model against dynamic taint analysis
Figure 4.10. Code obfuscation attack 1
Figure 4.11. Log files of code obfuscation attacks
Figure 4.12. Code obfuscation attack 2
Figure 4.13. Code obfuscation attack 3
Figure 4.14. Target threat model
Figure 4.15. The modified components (blue) to detect side channel attacks. For ...
Figure 4.16. Leakage of private data through the bitmap cache side channels
Figure 4.17. Microbenchmark of Java overhead. For a color version of this figure...
Table of Contents
First published 2019 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.
Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:
ISTE Ltd
27-37 St George’s Road
London SW19 4EU
UK
www.iste.co.uk
John Wiley & Sons, Inc.
111 River Street
Hoboken, NJ 07030
USA
www.wiley.com
© ISTE Ltd 2019
The rights of Wiem Tounsi to be identified as the author of this work have been asserted by her in accordance with the Copyright, Designs and Patents Act 1988.
Library of Congress Control Number: 2019931457
British Library Cataloguing-in-Publication Data
A CIP record for this book is available from the British Library
This book starts by dealing with cyber threat intelligence in Chapter 1. Cyber threat intelligence is actionable, evidence-based knowledge that reduces the gap between advanced attacks and organizations' defense means, in order to aid specific decisions or to illuminate the risk landscape. This chapter classifies and makes distinctions among existing threat intelligence types and focuses particularly on technical threat intelligence issues and the emerging research, trends and frameworks.
Since threat data are sensitive, organizations are often reluctant to share threat information with their peers when they are not in a trusted environment. Trust, combined with new cloud services, is a solution to improve collective response to new threats. To deepen this approach, the second chapter of this book addresses digital trust and identifies mechanisms underlying trust management systems. It introduces basic concepts of trust management and classifies and analyzes several trust management systems. This chapter shows how trust management concepts are used in recent systems to address new challenges introduced by cloud computing.
When threats are not well addressed, any vulnerability could be exploited and could generate costs for the company. These costs can be of human, technical and financial nature. Thus, to get ahead of these threats, a preventive approach aiming to analyze risks is paramount. This is the subject of the third chapter of this book, which presents a complete information system risk analysis method deployed on various networks. This method is applicable to and is based on network security extensions of existing risk management standards and methods.
Finally, a detective approach based on both dynamic and static analysis is defined in the fourth chapter to defend sensitive data of mobile users, against information flow attacks launched by third-party applications. A formal and technical approach based on a data tainting mechanism is proposed to handle control flow in Java and native applications’ code and to solve the under-tainting problem, particularly in Android systems.
Introduction written by Wiem TOUNSI
Cyberattacks have changed in form, function and sophistication during the last few years. They no longer originate from digital hacktivists or online thugs. Carried out by well-funded and well-organized threat actors, cyberattacks have transformed from hacking for kicks into advanced attacks for profit, whose aims may range from financial gain to political advantage. Accordingly, attacks designed for mischief have been replaced with dynamic, stealthy and persistent attacks, known as advanced malware and advanced persistent threats (APTs). One reason is the complexity of new technologies: as a system gets more complex, it gets less secure, making it easier for the attacker to find weaknesses and harder for the defender to secure it (Schneier 2000). As a result, attackers have a first-mover advantage, trying new attacks first, while defenders have the disadvantage of being in a constant position of responding, for example with better anti-virus software to combat new malware and better intrusion detection systems to detect malicious activities. Despite spending over 20 billion dollars annually on traditional security defenses (Piper 2013), organizations find themselves faced with this new generation of cyberattacks, which easily bypass traditional defenses such as traditional and next-generation firewalls, intrusion prevention systems, anti-virus and security gateways. Those defenses rely heavily on static malware signatures or pattern-matching technology, leaving them extremely vulnerable to ever-evolving threats that exploit unknown and zero-day vulnerabilities. This calls for a new category of threat prevention tools adapted to the complex nature of new generation threats and attacks, which leads to what is commonly named cyber threat intelligence (CTI). CTI, or threat intelligence, means evidence-based knowledge representing threats that can inform decisions.
It is an actionable defense that reduces the gap between advanced attacks and the organization's defensive means. We focus specifically on technical threat intelligence (TTI), which is rapidly becoming an ever-higher business priority (Chismon and Ruks 2015), since it is immediately actionable and easier to quantify than other TI sub-categories. TTI is also the most-shared intelligence, because it is easy to standardize (Yamakawa 2014). With TTI, we can feed firewalls, gateways, security information and event management (SIEM) systems or other appliances of various types with indicators of compromise (IOC) (Verizon 2015), for example malicious payloads and IP addresses. We can also ingest IOC into a searchable index, or use them for visualization and dashboards.
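As a sketch of this ingestion (not tied to any particular SIEM or firewall product; the indicator values, feed structure and event field names below are invented for illustration), matching log events against a TTI feed can look like:

```python
# Minimal sketch of TTI ingestion: match log events against IOC values.
# Indicator values use documentation-range IPs and a placeholder hash.
KNOWN_BAD_IPS = {"203.0.113.7", "198.51.100.23"}
KNOWN_BAD_HASHES = {"44d88612fea8a8f36de82e1278abb02f"}

def match_event(event):
    """Return a list of IOC matches for a single log event (a dict)."""
    hits = []
    if event.get("dst_ip") in KNOWN_BAD_IPS:
        hits.append(("ip", event["dst_ip"]))
    if event.get("file_md5") in KNOWN_BAD_HASHES:
        hits.append(("md5", event["file_md5"]))
    return hits

events = [
    {"dst_ip": "203.0.113.7", "file_md5": None},
    {"dst_ip": "192.0.2.1", "file_md5": "44d88612fea8a8f36de82e1278abb02f"},
    {"dst_ip": "192.0.2.2", "file_md5": None},
]
alerts = [hit for e in events for hit in match_event(e)]
```

Real appliances apply the same set-membership idea at much larger scale, with feeds refreshed continuously.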
Despite its prevalence, many problems exist with TTI. These are mainly related to the quality of IOC (e.g. short IP address lifetimes, stale malware signatures) and to the massive repositories of threat data offered by providers' databases, which overwhelm their consumers (e.g. threat analysts) with data that is not always useful, when only relevant data is essential for generating intelligence. In many cases, threat feeds simply amount to faster signatures that still fail to stop attackers. For example, specific malicious payloads, URLs and IP addresses are so ephemeral that they may only be used once in the case of a true targeted attack.
To date, few analyses have been made of the different types of TI, and specifically of TTI. Moreover, very few research surveys have reported on how new techniques and trends try to overcome TTI problems. Most of the existing literature consists of technical reports exposing periodic statistics on the use of threat intelligence (Ponemon 2015; Shackleford 2015; Shackleford 2016), along with interesting empirical investigations of specific threat analysis techniques (Ahrend et al. 2016; Sillaber et al. 2016).
In order to develop effective defense strategies, organizations can save time and avoid confusion if they start by defining what threat intelligence actually is, how to use it, and how to mitigate its problems given its different sub-categories.
This chapter aims to give a clear idea of threat intelligence and how the literature subdivides it, given its multiple sources, gathering methods, information life-span and the consumers of the resulting intelligence. It helps to classify and make distinctions among existing threat intelligence types to better exploit them. For example, given the short lifetime of TTI indicators, it is important to determine for how long these indicators remain useful.
We focus particularly on the TTI issues and the emerging research studies, trends and standards to mitigate these issues. Finally, we evaluate most popular open source/free threat intelligence tools.
Through our analysis, we find that (1) contrary to what is commonly thought, fast sharing of TTI is not sufficient to avoid targeted attacks; (2) trust is key for effective sharing of threat information between organizations; (3) sharing threat information improves trust and coordination for a collective response to new threats; (4) a common standardized format for sharing TI minimizes the risk of losing the quality of threat data, which provides better automated analytics solutions on large volumes of TTI.
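Finding (4) is easier to see with a concrete record. Below is a simplified, STIX-inspired indicator serialized to JSON; the field names are illustrative only, since real STIX 2.x objects carry additional required properties (e.g. id, spec_version):

```python
import json

# A simplified, STIX-inspired indicator record. Field names are illustrative
# only; this is not a complete or valid STIX 2.x object.
indicator = {
    "type": "indicator",
    "name": "Suspected C&C address",
    "pattern": "[ipv4-addr:value = '198.51.100.23']",
    "valid_from": "2019-01-01T00:00:00Z",
    "labels": ["malicious-activity"],
}

# Serializing to an agreed machine-readable format is what lets different
# organizations' tools consume the same feed without losing data quality.
wire = json.dumps(indicator, sort_keys=True)
received = json.loads(wire)
```

The round trip (serialize, transmit, parse) preserves every field exactly, which is the point of a common standardized format.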
The new generation threats are no longer viruses, trojans and worms whose signatures are known to traditional defenses. Even social engineering and phishing attacks are now classified as traditional. New generation threats are multi-vectored (i.e. they can use multiple means of propagation such as Web, email and applications) and multi-staged (i.e. they can infiltrate networks and move laterally inside them) (FireEye Inc. 2012). These blended, multi-stage attacks easily evade traditional security defenses, which are typically set up to inspect each attack vector as a separate path and each stage as an independent event; such defenses therefore do not view and analyze the attack as an orchestrated series of cyber incidents.
To bring new generation attacks to fruition, attackers are armed with the latest zero-day vulnerabilities and social engineering techniques. They use advanced tactics such as polymorphic threats and blended threats (Piper 2013), which are personalized enough to appear unknown to signature-based tools and yet authentic enough to bypass spam filters. A comprehensive taxonomy of the threat landscape was produced by ENISA (the European Network and Information Security Agency) in early 2017 (ENISA 2017). In the following sections, we provide some examples of these new generation threats.
APTs are examples of multi-vectored and multi-staged threats. They are defined as sophisticated network attacks (Piper 2013; FireEye Inc. 2014) in which an attacker keeps trying until they gain access to a network and stay undetected for a long period of time. The intention of an APT is to steal data rather than to cause damage to the network. APTs target organizations in sectors with high-value information, such as government agencies and financial industries.
Polymorphic threats are cyberattacks, such as viruses, worms or trojans, that constantly change ("morph") (Piper 2013), making them nearly impossible to detect using signature-based defenses. Mutation can occur in different ways (e.g. file name changes and file compression). Despite the changing appearance of the code after each mutation, the essential function usually remains the same: for example, a malware intended to act as a key logger will continue to perform that function even though its signature has changed. Vendors that manufacture signature-based security products are constantly creating and distributing new threat signatures (a very expensive and time-consuming proposition (Piper 2013)), while clients are constantly deploying the signatures provided by their security vendors. It is a vicious cycle that works to the advantage of the attacker.
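Why signature matching fails against polymorphism can be shown in a few lines: a one-token mutation, which need not change a payload's behavior, yields a completely different hash signature (the payload bytes here are harmless stand-ins):

```python
import hashlib

# Stand-in bytes representing a payload; not real malware content.
payload = b"...original malware bytes (stand-in data)..."
mutated = payload.replace(b"original", b"polymorf")  # a trivial "mutation"

sig_before = hashlib.sha1(payload).hexdigest()
sig_after = hashlib.sha1(mutated).hexdigest()

# The functional content a defender cares about may be unchanged, but any
# defense keyed on the exact hash no longer matches after the mutation.
```

This is the asymmetry described above: the attacker changes one byte cheaply, while the vendor must create and distribute a whole new signature.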
Zero-day threats are cyber threats exploiting a publicly unknown vulnerability of an operating system or application. They are so named because the attack is launched on "day zero", that is, before public awareness of the vulnerability and, in many cases, before even the vendor is aware (Piper 2013). In some cases, the vendor is already aware of the vulnerability, but has not disclosed it publicly because it has not yet been patched. Zero-day attacks are extremely effective because they can go undetected for long periods (months, if not years), and when they are finally identified, patching the vulnerability still takes days or even weeks.
Cyberattacks can be classified as either syntactic or semantic attacks; a combination of the two is known as a composite attack or blended attack (Choo et al. 2007). Syntactic attacks exploit technical vulnerabilities in software and/or hardware, for example a malware installation to steal data, whereas semantic attacks exploit social vulnerabilities to gain personal information, for example scam solicitations. In recent years, the two approaches have been combined to realize composite attacks: using a technical tool to facilitate social engineering in order to gain privileged information, or using social engineering to realize a technical attack in order to harm network hosts. Composite attacks include phishing attacks (also called online scams), which frequently use email to send carefully selected victims a plausible-looking message including a malicious attachment targeting a zero-day vulnerability. Phishing is positioned in the first three steps of the kill chain (described below). Phishing attacks forge messages from legitimate organizations, particularly banking and finance services, to deceive victims into disclosing their financial and/or personal identity information or downloading malicious files, in order to facilitate other attacks (e.g. identity theft, credit card fraud, ransomware (National High Tech Crime Unit of the Netherlands police, Europol’s European Cybercrime Centre, Kaspersky Lab, Intel Security 2017)). When the attack focuses on a limited number of recipients to whom a highly personalized message is sent, the technique is named spear phishing. Phishing mostly abuses information found in social media (Fadilpasic 2016). Attackers are always on the lookout for new phishing attack vectors, including smart devices, which are increasingly being used to access and store sensitive accounts and services (Choo 2011).
Obviously, the attack morphology is different depending on the aimed scenario; for example, cybercrime might use stealthy APT to steal intellectual property, while cyber war uses botnets to run distributed denial-of-service (DDoS) attacks (Skopik et al. 2016).
Some analytical frameworks provide structures for thinking about attacks and adversaries, allowing defenders to take decisive actions faster. Examples include the defensive perspective of the kill chain (Hutchins et al. 2011) and the Diamond model, used to track attack groups over time. Other standardized frameworks are developed in section 1.8.4.
Kill chain, first developed by Lockheed Martin in 2011 (Hutchins et al. 2011), is the best known of the CTI frameworks. It is a sequence of stages required for an attacker to successfully infiltrate a network and exfiltrate data from it (Barraco 2014). By breaking up an attack in this manner, defenders can check which stage it is in and deploy appropriate countermeasures.
Figure 1.1. Typical steps of multi-vector and multi-stage attacks by Lockheed Martin’s kill chain
Reconnaissance and weaponization
: the reconnaissance consists of research, identification and selection of targets, often by browsing websites (e.g. conference proceedings, mailing lists and social relationships), pulling down PDFs or learning the internal structure of the target organization. The weaponization is realized by developing a plan of attack based on opportunities for exploitation.
Delivery
: this consists of the transmission of the weapon to the targeted environment. It is often a blended attack delivered across the Web or email threat vectors, with the email containing malicious URLs (i.e. phishing attack). Whether it is an email with a malicious attachment, a hyperlink to a compromised website or an HTTP request containing SQL injection code, this is the critical phase where the payload is delivered to its target.
Exploitation
: most often, exploitation targets an application or operating system vulnerability, but it can exploit the users themselves or leverage an operating system feature that auto-executes code.
Installation and persistence
: a single exploit translates into multiple infections on the same system. More malware executable payloads such as key loggers (i.e. unauthorized malware that records keystrokes), password crackers and Trojan backdoors could then be downloaded and installed. Attackers have built in this stage long-term control mechanisms to maintain persistence into the system.
Command and control (C&C)
: as soon as the malware is installed, a control point from organizational defenses is established. Once its permissions are elevated, the malware establishes communication with one of its C&C servers for further instructions. The malware can also replicate and disguise itself to avoid scans (i.e. polymorphic threats), turn off anti-virus scanners, or can lie dormant for days or weeks, using slow-and-low strategy to evade detection. By using callbacks from the trusted network, malware communications are allowed through a firewall and could penetrate all the different layers of the network.
Exfiltration
: data acquired from infected servers are exfiltrated via encrypted files over a commonly allowed protocol, for example FTP or HTTP, to an external compromised server controlled by the attacker. Violations of data integrity or availability are potential objectives as well.
Lateral movement
: the attacker works to move beyond the single system and establishes long-term control in the targeted network. The advanced malware looks for mapped drives on infected systems, and can then spread laterally into network file shares.
Typically, if you are able to manage and stop an attack at the exploitation stage using this framework, you can be confident that nothing has been installed on the targeted systems, and triggering a full incident response activity may not be needed.
The kill chain is a good way of defending systems from attacks, but it has some limitations. One of the big criticisms of this model is that it does not take into account the way many modern attacks work. For example, many phishing attacks skip the exploitation phase and instead rely on the victim to open a document with an embedded macro or to double-click on an attached script (Pace et al. 2018). Even with these limitations, however, the kill chain is a good baseline to discuss attacks and to find at which stage they can be stopped and analyzed.
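As a sketch of how the kill chain can be used operationally, the fragment below maps detection events to the stage they evidence, so a defender can see how far an attack progressed. The stage grouping follows the list above in simplified form; the event names and their mapping are invented for illustration:

```python
# Ordered kill chain stages, following the description above (grouping
# simplified; event names and mapping are invented for illustration).
STAGES = [
    "reconnaissance/weaponization",
    "delivery",
    "exploitation",
    "installation",
    "command-and-control",
    "exfiltration/lateral-movement",
]

EVENT_STAGE = {
    "phishing_email_blocked": "delivery",
    "macro_execution": "exploitation",
    "new_service_installed": "installation",
    "beacon_to_known_c2": "command-and-control",
}

def furthest_stage(observed_events):
    """Return the deepest kill chain stage evidenced by the observed events."""
    indices = [STAGES.index(EVENT_STAGE[e])
               for e in observed_events if e in EVENT_STAGE]
    return STAGES[max(indices)] if indices else None

# An attack stopped at delivery implies nothing was installed downstream.
stage = furthest_stage(["phishing_email_blocked"])
```

This mirrors the reasoning in the text: if the deepest observed stage is before installation, a full incident response activity may not be needed.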
The Diamond model was created in 2013 at the Center for Cyber Intelligence Analysis and Threat Research (CCIATR). It is used to track adversary groups over time rather than the progress of individual attacks. The simplest form of the Diamond model is shown in Figure 1.2.
Figure 1.2. The Diamond model of intrusion analysis
The Diamond model classifies the different elements of an attack. The diamond for an adversary or a group is not static, but evolves as the adversary changes infrastructure and targets and modifies its TTPs (tactics, techniques and procedures). The Diamond model helps defenders track an adversary, the victims, the adversary’s capabilities and the infrastructure the adversary uses. Each point on the diamond is a pivot point that defenders can use during an investigation to connect one aspect of an attack with the others (Pace et al. 2018).
One big advantage of the Diamond model is its flexibility and extensibility. It is possible to add different aspects of an attack under the appropriate point on the diamond to create complex profiles of different attack groups. These aspects of an attack include: phase, result, direction, methodology and resources.
This model requires time and resources. Some aspects of the model, especially infrastructure, change rapidly. If the diamond of an adversary is not constantly updated, there is a risk of working with outdated information.
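Keeping a diamond current can be as simple as timestamping each vertex and flagging stale ones. The sketch below is not a standard representation of the model; the vertex contents and the 90-day threshold are invented for illustration:

```python
from datetime import datetime, timedelta

# A sketch of a Diamond model record: one entry per vertex, each with the
# time it was last confirmed. Vertex names follow the model; data is invented.
diamond = {
    "adversary": {"value": "group-X", "updated": datetime(2019, 1, 10)},
    "capability": {"value": "macro dropper", "updated": datetime(2019, 1, 10)},
    "infrastructure": {"value": "198.51.100.0/24", "updated": datetime(2018, 6, 1)},
    "victim": {"value": "finance sector", "updated": datetime(2019, 1, 10)},
}

def stale_vertices(diamond, now, max_age_days=90):
    """Vertices not confirmed within max_age_days (infrastructure ages fastest)."""
    cutoff = now - timedelta(days=max_age_days)
    return [v for v, info in diamond.items() if info["updated"] < cutoff]

stale = stale_vertices(diamond, now=datetime(2019, 2, 1))
```

Flagged vertices tell the analyst which part of the profile risks being outdated, which is exactly the maintenance burden noted above.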
Cyber threat intelligence, also known as threat intelligence, is any evidence-based knowledge about threats that can inform decisions (McMillan 2013), with the aim of preventing an attack or shortening the window between compromise and detection. CTI can also be information that, instead of aiding specific decisions, helps to illuminate the risk landscape (Chismon and Ruks 2015). Other definitions exist, for example, in Steele (2014) and Dalziel (2014). A more rigorous one (Dalziel 2014) states that CTI is information that should be relevant (i.e. potentially related to the organization and/or its objectives), actionable (i.e. specific enough to prompt some response, action or decision) and valuable (i.e. the information has to contribute to a useful business outcome). CTI supports different activities, namely security operations, incident response, vulnerability and risk management, risk analysis and fraud prevention (for more details, see Pace et al. (2018)). Depending on the intended activities, the sources of CTI may differ.
Cyber threat intelligence can be generated from information collected from a variety of sources (Holland et al. 2013). These commonly include internal sources (i.e. firewall and router logs or other local sensor traffic, such as honeynets, which are groups of interactive computer systems mostly connected to the Internet that are configured to trap attackers), or external sources, such as government-sponsored sources (i.e. law enforcement, national security organizations), industry sources (i.e. business partners), Open Source Intelligence (OSINT i.e. public threat feeds such as Dshield (2017), ZeuS Tracker (2017), social media and dark web forums) and commercial sources (i.e. threat feeds, Software-as-a-Service (SaaS) threat alerting, security intelligence providers).
External sources can provide structured or unstructured information, whereas internal sources typically provide structured information, as it is generated by technical tools. Structured sources are technical, meaning all information from vulnerability databases or threat data feeds; this information is machine-parsable and digestible, so its processing is simple. Unstructured sources are all those produced in natural language, such as what we find in social media, discussions in underground forums, communications with a peer, or dark webs. They require natural language processing and machine learning techniques to produce intelligence. Table 1.1 presents these sources with the technologies required to process information and transform it into intelligence.
Table 1.1. Threat intelligence sources
Sources:
- Internal (structured): firewall and router logs, honeynets
- External structured: vulnerability databases, IP blacklists and whitelists, threat data feeds
- External unstructured: forums, news sites, social media, dark web
Technologies for collecting and processing:
- Structured sources: feed/web scraper, parser
- Unstructured sources: collection via crawlers and feed/web parsers; processing via Natural Language Processing (NLP) and machine learning
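Full NLP pipelines are beyond the scope of a sketch, but the first step in processing unstructured sources, extracting candidate indicators from free text, can be approximated with regular expressions. This is an illustration only (the forum post is fabricated), and real pipelines add NLP/ML to judge context, e.g. whether an IP is reported as malicious or benign:

```python
import re

# Crude extraction of candidate IOCs from unstructured text.
IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
MD5_RE = re.compile(r"\b[a-fA-F0-9]{32}\b")

# Fabricated forum post, the kind of unstructured source described above.
post = ("Seen beaconing to 203.0.113.7 yesterday; dropper hash "
        "44d88612fea8a8f36de82e1278abb02f, clean host was 10.0.0.5.")

candidate_ips = IP_RE.findall(post)
candidate_hashes = MD5_RE.findall(post)
```

Note that the extraction alone cannot tell that 10.0.0.5 was reported as clean; that judgment is what the machine learning layer is for.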
After collecting and processing threat information, several initiatives encourage threat information sharing, such as incident response teams and international cooperation (CERTs, FIRST, TF-CSIRT) (Skopik et al. 2016), and information sharing and analysis centers (ISACs) (ENISA 2015).
With different sources of threat intelligence and the activities that make use of it, it is useful to have subdivisions to better manage the gathered information and to focus efforts. TI can be categorized into sub-domains. Ahrend et al. (2016) divide TI into formal and informal practices to uncover and utilize tacit knowledge between collaborators; the division depends on the collaborators' form of interaction. Gundert (2014) and Hugh (2016) categorize TI as strategic and operational, depending on the form of analysis used to produce it.
In Chismon and Ruks (2015) and Korstanje (2016), a more refined model divides threat intelligence into four distinct domains: strategic threat intelligence, operational threat intelligence, tactical threat intelligence and technical threat intelligence. This subdivision is also known as the four levels of intelligence analysis (Steele 2007b). It was originally used in a military context as the model of expeditionary factors analysis that distinguishes these four levels (Steele 2007a). In what follows, our study adopts this last subdivision. Table 1.2 summarizes the four domains.
Strategic threat intelligence
is high-level information consumed by decision-makers. The purpose is to help strategists understand current risks and identify further risks of which they are yet unaware. It could cover the financial impact of cyber activity or attack trends, historical data or predictions regarding threat activity. With this knowledge, a board can weigh the risks of possible attacks and allocate effort and budget to mitigate them. Strategic TI is generally in the form of reports, briefings or conversations.
Operational threat intelligence
is information about specific impending attacks against the organization. It is initially consumed by high-level security staff, for example security managers or heads of incident response team (Chismon and Ruks 2015). It helps them anticipate when and where attacks will take place.
Tactical threat intelligence
is often referred to as tactics, techniques and procedures. It is information about how threat actors are conducting attacks (Chismon and Ruks 2015). Tactical TI is consumed by incident responders to ensure that their defenses and investigation are prepared for current tactics. For example, understanding the attacker tooling and methodology is tactical intelligence that could prompt defenders to change policies. Tactical TI is often gained by reading technical press or white papers, communicating with peers in other organizations to know what they are seeing attackers do, or purchasing from a provider of such intelligence.
Technical threat intelligence (TTI)
is information that is normally consumed through technical resources (Chismon and Ruks 2015). Technical TI typically feeds the investigative or monitoring functions of an organization, for example firewalls and mail filtering devices, by blocking attempted connections to suspect servers. TTI also serves for analytic tools, or just for visualization and dashboards. For example, after including an IOC in an organization’s defensive infrastructure such as firewalls and mail filtering devices, historical attacks can be detected by searching logs of previously observed connections or binaries (Chismon and Ruks 2015).
Table 1.2. Threat intelligence sub-domains
- Strategic: high-level information on changing risks
- Operational: details of specific incoming attacks
- Tactical: attackers’ tactics, techniques and procedures (typical consumers: senior security management; architects)
- Technical: indicators of compromise (typical consumers: Security Operation Center staff; incident response team)
From their definitions, strategic and tactical threat intelligence are useful over the long term, whereas operational and technical threat intelligence are profitable for short-term or immediate use. Since technical IOC are for short-term use, a key question is: how long can we expect those indicators to remain useful? In the next section, we deal with TTI in more detail.
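One pragmatic answer is to attach a validity window to each indicator class and age indicators out once the window has passed. The sketch below uses invented lifetimes, not empirically derived values; network indicators are given the shortest window, in line with the short lifetimes discussed in this chapter:

```python
from datetime import datetime, timedelta

# Hypothetical validity windows per indicator type (invented placeholders):
# network indicators age fastest, file hashes slowest.
TTL_DAYS = {"ip": 7, "domain": 30, "md5": 365}

def is_active(ioc_type, first_seen, now):
    """Treat an indicator as useful only within its validity window."""
    return now - first_seen <= timedelta(days=TTL_DAYS[ioc_type])

now = datetime(2019, 3, 1)
fresh = is_active("ip", datetime(2019, 2, 27), now)   # 2 days old: still active
expired = is_active("ip", datetime(2019, 1, 1), now)  # ~2 months old: aged out
```

Expiring indicators this way keeps feeds from accumulating the stale data that, as noted above, overwhelms analysts without generating intelligence.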
Defenders should not only be aware of threat actors and the nature of the attacks they are facing, but also of the data fundamentals associated with these cyberattacks, known as indicators of compromise (IOC). IOC are closely linked to TTI, but are often confused with intelligence itself. IOC are an input that enables the production of intelligence; the feeds by themselves are just data. Only by analyzing these feeds together with internal data relevant to the organization can actionable intelligence be produced that helps it recover from an incident (Dalziel 2014). IOC are commonly partitioned into three distinct categories (Ray 2015): network indicators, host-based indicators and email indicators.
Network indicators
are found in URLs and domain names used for command and control (C&C) and link-based malware delivery. They could be IP addresses used in detecting attacks from known compromised servers, botnets and systems conducting DDoS attacks. However, this type of IOC has a short lifetime, as threat actors move from one compromised server to another; with the development of cloud-based hosting services, it is no longer just compromised servers that are used, but also legitimate IP addresses belonging to large corporations.
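The short lifetime of network indicators suggests aging them out of defensive feeds. The sketch below illustrates this idea; the 30-day retention window is an assumed policy, not a standard, and the feed contents are invented.

```python
# Sketch of aging out network IOCs: because compromised servers are
# abandoned quickly, indicators are only acted on within a retention
# window. The 30-day window is an assumption for illustration.
from datetime import datetime, timedelta

RETENTION = timedelta(days=30)

def active_iocs(feed, now):
    """Keep only indicators first seen within the retention window."""
    return {ip for ip, first_seen in feed.items()
            if now - first_seen <= RETENTION}

now = datetime(2017, 6, 1)
feed = {
    "198.51.100.7": datetime(2017, 5, 20),  # fresh enough to act on
    "203.0.113.42": datetime(2017, 2, 1),   # stale: server likely rotated
}
print(active_iocs(feed, now))  # only the fresh indicator remains
```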
Host-based indicators
can be found through analysis of an infected computer. They can be malware names and decoy documents or file hashes of the malware being investigated. The most commonly offered malware indicators are MD5 or SHA-1 hashes of binaries (Chismon and Ruks 2015). Dynamic link libraries (DLLs) are also often targeted, as attackers replace Windows system files to ensure that their payload executes each time Windows starts. Registry keys could be added by malicious code, and specific keys are modified in computer registry settings to allow for persistence. This is a common technique that malware authors use when creating Trojans (Ray 2015).
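Hash-based host indicators are straightforward to check. The sketch below computes the MD5 and SHA-1 hashes mentioned above and matches them against a hypothetical set of known-bad hashes; both the sample bytes and the bad-hash set are invented.

```python
# Sketch: computing the MD5 and SHA-1 hashes of a binary, the most
# commonly shared host-based indicators, and checking them against a
# hypothetical set of known-bad hashes.
import hashlib

def file_hashes(data: bytes):
    return hashlib.md5(data).hexdigest(), hashlib.sha1(data).hexdigest()

def is_known_malware(data: bytes, bad_hashes: set) -> bool:
    md5, sha1 = file_hashes(data)
    return md5 in bad_hashes or sha1 in bad_hashes

sample = b"harmless stand-in for a binary"
md5, sha1 = file_hashes(sample)
print(is_known_malware(sample, {md5}))        # True: hash is in the set
print(is_known_malware(sample, {"0" * 40}))   # False: no match
```

Note that such exact-match indicators are brittle: flipping a single byte of the payload changes both hashes, which is one reason the chapter later questions how long IOC remain useful.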
Email indicators
are typically created when attackers use free email services to send socially engineered emails to targeted organizations and individuals. Source email addresses and subjects are crafted from addresses that appear to belong to recognizable individuals, or highlight current events to create intriguing subject lines, often with attachments and links. X-originating and X-forwarding IP addresses are email headers identifying the originating IP address of (1) a client connecting to a mail server, and (2) a client connecting to a web server through an HTTP proxy or load balancer, respectively. Monitoring these IP addresses, when available, provides additional insight into attackers.
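Extracting these header-based indicators is easy with a standard email parser. The message below is fabricated for illustration; real collection would run over a mail gateway's quarantine or archive.

```python
# Sketch: pulling the X-Originating-IP header from a raw message with
# the Python standard library parser; the message itself is fabricated.
from email import message_from_string

raw = (
    "From: alerts@example.com\n"
    "Subject: Urgent: account review\n"
    "X-Originating-IP: [203.0.113.9]\n"
    "\n"
    "Please follow the link below.\n"
)
msg = message_from_string(raw)
origin = msg.get("X-Originating-IP", "").strip("[]")
print(origin)  # 203.0.113.9
```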
Spam is the main means of transporting malicious URLs and malware, which are wrapped in the form of spam and phishing messages. (Phishing is positioned in the first three steps of the kill chain; phishing attacks forge messages from legitimate organizations to deceive victims into disclosing their financial and/or personal identity information or downloading malicious files, in order to facilitate other attacks.) Spam is mainly distributed by large spam botnets, i.e. devices that are taken over and form a large network of zombies answering to C&C servers (ENISA 2017). Obfuscation methods (Symantec 2016) were observed in 2015 and continued in 2016 to evade the detection of this type of attack. These methods include sending massive amounts of spam across a wide IP range to reduce the efficiency of spam filters, or using alphanumeric symbols and UTF-8 characters to encode malicious URLs.
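As a concrete instance of URL encoding as obfuscation, percent-encoding hides a path from naive string matching but is trivially reversed with the standard library. The URL below is an invented, benign example.

```python
# Sketch: percent-encoding as one simple URL obfuscation; decoding with
# the standard library before matching against a URL blocklist.
from urllib.parse import unquote

obfuscated = "http://example.com/%6c%6f%67%69%6e"
decoded = unquote(obfuscated)
print(decoded)  # http://example.com/login
```

This is why filters normalize (decode) URLs before comparison; matching on the raw obfuscated form would miss the indicator.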
Cyber threats and attacks are currently among the most widely discussed phenomena in the IT industry and the general media (e.g. news) (iSightPartners 2014). Figure 1.3(a) shows Google results for “threat intelligence”, particularly in terms of research publications, and Figure 1.3(b) shows Google results for “indicators of compromise” in the threat landscape in general and in research publications in particular, over the last five years. These numbers are taken year on year. Even though interest in the threat intelligence and IOC fields has grown exponentially, we observe a gap between the evolution of cyber threat intelligence activities and related research work.
Figure 1.3.Trend of “threat intelligence” and “indicators of compromise” in cyber activity from the last ten years. For a color version of this figure, see www.iste.co.uk/tounsi/cyber.zip
In fact, a large number of threat intelligence vendors and advisory papers describe very different products and activities under the banner of threat intelligence. The same observation applies to the TTI category via indicators of compromise. However, few research studies have examined and identified the characteristics of TI and its related issues. It is nonetheless noteworthy that significant research progress has been made in this field in recent years.
Regarding surveys related to our work, most of them present yearly trends and statistics relevant to strategic intelligence (Ponemon 2015; Shackleford 2015; Shackleford 2016). On the research side, a significant body of work has been dedicated to threat intelligence sharing issues (Moriarty 2011; Barnum 2014; Burger et al. 2014; Ring 2014; Skopik et al. 2016). Many guidelines, best practices and summaries of existing sharing standards and techniques have been published (e.g. Johnson et al. 2016). In contrast, less research has been devoted to areas like TTI problems and how to mitigate them.
This work complements the aforementioned research by separating TI categories. It specifically analyzes TTI problems per type (i.e. problems of information quantity over quality and specific limitations related to each type of IOC), and then shows how to mitigate them. We also survey the reasons behind not sharing threat information with peers and present solutions for sharing this information while avoiding either attack or business risks for organizations. We show how a common standardized representation of TTI improves the quality of threat information, and thereby the effectiveness of automated analytics on large volumes of TTI suffering from non-uniformity and redundancy. Finally, we evaluate TTI tools which aim to share threat intelligence between organizations.
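To make the idea of a common standardized representation concrete, the sketch below shows a simplified indicator object in the style of the STIX standard (one of the sharing standards cited above, e.g. Barnum 2014). The field values are invented and the schema is deliberately abridged; the point is that a shared, structured format lets tools deduplicate and compare feeds uniformly.

```python
# Simplified, STIX-style indicator object (values invented; schema
# abridged for illustration). A shared schema like this is what makes
# automated deduplication and comparison of feeds possible.
import json

indicator = {
    "type": "indicator",
    "id": "indicator--d81f86b9-975b-4c0b-875e-810c5ad45a4f",
    "created": "2017-01-01T00:00:00Z",
    "valid_from": "2017-01-01T00:00:00Z",
    "labels": ["malicious-activity"],
    "pattern": "[ipv4-addr:value = '198.51.100.7']",
}
print(json.dumps(indicator, indent=2))
```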
In the following section, we start by describing the main reasons for not sharing TI.
The benefits of collective sharing and learning from extended and shared threat information are undeniable. Yet, various barriers limit the possibilities to cooperate. In this section, we detail some of these benefits and expose the reasons for not sharing threat information.
Many organizations and participants today agree on the importance of threat information sharing, for several reasons. First, the exchange of critical threat data has been shown to prevent potential cyberattacks and to mitigate ongoing attacks and future hazards. According to the Bipartisan Policy Center (2012), leading cybercrime analysts recognize that public-private cyber information sharing can speed the identification and detection of threats. Thus, if organizations are able to find an intruder during the active phases of an attack, they have a greater chance of stopping the attacker before data is stolen (Zurkus 2015).
In addition, threat sharing is a cost-effective tool in combating cybercrime if properly developed (Peretti 2014; Ponemon 2014). In Gilligan et al. (2014), a study on the economics of cyber security identified a number of “investment principles” for organizations to use in developing data security programs with high economic benefit. One of these principles is participation in multiple cyber security information-sharing exchanges. Advantages of sharing also include better situational awareness of the threat landscape, a deeper understanding of threat actors and their TTPs, and greater agility to defend against evolving threats (Zheng and Lewis 2015). This was demonstrated in a recent survey (Ponemon 2015), in which 692 IT and IT security practitioners were surveyed across various industries. The results reveal growing recognition that threat intelligence exchange can improve an organization's security posture and situational awareness. More broadly, sharing threats improves coordination for collective learning and response to new threats, and reduces the likelihood of cascading effects across an entire system, industry, sector or sectors (Zheng and Lewis 2015). Many attacks do not target a single organization in isolation, but a number of organizations, often in the same sector (Chismon and Ruks 2015). For example, a company can be damaged when a competing business's computers are attacked, since the information stolen can often be used against other organizations in the same sector.
Despite the obvious benefits of sharing threat intelligence, a reluctance to report breaches is observed. The issue was seriously highlighted at a pan-European level when ENISA, the EU's main cyber security agency, published a report (ENISA 2013) in 2013, intentionally capitalizing the word “SHARE”. The report warned around 200 major CERTs in Europe that “the ever-increasing complexity of cyberattacks requires more effective information sharing” and that organizations were not really involved in doing so. In its latest threat landscape report, published in early 2017 (ENISA 2017), ENISA continues to recommend information sharing as a mitigation vector for malware. The authors recommend developing methods for identifying and sharing modus operandi without disclosing competitive information.
Many concerns deter participation in such sharing initiatives. In Table 1.3, we identify, in order of importance, ten major reasons for not sharing threat information.
Fear of negative publicity is one of the main reasons for not sharing threat information, as disclosure could result in a competitive disadvantage (Richards 2009; Choo 2011; Peretti 2014; Chismon and Ruks 2015); for example, competitors might use the information against the victimized organization. In some sectors, even a rumor of compromise can influence purchasing decisions or market valuations (Bipartisan Policy Center 2012).
Legal rules and privacy issues are also cited among the most important reasons for not sharing (ENISA 2013; Peretti 2014; Murdoch and Leaver 2015; Skopik et al. 2016). Organizations may be reluctant to report an incident because they are often unsure about what sort of information can be exchanged without raising legal questions regarding data and privacy protection. Even within the same country, legal rules may differ between collaborating parties; affiliation to a specific sector, for example, might force adherence to specific regulations (ENISA 2006). Regarding international cooperation, confidence between cooperating teams handling sensitive information is often undermined by international regulations that limit the exchange and use of such information. Teams working in different countries have to comply with different legal environments. This issue influences the way the teams provide their services and treat particular kinds of attacks, and therefore limits the possibilities for cooperation, if it does not make cooperation impossible (Skopik et al. 2016).
Quality issues are one of the most common barriers to effective information exchange, according to several surveys conducted among CERTs and other similar organizations (ENISA 2013; Ring 2014; Ponemon 2015; Sillaber et al. 2016). Data quality includes relevance, timeliness, accuracy, comparability, coherence and clarity. For example, many interviewees report that a great deal of what is shared is usually somewhat old and thus not actionable, and that it is not specific enough to aid the decision-making process.