A ground-shaking exposé on the failure of popular cyber risk management methods

How to Measure Anything in Cybersecurity Risk exposes the shortcomings of current "risk management" practices, and offers a series of improvement techniques that help you fill the holes and ramp up security. In his bestselling book How to Measure Anything, author Douglas W. Hubbard opened the business world's eyes to the critical need for better measurement. This book expands upon that premise and draws from The Failure of Risk Management to sound the alarm in the cybersecurity realm. Some of the field's premier risk management approaches actually create more risk than they mitigate, and questionable methods have been duplicated across industries and embedded in the products accepted as gospel. This book sheds light on these blatant risks, and provides alternate techniques that can help improve your current situation. You'll also learn which approaches are too risky to save, and are actually more damaging than a total lack of any security.

Dangerous risk management methods abound; there is no industry more critically in need of solutions than cybersecurity. This book provides solutions where they exist, and advises when to change tracks entirely.

* Discover the shortcomings of cybersecurity's "best practices"
* Learn which risk management approaches actually create risk
* Improve your current practices with practical alterations
* Learn which methods are beyond saving, and worse than doing nothing

Insightful and enlightening, this book will inspire a closer examination of your company's own risk management practices in the context of cybersecurity. The end goal is airtight data protection, so finding cracks in the vault is a positive thing—as long as you get there before the bad guys do. How to Measure Anything in Cybersecurity Risk is your guide to more robust protection through better quantitative processes, approaches, and techniques.
DOUGLAS W. HUBBARD
Cover images: Cyber security lock © Henrik5000/iStockphoto; Cyber eye © kasahasa/iStockphoto; Internet Security concept © bluebay2014/iStockphoto; Background © omergenc/iStockphoto; Abstract business background © Natal'ya Bondarenko/iStockphoto; Abstract business background © procurator/iStockphoto; Cloud Computing © derrrek/iStockphoto Cover design: Wiley
Copyright © 2016 by John Wiley & Sons, Inc. All rights reserved.
Published by John Wiley & Sons, Inc., Hoboken, New Jersey. Published simultaneously in Canada.
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750–8400, fax (978) 646–8600, or on the Web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748–6011, fax (201) 748–6008, or online at http://www.wiley.com/go/permissions.
Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.
For general information on our other products and services or for technical support, please contact our Customer Care Department within the United States at (800) 762–2974, outside the United States at (317) 572–3993 or fax (317) 572–4002.
Wiley publishes in a variety of print and electronic formats and by print-on-demand. Some material included with standard print versions of this book may not be included in e-books or in print-on-demand. If this book refers to media such as a CD or DVD that is not included in the version you purchased, you may download this material at http://booksupport.wiley.com. For more information about Wiley products, visit www.wiley.com.
Library of Congress Cataloging-in-Publication Data:
ISBN 978-1-119-08529-4 (Hardcover)
ISBN 978-1-119-22460-0 (ePDF)
ISBN 978-1-119-22461-7 (ePub)
Douglas Hubbard's dedication: To my children, Evan, Madeleine, and Steven, as the continuing sources of inspiration in my life; and to my wife, Janet, for doing all the things that make it possible for me to have time to write a book, and for being the ultimate proofreader.
Richard Seiersen's dedication: To all the ladies in my life: Helena, Kaela, Anika, and Brenna. Thank you for your love and support through the book and life. You make it fun.
Doug and Richard would also like to dedicate this book to the military and law enforcement professionals who specialize in cybersecurity.
About the Authors
Why This Book, Why Now?
What Is This Book About?
What to Expect
Is This Book for Me?
We Need More Than Technology
New Tools for Decision Makers
Our Path Forward
PART I: Why Cybersecurity Needs Better Measurements for Risk
Chapter 1: The One Patch Most Needed in Cybersecurity
The Global Attack Surface
The Cyber Threat Response
A Proposal for Cybersecurity Risk Management
Chapter 2: A Measurement Primer for Cybersecurity
The Concept of Measurement
The Object of Measurement
The Methods of Measurement
Chapter 3: Model Now!:
An Introduction to Practical Quantitative Methods for Cybersecurity
A Simple One-for-One Substitution
The Expert as the Instrument
Doing “Uncertainty Math”
Supporting the Decision: A Return on Mitigation
Where to Go from Here
Chapter 4: The Single Most Important Measurement in Cybersecurity
The Analysis Placebo: Why We Can’t Trust Opinion Alone
How You Have More Data Than You Think
When Algorithms Beat Experts
Tools for Improving the Human Component
Summary and Next Steps
Chapter 5: Risk Matrices, Lie Factors, Misconceptions, and Other Obstacles to Measuring Risk
Scanning the Landscape: A Survey of Cybersecurity Professionals
What Color Is Your Risk? The Ubiquitous—and Risky—Risk Matrix
Exsupero Ursus and Other Fallacies
PART II: Evolving the Model of Cybersecurity Risk
Chapter 6: Decompose It:
Unpacking the Details
Decomposing the Simple One-for-One Substitution Model
More Decomposition Guidelines: Clear, Observable, Useful
A Hard Decomposition: Reputation Damage
Chapter 7: Calibrated Estimates:
How Much Do You Know Now?
Introduction to Subjective Probability
Further Improvements on Calibration
Conceptual Obstacles to Calibration
The Effects of Calibration
Answers to Trivia Questions for Calibration Exercise
Chapter 8: Reducing Uncertainty with Bayesian Methods
A Major Data Breach Example
A Brief Introduction to Bayes and Probability Theory
Bayes Applied to the Cloud Breach Use Case
Chapter 9: Some Powerful Methods Based on Bayes
Computing Frequencies with (Very) Few Data Points: The Beta Distribution
Decomposing Probabilities with Many Conditions
Reducing Uncertainty Further and When to Do It
Leveraging Existing Resources to Reduce Uncertainty
Wrapping Up Bayes
PART III: Cybersecurity Risk Management for the Enterprise
Chapter 10: Toward Security Metrics Maturity
Introduction: Operational Security Metrics Maturity Model
Sparse Data Analytics
Functional Security Metrics
Security Data Marts
Chapter 11: How Well Are My Security Investments Working Together?
Addressing BI Concerns
Just the Facts: What Is Dimensional Modeling and Why Do I Need It?
Dimensional Modeling Use Case: Advanced Data Stealing Threats
Modeling People Processes
Chapter 12: A Call to Action:
How to Roll Out Cybersecurity Risk Management
Establishing the CSRM Strategic Charter
Organizational Roles and Responsibilities for CSRM
Getting Audit to Audit
What the Cybersecurity Ecosystem Must Do to Support You
Can We Avoid the Big One?
Appendix A: Selected Distributions
Distribution Name: Triangular
Distribution Name: Binary
Distribution Name: Normal
Distribution Name: Lognormal
Distribution Name: Beta
Distribution Name: Power Law
Distribution Name: Truncated Power Law
Appendix B: Guest Contributors
Appendix B Contents
Aggregating Data Sources for Cyber Insights
Forecasting—and Reducing—Occurrence of Espionage Attacks
Financial Impact of Breaches
The Flaw of Averages in Cyber Security
How Catastrophe Modeling Can Be Applied to Cyber Risk
List of Figures
The familiar risk matrix (a.k.a. heat map or risk map)
The Lognormal versus Normal Distribution
Example of a Loss Exceedance Curve
Inherent Risk, Residual Risk, and Risk Tolerance
Duplicate Scenario Consistency: Comparison of First and Second Probability Estimates of Same Scenario by Same Judge
Summary of Distribution of Inconsistencies
Variations of NATO Officers’ Interpretations of Probability Phrases
Heat Map Theory and Empirical Testing
Stats Literacy versus Attitude toward Quantitative Methods
Quarter-to-Quarter Change in Sales for Major Retailers with Major Data Breaches Relative to the Quarter of the Breach
Day-to-Day Change in Stock Prices of Firms with Major Data Breaches Relative to Day of the Breach
Changes in Stock Prices after a Major Breach for Three Major Retailers Relative to Historical Volatility
Changes in Seasonally Adjusted Quarterly Sales after Breach Relative to Historical Volatility
Spin to Win!
Distribution of Answers Within 90% CI for 10-Question Calibration Test
Calibration Experiment Results for 20 IT Industry Predictions in 1997
A Chain Rule Tree
Major Data Breach Decomposition Example with Conditional Probabilities
A Uniform Distribution (a Beta Distribution with alpha=beta=1)
A Distribution Starting with a Uniform Prior and Updated with a Sample of 1 Hit and 5 Misses
The Per-Year Frequency of Data Breaches in This Industry
Example of How a Beta Distribution Changes the Chance of Extreme Losses
Example of Regression Model Predicting Judge Estimates
Distribution of Investigation Time for Cybersecurity Incidents
Security Analytics Maturity Model
Bayes Triplot, beta(4.31, 6.3) prior, s=3, f=7
Bayes Triplot, beta(4.31, 6.3) prior, s=5, f=30
The Standard Security Data Mart
Expanded Mart with Conforming Dimensions
Days ADST Alive Before Being Found
ADST High Level Mart
Remediation Workflow Facts
Cybersecurity Risk Management Function
Power Law Distribution
Truncated Power Law Distribution
Probability of an Espionage Incident with Modeled Changes to Training and Operating Systems
Average Data Breaches per Year by State
Data Breach Rate by Year as a Function of Number of Employees
Data on Breached Records: SEC Filings versus Ponemon
The Distribution of Detection Times for Layers 1 and 2 of a Security System
SIPs of 10,000 Trials of Layer 1 and Layer 2 Detection Times
A SIPmath Excel Model to Calculate the Overall Detection Distribution
The Detection Time SIPs of 10 Independent Botnets
Simulation of Multiple Botnets
Probability of Compromise by Company Size and Password Policy
The AIR Worldwide Catastrophe Modeling Framework
Daniel E. Geer, Jr., ScD
Daniel Geer is a security researcher with a quantitative bent. His group at MIT produced Kerberos, and a number of startups later he is still at it—today as chief information security officer at In-Q-Tel. He writes a lot at every length, and sometimes it gets read. He’s an electrical engineer, a statistician, and someone who thinks truth is best achieved by adversarial procedures.
It is my pleasure to recommend How to Measure Anything in Cybersecurity Risk. The topic is nothing if not pressing, and it is one that I have myself been dancing around for some time.1 It is a hard problem, which allows me to quote Secretary of State John Foster Dulles: “The measure of success is not whether you have a tough problem to deal with, but whether it is the same problem you had last year.” At its simplest, this book promises to help you put some old, hard problems behind you.
The practice of cybersecurity is part engineering and part inference. The central truth of engineering is that design pays if and only if the problem statement is itself well understood. The central truth of statistical inference is that all data has bias—the question being whether you can correct for it. Both engineering and inference depend on measurement. When measurement gets good enough, metrics become possible.
I say “metrics” because metrics are derivatives of measurement. A metric encapsulates measurements for the purpose of ongoing decision support. I and you, dear reader, are not in cybersecurity for reasons of science, though those who are in it for science (or philosophy) will also want measurement of some sort to backstop their theorizing. We need metrics derived from solid measurement because the scale of our task compared to the scale of our tools demands force multiplication. In any case, no game play improves without a way to keep score.
Early in the present author’s career, a meeting was held inside a market-maker bank. The CISO, who was an unwilling promotion from Internal Audit, was caustic even by the standards of NYC finance. He began his comments mildly enough:
Are you security people so stupid that you can’t tell me:
How secure am I?
Am I better off than I was this time last year?
Am I spending the right amount of money?
How do I compare to my peers?
What risk transfer options do I have?
Twenty-five years later, those questions remain germane. Answering them, and others, comes only from measurement; that is the “Why?” of this book.
Yet even if we all agree on “Why?,” the real value of this book is not “Why?” but “How?”: how to measure and then choose among methods, how to do that both consistently and repeatedly, and how to move up from one method to a better one as your skill improves.
Some will say that cybersecurity is impossible if you face a sufficiently skilled opponent. That’s true. It is also irrelevant. Our opponents by and large pick the targets that maximize their return on their investment, which is a polite way of saying that you may not be able to thwart the most singularly determined opponent for whom cost is no object, but you can sure as the world make other targets more attractive than you are. As I said, no game play improves without a way to keep score. That is what this book offers you—a way to improve your game.
This all requires numbers because numbers are the only input to both engineering and inference. Adjectives are not. Color codes are not. If you have any interest in taking care of yourself, of standing on your own two feet, of knowing where you are, then you owe it to yourself to exhaust this book. Its writing is clear, its pedagogy is straightforward, and its downloadable Excel spreadsheets leave no excuse for not trying.
Have I made the case? I hope so.
1. Daniel Geer, Jr., Kevin Soo Hoo, and Andrew Jaquith, “Information Security: Why the Future Belongs to the Quants,” IEEE Security & Privacy 1, no. 4 (July/August 2003): 32–40.
Stuart McClure is the CEO of Cylance, former global CTO of McAfee, and founding author of the Hacking Exposed series.
My university professors always sputtered the age-old maxim in class: “You can’t manage what you cannot measure.” And while my perky, barely-out-of-teenage-years ears absorbed the claim aurally, my brain never really could process what it meant. Sure, my numerous computer science classes kept me chasing an infinite pursuit of improving mathematical algorithms in software programs, but little did I know how to really apply these quantitative efforts to the management of anything, much less cyber.
So I bounded forward in my career in IT and software programming, looking for an application of my unique talents. I never found cyber measurement all that compelling until I found cybersecurity. What motivated me to look at a foundational way to measure what I did in cybersecurity was the timeless question that I and many of you get almost daily: “Are we secure from attack?”
The easy answer to such a trite yet completely understandable question is “No. Security is never 100%.” But some of you have answered the same way I have done from time to time, being exhausted by the inane query, with “Yes. Yes we are.” Why? Because we know a ridiculous question should be given an equally ridiculous answer. For how can we know? Well, you can’t—without metrics.
As my cybersecurity career developed with InfoWorld and Ernst & Young, while founding the company Foundstone, taking senior executive roles in its acquiring company, McAfee, and now starting Cylance, I have developed a unique appreciation for the original professorial claim that you really cannot manage what you cannot measure. While an objective metric may be mythical, a subjective and localized measurement of your current risk posture and where you stand relative to your past and your peers is very possible.
Measuring the cyber risk present at an organization is nontrivial, and when you set the requirement of delivering on quantitative measurements rather than subjective and qualitative measurements, it becomes almost beyond daunting.
The real questions for all of us security practitioners are ultimately “Where do we start? How do we go about measuring cybersecurity’s effectiveness and return?” The only way to begin to answer those questions is through quantitative metrics. And until now, the art of cybersecurity measurement has been elusive. I remember the first time someone asked me my opinion on a security-risk metrics program, I answered something to the effect of, “It’s impossible to measure something you cannot quantify.”
What the authors of this book have done is begin to define a framework and a set of algorithms and metrics to do exactly what the industry has long thought impossible, or at least futile: measure security risk. We may not be perfect in our measurement, but we can define a set of standard metrics that are defensible and quantifiable, and then use those same metrics day in and day out to ensure that things are improving. And that is the ultimate value of defining and executing on a set of security metrics. You don’t need to be perfect; all you need to do is start somewhere and measure yourself relative to the day before.
We thank these people for their help as we wrote this book:
Christopher “Kip” Bohn
A very special thanks to Bonnie Norman and Steve Abrahamson for providing additional editing.
Douglas Hubbard is the creator of the Applied Information Economics method and the founder of Hubbard Decision Research. He is the author of one of the best-selling business statistics books of all time, How to Measure Anything: Finding the Value of “Intangibles” in Business. He is also the author of The Failure of Risk Management: Why It’s Broken and How to Fix It, and Pulse: The New Science of Harnessing Internet Buzz to Track Threats and Opportunities. He has sold more than 100,000 copies of his books in eight different languages, and his books are used in courses at many major universities. His consulting experience in quantitative decision analysis and measurement problems totals over 27 years and spans many industries including pharmaceuticals, insurance, banking, utilities, cybersecurity, interventions in developing economies, mining, federal and state government, entertainment media, military logistics, and manufacturing. He is also published in several periodicals including Nature, The IBM Journal of R&D, Analytics, OR/MS Today, InformationWeek, and CIO Magazine.
Richard Seiersen is a technology executive with nearly 20 years of experience in information security, risk management, and product development. Currently he is the general manager of cybersecurity and privacy for GE Healthcare. Many years ago, prior to his life in technology, he was a classically trained musician—guitar, specifically. Richard now lives with his family of string players in the San Francisco Bay Area. In his limited spare time he is slowly working through his MS in predictive analytics at Northwestern. He should be done just in time to retire. He thinks that will be the perfect time to take up classical guitar again.
This book is the first of a series of spinoffs from Douglas Hubbard’s successful first book, How to Measure Anything: Finding the Value of “Intangibles” in Business. For future books in this franchise, we were considering titles such as How to Measure Anything in Project Management or industry-specific books like How to Measure Anything in Healthcare. All we had to do was pick a good idea from a long list of possibilities.
Cybersecurity risk seemed like an ideal first book for this new series. It is extremely topical and filled with measurement challenges that may often seem impossible. We also believe it is an extremely important topic for personal reasons (as we are credit card users and have medical records, client data, intellectual property, and so on) as well as for the economy as a whole.
Another factor in choosing a topic was finding the right co-author. Because Doug Hubbard—a generalist in measurement methods—would not be a specialist in any of the particular potential spinoff topics, he planned to find a co-author who could write authoritatively on the topic. Hubbard was fortunate to find an enthusiastic volunteer in Richard Seiersen—someone with years of experience in the highest levels of cybersecurity management with some of the largest organizations.
So, with a topical but difficult measurement subject, a broad and growing audience, and a good co-author, cybersecurity seemed like an ideal fit.
Even though this book focuses on cybersecurity risk, it still has a lot in common with the original How to Measure Anything book, including:
Making better decisions when you are significantly uncertain about the present and future, and
Reducing that uncertainty even when data seems unavailable or the targets of measurement seem ambiguous and intangible.
This book in particular offers an alternative to a set of deeply rooted risk assessment methods now widely used in cybersecurity but that have no basis in the mathematics of risk or scientific method. We argue that these methods impede decisions about a subject of growing criticality. We also argue that methods based on real evidence of improving decisions are not only practical but already have been applied to a wide variety of equally difficult problems, including cybersecurity itself. We will show that we can start at a simple level and then evolve to whatever level is required while avoiding problems inherent to “risk matrices” and “risk scores.” So there is no reason not to adopt better methods immediately.
You should expect a gentle introduction to measurably better decision making—specifically, improvement in high-stakes decisions that have a lot of uncertainty and where, if you are wrong, your decisions could lead to catastrophe. We think security embodies all of these concerns.
We don’t expect our readers to be risk management experts or cybersecurity experts. The methods we apply to security can be applied to many other areas. Of course, we do hope it will make those who work in the field of cybersecurity better defenders and strategists. We also hope it will make the larger set of leaders more conscious of security risks in the process of becoming better decision makers.
If you really want to be sure this book is for you, here are the specific personas we are targeting:
You are a decision maker looking to improve—that is, measurably improve—your high-stakes decision making.
You are a security professional looking to become more strategic in your fight against the bad guy.
You are neither of the above. Instead, you have an interest in understanding more about cybersecurity and/or risk management using readily accessible quantitative techniques.
If you are a hard-core quant, consider skipping the purely quant parts. If you are a hard-core hacker, consider skipping the purely security parts. That said, we will often have a novel perspective, or “epiphanies of the obvious,” on topics you already know well. Read as you see fit.
We need to lose less often in the fight against the bad guys—or, at least, lose more gracefully and recover quickly. Many feel that this requires better technology. We clamor for more innovation from our vendors in the security space even though breach frequency has not been reduced. Yet to effectively battle security threats, we think there is something at least as important as innovative technology, if not more so. We believe that “something” must include a better way to think quantitatively about risk.
We need decision makers who consistently make better choices through better analysis. We also need decision makers who know how to deftly handle uncertainty in the face of looming catastrophe. Parts of this solution are sometimes referred to with current trendy terms like “predictive analytics,” but more broadly this includes all of decision science or decision analysis and even properly applied statistics.
Part I of this book sets the stage for reasoning about uncertainty in security. We will come to terms on things like security, uncertainty, measurement, and risk management. We also argue against toxic misunderstandings of these terms and why we need a better approach to measuring cybersecurity risk and, for that matter, measuring the performance of cybersecurity risk analysis itself. We will also introduce a very simple quantitative method that could serve as a starting point for anyone, no matter how averse they may be to complexity.
Part II of this book will delve further into evolutionary steps we can take with a very simple quantitative model. We will describe how to add further complexity to a model and how to use even minimal amounts of data to improve those models.
Last, in Part III we will describe what is needed to implement these methods in the organization. We will also talk about the implications of this book for the entire cybersecurity “ecosystem,” including standards organizations and vendors.
There is nothing more deceptive than an obvious fact.
—Sherlock Holmes, The Boscombe Valley Mystery1
In the days after September 11, 2001, increased security meant overhauled screening at the airport, no-fly lists, air marshals, and attacking terrorist training camps. But just 12 years later, the FBI was emphasizing the emergence of a very different concern: the “cyber-based threat.” In 2013, FBI director James B. Comey, testifying before the Senate Committee on Homeland Security and Governmental Affairs, stated the following:
. . .we anticipate that in the future, resources devoted to cyber-based threats will equal or even eclipse the resources devoted to non-cyber based terrorist threats.
—FBI director James B. Comey, November 14, 20132
This is a shift in priorities we cannot overstate. How many organizations in 2001, preparing for what they perceived as the key threats at the time, would have even imagined that cyber threats would have not only equaled but exceeded more conventional terrorist threats? Yet as we write this book, it is accepted as our new “new normal.”
Admittedly, those outside of the world of cybersecurity may think the FBI is sowing seeds of Fear, Uncertainty, and Doubt (FUD) to some political end. But it would seem that there are plenty of sources of FUD, so why pick cyber threats in particular? Of course, to cybersecurity experts this is a non-epiphany. We are under attack and it will certainly get worse before it gets better.
Yet resources are limited. Therefore, the cybersecurity professional must effectively determine a kind of “return on risk mitigation.” Whether or not such a return is explicitly calculated, we must evaluate whether a given defense strategy is a better use of resources than another. In short, we have to measure and monetize risk and risk reduction. What we need is a “how to” book for professionals in charge of allocating limited resources to addressing ever-increasing cyber threats, and leveraging those resources for optimum risk reduction. This includes methods for:
How to measure risk assessment methods themselves.
How to measure reduction in risk from a given defense, control, mitigation, or strategy (using some of the better-performing methods as identified in the first bullet).
How to continuously and measurably improve on the implemented methods, using more advanced methods that the reader may employ as he or she feels ready.
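The “return on risk mitigation” mentioned above can be made concrete with a small calculation. The sketch below is a minimal Monte Carlo illustration of the idea, not the authors’ full method: a risk is modeled as an annual event probability plus a 90% confidence interval on the loss (treated here as lognormal), and the return on a control is the reduction in expected annual loss net of the control’s cost. All probabilities, dollar figures, and the function name are hypothetical, chosen only for illustration.

```python
import math
import random

def simulate_annual_loss(p_event, loss_low, loss_high, trials=100_000, seed=1):
    """Monte Carlo estimate of expected annual loss for one risk scenario.

    p_event            -- probability the loss event occurs in a given year
    loss_low/loss_high -- 90% confidence interval bounds on the impact,
                          modeled here with a lognormal distribution
    """
    rng = random.Random(seed)
    # Convert the 90% CI into lognormal parameters (1.645 is the z-score
    # corresponding to a 90% interval).
    mu = (math.log(loss_low) + math.log(loss_high)) / 2
    sigma = (math.log(loss_high) - math.log(loss_low)) / (2 * 1.645)
    total = 0.0
    for _ in range(trials):
        if rng.random() < p_event:        # did the event happen this year?
            total += rng.lognormvariate(mu, sigma)
    return total / trials                 # expected annual loss

# Hypothetical numbers: a control cuts the event probability from 10% to 4%.
loss_before = simulate_annual_loss(0.10, 50_000, 500_000)  # without control
loss_after = simulate_annual_loss(0.04, 50_000, 500_000)   # with control
control_cost = 5_000
return_on_mitigation = (loss_before - loss_after - control_cost) / control_cost
```

Comparing `return_on_mitigation` across candidate controls is one simple way to decide which defense is the better use of limited resources.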
Let’s be explicit about what this book isn’t. This is not a technical security book—if you’re looking for a book on “ethical hacking,” then you have certainly come to the wrong place. There will be no discussions about how to execute stack overflows, defeat encryption algorithms, or execute SQL injections. If and when we do discuss such things, it’s only in the context of understanding them as parameters in a risk model.
But don’t be disappointed if you’re a technical person. We will certainly be getting into some analytic nitty-gritty as it applies to security. This is from the perspective of an analyst or leader trying to make better bets in relation to possible future losses. For now, let’s review the scale of the challenge we are dealing with and how we deal with it currently, then outline a direction for the improvements laid out in the rest of the book.
Nation-states, organized crime, hacktivist entities, and insider threats want our secrets, our money, and our intellectual property, and some want our complete demise. Sound dramatic? If we understand the FBI correctly, they expect to spend as much or more on protecting us from cyber threats than from those who would turn airplanes, cars, pressure cookers, and even people into bombs. And if you are reading this book, you probably already accept the gravity of the situation. But we should at least spend some time emphasizing this point if for no other reason than to help those who already agree with this point make the case to others.
The Global Information Security Workforce Study (GISWS)—a survey conducted in 2015 of more than 14,000 security professionals, including 1,800 federal employees—showed we are not just taking a beating, we are backpedaling:
When we consider the amount of effort dedicated over the past two years to furthering the security readiness of federal systems and the nation’s overall security posture, our hope was to see an obvious step forward. The data shows that, in fact, we have taken a step back.
—(ISC)2 on the announcement of the GISWS, 20153
Indeed, other sources of data support this dire conclusion. The UK insurance market, Lloyd’s of London, estimated that cyberattacks cost businesses $400 billion globally per year.4 In 2014, one billion records were compromised. This caused Forbes magazine to refer to 2014 as “The Year of the Data Breach.”5,6 Unfortunately, identifying 2014 as the year of the data breach may still prove to be premature. It could easily get worse.
In fact, the founder and head of XL Catlin, the largest insurer in Lloyd’s of London, said cybersecurity is the “biggest, most systemic risk” he has seen in his 42 years in insurance.7 Potential weaknesses in widely used software; interdependent network access between companies, vendors, and clients; and the possibility of large coordinated attacks can affect much more than even one big company like Anthem, Target, or Sony. XL Catlin believes it is possible that there could be a simultaneous impact on multiple major organizations affecting the entire economy. They feel that if there are multiple major claims in a short period of time, this is a bigger burden than insurers can realistically cover.
What is causing such a dramatic rise in breaches and the anticipation of even more? It is the growth of the attack surface. “Attack surface” is usually defined as the sum of all exposures of an information system: the points at which it exposes value to untrusted sources. You don’t need to be a security professional to get this. Your home, your bank account, your family, and your identity all have an attack surface. If you received identity theft protection as a federal employee, or as a customer of Home Depot, Target, Anthem, or Neiman Marcus, then you received it courtesy of an attack surface. These companies put the digital you within reach of criminals. Directly or indirectly, the Internet facilitated this. This evolution happened quickly and without the knowledge or direct permission of all interested parties (organizations, employees, customers, or citizens).
Various definitions of the phrase consider the ways into and out of a system, the defenses of that system, and sometimes the value of data in that system.8,9 Some definitions of attack surface refer to the attack surface of a system and some refer to the attack surface of a network, but either might be too narrow even for a given firm. We might also define an “Enterprise Attack Surface” that not only consists of all systems and networks in that organization but also the exposure of third parties. This includes everyone in the enterprise “ecosystem” including major customers, vendors, and perhaps government agencies. (Recall that in the case of the Target breach, the exploit came from an HVAC vendor.)
Perhaps the total attack surface that concerns all citizens, consumers, and governments is a kind of “global attack surface”: the total set of cybersecurity exposures—across all systems, networks, and organizations—we all face just by shopping with a credit card, browsing online, receiving medical benefits, or even just being employed. This global attack surface is a macro-level phenomenon driven by at least four macro-level causes of growth: increasing users worldwide, variety of users worldwide, growth in discovered and exploited vulnerabilities per person per use, and organizations more networked with each other resulting in “cascade failure” risks.
The increasing number of persons on the Internet.
Internet users worldwide grew by a factor of six from 2001 to 2014 (half a billion to three billion). It may not be obvious that the number of users is a dimension in some attack surfaces, but some measures of attack surface also include the value of a target, which would be partly a function of the number of users (e.g., gaining access to more personal records).
Also, on a global scale, it acts as an important multiplier on the following dimensions.
The number of uses per person for online resources.
The varied uses of the Internet, total time spent online, use of credit cards, and various services that require the storage of personal data and automated transactions are all growing. Per person. Worldwide. For example, since 2001 the number of websites alone has grown at a rate five times faster than the number of users—a billion total by 2014. Connected devices constitute another potential way for an individual to use the Internet, even without their active involvement. One forecast regarding the “Internet of Things” (IoT) comes from Gartner, Inc.: “4.9 billion connected things will be in use in 2015, up 30 percent from 2014, and will reach 25 billion by 2020.”
A key concern here is the lack of consistent security in designs. The National Security Telecommunications Advisory Committee determined that “there is a small—and rapidly closing—window to ensure that the IoT is adopted in a way that maximizes security and minimizes risk. If the country fails to do so, it will be coping with the consequences for generations.”
The number of vulnerabilities discovered and exploited per person per use.
A natural consequence of the previous two factors is that the number of ways such uses can be exploited increases. This follows from the increase in systems and devices with potential vulnerabilities, even if vulnerabilities per system or device do not increase. The number of vulnerabilities will also increase, partly because the number of people actively seeking and exploiting vulnerabilities is growing. And more of those people will belong to well-organized and well-funded teams working for national sponsors.
The possibility of a major breach “cascade.”
More large organizations are finding efficiencies from being more connected. The fact that Target was breached through a vendor raises the possibility of the same attack affecting multiple organizations. Organizations like Target have many vendors, several of which in turn have multiple large corporate and government clients. Mapping this cyber-ecosystem of connections would be almost impossible, since it would certainly require all these organizations to divulge sensitive information. So the kind of publicly available metrics we have for the previous three factors in this list do not exist for this one. But we suspect most large organizations could just be one or two degrees of separation from each other.
It seems reasonable that, of these four trends, the earlier ones magnify the later ones. If so, the risk of a major breach “cascade” could grow faster than the growth rates of the first couple of trends alone.
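The compounding effect of these trends can be illustrated with a toy calculation: if each driver grows independently and the later ones scale with the earlier ones, the combined exposure grows multiplicatively. The growth rates below are invented purely to show the arithmetic, not estimates of actual trends.

```python
# Hypothetical annual growth rates for the four drivers (illustrative only):
users = 1.08      # 8% more Internet users per year (assumed)
uses = 1.15       # 15% more uses (sites, devices, services) per person (assumed)
vulns = 1.10      # 10% more exploited vulnerabilities per use (assumed)
coupling = 1.12   # 12% more inter-organization connectivity (assumed)

# If the trends multiply, combined growth far exceeds any single driver:
combined = users * uses * vulns * coupling
print(f"combined attack-surface growth: {combined - 1:.1%} per year")

# Over a decade, the multiplicative effect dominates:
decade = combined ** 10
print(f"10-year growth factor: {decade:.1f}x")
```

Even with each individual rate in the modest 8–15 percent range, the combined exposure in this sketch grows by more than half each year, which is the intuition behind expecting cascade risk to outpace any one trend.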
Our naïve, and obvious, hypothesis? Attack surface and breach are correlated. If this holds true, then we haven’t seen anything yet. We are heading into a historic growth in attack surface, and hence breach, which will eclipse what has been seen to date. Given all this, the FBI director’s comments and the statements of Lloyd’s of London insurers cannot be dismissed as alarmist. Even with the giant breaches like Target, Anthem, and Sony behind us, we believe we haven’t seen “The Big One” yet.
It’s a bit of a catch-22 in that success in business is highly correlated with exposure. Banking, buying, getting medical attention, and even being employed is predicated on exposure. You need to expose data to transact business, and if you want to do more business, that means more attack surface. When you are exposed, you can be seen and affected in unexpected and malicious ways. In defense, cybersecurity professionals try to “harden” systems—that is, removing all nonessentials, including programs, users, data, privileges, and vulnerabilities. Hardening shrinks, but does not eliminate, attack surface. Yet even this partial reduction in attack surface requires significant resources, and the trends show that the resource requirements will grow.
Generally, executive-level attention on cybersecurity risks has increased, and attention is followed by resources. The boardroom is beginning to ask questions like “Will we be breached?” or “Are we better than Sony?” or “Did we spend enough on the right risks?” Asking these questions eventually brings some to hire a chief information security officer (CISO). The first Fortune 100 CISO role emerged more than 20 years ago, but for most of that time growth in CISOs was slow. CFO Magazine acknowledged that hiring a CISO as recently as 2008 would have been considered “superfluous.”13 In fact, large companies are still in the process of hiring their first CISOs, many just after they suffer major breaches. By the time this book was written, Target had finally hired its first CISO,14 and JPMorgan had done likewise after its breach.15
In addition to merely asking these questions and creating a management-level role for information security, corporations have been showing a willingness, perhaps more slowly than cybersecurity professionals would like, to allocate serious resources to this problem:
Just after the 9/11 attacks the annual cybersecurity market in the United States was $4.1 billion.
By 2015 the information technology budget of the United States Defense Department had grown to $36.7 billion.
This does not include $1.4 billion in startup investments for new cybersecurity-related firms.
Cybersecurity budgets have grown at about twice the rate of IT budgets overall.
So what do organizations do with this new executive visibility and inflow of money to cybersecurity? Mostly, they seek out vulnerabilities, detect attacks, and eliminate compromises. Of course, the size of the attack surface and the sheer volume of vulnerabilities, attacks, and compromises means organizations must make tough choices; not everything gets fixed, stopped, recovered, and so forth. There will need to be some form of acceptable (tolerable) losses. What risks are acceptable is often not documented, and when they are, they are stated in soft, unquantified terms that cannot be used clearly in a calculation to determine if a given expenditure is justified or not.
On the vulnerability side of the equation, this has led to what is called “vulnerability management.” An extension on the attack side is “security event management,” which can generalize to “security management.” More recently there is “threat intelligence” and the emerging phrase “threat management.” While all are within the tactical security solution spaces, the management portion attempts to rank-order what to do next. So how do organizations conduct security management? How do they prioritize the allocation of significant, but limited, resources for an expanding list of vulnerabilities? In other words, how do they make cybersecurity decisions to allocate limited resources in a fight against such uncertain and growing risks?
Certainly a lot of expert intuition is involved, as there always is in management. But for more systematic approaches, the vast majority of organizations concerned with cybersecurity will resort to some sort of “scoring” method that ultimately plots risks on a “matrix.” This is true for both very tactical level issues and strategic, aggregated risks. For example, an application with multiple vulnerabilities could have all of them aggregated into one score. Using similar methods at another scale, groups of applications can then be aggregated into a portfolio and plotted with other portfolios. The aggregation process is typically some form of invented mathematics unfamiliar to actuaries, statisticians, and mathematicians.
In one widely used approach, “likelihood” and “impact” will be rated subjectively, perhaps on a 1 to 5 scale, and those two values will be used to plot a particular risk on a matrix (variously called a “risk matrix,” “heat map,” “risk map,” etc.). The matrix—similar to the one shown in Figure 1.1—is then often further divided into sections of low, medium, and high risk. Events with high likelihood and high impact would be in the upper-right “high risk” corner, while those with low likelihood and low impact would be in the opposite “low risk” corner. The idea is that the higher the score, the more important something is and the sooner you should address it. You may intuitively think such an approach is reasonable, and if you thought so you would be in good company.
Figure 1.1 The familiar risk matrix (a.k.a. heat map or risk map)
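The scoring procedure just described can be made concrete in a few lines. This is a generic sketch of the common pattern only; the multiplication of the two ratings and the thresholds for the low/medium/high bands are illustrative assumptions, since frameworks vary in exactly how they carve up the matrix.

```python
def risk_score(likelihood: int, impact: int) -> tuple[int, str]:
    """Plot a risk on a 5x5 matrix: score = likelihood x impact, both rated 1-5.

    The band thresholds below are illustrative; real frameworks differ.
    """
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("ratings must be integers from 1 to 5")
    score = likelihood * impact
    if score >= 15:
        band = "high"    # upper-right corner of the matrix
    elif score >= 6:
        band = "medium"  # the diagonal middle
    else:
        band = "low"     # lower-left corner
    return score, band

# A vulnerability rated likelihood 4, impact 5 lands in the "high" band:
print(risk_score(4, 5))  # (20, 'high')
```

Note what this sketch makes plain: the inputs are ordinal labels, yet they are multiplied as if they were quantities. That operation, familiar as it looks, is exactly the kind of invented mathematics this book will scrutinize.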
Various versions of scores and risk maps are endorsed and promoted by several major organizations, standards, and frameworks such as the National Institute of Standards and Technology (NIST), the International Standards Organization (ISO), MITRE.org, and the Open Web Application Security Project (OWASP), among others. Most organizations with a cybersecurity function claim at least one of these as part of their framework for assessing risk. In fact, most major software organizations like Oracle, Microsoft, and Adobe rate their vulnerabilities using a NIST-supported scoring system called the “Common Vulnerability Scoring System” (CVSS). Many security solutions also include CVSS ratings, whether vulnerability- or attack-related. While the control recommendations made by many of these frameworks are good, it is how we are guided to prioritize risk management on an enterprise scale that is amplifying risk.
Literally hundreds of security vendors and even standards bodies have come to adopt some form of scoring system. Indeed, scoring approaches and risk matrices are at the core of the security industry’s risk management approaches.
In all cases, they are based on the idea that such methods are of some sufficient benefit. That is, they are assumed to be at least an improvement over not using such a method. As one of the standards organizations has put it, rating risk this way is adequate:
Once the tester has identified a potential risk and wants to figure out how serious it is, the first step is to estimate the likelihood. At the highest level, this is a rough measure of how likely this particular vulnerability is to be uncovered and exploited by an attacker. It is not necessary to be over-precise in this estimate. Generally, identifying whether the likelihood is low, medium, or high is sufficient.
—OWASP20 (emphasis added)
Does this last phrase, stating “low, medium, or high is sufficient,” need to be taken on faith? Considering the critical nature of the decisions such methods will guide, we argue that it should not. This is a testable hypothesis and it actually has been tested in many different ways. The growing trends of cybersecurity attacks alone indicate it might be high time to try something else.
So let’s be clear about our position on current methods: They are a failure. They do not work. A thorough investigation of the research on these methods and decision-making methods in general indicates the following (all of this will be discussed in detail in Chapters 4 and 5):
There is no evidence that the types of scoring and risk matrix methods widely used in cybersecurity improve judgment.
On the contrary, there is evidence these methods add noise and error to the judgment process. One researcher—Tony Cox—goes as far as to say they can be “worse than random.” (Cox’s research and many others will be detailed in Chapter 5.)
Any appearance of “working” is probably a type of “analysis placebo.” That is, a method may make you feel better even though the activity provides no measurable improvement in estimating risks (or even adds error).
There is overwhelming evidence in published research that quantitative, probabilistic methods are effective.
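To make “quantitative, probabilistic methods” less abstract before the detailed treatment in later chapters, here is a minimal Monte Carlo sketch for a single risk: an assumed annual probability of occurrence and an assumed lognormal impact distribution. Every number below is hypothetical, and this is only one simple flavor of such a model; the point is that the output is a loss distribution you can reason and budget against, rather than a color on a matrix.

```python
import math
import random
import statistics

random.seed(42)  # fix the randomness so the illustration is repeatable

# Hypothetical inputs for one risk (not from any real assessment):
p_event = 0.15          # assumed probability the event occurs in a given year
median_loss = 500_000   # assumed median impact in dollars, if it occurs
sigma = 1.0             # assumed spread of the lognormal impact distribution

def simulate_year() -> float:
    """Return one simulated year's loss (zero if the event does not occur)."""
    if random.random() < p_event:
        return random.lognormvariate(math.log(median_loss), sigma)
    return 0.0

trials = [simulate_year() for _ in range(100_000)]
expected_loss = statistics.fmean(trials)
p_big_loss = sum(t > 1_000_000 for t in trials) / len(trials)

print(f"expected annual loss: ${expected_loss:,.0f}")
print(f"chance of losing over $1M in a year: {p_big_loss:.1%}")
```

Unlike a 1-to-5 score, outputs like “expected annual loss” and “chance of losing over $1M” can be compared directly against the cost of a proposed control, which is what makes an expenditure justifiable or not.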
Fortunately, most cybersecurity experts seem willing and able to adopt better quantitative solutions. But common misconceptions held by some—including misconceptions about basic statistics—create some obstacles for adopting better methods.
How cybersecurity assesses risk, and how it determines how much it reduces risk, are the basis for determining where cybersecurity needs to prioritize the use of resources. And if this method is broken—or even just leaves room for significant improvement—then that is the highest-priority problem for cybersecurity to tackle! Clearly, putting cybersecurity risk-assessment and decision-making methods on a solid foundation will affect everything else cybersecurity does. If risk assessment itself is a weakness, then fixing risk assessment is the most important “patch” a cybersecurity professional can implement.
In this book, we will propose a different direction for cybersecurity. Every proposed solution will ultimately be guided by the title of this book. That is, we are solving problems by describing how to measure cybersecurity risk—anything in cybersecurity risk. These measurements will be a tool in the solutions proposed but also reveal how these solutions were selected in the first place. So let us propose that we adopt a new quantitative approach to cybersecurity, built upon the following principles:
It is possible to greatly improve on the existing methods.
Many aspects of existing methods have been measured and found wanting. This is not acceptable for the scale of the problems faced in cybersecurity.
Cybersecurity can use the same quantitative language of risk analysis used in other problems.
As we will see, there are plenty of fields with massive risk, minimal data, and profoundly chaotic actors that are regularly modeled using traditional mathematical methods. We don’t need to reinvent terminology or methods from other fields that also have challenging risk analysis problems.
Methods exist that have already been measured to be an improvement over expert intuition.
This improvement exists even when methods are based, as are the current methods, on only the subjective judgment of cybersecurity experts.
These improved methods are entirely feasible.
We know this because it has already been done. One or both of the authors have had direct experience with using every method described in this book in real-world corporate environments. The methods are currently used by cybersecurity analysts with a variety of backgrounds.
You can improve further on these models with empirical data.
You have more data available than you think from a variety of existing and newly emerging sources. Even when data is scarce, mathematical methods with limited data can still be an improvement on subjective judgment alone. Even the risk analysis methods themselves can be measured and tracked to make continuous improvements.
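As a taste of how limited data can still beat unaided intuition, here is a sketch using Laplace’s rule of succession, a special case of a Bayesian beta-binomial update. It is offered only as an illustration of the principle, not as the book’s specific method, and the sample history is invented. Even a handful of observed years yields a defensible probability estimate that avoids the overconfident extremes of raw frequencies.

```python
def breach_probability(years_observed: int, years_with_breach: int) -> float:
    """Laplace's rule of succession: (k + 1) / (n + 2).

    A uniform prior updated with n observed years, k of which had a breach.
    Unlike a raw frequency k/n, it never returns a dogmatic 0% or 100%.
    """
    return (years_with_breach + 1) / (years_observed + 2)

# Invented example: 5 years of history, 1 year with a material breach.
print(breach_probability(5, 1))  # 2/7, about 0.286

# With no breaches ever observed, the estimate is small but not zero:
print(breach_probability(5, 0))  # 1/7, about 0.143
```

A raw frequency would call the second case a 0% risk, which no prudent analyst believes; this is the kind of small mathematical correction that improves on subjective judgment alone, and later chapters build on far richer versions of the same idea.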
The book is separated into three parts that will make each of these points in multiple ways. Part I will introduce a simple quantitative method that requires little more effort than the current scoring methods, but uses techniques that have shown a measurable improvement in judgment. It will then discuss how to measure the measurement methods themselves. In other words, we will try to answer the question “How do we know it works?” regarding different methods for assessing cybersecurity. The last chapter of Part I will address common objections to quantitative methods, detail the research against scoring methods, and discuss misconceptions and misunderstandings that keep some from adopting better methods.
Part II will move from the “why” we use the methods we use and focus on how to add further improvements to the simple model described in Part I. We will talk about how to add useful details to the simple model, how to refine the ability of cybersecurity experts to assess uncertainties, and how to improve a model with empirical data (even when data seems limited).
Part III will take a step back to the bigger picture of how these methods can be rolled out to the enterprise, how new threats may emerge, and how evolving tools and methods can further improve the measurement of cybersecurity risks. We will try to describe a call to action for the cybersecurity industry as a whole.
But first, our next chapter will build a foundation for how we should understand the term “measurement.” That may seem simple and obvious, but misunderstandings about that term and the methods required to execute it are behind at least some of the resistance to applying measurement to cybersecurity.
Notes
1. Sir Arthur Conan Doyle, “The Boscombe Valley Mystery,” The Strand Magazine.
2. Greg Miller, “FBI Director Warns of Cyberattacks; Other Security Chiefs Say Terrorism Threat Has Altered,” November 14, 2013.
3. Dan Waddell, Director of Government Affairs, National Capital Region, (ISC)2, in an announcement of the Global Information Security Workforce Study (GISWS), May 14, 2015.
4. Stephen Gandel, “Lloyd’s CEO: Cyber Attacks Cost Companies $400 Billion Every Year,” Fortune.com, January 23, 2015.
5. Sue Poremba, “2014 Cyber Security News Was Dominated by the Sony Hack Scandal and Retail Data Breaches,” December 31, 2014.
6. Kevin Haley, “The 2014 Internet Security Threat Report: Year of the Mega Data Breach,” July 24, 2014.
7. Matthew Heller, “Lloyd’s Insurer Says Cyber Risks Too Big to Cover,” CFO.com, February 6, 2015.
8. Jim Bird and Jim Manico, “Attack Surface Analysis Cheat Sheet,” OWASP.org, July 18, 2015.
9. Stephen Northcutt, “The Attack Surface Problem,” SANS.edu, January 7, 2011.
10. Pratyusa K. Manadhata and Jeannette M. Wing, “An Attack Surface Metric,” IEEE Transactions on Software Engineering 37, no. 3 (2010): 371–386.
11. Gartner, “Gartner Says 4.9 Billion Connected ‘Things’ Will Be in Use in 2015” (press release), November 11, 2014.
12. The President’s National Security Telecommunications Advisory Committee, “NSTAC Report to the President on the Internet of Things,” November 19, 2014.
13. Alissa Ponchione, “CISOs: The CFOs of IT,” November 7, 2013.
14. Matthew J. Schwartz, “Target Ignored Data Breach Alarms,” March 14, 2014.
15. Elizabeth Weise, “Chief Information Security Officers Hard to Find—and Harder to Keep,” December 3, 2014.
16. Kelly Kavanagh, “North America Security Market Forecast: 2001–2006,” Gartner, October 9, 2002.
17. Sean Brodrick, “Why 2016 Will Be the Year of Cybersecurity,” Energy & Resources Digest, December 30, 2015.
18. Deborah Gage, “VCs Pour Money into Cybersecurity Startups,” Wall Street Journal, April 19, 2015.
19. Managing Cyber Risks in an Interconnected World: Key Findings from the Global State of Information Security Survey 2015, September 30, 2014.
20. OWASP, “OWASP Risk Rating Methodology,” last modified September 3, 2015.
Success is a function of persistence and doggedness and the willingness to work hard for twenty-two minutes to make sense of something that most people would give up on after thirty seconds.
—Malcolm Gladwell, Outliers1
Before we can discuss how literally anything can be measured in cybersecurity, we need to discuss measurement itself, and we need to address early the objection that some things in cybersecurity are simply not measurable. The fact is that a series of misunderstandings about the methods of measurement, the thing being measured, or even the definition of measurement itself will hold back many attempts to measure.
This chapter will be mostly redundant for readers of the original How to Measure Anything: Finding the Value of “Intangibles” in Business. This chapter has been edited from the original and the examples geared slightly more in the direction of cybersecurity. However, if you have already read the original book, then you might prefer to skip this chapter. Otherwise, you will need to read on to understand these critical basics.
We propose that there are just three reasons why anyone ever thought something was immeasurable—cybersecurity included—and all three are rooted in misconceptions of one sort or another. We categorize these three reasons as concept, object, and method. Various forms of these objections to measurement will be addressed in more detail later in this book (especially in Chapter 5). But for now, let’s review the basics:
Concept of measurement.
The definition of measurement itself is widely misunderstood. If one understands what “measurement” actually means, a lot more things become measurable.
Object of measurement.
The thing being measured is not well defined. Sloppy and ambiguous language gets in the way of measurement.
Methods of measurement.
Many procedures of empirical observation are not well known. If people were familiar with some of these basic methods, it would become apparent that many things thought to be immeasurable are not only measurable but may have already been measured.
A good way to remember these three common misconceptions is by using a mnemonic like “howtomeasureanything.com,” where the c, o, and m in “.com” stand for concept, object, and method. Once we learn that these three objections are misunderstandings of one sort or another, it becomes apparent that everything really is measurable.
As far as the propositions of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality.
—Albert Einstein (1879–1955), German-born theoretical physicist
Although this may seem a paradox, all exact science is based on the idea of approximation. If a man tells you he knows a thing exactly, then you can be safe in inferring that you are speaking to an inexact man.
—Bertrand Russell (1872–1970), British mathematician and philosopher
For those who believe something to be immeasurable, the concept of measurement—or rather the misconception of it—is probably the most important obstacle to overcome. If we incorrectly think that measurement means meeting some nearly unachievable standard of certainty, then few things will be measurable even in the physical sciences.