Description

A ground-shaking exposé on the failure of popular cyber risk management methods. How to Measure Anything in Cybersecurity Risk exposes the shortcomings of current "risk management" practices and offers a series of improvement techniques that help you fill the holes and ramp up security. In his bestselling book How to Measure Anything, author Douglas W. Hubbard opened the business world's eyes to the critical need for better measurement. This book expands upon that premise and draws from The Failure of Risk Management to sound the alarm in the cybersecurity realm. Some of the field's premier risk management approaches actually create more risk than they mitigate, and questionable methods have been duplicated across industries and embedded in products accepted as gospel. This book sheds light on these blatant risks and provides alternative techniques that can help improve your current situation. You'll also learn which approaches are too risky to save and are actually more damaging than a total lack of any security. Dangerous risk management methods abound; there is no industry more critically in need of solutions than cybersecurity. This book provides solutions where they exist and advises when to change tracks entirely.

Discover the shortcomings of cybersecurity's "best practices"

Learn which risk management approaches actually create risk

Improve your current practices with practical alterations

Learn which methods are beyond saving, and worse than doing nothing

Insightful and enlightening, this book will inspire a closer examination of your company's own risk management practices in the context of cybersecurity. The end goal is airtight data protection, so finding cracks in the vault is a positive thing—as long as you get there before the bad guys do. How to Measure Anything in Cybersecurity Risk is your guide to more robust protection through better quantitative processes, approaches, and techniques.



How to Measure Anything in Cybersecurity Risk

DOUGLAS W. HUBBARD

RICHARD SEIERSEN

Cover images: Cyber security lock © Henrik5000/iStockphoto; Cyber eye © kasahasa/iStockphoto; Internet Security concept © bluebay2014/iStockphoto; Background © omergenc/iStockphoto; Abstract business background © Natal'ya Bondarenko/iStockphoto; Abstract business background © procurator/iStockphoto; Cloud Computing © derrrek/iStockphoto Cover design: Wiley

Copyright © 2016 by John Wiley & Sons, Inc. All rights reserved.

Published by John Wiley & Sons, Inc., Hoboken, New Jersey. Published simultaneously in Canada.

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750–8400, fax (978) 646–8600, or on the Web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748–6011, fax (201) 748–6008, or online at http://www.wiley.com/go/permissions.

Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

For general information on our other products and services or for technical support, please contact our Customer Care Department within the United States at (800) 762–2974, outside the United States at (317) 572–3993 or fax (317) 572–4002.

Wiley publishes in a variety of print and electronic formats and by print-on-demand. Some material included with standard print versions of this book may not be included in e-books or in print-on-demand. If this book refers to media such as a CD or DVD that is not included in the version you purchased, you may download this material at http://booksupport.wiley.com. For more information about Wiley products, visit www.wiley.com.

Library of Congress Cataloging-in-Publication Data:

ISBN 978-1-119-08529-4 (Hardcover)

ISBN 978-1-119-22460-0 (ePDF)

ISBN 978-1-119-22461-7 (ePub)

Douglas Hubbard's dedication: To my children, Evan, Madeleine, and Steven, as the continuing sources of inspiration in my life; and to my wife, Janet, for doing all the things that make it possible for me to have time to write a book, and for being the ultimate proofreader.

Richard Seiersen's dedication: To all the ladies in my life: Helena, Kaela, Anika, and Brenna. Thank you for your love and support through the book and life. You make it fun.

Doug and Richard would also like to dedicate this book to the military and law enforcement professionals who specialize in cybersecurity.

CONTENTS

Foreword

Note

Foreword

Acknowledgments

About the Authors

Introduction

Why This Book, Why Now?

What Is This Book About?

What to Expect

Is This Book for Me?

We Need More Than Technology

New Tools for Decision Makers

Our Path Forward

PART I: Why Cybersecurity Needs Better Measurements for Risk

Chapter 1: The One Patch Most Needed in Cybersecurity

The Global Attack Surface

The Cyber Threat Response

A Proposal for Cybersecurity Risk Management

Notes

Chapter 2: A Measurement Primer for Cybersecurity

The Concept of Measurement

The Object of Measurement

The Methods of Measurement

Notes

Chapter 3: Model Now!:

An Introduction to Practical Quantitative Methods for Cybersecurity

A Simple One-for-One Substitution

The Expert as the Instrument

Doing “Uncertainty Math”

Visualizing Risk

Supporting the Decision: A Return on Mitigation

Where to Go from Here

Notes

Chapter 4: The Single Most Important Measurement in Cybersecurity

The Analysis Placebo: Why We Can’t Trust Opinion Alone

How You Have More Data Than You Think

When Algorithms Beat Experts

Tools for Improving the Human Component

Summary and Next Steps

Notes

Chapter 5: Risk Matrices, Lie Factors, Misconceptions, and Other Obstacles to Measuring Risk

Scanning the Landscape: A Survey of Cybersecurity Professionals

What Color Is Your Risk? The Ubiquitous—and Risky—Risk Matrix

Exsupero Ursus and Other Fallacies

Conclusion

Notes

PART II: Evolving the Model of Cybersecurity Risk

Chapter 6: Decompose It:

Unpacking the Details

Decomposing the Simple One-for-One Substitution Model

More Decomposition Guidelines: Clear, Observable, Useful

A Hard Decomposition: Reputation Damage

Conclusion

Notes

Chapter 7: Calibrated Estimates:

How Much Do You Know Now?

Introduction to Subjective Probability

Calibration Exercise

Further Improvements on Calibration

Conceptual Obstacles to Calibration

The Effects of Calibration

Notes

Answers to Trivia Questions for Calibration Exercise

Chapter 8: Reducing Uncertainty with Bayesian Methods

A Major Data Breach Example

A Brief Introduction to Bayes and Probability Theory

Bayes Applied to the Cloud Breach Use Case

Note

Chapter 9: Some Powerful Methods Based on Bayes

Computing Frequencies with (Very) Few Data Points: The Beta Distribution

Decomposing Probabilities with Many Conditions

Reducing Uncertainty Further and When To Do It

Leveraging Existing Resources to Reduce Uncertainty

Wrapping Up Bayes

Notes

PART III: Cybersecurity Risk Management for the Enterprise

Chapter 10: Toward Security Metrics Maturity

Introduction: Operational Security Metrics Maturity Model

Sparse Data Analytics

Functional Security Metrics

Security Data Marts

Prescriptive Analytics

Notes

Chapter 11: How Well Are My Security Investments Working Together?

Addressing BI Concerns

Just the Facts: What Is Dimensional Modeling and Why Do I Need It?

Dimensional Modeling Use Case: Advanced Data Stealing Threats

Modeling People Processes

Chapter 12: A Call to Action:

How to Roll Out Cybersecurity Risk Management

Establishing the CSRM Strategic Charter

Organizational Roles and Responsibilities for CSRM

Getting Audit to Audit

What the Cybersecurity Ecosystem Must Do to Support You

Can We Avoid the Big One?

Appendix A: Selected Distributions

Distribution Name: Triangular

Distribution Name: Binary

Distribution Name: Normal

Distribution Name: Lognormal

Distribution Name: Beta

Distribution Name: Power Law

Distribution Name: Truncated Power Law

Appendix B: Guest Contributors

Appendix B Contents

Aggregating Data Sources for Cyber Insights

Forecasting—and Reducing—Occurrence of Espionage Attacks

Skyrocketing Breaches?

Financial Impact of Breaches

The Flaw of Averages in Cyber Security

Botnets

Password Hacking

Cyber-CI

How Catastrophe Modeling Can Be Applied to Cyber Risk

Notes

Index

EULA

List of Tables

Chapter 3

Table 3.1

Table 3.2

Table 3.3

Table 3.4

Chapter 5

Table 5.1

Table 5.2

Table 5.3

Table 5.4

Chapter 6

Table 6.1

Table 6.2

Chapter 7

Table 7.1

Table 7.2

Table 7.3

Chapter 9

Table 9.1

Table 9.2

Table 9.3

Chapter 11

Table 11.1

Table 11.2

Table 11.3

Table 11.4

Table 11.5

List of Illustrations

Chapter 1

Figure 1.1

The familiar risk matrix (a.k.a. heat map or risk map)

Chapter 3

Figure 3.1

The Lognormal versus Normal Distribution

Figure 3.2

Example of a Loss Exceedance Curve

Figure 3.3

Inherent Risk, Residual Risk, and Risk Tolerance

Chapter 4

Figure 4.1

Duplicate Scenario Consistency: Comparison of First and Second Probability Estimates of Same Scenario by Same Judge

Figure 4.2

Summary of Distribution of Inconsistencies

Chapter 5

Figure 5.1

Variations of NATO Officers’ Interpretations of Probability Phrases

Figure 5.2

Heat Map Theory and Empirical Testing

Figure 5.3

Stats Literacy versus Attitude toward Quantitative Methods

Chapter 6

Figure 6.1

Quarter-to-Quarter Change in Sales for Major Retailers with Major Data Breaches Relative to the Quarter of the Breach

Figure 6.2

Day-to-Day Change in Stock Prices of Firms with Major Data Breaches Relative to Day of the Breach

Figure 6.3

Changes in Stock Prices after a Major Breach for Three Major Retailers Relative to Historical Volatility

Figure 6.4

Changes in Seasonally Adjusted Quarterly Sales after Breach Relative to Historical Volatility

Chapter 7

Figure 7.1

Spin to Win!

Figure 7.2

Distribution of Answers Within 90% CI for 10-Question Calibration Test

Figure 7.3

Calibration Experiment Results for 20 IT Industry Predictions in 1997

Chapter 8

Figure 8.1

A Chain Rule Tree

Figure 8.2

Major Data Breach Decomposition Example with Conditional Probabilities

Chapter 9

Figure 9.1

A Uniform Distribution (a Beta Distribution with alpha=beta=1)

Figure 9.2

A Distribution Starting with a Uniform Prior and Updated with a Sample of 1 Hit and 5 Misses

Figure 9.3

The Per-Year Frequency of Data Breaches in This Industry

Figure 9.4

Example of How a Beta Distribution Changes the Chance of Extreme Losses

Figure 9.5

Example of Regression Model Predicting Judge Estimates

Figure 9.6

Distribution of Investigation Time for Cybersecurity Incidents

Chapter 10

Figure 10.1

Security Analytics Maturity Model

Figure 10.2

Bayes Triplot, beta(4.31, 6.3) prior, s=3, f=7

Figure 10.3

Bayes Triplot, beta(4.31, 6.3) prior, s=5, f=30

Chapter 11

Figure 11.1

The Standard Security Data Mart

Figure 11.2

Vulnerability Mart

Figure 11.3

Expanded Mart with Conforming Dimensions

Figure 11.4

Malware Dimension

Figure 11.5

Days ADST Alive Before Being Found

Figure 11.6

ADST High Level Mart

Figure 11.7

Remediation Workflow Facts

Chapter 12

Figure 12.1

Cybersecurity Risk Management Function

Appendix A

Figure A.1

Triangular Distribution

Figure A.2

Binary Distribution

Figure A.3

Normal Distribution

Figure A.4

Lognormal Distribution

Figure A.5

Beta Distribution

Figure A.6

Power Law Distribution

Figure A.7

Truncated Power Law Distribution

Appendix B

Figure B.1

Probability of an Espionage Incident with Modeled Changes to Training and Operating Systems

Figure B.2

Average Data Breaches per Year by State

Figure B.3

Data Breach Rate by Year as a Function of Number of Employees

Figure B.4

Data on Breached Records: SEC Filings versus Ponemon

Figure B.5

The Distribution of Detection Times for Layers 1 and 2 of a Security System

Figure B.6

SIPs of 10,000 Trials of Layer 1 and Layer 2 Detection Times

Figure B.7

A SIPmath Excel Model to Calculate the Overall Detection Distribution

Figure B.8

The Detection Time SIPs of 10 Independent Botnets

Figure B.9

Simulation of Multiple Botnets

Figure B.10

Probability of Compromise by Company Size and Password Policy

Figure B.11

The AIR Worldwide Catastrophe Modeling Framework


Foreword

Daniel E. Geer, Jr., ScD

Daniel Geer is a security researcher with a quantitative bent. His group at MIT produced Kerberos, and a number of startups later he is still at it—today as chief information security officer at In-Q-Tel. He writes a lot at every length, and sometimes it gets read. He’s an electrical engineer, a statistician, and someone who thinks truth is best achieved by adversarial procedures.

It is my pleasure to recommend How to Measure Anything in Cybersecurity Risk. The topic is nothing if not pressing, and it is one that I have myself been dancing around for some time.1 It is a hard problem, which allows me to quote Secretary of State John Foster Dulles: “The measure of success is not whether you have a tough problem to deal with, but whether it is the same problem you had last year.” At its simplest, this book promises to help you put some old, hard problems behind you.

The practice of cybersecurity is part engineering and part inference. The central truth of engineering is that design pays if and only if the problem statement is itself well understood. The central truth of statistical inference is that all data has bias—the question being whether you can correct for it. Both engineering and inference depend on measurement. When measurement gets good enough, metrics become possible.

I say “metrics” because metrics are derivatives of measurement. A metric encapsulates measurements for the purpose of ongoing decision support. I and you, dear reader, are not in cybersecurity for reasons of science, though those who are in it for science (or philosophy) will also want measurement of some sort to backstop their theorizing. We need metrics derived from solid measurement because the scale of our task compared to the scale of our tools demands force multiplication. In any case, no game play improves without a way to keep score.

Early in the present author’s career, a meeting was held inside a market-maker bank. The CISO, who was an unwilling promotion from Internal Audit, was caustic even by the standards of NYC finance. He began his comments mildly enough:

Are you security people so stupid that you can’t tell me:

How secure am I?

Am I better off than I was this time last year?

Am I spending the right amount of money?

How do I compare to my peers?

What risk transfer options do I have?

Twenty-five years later, those questions remain germane. Answering them, and others, comes only from measurement; that is the “Why?” of this book.

Yet even if we all agree on “Why?,” the real value of this book is not “Why?” but “How?”: how to measure and then choose among methods, how to do that both consistently and repeatedly, and how to move up from one method to a better one as your skill improves.

Some will say that cybersecurity is impossible if you face a sufficiently skilled opponent. That’s true. It is also irrelevant. Our opponents by and large pick the targets that maximize their return on their investment, which is a polite way of saying that you may not be able to thwart the most singularly determined opponent for whom cost is no object, but you can sure as the world make other targets more attractive than you are. As I said, no game play improves without a way to keep score. That is what this book offers you—a way to improve your game.

This all requires numbers because numbers are the only input to both engineering and inference. Adjectives are not. Color codes are not. If you have any interest in taking care of yourself, of standing on your own two feet, of knowing where you are, then you owe it to yourself to exhaust this book. Its writing is clear, its pedagogy is straightforward, and its downloadable Excel spreadsheets leave no excuse for not trying.

Have I made the case? I hope so.

Note

1. Daniel Geer, Jr., Kevin Soo Hoo, and Andrew Jaquith, "Information Security: Why the Future Belongs to the Quants," IEEE Security & Privacy 1, no. 4 (July/August 2003): 32–40, geer.tinho.net/ieee/ieee.sp.geer.0307.pdf.

Foreword

Stuart McClure

Stuart McClure is the CEO of Cylance, former global CTO of McAfee, and founding author of the Hacking Exposed series.

My university professors always sputtered the age-old maxim in class: “You can’t manage what you cannot measure.” And while my perky, barely-out-of-teenage-years ears absorbed the claim aurally, my brain never really could process what it meant. Sure, my numerous computer science classes kept me chasing an infinite pursuit of improving mathematical algorithms in software programs, but little did I know how to really apply these quantitative efforts to the management of anything, much less cyber.

So I bounded forward in my career in IT and software programming, looking for an application of my unique talents. I never found cyber measurement all that compelling until I found cybersecurity. What motivated me to look at a foundational way to measure what I did in cybersecurity was the timeless question that I and many of you get almost daily: “Are we secure from attack?”

The easy answer to such a trite yet completely understandable question is “No. Security is never 100%.” But some of you have answered the same way I have done from time to time, being exhausted by the inane query, with “Yes. Yes we are.” Why? Because we know a ridiculous question should be given an equally ridiculous answer. For how can we know? Well, you can’t—without metrics.

As my cybersecurity career developed with InfoWorld and Ernst & Young, while founding the company Foundstone, taking senior executive roles in its acquiring company, McAfee, and now starting Cylance, I have developed a unique appreciation for the original professorial claim that you really cannot manage what you cannot measure. While an objective metric may be mythical, a subjective and localized measurement of your current risk posture and where you stand relative to your past and your peers is very possible.

Measuring the cyber risk present at an organization is nontrivial, and when you set the requirement of delivering on quantitative measurements rather than subjective and qualitative measurements, it becomes almost beyond daunting.

The real questions for all of us security practitioners are ultimately “Where do we start? How do we go about measuring cybersecurity’s effectiveness and return?” The only way to begin to answer those questions is through quantitative metrics. And until now, the art of cybersecurity measurement has been elusive. I remember the first time someone asked me my opinion on a security-risk metrics program, I answered something to the effect of, “It’s impossible to measure something you cannot quantify.”

What the authors of this book have done is begin to define a framework and a set of algorithms and metrics to do exactly what the industry has long thought impossible, or at least futile: measure security risk. We may not be perfect in our measurement, but we can define a set of standard metrics that are defensible and quantifiable, and then use those same metrics day in and day out to ensure that things are improving. And that is the ultimate value of defining and executing on a set of security metrics. You don’t need to be perfect; all you need to do is start somewhere and measure yourself relative to the day before.

Acknowledgments

We thank these people for their help as we wrote this book:

Jack Jones

Jack Freund

Jim Lipkis

Thomas Lee

Christopher “Kip” Bohn

Scott Stransky

Tomas Girnius

Jay Jacobs

Sam Savage

Tony Cox

Michael Murray

Patrick Heim

Cheng-Ping Li

Michael Sardaryzadeh

Stuart McClure

Rick Rankin

Anton Mobley

Vinnie Liu

SIRA.org Team

Dan Geer

Dan Rosenberg

A very special thanks to Bonnie Norman and Steve Abrahamson for providing additional editing.

About the Authors

Douglas Hubbard is the creator of the Applied Information Economics method and the founder of Hubbard Decision Research. He is the author of one of the best-selling business statistics books of all time, How to Measure Anything: Finding the Value of “Intangibles” in Business. He is also the author of The Failure of Risk Management: Why It’s Broken and How to Fix It, and Pulse: The New Science of Harnessing Internet Buzz to Track Threats and Opportunities. He has sold more than 100,000 copies of his books in eight different languages, and his books are used in courses at many major universities. His consulting experience in quantitative decision analysis and measurement problems totals over 27 years and spans many industries including pharmaceuticals, insurance, banking, utilities, cybersecurity, interventions in developing economies, mining, federal and state government, entertainment media, military logistics, and manufacturing. He is also published in several periodicals including Nature, The IBM Journal of R&D, Analytics, OR/MS Today, InformationWeek, and CIO Magazine.

Richard Seiersen is a technology executive with nearly 20 years of experience in information security, risk management, and product development. Currently he is the general manager of cybersecurity and privacy for GE Healthcare. Many years ago, prior to his life in technology, he was a classically trained musician—guitar, specifically. Richard now lives with his family of string players in the San Francisco Bay Area. In his limited spare time he is slowly working through his MS in predictive analytics at Northwestern. He should be done just in time to retire. He thinks that will be the perfect time to take up classical guitar again.

Introduction

Why This Book, Why Now?

This book is the first of a series of spinoffs from Douglas Hubbard’s successful first book, How to Measure Anything: Finding the Value of “Intangibles” in Business. For future books in this franchise, we were considering titles such as How to Measure Anything in Project Management or industry-specific books like How to Measure Anything in Healthcare. All we had to do was pick a good idea from a long list of possibilities.

Cybersecurity risk seemed like an ideal first book for this new series. It is extremely topical and filled with measurement challenges that may often seem impossible. We also believe it is an extremely important topic for personal reasons (as we are credit card users and have medical records, client data, intellectual property, and so on) as well as for the economy as a whole.

Another factor in choosing a topic was finding the right co-author. Because Doug Hubbard—a generalist in measurement methods—would not be a specialist in any of the particular potential spinoff topics, he planned to find a co-author who could write authoritatively on the topic. Hubbard was fortunate to find an enthusiastic volunteer in Richard Seiersen—someone with years of experience in the highest levels of cybersecurity management with some of the largest organizations.

So, with a topical but difficult measurement subject, a broad and growing audience, and a good co-author, cybersecurity seemed like an ideal fit.

What Is This Book About?

Even though this book focuses on cybersecurity risk, this book still has a lot in common with the original How to Measure Anything book, including:

Making better decisions when you are significantly uncertain about the present and future, and

Reducing that uncertainty even when data seems unavailable or the targets of measurement seem ambiguous and intangible.

This book in particular offers an alternative to a set of deeply rooted risk assessment methods that are now widely used in cybersecurity but have no basis in the mathematics of risk or the scientific method. We argue that these methods impede decisions about a subject of growing criticality. We also argue that methods based on real evidence of improving decisions are not only practical but already have been applied to a wide variety of equally difficult problems, including cybersecurity itself. We will show that we can start at a simple level and then evolve to whatever level is required while avoiding problems inherent to "risk matrices" and "risk scores." So there is no reason not to adopt better methods immediately.

What to Expect

You should expect a gentle introduction to measurably better decision making—specifically, improvement in high-stakes decisions that have a lot of uncertainty and where, if you are wrong, your decisions could lead to catastrophe. We think security embodies all of these concerns.

We don’t expect our readers to be risk management experts or cybersecurity experts. The methods we apply to security can be applied to many other areas. Of course, we do hope it will make those who work in the field of cybersecurity better defenders and strategists. We also hope it will make the larger set of leaders more conscious of security risks in the process of becoming better decision makers.

Is This Book for Me?

If you really want to be sure this book is for you, here are the specific personas we are targeting:

You are a decision maker looking to improve—that is, measurably improve—your high-stakes decision making.

You are a security professional looking to become more strategic in your fight against the bad guy.

You are neither of the above. Instead, you have an interest in understanding more about cybersecurity and/or risk management using readily accessible quantitative techniques.

If you are a hard-core quant, consider skipping the purely quant parts. If you are a hard-core hacker, consider skipping the purely security parts. That said, we will often have a novel perspective, or “epiphanies of the obvious,” on topics you already know well. Read as you see fit.

We Need More Than Technology

We need to lose less often in the fight against the bad guys. Or, at least, lose more gracefully and recover quickly. Many feel that this requires better technology. We clamor for more innovation from our vendors in the security space even though breach frequency has not been reduced. To effectively battle security threats, we think there is something at least as important as innovative technology, if not more so. We believe that "something" must include a better way to think quantitatively about risk.

New Tools for Decision Makers

We need decision makers who consistently make better choices through better analysis. We also need decision makers who know how to deftly handle uncertainty in the face of looming catastrophe. Parts of this solution are sometimes referred to with current trendy terms like “predictive analytics,” but more broadly this includes all of decision science or decision analysis and even properly applied statistics.

Our Path Forward

Part I of this book sets the stage for reasoning about uncertainty in security. We will come to terms on things like security, uncertainty, measurement, and risk management. We also argue against toxic misunderstandings of these terms and explain why we need a better approach to measuring cybersecurity risk and, for that matter, measuring the performance of cybersecurity risk analysis itself. We will also introduce a very simple quantitative method that could serve as a starting point for anyone, no matter how averse they may be to complexity.

Part II of this book will delve further into evolutionary steps we can take with a very simple quantitative model. We will describe how to add further complexity to a model and how to use even minimal amounts of data to improve those models.

Last, in Part III we will describe what is needed to implement these methods in the organization. We will also talk about the implications of this book for the entire cybersecurity “ecosystem,” including standards organizations and vendors.

PART I: Why Cybersecurity Needs Better Measurements for Risk

Chapter 1: The One Patch Most Needed in Cybersecurity

There is nothing more deceptive than an obvious fact.

—Sherlock Holmes

The Boscombe Valley Mystery1

In the days after September 11, 2001, increased security meant overhauled screening at the airport, no-fly lists, air marshals, and attacking terrorist training camps. But just 12 years later, the FBI was emphasizing the emergence of a very different concern: the “cyber-based threat.” In 2013, FBI director James B. Comey, testifying before the Senate Committee on Homeland Security and Governmental Affairs, stated the following:

. . .we anticipate that in the future, resources devoted to cyber-based threats will equal or even eclipse the resources devoted to non-cyber based terrorist threats.

—FBI director James B. Comey, November 14, 20132

This is a shift in priorities we cannot overstate. How many organizations in 2001, preparing for what they perceived as the key threats at the time, would have even imagined that cyber threats would have not only equaled but exceeded more conventional terrorist threats? Yet as we write this book, it is accepted as our new “new normal.”

Admittedly, those outside of the world of cybersecurity may think the FBI is sowing seeds of Fear, Uncertainty, and Doubt (FUD) to some political end. But it would seem that there are plenty of sources of FUD, so why pick cyber threats in particular? Of course, to cybersecurity experts this is a non-epiphany. We are under attack and it will certainly get worse before it gets better.

Yet resources are limited. Therefore, the cybersecurity professional must effectively determine a kind of “return on risk mitigation.” Whether or not such a return is explicitly calculated, we must evaluate whether a given defense strategy is a better use of resources than another. In short, we have to measure and monetize risk and risk reduction. What we need is a “how to” book for professionals in charge of allocating limited resources to addressing ever-increasing cyber threats, and leveraging those resources for optimum risk reduction. This includes methods for:

How to measure risk assessment methods themselves.

How to measure reduction in risk from a given defense, control, mitigation, or strategy (using some of the better-performing methods as identified in the first bullet).

How to continuously and measurably improve on the implemented methods, using more advanced methods that the reader may employ as he or she feels ready.
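To make the idea of a "return on risk mitigation" concrete before we get there, here is a minimal sketch in Python. Every number in it is a hypothetical placeholder, and the simple "probability times impact" expected loss is only a stand-in for the richer models developed in later chapters.

def expected_annual_loss(probability_per_year, impact_if_event_occurs):
    # Expected loss: chance the event occurs in a year times its cost.
    return probability_per_year * impact_if_event_occurs

# Hypothetical breach scenario, before and after a proposed control.
eal_before = expected_annual_loss(0.10, 5_000_000)  # 10% chance of a $5M loss
eal_after = expected_annual_loss(0.04, 5_000_000)   # control cuts the chance to 4%
control_cost = 150_000

risk_reduction = eal_before - eal_after  # $300,000 per year in expected loss
return_on_mitigation = (risk_reduction - control_cost) / control_cost

print(f"Expected annual loss before: ${eal_before:,.0f}")
print(f"Expected annual loss after: ${eal_after:,.0f}")
print(f"Return on mitigation: {return_on_mitigation:.0%}")

Even this toy version forces the two questions the rest of the book takes seriously: where does the 10% come from, and how do we know the control really moves it to 4%?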

Let’s be explicit about what this book isn’t. This is not a technical security book—if you’re looking for a book on “ethical hacking,” then you have certainly come to the wrong place. There will be no discussions about how to execute stack overflows, defeat encryption algorithms, or execute SQL injections. If and when we do discuss such things, it’s only in the context of understanding them as parameters in a risk model.

But don’t be disappointed if you’re a technical person. We will certainly be getting into some analytic nitty-gritty as it applies to security. This is from the perspective of an analyst or leader trying to make better bets in relation to possible future losses. For now, let’s review the scale of the challenge we are dealing with and how we deal with it currently, then outline a direction for the improvements laid out in the rest of the book.

The Global Attack Surface

Nation-states, organized crime, hacktivist entities, and insider threats want our secrets, our money, and our intellectual property, and some want our complete demise. Sound dramatic? If we understand the FBI correctly, they expect to spend as much or more on protecting us from cyber threats than from those who would turn airplanes, cars, pressure cookers, and even people into bombs. And if you are reading this book, you probably already accept the gravity of the situation. But we should at least spend some time emphasizing this point if for no other reason than to help those who already agree with this point make the case to others.

The Global Information Security Workforce Study (GISWS)—a survey conducted in 2015 of more than 14,000 security professionals, including 1,800 federal employees—showed that we are not just taking a beating; we are backpedaling:

When we consider the amount of effort dedicated over the past two years to furthering the security readiness of federal systems and the nation’s overall security posture, our hope was to see an obvious step forward. The data shows that, in fact, we have taken a step back.

—(ISC)2 on the announcement of the GISWS, 20153

Indeed, other sources of data support this dire conclusion. The UK insurance market, Lloyd’s of London, estimated that cyberattacks cost businesses $400 billion globally per year.4 In 2014, one billion records were compromised. This caused Forbes magazine to refer to 2014 as “The Year of the Data Breach.”5,6 Unfortunately, identifying 2014 as the year of the data breach may still prove to be premature. It could easily get worse.

In fact, the founder and head of XL Catlin, the largest insurer in Lloyd’s of London, said cybersecurity is the “biggest, most systemic risk” he has seen in his 42 years in insurance.7 Potential weaknesses in widely used software; interdependent network access between companies, vendors, and clients; and the possibility of large coordinated attacks can affect much more than even one big company like Anthem, Target, or Sony. XL Catlin believes it is possible that there could be a simultaneous impact on multiple major organizations affecting the entire economy. They feel that if there are multiple major claims in a short period of time, this is a bigger burden than insurers can realistically cover.

What is causing such a dramatic rise in breach and the anticipation of even more breaches? It is called attack surface. "Attack surface" is usually defined as a kind of total of all the exposures of an information system. It exposes value to untrusted sources. You don't need to be a security professional to get this. Your home, your bank account, your family, and your identity all have an attack surface. If you received identity theft protection as a federal employee, or a customer of Home Depot, Target, Anthem, or Neiman Marcus, then you received that courtesy of an attack surface. These companies put the digital you within reach of criminals. Directly or indirectly, the Internet facilitated this. This evolution happened quickly and without the knowledge or direct permission of all interested parties (organizations, employees, customers, or citizens).

Various definitions of the phrase consider the ways into and out of a system, the defenses of that system, and sometimes the value of data in that system.8,9 Some definitions of attack surface refer to the attack surface of a system and some refer to the attack surface of a network, but either might be too narrow even for a given firm. We might also define an “Enterprise Attack Surface” that not only consists of all systems and networks in that organization but also the exposure of third parties. This includes everyone in the enterprise “ecosystem” including major customers, vendors, and perhaps government agencies. (Recall that in the case of the Target breach, the exploit came from an HVAC vendor.)

Perhaps the total attack surface that concerns all citizens, consumers, and governments is a kind of "global attack surface": the total set of cybersecurity exposures—across all systems, networks, and organizations—we all face just by shopping with a credit card, browsing online, receiving medical benefits, or even just being employed. This global attack surface is a macro-level phenomenon driven by at least four macro-level causes of growth: increasing numbers of users worldwide, an increasing variety of uses per user, growth in discovered and exploited vulnerabilities per person per use, and organizations becoming more networked with each other, resulting in "cascade failure" risks.

The increasing number of persons on the Internet.

Internet users worldwide grew by a factor of 6 from 2001 to 2014 (half a billion to 3 billion). It may not be obvious that the number of users is a dimension in some attack surfaces, but some measures of attack surface also include the value of a target, which would be partly a function of the number of users (e.g., gaining access to more personal records).10 Also, on a global scale, it acts as an important multiplier on the following dimensions.

The number of uses per person for online resources.

The varied uses of the Internet, total time spent on the Internet, use of credit cards, and various services that require the storage of personal data and automated transactions are growing. Per person. Worldwide. For example, since 2001 the number of websites alone has grown at a rate five times faster than the number of users—a billion total by 2014. Connected devices constitute another potential way for an individual to use the Internet, even without their active involvement. One forecast regarding the "Internet of Things" (IoT) was made by Gartner, Inc.: "4.9 billion connected things will be in use in 2015, up 30 percent from 2014, and will reach 25 billion by 2020."11 A key concern here is the lack of consistent security in designs. The National Security Telecommunications Advisory Committee determined that "there is a small—and rapidly closing—window to ensure that the IoT is adopted in a way that maximizes security and minimizes risk. If the country fails to do so, it will be coping with the consequences for generations."12

Vulnerabilities increase.

A natural consequence of the previous two factors is that the number of ways such uses can be exploited increases. This is due to the increase in systems and devices with potential vulnerabilities, even if vulnerabilities per system or device do not increase. At least the number of discovered vulnerabilities will increase, partly because the number of people actively seeking and exploiting vulnerabilities increases. And more of those will be from well-organized and well-funded teams of individuals working for national sponsors.

The possibility of a major breach "cascade."

More large organizations are finding efficiencies from being more connected. The fact that Target was breached through a vendor raises the possibility of the same attack affecting multiple organizations. Organizations like Target have many vendors, several of which in turn have multiple large corporate and government clients. Mapping this cyber-ecosystem of connections would be almost impossible, since it would certainly require all these organizations to divulge sensitive information. So the kind of publicly available metrics we have for the previous three factors in this list do not exist for this one. But we suspect most large organizations could just be one or two degrees of separation from each other.

It seems reasonable that, of these four trends, the earlier ones magnify the later ones. If so, the risk of the major breach "cascade" event could grow faster than the growth rate of the first couple of trends.

Our naïve, and obvious, hypothesis? Attack surface and breach are correlated. If this holds true, then we haven’t seen anything yet. We are heading into a historic growth in attack surface, and hence breach, which will eclipse what has been seen to date. Given all this, the FBI director’s comments and the statements of Lloyd’s of London insurers cannot be dismissed as alarmist. Even with the giant breaches like Target, Anthem, and Sony behind us, we believe we haven’t seen “The Big One” yet.

The Cyber Threat Response

It’s a bit of a catch-22 in that success in business is highly correlated with exposure. Banking, buying, getting medical attention, and even being employed is predicated on exposure. You need to expose data to transact business, and if you want to do more business, that means more attack surface. When you are exposed, you can be seen and affected in unexpected and malicious ways. In defense, cybersecurity professionals try to “harden” systems—that is, removing all nonessentials, including programs, users, data, privileges, and vulnerabilities. Hardening shrinks, but does not eliminate, attack surface. Yet even this partial reduction in attack surface requires significant resources, and the trends show that the resource requirements will grow.

Generally, executive-level attention on cybersecurity risks has increased, and attention is followed by resources. The boardroom is beginning to ask questions like "Will we be breached?" or "Are we better than Sony?" or "Did we spend enough on the right risks?" Asking these questions eventually brings some to hire a chief information security officer (CISO). The first Fortune 100 CISO role emerged more than 20 years ago, but for most of that time growth in CISOs was slow. CFO Magazine acknowledged that hiring a CISO as recently as 2008 would have been considered "superfluous."13 In fact, large companies are still in the process of hiring their first CISOs, many just after they suffer major breaches. By the time this book was written, Target had finally hired its first CISO,14 and JPMorgan had done likewise after its breach.15

In addition to merely asking these questions and creating a management-level role for information security, corporations have been showing a willingness, perhaps more slowly than cybersecurity professionals would like, to allocate serious resources to this problem:

Just after the 9/11 attacks the annual cybersecurity market in the United States was $4.1 billion.16

By 2015 the information technology budget of the United States Defense Department had grown to $36.7 billion.17

This does not include $1.4 billion in startup investments for new cybersecurity-related firms.18

Cybersecurity budgets have grown at about twice the rate of IT budgets overall.19

So what do organizations do with this new executive visibility and inflow of money to cybersecurity? Mostly, they seek out vulnerabilities, detect attacks, and eliminate compromises. Of course, the size of the attack surface and the sheer volume of vulnerabilities, attacks, and compromises means organizations must make tough choices; not everything gets fixed, stopped, recovered, and so forth. There will need to be some form of acceptable (tolerable) losses. Which risks are acceptable is often not documented, and when it is, the risks are stated in soft, unquantified terms that cannot be used clearly in a calculation to determine whether a given expenditure is justified.

On the vulnerability side of the equation, this has led to what is called “vulnerability management.” An extension on the attack side is “security event management,” which can generalize to “security management.” More recently there is “threat intelligence” and the emerging phrase “threat management.” While all are within the tactical security solution spaces, the management portion attempts to rank-order what to do next. So how do organizations conduct security management? How do they prioritize the allocation of significant, but limited, resources for an expanding list of vulnerabilities? In other words, how do they make cybersecurity decisions to allocate limited resources in a fight against such uncertain and growing risks?

Certainly a lot of expert intuition is involved, as there always is in management. But for more systematic approaches, the vast majority of organizations concerned with cybersecurity will resort to some sort of “scoring” method that ultimately plots risks on a “matrix.” This is true for both very tactical level issues and strategic, aggregated risks. For example, an application with multiple vulnerabilities could have all of them aggregated into one score. Using similar methods at another scale, groups of applications can then be aggregated into a portfolio and plotted with other portfolios. The aggregation process is typically some form of invented mathematics unfamiliar to actuaries, statisticians, and mathematicians.

In one widely used approach, “likelihood” and “impact” will be rated subjectively, perhaps on a 1 to 5 scale, and those two values will be used to plot a particular risk on a matrix (variously called a “risk matrix,” “heat map,” “risk map,” etc.). The matrix—similar to the one shown in Figure 1.1—is then often further divided into sections of low, medium, and high risk. Events with high likelihood and high impact would be in the upper-right “high risk” corner, while those with low likelihood and low impact would be in the opposite “low risk” corner. The idea is that the higher the score, the more important something is and the sooner you should address it. You may intuitively think such an approach is reasonable, and if you thought so you would be in good company.

Figure 1.1 The familiar risk matrix (a.k.a. heat map or risk map)
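For readers who have never built one of these matrices, the sketch below (in Python) shows the general shape of the approach being described: an ordinal likelihood score and an ordinal impact score are multiplied and bucketed into low, medium, or high. The 1-to-5 scales and the bucket thresholds are our own assumptions for illustration; real frameworks vary in their exact cutoffs, but the arithmetic-on-ordinal-scales pattern is the one critiqued in Chapters 4 and 5.

def risk_matrix_rating(likelihood, impact):
    # Both inputs are subjective 1-5 ordinal scores; their product ranges 1-25.
    score = likelihood * impact
    if score >= 15:      # assumed threshold for the "high risk" corner
        return "high"
    elif score >= 6:     # assumed threshold for the middle band
        return "medium"
    return "low"

print(risk_matrix_rating(likelihood=4, impact=5))  # "high"
print(risk_matrix_rating(likelihood=2, impact=2))  # "low"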

Various versions of scores and risk maps are endorsed and promoted by several major organizations, standards, and frameworks such as the National Institute of Standards and Technology (NIST), the International Organization for Standardization (ISO), MITRE.org, and the Open Web Application Security Project (OWASP), among others. Most organizations with a cybersecurity function claim at least one of these as part of their framework for assessing risk. In fact, most major software organizations like Oracle, Microsoft, and Adobe rate their vulnerabilities using a NIST-supported scoring system called the "Common Vulnerability Scoring System" (CVSS). Many security solutions also include CVSS ratings, whether they relate to vulnerabilities or to attacks. While the control recommendations made by many of these frameworks are good, it's how we are guided to prioritize risk management on an enterprise scale that is amplifying risk.

Literally hundreds of security vendors and even standards bodies have come to adopt some form of scoring system. Indeed, scoring approaches and risk matrices are at the core of the security industry’s risk management approaches.

In all cases, they are based on the idea that such methods are of some sufficient benefit. That is, they are assumed to be at least an improvement over not using such a method. As one of the standards organizations has put it, rating risk this way is adequate:

Once the tester has identified a potential risk and wants to figure out how serious it is, the first step is to estimate the likelihood. At the highest level, this is a rough measure of how likely this particular vulnerability is to be uncovered and exploited by an attacker. It is not necessary to be over-precise in this estimate. Generally, identifying whether the likelihood is low, medium, or high is sufficient.

—OWASP20 (emphasis added)

Does this last phrase, stating “low, medium, or high is sufficient,” need to be taken on faith? Considering the critical nature of the decisions such methods will guide, we argue that it should not. This is a testable hypothesis and it actually has been tested in many different ways. The growing trends of cybersecurity attacks alone indicate it might be high time to try something else.

So let’s be clear about our position on current methods: They are a failure. They do not work. A thorough investigation of the research on these methods and decision-making methods in general indicates the following (all of this will be discussed in detail in Chapters 4 and 5):

There is no evidence that the types of scoring and risk matrix methods widely used in cybersecurity improve judgment.

On the contrary, there is evidence these methods add noise and error to the judgment process. One researcher—Tony Cox—goes as far as to say they can be “worse than random.” (Cox’s research and many others will be detailed in Chapter 5.)

Any appearance of “working” is probably a type of “analysis placebo.” That is, a method may make you feel better even though the activity provides no measurable improvement in estimating risks (or even adds error).

There is overwhelming evidence in published research that quantitative, probabilistic methods are effective.

Fortunately, most cybersecurity experts seem willing and able to adopt better quantitative solutions. But common misconceptions held by some—including misconceptions about basic statistics—create some obstacles for adopting better methods.

How cybersecurity assesses risk, and how it determines how much it reduces risk, are the basis for determining where cybersecurity needs to prioritize the use of resources. And if this method is broken—or even just leaves room for significant improvement—then that is the highest-priority problem for cybersecurity to tackle! Clearly, putting cybersecurity risk-assessment and decision-making methods on a solid foundation will affect everything else cybersecurity does. If risk assessment itself is a weakness, then fixing risk assessment is the most important "patch" a cybersecurity professional can implement.

A Proposal for Cybersecurity Risk Management

In this book, we will propose a different direction for cybersecurity. Every proposed solution will ultimately be guided by the title of this book. That is, we are solving problems by describing how to measure cybersecurity risk—anything in cybersecurity risk. These measurements will be a tool in the solutions proposed but also reveal how these solutions were selected in the first place. So let us propose that we adopt a new quantitative approach to cybersecurity, built upon the following principles:

It is possible to greatly improve on the existing methods.

Many aspects of existing methods have been measured and found wanting. This is not acceptable for the scale of the problems faced in cybersecurity.

Cybersecurity can use the same quantitative language of risk analysis used in other problems.

As we will see, there are plenty of fields with massive risk, minimal data, and profoundly chaotic actors that are regularly modeled using traditional mathematical methods. We don’t need to reinvent terminology or methods from other fields that also have challenging risk analysis problems.

Methods exist that have already been measured to be an improvement over expert intuition.

This improvement exists even when methods are based, as are the current methods, on only the subjective judgment of cybersecurity experts.

These improved methods are entirely feasible.

We know this because it has already been done. One or both of the authors have had direct experience with using every method described in this book in real-world corporate environments. The methods are currently used by cybersecurity analysts with a variety of backgrounds.

You can improve further on these models with empirical data.

You have more data available than you think from a variety of existing and newly emerging sources. Even when data is scarce, mathematical methods with limited data can still be an improvement on subjective judgment alone. Even the risk analysis methods themselves can be measured and tracked to make continuous improvements.
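As a preview of the last point, here is a minimal Python sketch of the kind of beta-distribution updating covered in Chapter 9, using the "1 hit and 5 misses" sample shown in Figure 9.2. Reading the sample as six observed company-years with one breach is our own illustrative assumption.

from scipy.stats import beta

prior_alpha, prior_beta = 1, 1  # uniform prior: maximum initial uncertainty
hits, misses = 1, 5             # e.g., one breach observed in six company-years

# Bayesian update: add observed hits and misses to the beta parameters.
posterior = beta(prior_alpha + hits, prior_beta + misses)

print(f"Mean annual breach frequency: {posterior.mean():.2f}")
print(f"90% credible interval: {posterior.ppf(0.05):.2f} to {posterior.ppf(0.95):.2f}")

Even a single observation visibly narrows the range, which is the sense in which, as we will argue, sparse data can still improve on unaided judgment.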

The book is separated into three parts that will make each of these points in multiple ways. Part I will introduce a simple quantitative method that requires little more effort than the current scoring methods, but uses techniques that have shown a measurable improvement in judgment. It will then discuss how to measure the measurement methods themselves. In other words, we will try to answer the question “How do we know it works?” regarding different methods for assessing cybersecurity. The last chapter of Part I will address common objections to quantitative methods, detail the research against scoring methods, and discuss misconceptions and misunderstandings that keep some from adopting better methods.

Part II will move from the “why” we use the methods we use and focus on how to add further improvements to the simple model described in Part I. We will talk about how to add useful details to the simple model, how to refine the ability of cybersecurity experts to assess uncertainties, and how to improve a model with empirical data (even when data seems limited).

Part III will take a step back to the bigger picture of how these methods can be rolled out to the enterprise, how new threats may emerge, and how evolving tools and methods can further improve the measurement of cybersecurity risks. We will try to describe a call to action for the cybersecurity industry as a whole.

But first, our next chapter will build a foundation for how we should understand the term “measurement.” That may seem simple and obvious, but misunderstandings about that term and the methods required to execute it are behind at least some of the resistance to applying measurement to cybersecurity.

Notes

1. Sir Arthur Conan Doyle, "The Boscombe Valley Mystery," The Strand Magazine, 1891.

2. Greg Miller, "FBI Director Warns of Cyberattacks; Other Security Chiefs Say Terrorism Threat Has Altered," Washington Post, November 14, 2013, www.washingtonpost.com/world/national-security/fbi-director-warns-of-cyberattacks-other-security-chiefs-say-terrorism-threat-has-altered/2013/11/14/24f1b27a-4d53-11e3-9890-a1e0997fb0c0_story.html.

3. Dan Waddell, Director of Government Affairs, National Capital Regions of (ISC)2, in an announcement of the Global Information Security Workforce Study (GISWS), www.isc2.org, May 14, 2015.

4. Stephen Gandel, "Lloyd's CEO: Cyber Attacks Cost Companies $400 Billion Every Year," Fortune.com, January 23, 2015, http://fortune.com/2015/01/23/cyber-attack-insurance-lloyds/.

5. Sue Poremba, "2014 Cyber Security News Was Dominated by the Sony Hack Scandal and Retail Data Breaches," Forbes Magazine, December 31, 2014, www.forbes.com/sites/sungardas/2014/12/31/2014-cyber-security-news-was-dominated-by-the-sony-hack-scandal-and-retail-data-breaches/#1c79203e4910.

6. Kevin Haley, "The 2014 Internet Security Threat Report: Year of the Mega Data Breach," Forbes Magazine, July 24, 2014, www.forbes.com/sites/symantec/2014/07/24/the-2014-internet-security-threat-report-year-of-the-mega-data-breach/#724e90a01a98.

7. Matthew Heller, "Lloyd's Insurer Says Cyber Risks Too Big to Cover," CFO.com, February 6, 2015, ww2.cfo.com/risk-management/2015/02/lloyds-insurer-says-cyber-risks-big-cover/.

8. Jim Bird and Jim Manico, "Attack Surface Analysis Cheat Sheet," OWASP.org, July 18, 2015, www.owasp.org/index.php/Attack_Surface_Analysis_Cheat_Sheet.

9. Stephen Northcutt, "The Attack Surface Problem," SANS.edu, January 7, 2011, www.sans.edu/research/security-laboratory/article/did-attack-surface.

10. Pratyusa K. Manadhata and Jeannette M. Wing, "An Attack Surface Metric," IEEE Transactions on Software Engineering 37, no. 3 (2010): 371–386.

11. Gartner, "Gartner Says 4.9 Billion Connected 'Things' Will Be in Use in 2015" (press release), November 11, 2014, www.gartner.com/newsroom/id/2905717.

12. The President's National Security Telecommunications Advisory Committee, "NSTAC Report to the President on the Internet of Things," November 19, 2014, www.dhs.gov/sites/default/files/publications/IoT%20Final%20Draft%20Report%2011-2014.pdf.

13. Alissa Ponchione, "CISOs: The CFOs of IT," CFO, November 7, 2013, ww2.cfo.com/technology/2013/11/cisos-cfos/.

14. Matthew J. Schwartz, "Target Ignored Data Breach Alarms," Dark Reading (blog), InformationWeek, March 14, 2014, www.darkreading.com/attacks-and-breaches/target-ignored-data-breach-alarms/d/d-id/1127712.

15. Elizabeth Weise, "Chief Information Security Officers Hard to Find—and Harder to Keep," USA Today, December 3, 2014, www.usatoday.com/story/tech/2014/12/02/sony-hack-attack-chief-information-security-officer-philip-reitinger/19776929/.

16. Kelly Kavanagh, "North America Security Market Forecast: 2001–2006," Gartner, October 9, 2002, www.bus.umich.edu/KresgePublic/Journals/Gartner/research/110400/110432/110432.html.

17. Sean Brodrick, "Why 2016 Will Be the Year of Cybersecurity," Energy & Resources Digest, December 30, 2015, http://energyandresourcesdigest.com/invest-cybersecurity-2016-hack-cibr/.

18. Deborah Gage, "VCs Pour Money into Cybersecurity Startups," Wall Street Journal, April 19, 2015, www.wsj.com/articles/vcs-pour-money-into-cybersecurity-startups-1429499474.

19. PWC, Managing Cyber Risks in an Interconnected World: Key Findings from the Global State of Information Security Survey 2015, September 30, 2014, www.pwc.be/en/news-publications/publications/2014/gsiss2015.html.

20. OWASP, "OWASP Risk Rating Methodology," last modified September 3, 2015, www.owasp.org/index.php/OWASP_Risk_Rating_Methodology.

Chapter 2: A Measurement Primer for Cybersecurity

Success is a function of persistence and doggedness and the willingness to work hard for twenty-two minutes to make sense of something that most people would give up on after thirty seconds.

—Malcolm Gladwell, Outliers1

Before we can discuss how literally anything can be measured in cybersecurity, we need to discuss measurement itself, and we need to address early the objection that some things in cybersecurity are simply not measurable. The fact is that a series of misunderstandings about the methods of measurement, the thing being measured, or even the definition of measurement itself will hold back many attempts to measure.

This chapter will be mostly redundant for readers of the original How to Measure Anything: Finding the Value of “Intangibles” in Business. This chapter has been edited from the original and the examples geared slightly more in the direction of cybersecurity. However, if you have already read the original book, then you might prefer to skip this chapter. Otherwise, you will need to read on to understand these critical basics.

We propose that there are just three reasons why anyone ever thought something was immeasurable—cybersecurity included—and all three are rooted in misconceptions of one sort or another. We categorize these three reasons as concept, object, and method. Various forms of these objections to measurement will be addressed in more detail later in this book (especially in Chapter 5). But for now, let’s review the basics:

Concept of measurement.

The definition of measurement itself is widely misunderstood. If one understands what “measurement” actually means, a lot more things become measurable.

Object of measurement.

The thing being measured is not well defined. Sloppy and ambiguous language gets in the way of measurement.

Methods of measurement.

Many procedures of empirical observation are not well known. If people were familiar with some of these basic methods, it would become apparent that many things thought to be immeasurable are not only measurable but may have already been measured.

A good way to remember these three common misconceptions is by using a mnemonic like “howtomeasureanything.com,” where the c, o, and m in “.com” stand for concept, object, and method. Once we learn that these three objections are misunderstandings of one sort or another, it becomes apparent that everything really is measurable.

The Concept of Measurement

As far as the propositions of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality.

—Albert Einstein

Although this may seem a paradox, all exact science is based on the idea of approximation. If a man tells you he knows a thing exactly, then you can be safe in inferring that you are speaking to an inexact man.

—Bertrand Russell (1872–1970), British mathematician and philosopher

For those who believe something to be immeasurable, the concept of measurement—or rather the misconception of it—is probably the most important obstacle to overcome. If we incorrectly think that measurement means meeting some nearly unachievable standard of certainty, then few things will be measurable even in the physical sciences.