
**459,99 zł**

- Publisher: John Wiley & Sons
- Category: Science and new technologies
- Language: English
- Year of publication: 2016

An accessible introduction to probability, stochastic processes, and statistics for computer science and engineering applications. The second edition is now also available in paperback. This updated and revised edition of the popular classic relates fundamental concepts in probability and statistics to the computer sciences and engineering. The author uses Markov chains and other statistical tools to illustrate processes in reliability of computer systems and networks, fault tolerance, and performance. This edition features an entirely new section on stochastic Petri nets, as well as new sections on system availability modeling, wireless system modeling, numerical solution techniques for Markov chains, and software reliability modeling, among other subjects. Extensive revisions take new developments in solution techniques and applications into account and bring this work fully up to date. The book includes more than 200 worked examples and self-study exercises for each section. Probability and Statistics with Reliability, Queuing and Computer Science Applications, Second Edition offers a comprehensive introduction to probability, stochastic processes, and statistics for students of computer science, electrical and computer engineering, and applied mathematics. Its wealth of practical examples and up-to-date information makes it an excellent resource for practitioners as well. An Instructor's Manual presenting detailed solutions to all the problems in the book is available from the Wiley editorial department.
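As a flavor of the availability modeling the description mentions, here is a minimal sketch (not taken from the book) of the classical two-state continuous-time Markov chain for a single repairable component; the failure and repair rates below are illustrative assumptions.

```python
# Two-state CTMC availability model: a component alternates between
# "up" (fails at rate lam) and "down" (is repaired at rate mu).
# Rates are hypothetical, chosen only for illustration.

lam = 1.0 / 1000.0   # assumed failure rate, per hour
mu = 1.0 / 4.0       # assumed repair rate, per hour

# Steady-state balance: pi_up * lam = pi_down * mu, with pi_up + pi_down = 1,
# yields the well-known closed form for limiting availability.
availability = mu / (lam + mu)

print(round(availability, 6))  # prints 0.996016
```

The same closed form falls out of solving pi Q = 0 for the 2x2 generator matrix, which is how larger availability models in this style are handled numerically.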

Number of pages: 1028

Cover

Title Page

Copyright

Preface to the Paperback Edition

Preface to the Second Edition

Preface to the First Edition

Acronyms

About the Companion Website

Chapter 1: Introduction

1.1 Motivation

1.2 Probability Models

1.3 Sample Space

1.4 Events

1.5 Algebra of Events

1.6 Graphical Methods of Representing Events

1.7 Probability Axioms

1.8 Combinatorial Problems

1.9 Conditional Probability

1.10 Independence of Events

1.11 Bayes' Rule

1.12 Bernoulli Trials

References

Chapter 2: Discrete Random Variables

2.1 Introduction

2.2 Random Variables and Their Event Spaces

2.3 The Probability Mass Function

2.4 Distribution Functions

2.5 Special Discrete Distributions

2.6 Analysis of Program MAX

2.7 The Probability Generating Function

2.8 Discrete Random Vectors

2.9 Independent Random Variables

References

Chapter 3: Continuous Random Variables

3.1 Introduction

3.2 The Exponential Distribution

3.3 The Reliability and Failure Rate

3.4 Some Important Distributions

3.5 Functions of a Random Variable

3.6 Jointly Distributed Random Variables

3.7 Order Statistics

3.8 Distribution of Sums

3.9 Functions of Normal Random Variables

References

Chapter 4: Expectation

4.1 Introduction

4.2 Moments

4.3 Expectation Based on Multiple Random Variables

4.4 Transform Methods

4.5 Moments and Transforms of Some Distributions

4.6 Computation of Mean Time to Failure

4.7 Inequalities and Limit Theorems

References

Chapter 5: Conditional Distribution and Expectation

5.1 Introduction

5.2 Mixture Distributions

5.3 Conditional Expectation

5.4 Imperfect Fault Coverage and Reliability

5.5 Random Sums

References

Chapter 6: Stochastic Processes

6.1 Introduction

6.2 Classification of Stochastic Processes

6.3 The Bernoulli Process

6.4 The Poisson Process

6.5 Renewal Processes

6.6 Availability Analysis

6.7 Random Incidence

6.8 Renewal Model of Program Behavior

References

Chapter 7: Discrete-Time Markov Chains

7.1 Introduction

7.2 Computation of n-step Transition Probabilities

7.3 State Classification and Limiting Probabilities

7.4 Distribution of Times Between State Changes

7.5 Markov Modulated Bernoulli Process

7.6 Irreducible Finite Chains with Aperiodic States

7.7 * The M/G/1 Queuing System

7.8 Discrete-Time Birth–Death Processes

7.9 Finite Markov Chains with Absorbing States

References

Chapter 8: Continuous-Time Markov Chains

8.1 Introduction

8.2 The Birth–Death Process

8.3 Other Special Cases of the Birth–Death Model

8.4 Non-Birth–Death Processes

8.5 Markov Chains with Absorbing States

8.6 Solution Techniques

8.7 Automated Generation

References

Chapter 9: Networks of Queues

9.1 Introduction

9.2 Open Queuing Networks

9.3 Closed Queuing Networks

9.4 General Service Distribution and Multiple Job Types

9.5 Non-product-form Networks

9.6 Computing Response Time Distribution

9.7 Summary

References

Chapter 10: Statistical Inference

10.1 Introduction

10.2 Parameter Estimation

10.3 Hypothesis Testing

References

Chapter 11: Regression and Analysis of Variance

11.1 Introduction

11.2 Least-squares Curve Fitting

11.3 The Coefficients of Determination

11.4 Confidence Intervals in Linear Regression

11.5 Trend Detection and Slope Estimation

11.6 Correlation Analysis

11.7 Simple Nonlinear Regression

11.8 Higher-dimensional Least-squares Fit

11.9 Analysis of Variance

References

Appendix A: Bibliography

A.1 Theory

A.2 Applications

Appendix B: Properties of Distributions

Appendix C: Statistical Tables

Appendix D: Laplace Transforms

References

Appendix E: Program Performance Analysis

Author Index

Subject Index

End User License Agreement


List of Illustrations

Chapter 1: Introduction

Figure 1.1 A one-dimensional sample space

Figure 1.2 A two-dimensional sample space

Figure 1.3 A one-dimensional sample space

Figure 1.4 Venn diagram for sample space S and events A and B

Figure 1.5 Venn diagram of disjoint events A and B

Figure 1.6 Venn diagram for two intersecting events A and B

Figure 1.7 Venn diagram of A and its complement

Figure 1.8 Tree diagram of a sequential sample space

Figure 1.9 A two-dimensional sample space

Figure 1.10 Reliability curve of a parallel redundant system

Figure 1.11 A series–parallel reliability block diagram

Figure 1.12 A communication network with five nodes

Figure 1.13 Fault tree for the communication network

Figure 1.14 A fault tree

Figure 1.P.1 A network of communication channels

Figure 1.P.2 Reliability block diagrams

Figure 1.P.3 Lamp problem

Figure 1.P.4 A modified communication network

Figure 1.15 The theorem of total probability

Figure 1.16 A channel diagram

Figure 1.17 A non-series–parallel system

Figure 1.P.5 Another non-series–parallel system

Figure 1.P.6 A trinary communication channel: channel diagram

Figure 1.18 A CPU queue with time slicing

Figure 1.19 A triple modular redundant system

Figure 1.20 BTS sector/transmitter

Figure 1.21 Reliability block diagram when 2:1 combiner and duplexer 1 are up

Figure 1.22 System reliability block diagram

Figure 1.23 A CPU to I/O device queuing scheme

Figure 1.24 (a) Series configuration; (b) parallel configuration; (c) series–parallel configuration

Figure 1.P.7 Comparison of two redundancy schemes

Figure 1.P.8 A fault tree

Chapter 2: Discrete Random Variables

Figure 2.1 Tree diagram of a sequential sample space

Figure 2.2 Event space for three Bernoulli trials

Figure 2.3 Histogram of pmf for Example 2.1

Figure 2.4 Domain and range of P, X, pmf, and CDF

Figure 2.5 CDF of three Bernoulli trials

Figure 2.6 CDF of Bernoulli random variable

Figure 2.7 Comparing the model pmf with data of Example 2.4

Figure 2.8 Probability of acceptance versus fraction defective

Figure 2.9 Symmetric binomial pmf

Figure 2.10 Positively skewed binomial pmf

Figure 2.11 Negatively skewed binomial pmf

Figure 2.12 Geometric pmf

Figure 2.13 Poisson pmf

Figure 2.14 Discrete uniform distribution

Figure 2.15 CDF of constant random variable

Figure 2.16 Indicator random variable

Figure 2.17 Flowchart of MAX

Figure 2.18 Joint pmf for two-module execution problem

Figure 2.19 I/O queuing at the end of a CPU burst

Figure 2.20 Computing the pmf of the random variable

Figure 2.21 Theorem 2.2

Figure 2.22 Graph for two-module execution problem

Figure 2.P.1 A combinational circuit

Chapter 3: Continuous Random Variables

Figure 3.1 Relation between CDF and pdf

Figure 3.2 CDF of a mixed random variable

Figure 3.3 The CDF of an exponentially distributed random variable

Figure 3.4 Exponential pdf

Figure 3.5 Memoryless property of the exponential distribution

Figure 3.6 Failure rate as a function of time

Figure 3.7 Stress as a function of time

Figure 3.8 The pdf of the hypoexponential distribution

Figure 3.9 The CDF of the hypoexponential distribution

Figure 3.10 The failure rate of the hypoexponential distribution

Figure 3.11 The failure rate of the gamma distribution

Figure 3.12 The gamma pdf

Figure 3.13 The CPU service time distribution compared with the hyperexponential distribution.

Figure 3.14 Failure rate of the Weibull distribution for various parameter values

Figure 3.15 Failure rate of the log-logistic distribution

Figure 3.16 Normal density for various parameter values

Figure 3.17 Failure rate of the normal distribution

Figure 3.18 Normal approximation to the binomial pmf

Figure 3.19 The pdf of a uniform distribution

Figure 3.20 The CDF of a uniform distribution

Figure 3.21 The pdf of a Pareto distribution

Figure 3.22 The CDF of a Pareto distribution

Figure 3.23 The failure rate of a Pareto distribution

Figure 3.24 Defective distribution function

Figure 3.25 Generating a random deviate

Figure 3.26 Properties of joint CDF

Figure 3.27 Reliability of a parallel-redundant system

Figure 3.28 Comparison of TMR and simplex reliabilities

Figure 3.29 Comparison of k-out-of-n and simplex reliabilities

Figure 3.30 Reliability block diagram for the workstation–file server (WFS) example

Figure 3.31 Transmit and exchange operations of a switching element

Figure 3.32 An SEN

Figure 3.33 A SEN and a SEN+

Figure 3.34 Series–parallel reliability block diagram of SEN+

Figure 3.35 An SEN+

Figure 3.36 Area of integration for the convolution of X and Y

Figure 3.37 The area of integration for Example 3.24

Figure 3.38 Reliabilities of simplex and two-component standby systems

Figure 3.39 Computing capacity as a function of time

Figure 3.40 Hypoexponential as a series of exponential stages

Figure 3.41 Comparison of simplex, TMR, and TMR/simplex reliabilities

Figure 3.42 A module with an online fault detector

Figure 3.43 The order statistics of the exponential distribution

Figure 3.44 Lifetime distribution of a hybrid k-out-of-n system

Figure 3.45 The area of integration for Example 3.35

Figure 3.46 The area of integration for computing the CDF

Figure 3.47 Student's t pdf and its comparison with standard normal pdf

Chapter 4: Expectation

Figure 4.P.1 An alternative method of computing

Figure 4.1 The pdf of a “concentrated” distribution

Figure 4.2 The pdf of a diffuse distribution

Figure 4.3 Two areas of integration for Example 4.7

Figure 4.4 The variation in the expected life with the degree of (parallel) redundancy (fixed simplex failure rate)

Figure 4.P.2 A system with dependent failures

Chapter 5: Conditional Distribution and Expectation

Figure 5.1 Illustration for

Figure 5.2 The hyperexponential distribution as a set of parallel exponential stages

Figure 5.3 The pdf of a mixture of exponential and Erlang

Figure 5.4 Hazard rate of a mixture of exponential and Erlang

Figure 5.P.1 Flowchart of automatic fault recovery

Figure 5.5 Three phases of fault handling

Figure 5.6 Reliability of two-component standby system with imperfect coverage

Figure 5.7 The unit-step function: CDF of a constant random variable

Figure 5.8 The stage-type distribution of system lifetime for the system in Example 5.16

Figure 5.9 Tree diagram for Example 5.17

Figure 5.10 Lifetime distribution of a two-component warm standby system with imperfect coverage

Figure 5.11 Coxian stage-type distribution

Figure 5.12 Lifetime distribution of an n-component standby redundant system with imperfect coverage

Figure 5.13 Lifetime distribution of hybrid k-out-of-n system with imperfect coverage

Chapter 6: Stochastic Processes

Figure 6.1 Noise voltages across two resistors

Figure 6.2 A queuing system

Figure 6.3 Typical sample function of a discrete-time, discrete-state process

Figure 6.4 Typical sample function of a continuous-time, discrete-state process

Figure 6.5 Typical sample function of a discrete-time, continuous-state process

Figure 6.6 Typical sample function of a continuous-time, continuous-state process

Figure 6.7 Typical sample function of the telegraph process

Figure 6.8 Autocorrelation function of the telegraph process

Figure 6.9 The tree diagram for WWW request

Figure 6.10 State transitions of the Poisson process

Figure 6.11 Pooling two Poisson streams

Figure 6.12 Superposition of independent Poisson processes

Figure 6.13 Decomposition of a Poisson process

Figure 6.14 A realization of a sequence of failures and repairs

Figure 6.15 A realization of the process corresponding to Figure 6.14

Figure 6.16 Reliability, instantaneous availability, interval availability, and limiting availability as functions of time

Figure 6.17 Random incidence

Chapter 7: Discrete-Time Markov Chains

Figure 7.1 The state diagram of a two-state Markov chain

Figure 7.2 A channel diagram

Figure 7.3 The state diagram of the two-state Markov chain of Example 7.5

Figure 7.4 The state diagram of a two-state periodic Markov chain

Figure 7.5 The state classification of discrete-time Markov chains

Figure 7.6 The state diagram of a Markov chain with one absorbing state and one transient state

Figure 7.7 A discrete-time Markov model of a program

Figure 7.8 Non-homogeneous DTMC model for software reliability growth

Figure 7.9 A multiprocessor system with multimodule memory

Figure 7.10 A discrete-time queuing network representation of multiprocessor memory interference

Figure 7.11 The state diagram for an example of the memory interference problem

Figure 7.12 Different cache organizations: (a) fully associative; (b) direct-mapped; (c) set-associative

Figure 7.13 Markov chain for the directly mapped cache

Figure 7.14 LRU stack updating procedure

Figure 7.15 The state diagram for the LRU-stack example

Figure 7.16 The state diagram of the DTMC for slotted Aloha

Figure 7.17 An ATM multiplexer

Figure 7.18 DTMC for the queue length of an ATM multiplexer

Figure 7.P.1 The queue with Bernoulli feedback

Figure 7.19 The state diagram of the discrete-time birth–death process

Figure 7.20 The state diagram for Example 7.16

Figure 7.21 Two stacks sharing an area of memory

Figure 7.22 Tree diagram for Example 7.17

Figure 7.23 The state diagram for the two-stacks example

Figure 7.24 A program flow graph

Figure 7.25 A program flow graph

Figure 7.26 A program flow graph with failure state

Figure 7.P.2 Another program flow graph

Chapter 8: Continuous-Time Markov Chains

Figure 8.1 The state diagram of the birth–death process

Figure 8.2 The queuing system

Figure 8.3 Expected number of jobs in system versus traffic intensity

Figure 8.4 Average response time versus traffic intensity

Figure 8.5 The queuing system

Figure 8.6 The state diagram of the queue

Figure 8.7 Queuing schemes: (a) separate queues; (b) common queue

Figure 8.8 Queuing schemes for Example 8.4: (a) separate queues; (b) common queue

Figure 8.9 State diagram of a birth–death process with a finite state space

Figure 8.10 queuing system

Figure 8.11 state diagram

Figure 8.12 Task completion time distribution

Figure 8.13 The cyclic queuing model of a multiprogramming system

Figure 8.14 The state diagram for the cyclic queuing model

Figure 8.15 The CPU utilization as a function of the degree of multiprogramming

Figure 8.P.1 Two CTMCs

Figure 8.16 The state diagram of a finite-population queuing system

Figure 8.17 Service slot availability model

Figure 8.18 A client–server system

Figure 8.19 Average response time as a function of the number of clients

Figure 8.20 CTMC for the wireless handoff model

Figure 8.21 State diagram of the Poisson process

Figure 8.22 Nonhomogeneous Poisson process

Figure 8.23 The state diagram of a death process with a linear death rate

Figure 8.24 The state diagram for Example 8.19

Figure 8.25 CTMC for preventive maintenance model

Figure 8.26 Steady-state availability of the preventive maintenance model with different

Figure 8.27 Two-component system availability model

Figure 8.28 Two-component system with fault detection delay

Figure 8.29 Downtime of two-component system with fault detection delay

Figure 8.30 Two-component availability model with imperfect coverage

Figure 8.31 Downtime due to imperfect coverage

Figure 8.32 Two-component availability model with imperfect coverage and detection delay

Figure 8.33 Downtime due to imperfect coverage and detection delay

Figure 8.34 CTMC availability model for the WFS example

Figure 8.35 CTMC for the 2-node cluster system

Figure 8.36 CTMC model for Web server with cold replication

Figure 8.37 Differentiating between failures of the active and spare units

Figure 8.38 A semi-Markov model

Figure 8.39 Difference in downtime of the SMP model and the approximate CTMC model

Figure 8.40 Top-level model for Example 8.28

Figure 8.41 heterogeneous system

Figure 8.42 The state diagram for the heterogeneous queue

Figure 8.43 The counting process of MMPP with three states

Figure 8.44 State diagram for the queue

Figure 8.45 State diagram for the Erlang loss availability model

Figure 8.46 State diagram for the Erlang loss performance model

Figure 8.47 State diagram for the Erlang loss composite model

Figure 8.48 Total blocking probability in the Erlang loss model

Figure 8.49 State diagram for the multiprocessor availability model

Figure 8.50 Downtime versus the number of processors

Figure 8.51 Queuing system for the multiprocessor performance model

Figure 8.52 State diagram for the multiprocessor performance model

Figure 8.53 Total loss probability for a multiprocessor model

Figure 8.54 Markov chain of availability model

Figure 8.55 versus g for systems without APS and with APS (top); percentage of unavailability (bottom)

Figure 8.56 versus g for systems without APS and with APS (top); percentage of unavailability (bottom)

Figure 8.57 and versus N_b and g for system without APS (top) and with APS (bottom)

Figure 8.58 The state diagram for Example 8.34

Figure 8.59 The state diagram of a duplex system with imperfect coverage

Figure 8.60 The effect of coverage on

Figure 8.61 The NHCTMC model of the duplex system

Figure 8.62 The state diagram of a TMR system with spare and imperfect coverage

Figure 8.63 The stage-type lifetime distribution for the system of Example 8.37

Figure 8.64 CTMC reliability model for the WFS example with repair

Figure 8.65 CTMC reliability model for the WFS example without repair

Figure 8.66 RBD of workstations and file server system

Figure 8.67 CTMC for operational security

Figure 8.68 The state diagram for Example 8.40

Figure 8.69 Architectures of fault-tolerant systems: (a) system S; (b) system D; (c) system DS

Figure 8.70 A CTMC with m absorbing states

Figure 8.71 Multiprocessor system reliability model

Figure 8.72 Multiprocessor system reliability model with hard-deadline failures

Figure 8.P.2 A two-unit system with nonzero detection latency

Figure 8.P.3 An aggregated version of Figure 8.P.2

Figure 8.P.4 Stiffler's coverage model

Figure 8.73 Two-state availability model of Example 8.44

Figure 8.74 (a) A nonconvergent CTMC; (b) reordered convergent CTMC

Figure 8.75 An example of a Petri net

Figure 8.76 Reachability graph of the Petri net for Example 8.48

Figure 8.77 SPN model of a Poisson process

Figure 8.78 Reachability graph of the Poisson process

Figure 8.79 SPN model of queue

Figure 8.80 Reachability graph of the SPN model of queue

Figure 8.81 SPN model of queue

Figure 8.82 Reachability graph of the SPN model of queue

Figure 8.83 A GSPN model of queue

Figure 8.84 Extended reachability graph of the GSPN model of M/E_m/1/n+1 queue

Figure 8.85 CTMC derived from the GSPN model of queue

Figure 8.86 A GSPN model of queue

Figure 8.87 A GSPN model of queue

Figure 8.88 The reachability graph of Example 8.54

Figure 8.89 SRN for the WFS example

Figure 8.90 Reachability graph of Example 8.55

Figure 8.91 SRN for WFS example with imperfect coverage

Figure 8.92 SRN for WFS with complex failure and repair dependency

Figure 8.93 The SRN of wireless loss model

Figure 8.94 The SRN for the producer-consumer system

Figure 8.95 SRN of a multiprocessor model

Chapter 9: Networks of Queues

Figure 9.1 A two-stage tandem network

Figure 9.2 The state diagram for the two-stage tandem network

Figure 9.3 Portion of the state diagram for equation (9.1)

Figure 9.4 (a) An open network with feedback; (b) an “equivalent” network without feedback

Figure 9.5 State diagram for the network in Figure 9.4a

Figure 9.6 The open central server network

Figure 9.P.1 queue with Bernoulli feedback

Figure 9.7 An open central server network with blocking

Figure 9.8 The (closed) central server model

Figure 9.9 The closed cyclic queuing model

Figure 9.10 State spaces for two-node networks

Figure 9.11 The state diagram for the closed cyclic queuing model

Figure 9.12 Average system throughput versus degree of multiprogramming

Figure 9.13 Demonstration of the thrashing phenomenon

Figure 9.14 Queuing model of Example 9.10

Figure 9.15 Average response time versus number of terminals

Figure 9.16 Queuing model of Example 9.11

Figure 9.17 Average response time versus number of clients

Figure 9.18 Closed queuing network model of Example 9.12

Figure 9.19 Two-phase hyperexponential CPU service time distribution

Figure 9.20 State diagram for the system of Example 9.13

Figure 9.21 The state diagram for the central server network with two job types

Figure 9.22 Equivalent server queuing system

Figure 9.23 The behavior of the average response time as a function of the number of partitions and the arrival rate

Figure 9.24 A terminal-oriented system with a limited number of memory partitions

Figure 9.25 SRN model of terminal-oriented system with limited memory

Figure 9.26 Upper level model

Figure 9.27 Plot of average response time for Example 9.18

Figure 9.28 SRN model of the two-node system with multiple processors

Figure 9.29 CTMC model for the two-node system with multiple processors

Figure 9.P.2 Another terminal-oriented system

Figure 9.30 CTMC for response time distribution approximation of model in Figure 9.4a

Figure 9.31 CTMC for response time distribution approximation of model in Figure 9.4b

Figure 9.32 Response time distribution for networks of Figure 9.4, versions a and b

Figure 9.33 CTMC for response time distribution approximation of model in Example 9.3

Figure 9.34 Response time distribution for Example 9.3.

Figure 9.35 The response time block

Figure 9.36 The response time block

Figure 9.37 The response time block

Figure 9.38 Distributed system queuing network

Figure 9.39 CTMC corresponding to response time distribution

Figure 9.40 GSPN model for calculating the response time distribution

Figure 9.41 Response time distribution of distributed system

Figure 9.42 Central server model of a computer system

Figure 9.43 CTMC model of the CSM for computing the response time distribution

Figure 9.44 CTMC model for computing the steady-state probabilities of the non-tagged customer

Figure 9.45 Response time distribution for different number of customers

Chapter 10: Statistical Inference

Figure 10.1

Figure 10.2 Comparing the standard normal pdf with the pdf of the t distribution with 3 degrees of freedom

Figure 10.3

Figure 10.4

Figure 10.5 Weibull estimation

Figure 10.6 90% confidence intervals of error-detection coverage [I—initial (physical fault injection); CPV—checking for protocol violations (simulated fault injection); LCL—lower confidence limit; UCL—upper confidence limit.]

Figure 10.7 Two-state CTMC

Figure 10.8 Number of samples n versus lower boundary

Figure 10.9 Acceptance and rejection regions for a two-sided test

Figure 10.10 Acceptance and rejection regions for a one-sided test

Figure 10.11 Probability of false alarm as a function of test stringency and sample size n

Figure 10.12 Computing types I and II errors

Figure 10.13 A typical power curve for a two-tailed test

Figure 10.14 A typical power curve for a one-tailed test

Figure 10.15 A nonparametric test

Figure 10.16 The Kolmogorov–Smirnov test for a small sample size

Figure 10.17 The Kolmogorov–Smirnov test for a large sample size

Figure 10.18 Probability plot with exponential distribution assumption

Figure 10.19 Probability plot with Weibull distribution assumption

Chapter 11: Regression and Analysis of Variance

Figure 11.1 A scatter diagram showing a linear dependence

Figure 11.2 A scatter diagram showing a nonlinear dependence

Figure 11.3 A scatter diagram showing no specific dependence

Figure 11.4 Least-squares linear fit

Figure 11.5 Nonlinear curve fitting

Appendix D: Laplace Transforms

Figure D.1 Solution of a linear differential equation using Laplace transform procedure

List of Tables

Chapter 1: Introduction

Table 1.1 Sample Points

Chapter 6: Stochastic Processes

Table 6.1 A classification of stochastic processes

Table 6.2 Random-request Availability

Chapter 8: Continuous-Time Markov Chains

Table 8.1 Measures for the system

Table 8.2 Availability of a standby redundant system

Table 8.3 Availabilities for parallel redundant system

Table 8.4 State indices

Table 8.5 Average response times

Table 8.6 Reward rates for systems without APS

Table 8.7 Reward rates for systems with APS

Table 8.8 Parameters used in numerical study

Table 8.9 Parameters for Example 8.40

Table 8.10 Dependability measures for the three architectures

Table 8.11 Firing rates for transitions in the producer-consumer SRN

Chapter 9: Networks of Queues

Table 9.1 Computation of the Normalization Constant

Table 9.2 Parameters of Example 9.6

Table 9.3 Computation of the Normalization Constant

Table 9.4 Average System Throughput (Jobs Completed per Second)

Table 9.5 Parameters for Example 9.7

Table 9.6 Normalization Constant for Load-dependent Servers

Table 9.7 Results for Example 9.9

Table 9.8 Networks with Product-form Solutions

Table 9.9 Parameters for Example 9.16

Table 9.10

Table 9.11 Parameters of Figure 9.24

Table 9.12 The numbers of states and the numbers of nonzero entries in the Q matrix

Table 9.13 Comparing the Exact and Approximate Values of Mean Response Time

Chapter 10: Statistical Inference

Table 10.1 Critical Values of N(0,1)

Table 10.2 Statistics for the workload variables measured

Table 10.3 Statistics for the workload clusters

Table 10.4 Sojourn time distributions in the workload states

Table 10.5 Comparison of state occupancy probabilities (expressed as percentage)

Table 10.6 Fitting a Poisson model

Chapter 11: Regression and Analysis of Variance

Table 11.1 The failure rate versus temperature

Table 11.2 The failure rate data for a Weibull model

Table 11.3 Observations for one-way analysis of variance

Table 11.4 One-way ANOVA table

Table 11.5 Observations for two-way analysis of variance

Table 11.6 Two-way ANOVA with interaction

Table 11.7 ANOVA Table for Example 11.11

Appendix D: Laplace Transforms

Table D.1 Important transform pairs

Table D.2 Properties of Laplace transforms

Kishor S. Trivedi

Duke University, Durham, North Carolina

Copyright © 2016 by John Wiley & Sons, Inc. All rights reserved

Published by John Wiley & Sons, Inc., Hoboken, New Jersey. Published simultaneously in Canada. Hardcover edition originally published as part of Wiley-Interscience.

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4470, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permissions.

Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

For general information on our other products and services or for technical support, please contact our Customer Care Department within the United States at (800) 762-2974, outside the United States at (317) 572-3993 or fax (317) 572-4002.

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic formats. For more information about Wiley products, visit our web site at www.wiley.com.

Library of Congress Cataloging-in-Publication Data:

Names: Trivedi, Kishor S., 1946- author.

Title: Probability and statistics with reliability, queuing, and computer science applications / Kishor S. Trivedi, Duke University, Durham, North Carolina.

Description: Hoboken, NJ : John Wiley & Sons, Inc., [2016] | Includes bibliographical references and index.

Identifiers: LCCN 2016010100 (print) | LCCN 2016013554 (ebook) | ISBN 9781119285427 (pbk. : acid-free paper) | ISBN 9780471460817 (pdf)

Subjects: LCSH: Probabilities--Data processing. | Mathematical statistics--Data processing. | Computer algorithms. | Engineering mathematics. | Reliability Engineering. | Computer Performance. | Queuing Theory. | Fault Tolerant Computing. | Software Reliability.

Classification: LCC QA273.19.E4 T74 2016 (print) | LCC QA273.19.E4 (ebook) |

DDC 519.5-dc23

LC record available at http://lccn.loc.gov/2016010100

Nearly 15 years have passed since the publication of the second edition of this book by John Wiley. The Indian edition (of the first edition of this book, from 1982), published by Prentice-Hall India, is still in print and doing quite well. An Asian edition (of the second edition, from 2001) is published by John Wiley Asia. In 2015, a Chinese translation of the second edition was published. I did not see the need for adding or subtracting a significant amount of material to produce a new, third edition of the “bluebook,” as it is popularly known. However, I took the opportunity to correct nearly 100 minor errors in the book before supplying the camera-ready copy to the publisher. I have also added more terms to the subject index, leading to an extra page at the end. Thus, though this is not a new edition, it is a significant new reprint. I would like to remind readers that a complete solution manual is available to instructors, as are PowerPoint slides for all chapters. Instructors may also wish to use the SHARPE software package as a pedagogic aid. (See http://sharpe.pratt.duke.edu/ for further details.)

I wish to thank many friends for pointing out and/or helping to fix the errors—Javier Alonso López, Yonghuan Cao, Xiaolin Chang, Olivia Das, Jie Feng, Marcos A. M. Ferreira, Lance Fiondella, Ravi Iyer, Michael Kwok, Bharat Madan, José M. Martínez, Rivalino Matias, Jr., Kesari Mishra, Yan Qu, Dharmaraja Selvamuthu, Vibhu Sharma, Harish Sukhwani, Alex Thomasian and the late Ranjith Vasireddy.

Kishor S. Trivedi

Nearly 20 years have passed since the publication of the first edition of this book. Its Indian edition is still in print and doing quite well. In this second edition, I have thoroughly revised all the chapters. Many examples and problems are updated, and many new examples and problems have been added. There is a considerable addition of examples on system availability modeling, wireless system performance and availability modeling, software reliability modeling, and system performability modeling. New material on fault trees and stochastic Petri nets, and numerical solution techniques for Markov chains, has been added. A section on the computation of response time distribution for Markovian queuing networks has also been added. Chapter 8, on continuous-time Markov chains, has undergone the most change. My research experience and the application of these methods in practice for the past 25 years (at the time of writing) have been distilled in these chapters as much as possible. I hope that the book will be useful as a classroom textbook as well as to practicing engineers. Researchers will also find valuable material here. I have tried to avoid adding excessive and very advanced material. Thus, for instance, I have omitted discussion of Markov regenerative processes, fluid stochastic Petri nets, and binary decision diagrams. Other topics that are omitted include material on self-similarity, large deviation theory, and diffusion approximation. The topic of hierarchical and fixed-point iterative models is covered very briefly. Modeling software fault tolerance with various kinds of correlation is also excluded.

I wish to thank many of my current students—Yonghuan Cao, Dong Chen, Dongyan Chen, Christophe Hirel, Lei Li, Yun Liu, Rajiv Poonamalli, Srinivasan Ramani, Kalyan Vaidyanathan, Wei Xie, and Liang Yin; current postdoctoral associates—Dr. Katerina Goseva-Popstojanova, Dr. Yiguang Hong, Dr. Xiaomin Ma, and Dr. Dharmaraja Selvamuthu; former students—Dr. Hoon Choi, Dr. Gianfranco Ciardo, Dr. Ricardo Fricks, Dr. Sachin Garg, Dr. Swapna Gokhale, Wei Li, Xuemei (Judith) Lou, Dr. Tong Luo, Dr. Yue Ma, Dr. Varsha Mainkar, Dr. Manish Malhotra, Anu Mohan, Dr. Jogesh Muppala, Hu Pan, Dr. Anapathur Ramesh, Dr. Robin Sahner, Kartik Sudeep, Dr. Steve Woolet, and Dr. Xinyu (Henry) Zang; former postdoctoral associates—Dr. Hairong Sun and Dr. Bruno Tuffin; and other friends—Prof. Tadashi Dohi, Dr. Zahava Koren, Prof. Igor Kovalenko, Prof. Kang Lee, Dr. Bharat Madan and Prof. Alex Thomasian.

Kishor S. Trivedi

The aim of this book is to provide an introduction to probability, stochastic processes, and statistics for students of computer science, electrical/computer engineering, reliability engineering, and applied mathematics. The prerequisites are two semesters of calculus, a course on introduction to computer programming, and preferably, a course on computer organization.

I have found that the material in the book can be covered in a two-semester or three-quarter course. However, through a choice of topics, shorter courses can also be organized. I have taught the material in this book to seniors and first-year graduate students, but now that the text is in printed form, it could be given to juniors as well.

With a specific audience in mind, I have attempted to provide examples and problems, with which the student can identify, as motivation for the probability concepts. The majority of applications are drawn from reliability analysis and performance analysis of computer systems and from probabilistic analysis of algorithms. Although there are many good texts on each of these application areas, I felt the need for a text that treats them in a balanced fashion.

Chapters 1–5 provide an introduction to probability theory. These five chapters provide the core for a one-semester course on introduction to applied probability. Chapters 6–9 deal with stochastic processes and their applications. These four chapters form the core of a second course with a title such as systems modeling. I have included an entire chapter on networks of queues. The last two chapters are on statistical inference and regression, respectively. I have placed the material on sampling distributions in Chapter 3, which deals with continuous random variables. Portions of the chapters on statistics can be taught with the first course and other portions in the second course.

Besides more than 200 worked examples, most sections conclude with a number of exercises. Difficult exercises are indicated by a star. A solution manual for instructors is available from the publisher.

I am indebted to the Department of Computer Science, Duke University and to Merrell Patrick for their encouragement and support during this project. The efficient typing skills of Patricia Land helped make the job of writing the book much easier than it could have been.

Many of my friends, colleagues, and students carefully read several drafts and suggested many changes that improved the readability and the accuracy of this text. Many thanks to Robert Geist, Narayan Bhat, Satish Tripathi, John Meyer, Frank Harrell, Veena Adlakha, and Jack Stiffler for their suggestions. Joey de la Cruz and Nelson Strothers helped in the editing and the typing process very early in the project. The help by the staff of Prentice-Hall in preparation of the book is also appreciated.

I would like to thank my wife Kalpana, and my daughters Kavita and Smita for enduring my preoccupation with this work for so long. The book is dedicated to my parents.

Kishor S. Trivedi

APS

automatic protection switching

ARQ

automatic repeat request

ATM

asynchronous transfer mode

BCC

blocked calls cleared

BDD

binary decision diagram

BER

bit error rate

BR

base radio

BTS

base–transceiver system

CDF

cumulative distribution function

CFR

constant failure rate

CME

conditional mean exceedance

CPU

central processing unit

CRTD

conditional response time distribution

CSM

central server model

CTMC

continuous-time Markov chain

DFR

decreasing failure rate

DTMC

discrete-time Markov chain

ESS

electronic switching system

FCFS

first come, first served

FIFO

first in, first out

FIT

failure in time

FTREE

fault tree

GBN

Go Back N

GMR

general(ized) multiplication rule

GSPN

generalized stochastic Petri net

IBP

interrupted Bernoulli process

IFR

increasing failure rate

I/O

input output

IRM

independent reference model

IS

infinite server

ISP

Internet service provider

LAN

local area network

LCFS

last come, first served

LCFS-PR

LCFS-preemptive resume

LCL

lower confidence limit

LLC

logical link control

LRU

least recently used

LST

Laplace–Stieltjes transform

LT

Laplace transform

MAC

medi(um)(a) access control

MAP

Markovian arrival process

METF

mean effort to (security) failure

MGF

moment generating function

MLE

maximum-likelihood estimator

MMBP

Markov modulated Bernoulli process

MMPP

Markov modulated Poisson process

MRM

Markov reward model

MTBF

mean time between failures

MTTF

mean time to failure

MTTR

mean time to repair

MVA

mean-value analysis

NHCTMC

nonhomogeneous CTMC

NHPP

nonhomogeneous Poisson process

NPFQN

non-product-form queuing network(s)

ODE

ordinary differential equation

PASTA

Poisson arrivals see time averages

PN

Petri net

pdf

probability density function

PFQN

product-form queuing network(s)

pmf

probability mass function

prs

preemptive resume

prt

preemptive repeat

PS

processor sharing

QoS

quality of service

RBD

reliability block diagram

RG

reliability graph

RR

round robin

SDP

sum of disjoint products

SEN

shuffle exchange network

SHARPE

symbolic hierarchical automated reliability and performance evaluator

SLTF

shortest latency time first

SMP

semi-Markov process

SOR

successive overrelaxation

SPN

stochastic Petri net

SPNP

stochastic Petri net package

SREPT

software reliability estimation and prediction tool

SRGM

software reliability growth model

SRM

software reliability model

SRN

stochastic reward net

SRPTF

shortest remaining processing time first

TDMA

time division multiple access

TLP

total loss probability

TMR

triple modular redundancy

UBT

upside-down bathtub

UCL

upper confidence limit

URTD

unconditional response time distribution

VLSI

very large scale integrated (circuits)

WFS

workstation–file server (system)

This book is accompanied by a password-protected companion website for instructors only.

This website contains:

Course Slides

Solutions Manual

To access the material on the instructor's website simply visit www.wiley.com/go/trivedi/probabilityandstatistics2e and follow the instructions for how to register.

Computer scientists and engineers need powerful techniques to analyze algorithms and computer systems. Similarly, networking engineers need methods to analyze the behavior of protocols, routing algorithms, and congestion in networks. Computer systems and networks are subject to failure, and hence methods for evaluating their reliability and availability are needed. Many of the tools necessary for these analyses have their foundations in probability theory. For example, in the analysis of algorithm execution times, it is common to draw a distinction between the worst-case and the average-case behavior of an algorithm. The distinction is based on the fact that for certain problems, while an algorithm may require an inordinately long time to solve the least favorable instance of the problem, the average solution time is considerably shorter. When many instances of a problem have to be solved, the probabilistic (or average-case) analysis of the algorithm is likely to be more useful. Such an analysis accounts for the fact that the performance of an algorithm depends on the distributions of input data items. Of course, we have to specify the relevant probability distributions before the analysis can be carried out. Thus, for instance, while analyzing a sorting algorithm, a common assumption is that every permutation of the input sequence is equally likely to occur.
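The gap between worst-case and average-case behavior can be made concrete with a small sketch (illustrative only; insertion sort and the parameters below are chosen for the example, not taken from the text). It counts the comparisons insertion sort performs on its worst input and, by sampling uniformly random permutations—the distribution assumed above—estimates the average-case count:

```python
import random

def insertion_sort_comparisons(seq):
    """Sort a copy of seq with insertion sort; return the number of comparisons made."""
    a, comparisons = list(seq), 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            comparisons += 1                      # compare a[j-1] with a[j]
            if a[j - 1] > a[j]:
                a[j - 1], a[j] = a[j], a[j - 1]   # out of order: swap and keep moving left
                j -= 1
            else:
                break                             # in order: this element is placed
    return comparisons

n = 8
# Worst case: a reversed sequence forces n(n-1)/2 = 28 comparisons for n = 8.
worst = insertion_sort_comparisons(range(n, 0, -1))
# Average case: sample many uniformly random permutations and average the counts.
trials = [insertion_sort_comparisons(random.sample(range(n), n)) for _ in range(10_000)]
average = sum(trials) / len(trials)
print(worst, average)   # the average lies well below the worst case
```

The estimated average (roughly n^2/4 comparisons) is noticeably smaller than the worst case, which is the point of the average-case analysis described above.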

Similarly, if the storage is dynamically allocated, a probabilistic analysis of the storage requirement is more appropriate than a worst-case analysis. In a like fashion, a worst-case analysis of the accumulation of roundoff errors in a numerical algorithm tends to be rather pessimistic; a probabilistic analysis, although harder, is more useful.

When we consider the analysis of a Web server serving a large number of users, several types of random phenomena need to be accounted for. First, the arrival pattern of requests is subject to randomness due to a large population of diverse users. Second, the resource requirements of requests will likely fluctuate from request to request as well as during the execution of a single request. Finally, the resources of the Web server are subject to random failures due to environmental conditions and aging phenomena. The theory of stochastic (random) processes is very useful in evaluating various measures of system effectiveness such as throughput, response time, reliability, and availability.

Before an algorithm (or protocol) or a system can be analyzed, various probability distributions have to be specified. Where do the distributions come from? We may collect data during the actual operation of the system (or the algorithm). These measurements can be performed by hardware monitors, software monitors, or both. Such data must be analyzed and compressed to obtain the necessary distributions that drive the analytical models discussed above. Mathematical statistics provides us with techniques for this purpose, such as the design of experiments, hypothesis testing, estimation, analysis of variance, and linear and nonlinear regression.

Probability theory is concerned with the study of random (or chance) phenomena. Such phenomena are characterized by the fact that their future behavior is not predictable in a deterministic fashion. Nevertheless, such phenomena are usually capable of mathematical descriptions due to certain statistical regularities. This can be accomplished by constructing an idealized probabilistic model of the real-world situation. Such a model consists of a list of all possible outcomes and an assignment of their respective probabilities. The theory of probability then allows us to predict or deduce patterns of future outcomes.

Since a model is an abstraction of the real-world problem, predictions based on the model must be validated against actual measurements collected from the real phenomena. A poor validation may suggest modifications to the original model. The theory of statistics facilitates the process of validation. Statistics is concerned with the inductive process of drawing inferences about the model and its parameters based on the limited information contained in real data.

The role of probability theory is to analyze the behavior of a system or an algorithm assuming the given probability assignments and distributions. The results of this analysis are as good as the underlying assumptions. Statistics helps us in choosing these probability assignments and in the process of validating model assumptions. The behavior of the system (or the algorithm) is observed, and an attempt is made to draw inferences about the underlying unknown distributions of random variables that describe system activity. Methods of statistics, in turn, make heavy use of probability theory.

Consider the problem of predicting the number of request arrivals to a Web server in a fixed time interval (0, t]. A common model of this situation is to assume that the number of arrivals in this period has a particular distribution, such as the Poisson distribution (see Chapter 2). Thus we have replaced a complex physical situation by a simple model with a single unknown parameter, namely, the average arrival rate λ. With the help of probability theory we can then deduce the pattern of future arrivals. On the other hand, statistical techniques help us estimate the unknown parameter λ based on actual observations of past arrival patterns. Statistical techniques also allow us to test the validity of the Poisson model.
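As a sketch of the estimation step just described (the observed counts below are hypothetical, invented for illustration), the maximum-likelihood estimate of the Poisson rate λ is simply the sample mean of the observed arrival counts, which can then be plugged into the Poisson probability mass function:

```python
import math

def poisson_pmf(k, lam):
    """P(N = k) for a Poisson random variable with rate lam per interval."""
    return math.exp(-lam) * lam**k / math.factorial(k)

# Hypothetical numbers of request arrivals observed in ten equal intervals.
counts = [3, 5, 4, 6, 2, 4, 5, 3, 4, 4]

# The maximum-likelihood estimate of the Poisson rate is the sample mean.
lam_hat = sum(counts) / len(counts)
print(lam_hat)                   # 4.0
print(poisson_pmf(4, lam_hat))   # predicted probability of exactly 4 arrivals
```

Testing whether the Poisson model actually fits such data is the subject of the goodness-of-fit techniques discussed in Chapter 10.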

As another example, consider a fault-tolerant computer system with automatic error recovery capability. Model this situation as follows. The probability of successful recovery is c and the probability of an abortive error is 1 − c. The uncertainty of the physical situation is once again reduced to a simple probability model with a single unknown parameter c. In order to estimate parameter c in this model, we observe N errors, out of which n are successfully recovered. A reasonable estimate of c is the relative frequency n/N, since we expect this ratio to converge to c in the limit as N → ∞. Note that this limit is a limit in a probabilistic sense: for every ε > 0, P(|n/N − c| > ε) → 0 as N → ∞.

Axiomatic approaches to probability allow us to define such limits in a mathematically consistent fashion (e.g., see the law of large numbers in Chapter 4) and hence allow us to use relative frequencies as estimates of probabilities.
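The convergence of the relative frequency n/N to c can be illustrated with a small simulation (an illustrative sketch; the value c = 0.9 and the sample sizes are assumptions made for the example):

```python
import random

def relative_frequency(c, N, seed=0):
    """Simulate N error recoveries, each succeeding with probability c,
    and return the relative frequency n/N of successful recoveries."""
    rng = random.Random(seed)
    n = sum(rng.random() < c for _ in range(N))   # count successful recoveries
    return n / N

for N in (100, 10_000, 1_000_000):
    # As N grows, n/N settles near the true (normally unknown) value c = 0.9.
    print(N, relative_frequency(0.9, N))
```

This is exactly the behavior that the law of large numbers (Chapter 4) makes precise, and it is what justifies using relative frequencies as estimates of probabilities.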

Probability theory is rooted in the real-life situation where a person performs an experiment the outcome of which may not be certain. Such an experiment is called a random experiment. Thus, an experiment may consist of the simple process of noting whether a component is functioning properly or has failed; it may consist of determining the execution time of a program; or it may consist of determining the response time of a server request. The results of any such observations, whether they are simple “yes” or “no” answers, meter readings, or whatever, are called outcomes of the experiment.

The totality of the possible outcomes of a random experiment is called the sample space of the experiment and it will be denoted by the letter S.

We point out that the sample space is not determined completely by the experiment. It is partially determined by the purpose for which the experiment is carried out. If the status of two components is observed, for some purposes it is sufficient to consider only three possible outcomes: two functioning, two malfunctioning, one functioning and one malfunctioning. These three outcomes constitute the sample space S. On the other hand, we might be interested in exactly which of the components has failed, if any has failed. In this case the sample space S must be considered to consist of four possible outcomes, where the earlier single outcome of one failed, one functioning is split into two outcomes: first failed, second functioning and first functioning, second failed. Many other sample spaces can be defined if we take into account such things as type of failure and so on.

Frequently, we use a larger sample space than is strictly necessary because it is easier to use; specifically, it is always easier to discard excess information than to recover lost information. For instance, in the preceding illustration, the first sample space might be denoted S1 = {0, 1, 2} (where each number indicates how many components are functioning) and the second sample space might be denoted S2 = {(0, 0), (0, 1), (1, 0), (1, 1)} (where 0 = failed, 1 = functioning). Given a selection from S2, we can always add the two components to determine the corresponding choice from S1; but, given a choice from S1 (in particular 1), we cannot necessarily recover the corresponding choice from S2.

It is useful to think of the outcomes of an experiment, the elements of the sample space, as points in a space of one or more dimensions. For example, if an experiment consists of examining the state of a single component, it may be functioning properly (denoted by the number 1), or it may have failed (denoted by the number 0). The sample space is one-dimensional, as shown in Figure 1.1. If a system consists of two components there are four possible outcomes, as shown in the two-dimensional sample space of Figure 1.2. Here each coordinate is 0 or 1 depending on whether the corresponding component is functioning properly or has failed. In general, if a system has n components, there are 2^n possible outcomes, each of which can be regarded as a point in an n-dimensional sample space. It should be noted that the sample space used here in connection with the observation of the status of components could also serve to describe the results of other experiments; for example, the experiment of observing n successive executions of an if statement, with 1 denoting the execution of the then clause and 0 denoting the execution of the else clause.

Figure 1.1 A one-dimensional sample space

Figure 1.2 A two-dimensional sample space

The geometric configuration that is used to represent the outcomes of an experiment (e.g., Figure 1.2) is not necessarily unique. For example, we could have regarded the outcomes of the experiment of observing the two-component system to be the total number functioning, and the outcomes would be 0,1,2, as depicted in the one-dimensional sample space of Figure 1.3. Note that point 1 in Figure 1.3 corresponds to points (0,1) and (1,0) in Figure 1.2. It is often easier to use sample spaces whose elements cannot be further “subdivided”; that is, the individual elements of a sample space should not represent two or more outcomes that are distinguishable in some fashion. Thus, sample spaces like those of Figures 1.1 and 1.2 should be used in preference to sample spaces like the one in Figure 1.3.

Figure 1.3 A one-dimensional sample space
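The relationship between the two-component sample spaces discussed above can be sketched in a few lines (illustrative only; the names S1 and S2 follow the notation of this section). Enumerating the 2^n component-level outcomes and collapsing each to a count shows why information discarded in passing to the coarser space cannot be recovered:

```python
from itertools import product

n = 2
# S2: one coordinate per component (0 = failed, 1 = functioning) -> 2**n points.
S2 = list(product((0, 1), repeat=n))
# S1: the coarser space recording only how many components are functioning.
S1 = sorted({sum(outcome) for outcome in S2})
print(S2)   # [(0, 0), (0, 1), (1, 0), (1, 1)]
print(S1)   # [0, 1, 2]

# Discarding information is easy; recovering it is not: outcome 1 in S1
# corresponds to two distinguishable points of S2.
preimage = [outcome for outcome in S2 if sum(outcome) == 1]
print(preimage)   # [(0, 1), (1, 0)]
```

The mapping from S2 to S1 (summing the coordinates) is many-to-one, which is precisely why sample spaces whose elements cannot be further subdivided, such as S2, are preferred.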

It is convenient to classify sample spaces according to the number of elements they contain. If the set of all possible outcomes of the experiment is finite, then the associated sample space is a finite sample space. Thus, the sample spaces of Figures 1.1–1.3 are finite sample spaces.