Statistical Methods in Radiation Physics

James E. Turner

Description

This statistics textbook, with particular emphasis on radiation protection and dosimetry, deals with statistical solutions to problems inherent in health physics measurements and decision making. The authors begin with a description of our current understanding of the statistical nature of physical processes at the atomic level, including radioactive decay and interactions of radiation with matter. Examples are taken from problems encountered in health physics, and the material is presented such that health physicists and most other nuclear professionals will more readily understand the application of statistical principles in the familiar context of the examples. Problems are presented at the end of each chapter, with solutions to selected problems provided online. In addition, numerous worked examples are included throughout the text.





Cover

Related Titles

Title Page

Copyright

Dedication

Preface

Chapter 1: The Statistical Nature of Radiation, Emission, and Interaction

1.1 Introduction and Scope

1.2 Classical and Modern Physics – Determinism and Probabilities

1.3 Semiclassical Atomic Theory

1.4 Quantum Mechanics and the Uncertainty Principle

1.5 Quantum Mechanics and Radioactive Decay

Chapter 2: Radioactive Decay

2.1 Scope of Chapter

2.2 Radioactive Disintegration – Exponential Decay

2.3 Activity and Number of Atoms

2.4 Survival and Decay Probabilities of Atoms

2.5 Number of Disintegrations – The Binomial Distribution

2.6 Critique

Chapter 3: Sample Space, Events, and Probability

3.1 Sample Space

3.2 Events

3.3 Random Variables

3.4 Probability of an Event

3.5 Conditional and Independent Events

Chapter 4: Probability Distributions and Transformations

4.1 Probability Distributions

4.2 Expected Value

4.3 Variance

4.4 Joint Distributions

4.5 Covariance

4.6 Chebyshev's Inequality

4.7 Transformations of Random Variables

4.8 Bayes' Theorem

Chapter 5: Discrete Distributions

5.1 Introduction

5.2 Discrete Uniform Distribution

5.3 Bernoulli Distribution

5.4 Binomial Distribution

5.5 Poisson Distribution

5.6 Hypergeometric Distribution

5.7 Geometric Distribution

5.8 Negative Binomial Distribution

Chapter 6: Continuous Distributions

6.1 Introduction

6.2 Continuous Uniform Distribution

6.3 Normal Distribution

6.4 Central Limit Theorem

6.5 Normal Approximation to the Binomial Distribution

6.6 Gamma Distribution

6.7 Exponential Distribution

6.8 Chi-Square Distribution

6.9 Student's t-Distribution

6.10 F Distribution

6.11 Lognormal Distribution

6.12 Beta Distribution

Chapter 7: Parameter and Interval Estimation

7.1 Introduction

7.2 Random and Systematic Errors

7.3 Terminology and Notation

7.4 Estimator Properties

7.5 Interval Estimation of Parameters

7.6 Parameter Differences for Two Populations

7.7 Interval Estimation for a Variance

7.8 Estimating the Ratio of Two Variances

7.9 Maximum Likelihood Estimation

7.10 Method of Moments

Chapter 8: Propagation of Error

8.1 Introduction

8.2 Error Propagation

8.3 Error Propagation Formulas

8.4 A Comparison of Linear and Exact Treatments

8.5 Delta Theorem

Chapter 9: Measuring Radioactivity

9.1 Introduction

9.2 Normal Approximation to the Poisson Distribution

9.3 Assessment of Sample Activity by Counting

9.4 Assessment of Uncertainty in Activity

9.5 Optimum Partitioning of Counting Times

9.6 Short-Lived Radionuclides

Chapter 10: Statistical Performance Measures

10.1 Statistical Decisions

10.2 Screening Samples for Radioactivity

10.3 Minimum Significant Measured Activity

10.4 Minimum Detectable True Activity

10.5 Hypothesis Testing

10.6 Criteria for Radiobioassay, HPS N13.30-1996

10.7 Thermoluminescence Dosimetry

10.8 Neyman–Pearson Lemma

10.9 Treating Outliers – Chauvenet's Criterion

Chapter 11: Instrument Response

11.1 Introduction

11.2 Energy Resolution

11.3 Resolution and Average Energy Expended per Charge Carrier

11.4 Scintillation Spectrometers

11.5 Gas Proportional Counters

11.6 Semiconductors

11.7 Chi-Square Test of Counter Operation

11.8 Dead Time Corrections for Count Rate Measurements

Chapter 12: Monte Carlo Methods and Applications in Dosimetry

12.1 Introduction

12.2 Random Numbers and Random Number Generators

12.3 Examples of Numerical Solutions by Monte Carlo Techniques

12.4 Calculation of Uniform, Isotropic Chord Length Distribution in a Sphere

12.5 Some Special Monte Carlo Features

12.6 Analytical Calculation of Isotropic Chord Length Distribution in a Sphere

12.7 Generation of a Statistical Sample from a Known Frequency Distribution

12.8 Decay Time Sampling from Exponential Distribution

12.9 Photon Transport

12.10 Dose Calculations

12.11 Neutron Transport and Dose Computation

Chapter 13: Dose–Response Relationships and Biological Modeling

13.1 Deterministic and Stochastic Effects of Radiation

13.2 Dose–Response Relationships for Stochastic Effects

13.3 Modeling Cell Survival to Radiation

13.4 Single-Target, Single-Hit Model

13.5 Multi-Target, Single-Hit Model

13.6 The Linear–Quadratic Model

Chapter 14: Regression Analysis

14.1 Introduction

14.2 Estimation of Parameters β₀ and β₁

14.3 Some Properties of the Regression Estimators

14.4 Inferences for the Regression Model

14.5 Goodness of the Regression Equation

14.6 Bias, Pure Error, and Lack of Fit

14.7 Regression through the Origin

14.8 Inverse Regression

14.9 Correlation

Chapter 15: Introduction to Bayesian Analysis

15.1 Methods of Statistical Inference

15.2 Classical Analysis of a Problem

15.3 Bayesian Analysis of the Problem

15.4 Choice of a Prior Distribution

15.5 Conjugate Priors

15.6 Non-Informative Priors

15.7 Other Prior Distributions

15.8 Hyperparameters

15.9 Bayesian Inference

15.10 Binomial Probability

15.11 Poisson Rate Parameter

15.12 Normal Mean Parameter

Appendix

References

Index

Related Titles

Turner, J.E.

Atoms, Radiation, and Radiation Protection

Third Edition

2007

ISBN: 978-3-527-40606-7

Lieser, K. H., J.V. Kratz

Nuclear and Radiochemistry

Fundamentals and Applications, Third Edition

2013

ISBN: 978-3-527-32901-4

Martin, J. E.

Physics for Radiation Protection

Second Edition

2003

ISBN: 978-3-527-40611-1

Bevelacqua, J. J.

Basic Health Physics

Problems and Solutions, Second Edition

2010

ISBN: 978-3-527-40823-8

Bevelacqua, J. J.

Contemporary Health Physics

Problems and Solutions, Second Edition

2009

ISBN: 978-3-527-40824-5

All books published by Wiley-VCH are carefully produced. Nevertheless, authors, editors, and publisher do not warrant the information contained in these books, including this book, to be free of errors. Readers are advised to keep in mind that statements, data, illustrations, procedural details or other items may inadvertently be inaccurate.

Library of Congress Card No.: applied for

British Library Cataloguing-in-Publication Data

A catalogue record for this book is available from the British Library.

Bibliographic information published by

the Deutsche Nationalbibliothek

The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at http://dnb.d-nb.de.

© 2012 Wiley-VCH Verlag & Co. KGaA, Boschstr. 12, 69469 Weinheim, Germany

All rights reserved (including those of translation into other languages). No part of this book may be reproduced in any form – by photoprinting, microfilm, or any other means – nor transmitted or translated into a machine language without written permission from the publishers. Registered names, trademarks, etc. used in this book, even when not specifically marked as such, are not to be considered unprotected by law.

Print ISBN: 978-3-527-41107-8

ePDF ISBN: 978-3-527-64657-9

ePub ISBN: 978-3-527-64656-2

mobi ISBN: 978-3-527-64655-5

oBook ISBN: 978-3-527-64654-8

Cover Design Adam-Design, Weinheim

Composition Thomson Digital, Noida, India

James E. Turner, 1930–2008

Dedicated to the memory of James E. (Jim) Turner – scholar, scientist, mentor, friend.

Preface

Statistical Methods in Radiation Physics began as an effort to help clarify, for our students and colleagues, implications of the probabilistic nature of radioactive decay for measuring its observable consequences. Every radiological control technician knows that the uncertainty in the number of counts detected from a long-lived radioisotope is taken to be the square root of that number. But why is that so? And how is the corresponding uncertainty estimated for counts from a short-lived species, for which the count rate dies away even as the measurement is made? One of us (JET) had already been presented with these types of questions while teaching courses in the Oak Ridge Resident Graduate Program of the University of Tennessee's Evening School. A movement began in the late 1980s in the United States to codify occupational radiation protection and monitoring program requirements into Federal Regulations, and to include performance testing of programs and laboratories that provide the supporting external dosimetry and radiobioassay services. The authors' initial effort at a textbook consequently addressed statistics associated with radioactive decay and measurement, and also statistics used in the development of performance criteria and reporting of monitoring results.

What began as a short textbook grew eventually to 15 chapters, corresponding with the authors' growing realization that there did not appear to be a comparable text available. The book's scope consequently broadened from a textbook for health physicists to one useful to a wide variety of radiation scientists.

This is a statistics textbook, but the radiological focus is immediately emphasized in the first two chapters and continues throughout the book. Chapter 1 traces the evolution of deterministic classical physics at the end of the nineteenth century into the modern understanding of the wave–particle duality of nature, statistical limitations on precision of observables, and the development of quantum mechanics and its probabilistic view of nature. Chapter 2 begins with the familiar (to radiological physicists) exponential decay equation, a continuous, differentiable equation describing the behavior of large numbers of radioactive atoms, and concludes with the application of the binomial distribution to describe observations of small, discrete numbers of radioactive atoms. With the reader now on somewhat familiar ground, the next six chapters introduce probability, probability distributions, parameter and interval estimation, and error (uncertainty) propagation in derived quantities. These statistical tools are then applied in the remaining chapters to practical problems of measuring radioactivity, establishing performance measures for laboratories, instrument response, Monte Carlo modeling, dose response, and regression analysis. The final chapter introduces Bayesian analysis, which has seen increasing application in health physics in the past decade. The book is written at the senior or beginning graduate level as a text for a 1-year course in a curriculum of physics, health physics, nuclear engineering, environmental engineering, or an allied discipline. A large number of examples are worked in the text, with additional problems at the end of each chapter. SI units are emphasized, although traditional units are also used in some examples. SI abbreviations are used throughout. Statistical Methods in Radiation Physics is also intended as a reference for professionals in various fields of radiation physics and contains supporting tables, figures, appendices, and numerous equations.

We are indebted to our students and colleagues who first stimulated our interest in beginning such a textbook, and then who later contributed in many ways to its evolution and kept encouraging us to finish the manuscript. Some individual and institutional contributions are acknowledged in figure captions. We would like to thank Daniel Strom, in particular, for his encouragement and assistance in adding a chapter introducing Bayesian analysis.

The professional staff at Wiley-VCH has been most supportive and patient, for which we are extremely thankful. It has been a pleasure to work with Anja Tschoertner, in particular, who regularly encouraged us to complete the manuscript. We also owe a debt of gratitude to Maike Peterson and the technical staff for their help in typesetting many equations.

We must acknowledge with great sorrow that James E. (Jim) Turner died on December 29, 2008, and did not see the publication of Statistical Methods in Radiation Physics. Jim conceived the idea that a statistics book applied to problems of radiological measurements would be useful, and provided the inspiration for this textbook. He was instrumental in choosing the topic areas and helped develop a large portion of the material. It was our privilege to have worked with Jim on this book, and we dedicate it to the memory of this man who professionally and personally enriched our lives and the lives of so many of our colleagues.

Chapter 1

The Statistical Nature of Radiation, Emission, and Interaction

1.1 Introduction and Scope

This book is about statistics, with emphasis on its role in radiation physics, measurements, and radiation protection. That this subject is essential for understanding in these areas stems directly from the statistical nature of the submicroscopic, atomic world, as we briefly discuss in the next section. The principal aspects of atomic physics with which we shall be concerned are radioactive decay, radiation transport, and radiation interaction. Knowledge of these phenomena is necessary for success in many practical applications, which include dose assessment, shielding design, and the interpretation of instrument readings. Statistical topics will be further developed for establishing criteria to measure and characterize radioactive decay, assigning confidence limits for measured quantities, and formulating statistical measures of performance and compliance with regulations. An introduction to biological dose–response relations and to modeling the biological effects of radiation will also be included.

1.2 Classical and Modern Physics – Determinism and Probabilities

A principal objective of physical science is to discover laws and regularities that provide a quantitative description of nature as verified by observation. A desirable and useful outcome to be derived from such laws is the ability to make valid predictions of future conditions from a knowledge of the present state of a system. Newton's classical laws of motion, for example, determine completely the future motion of a system of objects if their positions and velocities at some instant of time and the forces acting between them are known. On the scale of the very large, the motion of the planets and moons can thus be calculated forward (and backward) in time, so that eclipses and other astronomical phenomena can be predicted with great accuracy. On the scale of everyday common life, Newton's laws describe all manner of diverse experience involving motion and statics. However, in the early twentieth century, the seemingly inviolate tenets of traditional physics were found to fail on the small scale of atoms. In place of a deterministic world of classical physics, it was discovered that atoms and radiation are governed by definite, but statistical, laws of quantum physics. Given the present state of an atomic system, one can predict its future, but only in statistical terms. What is the probability that a radioactive sample will undergo a certain number of disintegrations in the next minute? What is the probability that a given 100-keV gamma photon will penetrate a 0.5-cm layer of soft tissue? According to modern quantum theory, these questions can be answered as fully as possible only by giving the complete set of probabilities for obtaining any possible result of a measurement or observation.

By the close of the nineteenth century, the classical laws of mechanics, electromagnetism, thermodynamics, and gravitation were firmly established in physics. There were, however, some outstanding problems – evidence that all was not quite right. Two examples illustrate the growing difficulties. First, in the so-called “ultraviolet catastrophe,” classical physics incorrectly predicted the distribution of wavelengths in the spectrum of electromagnetic radiation emitted from hot bodies, such as the sun. Second, sensitive measurements of the relative speed of light in different directions on earth – expected to reveal the magnitude of the velocity of the earth through space – gave a null result (no difference!). Planck found that the first problem could be resolved by proposing a nonclassical, quantum hypothesis related to the emission and absorption of radiation by matter. The now famous quantum of action, h=6.6261×10−34 J s, was thus introduced into physics. The second dilemma was resolved by Einstein in 1905 with the revolutionary special theory of relativity. He postulated that the speed of light has the same numerical value for all observers in uniform translational motion with respect to one another, a situation wholly in conflict with velocity addition in Newtonian mechanics. Special relativity further predicts that energy and mass are equivalent and that the speed of light in a vacuum is the upper limit for the speed that any object can have. The classical concepts of absolute space and absolute time, which had been accepted as axiomatic tenets for Newton's laws of motion, were found to be untenable experimentally.

Example
In a certain experiment, 1000 monoenergetic photons are normally incident on a shield. Exactly 276 photons are observed to interact in the shield, while 724 photons pass through without interacting.
a. What is the probability that the next incident photon, under the same conditions, will not interact in the shield?
b. What is the probability that the next photon will interact?
Solution
a. Based on the given data, we estimate that the probability for a given photon to traverse the shield with no interaction is equal to the observed fraction that did not interact. Thus, the “best value” for the probability Pr (no) that the next photon will pass through without interacting is
Pr (no) = 724/1000 = 0.724. (1.1)
b. By the same token, the estimated probability Pr (yes) that the next photon will interact is
Pr (yes) = 276/1000 = 0.276, (1.2)
based on the observation that 276 out of 1000 interacted.

This example suggests several aspects of statistical theory that we shall see often throughout this book. The sum of the probabilities for all possible outcomes considered in an experiment must add up to unity. Since only two possible alternatives were regarded in the example – either a photon interacted in the shield or it did not – we had Pr (no)+Pr (yes)=1. We might have considered further whether an interaction was photoelectric absorption, Compton scattering, or pair production. We could assign separate probabilities for these processes and then ask, for example, what the probability is for the next interacting photon to be Compton scattered in the shield. In general, whatever number and variety of possible outcomes we wish to consider, the sum of their probabilities must be unity. This condition thus requires that there be some outcome for each incident photon.
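
To make this bookkeeping concrete, the short Python sketch below (added here as an illustration; the split among interaction types is hypothetical, not data from the example) decomposes Pr (yes) into sub-process probabilities and verifies that the outcomes still sum to unity.

pr_no, pr_yes = 0.724, 0.276                 # from the example above
assert abs((pr_no + pr_yes) - 1.0) < 1e-9    # outcomes are exhaustive

# Hypothetical split of the interactions (illustrative values only):
frac_photo, frac_compton, frac_pair = 0.50, 0.45, 0.05

# Probability that the next incident photon Compton-scatters in the shield:
pr_compton = pr_yes * frac_compton
total = pr_no + pr_yes * (frac_photo + frac_compton + frac_pair)
print(pr_compton, total)                     # total is again 1.0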

It is evident, too, that a larger data sample will generally enable more reliable statistical predictions to be made. Knowing the fate of 1000 photons in the example gives more confidence in assigning values to the probabilities Pr (no) and Pr (yes) than would knowing the fate of, say, only 10 photons. Having data for 10⁸ photons would be even more informative.

Indeed, the general question arises, “How can one ever know the actual, true numerical values for many of the statistical quantities that we must deal with?” Using appropriate samples and protocols that we shall develop later, one can often obtain rather precise values, but always within well-defined statistical limitations. A typical result expresses a “best” numerical value that lies within a given range with a specified degree of confidence. For instance, from the data given in the example above, we can express the “measured” probability of no interaction as

Pr (no) = 0.724 ± 0.053. (1.3)

(The stated uncertainty, ±0.053, is ±1.96 standard deviations from an estimated mean of 0.724, based on the single experiment, as we shall discuss later in connection with the normal distribution.) Given the result (1.3), there is still no guarantee that the “true” value is actually in the stated range. Many such probabilities can also be accurately calculated from first principles by using quantum mechanics. In all known instances, the theoretical results are in agreement with measurements. Confirmation by observation is, of course, the final criterion for establishing the validity of the properties we ascribe to nature.
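
The interval in Eq. (1.3) is easy to reproduce. The following Python sketch (an added illustration, not part of the original text) treats the observed count of 724 as a count whose standard deviation is estimated by its square root, as described above, and forms the 1.96-standard-deviation interval.

import math

n = 1000                  # incident photons
k = 724                   # observed to pass through without interacting

p_hat = k / n             # estimated Pr (no)
sigma = math.sqrt(k) / n  # square-root-of-count estimate of the std dev
print(f"Pr(no) = {p_hat:.3f} +/- {1.96 * sigma:.3f}")   # 0.724 +/- 0.053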

1.3 Semiclassical Atomic Theory

Following the unexpected discovery of X-rays by Roentgen in 1895, a whole series of new findings ushered in the rapidly developing field of atomic and radiation physics. Over the span of the next two decades, it became increasingly clear that classical science did not give a correct picture of the world as new physics unfolded. Becquerel discovered radioactivity in 1896, and Thomson measured the charge-to-mass ratio of the electron in 1897. Millikan succeeded in precisely measuring the charge of the electron in 1909. By 1910, a number of radioactive materials had been investigated, and the existence of isotopes and the transmutation of elements by radioactive decay were recognized. In 1911, Rutherford discovered the atomic nucleus – a small, massive dot at the center of the atom, containing all of the positive charge of the neutral atom and virtually all of its mass. The interpretation of his experiments on alpha-particle scattering from thin layers of gold pointed to a planetary structure for an atom, akin to a miniature solar system. The atom was pictured as consisting of a number of negatively charged electrons traversing most of its volume in rapid orbital motion about a tiny, massive, positively charged nucleus.

The advance made with the discovery of the nuclear atom posed another quandary for classical physics. The same successful classical theory (Maxwell's equations) that predicted many phenomena, including the existence of electromagnetic radiation, required the emission of energy by an accelerated electric charge. An electron in orbit about a nucleus should thus radiate energy and quickly spiral into the nucleus. The nuclear atom could not be stable. To circumvent this dilemma, Bohr in 1913 proposed a new, semiclassical nuclear model for the hydrogen atom. The single electron in this system moved in classical fashion about the nucleus (a proton). However, in nonclassical fashion Bohr postulated that the electron could occupy only certain circular orbits in which its angular momentum about the nucleus was quantized. (The quantum condition specified that the angular momentum was an integral multiple of Planck's constant divided by 2π.) In place of the continuum of unstable orbits allowed by classical mechanics, the possible orbits for the electron in Bohr's model were discrete. Bohr further postulated that the electron emitted radiation only when it went from one orbit to another of lower energy, closer to the nucleus. The radiation was then emitted in the form of a photon, having an energy equal to the difference in the energy the electron had in the two orbits. The atom could absorb a photon of the same energy when the electron made the reverse transition between orbits. These criteria for the emission and absorption of atomic radiation replaced the classical ideas. They also implied the recognized fact that the chemical elements emit and absorb radiation at the same wavelengths and that different elements would have their own individual, discrete, characteristic spectra. Bohr's theory for the hydrogen atom accounted in essential detail for the observed optical spectrum of this element. When applied to other atomic systems, however, the extension of Bohr's ideas often led to incorrect results.

An intensive period of semiclassical physics then followed into the 1920s. The structure and motion of atomic systems was first described by the equations of motion of classical physics, and then quantum conditions were superimposed, as Bohr had done for hydrogen. The quantized character of many variables, such as energy and angular momentum, previously assumed to be continuous, became increasingly evident experimentally.

Furthermore, nature showed a puzzling wave–particle duality in its fundamental makeup. Electromagnetic radiation, originally conceived as a purely wave phenomenon, exhibited properties of both waves and particles. The diffraction and interference of X-rays were demonstrated experimentally by von Laue in 1912, establishing their wave character. Einstein's explanation of the photoelectric effect in 1905 described electromagnetic radiation of frequency ν as consisting of packets, or photons, having energy E=hν. The massless photon carries an amount of momentum that is given by the relation

p = E/c = hν/c, (1.4)

where c=2.9979×10⁸ m s−1 is the speed of light in a vacuum. This particle-like property of momentum is exhibited experimentally, for example, by the Compton scattering of photons from electrons (1922). The wavelength λ of the radiation is given by λ=c/ν. It follows from Eq. (1.4) that the relationship between the wavelength and momentum of a photon is given by

λ = h/p. (1.5)

In 1924, de Broglie postulated that this relationship applies not only to photons, but also to other fundamental atomic particles. Electron diffraction was demonstrated experimentally by Davisson and Germer in 1927, with the electron wavelength being correctly given by Eq. (1.5). (Electron microscopes have much shorter wavelengths and hence much greater resolving power than their optical counterparts.)
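
Equations (1.4) and (1.5) are convenient to exercise numerically. The Python sketch below is an added illustration; the photon and electron energies are arbitrary example values, and the electron momentum is computed nonrelativistically from its kinetic energy.

import math

h = 6.6261e-34    # Planck's constant (J s)
c = 2.9979e8      # speed of light in vacuum (m/s)
m_e = 9.1094e-31  # electron mass (kg)
eV = 1.602e-19    # joules per electron volt

# Photon of energy E: lambda = c/nu = h*c/E, combining E = h*nu with Eq. (1.5)
E = 3.1 * eV
print(f"3.1-eV photon: lambda = {h * c / E * 1e9:.0f} nm")   # ~400 nm (violet)

# Electron of kinetic energy T: p = sqrt(2*m*T), then Eq. (1.5) gives lambda
T = 100 * eV
p = math.sqrt(2 * m_e * T)
print(f"100-eV electron: lambda = {h / p * 1e10:.2f} angstroms")  # ~1.2 A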

There was no classical analogue to these revolutionary quantization rules and the wave–particle duality thus introduced into physics. Yet they appeared to work. The semiclassical procedures had some notable successes, but they also led to some unequivocally wrong predictions for other systems. There seemed to be elements of truth in quantizing atomic properties, but nature's secrets remained hidden in the early 1920s.

1.4 Quantum Mechanics and the Uncertainty Principle

Heisenberg reasoned that the root of the difficulties might lie in the use of nonobservable quantities to describe atomic constituents – attributes that the constituents might not even possess. Only those properties should be ascribed to an object that have an operational definition through an experiment that can be carried out to observe or measure them. What does it mean, for example, to ask whether an electron is blue or red, or even to ask whether an electron has a color? Such questions must be capable of being answered by experiment, at least in principle, or else they have no meaning in physics. Using only observable atomic quantities, such as those associated with the frequencies of the radiation emitted by an atom, Heisenberg in 1925 developed a new, matrix theory of quantum mechanics. At almost the same time, Schrödinger formulated his wave equation from an entirely different standpoint. He soon was able to show that his formulation and Heisenberg's were completely equivalent. The new quantum mechanics was born.

In the Newtonian mechanics employed by Bohr and others in the semiclassical theories, it was assumed that an atomic electron possesses a definite position and velocity at every instant of time. Heisenberg's reasoning required that, in order to have any meaning or validity, the very concept of the “position and velocity of the electron” should be defined operationally by means of an experiment that would determine it. He showed that the act of measuring the position of an electron ever more precisely would, in fact, make the simultaneous determination of its momentum (and hence velocity) more and more uncertain. In principle, the position of an electron could be observed experimentally by scattering a photon from it. The measured position would then be localized to within a distance comparable to the wavelength of the photon used, which limits its spatial resolving power. The scattered photon would, in turn, impart momentum to the electron being observed. Because of the finite aperture of any apparatus used to detect the scattered photon, its direction of scatter and hence its effect on the struck electron's momentum would not be known exactly. To measure the position of the electron precisely, one would need to use photons of very short wavelength. These, however, would have large energy and momentum, and the act of scattering would be coupled with large uncertainty in the simultaneous knowledge of the electron's momentum. Heisenberg showed that the product of the uncertainties in the position Δx in any direction in space and the component of momentum Δpx in that direction must be at least as large as Planck's constant divided by 2π (=h/2π=1.0546×10−34 J s):

Δx Δpx ≥ ħ. (1.6)

It is thus impossible to assign both position and momentum simultaneously with unlimited precision. (The equality applies only under optimum conditions.) The inequality (1.6) expresses one form of Heisenberg's uncertainty principle. A similar relationship exists between certain other pairs of variables, such as energy E and time t:

ΔE Δt ≥ ħ. (1.7)

The energy of a system cannot be determined with unlimited precision within a short interval of time.

These limits imposed by the uncertainty principle are not due to any shortcomings in our measurement techniques. They simply reflect the way in which the act of observation itself limits simultaneous knowledge of certain pairs of variables. To speculate whether an electron “really does have” an exact position and velocity at every instant of time, although we cannot know them together, apparently has no operational meaning. As we shall see in an example below, the limits have no practical effect on massive objects, such as those experienced in everyday life. In contrast, however, on the atomic scale the limits reflect an essential need to define carefully and operationally the concepts that are to have meaning and validity.

The subsequent development of quantum mechanics has provided an enormously successful quantitative description of many phenomena: atomic and nuclear structure, radioactive decay, lasers, semiconductors, antimatter, electron diffraction, superconductivity, elementary particles, radiation emission and absorption, the covalent chemical bond, and many others. It has revealed the dual wave–particle nature of the constituents of matter. Photons, electrons, neutrons, protons, and other particles have characteristics of both particles and waves. Instead of having a definite position and velocity, they can be thought of as being “smeared out” in space, as reflected by the uncertainty principle. They can be described in quantum mechanics by wave packets related to a probability density for observing them in different locations. They have both momentum p and wavelength λ, which are connected by the de Broglie relation (1.5). Endowed with such characteristics, the particles exhibit diffraction and interference effects under proper experimental conditions. Many quantum-mechanical properties, essential for understanding atomic and radiation physics, simply have no classical analogue in the experience of everyday life.

Example
The electron in the hydrogen atom is localized to within about 1 Å, which is the size of the atom. Use the equality in the uncertainty relation to estimate the uncertainty in its momentum. Estimate the order of magnitude of the kinetic energy that the electron (mass=m=9.1094×10−31 kg) would have in keeping with this amount of localization in its position.
Solution
With Δx=1 Å=10−10 m in Eq. (1.6), we estimate that the uncertainty in the electron's momentum is¹
Δp ≅ ħ/Δx = (1.0546×10−34 J s)/(10−10 m) ≅ 1.05×10−24 kg m s−1. (1.8)
We assume that the electron's momentum is about the same order of magnitude as this uncertainty. Denoting the electron mass by m, we estimate for its kinetic energy
T ≅ (Δp)²/(2m) = (1.05×10−24 kg m s−1)²/(2×9.1094×10−31 kg) ≅ 6.1×10−19 J ≅ 3.8 eV, (1.9)
since 1 eV=1.60×10−19 J. An electron confined to the dimensions of the hydrogen atom would be expected to have a kinetic energy in the eV range. The mean kinetic energy of the electron in the ground state of the hydrogen atom is 13.6 eV.
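
The arithmetic of Eqs. (1.8) and (1.9) can be verified in a few lines. This added Python sketch simply repeats the estimate, taking the electron's momentum to be of the same order as its uncertainty.

hbar = 1.0546e-34   # h/(2*pi) (J s)
m_e = 9.1094e-31    # electron mass (kg)
eV = 1.602e-19      # joules per electron volt

dx = 1e-10               # localization to ~1 angstrom
dp = hbar / dx           # Eq. (1.8): equality in the uncertainty relation
T = dp**2 / (2 * m_e)    # Eq. (1.9): kinetic energy with p ~ dp
print(f"dp ~ {dp:.2e} kg m/s, T ~ {T / eV:.1f} eV")   # ~1.05e-24 kg m/s, ~3.8 eV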

The uncertainty principle requires that observing the position of a particle with increased precision entails increased uncertainty in the simultaneous knowledge of its momentum, or energy. Greater localization of a particle, therefore, is accompanied by greater likelihood that measurement of its energy will yield a large value. Conversely, if the energy is known with precision, then the particle must be “spread out” in space. Particles and photons can be described mathematically by quantum-mechanical wave packets, which, in place of classical determinism, provide probability distributions for the possible results of any measurement. These essential features of atomic physics are not manifested on the scale of familiar, everyday objects.

Example
How would the answers to the last example be affected if
a. the electron were localized to nuclear dimensions (Δx~10−15m) or
b. the electron mass were 100g?
Solution
a. With Δx~10−15 m, the equality in the uncertainty principle (1.6) gives, in place of Eq. (1.8), Δp≅10−19 kg m s−1, five orders of magnitude larger than before. The corresponding electron energy would be relativistic. Calculation shows that the energy of an electron localized to within 10−15 m would be about 200 MeV. (The numerical solution is given in Section 2.5 in Turner (2007), listed in the Bibliography at the end of this book.)
b. Since Δx is the same as before (10−10m), Δp in Eq. (1.8) is unchanged. With m=100g=0.1kg, the energy in place of Eq. (1.9) is now smaller by a factor of the mass ratio (9.11×10−31)/0.1≅10−29. For all practical purposes, with the resultant extremely small value of T, the uncertainty in the velocity is negligible. Whereas the actual electron, localized to such small dimensions, has a large uncertainty in its momentum, the “100-g electron” would appear to be stationary.

Quantum-mechanical effects are generally manifested to a lesser extent with relatively massive objects, as this example shows. By atomic standards, objects in the macroscopic world are massive and have very large momenta. Their de Broglie wavelengths, expressed by Eq. (1.5), are vanishingly small. The physical scale on which quantum mechanics becomes important is set by the small magnitude of Planck's constant.

1.5 Quantum Mechanics and Radioactive Decay

Before the discovery of the neutron by Chadwick in 1932, it was speculated that the atomic nucleus must be made up of the then known elementary subatomic particles: protons and electrons. However, according to quantum mechanics, this assumption leads to the wrong prediction of the angular momentum for certain nuclei. The nucleus of ⁶Li, for example, would consist of six protons and three electrons, representing nine particles of half-integral spin. By quantum rules for addition of the spins of an odd number of such particles, the resulting nuclear angular momentum for ⁶Li would also have to be a half-integral multiple of ħ. The measured value, however, is just ħ. A similar circumstance occurs for ¹⁴N. These two nuclei contain even numbers (6 and 14) of spin-1/2 particles (protons and neutrons), and hence must have integral angular momentum, as observed.

The existence of electrons in the nucleus would also have to be reconciled with the uncertainty principle. In part (a) of the last example, we saw that an electron confined to nuclear dimensions would have an expected kinetic energy in the range of 200MeV. There is no experimental evidence for such large electron energies associated with beta decay or other nuclear phenomena.

If the electron is not there initially, how is its ejection from the nucleus in beta decay to be accounted for? Quantum mechanics explains the emission of the beta particle through its creation, along with an antineutrino, at the moment of the decay. Both particles are then ejected from the nucleus, causing it to recoil (slightly, because the momentum of the ejected mass is small). The beta particle, the antineutrino, and the recoil nucleus share the released energy, which is equivalent to the loss of mass (E=mc2) that accompanies the radioactive transformation. Since the three participants can share this energy in a continuum of ways, beta particles emitted in radioactive decay have a continuous energy spectrum, which extends out to the total energy released. Similarly, gamma-ray or characteristic X-ray photons are not “present” in the nucleus or atom before emission. They are created when the quantum transition takes place. An alpha particle, on the other hand, is a tightly bound and stable structure of two protons and two neutrons within the nucleus. Alpha decay is treated quantum mechanically as the tunneling of the alpha particle through the confining nuclear barrier, a process that is energetically forbidden in classical mechanics. The emitted alpha particle and recoil nucleus, which separate back to back, share the released energy uniquely in inverse proportion to their masses. The resultant alpha-particle energy spectra are therefore discrete, in contrast to the continuous beta-particle spectra. The phenomenon of tunneling, which is utilized in a number of modern electronic and other applications, is purely quantum mechanical. It has no classical analogue (see Figure 1.1).

Figure 1.1 An early scanning tunneling microscope (left) is used to image the electron clouds of individual carbon atoms on the surface of a highly oriented pyrolytic graphite sample. As a whisker tip just above the surface scans it horizontally in two dimensions, electrons tunnel through a classically forbidden barrier to produce a current through the tip. This current is extremely sensitive to the separation between the tip and the surface. As the separation tends to change according to the surface contours during the scan, negative feedback keeps it constant by moving a micrometer vertically up or down. These actions are translated by computer into the surface picture shown on the right. (Courtesy of R.J. Warmack.)
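
The unique energy sharing in two-body alpha decay follows from conserving energy and momentum: with the parent at rest, the alpha particle and recoil nucleus carry equal and opposite momenta, so T = p²/2m divides the released energy Q in inverse proportion to the masses. The sketch below is an added illustration with arbitrary example values for Q and the parent mass number.

Q = 5.0           # energy released in the decay (MeV); example value
A_parent = 210    # parent mass number; example value
A_alpha = 4
A_recoil = A_parent - A_alpha

# Equal momenta => T_alpha/T_recoil = m_recoil/m_alpha, and they sum to Q:
T_alpha = Q * A_recoil / A_parent
T_recoil = Q * A_alpha / A_parent
print(f"T_alpha = {T_alpha:.2f} MeV, T_recoil = {T_recoil:.3f} MeV")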

The radioactive decay of atoms and the accompanying emission of radiation are thus described in detail by quantum mechanics. As far as is known, radioactive decay occurs spontaneously and randomly, without influence from external factors. The energy thus released derives from the conversion of mass into energy, in accordance with Einstein's celebrated equation, E=mc2.

Example
Each of 10 identical radioactive atoms is placed in a line of 10 separate counters, having 100% counting efficiency. The question is posed, “Which atom will decay first?” How can the question be answered, and how can the answer be verified?
Solution
Since the atoms are identical and decay is spontaneous, the most one can say is that it is equally likely for any of the atoms, 1 through 10, to decay first. The validity of this answer, like any other, is to be decided on an objective basis by suitable experiments or observations. To perform such an experiment, in principle a large number of identical sources of 10 atoms could be prepared and then observed to see how many times the first atom to decay in a source is atom 1, atom 2, and so on. One would find a distribution, giving the relative frequencies for each of the 10 atoms that decays first. Because the atoms are identical, the distribution would be expected to show random fluctuations and become relatively flatter with an increasing number of observations.
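
The experiment just outlined is also easy to simulate. In the added Python sketch below, the decay constant is arbitrary (identical atoms require only that it be the same for all ten); each trial draws an exponential decay time for every atom and records which atom decays first.

import random

random.seed(1)
n_sources = 100_000
first_decay_counts = [0] * 10     # how often atom i is the first to decay

for _ in range(n_sources):
    # Identical atoms: independent exponential decay times, same constant
    times = [random.expovariate(1.0) for _ in range(10)]
    first_decay_counts[times.index(min(times))] += 1

# Each entry fluctuates randomly about n_sources/10 = 10000:
print(first_decay_counts)
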
Example
A source consists of 20 identical radioactive atoms. Each has a 90% chance of decaying within the next 24h.
a. What is the probability that all 20 will decay in 24h?
b. What is the probability that none will decay in 24h?
Solution
a. The probability that atom 1 will decay in 24 h is 0.90. The probability that atoms 1 and 2 will both decay in 24 h is 0.90×0.90=(0.90)²=0.81. That is, if the experiment is repeated many times, atom 2 is also expected to decay in 90% of the cases in which atom 1 decays. By extension, the probability for all 20 atoms to decay in 24 h is
(0.90)²⁰ = 0.12. (1.10)
b. Since a given atom must either decay or not decay, the probability for not decaying in 24 h is 1−0.90=0.10. The probability that none of the 20 atoms will decay is
(0.10)²⁰ = 10⁻²⁰. (1.11)
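
Both probabilities are one-line computations, and the first can be checked by simulation. The following added sketch does both; the simulation size is an arbitrary choice.

import random

p, n = 0.90, 20
print(f"all decay:  {p**n:.2f}")        # Eq. (1.10): (0.90)^20 = 0.12
print(f"none decay: {(1 - p)**n:.0e}")  # Eq. (1.11): (0.10)^20 = 1e-20

random.seed(1)
trials = 100_000
hits = sum(all(random.random() < p for _ in range(n)) for _ in range(trials))
print(f"simulated 'all decay': {hits / trials:.3f}")   # close to 0.12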

As these examples illustrate, quantum mechanics does not generally predict a single, definite result for a single observation. It predicts, instead, a probability for each of all possible outcomes. Quantum mechanics thus brings into physics the idea of the essential randomness of nature. While it is the prevalent conceptual foundation of modern theory, as espoused by Bohr and others, this fundamental role of chance in our universe has not been universally accepted. Which atom will decay first? The contrasting viewpoint was held by Einstein, for example, summed up in the words, "God does not play dice."

Problems

1.1 The dimensions of angular momentum are those of momentum times distance. Show that Planck's constant, h=6.63×10−34 J s, has the units of angular momentum.

1.2 Einstein's famous equation, E=mc2, where c is the speed of light in a vacuum, gives the energy equivalence E of mass m. If m is expressed in kg and c in m s−1, show that E is given in J.

1.3 According to classical theory, how are electromagnetic waves generated?

1.4 Why would the Bohr model of the atom be unstable, according to classical physics?

1.5 Calculate the wavelength of a 2.50-eV photon of visible light.

1.6 Calculate the wavelength of an electron having an energy of 250eV.

1.7 What is the wavelength of a 1-MeV gamma ray?

1.8 If a neutron and an alpha particle have the same speed, how do their wavelengths compare?

1.9 If a neutron and an alpha particle have the same wavelength, how do their energies compare?

1.10 If a proton and an electron have the same wavelength, how do their momenta compare?

1.11 An electron moves freely along the X-axis. According to Eq. (1.6), if the uncertainty in its position in this direction is reduced by a factor of 2, how is the minimum uncertainty in its momentum in this direction affected?

1.12 Why is the uncertainty principle, so essential for understanding atomic physics, of no practical consequence for hitting a baseball?

1.13 Decay of the nuclide ²²⁶Ra to the ground state of ²²²Rn by emission of an alpha particle releases 4.88 MeV of energy.

a. What fraction of the total mass available is thus converted into energy? (1 atomic mass unit=931.49MeV.)
b. What is the initial energy of the ejected alpha particle?

1.14 Two conservation laws must be satisfied whenever a radioactive atom decays. As a result of these two conditions, the energies of the alpha particle and the recoil nucleus are uniquely determined in the two-body disintegration by alpha-particle emission. These two laws are also satisfied in beta decay, but do not suffice to determine uniquely the energy of any of the three decay products. What are these two laws, which thus require alpha-particle energies to be discrete and beta-particle energies to be continuous?

1.15 The fission of ²³⁵U following capture of a thermal neutron releases an average of 195 MeV. What fraction of the total mass available (neutron plus uranium atom) is thus converted into energy? (1 atomic mass unit=931.49 MeV.)

1.16 Five gamma rays are incident on a concrete slab. Each has a 95% chance of penetrating the slab without experiencing an interaction.

a. What is the probability that the first three photons pass through the slab without interacting?
b. What is the probability that all five get through without interacting?

1.17

a. In the last problem, what is the probability that photons 1, 2, and 3 penetrate the slab without interacting, while photons 4 and 5 do not?
b. What is the probability that any three of the photons penetrate without interaction, while the other two do not?

1.18 Each photon in the last two problems has a binary fate – it either interacts in the slab or else goes through without interaction. A more detailed fate can be considered: of the photons that interact, 2/3 do so by photoelectric absorption and 1/3 by Compton scattering.

a. What is the probability that an incident photon undergoes Compton scattering in the slab?
b. What is the probability that it undergoes photoelectric absorption?
c. What is the probability that an incident photon is not photoelectrically absorbed in the slab?

1.19 An atom of ⁴²K (half-life=12.4 h) has a probability of 0.894 of surviving 2 h. For a source that consists of five atoms,

a. what is the probability that all five will decay in 2h and
b. what is the probability that none of the five atoms will decay in 2h?

1.20 What are the answers to (a) and (b) of the last problem for a source of 100 atoms?

1.21 Monoenergetic neutrons are normally incident on a pair of slabs, arranged back to back, as shown in Figure 1.2. A neutron either is absorbed in a slab or else goes through without interacting. The probability that a neutron gets through slab 1 is 1/3. If a neutron penetrates slab 1, then the probability that it gets through slab 2 is 1/4. What is the probability that a neutron, incident on the pair of slabs, will

a. traverse both slabs?
b. be absorbed in slab 1?
c. not be absorbed in slab 2?

Figure 1.2 Neutrons normally incident on a pair of slabs, 1 and 2. See Problems 1.21–1.24.

1.22 If, in Figure 1.2, a neutron is normally incident from the right on slab 2, then what is the probability that it will

a. be absorbed in slab 1?
b. not be absorbed in slab 2?

1.23 For the conditions of Problem 1.21, calculate the probability that a neutron, normally incident from the left, will

a. not traverse both slabs,
b. not be absorbed in slab 1, and
c. be absorbed in slab 2.

1.24 What is the relationship among the three answers to the last problem and the corresponding answers to Problem 1.21?

Note

1. Energy, which has the dimensions of force × distance, has units 1 J=1 N m. The newton of force has the same units as mass × acceleration: 1 N=1 kg m s−2. Therefore, 1 J s m−1=1 kg m s−1, which are the units of momentum (mass × velocity).

Chapter 2

Radioactive Decay

2.1 Scope of Chapter

This chapter deals with the random nature of radioactive decay. We begin by considering the following experiment. One prepares a source of a pure radionuclide and measures the number of disintegrations that occur during a fixed length of time t immediately thereafter. The procedure is then repeated over and over, exactly as before, with a large number of sources that are initially identical. The number of disintegrations that occur in the same fixed time t from the different sources will show a distribution of values, reflecting the random nature of the decay process. The objective of the experiment is to measure the statistical distribution of this number.

Poisson and normal statistics are often used to describe the distribution. However, as we shall see, this description is only an approximation, though often a very good one. The actual number of decays is described rigorously by another distribution, called the binomial.¹ In many applications in health physics, the binomial, Poisson, and normal statistics yield virtually indistinguishable results. Since the last two are usually more convenient to deal with mathematically, it is often a great advantage to employ one of them in place of the exact binomial formalism. This cannot always be done without large error, however, and one must then resort to the rigorous, but usually more cumbersome, binomial distribution. In Chapters 5 and 6, we shall address the conditions under which the use of one or another of the three distributions is justified.
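
As a preview of Section 2.5, the repeated experiment described above can be simulated directly. In this added Python sketch (the source size, decay constant, and counting time are arbitrary choices), each atom independently decays during the counting time t with probability p = 1 − e^(−λt), so the number of disintegrations per source is binomial by construction.

import math
import random

random.seed(1)
n_atoms = 100               # atoms initially in each source
lam = 0.01                  # decay constant (s^-1)
t = 10.0                    # counting time (s)
p = 1 - math.exp(-lam * t)  # probability a given atom decays during t

def disintegrations():
    # Number of decays from one source in time t
    return sum(random.random() < p for _ in range(n_atoms))

sample = [disintegrations() for _ in range(10_000)]
mean = sum(sample) / len(sample)
print(f"p = {p:.4f}   mean decays = {mean:.2f}   n*p = {n_atoms * p:.2f}")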

Continue reading in the full edition!