Assessment in Student Affairs

John H. Schuh


A practical, comprehensive manual for assessment design and implementation.

Assessment in Student Affairs, Second Edition offers a contemporary look at the foundational elements and practical application of assessment in student affairs. Higher education administration is increasingly called upon to demonstrate organizational effectiveness and to engage in continuous improvement based on information generated through systematic inquiry. This book provides a thorough primer on all stages of the assessment process. From planning to reporting and beyond, you'll find valuable assessment strategies to help you produce meaningful information and improve your program.

Combining and updating the thoroughness and practicality of Assessment in Student Affairs and Assessment Practice in Student Affairs, this new edition covers the design of assessment projects, ethical practice, student learning outcomes, data collection and analysis methods, report writing, and strategies for implementing change based on assessment results. Case studies demonstrate real-world application to help you clearly see how these ideas are used effectively every day, and end-of-chapter discussion questions stimulate deeper investigation and further thinking about the ideas discussed. The instructor resources will help you seamlessly integrate this new resource into existing graduate-level courses.

Student affairs administrators understand the importance of assessment, but many can benefit from additional direction when it comes to designing and implementing evaluations that produce truly useful information. This book provides field-tested approaches to assessment, giving you a comprehensive how-to manual for demonstrating, and improving, the work you do every day.
* Build your own assessment to demonstrate organizational effectiveness
* Utilize quantitative and qualitative techniques and data
* Identify metrics and methods for measuring student learning
* Report and implement assessment findings effectively

Accountability and effectiveness are the hallmarks of higher education administration today, and they are becoming the metrics by which programs and services are evaluated. Strong assessment skills have never been more important. Assessment in Student Affairs gives you the knowledge base and skill set you need to shine a spotlight on what you and your organization are able to achieve.


Number of pages: 565

Table of Contents

Title Page



General Purpose of the Book

Intended Audiences

How to Use the Book


About the Authors

Chapter 1: Understanding the Contemporary Assessment Environment

Defining Assessment, Evaluation, and Research

Reasons for Assessment

Selected Historical Documents Related to Assessment Practice in Student Affairs

Assessment in Contemporary Student Affairs Practice

The Politics of Assessment

Discussion Questions


Chapter 2: Designing and Planning an Assessment Project

Principles of Good Practice in Assessment

Developing an Assessment Plan

Questions That Guide the Assessment Process

Discussion Questions


Chapter 3: Framing Assessment with the Highest Ethical Standards

Definition and Use of Ethics in Assessment

A Historical Overview of Research Ethics

What Is Considered Research?

Basic Ethical Principles

Informed Consent

The Project Information Sheet

Risk Considerations

Disseminating the Results

Final Considerations

Discussion Questions


Chapter 4: Measuring Individual Student Learning and Growth

Shift from Inputs to Outcomes

Developing Intended Learning Outcomes

Measuring Learning Outcomes

Final Considerations

Discussion Questions


Chapter 5: Program Outcomes and Program Review

Developing and Measuring Program-Level Outcomes

Program Review

Final Considerations

Discussion Questions


Chapter 6: Facilitating Data Collection and Management

Definition and Use of Data Collection and Data Management

Data Collection

Choosing a Method

Sampling Strategies

Accessing Sources and Using Existing Data

Managing Assessment Data

Working with Corporate Vendors

Discussion Questions


Chapter 7: Using Qualitative Techniques in Conducting Assessments

Definition and Use of Qualitative Techniques

Selecting Qualitative Techniques: A Forms Approach

Using the Forms Approach

Analyzing Qualitative Data

General Qualitative Data Coding

Other Coding Systems

Interview Data Analysis

Observation Data Analysis

Review Data Analysis

Your Qualitative Skillset

Discussion Questions


Chapter 8: Using Quantitative Techniques in Conducting Assessments

Definition and Use of Quantitative Techniques

Selecting Quantitative Techniques: A Keywords Approach

Using the Keywords Approach

Analyzing Quantitative Data

Describe Statistics

Differ Statistics

Relate Statistics

Predict Statistics

Your Quantitative Skillset

Discussion Questions


Chapter 9: Developing and Selecting Instruments

Definition and Use of Instrumentation

Developing Instruments

Guidelines for Developing Instruments

Assuring Quality

Administering the Instrument

Discussion Questions


Chapter 10: Assessing Student Campus Environments

Conceptualizing and Theorizing Campus Environments

The Purpose of Assessing Campus Environments

Assessing Campus Environments

Closing Thoughts about Assessing Campus Environments

Discussion Questions


Chapter 11: Assessing Quality through Comparisons

Assessment Is Grounded in Comparison

Approaches to Assessing Quality

Final Considerations

Discussion Questions


Chapter 12: Getting Assessment Projects Started and Ensuring Sustainability

Starting Assessment Projects

Common Barriers to Understanding Assessment and Strategies for Success

Strategies for Overcoming Common Obstacles

Leadership for Assessment

Starting and Sustaining Assessment Projects

Discussion Questions


Chapter 13: Reporting Assessment Results and Bringing about Change

First-Year Student Interest Groups at Mid-North College

Reporting and Sharing Assessment Results

Using Results to Take Action to Improve

Discussion Questions


Chapter 14: Developing a Culture of Assessment

What Is a Culture of Evidence?

Developing a Culture of Assessment in Student Affairs

Strategies to Develop a Culture of Assessment

Discussion Questions


Chapter 15: Taking a Look at Assessment in the Future: A Look into Our Crystal Ball

Definition and Purposes of Assessment

Assessment Methods

Reporting Results

Ethical Issues


Culture of Assessment

Discussion Questions


Appendix: Designing and Implementing an Assessment Project



End User License Agreement



List of Tables

Chapter 8: Using Quantitative Techniques in Conducting Assessments

Table 8.1 Selecting the Correct Assessment Statistic

Chapter 13: Reporting Assessment Results and Bringing about Change

Table 13.1 Two Paradigms of Assessment

Assessment in Student Affairs

Second Edition

John H. Schuh, J. Patrick Biddix, Laura A. Dean, Jillian Kinzie




Copyright © 2016 by John Wiley & Sons, Inc. All rights reserved.

Published by Jossey-Bass

A Wiley Brand

One Montgomery Street, Suite 1000, San Francisco, CA 94104-4594

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the publisher or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400, fax 978-646-8600, or on the Web. Requests to the publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, 201-748-6011, fax 201-748-6008, or online.

Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages. Readers should be aware that Internet Web sites offered as citations and/or sources for further information may have changed or disappeared between the time this was written and when it is read.

Jossey-Bass books and products are available through most bookstores. To contact Jossey-Bass directly call our Customer Care Department within the U.S. at 800-956-7739, outside the U.S. at 317-572-3986, or fax 317-572-4002.

Wiley publishes in a variety of print and electronic formats and by print-on-demand. Some material included with standard print versions of this book may not be included in e-books or in print-on-demand. If this book refers to media such as a CD or DVD that is not included in the version you purchased, you may download this material from the publisher. For more information about Wiley products, visit the Wiley website.

Library of Congress Cataloging-in-Publication Data is available

ISBN 9781119049609 (Hardcover)

ISBN 9781119051084 (ePDF)

ISBN 9781119051169 (ePub)

Cover design by Wiley

Cover image: ©Studio-Pro/iStockphoto



Another book on assessment in student affairs? Hasn't this topic been covered in its entirety? These are legitimate questions that any prospective reader could raise about another book being published on assessment in student affairs. In fact, we, as authors, have written numerous pieces on assessment in student affairs that in many respects trace the development of the art and science of assessment in student affairs over the past 25 years. While it is very difficult to identify a contemporary book on student affairs practice that does not address topics related to assessment, such has not always been the case, as we will affirm in this volume.

Assessment in student affairs has evolved over the past several decades to the point where we believe it is in the process of being institutionalized as a part of student affairs practice. But assessment still is not as common as we think it should be, and we hope that over the next couple of years it will become part of routine student affairs practice, similar to staff selection in residence life, for example. Moreover, as we point out in Chapter 1, assessment, evaluation, and research were not differentiated clearly in the student affairs literature until fairly recently (see Suskie, 2009); although these activities have clearly different purposes, they were used synonymously for years, incorrectly in our opinion. This book, then, seeks to sharpen the definition of assessment and to provide a myriad of examples of assessment projects.

Not only has assessment been differentiated from evaluation and research over the past several decades, but the purposes of assessment have also become sharper. We can thank Ewell (2009) for his elegantly written piece discussing the tensions between the accountability and improvement dimensions of assessment. Virtually all assessment projects fit into one of these two categories, and in some cases a project fits into both. Contemporary thinking about the purposes of assessment is thus more streamlined, framing projects in terms of one purpose, the other, or both.

The technical side of conducting assessments has evolved considerably over the years, and while predicting where technological improvements will take us is difficult, we are confident that new technologies will continue to be developed. Our experience takes us back to, in effect, distributing questionnaires by hand, collecting them, copying responses onto coding sheets, transferring the data to Hollerith cards, having them analyzed by mainframe computers, and making sense of the results, again by hand. This approach, while effective in its day, was incredibly time consuming; it is the assessment equivalent of making long distance telephone calls by dialing "0" and asking the operator to place the call. In this book we offer what we think are contemporary approaches to the practice of assessment using current, widely available technology.

In this volume we also look at other aspects of assessment that are evolving continuously, such as protecting the rights of participants, developing a culture of assessment in student affairs, differentiating between individual and group assessment projects, identifying strategies for getting assessment projects started, and reporting results. This book is much more than an update of the first edition. Rather, in identifying the concepts for this book, we started from scratch and identified those topics that in our opinion will be of great practical value for our readers. In short, we have overhauled the topic of assessment and developed what we trust our readers will find is a fresh look at very important topics covered under the umbrella of assessment in student affairs.

Finally, we thought it would make sense for a book to be written with the graduate student and young professional in mind. Curricula in student affairs preparation programs, in our view, ought to have a course on assessment as the CAS standards indicate. Our view is that this volume could be used as the primary textbook for such a course. With that in mind the reader will find case studies, scenarios, and discussion questions throughout the book.

General Purpose of the Book

As the previous paragraph suggests, this book has been designed as a text for a course on assessment in student affairs. We also think that it can be used for staff development purposes for student affairs educators and others who are interested in assessing the potency of services, programs, and learning experiences as related to student growth and development. All of the authors have offered courses, workshops, and seminars and have consulted on assessment in student affairs. The existing literature is valuable in providing important information about assessment for graduate students preparing for careers as student affairs educators, but we also believe that a book especially tailored for a graduate course or staff development has great utility. With that in mind, we have developed this book to provide a primer on topics that form a foundation for students preparing for careers as student affairs educators and for staff who seek more information to inform their practice as they work with college students.

Intended Audiences

Our primary audience for this book consists of those who are preparing for careers as student affairs educators or those who work with college students and seek more information about the extent to which students learn and grow from the experiences they offer. We trust that graduate students will have a course on assessment in student affairs in their curriculum. Accordingly, this book can be used as the primary text for such a course. If an assessment course is not part of the prescribed curriculum, this book can be used for an independent study of assessment in student affairs. Because of the cases and discussion questions, our opinion is that the book will lend itself well to a formal course or an independent study.

We also realize that a number of people come to careers in ways other than through student affairs graduate programs. Included in this career path, for example, are academic advisors, career counselors, recreation coordinators, and so on. Though their career path may be outside that of those with graduate degrees in student affairs education, they are not absolved from assessment activities. Consequently, they could choose to use this book to provide a foundation for their work in assessment as student affairs educators through staff development activities offered by their institution or through independent reading on their own.

Still others enter student affairs education by switching from faculty roles or perhaps from outside higher education. They, too, have an obligation to participate and perhaps lead an ongoing program of assessment in student affairs. Included in this set of professionals are those who lead learning centers, coordinate retention programs, or serve in executive roles with student affairs being included in their portfolios. This book can provide them with background information as they work with staff in developing assessment projects.

Beyond staff in student affairs, others may have an interest in issues related to accountability and improvement in higher education. Those who serve on regional accrediting committees or governing boards, or who advise legislators or other policy groups, might find this volume useful. Our view is that oversight of higher education will continue to tighten in the future, and this volume can give those providing such oversight a foundation for the questions they should be asking.

How to Use the Book

This book does not have to be read in a linear fashion, starting with Chapter 1, following with Chapters 2, 3, and so on, and finishing with the last chapter, but we think that might be the best approach. Our assumption is that most readers will not know a great deal about assessment, and may not even have thought much about it before beginning to read this book. The chapters are designed to build on one another, with information from the early chapters providing a foundation for those that follow. So, our recommendation is that readers start with the first chapter and then proceed through the rest of the book sequentially.

This book, however, is not designed to be read in isolation. That is, we have provided two features intended to serve as a basis for group discussions and perhaps group projects. One is a case study or scenario in each chapter that can be used to apply and illustrate its elements. The other is a set of questions at the end of each chapter designed to stimulate discussion about its contents. The questions can prompt in-class discussion, support small-group work outside of class, or guide reflection and contemplation by those reading the book outside a group exercise. Most important, the cases and discussion questions are designed to make active learning very much a part of the experience of those who read the book.

If the book is used for staff development in a student affairs unit, the same principles apply. That is, the cases can be used to illustrate the content of the chapter and the questions at the end of the chapter are designed to stimulate discussion and further thinking about the topic of the chapter.

We provide foundational information about assessment in Chapter 1 by differentiating between assessment, evaluation, and research. We also look at the reasons why student affairs educators should be concerned about assessment, using Ewell's (2009) primary purposes of assessment (accountability and improvement) as our taxonomy. Then we move into a review of the development of assessment through a review of selected source materials, beginning with the Student Personnel Point of View that was published in 1937 (National Association of Student Personnel Administrators, 1989). Finally, we identify the role of assessment in contemporary student affairs education.

Chapter 2 takes an expansive view of planning for assessment. It begins with identifying principles of good practice in assessment, followed by suggestions about how to develop an assessment plan. We look at questions to be used to guide the assessment process and then identify questions for discussion about developing an assessment. The chapter emphasizes the importance of how to develop a problem statement and identify the purpose of an assessment project.

We think it is imperative that assessment projects be conducted with the highest ethical standards. That is, the rights of participants in assessment projects should be of paramount importance at all times. Accordingly, we have positioned our discussion of ethics early in this volume. Our view is that all studies must be conducted with the highest ethical standards and, as a consequence, we think our readers should have a firm understanding of their ethical obligations before embarking on any assessment project. Chapter 3 introduces basic principles of ethical research, establishes the need for ethical practice in assessment, describes how to apply ethical standards to assessment projects, and provides recommendations for ensuring that the highest ethical standards are met in assessment work. It also describes the importance of the relationship that those who conduct assessments need to establish with their campus's institutional review board.

The shift in emphasis from measuring organizational inputs to measuring student learning outcomes is an important element of Chapter 4. We believe this emphasis reflects an important change in the values of higher education. For example, accrediting agencies are interested in how institutions measure student learning and how they add value to the student experience (see, for example, Standard Five of the Commission on Institutions of Higher Education of the New England Association of Schools and Colleges). Individual learning outcomes and measuring student growth are the focus of the chapter. Learning and development outcomes frameworks are presented and discussed, and strategies for developing learning outcomes are provided. The chapter concludes with a discussion of how to measure learning outcomes.

Whereas Chapter 4 is designed to provide basic information about measuring individual student learning, Chapter 5 looks at students in the aggregate by discussing program assessment and review. The case study for the chapter is a carryover from Chapter 4, but reflects a different emphasis. A substantial part of the chapter is devoted to program review, including developing a framework for program review and describing its elements.

Moving into the technical aspects of assessment, Chapter 6 addresses issues related to data collection and management. Implicit in data collection are determining an appropriate sample size, deciding on a data collection strategy, and managing the data. We point out that existing data can be used in assessment projects and can accelerate the assessment process.

Chapter 7 explores various aspects of qualitative assessment. We assert that the methodological approach needs to be tied to the goals of the assessment project. A discussion of primary forms of qualitative data collection is presented, including interviewing, observing, and reviewing. Then, the chapter explores data analysis and provides questions for further discussion about qualitative techniques.

Using quantitative techniques in conducting student affairs assessment is the focus of Chapter 8. Four keywords are introduced to assist in selecting a quantitative technique: describe, differ, relate, and predict. Quantitative data analysis also is discussed in the chapter, with special emphasis on selecting an appropriate statistical technique for a quantitative project.

In Chapter 9 we provide a discussion of instrument development and selection. This is a key component of the assessment process, since the use of an instrument that is inappropriate for the assessment will result in a study that will not accomplish the objectives of the project. Similarly, the development of a flawed instrument will result in a study in which the consumers of the project cannot have confidence. The chapter places special emphasis on developing instruments of high quality and also provides strategies for administering instruments.

Assessing student environments is the topic of Chapter 10. We assert that the campus environment has a significant influence on student experiences and as a consequence we believe that environmental assessment projects should be of particular interest to student affairs educators and others on campus. The chapter introduces foundational theories about campus environments, discusses the purposes of assessing campus environments, and then describes various approaches to assessing campus environments. It provides an important differentiation between assessing campus culture and campus climate.

Chapter 11 examines how institutions can compare themselves with peers using external data, standards, and frameworks. It includes a discussion of assessing quality through institutional comparisons and presents several forms of benchmarking, such as external peer benchmarking, best practices benchmarking, capability benchmarking, and productivity benchmarking.

In Chapter 12 we explore a chronic problem found all too often in student affairs divisions: starting and sustaining assessment projects. Many challenges, some real and some perceived, exist to undertaking assessment projects. At times assessments are required, such as when an institution is facing its periodic regional accreditation. But otherwise, how can student affairs divisions weave assessment into their annual work routine? This chapter provides strategies for getting assessment projects started, identifies the role of the assessment coordinator in a student affairs division, pinpoints barriers to undertaking assessment in student affairs, and provides recommendations for sustaining assessment projects over time.

Chapter 13 asserts that once an assessment has been completed, the results need to be shared with stakeholders, and the findings may indicate that changes should be made. Because our concern is that assessment findings too often are reported in ways that are uninteresting to stakeholders, we provide suggestions for crafting reports that are informative and attractive, and for using results to take action, in effect applying them to student affairs practice.

Making assessment part of an organization's routine is central to developing a culture of assessment as is recommended in Chapter 14. We define what a culture of evidence is in this chapter, identify points of resistance to developing assessment projects, and conclude with strategies designed to develop an assessment culture.

Chapter 15 features our speculation about the future of assessment. Included are our hunches about the continued sharpening of the definition of assessment, our view about the future of assessment methods, and some thoughts about reporting results and our ongoing concerns about protecting those who agree to participate in assessment projects.

The reader will find a number of internal references throughout this volume. That is, there will be suggestions in one chapter to refer to other chapters to develop a more complete understanding of a particular topic. We hope this approach will assist the reader in understanding how assessment topics inform each other. For example, some of the same strategies that can be used by staff to assist them in sustaining assessment projects over time are aspects of developing a culture of assessment. Or, the development or selection of an appropriate survey instrument is a necessary feature of undertaking a quantitative assessment.

We have provided an Appendix for instructors and graduate students that provides advice on undertaking assessment projects as part of a graduate course on assessment in student affairs. The appendix provides a 12-week framework for completing an assessment project. We believe that actual practice will help readers of this book sharpen their assessment skills and understand some of the challenges and rewards that are associated with assessment in student affairs.

We trust that this volume will provide a fresh, contemporary look at assessment in student affairs, a central element of student affairs practice. We anticipate that the readers of this volume will be able to undertake assessment projects as part of their student affairs practice. We look forward to hearing from them as they contribute to the learning and development of the college students with whom they work.


References

Ewell, P. T. (2009). Assessment, accountability, and improvement: Revisiting the tension. Champaign, IL: National Institute for Learning Outcomes Assessment.

National Association of Student Personnel Administrators. (1989). Points of view. Washington, DC: Author.

New England Association of Schools and Colleges. (2016). Standard Five.

Suskie, L. A. (2009). Assessing student learning: A common sense guide (2nd ed.). San Francisco, CA: Jossey-Bass.


About the Authors

John H. Schuh is director of the Emerging Leaders Academy at Iowa State University, where he also is Distinguished Professor Emeritus. At Iowa State University he served as a department chair for six and a half years and as director of the School of Education for 18 months. In a career that has spanned over 45 years, he has held administrative and faculty assignments at Wichita State University, Indiana University (Bloomington), and Arizona State University. He received his Master of Counseling and PhD degrees from Arizona State.

Schuh is the author, coauthor, or editor of over 235 publications, including 30 books and monographs, 80 book chapters, and over 110 articles. Among his books are three volumes on assessment, including Assessment Methods for Student Affairs, Assessment Practice in Student Affairs: An Applications Manual (with M. Lee Upcraft), and Assessment in Student Affairs (also with M. Lee Upcraft). Other recent books include Student Services (fifth edition edited with Susan Jones and Shaun Harper), One Size Does Not Fit All: Traditional and Innovative Models of Student Affairs Practice (with Kathleen Manning and Jillian Kinzie), and Student Success in College (with George D. Kuh, Jillian Kinzie, and Elizabeth Whitt). He was associate editor of the New Directions for Student Services Sourcebook Series after serving as editor for 13 years. He was associate editor of the Journal of College Student Development for 14 years and was book review editor of The Review of Higher Education from 2008 to 2010. Schuh has made over 300 presentations and speeches to campus-based, regional, national, and international meetings. He has served as a consultant to more than 80 institutions of higher education and other educational organizations. Schuh is a member of the Evaluator Corps of the Higher Learning Commission of the North Central Association of Colleges and Schools, where he also serves as a team chair for accreditation visits.

John Schuh has received the Research Achievement Award from the Association for the Study of Higher Education, the Contribution to Knowledge Award from the American College Personnel Association, the Contribution to Research or Literature Award, and the Robert H. Shaffer Award for Academic Excellence as a Graduate Faculty Member from the National Association of Student Personnel Administrators. The American College Personnel Association elected him as a Senior Scholar Diplomate. Schuh was chosen as one of 75 Diamond Honorees by ACPA in 1999 and as a Pillar of the Profession by NASPA in 2001. He is a member of the Iowa Academy of Education and has received a number of institutional awards including the Distinguished Alumni Achievement Award from the University of Wisconsin-Oshkosh, his undergraduate alma mater. Schuh received a Fulbright award to study higher education in Germany in 1994, was named to the Fulbright Specialists Program in 2008, and had a Fulbright specialists' assignment in South Africa in 2012.

J. Patrick Biddix is an associate professor of Higher Education in the Department of Educational Leadership and Policy Studies at the University of Tennessee. His areas of expertise include college student involvement outcomes, technology in higher education, and research design. He teaches graduate courses in research methodologies, assessment and evaluation, and special topics in higher education and student affairs. In 2015, he received a Fulbright Award to study college student technology use at Concordia University in Montreal, Canada.

Biddix received his PhD in education with a concentration in higher education from the University of Missouri-St. Louis. He holds a graduate certificate in Institutional Research from the University of Missouri-St. Louis as well as an MA in Higher Education Administration from the University of Mississippi and a BA in Classical Civilization from the University of Tennessee. His published research includes numerous articles in top-tier student affairs and communication technology venues. He currently serves on four journal editorial boards and one national commission.

Biddix is an appointed member of the University of Tennessee Institutional Review Board (IRB) and the College of Education, Health, and Human Services Tenure and Promotion Committee, and serves the department of Educational Leadership and Policy Studies in a number of capacities, including academic program coordinator, accreditation liaison, program review coordinator, and graduate student advisor. Prior to coming to the University of Tennessee, he worked for six years as Associate Professor and Higher Education Program Coordinator at Valdosta State University and for four years as a student affairs professional at Washington University in St. Louis. He has received three faculty excellence awards (2010, 2011, 2015).

Laura A. Dean is professor of College Student Affairs Administration at the University of Georgia; she also serves as program coordinator of the master's program. She has been an educator throughout her career, in settings ranging from high school to major research-extensive universities. She has worked as a teacher, admissions officer, counselor, director of student activities and orientation, Dean of Students, vice president for Student Affairs, and graduate faculty member. Prior to her current position, she served as the director of Counseling/Associate Dean of Student Development at Manchester College (IN), as Dean of Student Development (SSAO) at Pfeiffer University (NC), and as vice president for Student Development/Dean of Students at Peace College (NC). She also served in 2010 as the Interim Dean of Students at the University of Georgia. Dean earned her bachelor's degree in English at Westminster College (PA). After teaching high school and working in college admissions, she then earned her master's degree in counseling and her PhD in Counselor Education/Student Development in Higher Education, both from the University of North Carolina at Greensboro.

Her publications and presentations focus largely on assessment and the use of professional standards of practice. She has served on the editorial boards of the College Student Affairs Journal and the Journal of College Counseling. Dean has been extensively involved professionally in organizations including the American Counseling Association, American College Counseling Association (ACCA), ACPA-College Student Educators International, NASPA-Student Affairs Administrators in Higher Education, and the Council for the Advancement of Standards in Higher Education (CAS). She served as president of ACCA and represented that organization on the CAS Board of Directors for nearly two decades. She was CAS president, having previously served on the executive council as Member at Large and as the CAS publications editor. She has been recognized for her contributions with awards including the ACCA Professional Leadership Award, ACPA Senior Professional Annuit Coeptis award, ACPA Diamond Honoree, NASPA Robert H. Shaffer Award for Academic Excellence as a Graduate Faculty Member, NASPA Region III Outstanding Contribution to Student Affairs through Teaching, the Georgia College Personnel Association Paul K. Jahr Award of Excellence, and the Distinguished Alumni Outstanding Achievement Award from the School of Education at University of North Carolina at Greensboro.

Jillian Kinzie is the associate director for the Center for Postsecondary Research and the National Survey of Student Engagement (NSSE) Institute at Indiana University Bloomington School of Education. She conducts research and leads project activities on effective use of student engagement data to improve educational quality, and studies evidence-based improvement in higher education. She managed the Documenting Effective Education Practices (DEEP) project and Building Engagement and Attainment of Minority Students (BEAMS), and also serves as senior scholar on the National Institute for Learning Outcomes Assessment (NILOA) project, an initiative to study assessment in higher education and assist institutions and others in discovering and adopting promising practices in the assessment of college student learning outcomes.

Kinzie earned her PhD from Indiana University in higher education with a minor in women's studies. Prior to this, she served on the faculty of Indiana University and coordinated the master's program in higher education and student affairs. She also worked as a researcher and administrator in academic and student affairs at Miami University and Case Western Reserve University. Her scholarly interests include the assessment of student engagement, how colleges use data to improve, student and academic affairs partnerships and the impact of programs and practices to support student success, as well as first-year student development, teaching and learning in college, access and equity, and women in underrepresented fields.

She has coauthored numerous publications including Using Evidence of Student Learning to Improve Higher Education (Jossey-Bass, 2015); Student Success in College: Creating Conditions that Matter (Jossey-Bass, 2005/2010); Continuity and Change in College Choice: National Policy, Institutional Practices, and Student Decision Making; and the second edition of One Size Does Not Fit All: Traditional and Innovative Models of Student Affairs Practice (Routledge, 2008/2014). She serves as coeditor of New Directions for Higher Education, on the editorial boards of the Journal of College Student Development and the Journal of Learning Community Research, and on the boards of the Council for the Accreditation of Educator Preparation (CAEP), the National Society for Collegiate Scholars, and the Gardner Institute for Excellence in Undergraduate Education. In 2001, she was awarded a Student Choice Award for Outstanding Faculty at Indiana University; in 2005 and 2011, she received the Robert J. Menges Honored Presentation award from the Professional and Organizational Development (POD) Network; in 2012, she received the Shaffer Distinguished Alumni Award; and in 2015, she was named a Senior Scholar by the American College Personnel Association (ACPA).


While assessment has not always been a central activity in student affairs practice in higher education, it has become an institutional imperative. As Kinzie (2009) points out, “Every college or university must decide how to most effectively assess student learning outcomes for institutional improvement and accountability” (p. 4). Livingston and Zerulik (2013) add this observation about the centrality of assessment to student affairs practice: “Assessment is an essential element in any successful student affairs division” (p. 15).

This chapter begins with a case study related to the potential role of assessment as part of implementing a new program. Then, we provide definitions of assessment, evaluation, and research, terms that are important to understand in the development of projects designed to determine the effectiveness of programs, activities, and experiences developed by student affairs educators. We follow that with a brief discussion of the historical development of assessment in student affairs practice and the centrality of assessment in contemporary institutional accreditation, student affairs practice, and the education of student affairs educators. We conclude with questions to consider in the development of an assessment plan to address the dynamics identified in the case study.

Learning Communities at Mid-Central University

Sean is an area coordinator in the residence hall system at Mid-Central University (MCU). She is responsible for four buildings, each housing about 240 students, and supervises four graduate assistants (one per building) and 16 resident assistants. MCU is a regional institution, with most of its students majoring in education, business, or liberal arts. Most of the students are the first in their families to attend college, and many receive significant amounts of federal financial aid.

Sean is in her second year of service at MCU and noted that, unlike other institutions with which she was familiar, MCU did not have any learning communities (LCs). Sean had served as a graduate assistant in the residence halls at State University while pursuing her master's degree and was used to having many learning communities in residence halls. She was surprised during her interview that MCU had none, but she accepted the position in the hope that learning communities could be established, even though no promises were made. She spent her first year investigating why MCU did not have any of these special residential units and found that a variety of reasons contributed to their absence, among them the philosophy of the residence department, lack of funding, and potentially, lack of student interest.

From Sean's point of view, the idea behind a learning community was to improve retention. In the pilot project she was developing, two learning communities would be implemented: 20 students majoring in business would be assigned to one, and another 20 education majors would be assigned to the other. The students in each learning community would be enrolled in three common courses in the curriculum, and a community advisor (CA) would be hired to provide support and enrichment, such as organizing study groups, arranging for tutoring as necessary, and organizing a monthly field trip for the participants during the fall semester.

Sean briefed her staff at the end of the first academic year about her plan to implement two trial learning communities the next academic year. The concept was foreign to many of the staff, and several asked the same question: How did Sean know that the students needed this experience? Sean indicated that answering that question would be part of the pilot project being planned.

She managed to convince the assistant director of student housing for residential programs that implementing two learning communities on a trial basis was worth undertaking, but Sami, the assistant director, cautioned her that she would run into a series of hard questions in her conversations with other members of the central office staff. Sami was also clear about one paramount concern: whenever programs were implemented, senior staff would want to know how the program could be improved from one year to the next.

Sean also met with the fiscal officer of the residence life department, who wanted to know what the program would cost. Sean thought adequate compensation for the community advisors would be a free room plus a monthly stipend of $100 for each CA, along with an operations budget of $2,000 for each LC to support modest programming efforts. The fiscal officer left Sean with this question: How would Sean demonstrate that the resources were used wisely?

The final discussion Sean had was with the director of the residence life department, Casey. While Casey was generally supportive of the program, there were some doubts about the effort required to implement learning communities. Would the establishment of the learning communities be worth Sean's time? Are the outcomes Sean has identified consistent with the purposes of residence halls at the university? What about staff time in organizing room assignments for the participants? Wouldn't working with the Registrar's office and the two academic programs, business and elementary education, take a lot of time? How would the benefits of the program be communicated to senior administrators? Wouldn't recruitment of participants take a tremendous effort? And, most important, how would Sean determine if the program made a difference?

Sean is faced with a daunting number of questions related to assessment; without data she cannot answer the questions posed by the various administrators who will influence whether the learning communities are implemented on a trial basis and what the future of these new units might be. We cannot be certain that Sean was ready for all of the questions raised by these administrators, even though learning communities are common on many campuses (see Benjamin, 2015).

Defining Assessment, Evaluation, and Research

Before we move further into this chapter, it is important that we are clear about what we mean by assessment. We will also compare and contrast the term assessment with evaluation and research, since the terms often are used interchangeably; to our way of thinking, however, each represents a very different purpose.


We think the definition of the term assessment that we introduced in the first edition of this book is still relevant in contemporary student affairs practice. We defined assessment this way:

“Assessment is any effort to gather, analyze, and interpret evidence which describes institutional, departmental, divisional, or agency effectiveness” (Upcraft & Schuh, 1996, p. 18).

To this definition we would add program or initiative effectiveness. In the case of our example, an assessment of the learning community initiative at MCU would be conducted to determine the extent to which the program achieved its goals. It is also important to note that for the purposes of this book, we are interested in students in the aggregate. We will be addressing individual student learning to the extent described in Chapter 4. We would, in the context of this volume, be interested in the aggregate scores of students who might have taken the College Senior Survey or the National Survey of Student Engagement if the instrument measured an aspect of the student experience pertinent to the study being conducted.

Effectiveness, for the purpose of this definition, can take on many dimensions. Most important, we think of effectiveness as a measure of the extent to which an intervention, program, activity, or learning experience accomplishes its goals, frequently linked to how student learning is advanced. Goals will vary from program to program but typically they are linked to the goals of a unit, the division in which it is located, or the goals of the institution. So, for example, at a commuter institution with no residence halls, the development of community as an institutional goal might have a different definition than the development of community at a baccalaureate college where nearly all students live on campus.


We also defined the term evaluation in the first edition of this book, but we think the concept needs a bit of updating, and for that we rely on the work of Suskie (2009). We defined evaluation, in effect, as the use of assessment data to determine organizational effectiveness. Suskie provides a more nuanced definition of evaluation by asserting, “…that assessment results alone only guide us; they do not dictate decisions to us” (p. 12). She adds that a second concept of evaluation is that “…it determines the match between intended outcomes…and actual outcomes” (p. 12). In our LC example, we might learn that participation in the learning community programs does not result in increased retention, but we might find that students who participate earn a higher grade point average at a statistically significant level. If the LCs were established with a goal of improving retention and that did not occur, the higher GPAs may or may not be sufficient evidence to determine that the LCs should continue.

Suskie (2009) adds that evaluation also “…investigates and judges the quality or worth of a program, project, or other entity rather than student learning” (p. 12). We might find, for example, that participation in the LCs resulted in improved retention for the participants. But suppose that when all the costs are tallied in our case study, the program turns out to cost $8,990 per student. MCU's resources are modest, and with 40 students proposed to participate in the programs (20 in the education LC and 20 in the business LC), the aggregate cost would be $359,600, likely far more than the university's budget could sustain. So, while the goal of the program (increased retention) was met, the costs were prohibitive. Strictly speaking, the data suggested that the program was a success (retention was improved), so from an assessment point of view it should be continued, but from an evaluation perspective, it should not (the program was cost prohibitive).
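The evaluation judgment here rests on simple arithmetic, which readers can check directly. A minimal sketch using the case-study figures (the variable names are ours; $8,990 is the hypothetical fully loaded per-student cost from the example):

```python
# Aggregate cost check for the two pilot learning communities.
students_per_lc = 20   # participants in each LC (education, business)
num_lcs = 2
total_students = students_per_lc * num_lcs      # 40 participants overall

cost_per_student = 8_990                        # dollars, all costs tallied
total_cost = cost_per_student * total_students  # aggregate program cost

print(total_students)  # 40
print(total_cost)      # 359600
```

The point of the exercise is that an evaluation weighs this total against what the budget can sustain, not merely against whether the retention goal was met.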


Our experience is that student affairs educators can be worried by the thought of undertaking assessments because they think what they are contemplating is a research study, similar to writing a dissertation as part of completing a doctoral degree or conducting a study that would form the basis for a manuscript submitted to a selective international journal. We submit that such is not the case with assessment. Rather, we assert that while research methods are used in the process of conducting an assessment, we are not advocating a level of rigor that would be required to complete a doctoral dissertation. Suskie (2009), again, is informative on this point: “Assessment…is disciplined and systematic and uses many of the methodologies of traditional research” (p. 14). She adds, “If you take the time and effort to design assessments reasonably carefully and collect corroborating evidence, your assessment results may be imperfect but will nevertheless give you information that you will be able to use with confidence to make decisions…” (p. 15).

We would like to identify several distinctions between assessment and research that further illustrate the point. First, assessments are guided by theory, but research frequently is conducted to test theories. So, in our LC example, Astin's (1984) theory of student involvement could guide an assessment of the LC, whereas a research study of Astin's theory might examine a number of ways that students are involved in the life of the campus outside the classroom and determine the potency of each of them.

Second, research often is not time bound to the extent that assessments are. For the LCs to be continued beyond a trial period of, say, two years, the inquiry will need to be completed in time to make a decision about the efficacy of the LC program. A research study, by contrast, is completed when all aspects of it have been finished at the highest level of sophistication possible; if an extra month is needed for further data analysis, then the extra month is taken. In our example, a decision needs to be made so that space in the residence hall can be reserved for the LCs, staff can be hired, budgets can be prepared, and so on.

Third, it is common for assessments to have a public or political dimension to them. Participants will want to know if the program is going to be continued and if so, in what form. Residence hall leaders may have an interest in supporting the program or opposing it strictly on budgetary grounds. Senior institutional leaders might find the program useful as a talking point in meeting with prospective students, their parents, and legislators. Research in many cases will not attract much interest beyond those who have a disciplinary interest in it. This is not always the case, but we would submit that assessments of student programs typically engender broader interest than research projects in many disciplines on campus.

Fourth, assessments typically are funded out of unit or divisional budgets, which can put something of a strain on providing support for all of the activities planned by the unit, especially if resources are not designated for assessment in organizational base budgets. Research often is financed through special support, such as a grant or contract secured specifically to support the project. In developing a proposal, the investigator will prepare a budget to address the costs of the project. It is quite common for faculty, in particular, to seek funding from a source external to the institution to underwrite the research project.

Reasons for Assessment

Why Should We Be Concerned about Assessment?

In looking through our case study, it is clear that the reasons for assessment are varied depending on one's point of view. Sean's staff wanted to know if students needed the program. Sami was concerned about how the program might be improved from one year to the next. The fiscal officer wanted evidence that the expenditures for the learning communities were wise and prudent. Casey, the director, wanted to know about the time spent on the program and how its value might be communicated to senior administrators. All of these questions were legitimate in the eyes of the persons asking them. In many respects, the questions reflected each person's responsibilities at the university.

As we pointed out in the first edition of this volume, there are many reasons for conducting assessments, even more than those illustrated by our case. But our view is that perhaps the most central reasons for conducting assessment are identified by Ewell (2009): accountability and improvement. Ewell analyzes the reasons behind accountability and improvement in his paper and in the end concludes that both are important.

In our case study, we find a number of questions being raised about the proposal, but in the end the concerns essentially deal with accountability or improvement or both. In this section we provide reasons for why student affairs educators should be concerned about assessment and use the term program interchangeably with the terms initiative, learning experience, or activity.

Assessment for Accountability

Ewell (2009) characterizes assessment for accountability this way: “Accountability requires the entity held accountable to demonstrate, with evidence, conformity with an established standard of process or outcome” (p. 7). Volkwein (2011) offers this question, which illustrates the accountability dimension of assessment: “Is the student or program or institution meeting educational goals?” (p. 11). With accountability come such features as answerability to stakeholders, shared governance, organizational transparency, and so on. Let's unpack these a bit in the context of the case study.

Sean wants to implement learning communities because she knows from her personal experiences and from the literature that learning communities lead to student growth and improved retention rates for first-year students, results that are beneficial to students and the institution. But those who are responsible for approving her decision are either not familiar with the literature or are skeptical that the results she asserts will occur at MCU. Sean knows that learning communities are a common feature of residential living on many campuses, though not at MCU, and her experiences with them elsewhere have been very positive. So, as an element of her proposal, she will have to agree to conduct an assessment that will determine, to the greatest extent possible, whether the learning communities achieved their goals. In short, she needs to demonstrate that the implementation of learning communities results in the outcomes she has asserted will occur.

This element of planning underscores another difference between assessment and research. Typically, assessments are conducted for local, that is, campus-based, purposes, such as determining whether a program or other initiative meets its goals. While there may be some individuals beyond the campus who are interested in whether or not a program works, such as others in the field, the generalizability of the findings is limited. In research projects, particularly those that employ quantitative methods, generalizing to a broader population could very well be a purpose of the project. Note that in describing the circumstance for Sean, the individuals with whom she consulted appeared to want to know very specifically about the value of the LC experience at MCU. Whether they are aware of the research literature focused on LCs is beside the point. They are narrowly focused on MCU, which typically is a difference between assessment and research. Sean needs to address their questions with data from MCU.

Sean has asserted that participation in learning communities will result in improved retention for the participants and increased learning for them. Improved retention is an outcome that would be hard to argue with, but traditionally the residence halls at MCU have been seen as places to which students can escape from the pressures of class and enjoy the social aspects of their collegiate experience. Now, Sean is indicating that student learning, such as increased self-awareness (see Kennedy-Phillips & Uhrig, 2013), will occur for those who participate in the learning communities. Casey wonders if self-awareness really should be an outcome that the department should care about or even see as desirable. Casey puts it this way: “We've been providing good quality service for students for years, and messing around with learning outcomes that are hard to measure and really are not part of our mission might be a distraction from what we are trying to accomplish in our residence halls.” So, this element of the program and its potential assessment underscores a basic tenet of assessment: Is the assessment measuring an outcome that is consistent with the unit's mission? Sean might have to retreat from the student learning outcome element of the proposal, because the residence life department has not established learning outcomes, and simply focus on increasing retention as the central reason for establishing learning communities.

Students who will live in the learning communities are central to the proposal. Since living in a learning community will be on a voluntary basis, students will need to be recruited to the learning communities. Since MCU has no tradition of offering these experiences, explaining how students will benefit from the experience will be a challenge. Fortunately, Sean has been working with a student advisory group in developing the proposal. Sean realized the challenges in establishing the learning communities and recruiting students to live in them, so an advisory group was formed from the start, and the members of the group have agreed that they will help recruit participants and will serve as community advisors (CAs) in the learning communities. Sean knows that establishing a program as complex as this one will be a challenge, so she has borrowed from the concepts of Weick (1984) that she read about in graduate school, which essentially are to start small and try to build on successes if she expands the program in the future. Weick concluded, “Changing the scale of a problem can change the quality of resources that are directed at it” (p. 48).

An aspect of accountability is the extent to which resources are expended on behalf of a program. This was the concern of the fiscal officer. So, the matter of assessment to determine efficiency and affordability is captured under our accountability dimension. In the case of the LC proposal, which was not going to cost lots of money, the issue had more to do with staff time. Casey, the senior leader, was concerned about that. Wellman (2010) points out that “Ideally, to look at cost-effectiveness, one would look at the role of funds in producing educational value added, or the translation of inputs into