                           CHAPTER TWO

                    CONNECTION OR COLLISION?
                        BY PAT HUTCHINGS

NWSA's project, "The Courage to Question," is the story of what
happens when two apparently very different educational movements
collide. On the one hand, there is the women's studies movement,
some twenty years old now and understood to have an overtly
political agenda. On the other hand, there is the assessment
movement, a more recent arrival on the scene dating back less than
a decade. Assessment's agenda is not only less overtly political
than that of women's studies, it is also perhaps harder to define
since its purpose, methods, and practice on campuses have been
characterized by considerable uncertainty, variety, and ongoing
evolution. To understand how women's studies has both contributed
to and benefitted from assessment, it is necessary to understand
the fluid history of the assessment movement itself.


Although the most salient feature of assessment for many campuses
has been that it is mandated, there are in fact powerful ideas
about education behind today's call for assessment. Ten years ago,
Alexander Astin argued that traditional ways of thinking about
quality in higher education--as a function of resources and
reputation (high student SATs, faculty Ph.D.s, endowment, library
holdings, and so forth)--told too little, even misled. Rather,
Astin argued, the real measure of quality was found in a college's
results, its contribution to student learning, the "value added"
from the experiences it provided.

By the mid-1980s, this new view of quality had taken hold in an
undergraduate reform movement growing within the academy and
spearheaded by two influential reports. In late 1984, a National
Institute of Education study panel (on which Astin sat) issued
"Involvement in Learning," which argued that to strengthen learning
one needed to involve students in their studies, set high
expectations, and assess and provide feedback.[1] In early 1985,
the Association of American Colleges' Integrity in the College
Curriculum also made this learning/assessment link, calling it
scandalous that colleges failed to assess the impacts of their
curricula.[2]

Behind both reports lies a view that quality is indeed a function
of student learning. And behind that view lies a set of questions
that are at the heart of today's assessment movement: 

* What do the courses and instruction we provide add up to for
students?

* What do our students know and what can they do?

* Are they learning what we think we are teaching?

* Does their achievement match what our degrees imply?

* How do we know and ensure that?

* How can the quantity and quality of student learning be improved?

These are hard questions--and important ones--in that they call up
even more fundamental questions about the purposes of our
educational programs and institutions. The good news is that over
the past ten years of the assessment movement many campuses have
taken these questions seriously and have become increasingly adept
at answering them in useful ways.[3]

                     THE ASSESSMENT MOVEMENT

In the early 1980s, the number of institutions engaged in assessing
student learning was just a handful: Alverno College, King's
College (Penn.), Miami-Dade Community College, Northeast Missouri
State University, and the University of Tennessee-Knoxville. What
these campuses were doing and what they meant by assessment varied
wildly--from attention to individual student learning at Alverno,
for instance, to the collection of data to satisfy a state
performance-funding formula in Tennessee. 

Then, in 1986, came the report from the National Governors'
Association (NGA), Time for Results, with a call from its Task
Force on College Quality for the nation's colleges and universities
to begin doing assessment. 

     The public has a right to know and understand the quality of
     undergraduate education that young people receive from
     publicly funded colleges.... They have a right to know that
     their resources are being wisely invested and committed.... We
     need not just more money for education, we need more education
     for the money.[4]

Assessment activities that had been developed at Alverno College
over the previous decade were cited as a model for other campuses
to follow. It was "time for results," and the presumption was that
assessment would produce those results.

Not long after the NGA report came a series of state mandates
requiring public colleges and universities to begin doing
assessment and reporting results. Although the mandates and the
motives behind them differed considerably, state after state jumped
onto the assessment bandwagon to show their seriousness about
educational quality, to control costs, to enforce accountability,
or to prompt improvement. By 1990, forty states (up from four or
five in the mid-1980s) had in place or in progress some kind of
assessment initiative. Further incentives entered the picture in
the fall of 1988, when the U.S. Department of Education began to
insist that accrediting bodies, regional and programmatic, require
"information on student achievement" (read: assessment) from the
institutions and programs they accredited.

Today's higher-education landscape reflects the power of these
external mandates for assessment. According to a 1991 American
Council on Education survey, 81 percent of colleges and
universities report having some form of assessment activity
currently underway. Just over half of the public institutions are
working under a state mandate to develop a student assessment
program, with eight in ten of these having already submitted
required data. Two-thirds say that assessment is part of a
self-study for a regional accrediting agency. Notably, too,
significant numbers of institutions are planning further assessment
activities.[5]


As the amount of assessment activity has risen, its character has
changed as well. Many campuses undertook assessment begrudgingly at
first. Uncertainty about what to do in the face of new (and often
unclear) state mandates, as well as concerns about possible misuse
of data, ran high. Today, however, campuses report that assessment
has made a positive difference. Fifty-two percent of the nation's
colleges and universities report that assessment has led to changes
in curricula or programs. Faculty members involved in assessment
report that their view of teaching and their activities in the
classroom also have been affected. (Four in ten institutions
estimate that more than 40 percent of faculty members have
participated in assessment.) Elaine El-Khawas of the American
Council on Education summarizes: "Assessment has had widespread
early influence, growing over a few years' time to a point where
most institutions of higher education can see some impact of their
assessment activities."[6]

One factor that has shaped the direction of assessment has been
state-level action. Earlier fears that states would roll out
mandatory statewide tests have not been borne out. Rather,
two-thirds of the states chose to follow the more permissive path
charted by Virginia: Each public institution is to practice
assessment in ways of its own choosing, consistent with its
particular mission and clientele, with required reports focused
largely on evidence that it has put findings to use in making
improvements.

A second factor stems from a kind of invention by necessity. Many
of the questions posed by assessment mandates could not, in fact,
be answered by existing, commercially available instruments. The
Educational Testing Service (ETS) and American College Testing
(ACT) quickly rallied to the market demand with tests aimed at
learning in general education and, subsequently, the major.
Although many of those new instruments have become increasingly
useful and relevant, they are not always a good match for campus
curricula, and many campuses began inventing their own methods and
approaches by necessity. As of 1991, 69 percent of institutions were
developing their own instruments, up from 34 percent in 1988.

The good news here is that while assessment was initially seen by
many as synonymous with an SAT- or ACT-like test, it now includes
a wide range of faculty-designed approaches, many of which not only
provide rich data but constitute educationally meaningful
experiences for students. Portfolios in particular (a method
employed by several of the programs participating in the "Courage
to Question" project) have gained popularity, with 45 percent of
institutions using them as part of an assessment venture by 1991.
Looking at the program for the American Association for Higher
Education's National Conference on Assessment in Higher Education
for the past few years, one sees a wide range of rich methods,
including focus groups, interviews, projects, capstone course
activities, surveys of current students and graduates, transcript
analysis, the use of external examiners, and student
self-assessment.

In addition to a richer and more varied set of assessment methods,
one now sees a more sophisticated conception of assessment. Many
campuses have come to embrace a view of assessment that ties it
firmly to learning and offers genuine hope for real undergraduate
improvement. Several principles characterize this view:

* Focus on improving rather than proving. 
Because assessment arrived on many campuses as a state-mandated
requirement, the need often was perceived as proving something to
skeptical publics. That need is not without warrant, but campuses
that have come to understand assessment as gathering and using
information for internal improvement rather than for external proof
have gotten further and to more interesting places faster.

* Focus on student experience over time. 
The early focus of assessment tended to be "outcomes"--which is
understandably what outside, policy-making audiences were most
concerned about and also what existing methods were most suited to.
For purposes of improvement, however, campuses quickly found that
they needed to know not only outcomes but also the experiences and
processes (teaching, curriculum, services, student effort, and the
like) that led up to those outcomes.

* Use multiple methods and sources of information. 
To understand what was behind these outcomes, clearly a single
"snapshot" approach to assessment would not be sufficient. As
campus assessment programs have grown more sophisticated and
comprehensive, a variety of methods have been adopted and invented
to help provide the fullest possible picture of what students are
learning and how learning might be improved. Tests may be used, but
so are interviews with students, surveys of employers, judgments by
external examiners, and portfolios of student work over time. 

* Pay attention at the outset to issues of how information will be
used.

In assessment's early days, often with state-mandated deadlines
just around the corner, the rush to "get some information" was
almost inevitable. Gradually, however, campuses have learned to
think harder in advance about what information will actually be
helpful, to whom, and under what conditions. Using assessment for
improvement means focusing on significant, real questions. 

* Provide occasions to talk about and interpret information. 
The gap between information and improvement is considerable; what
is needed to close it, many campuses have found, are occasions
where faculty members, administrators, students, and others can
talk together about the meaning of the information that assessment
has made available. Is it good news? Bad news? What action is
implied? Where is improvement needed and how should it be pursued? 

* Involve faculty members. 
Faculty members have long practice in making judgments about
student work; their expertise in doing so is crucial in deciding
what questions assessment should focus on, what the data add up to,
and what should be done to improve. Since the single most important
route to improvement is through the classroom, faculty members in
particular must be active participants in the assessment process.
Assessment is not primarily an administrative task--it is an
educational process.

* Involve and listen to students.
Assessment needs the information that students--and only
students--can provide. But listening to students is important
ultimately because it is students' ability to assess themselves and
to direct their own learning that will matter most. It is no
accident that assessment was introduced to higher education in a
report called Involvement in Learning.

                       FEMINIST ASSESSMENT

At first glance, feminist assessment looks much like the practice
that has emerged on many campuses to this point. The principles of
assessment enacted by the programs featured in this project are
congruent with those (characterized by the previous list, for
instance) that have evolved on many campuses where assessment is
"working." What distinguishes feminist assessment, however, is the
way these principles have been arrived at. Whereas many campus
programs have been shaped largely by pragmatic concerns, feminist
assessment is shaped by a coherent system of values and by feminist
theory.

Consider, for instance, the shift away from multiple-choice tests.
Faced with state mandates to assess the outcomes of general
education, often with a pressing deadline, many campuses were quick
to seize on new (or newly visible) instruments from the testing
companies--ETS's Academic Profile and ACT's College Outcomes
Measurement Program. What became increasingly clear, however, was
that data from those tests--returned months later in a handful of
subscores--shed little light on questions of improvement. What did
it mean that students scored 76 percent on critical thinking? Was
that good or bad? If bad, what should be changed? Even if the data
had been more intrinsically useful--more connected to curricula and
teaching--the chances of their being used were drastically
diminished by general faculty contempt for such exams. As a result,
many campuses now have minimized the role of such tests in a larger
assessment program or actually dropped them from their current
activities. What rules the day are more qualitative, faculty-driven
approaches and a range of methods beyond tests.

Feminist assessment shares the view that standardized tests should
play a minimal role in assessment. What is striking, however, is
that the programs highlighted in "The Courage to Question" came to
that conclusion not out of practical necessity but out of a view of
learning itself and of knowledge. In a feminist view of the world,
knowledge does not come in little boxes. Women's studies programs
have considered it a given that learning is both about a subject
and about how that subject might explain, influence, or make one's
daily life choices easier, clearer, or more complex. It is assumed
that what students learn in class will affect their lives outside
of the class because gender is not contained by the walls of the
classroom. Students may never see Egyptian art outside the slides
shown in art history class, but they will see some of the ways men
and women or power and powerlessness play out their complex
dynamics elsewhere. They probably will witness this in their first
half hour after class. Relatedly, knowledge is not purely objective
but is understood to be socially constructed and "connected." This
is not, clearly, a view of learning that makes multiple-choice
tests the method of choice.

The principle of student involvement provides a second illustration
of the distinctiveness of feminist assessment. Campuses that relied
heavily on commercially available tests administered as an add-on
to regular academic work quickly found themselves up against
student motivation problems. One campus sent letters to several
thousand students who were scheduled to take one of the
multiple-choice exams then popular. Of the several thousand who
received the letter, only thirty-some appeared. On other campuses,
student participation was stimulated with free T-shirts, pizzas,
and--in a couple of cases--with cash! Even where students were induced
to show up, however, motivation to do their best was clearly low,
and cases of actual sabotage (random filling in of the black dots)
began to appear. All of this, of course, made the "results" of such
tests highly suspect and threw national norming attempts into
disarray. As a consequence, campus after campus has realized that
more useful assessment will result when students who are invested
in the process see that assessment matters--to them and to the
institution. One now sees more integral forms of assessment taking
precedence--often designed by faculty members and administered in
courses, sometimes required for graduation, and, on a few campuses,
counting toward grades.

Feminist assessment, too, takes student involvement in the
assessment process to be imperative. Students, as this book's title
puts it, should be "at the center." But that position stems not
from an attempt to fix practical and psychometric problems caused
by low student motivation; feminist assessment is student-centered
because of a theoretical, practical, and personal commitment to
women--and ultimately to all students--to how they learn and thus
to the things students themselves can tell us about how they learn.
Feminist assessment comes out of a fundamental commitment to the
individual and her voice, her account of her own story, and a
refusal to wash out individual or group differences.

Feminism is also the source of some caution about how assessment
should be done. As feminists, we "locate ourselves" as questioners
and skeptics, since so much of what we have been told has turned out
to be incomplete or distorted. We also assume that politics underlies
questions of knowledge, which leads us to ask about the uses to which
assessment will be put. Who has the power to determine the
questions? What methods are most appropriate to embrace the many
voices and ways of speaking? What methods help reveal the unspoken?

                       A FINAL REFLECTION

At the outset of "The Courage to Question," Caryn McTighe Musil
asked me if I would be involved in the project. I was pleased to do
so because I am committed to women's studies and intrigued by the
possibility of more systematic information about the kinds of
learning that go on in such programs. As I told Caryn, however, the
eagerness of my response also was largely a function of a hope--a
hope that the kind of assessment women's studies programs would
invent would be precisely the kind that I had become persuaded, in
my role as director of the AAHE Assessment Forum, could make a
lasting difference in the quality of undergraduate education.

That, in my view, is indeed what has happened. The general
assessment movement and the women's studies movement have
intersected at several very productive points. Much more is said
about these points in subsequent chapters, but one sees, for
instance, the interest in multiple measures that has come to
characterize the assessment movement more generally now bolstered
by women's studies' commitment to multiple voices. Assessment's
focus on student experience over time both has informed and been
enhanced by a commitment to the authority of experience as a source
of knowledge in feminist assessment and classrooms. In both the
general assessment movement and in feminist assessment, the need to
involve faculty members and students has been clear. Feminist
assessment has pushed this principle further yet by examining and
questioning the very nature of the classroom experience and the
essential teacher-student relationship.

No doubt feminist assessment will continue to evolve, as will
assessment more generally. My hope is that this volume will
contribute to developments in both areas, and that the ways of
thinking about students, learning, and pedagogy embodied in the
programs featured here will bring a new infusion of energy and
direction.

1. Involvement in Learning: Realizing the Potential of American
Higher Education, Final Report of the Study Group on the Condition
of Excellence in American Higher Education (Washington: National
Institute of Education, 1984).
2. Integrity in the College Curriculum (Washington: Association of
American Colleges, 1985).
3. Much of my thinking about assessment in general grows out of
long conversations with my colleague Ted Marchese at AAHE. See
especially our co-authored article "Watching Assessment: Questions,
Stories, Prospects," Change 22 (September/October 1990).
4. Time for Results: The Governors' 1991 Report on Education
(Washington: National Governors' Association Center for Policy
Research and Analysis, 1986).
5. Elaine El-Khawas, Campus Trends, 1991, Higher Education Panel
Reports, No. 81 (Washington: American Council on Education, 1991).
6. Ibid., 15.