Outcomes Assessment Guidelines and Resources for Disciplines and Programs
This is an example of an analytic rubric, in which multiple performance indicators, or primary traits, are assessed individually. A holistic rubric aggregates these criteria into a single grading scale, so that (for example) an “A” essay might be distinguished by all of the features noted under a “4” above with particular characteristics having to do with organization, development, and so on. A holistic rubric is useful for grading purposes, but it is typically too crude a measure to be employed in outcomes assessment work.
Rubrics can be developed collaboratively with students and in the classroom setting they have the additional advantage of helping to make grading practices as transparent as possible. As assessment tools, part of their value is that they require instructors to “norm” themselves against a set of consensus evaluative criteria, enabling us to define (and hold to) our common teaching goals more sharply than we might otherwise do. Rubrics also let us identify specific areas where our students are having trouble achieving significant learning outcomes for our courses.
Links to websites that contain rubric-building templates and examples may be found in Appendix D.
(Standardized tests may be useful measures if instructors agree to teach the skills that such tests can be shown to measure, and they have the advantage of providing departments with a national standard by which to measure their students. But standardized tests are costly to administer; students are often insufficiently motivated to do their best work when taking them; and as noted, they may not measure what faculty in the program actually teach.)
A3. Indirect Assessment Methods
One caveat: indirect assessment measures should be used to augment, not substitute for, more direct measures. Ideally, in fact, multiple assessment methods should be employed whenever possible, so that student surveys (for example) can become a useful additional check against data derived from doing embedded assessment or administering standardized tests.
Consult the IR office (and its publications) for assistance with the particular information you need. Some disciplines (particularly in occupational education) will find strong, if indirect, evidence of student learning in these data.
(One example is the “minute paper”: at the end of a class, students respond quickly and anonymously to two questions: “what was the most important thing you learned today?” and “what important question remains unanswered?” CATs are ideal ways of helping instructors in specific classes determine what their students know, don’t know, or are having difficulty learning. When you adjust teaching practices in light of the information you gather from a CAT, you’re completing the feedback loop that is successful outcomes assessment. If members of your discipline agree to employ CATs regularly, consider detailing their efforts in a report that can become part of an annual assessment report; but CATs cannot replace the direct assessment activities expected from each discipline/unit.)
B. A Sample Course-Program Assessment Matrix
This is the simplest matrix that allows a discipline to gather information about assessment being conducted in its courses and to map that information against its broader, certificate or degree-level goals. Each instructor in the discipline fills out the first matrix for the courses she or he teaches, then the discipline aggregates the data. This allows the discipline to see where it has disparate goals within the same course and whether all of its outcomes are being assessed somewhere in the curriculum.
Completed by Each Instructor for His or Her Own Courses
Name of Instructor: Tschetter
Degree Program Learning Outcomes [listed and numbered]
To the Instructor: For each course you taught last year or are teaching this year, place an X under every goal that you actively teach and significantly assess in a major exam or project. Leave the other cells blank.
(adapted from Barbara Walvoord, Assessment Clear and Simple)
Why Should (or Must) We Do Assessment?
In addition to the intrinsic value of SLO assessment, there are, of course, other reasons we must engage in assessment. Colleges throughout the country are now required by regional accrediting bodies to document and assess student learning. Other governmental agencies charged with funding education see assessment as a way of enabling colleges to demonstrate that learning is taking place in their classes and programs. Colleges themselves can use assessment data for research and planning purposes, including budget allocation. Students (along with parents, employers, etc.) increasingly ask for evidence of what kind of learning a particular course, program, or degree produces, to help in their decision-making. These largely external pressures to document and assess student learning worry some instructors, who may view all accountability measures as potentially intrusive, leading to the loss of academic freedom (more on that later) and even to the imposition of a corporate culture upon American higher education. It may reassure us to learn that the assessment movement is now 30 years old, that its basic methodologies were developed and refined at some of the nation’s best colleges and universities, that professors—not bureaucrats—led this process, and that assessment is being practiced at colleges and universities all over the world today.
A major recent stimulus to do outcomes assessment at the institutional, program, and course levels comes from RCCD’s accrediting body, the Accrediting Commission for Community and Junior Colleges (ACCJC), which dramatically altered its standards for reaccreditation in 2002. ACCJC now asks community colleges to assess student learning at all levels of the institution, including every course being offered, and to use this information to improve teaching and learning. Visiting accreditation teams will want to see evidence at RCCD that disciplines not only have a systematic plan for assessing student learning in their courses, but that they are actually using that plan.
Thus, outcomes assessment serves at least three critical purposes: to help with planning and resource allocation decisions, to provide clear evidence of learning that is already taking place, and to improve learning in areas where it is deficient.