


Outcomes Assessment Guidelines and Resources

for

Disciplines and Programs




Riverside Community College District


Office of Institutional Effectiveness
Web Resources:
http://www.rcc.edu/administration/academicaffairs/effectiveness/assess/index.cfm


Last Revised: February 14, 2008



Table of Contents

I. Introduction
II. Guidelines for Disciplines Doing Outcomes Assessment at RCCD
   1. Option 1 (collaborative course-based assessment)
   2. Option 2 (course-based assessment by individual instructors)
   3. Option 3 (program-based assessment)
III. RCCD General Educational Student Learning Outcomes
IV. Rubric for Evaluating Discipline Assessment Activities
V. Glossary
VI. Appendix/Resources
   A. Student Learning Outcomes and Assessment Methods
      A1. Student Learning Outcomes
      A2. Direct Assessment Methods (including a sample analytic rubric)
      A3. Indirect Assessment Methods
   B. A Sample Course-Program Assessment Matrix
      1. Individual course-to-program matrix
      2. Department-wide summary courses-to-program matrix
   C. Further Reading
   D. Websites
      1. General
      2. Writing Student Learning Outcomes
      3. Using Embedded Assessment Techniques
      4. Writing & Using Rubrics
      5. Collections/Portfolios/e-Portfolios
      6. Classroom Assessment Techniques
   E. Frequently Asked Questions

I. Introduction


For a district with nearly 2,000 courses and more than 100 programs listed in its catalogs, comprehensive assessment of student learning is a daunting process. It will not happen overnight, and no one expects it to, but Riverside Community College District (RCCD) has made great strides in the area of outcomes assessment. It’s vital that assessment be seen as ongoing and not something we do only when our discipline/unit is up for program review. Please review these guidelines and remember that the District Assessment Committee (DAC) and its co-chairs are available to help you with all parts of the assessment process.
The guidelines for doing outcomes assessment at RCCD have been developed (and revised several times) by the DAC, which also mentors disciplines in the development of their assessment plans. In addition, the DAC ranks the assessment work of disciplines against a standard rubric [see below for the criteria] as reported in the Program Review (PR) self-studies and makes suggestions for improvement. The DAC is composed of roughly 20 faculty members and staff, representing a broad cross-section of the college community, all of whom have devoted themselves to studying outcomes assessment. Committee members have developed guidelines, ranking criteria, and other helpful forms for their colleagues. Voting members, two from each of the campuses, are elected by the faculty of the campus they represent. We invite suggestions for improving these guidelines and encourage all interested faculty to join the committee.
Ranking Rubric for Evaluating Discipline Assessment Activities



Stages, with sample discipline behaviors for each:

Stage 5
  • Assessment plan is in place for more than one course or program.
  • Data has been collected to assess learning in one or more courses or programs.
  • Results of data collection have been used to improve student learning in at least one course or program.

Stage 4
  • Assessment plan is in place for at least one course or program.
  • Data has been collected to assess learning in at least one course or program.
  • Results of data collection have been used to improve student learning.

Stage 3
  • Assessment plan is in place for at least one course or program.
  • Data has been collected to assess learning in at least one course or program.
  • Results of data collection have not been used to improve student learning.

Stage 2
  • Assessment plan is in place for at least one course or program.
  • No data has been generated to assess learning.

Stage 1
  • Discussion of student learning outcomes and assessment of student learning has taken place.
  • No plan is in place to assess learning for any course or program.

Stage 0
  • No information received regarding assessment activities.
  • No discussion has occurred at the discipline level.

When questions arise, contact Sheryl Tschetter (7039) or Kristina Kauffman (8257), co-chairs of DAC. To review assessment plans of other disciplines at RCCD, visit the DAC website at http://www.rcc.edu/administration/academicaffairs/effectiveness/assess/index.cfm and look at the samples under the “documents” section.
Faculty Driven

RCCD has engaged in systematic, institution-wide efforts to assess student learning for several years. RCCD instructors participate in this process and are responsible for all steps of these efforts including:




  • defining student learning outcomes in courses, programs, certificates, and degrees;

  • determining student achievement of those outcomes; and (most important of all)

  • using assessment results to make improvements in pedagogy and curriculum.

Administration, particularly Institutional Effectiveness, acts primarily in an advisory and support role.


As a condition of all PR self-study processes, each RCCD discipline/unit is expected to engage in outcomes assessment efforts and to report on those efforts on a regular basis. Discussion of specific guidelines for RCCD disciplines engaged in outcomes assessment work follows this introduction. After these guidelines, there are sections that include a glossary of terms, an overview of student learning outcomes and assessment methods, a sample matrix, suggestions for further reading, and websites illustrating best practices from other institutions.

II. Guidelines for Disciplines Doing Outcomes Assessment at RCCD



What Is Outcomes Assessment?
Outcomes assessment is any systematic inquiry whose goal is to document learning or improve the teaching/learning process. (ACCJC defines assessment simply as any method “that an institution employs to gather evidence and evaluate quality.”) It can be understood more precisely as a three-step process of:


  1. Defining what students should be able to do, think, or know at the end of a unit of instruction (defining, that is, the student learning outcomes);

  2. Determining whether, and to what extent, students can do, think, or know it; and

  3. Using this information to make improvements in teaching and learning.

If this sounds partly recognizable, that’s because all good teachers instinctively do outcomes assessment all the time. Whenever we give a test or assign an essay, look at the responses to see where students have done well or not so well, and reconsider our approach to teaching in light of that information, we’re doing a form of assessment. Outcomes assessment simply makes that process more systematic.

The DAC has struggled over the years with the slipperiness of this concept, often pausing in its work to remind itself of what “assessment” does and does not mean. Faculty frequently mistake it for something it is not. Though it oversimplifies a bit, we suggest that you ask yourselves these questions to be sure that you are actually engaged in outcomes assessment:


  • Are you demonstrating, in more tangible ways than simply pointing to grading patterns and retention/success data, that learning is taking place in your discipline? If you are, you are doing outcomes assessment. You are documenting student learning.




  • Are you identifying, with some precision, areas in your discipline where learning is deficient, and working actively to improve learning? If so, you are doing outcomes assessment. You are trying to enhance and improve student learning in light of evidence you’ve collected about it.



Isn’t Assessment the Same Thing As Grading?
No—at least not as grading is usually done on papers, exams, and courses overall. Traditional grading is primarily evaluative, a method for classifying students. Outcomes assessment is primarily ameliorative, designed to improve teaching and learning. The emphasis in outcomes assessment always falls on Step 3: using information about student learning patterns in order to improve. This is sometimes referred to as “closing the feedback loop”—something that must always be our ultimate aim in doing this kind of assessment.

Grades typically reflect an aggregate of competencies achieved (or not achieved) by a student on an assignment or for a class. Knowing that a particular student got a “B” in a course, or even knowing that 20% of the students in a class got an “A” and 30% got a “B,” won’t tell us very much about how well students in general did in achieving particular learning outcomes in the course. However, disaggregating those grades using outcomes assessment techniques may reveal that 85% of the students demonstrated competency in a critical thinking outcome, while only 65% demonstrated competency in a written communication outcome. That may lead us to investigate ways of teaching students to write more effectively in the course—resulting ultimately in improved learning.
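To make the arithmetic of disaggregation concrete, here is a minimal sketch in Python (the outcome names, rubric scores, and cut score are invented for illustration and are not RCCD requirements) that computes per-outcome competency rates of the kind described above:

    # Hypothetical per-student rubric scores (1-4) on two course outcomes.
    scores = [
        {"critical_thinking": 4, "written_communication": 2},
        {"critical_thinking": 3, "written_communication": 3},
        {"critical_thinking": 4, "written_communication": 4},
        {"critical_thinking": 2, "written_communication": 2},
    ]

    COMPETENT = 3  # illustrative cut score: "adequate evidence" or better

    for outcome in ("critical_thinking", "written_communication"):
        n = sum(1 for s in scores if s[outcome] >= COMPETENT)
        print(f"{outcome}: {100 * n / len(scores):.0f}% demonstrated competency")
    # -> critical_thinking: 75%; written_communication: 50%

A single course grade computed from these same scores would average the two outcomes together and hide exactly the gap the loop above reveals.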

Grades are also often based on a number of factors (e.g., attendance, participation or effort in class, completion of “extra credit” assignments) that may be unrelated to achievement of learning outcomes for the course. That may be why the GPAs of high school and college students have risen sharply over the past 15 years, while the performance of these same students on standardized tests to measure writing, reading, and critical thinking skills has markedly declined.

Outcomes assessment methodologies may actually help us grade our students more accurately, and give students more useful feedback in time for them to improve the work they do in the course later on. But simply pointing to grading patterns in classes and courses is not a form of outcomes assessment.



Why Should We Do Assessment?
The best reason for systematically assessing student learning is the intrinsic value of doing so. Effective teaching doesn’t exist in the absence of student learning. Assessment is part of the broad shift in higher education today toward focusing on student learning, on developing better ways of measuring and improving it. Assessment results implicitly ask us to fit our teaching, as much as we can, not to some set of timeless pedagogical absolutes but to the messy reality of specific classrooms, where actual students in one section of a class may require a substantially different kind of teaching than their counterparts in another. Assessment helps us identify optimal methodologies for specific situations. Done well, outcomes assessment makes us happier teachers because it makes us better teachers. And it makes us better teachers because it makes our students better learners. The primary purpose for doing assessment, then, is to improve learning.
RCCD’s mission statement reflects our dedication to access and success for our diverse student body. We offer instruction resulting in multiple educational goals including degrees, transfer, certificates, and occupational programs. We also provide support to our students and our communities. One method we utilize to ensure our instruction remains relevant and reflects the needs of our current student body is outcomes assessment. Below, please find general references and specific ideas related to outcomes assessment at RCCD.


What Does a Discipline Need to Do? Program Review Templates and Expectations:
The comprehensive and annual program review templates at our website http://www.rcc.edu/administration/academicaffairs/effectiveness/review.cfm contain specific assessment-related questions you’ll be asked to address as you complete your self-study. Here’s a quick overview of district expectations:


  • All disciplines must be routinely engaged in systematic assessment of student learning at the course and/or program level.

  • Disciplines with courses that meet (or could meet) general education requirements should focus on assessing those courses they think map to General Ed SLOs. See the RCCD GE outcomes in the appendix of this document.

  • Disciplines with programs leading to certificates should focus on defining program-level SLOs and assessing achievement of those SLOs.

III. Options for Student Learning Outcomes Assessment


Option 1: Course-based assessment for disciplines in which multiple instructors teach the same course and can work collaboratively to define and assess SLOs for that course.

If at least one course in the discipline is taught by a number of different instructors, choose such courses and work collaboratively to assess learning outcomes in them. You may want to:

  • focus on one course at a time, perhaps assessing a single SLO in the initial effort or perhaps several related SLOs;

  • involve part-time instructors in the process as much as possible;

  • focus on an assignment or examination toward the end of the semester in which students can be expected to demonstrate what they’ve gained from the course;

  • develop a rubric to assess learning in student work taken from many different sections of the course; or

  • use objective tests as a way of evaluating students, embedding common questions in such tests and aggregating results to see where students are having success or difficulty (a sketch of this aggregation follows the list).
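As a rough illustration of the embedded-questions approach in the last bullet, the sketch below (in Python; the question-to-SLO mapping and the per-section tallies are invented) pools results for common exam questions across sections and reports success rates by outcome. Because only pooled totals are kept, no individual instructor's results can be singled out:

    from collections import defaultdict

    # Hypothetical mapping of common exam questions to course SLOs.
    question_to_slo = {"Q1": "SLO1", "Q2": "SLO1", "Q3": "SLO2", "Q4": "SLO3"}

    # Per-section tallies for each question: (number correct, number attempted).
    section_results = [
        {"Q1": (22, 30), "Q2": (25, 30), "Q3": (12, 30), "Q4": (20, 30)},
        {"Q1": (18, 25), "Q2": (21, 25), "Q3": (10, 25), "Q4": (19, 25)},
    ]

    correct, attempted = defaultdict(int), defaultdict(int)
    for section in section_results:
        for question, (right, total) in section.items():
            correct[question_to_slo[question]] += right
            attempted[question_to_slo[question]] += total

    for slo in sorted(attempted):
        print(f"{slo}: {100 * correct[slo] / attempted[slo]:.0f}% correct")
    # -> SLO1: 78%, SLO2: 40%, SLO3: 71%

A result like the 40% for SLO2 is precisely the kind of finding that should feed back into discipline discussion and course revision.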

You can learn more about how to develop rubrics and do course-embedded assessment by looking at examples from English, CIS, and Chemistry at the RCCD assessment website. Keep careful records of these course-based assessment projects, which should include sections on methodology, results, and analysis of results. Sometimes minutes of department or discipline meetings are useful appendices to show how results are used to improve teaching and learning. Plan to share results in the annual PR update.

Option 2: Course-based assessment for disciplines in which instructors mostly teach different courses, or in which collaboration among instructors is not possible in defining and assessing SLOs.

If you teach in a discipline with few or no courses taught by multiple faculty members, you may not find collaboration feasible or desirable. And even some disciplines whose faculty do teach identical courses may not be able to work collaboratively on a process to do assessment. In that case, ask every instructor (full- and part-time) to choose a course she or he routinely teaches and develop an individual assessment project for it. Ask instructors to:


  • Identify a significant SLO for a course they teach (they may be able to agree on a common, generic SLO to look at).

  • Choose a major graded assignment for the class that they believe measures that SLO.

  • Develop a rubric (or, for objective examinations, some other method of analysis) by which achievement of that SLO can be assessed.

  • Do the actual assessment.

  • Report on and analyze the results of the assessment, in writing.

It’s useful to plan some discipline meeting time for discussion of these results, but even more vital that they be captured in written form. Make sure that the reports describe how the instructor would change or improve the teaching of this assignment (and the assignment itself). Discipline discussion should identify common problems instructors are having and potential solutions to those problems. Keep careful records and share results on annual PR updates.



Option 3: Program-level assessment for disciplines, particularly in occupational education, which may find looking at learning patterns in sequences of classes (or classes that are required for a certificate) desirable.

Some disciplines, particularly in occupational education, may want to focus their efforts on program-level assessment. If you want to assess sequences of courses that lead to credentials, we suggest the following strategy:


  • Work as a discipline to identify SLOs for the program or certificate, involving adjuncts as much as possible.

  • Identify assessment methods that will enable the discipline to determine whether, and to what extent, those SLOs are achieved. (Student performance on national or state licensing exams may be a good initial assessment method, as are surveys of students on self-perceived learning gains, alumni, and employers. Focus groups might also be used for assessment purposes.)

  • Interpret the results of the assessment in order to make improvements in the program or teaching of courses in it.

  • Keep careful records of your work and share results on annual PR updates.


Besides the direct assessment methods outlined earlier, all disciplines may want to consider other approaches as they develop and implement their comprehensive assessment plan. See Appendix A for examples of indirect assessment techniques. These methods should be employed in addition to, not in lieu of, the approaches outlined above.






IV. Appendix/Resources

A. Student Learning Outcomes and Assessment Methods


Assessment can either be direct, focusing on actual student work (essays, exams, nationally normed tests) where we look for evidence that learning has been achieved, or indirect, where we look for signs that learning has taken place through proxies or such “performance indicators” as surveys, focus groups, retention or transfer rates, etc. Both methods of assessment can be valuable, and in fact the experts agree that no single method should ever be relied on exclusively. The first step in any assessment plan is to define the student learning outcomes for the course or program under consideration: the things we want students to be able to do (or think or know) by the time they’ve finished a course of study.


A1. Student Learning Outcomes

Student learning outcomes for courses or programs should share the following characteristics:


  • They should describe the broadest and most comprehensive goals of the course or program

(Assessment theorist Mark Battersby refers to these as “integrated complexes of knowledge” or competencies. They should focus on what a student should be able to do with the knowledge covered, not simply on what the instructor will cover. Courses and programs may typically have three to five outcomes, though fewer or more are possible.)

  • They should employ active verbs, usually taken from the higher levels of Bloom’s taxonomy (reprinted in the appendix to this document)—e.g., students should be able to “analyze” or “evaluate,” not “define” or “describe.”

  • As much as possible, they should be written in intelligible language, understandable to students.

  • As often as possible, they should be arrived at collaboratively, as instructors who teach the same class or in the same program come to consensus about the key objectives of that unit of instruction.

(For course-level SLOs, instructors will undoubtedly have SLOs of their own in addition to consensus ones.) Adjunct instructors—and students themselves—should be involved in the process of developing SLOs as much as possible.

  • SLOs should be measurable.

Ideally, they should contain or make reference to the product (papers, projects, performances, portfolios, tests, etc., through which students demonstrate competency) and the standard (e.g., “with at least 80% accuracy”) or criterion by which success is measured. When the behavior/product and standard are specified, the SLO is sometimes said to have been made “operational.”
Consult Appendix D for links to websites like Teachopolis, which offers tools for building SLOs, and 4Faculty.org, which provides useful overviews of SLOs and assessment methodology.
A2. Direct Assessment Methods

Some effective direct assessment methods that can be employed to measure achievement of SLOs in courses or programs include:


Embedded assessment: The main advantage of embedded assessment is that it simplifies the assessment process, asking instructors to evaluate existing student work, but in a different way than they usually do and for a different purpose. It’s usually good practice to collect such assessment data in a way that makes evaluation of individual instructors impossible. Some examples include:


  1. using existing tests, exams, or writing prompts to identify learning trends in a particular course or group of related courses;




  2. a common final in which questions are mapped to specific learning outcomes for the course, then the results aggregated. (A variation of this approach would require all instructors in a course to ask a set of common questions on a part of an exam, but permit them to develop instructor-specific questions for the rest of the exam.)




  3. student writing on a variety of late-term essay assignments, examined for evidence that certain learning outcomes have been met;




  4. portfolios, collections of student work over time (a semester, a college career) that are used to assess either individual student learning or the effectiveness of the curriculum. Collected work may include papers, exams, homework, videotaped presentations, projects, and self-assessments. This is a particularly effective method of assessing institutional learning outcomes;




  5. capstone courses, usually taken in a student’s final semester in a program and intended to allow students to demonstrate comprehensive knowledge and skill in the particular degree pattern. Capstone courses (and the capstone projects usually required in such courses) integrate knowledge and skills associated with the entire sequence of courses that make up the program. Assessing student performance in these classes therefore approximates assessment of student performance in the major as a whole;




  6. scoring rubrics to assess student performance captured in portfolios, capstone courses, or individual essays or performances. Individual instructors can employ them on their own, too.

Develop the rubric by looking at a specific assignment—an essay, a demonstration, an oral report—in which student learning cannot be measured with numerical precision. Then develop (whether alone or with others) a scoring guide or checklist that will indicate various skill levels for various “primary traits,” with clearly delineated language suggesting the degree to which the assignment demonstrates evidence that the SLO has been achieved. [See Table 1.]

If our SLO were “students should be able to write an adequately developed, well-organized essay that contains few major errors in grammar or diction,” a simple rubric by which to assess sample essays might look something like this:
Table 1. Sample analytic rubric (scores: 1 = little or no evidence; 2 = insufficient evidence; 3 = adequate evidence; 4 = clear evidence)

Organization, Focus, and Coherence
  1: A very disorganized essay, with inadequate or missing introduction, conclusion, and transitions between paragraphs.
  2: An essay with significant organization problems, and/or inadequate introduction, conclusion, and/or transitions.
  3: An organized essay, though perhaps marginally so, with adequate introduction, conclusion, and transitions.
  4: A well-organized essay, with an effective introduction, conclusion, and logical transitions between paragraphs.

Development
  1: An essay with major development problems: insufficient, confusing, and/or irrelevant support for major points.
  2: An essay with significant development problems: support for major points often insufficient or confusing.
  3: A developed essay, though perhaps marginally so, with adequate support for most major points.
  4: A very well developed essay, with full and effective support for all major points.

Conventions of Written English
  1: Many significant errors in grammar, punctuation, and/or spelling.
  2: Frequent minor errors and occasional major errors in grammar, punctuation, and/or spelling.
  3: Occasional minor errors but infrequent major errors in grammar, punctuation, and spelling.
  4: Few or no errors in grammar, punctuation, or spelling.

This is an example of an analytic rubric, in which multiple performance indicators, or primary traits, are assessed individually. A holistic rubric aggregates these criteria into a single grading scale, so that (for example) an “A” essay might be distinguished by all of the features noted under a “4” above with particular characteristics having to do with organization, development, and so on. A holistic rubric is useful for grading purposes, but it is typically too crude a measure to be employed in outcomes assessment work.
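A small sketch (in Python, with invented scores) of why this distinction matters for assessment: collapsing analytic trait scores into a single holistic score discards exactly the per-trait detail that outcomes assessment needs.

    # One essay scored with the Table 1 rubric (scores invented).
    analytic = {"organization": 4, "development": 2, "conventions": 4}

    # A holistic reading collapses the traits into a single overall score...
    holistic = round(sum(analytic.values()) / len(analytic))
    print(holistic)  # 3 -- looks "adequate" overall

    # ...while the analytic record still shows where the trouble lies.
    print([trait for trait, score in analytic.items() if score < 3])
    # -> ['development']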


Rubrics can be developed collaboratively with students and in the classroom setting they have the additional advantage of helping to make grading practices as transparent as possible. As assessment tools, part of their value is that they require instructors to “norm” themselves against a set of consensus evaluative criteria, enabling us to define (and hold to) our common teaching goals more sharply than we might otherwise do. Rubrics also let us identify specific areas where our students are having trouble achieving significant learning outcomes for our courses.
Links to websites that contain rubric-building templates and examples may be found in Appendix D.


  7. standardized tests, particularly nationally normed tests of such institution-wide learning outcomes as critical thinking or writing, or discipline-specific tests like exams required of nursing or cosmetology students;

(Standardized tests may be useful measures if instructors agree to teach the skills that such tests can be shown to measure, and they have the advantage of providing departments with a national standard by which to measure their students. But standardized tests are costly to administer; students are often insufficiently motivated to do their best work when taking them; and as noted, they may not measure what faculty in the program actually teach.)


  8. a course-program assessment matrix, which can be used to map course SLOs to those of a broader certificate or degree pattern. It allows a discipline to gather more precise information about how its learning goals are being met locally, in specific courses. Gaps between program outcomes and aggregated course outcomes can be identified. For a sample matrix of this kind, see Appendix B.


A3. Indirect Assessment Methods
One caveat: indirect assessment measures should be used to augment, not substitute for, more direct measures. Ideally, in fact, multiple assessment methods should be employed whenever possible, so that student surveys (for example) can become a useful additional check against data derived from doing embedded assessment or administering standardized tests.


  1. Student surveys and focus groups. A substantial body of evidence suggests that student self-reported learning gains correlate modestly with real learning gains. You may want to consider surveying students (or a sampling of students) at the end of a course of instruction (or after graduation from a program) to determine what they see as their level of achievement of the course or program’s learning outcomes. You may also want to gather a representative group of students together for more informal conversation about a particular course or program when it has ended, asking them open-ended questions about its effect upon them. Surveys of alumni can also produce meaningful assessment data. These techniques are particularly valuable when done in conjunction with more direct assessment measures.




  2. Faculty surveys. Instructors can be asked, via questionnaires, about what they perceive to be strengths and weaknesses among their students.




  3. Institutional data. Data kept by the Office of Institutional Research on retention, success, and persistence; job placement; transfer rates; and demographics may also be strong assessment tools.

Consult the IR office (and its publications) for assistance with particular information you need. Some disciplines (particularly in occupational education) will find strong, if indirect, evidence of student learning in these data.




  4. Classroom Assessment Techniques (CATs). DAC encourages instructors to familiarize themselves with (and routinely employ) some of the classroom-based assessment techniques that Thomas Angelo and Patricia Cross detail in their text on the subject, published by Jossey-Bass.

(One example is the “minute paper”: at the end of a class, students respond quickly and anonymously to two questions: “What was the most important thing you learned today?” and “What important question remains unanswered?” CATs are ideal ways of helping instructors in specific classes determine what their students know, don’t know, or are having difficulty learning. When you adjust teaching practices in light of the information you gather from a CAT, you’re completing the feedback loop that is successful outcomes assessment. If members of your discipline agree to employ CATs regularly, consider detailing those efforts in a report that can become part of an annual assessment report; note, however, that CATs cannot replace the direct assessment activities expected from each discipline/unit.)


B. A Sample Course-Program Assessment Matrix

This is the simplest matrix that allows a discipline to gather information about assessment being conducted in its courses and to map that information against its broader, certificate- or degree-level goals. Each instructor in the discipline fills out the first matrix for the courses she or he teaches; the discipline then aggregates the data. This allows the discipline to see where it has disparate goals within the same course and whether all of its outcomes are being assessed somewhere in the curriculum.



Completed by Each Instructor for His or Her Own Courses

Name of Instructor: Tschetter
Degree Program Learning Outcomes [listed and numbered]
To the Instructor: For each course you taught last year or are teaching this year, place an X under every goal that you actively teach and significantly assess in a major exam or project. Leave the other cells blank.

Course    SLO 1    SLO 2    SLO 3    SLO 4    SLO 5
11          X        X
12
13
14
21
23          X        X        X
26
30
4                                      X        X
(and so on)


Department-Wide Summary


Course    SLO 1    SLO 2    SLO 3    SLO 4    SLO 5
11        100%      45%
12        100%                        100%
13                   84%      45%      59%      5%
14                   37%      58%     100%
21        100%     100%     100%
(and so on)

(adapted from Barbara Walvoord, Assessment Clear and Simple)
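The department-wide summary can be produced mechanically from the individual instructor matrices. Here is a minimal sketch in Python (the instructor data are invented) that reports, for each course, the percentage of instructors teaching it who marked each program SLO:

    from collections import defaultdict

    # Each instructor's matrix: course -> set of program SLOs marked with an X.
    instructor_matrices = [
        {"11": {1, 2}, "23": {1, 2, 3}},
        {"11": {1}, "12": {1, 4}, "23": {1, 2, 3}},
    ]

    taught = defaultdict(int)                      # course -> instructor count
    marks = defaultdict(lambda: defaultdict(int))  # course -> SLO -> mark count

    for matrix in instructor_matrices:
        for course, slos in matrix.items():
            taught[course] += 1
            for slo in slos:
                marks[course][slo] += 1

    for course in sorted(taught):
        row = {f"SLO{s}": f"{100 * n / taught[course]:.0f}%"
               for s, n in sorted(marks[course].items())}
        print(course, row)
    # 11 -> SLO1 100%, SLO2 50%; 12 -> SLO1 100%, SLO4 100%; 23 -> all 100%

A cell well below 100% flags disparate goals within the same course; a program SLO with no marks in any course flags an outcome that is not being assessed anywhere in the curriculum.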

Assessment FAQs
Why Should (or Must) We Do Assessment?
In addition to the intrinsic value of SLO assessment there are, of course, other reasons we must engage in assessment. Colleges throughout the country are now required by regional accrediting bodies to document and assess student learning. Other governmental agencies charged with funding education see assessment as a way of enabling colleges to demonstrate that learning is taking place in their classes and programs. Colleges themselves can use assessment data for research and planning purposes, including budget allocation. Students (along with parents, employers, etc.) increasingly ask for evidence of the kind of learning a particular course, program, or degree results in, to help in their decision-making processes.

These largely external pressures to document and assess student learning worry some instructors, who may view all accountability measures as potentially intrusive, leading to the loss of academic freedom (more on that later) and even to the imposition of a corporate culture upon American higher education. It may reassure us to learn that the assessment movement is now 30 years old, that its basic methodologies were developed and refined at some of the nation’s best colleges and universities, that professors—not bureaucrats—led this process, and that assessment is being practiced at colleges and universities all over the world today.

A major recent stimulus to do outcomes assessment at the institutional, program, and course levels comes from RCCD’s accrediting body, the Accrediting Commission for Community and Junior Colleges (ACCJC), which dramatically altered its standards for reaccreditation in 2002. ACCJC now asks community colleges to assess student learning at all levels of the institution, including every course being offered, and to use this information to improve teaching and learning. Visiting accreditation teams will want to see evidence at RCCD that disciplines not only have a systematic plan for assessing student learning in their courses, but that they are actually using that plan.

Thus, outcomes assessment serves at least three critical purposes: to help with planning and resource-allocation decisions, to provide clear evidence of learning that is already taking place, and to improve learning in areas where it is deficient.



