
Running head: MULTIMODAL FEEDBACK AND ASSESSMENT



Multimodal Feedback and Assessment:

Improving Visual Art Feedback by Combining Audio, Video, and Text

Jason Leath

Pepperdine University


Multimodal Feedback and Assessment:

Improving Visual Art Feedback by Combining Audio, Video, and Text

Introduction

Determining the worth of a visual art project is a challenging task, in part due to the vague parameters of what makes “good art.” Artwork is composed of nuanced variations that work together to form an influential and creative whole. These nuances, however, create a gray area when trying to define creativity and, more importantly, assess a creative work. Modern school systems have turned toward quantifiable criteria as a means of assessing student growth and development in the visual arts. Tools such as rubrics, checklists, and scales have been introduced in the hope of revealing a specified level of artistic development. Yet, no matter how useful tools like rubrics are in quantifying growth, they are limited in their capacity for providing individualized feedback, which can help to motivate students and promote their creativity. Assessments in art should be more than a placeholder for student progress; they should promote artistic growth. Through screencasting, the educator can provide feedback that utilizes a variety of modalities, which optimizes the comprehension of that feedback. Screencasting lets educators personalize their message to students and address the singular nuances in every work of art.

This research will take a brief look at the historical trends in art assessment. Next, it will present the characteristics of ideal feedback for students, specifically feedback for the visual arts. Once these points are established, it will explore technology’s role in modern education and how that technology can improve assessments in the visual arts.

Historical Information

Art education has historically shifted its opinion on the value of assessing student work (Bensur, 2002; Gentile & Murnyack, 1989; Gruber & Hobbs, 2002), and overall, the community has not viewed assessments favorably. Gruber and Hobbs’s (2002) research on the history of assessments in art education found that, historically, art education has had little use for quantifying the production of art students. During most of the twentieth century, art educators believed that art was more a tool for self-expression than an opportunity for student learning. Assessments in art education did exist in the 1920s, as a tool to measure artistic intelligence rather than ability. This style of assessing factual knowledge, rather than artistic skill, rapidly fell out of favor after the ’20s. Sometime after World War II, suggestions for assessments began reappearing in art education texts, but the self-expressive movement, which assessed the process more than the product, overshadowed them (Gruber & Hobbs, 2002). This focus on the process came in part from the writings of educational theorists such as Dewey (1934), who wrote, “the artist is controlled in the process of his work by his grasp of the connection between what he has already done and what he is to do next. . . he has to see each particular connection of doing and undergoing in relation to the whole that he desires to produce" (p. 47). This suggestion, to compare an art student’s work to the body of all their artwork over time, is an important and valid step in developing artistic skill. Dewey makes the important recommendation to look for connections between current and past work as well; yet he stops just short of suggesting that the work should be assessed in a specific manner.

In the 1960s and 1970s, a transition of emphasis from self-expression to a structured approach to education began to take place in the arts. This was due in part to educational theorists such as Broudy (1972), who wrote, “When we speak of value standards, we are not speaking of meter sticks and weighing scales. Rather, we are comparing an act or an object with an ideal that imagination has extrapolated” (p. 14). His suggestion of a comparison between the created work and what it could ideally be is a call to evaluate or assess artwork created by emerging artists and students.

The 1980s began the movement of Discipline-Based Art Education (DBAE), which is the foundation for modern-day art education curriculum; yet DBAE did not truly assess students’ work on anything more than a theoretical level. Some writing in the 1980s and 1990s argued in favor of assessments in art, such as Zimmerman (1992):

“Teaching students only about elements and principles of design, for example, does not provide them with an authentic or complete understanding of works of art. Students should learn to grasp relationships and integrate knowledge, not only to reproduce knowledge, but to create understandings.” (p. 16)

Zimmerman goes on to cite Archbald and Newman’s (1988) and Wiggins’s (1989) criteria for authentic student assessment, the first of which is to “evaluate students on tasks that approximate disciplined inquiry” (p. 15). This criterion recommends that we assess art students in ways that are similar to how they are taught, using questions specific to their area of study in order to gain insight into their learning. Eisner (1996) has similar ideas about which criteria make up good assessments. He believes that procedures should be user friendly, art tasks should relate to bigger ideas and have multiple solutions, and content should address a variety of sensory and cognitive modes. All of these criteria relate specifically to art-based pedagogy, and they align with Archbald and Newman’s and Wiggins’s thoughts on assessing students in ways that mirror how they were instructed.

In the early 2000s, art programs turned to assessments to help promote student engagement, hold students accountable for their work, and quantify their growth; yet they had not fully embraced the technological tools at their disposal. Burton’s (2001) research found that only 51% of teachers used technology for assessment and grading, while 27% never used it and 7% used it infrequently. Unfortunately, while other categories of technology use were clearly defined (e.g., research on the internet for lesson planning and preparation), this category was simply left as the “use of electronic technology for assessment and grading” (p. 143). This research showed that little progress had been made in modernizing assessments with the available technologies. Moreover, Gruber and Hobbs (2002) found that the majority of art educators disliked assessment, and the positions taken by art education writers ranged from denouncing evaluations to ignoring the issue altogether. Those who did accept assessment on a limited basis (undefined by Gruber & Hobbs) warned against its harmful effects on creativity, or they used assessments only to measure progress and not the product. Gruber and Hobbs (2002) conclude with the statement, “Now, with the call for accountability in all of education, including art education, assessment has come to the forefront with a vengeance. And art education has not adequately done its homework” (p. 16).

More recent studies suggest that technology-based assessments are beginning to emerge in all fields. Video, audio, and other bundled media approaches are being used more frequently to provide multimodal tools for teacher feedback (Brewer, 2008; Crook et al., 2012; Cruikshank, 1998; Lee, Pradhan, & Dalgarno, 2008; Mayer, 2003; Middleton, 2010; Moreno & Valdez, 2005; Palaigeorgiou & Despotakis, 2010; Woodhouse, 2012). The recent growth in studies of multimedia feedback suggests that this is the beginning of a widely used trend across multiple disciplines.



The Nature of Assessment and Feedback

Educating art students requires a different approach than other subjects. As stated earlier, art is a nuanced entity, and so art education must make an effort to teach students how to work with these subtle differences. Burton (2001) found that 84% of the teachers he surveyed preferred direct observation of artwork as their assessment method of choice, and 71% reported that individual talks with students occurred frequently or very frequently. This research reveals that looking at the artwork while discussing it is the most common method of assessing student work among his respondents. It also shows that written assessments, which prevail in other subject areas, were most often marked as “infrequently” used (43%). Burton’s study suggests art educators tend to prefer less standardized approaches when assessing student artwork and would rather use discussion of the work as a primary means of evaluation. This aligns with the idea that art is nuanced and that the best way to evaluate it is through open, face-to-face discussion.

Hattie and Timperley (2007) define feedback as a consequence of performance: information provided by a facilitator regarding aspects of a student’s performance or comprehension. Feedback has been established as an important way to improve knowledge and skill acquisition (Bangert-Drowns et al., 1991; Pridemore & Klein, 1995; Shute, 2008) and as a significant factor in motivating student learning (Lepper & Chabay, 1985; Narciss & Huth, 2004; Shute, 2008). Shute’s (2008) study evaluates whether feedback should be timely, supportive, or specific. In regard to timing, Shute found both positive and negative effects, leading to inconsistent data on the importance of a rapid response. On the one hand, timely feedback can be a motivator and can connect outcomes to their causes; on the other hand, it may promote less mindful behaviors due to reliance on information that was not present at the time of evaluation (Schmidt & Bjork, 1992; Shute, 2008). Shute’s findings also indicate that the nature and quality of the content within feedback may matter more than its complexity. This suggests that the way the message is delivered is not as important as the message itself: simply put, if it provides information about how to attain the program’s goals, then it is successful as feedback (Kulhavy, White, Topp, Chan, & Adams, 1985, as cited in Shute, 2008). In regard to specificity, Shute found that “reducing uncertainty may lead to higher motivation and more efficient task strategies” (p. 157). Shute also found that feedback is significantly more effective when it gives details that provide direction for improvement, rather than just indicating accuracy (Bangert-Drowns et al., 1991; Pridemore & Klein, 1995; Rotheram, 2009; Shute, 2008).

The importance of specific feedback is mirrored in Gentile and Murnyack’s (1989) research: “To give feedback the teacher must call attention to good parts of the performance or to the progress that is being made, while also demonstrating how to improve the weak spots” (p. 34). Feedback must not promote mindlessness by handing students the answers rather than challenging them to build toward the correct answer on their own (Bangert-Drowns et al., 1991; Shute, 2008). This aligns with other research in education showing that students need to invest cognitive energy into problem solving and creating associations (Mayer & Moreno, 2003; Mousavi, Low, & Sweller, 1995; Paas, Tuovinen, Tabbers, & Van Gerven, 2003). This challenge is clarified by Bensur (2002) when discussing art education, “where the solutions to a problem are not always black and white and often require a great deal of gray area to make them visually interesting” (p. 22). This gray area makes assessing student artwork a challenging task, and it explains the shifting positions on assessment over time.



Defining and Assessing Creativity

Gura (2008) stated that assignments in the visual arts should inspire and engage students to higher levels of creativity. Creativity is a challenging concept to work with because it exists in the gray area referenced by Bensur (2002). This means that creativity does not have a concrete definition, and therefore individuals, educators and students alike, can value it differently. Kasof’s (1995) definition of creativity suggests that it must satisfy two criteria: “First it must be original, rare, or novel in some way. Second, creativity must be valued by individuals in the context in which it appears; in other words, it must be perceived as approved, accepted, appropriate or good” (p. 6). Though somewhat vague, this definition does make some important points. The idea that creativity must be both original and valued means that determining what is creative and what is not becomes an issue of judgment rather than measurement (Boughton, 2009). Boughton goes on to stress that this idea of judgment affects the way art teachers develop a curriculum and the assessments that go along with their lessons. Assessments are often overlooked as a means to promote creativity, yet they are powerful tools that can be utilized to increase student creativity (Boughton, 2009). This is corroborated by Bensur (2002), quoting a teacher from the study who states, “Assessment has ‘allowed’ judgments of strategy and aesthetic quality to be more reliable. Gains in aesthetic thinking were ‘improved’ from periodic assessment. Strong assessment ‘brought’ about improved skill learning with my students, too” (p. 20).

Stake and Munson (2008) believe that “To understand quality one must constantly recognize differences among gradations in quality. Identifying quality is in large part, a matter of comparison” (p. 15). In the case of art education, quality and creativity are directly related. An art student can craft a detailed and technically sound work, yet without the creative component it cannot be considered of the highest possible quality and would fall somewhere in the gray area mentioned previously. A powerful work of art exists in the nuanced gray areas that stir emotion in its viewers; while it may have technical merit, technique alone will never make a work of art great.

There are many available approaches when deciding what to say during video feedback. Smith (1968) states that aesthetic criticism should have four parts, description, analysis, interpretation, and evaluation, in order to fully evaluate a work of art. This is a sensible way to model thought for our students: by beginning with the broadest view, description, the educator can slowly narrow the focus of the critique through analysis and interpretation to ultimately reach the specifics of a final evaluation. Gentile and Murnyack’s (1989) ideas align with Smith’s; they state that in the initial stages the teacher must articulate the critical thinking process in order to model it for students. Barrett (1999) provides insight specific to critiquing photographs. He believes that people who critique art have a responsibility to address the questions that artwork raises and to suggest the questions it fails to ask. Critics also must fully describe what they are looking at, even material they feel is obvious, in order to reveal the full picture. Finally, the critic may compare the work to other works by the photographer, provide evaluations, and dig deeper into the image than the artist themselves may have. This is also the job of the art educator, to help students gain a better understanding of their own work and that of other artists, and so educators should carefully model this critical thinking process when they critique student work. Video feedback provides an opportunity for students to listen to this process while the educator provides visual examples of the evaluation.



Quantifiable Measurements

If it is agreed that inspirational assessments in art are judgment based and deal in the gray areas that make up creativity, then how can arts assessments be quantified? According to Stake and Munson (2008), a good quantitative assessment should be “scalar, multidimensional, and based on criteria” (p. 14), and it is more objective than subjective. Yet their definition of a qualitative assessment is more appropriate for what has been discussed thus far; it states that a qualitative assessment is “holistic, contextual, empirical, and empathetic” (p. 14). Stake and Munson do not lean toward one type of assessment more than the other, but rather encourage a balance between the two. They suggest the classroom teacher should establish that balance, because teachers work from an understanding of quality gained through experience. This gives the classroom teacher opportunities to create experiential learning, personal interpretations, and multi-contextual connections. They do warn, however, that this freedom allows teachers to pass over great ideas from other art education sources, including new ideas about assessment. This freedom to design and implement curriculum of their choosing could prevent teachers from trying new techniques, because they have no need to do so. It could also be that teachers are wary of “the next fad” in educational design. Whatever the reason, this is something that needs to be taken into account when asking art teachers to try new technologies. The idea of educational freedom in art is supported by Dorn’s (2003) research, which suggests that “teachers with appropriate training have the ability to evaluate student performances, can govern themselves and set their own intuitive standards for providing valid and reliable estimates of their own students’ performances” (p. 367). He goes on to state that an experienced teacher’s understanding of art can be a positive force in assessment with the knowledgeable use of rubrics as a guide. Even in his support of rubrics, Dorn pairs them with the teacher’s experience in order to make them a positive force, suggesting that rubrics alone are not enough to properly assess art students.

Attempting to use a single tool to accurately assess students is a challenging proposition. Bensur (2002) points out that, “An important part of measuring standards in art is the application of tactile and visual skills, which are necessary to the development of creative and critical thinking. These skills cannot be measured with a single instrument” (p. 19). In most cases where a single instrument is used, that instrument is a rubric. There is nothing flawed in using a rubric; however, a rubric does have its limitations. According to McCollister’s (2002) research on rubrics in the classroom, as an assessment tool rubrics can help to create a scaffolding for finished work, and they do reveal some qualities and characteristics of successful artwork. However, McCollister also points out that, “The limitation of rubric use can be sameness or less variation and less risk taking in the students’ solutions to the problems that lessons pose” (p. 51). She goes on to state that, “Extensive use of criteria rubrics can hamper personal responsibility, creativity and independence” (p. 51). Therefore, we can look at rubrics as a type of double-edged sword: they do a good job measuring certain concrete characteristics, but they fall short in measuring the gray areas that make up creativity. Stake and Munson (2008) also found rubrics to be limiting: “Conceptualizing quality in arts education with scales and rubrics can be problematic. Student performance is more complex than any checklist that a teacher or assessment expert can make” (p. 16).

Yet rubrics do have intrinsic value, because they “provide less experienced students with clear information about what to do to improve their work” (McCollister, 2002, p. 47). McCollister elaborates: “The use of a rubric that is rich in description allows the teacher to disclose a great body of information to a large number of students, answer many questions, and demystify the learning at hand” (p. 48). McCollister also argues that a good rubric can be a starting point for discussion about a work of art, either during the process or afterwards. This indicates that there may be a place in the creative gray area for rubrics to act as clarifying agents, but only when they are paired with discussions or other modes of review.

According to Kennedy (1995, as cited in English, 2010), assessments require many capabilities in order to succeed: they should be straightforward, student centered, based on coursework, differentiated, and flexible; they should rest on evidence and judgment; and they should include student self-assessment. The use of portfolio assessments has met such criteria. English’s (2010) research “indicated that portfolio assessment can be [a] valid, reliable method of assessing student growth and development” (p. 107). English also referenced Blaikie et al. (2004) in stating that students felt portfolio assessments helped them to comprehend their growth and progress over the course of the year. “Good portfolios do more than provide evidence for assessment” (Boughton, 2009, p. 12); they enable more creative engagement and require students to demonstrate their level of interest. To achieve a good portfolio, one must provide in-depth and sustained reflection and engaged interest in the pursuit of thematic content (Boughton, 2009). Portfolio reviews allow students and teachers to assess growth and improvement over time, the development of artistic skills, personal expression, and the student’s conceptualization and development of projects (Anderson & Milbrandt, 2005; Dorn, Madeja, & Sabol, 2004; Popovich, 2006). Dorn’s (2003) study on “Models for Assessing Art Performance” confirms that teachers trained in portfolio assessment can use that training to conduct valid and reliable assessments of student artwork. This conclusion was supported by a high level of agreement among educators’ quantifications of expressive behavior after training in portfolio review.

English (2010) concludes her report on visual arts assessment by stating that alternative forms of assessment can engage students, increase their motivation, increase creativity, and above all else, “aid students in realizing their growth and their areas for improvement, their progress and their potential” (p. 125). Portfolio reviews are one form of alternative assessment; however, with the dramatic increase in new technologies, other appropriate methods for feedback exist. The challenge is encouraging art educators to experiment with these technologies and to find ways to incorporate them into their classrooms. Burton (2001) found that art educators “are apathetic or have strong negative feelings about new strategies, in particular those associated with electronic technology” (p. 144). This suggests that early in the 21st century, and possibly even today, art teachers are unwilling to attempt new technologies in their classrooms. Many possible reasons exist as to why art teachers do not readily accept new technologies. One reason could be that art assessments must be built upon trust, as mentioned previously (Stake & Munson, 2008); or it could be that teachers see their role as guides in a “journey of learning” (Popovich, 2006), and while students are active participants in that experience, technology may be perceived as interfering in the dynamic. However, Popovich goes on to suggest a need for educational evolution: “The time has come for art educators to move beyond traditional teaching in the visual arts and strive to develop their own pedagogical approaches to curriculum, instruction, and assessment that draw inspiration from best practices and contemporary curriculum research” (p. 38).



The Role of Technology

The argument for new technologies in education has existed, with varying levels of support, for a long time. Dewey (1916) argued that, “Every step from savagery to civilization is dependent upon the invention of media which enlarge the range of purely immediate experience and give it deepened as well as wider meaning by connecting it with things which can only be signified or symbolized” (p. 252). Recent technology, or “media” as Dewey put it, can allow art teachers to build upon the limited framework of the rubrics and quantitative assessments pushed forward in the 1990s and 2000s in order to create much deeper methods of assessment. Art educators can use these technologies in conjunction with a rubric or regular assessment in order to revisit the ideal of evaluating the process while also quantifying the product. West (2011) also suggests that in light of these modern tools, the need for standardized annual assessments, such as those required under No Child Left Behind, may be more limiting than helpful. He goes on to state that teachers now have the ability to provide nuanced and detailed feedback at every step of the learning process, and he suggests that this feedback, paired with regular evaluation, could be the new model of the educational system.

New technologies have utilized the multimodal principle for both instruction and feedback. This principle stipulates that “meaning and knowledge are built up through various modalities (images, texts, symbols, interactions, abstract design, sound, etc.), not just words” (Gee, 2007, p. 224). When using a rubric, however, feedback is given in only one mode: written. Studies have shown that “Students engaged in learning that incorporates multimodal designs, on average, outperform students who learn using traditional approaches with single modes” (Metiri Group, 2008). Why does the multimodal approach make such a difference in learning environments? According to the Metiri Group (2008), visualization helps us make sense of large volumes of information. They also state that our brains code visual and text/auditory input in separate channels, which allows us to process information through both channels simultaneously. They recommend technology as a means to vary the “level of interactivity, modality, sequencing, pacing, guidance, prompts, and alignment to student interest, all of which influence the efficiency of learning” (Nguyen & Sweller, 2006, as cited in Metiri Group, 2008). Mayer (2003) points out that while this level of interactivity may not be as efficient as completely interactive e-learning, the learners are still actively engaged because they are selecting, organizing, and integrating information. The learner is actively choosing which information is most relevant from the presented auditory and visual messages, and deciding which information to store in long-term memory (p. 130).

As mentioned earlier, Dorn (2003) suggests that viable alternatives exist to pencil-and-paper assessments. Existing technology can help to create more of these alternatives, ones that utilize the multimodal principle, the art educator’s experience, and the working relationship between the teacher and student. The challenge then becomes choosing the technology that appropriately delivers the message. Rice (1993) performed a study on media appropriateness for organizational communication using Social Presence Theory (SPT). SPT is defined as the degree to which the presence of the communicating participants is conveyed through a specific technology (Short et al., 1976). Essentially, Short et al. and Rice were looking for the perceived appropriateness of media during different types of communication. Social presence depends not only on the words conveyed, but also on non-verbal cues, such as body language or tone of voice. Rice (1993) compared communication in the following settings: face-to-face, phone, text, voicemail, email, and meetings. His work showed that in all cases, face-to-face communication was preferred. This directly relates to the traditional approach of having discussions with students about their artwork, because both art teachers and art students prefer personal conversations when discussing art projects. Therefore, it makes sense to use a technology that resembles face-to-face communication as closely as possible. Mayer (2003) referred to this as the personalization effect, where students benefit more from multimedia explanations when the instructor’s tone and words are conversational. This effect could be attributed to learners’ willingness to believe that they are in a face-to-face conversation when the tone is less formal; the opposite is true when the learner struggles to understand information presented in a formal manner.

A study by Keil and Johnson (2002) examined the use of voicemail and email in relation to Social Presence Theory. Their findings pointed out strengths in both written and spoken feedback. Students appreciated that email was timely and easily referenced at a later date; they also liked voicemail because they appreciated hearing their instructor’s voice and the more personalized approach it provided. Personalized approaches were also studied by Moreno and Mayer (2000), who found that “students who learned by means of a personalized explanation (either as speech or as on-screen text) were better able to use what they learned to solve new problems than students who received a neutral monolog” (p. 731). Their findings indicated that multimedia programs could result in better learning if the communication model addresses the student as a participant rather than an observer. With increased technological resources, a means of providing feedback for student work that can be recorded and presented digitally exists in the use of video, which allows students to receive a personal message, pick up the verbal and non-verbal nuances of their educator’s feedback, and reference the recordings for future projects.

Cruikshank’s (1998) study states that, “Video feedback allows instructors to model a reader response, with the addition of cues that have the potential to help students take in feedback as part of an ongoing conversation about their work instead of personal criticism” (p. 95). This model of feedback creates a personal approach that helps to maintain the working relationship between the art educator and the student. Video could potentially lessen the negative feelings associated with critiquing something as personal as artwork; or as Cruikshank states, “feedback may be perceived as friendly because the students can hear tone of voice, recognizing that we as teachers are encouraging them and not criticizing them” (p. 95).

Research suggests that these concepts can be used to create a system of video feedback for art students. Video combines moving images and recorded audio to create a multimodal presentation. It can easily be replayed, a strength the Keil and Johnson (2002) study found for recorded feedback, and it can approach the personal quality of face-to-face conversation, as the Moreno and Mayer (2000) study points out. With video, not only can artwork be shown and discussed, but rubrics and other written materials can also be placed on screen, directly drawing on the multimodal principle. Reports on the use of video in English classes at the Open University and the University of Warwick (Stannard, 2012) state that students preferred receiving feedback as both visuals and audio, as presented in video. They also felt that the teacher was providing more input on their work, because teachers would elaborate more and develop points further than they would in a written comment. Stannard’s writing also showed that students were able to watch the videos multiple times, and that they preferred the human feel of the videos.

Middleton (2010) found that 60% of students surveyed agreed that “receiving feedback encouraged me to take more notice of the feedback compared to normal methods” (p. 10), and 80% were interested in their teacher continuing to use video feedback methods. Crook et al. (2012) also found that 80% of the students studied preferred video as a way of receiving feedback and would like their teachers to continue using that method. Crook et al. also found that video increased student engagement outside of the classroom, as 58.1% of the students watched the videos with their peers, and many of them preferred the emotive and personal approach that video provided. Above all else, students reported that the video feedback in the study was clear and easily understood. Crook et al. (2012) showed that every teacher in the study would consider using video again in future feedback to students, revealing an interest in the approach from both parties. In some cases, video enhanced the experience for the teachers by removing some of the monotony of writing comments, and it allowed educators to freely express their thoughts in detail. Video feedback has also displayed the potential to motivate students and increase their engagement due to its personal nature and its ability to provide in-depth, constructive, and meaningful criticism (Cruikshank, 1998). Research by Middleton (2010) and Rotheram (2009) found that visual media could enhance learning while enabling educators to say a great deal in a given period of time compared to written methods of feedback. Middleton (2010) goes on to suggest that “the pedagogic benefits of video/audio media can be exploited within a Web 2.0 context to provide a new, interactive resource to enhance the feedback experience for both students and staff” (p. 12).

These multimodal approaches have proven effective, and even more so when paired with goals. Hattie and Timperley (2007) found that “The most effective forms of feedback provide cues or reinforcement to learners; are in the form of video-, audio-, or computer-assisted instructional feedback; and/or relate to goals” (p. 84). This reinforcement varies depending on the form the feedback takes, but their synthesis “across more than 7,000 studies and 13,370 effect sizes” (p. 84) demonstrated that the best presentations for feedback were video, audio, or computer-assisted instruction, all of which related to or were paired with goals.
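For readers unfamiliar with the metric, an effect size in this context is a standardized mean difference between a treated and an untreated group. One common formulation is Cohen’s d, sketched here as background for illustration only (Hattie and Timperley aggregate such values; this particular formula is not drawn from their article):

\[
d = \frac{\bar{x}_{\text{feedback}} - \bar{x}_{\text{control}}}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}
\]

Read this way, an average effect size near 1.0 for video-, audio-, or computer-assisted feedback would indicate performance gains of roughly one standard deviation over instruction without such feedback.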

Though most reviews of video feedback, or screencasting, in the classroom have been positive, a study by Lee, Pradhan, and Dalgarno (2008) found no measurable effect of screencasts on learning. Palaigeorgiou and Despotakis (2010) also noted pedagogical challenges in incorporating screencasting into a curriculum. The most prominent challenges were the lack of support or interaction when viewing a screencast and the inability to work with the information and encode it while viewing the videos. In their conclusion, Palaigeorgiou and Despotakis (2010) point out that pairing the screencasts with a written approach, such as guidelines or a rubric, adds another layer of modality and presents an opportunity to encode the information. Working with the material while watching the video, or taking notes on the screencast, would create another such opportunity and improve the approach. Rodway-Dyer and Dunne (2009) found mixed results when studying the effects of audio feedback on written work. Their research pointed out similarities to Palaigeorgiou and Despotakis’s work: unless the feedback was well organized and designed to give direct steps for improvement, it was not well received by students. However, students valued its timeliness, reported listening to the same feedback on multiple occasions, and believed that it would help them in their future performance.



Cognitive Load

According to John Sweller, who developed cognitive load theory in the 1980s, it is more difficult to process information if it is coming at us both verbally and in written form at the same time. Since people cannot read and listen well simultaneously, displays filled with large amounts of text should be avoided. On the other hand, multimedia that displays visual information, including visualizations of quantitative information, can be processed while listening to someone speak about the visual content (Reynolds, 2011).

Using a multimodal approach to feedback is not, in and of itself, a panacea for our educational needs. Adding more methods of input to our feedback deepens, and hopefully enriches, the message. However, if too much information is presented, or if it is presented in a disorganized fashion, the feedback may not make the transition from the student’s working memory to long-term memory. Mayer and Moreno (2003) say it best when they suggest, “a central challenge facing designers of multimedia instruction is the potential for cognitive overload – in which the learner’s intended cognitive processing exceeds the learner’s available cognitive capacity” (p. 43). Therefore, now that we have the capacity to present information through multiple pathways, we must begin looking at how much information each pathway can process. Researchers studying these limits in instructional design have organized their findings under the title Cognitive Load Theory (CLT).
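The CLT literature often schematizes this overload condition additively. As a rough formalization (the notation here is illustrative and is not drawn from Mayer and Moreno’s article), overload occurs when the load imposed by the material itself, the load imposed by its presentation, and the load devoted to building understanding together exceed the learner’s capacity:

\[
L_{\text{intrinsic}} + L_{\text{extraneous}} + L_{\text{germane}} > C_{\text{working memory}}
\]

On this view, multimodal feedback helps only insofar as it lowers the extraneous term or effectively raises capacity by splitting input across channels, a point developed in the research discussed below.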

“Technology as [an] instructional medium involves perceiving and processing information in different presentation modes and sensory modalities” (Brunken, Plass, & Leutner, 2003). CLT makes three assumptions about how humans process information, based on cognitive science (Mayer & Moreno, 2003). The first, the dual-channel assumption, holds that these modalities take two forms: auditory and visual (including text). The second assumption states that there is a limited amount of processing capacity available in each channel. The third assumption theorizes that truly meaningful learning requires significant cognitive processing in both the verbal and visual channels. These three assumptions are taken into account when looking at how people learn and how we should design instruction to best accommodate that learning.

In broad strokes, CLT suggests that humans learn in the following fashion. People have limited working memories, which can store only small amounts of information at a time, while their long-term memories are vast and potentially limitless. In order to transfer information from working memory to long-term memory, humans must develop schemas. Schemas categorize information, facilitate its recall from long-term memory, and aid the grouping of information in working memory to reduce cognitive strain. This model suggests that the most effective way to learn is not only to facilitate the creation of schemas, but also to increase the learner’s skill at fitting new information into previously constructed schemas. This allows the learner to draw upon previous knowledge and effectively move information from working memory to long-term memory (Kirschner, 2002; Mayer & Moreno, 2003; Mousavi, Low, & Sweller, 1995; Paas, Tuovinen, Tabbers, & Van Gerven, 2003; van Merrienboer & Sweller, 2005). This theory of learning is best explained by van Merrienboer and Sweller (2005) when they state:

If human cognitive architecture includes a massive long-term memory holding uncountable schemas and if working memory must be limited to ensure the important information in long-term memory is not corrupted by random processes, then the aim of instruction should be to accumulate rapidly systematized, coherent knowledge in long-term memory. Aiding the accumulation of useable rather than random knowledge in long-term memory means that information need not be freely discovered by learners but rather be conveyed in a manner that reduces unnecessary working memory load. (p. 155)

This process of selecting relevant verbal and visual cues, building associative connections, and organizing them based on prior knowledge is referred to by Brunken, Plass, and Leutner (2003) as the generative principle of multimedia learning.

Van Merrienboer & Sweller (2005) suggest that working memory must be limited in capacity because when working with new and disorganized information, the number of possible combinations for the unknown elements increases exponentially with each new element added, making it impossible to remember everything. Humans need to systematically organize information, and without enough time to do so, the information cannot be transferred into our long-term memories. Mayer & Moreno (2003) describe this inability to process information as cognitive overload. This overload is caused by the presented task exceeding the processing capacity of the learner’s cognitive system. They continue by stating that meaningful learning requires substantial cognitive processing, and that more over, our system for processing information has severe limitations. Therefore, presenting massive amounts of information in any format, including multimodal, will result in cognitive overload.

This limiting intrinsic cognitive load “cannot be altered by instructional interventions because it is determined by the interaction between the nature of the materials being learned and the expertise of the learner” (van Merrienboer & Sweller, 2005, p. 105). The expertise of the learner plays a large role in determining what information is necessary and what is redundant. The more experienced the learner, the more likely it is that supplementary material is redundant for them, and attempting to process that unnecessary information can impose undue cognitive load (Yeung, Jin, & Sweller, 1997). This idea suggests that educators must understand their students’ level of expertise and then look for redundancies in their multimodal approaches. Otherwise, teachers run the risk of overloading the cognitive functions of their students, which will prevent the transfer of knowledge into long-term memory.

Results from research conducted by Mousavi, Low, and Sweller (1995) suggest two things: first, that consideration of a student’s level of expertise and the needs of their working memory appears to be a major determinant of successful instructional design; and second, that “working memory capacity can be increased with dual-presentation mode” (p. 332). Their findings show that with a dual presentation mode, referred to in this paper as multimodal, working memory can be effectively increased, since information is presented through two different pathways into long-term memory. This attention to a student’s expertise can also alleviate the distracting aspects of unnecessary and redundant information. These findings are repeated in Mayer (2003) and referred to as the coherence effect: “The coherence effect refers to the findings that students learn more deeply from a multimedia explanation when extraneous material is excluded rather than included” (p. 132). Moreno and Valdez (2005) reached the same conclusion in their CLT research, stating that their experiment “strongly supported the dual-code hypothesis, which predicts that students learn better when provided with verbal and visual representations rather than visual or verbal representations alone” (p. 43). This research suggests that we can reduce cognitive load, and increase the transfer of information from working memory to long-term memory, by presenting information through both auditory and visual means, in other words a multimodal approach; Brunken, Plass, and Leutner (2003) refer to this as the modality effect.



Conclusion

From assessing students’ artistic intelligence to standing firmly against assessments in art, education has changed its position on assessment multiple times during the last 100 years. Currently, assessments are used as a way to prove student growth and quantify progress. However, aesthetic education deals with creativity and personal judgment. This human element creates a gray area filled with subtle nuances, and so creating meaningful assessments becomes a difficult task. Rubrics are frequently used as tools to quantify student work, but they do little to provide specific and detailed feedback to students, and they do nothing to address the details unique to each work of art. Research based on modern technologies, such as the multimodal learning principle, suggests that feedback is more meaningful when it is delivered through multiple pathways (audio, visual, written, etc.). Current video technology allows art educators not only to deliver feedback through multiple pathways, but also to touch on the fine details that exist in works of art, something rubrics have struggled to achieve. This is affirmed by West (2011), who states, “Digital technologies create opportunities to measure student performance in a much more nuanced and multi-faceted manner than previously was the case” (p. 8). When video feedback is paired with quantifiable grading tools such as rubrics, the pairing not only generates accurate assessments of student work but also delivers detailed feedback to the student in a personal way. This delivery style can help to engage art students and promote creativity and an interest in improving their artistic ability. “In short,” states Mayer (2003), “the promise of multimedia learning is that teachers can tap the power of visual and verbal forms of expression in the service of promoting student understanding” (p. 127). It seems, though, that the largest challenge may be convincing art educators to keep up with the technological momentum, accept new technologies, and eventually utilize them in their classrooms.


References

Anderson, T. & Milbrandt, M. (2005). Art For Life. New York, NY: McGraw-Hill.

Bangert-Drowns, R. L., Kulik, C. C., Kulik, J. A., & Morgan, M. T. (1991). The instructional effect of feedback in test-like events. Review of Educational Research, 61(2), 213-238.

Barrett, T. (1999). Criticizing Photographs: An Introduction to Understanding Images. New York, NY: McGraw-Hill.

Bensur, B. (2002). Frustrated voices. Art Education, 55(6), 18-23.

Boughton, D. (2009). Promoting creativity in the art class through assessment. Retrieved from http://www.niu.edu/assessment/committees/CAN/PresentationsPapersArticles/ArtEd-CreativityPaper.doc

Brewer, T. (2008). Developing a bundled visual arts assessment model. Visual Arts Research, 34(1), 63-74.

Broudy, H. (1972). Enlightened Cherishing: An Essay on Aesthetic Education. Champaign, IL: University of Illinois Press.

Brunken, R., Plass, J., & Leutner, D. (2003). Direct measurement of cognitive load in multimedia learning. Educational Psychologist, 38(1), 53-61.

Burton, D. (2001). How do we teach? Results of a national survey of instruction in secondary art education. Studies in Art Education, 42(2), 131-145.

Crook, A., Mauchline, A., Maw, S., Lawson, C., Drinkwater, R., Lundqvist, K., … Park, J. (2012). The use of video technology for providing feedback to students: can it enhance the feedback experience for staff and students? Computers & Education, 58(1), 386-396. doi:10.1016/j.compedu.2011.08.025

Cruikshank, I. (1998). Video: a method of delivering student feedback. Journal of Art & Design Education, 17(1), 87-95. doi:10.1111/1468-5949.00109

Dewey, J. (1934). Art as Experience. New York, NY: Perigee Books.

Dewey, J. (2004). Democracy and Education [with biographical introduction]. Neeland Media LLC. Kindle edition. (Original work published 1916)

Dorn, C. (2003). Models for assessing art performance (MAAP): A K-12 project. Studies in Art Education, 44(4), 350-370.

Dorn, C., Madeja, S., & Sabol, R. (2004). Assessing Expressive Learning: A Practical Guide for Teacher-Directed Authentic Assessment in K-12 Visual Art Education. Mahwah, NJ: Lawrence Erlbaum Associates.

Eisner, E. W. (1996). Overview of evaluation and assessment: concepts in search of practice. In D. Boughton, E. W. Eisner, & J. Ligtvoet (Eds.), Evaluating and Assessing the Visual Arts in Education: International Perspectives (pp. 1-16). New York, NY: Teachers College Press.

English, A. (2010). Assessing the Visual Arts: Valid, Reliable, and Engaging Strategies. Retrieved from http://www.evergreen.edu/mit/docs/ConnectionSpring2010.pdf

Gee, J.P. (2007). What Video Games Have to Teach Us About Learning and Literacy. New York, NY: Palgrave Macmillan.

Gentile, J. R., & Murnyack, N. C. (1989). How shall students be graded in discipline-based art education? Art Education, 42(6), 33-41.

Gruber, D., & Hobbs, J. (2002). Historical analysis of assessment in art education. Art Education, 55(6), 12-17. doi: 10.2307/3193974

Gura, M. (2008). Visual Arts Units for All Levels. Available from http://www.iste.org/store/product?ID=680

Hattie, J. & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1). 81-112. doi: 10.3102/003465430298487

Kasof, J. (1995). Explaining creativity: the attribution perspective. Creativity Research Journal, 8(4), 311-366.

Keil, M., & Johnson, R. (2002). Feedback channels: using social presence theory to compare voice mail to e-mail. Journal of Information Systems Education, 13(4), 295-302.

Kirschner, P. (2002). Cognitive load theory: implications of cognitive load theory on the design of learning. Learning and Instruction, 12, 1-10.

Lee, M & Dalgarno, B. (2008). The effectiveness of screencasts and cognitive tools as scaffolding for novice object-oriented programmers. Journal of Information Technology Education, 7. 61-80.

Lepper, M. R., & Chabay, R. W. (1985). Intrinsic motivation and instruction: conflicting views on the role of motivational processes in computer-based education. Educational Psychologist, 20(4), 217-230.

Mayer, R. (2003). The promise of multimedia learning: using the same instructional design methods across different media. Learning and Instruction, 13, 125-139. doi:10.1016/S0959-4752(02)00016-6

Mayer, R., & Moreno, R. (2003). Nine ways to reduce cognitive load in multimedia learning. Educational Psychologist, 38(1), 43-52.

McCollister, S. (2002). Developing criteria rubrics in the art classroom. Art Education, 55(4), 46-52. doi: 10.2307/3193968

Metiri Group. (2008). Multimodal Learning Through Media: What the Research Says.

Middleton, A. (2010). Media-Enhanced Feedback Case Studies and Methods.

Moreno, R., & Mayer, R. E. (2000). Engaging students in active learning: The case for personalized multimedia messages. Journal of Educational Psychology, 92(4), 724-733.

Moreno, R., & Valdez, A. (2005). Cognitive load and learning effects of having students organize pictures and words in multimedia environments: the role of student interactivity and feedback. Educational Technology Research and Development, 53(3), 35-45.

Mousavi, S., Low, R., & Sweller, J. (1995). Reducing cognitive load by mixing auditory and visual presentation modes. Journal of Educational Psychology, 87(2), 319-334.

Narciss, S. & Huth, K. (2004). How to design informative tutoring feedback for multi-media learning. In H.M. Niegemann, D. Leutner, & R. Brunken (Ed.), Instructional Design for Multi-Media Learning (p. 181-195). Munster, NY: Waxmann.

Palaigeorgiou, G., & Despotakis, T. (2010). Known and unknown weaknesses in software animated demonstrations (screencasts): a study in self-paced learning settings. Journal of Information Technology Education, 9, 82-98.

Paas, F., Tuovinen, J., Tabbers, H., & Van Gerven, P. (2003). Cognitive load measurement as a means to advance cognitive load theory. Educational Psychologist, 38(1), 63-71.

Popovich, K. (2006). Designing and implementing exemplary content, curriculum and assessment in art education. Art Education, 59(6), 33-39.

Pridemore, D. R., & Klein, J. D. (1995). Control of practice and level of feedback in computer-based instruction. Contemporary Educational Psychology, 20, 444-450.

Rice, R. (1993). Media appropriateness: using social presence theory to compare traditional and new organizational media. Human Communication Research, 19(4), 451-484.

Rodway-Dyer, S & Dunne, E (2009). Technology Enhanced Feed-Forward for Learning. York, UK: The Higher Education Academy.

Schmidt, R.A., & Bjork, R. A. (1992). New conceptualizations of practice: common principles in three paradigms suggest new concepts for training. Psychological Science, 3(4), 207-214.

Short, J., Williams, E., & Christie, B. (1976). The Social Psychology of Telecommunications. New York, NY: Wiley.

Shute, V. J. (2008). Focus on formative feedback. Review of Educational Research, 78(1), 153-189. doi: 10.3102/0034654307313795

Smith, R. A. (1968). Aesthetic criticism: the method of aesthetic education. Studies in Art Education, 9(3), 12-31.

Stake, R., & Munson, A. (2008). Qualitative assessments of arts education. Arts Education Policy Review, 109(6), 13-21.

Stannard, R. (2012, January 10). Talking feedback: moving cursors and voice comments could revolutionize the way teachers correct learners’ work. The Guardian.

van Merrienboer, J., & Sweller, J. (2005). Cognitive load theory and complex learning: recent developments and future directions. Educational Psychology Review, 17(2), 147-177.

West, D. (2011). Using technology to personalize learning and assess students in real-time. Washington, DC: Brookings Institution.

Yeung, A., Jin, P., & Sweller, J. (1997). Cognitive load and learner expertise: split-attention and redundancy effects in reading with explanatory notes. Contemporary Educational Psychology, 23, 1-21.

Zimmerman, E. (1992). Assessing students’ progress and achievements in art. Art Education, 45(6), 14-24. doi: 10.2307/3193312

