One of the culminating activities of our Coaching Practicum course is the creation of a one-page document articulating central Coaching Belief Statements. In the past, this assignment has taken a traditional, text-based format. Since we hoped the document would serve an authentic purpose at participants' sites, we wondered whether a visual representation might better support these aims. Using a variety of technology tools, our students created visually appealing belief statement representations that pushed them to communicate their vision for coaching clearly and succinctly. But for every gain this format offered, we had to weigh corresponding losses.
After reading Kelly Gallagher's Readicide a number of years ago, I've been hesitant about using post-it notes in the reading classroom. In the book, he describes these tedious interruptions as disruptive to the flow of the reading experience. Of course he is speaking in extremes, but his words have stuck with me to the point where I've almost discarded sticky notes altogether...almost.
“Look at the process it took for [the student] to get there as a learner, and how he or she reflected on that experience.” This timely post from Edutopia looks at assessing project-based learning and the dangers of relying on final projects alone. I've written about this topic previously and identified some strategies specific to assessing digital multimodal compositions. Emphasizing the process of design by looking closely at student thinking through embedded activities is at the core of this work and, really, of writing instruction more broadly.
But I had never thought about characterizing assessment as play. One of the most powerful ideas from the article is repositioning the assessment process as participatory in order to enable student agency. Our recent work in assessing our Digital Salon projects in ELA Methods resonates well with this approach, though I had thought of the process as reflection-based rather than play-based. How does taking a play-based stance open up new opportunities? Using "play" evokes... creativity; multiple pathways to sense-making; collaborative innovations; unintended consequences; whole-child engagement; moments of 'flow'; an array of interpretations. Most importantly, it better captures the experience of learning through doing, in all of its iterations and patterns, as opposed to the unidirectional nature of most lesson designs and assessments. This rings true to the experiences of teachers' play we're currently exploring in our upcoming LRA paper. It offers a much-needed counterpoint to the over-reliance on assessment data that limits both process and outcome in many K-12 classrooms.

A while back, I posted about a PD opportunity that helped me reimagine what engaging webinars might look like. After testing these strategies out in our Coaching program, I was ready to give it a go in the Methods course. Throughout the semester, there were many times when I wanted to meet with students to discuss important themes arising from their work. Given the constraints of the blended syllabus, this wasn't always possible, as sometimes we wouldn't meet face-to-face for multiple weeks. Enter the webinar. To address this need, I planned interactive webinar activities around the following three topics.
As I'm currently teaching both ELA Methods and the Practicum in Student-Centered Coaching, I often find great resonance between the ideas being discussed by beginning teachers and experienced coaches. It's amazing to realize how often we are all much more 'on the same page' than we might think.
One of my coaches is currently embarking on a coaching cycle with a teacher who wants to design a menu of assessment options for his students to choose from in order to demonstrate their learning within a unit. They are currently working through the steps we have been thinking about in ELA Methods over the past two weeks: how to make sure that assessments are aligned to the standards, how to compare the rigor and experience of students engaging in those assessments, and how to determine what scaffolding is needed to support students. They are working with this resource, which might be helpful to consider. It connects Bloom's Taxonomy and Webb's Depth of Knowledge (another way to think about how activities ask students to engage with content in different ways), and it gives a nice list of examples of what different activities might look like depending on which level you are focusing on.

The wide-ranging and ever-expanding presence of technology tools entering our classrooms brings a multitude of opportunities for innovative projects. Multimodal composition comes to life through student-created digital stories, podcasts, websites, and large-scale projects that include all of the above. In many ways, digital projects pose the same challenges to traditional forms of assessment as familiar low-tech projects do. Like project-based learning, digital projects raise questions about contribution and collaboration, about documenting the process of problem solving, and about determining exactly what constitutes evidence of craft.
Teachers will also likely encounter the age-old composition struggle of balancing process and product, weighing creativity and originality alongside the standards of conventions. However, I would argue that the challenge of assessment is even more pronounced for digital stories because of the added complexity of working with technology tools–a task that is often new for teachers and students alike. Prior experience and exposure can weigh heavily on a student's ability to produce a high-quality digital project, though even the most tech-savvy can fall victim to the unlucky technical glitch or misstep that causes serious setbacks to the project. How do you account for these considerations in your grading rubric? And what characteristics or skills does your rubric actually assess? These are always tough questions, only made harder with digital projects. We recently explored them in the University of Wisconsin-Madison's ELA methods course I taught last year to preservice elementary teachers. While each teacher will inevitably come to his or her own interpretation of what assessment looks like, the following references might provide helpful starting points for the conversation. By no means an exhaustive list, these are practitioner-friendly tools to help us all inquire into what matters, and why, in crafting assessments of digital projects.

National Writing Project, Digital Is, Multimodal Assessment Tool
The conversation really began for us when we came across the National Writing Project's Digital Is site, which houses the NWP Multimodal Assessment Project. One of the most interesting features of this project is how the process is documented–helping new and veteran teachers alike dive into the complexities of crafting a multimodal assessment rubric. This was particularly enlightening for my preservice teachers, as it noted the limitations of using the widely accepted 6+1 Traits rubric for these new types of digital projects. The latest project draft highlights five elements of multimodal composition that teachers might use to guide and formulate assessment tools: context, artifact, substance, process management and technical skills, and habits of mind.

Troy Hicks, Crafting Digital Writing & Assessing Digital Writing
The straightforward examples and discussions of the digital writing process in Hicks's book were helpful and practical for our classroom discussion and application. While providing examples and analysis of teacher-created projects, Hicks draws attention to the intentionality of the process as a whole: teacher intentionality in designing the assignment, student intentionality throughout the production process, and the degree to which assessment is intentionally authentic in nature. Intentionality is highlighted through his application of author's craft to digital writing using MMAPS (pp. 20-21): mode, media, audience, purpose, situation for the writer, and situation of the writing (for a deeper discussion of this very helpful heuristic, see The Digital Writing Workshop, pp. 56-59). These characteristics are equally useful for designing an assessment that is responsive and attentive to the goals of the project. In addition, students were particularly drawn to the discussion of "Habits of Mind," which mirrored many of the less quantifiable but highly valued goals from their perspective: curiosity, openness, engagement, creativity, persistence, responsibility, flexibility, metacognition (p. 26).
His follow-up book, Assessing Digital Writing, goes one step further to provide a collaborative protocol for looking closely at student writing. While not offering any easy answers, the perspectives it gathers highlight the ways in which craft is similar and different in digital contexts and how this impacts the assessment process.

Formative Tools
Once the purpose and goals for a project are clearly established and a rubric has been created (or co-created with students), the role of formative assessment and feedback is just as important, if not more so, when implementing digital projects. These projects can often feel "fuzzy" and uncertain the first few times you move through them. I have often found that keeping the limits "open" and encouraging my students to take risks requires me to give up some control and clarity over how I envision the final product. For me this choice feels empowering for my students; however, they don't always interpret the responsibility of the unknown in the same way. Instead, many students have experienced anxiety and confusion over not quite understanding "what" exactly they were producing (even as I reassuringly supported them in embracing the freedom of the opportunity). Later on, I often found this anxiety becoming my own as I struggled to assess the work of: the student who produced the bare minimum and used no feedback to make improvements; the student who had ambitious ideas but failed to pull them together; the student whose computer crashed the night before, losing everything, and who started anew to submit something less than her best but the best she could manage. Of course these examples are no different from the assessment challenges of traditional writing assignments; they only become more widespread and amplified when new technology tools are added to the mix. The addition of carefully designed formative and informal assessments, however, aided my analysis of the project and of student work, helping to validate the meaningfulness of the assignment through the insights and skills students were developing–even when their final pieces were not necessarily successful in terms of traditional rubric categories. Some strategies I've employed include:
It seems I have been writing non-stop for the past year or so, but not much of that writing has ended up on this blog. My MCEA students have all graduated and mostly accepted beginning teacher jobs; Teachology has moved on to its third student cohort; prelims have been successfully completed; and I have moved into a full-time position as an outreach specialist. The most interesting shift for me has been one of perspective along the professional trajectory of educators: from working with preservice teachers to working with instructional coaches. Lately, my brain has been consumed with ideas for how student-centered coaching might be adapted and adopted in ways that support preservice teacher supervision and coaching around digital learning. In both of these contexts, I've been considering the missing piece: effectiveness as measured through student evidence. Perhaps my favorite part of Diane Sweeney's work on student-centered coaching is its repositioning of "data" as evidence of student thinking and skills through authentic engagement in literacy activities. Yet I continue to struggle with how to discuss and interpret such evidence: 1) within digital contexts and 2) within student teaching contexts.
Interpreting Student Evidence in Digital Contexts
This leads, for example, to the question of assessing digital texts, something I continue to explore and feel confounded by. Last fall, I was excited to get my hands on a new book by Troy Hicks and the National Writing Project, Assessing Digital Writing: Protocols for Looking Closely, that tackles this question by emphasizing collaborative protocols for looking closely at writing. Drawing on habits of mind and broad considerations for digital writing, the book eventually makes the argument that digital texts perhaps should not be evaluated through the same methods and approaches as traditional texts, because they do, in fact, do something different. Using these resources and samples of some of our own student work, I worked with my (then) student Gracie Binder to lead a session on Assessing Digital Writing at our fall Teachology conference and again in the spring at WEMTA. Our experience elicited really good discussion about the complexities of looking at student writing in digital contexts–particularly as we consider what 'evidence' is 'evidence of.' How do we distinguish strategic moves? Or awareness of those strategic moves? Is it enough that students can make those moves without consciously articulating the why? Or is the why the crucial piece of the critical dispositions we are supporting students to identify, interrogate, and flexibly adapt across contexts? I think I left more confused than our attendees. The biggest takeaway seems to be that none of this work is easy. There is no exact conversion or method. We are constantly in the process of questioning, exploring, and revising as we make our way. And yet, perhaps there is something freeing here in the opportunity to chart this territory with our students that feels more authentic and responsive than simply turning to 6-Traits rubrics. Which also brings me back around to reconsidering how we might approach traditional writing beyond the rubric as well.