It seems I have been writing non-stop for the past year or so, but not much of that writing has ended up on this blog. My MCEA students have all graduated and most have accepted beginning teacher jobs; Teachology has moved on to its third student cohort; prelims have been successfully completed; and I have moved into a full-time position as an outreach specialist. The most interesting shift for me has been one of perspective along the professional trajectory of educators: from working with preservice teachers to working with instructional coaches. Lately, my brain has been consumed with ideas for how student-centered coaching might be adapted and adopted in ways that support preservice teacher supervision and coaching around digital learning. In both of these contexts, I’ve been considering the missing piece: effectiveness as measured through student evidence. Perhaps my favorite part of Diane Sweeney’s work on student-centered coaching is its repositioning of “data” as evidence of student thinking and skills through authentic engagement in literacy activities. Yet I continue to struggle with how to discuss and interpret such evidence: 1) within digital contexts and 2) within student teaching contexts.
Interpreting Student Evidence in Digital Contexts

This often leads to the question of how to assess digital texts, something I continue to explore and feel confounded by. Last fall, I was excited to get my hands on a new book by Troy Hicks and the National Writing Project, Assessing Digital Writing: Protocols for Looking Closely, that tackles this question by emphasizing collaborative protocols for looking closely at writing. Drawing on habits of mind and broad considerations for digital writing, the book eventually makes the argument that digital texts perhaps should not be evaluated through the same methods and approaches as traditional texts, because they do, in fact, do something different.

Using these resources and samples of some of our own student work, I worked with my (then) student Gracie Binder to lead a session on Assessing Digital Writing at our fall Teachology conference and again in the spring at WEMTA. Our experience elicited rich discussion of the complexities of looking at student writing in digital contexts, particularly as we consider what ‘evidence’ is ‘evidence of.’ How do we distinguish strategic moves? Or awareness of those strategic moves? Is it enough that students can make those moves without consciously articulating the why? Or is the why the crucial piece of the critical dispositions we are supporting students to identify, interrogate, and flexibly adapt across contexts? I think I left more confused than our attendees.

The biggest takeaway seems to be that none of this work is easy. There is no exact conversion or method. We are constantly in the process of questioning, exploring, and revising as we make our way. And yet, perhaps there is something freeing in the opportunity to chart this territory with our students that feels more authentic and responsive than simply turning to 6-Traits rubrics. That also brings me back around to reconsidering how we might approach traditional writing beyond the rubric as well.