I finished reading “When Prison is a Refuge” in this week’s Chronicle Review, about people who have suffered horrible abuse and deprivation and for whom, in the short term, prison is a place where they are warm, safe, and can get medical (but not psychological) help; but prison is a false hope because of the consequences for them after they are released. Then I turned the page and started to read “The Future of the Professoriate” (subtitle on the cover: “It certainly will involve less tweed”) and thought “Who gives a shit?”. I put it down, then took the pile of catalogs that have been accumulating on the kitchen counter and threw them in the recycle bin.
November 22, 2013 by Isobel Stevenson · Student Achievement
One of the great things about our grant-funded work is that it is a package deal, and part of the package is an external evaluator who asks extremely good questions. So this blog post is to mull over some of her wonderings.
Is there some threshold level of efficacy at which facilitative coaching becomes appropriate?
Given that I have seen the feedback questions used quite successfully with kindergartners—and given that our definition of facilitative coaching is merely an elaboration of supporting people in asking and answering the feedback questions—I think the answer is no.
I don’t mean to be trite, so here are a couple of qualifiers. First, when someone describes someone else as “not reflective”, I really wonder about that diagnosis. It is very hard for me to believe that someone who has made it through graduate school and has a track record of professional success can be described adequately as “not reflective”. What I can believe is that they cannot find time to be reflective, although even that strains credulity. You can’t stop thinking just like you can’t stop breathing, and am I really the only person who spends her time in the shower or stuck in traffic thinking about what has happened recently and making meaning of it? I think not.
I can more readily believe that people are too cognitively overwhelmed to think about what they are doing in a way that helps them make meaning. There are lots of interesting things to read on that topic, such as Malcolm Gladwell on the difference between panicking and choking and Karl Weick’s various articles on sense-making, which I think are brilliant. I think also Donald Schön’s work is really instructive here. When he writes about the reflective practitioner, he talks about professionals interrogating the problem they are trying to solve, drawing on their previous experience as a starting point. Well, if you’re brand new, every problem is a new problem. Likewise, Gary Klein talks about how experienced practitioners (he started his research with fire-fighters) spend the bulk of their time figuring out what the problem is, because once they understand the problem they know what to do. But if you’re brand new…
I understand that the novice factor does provide an argument for having a mentor rather than a coach. And I would not argue that the coach should never tell the client what to do. But when the coach does that, I want it to be done intentionally, not because the coach doesn’t have another strategy to employ. And I think that helping someone clarify the target is a valuable activity that is not synonymous with telling someone what to do.
Which leads me to another major factor here, and something that we often take to mean that a person is not reflective: people are frequently just not clear on the target. This happens when:
- New principals (although not just new principals) don’t have frames to help them make sense of what they are experiencing. Problems that they could explain as teachers turn out not to be explicable from their new point of view;
- There are so many urgent problems to deal with that it’s difficult if not impossible to know which one to tackle first—which connects back to the idea of cognitive overload.
I think that the next step in working with someone in this situation is to ask questions like:
- Do you have a clear idea of the target?
- How can you get a clearer idea of the target?
- What other resources do you have to help with…?
- Have you experienced anything analogous to this before? How did you handle it then?
- Have you seen anyone else deal with this kind of issue? What did he or she do that worked?
This last question frequently leads to a discussion about how that other person’s context was different, but it often provides a perfectly viable starting point.
Don’t you need some image of what competence looks like in order to assess your own performance?
Yes, I agree that that is true. Speaking for myself, I do have a bit of a biased reaction to the word “rubric”, mainly because of the way I too often saw it used by teachers who a) seemed to think that because they had written down what they considered quality performance to be, the kids should know what to do with it, and b) thought of a rubric as a way to grade rather than as a starting point for co-creating clarity of target.
I would, therefore, resist the idea of providing the coaches with a rubric and saying “here, go self-assess against that.” But I think we should entertain the idea of having the coaches develop a rubric—and we have a good starting point there, because the document they have called “Making the Most out of Coaching” is actually a description of what good coaching looks like and they could analyze that—or provide them with a rubric that we ask them to revise, in the same way that we asked them to revise the value chain at the last community of practice.
Have you all begun to articulate what effective coaches do, in the form of competencies? Do you see a value, or a risk, in doing that, and in sharing it with the coaches?
In the form of competencies, no. In the form of “Making the Most out of Coaching”, yes, and that is a document that is in fairly wide use, although as I implied above, not everyone sees it as an obvious place to start. I would argue that each CoP is designed to move us closer to clarity of target on what good coaching looks like. Obviously, we are not there yet.
November 22, 2013 by Isobel Stevenson · Student Achievement
Yesterday we worked with principals on giving feedback to teachers. We watched a video of a middle school math lesson, and then staged a role play with Pete playing the role of the teacher and three supervisors (Pat, Tim, me) giving feedback on the lesson. I haven’t been able to stop thinking about the response that we got from the principals to the role play. Here is what is striking to me.
First, and most obviously, is the valuing of style over substance. They were so focused on the style issue that it overshadowed what was actually said. My example, which was supposed to be the model, struck them as so cold and distant that almost none of them could accept—even when we pointed it out to them—that mine was the one that made no judgmental statements and gave no direction; instead, the questions were designed to have the teacher answer the three feedback questions: what was your goal, how did you assess what was going on, and what did you do to close the gap between your assessment and the target? I think this tells us a lot about their ability to analyze, and indicates how we should have scaffolded the activity.
One way we could have scaffolded it would have been to have someone warmer and fuzzier than me be the ideal—or at least as warm and fuzzy as Pat and Tim. Seriously, they interpreted what I did as disengaged, whereas they saw Pat and Tim very differently. One of the comments from the feedback was “The scripted form (#3) is great but delivery and tone isn’t acceptable (it would turn off staff).”
Another would have been to add a step to the exercise in which they had to track each question or statement by the coach and the response it elicited. At the principal coach CoP, the exercise we asked them to do (after modeling it first) was to write down the question asked (in this case they were listening to their own coaching) and the response it elicited, and then to draw conclusions about which questions were most successful in moving forward the thinking of the person being coached. That, in theory, should have screened out the affective part and got them to focus on, and justify, what was most effective.
The other part that was really striking to me was how quick they were to interpret something as evaluative, and therefore likely to elicit a negative reaction from teachers. During the debrief, here are the things that they thought teachers would interpret negatively:
- Asking questions
- Taking notes
- Not making eye contact
- Not spending time on niceties at the start
- Going back the next day (even though Pete had asked me to come back the next day)
- Tone of voice
To me, this is evidence of the damage that is being done in the current evaluation-focused climate. Leaders in supervisory positions have to be careful what they say and how they say it, because their experience has been that once something acquires an evaluative connotation, it will be resisted. This is also evidence in support of the theory that people distinguish between ego-involved and task-involved feedback, and behave differently as a result. When feedback is focused on the task, they experience it as help in performing that task better. When it involves judgment, their self-worth comes into question; they feel the need to justify what they do or did, and they stop listening to the information that could help them get better.
Another consistent theme I have noticed—this came through particularly strongly when I was working in California this spring—is that people tend to assume that when someone gets defensive in response to feedback, it is because of the way the feedback was delivered. In other words, how people respond to feedback is a function of how that feedback is couched rather than what was actually said. It occurred to me that this is a corollary of what happened yesterday—two sides of the same coin. It leads to a theory of action that says that the way to be a better supervisor is to perfect a certain style (softer, warmer, more genial) rather than to get better at giving feedback (which in our world means asking questions about target, current performance, and action steps). That is so interesting: it means that people define “giving feedback” in fundamentally different ways.
So that is the next design challenge: to create experiences that will allow leaders to test their underlying assumptions about feedback and make sense of what happens when they do.
October 24, 2013 by Isobel Stevenson · Student Achievement
One of my colleagues tells me that I ought to write more, so I thank her for pushing me to do that, and for being an incredibly skillful and thoughtful colleague.
I worked with her today in a coaching conversation. She brought three problems of practice to our meeting, and we talked through them. As I said, she is a thoughtful person, and so the conversation was very engaging. Always interesting to talk to someone whose mind works fast and goes deep. On the one hand, it’s great to watch them do all the right thinking, and on the other hand it’s hard to think that you’ve been useful.
We have been having an ongoing conversation about the times when it is appropriate to step in and give [what you perceive to be] the “right answer”. She ran her own thought experiment with her coaching clients, paying attention to what happened when she did less telling about her own experience, which I thought was very impressive. And we talked about the fact that I did share my own thinking at times, but only when it felt like to do otherwise would be dishonest, like I was a magician withholding information for the sake of maintaining an illusion.
We talked about her “take-aways” at the end of the meeting, and then she followed up with more in an email. Here they are. My favorite part is that they don’t all make sense to me. Which, of course, doesn’t matter. That’s why I like it.
- The power of silence
- Great to be on the “receiving end”—sensitizes you to how it feels to be on the other side of coaching moves
- Have to go back and think about belief systems
- Liked the tip: “what would the perfect coach say right now”
- Put it back on the client—it’s their work
- Liked the combination of pushing and sharing expertise
- You called attention to my facial expressions to help me make meaning and when I gave the wrong answer (he won’t be forthcoming so I won’t be forthcoming)
- You probed until I came to the right place. And then you revisited that concept later with the quid pro quo reference
Coaching is much of what I work on at the moment, but I am not always working as a coach. So it was great today to get to do that in the most enjoyable of circumstances.
October 5, 2013 by Isobel Stevenson · Student Achievement
I have been working a lot on feedback recently—it is a recurrent theme in my work and the concept that I have the hardest time with. Here are the points that bear repeating, that I repeat to myself over and over again.
- The most important feedback—in the sense that we are most likely to believe it, although I am not sure that makes us more likely to act upon it—is the feedback we give ourselves. My source quotation on this comes from a study that looked at exactly this question—what matters to you most, how you think it went or how someone else thinks it went: “The results of this study rather dramatically point out the superiority of self-generated over externally generated performance oriented feedback” (Ivancevich & McMahon, 1982, p. 370).
- From the work of Hattie and Timperley: effective feedback answers three questions: where am I going, how am I going, and how do I get from here to there? Further, we can provide feedback at four levels: self, task, process, and self-regulation. Here’s what H&T say about these: “Thus, there is a distinction between feedback about the task (FT), about the processing of the task (FP), about self-regulation (FR) and about the self as a person (FS). We argue that FS is the least effective, FR and FP are powerful in terms of deep processing and mastery of tasks, and FT is powerful when the task information subsequently is useful for improving strategy processing or enhancing self-regulation (which it too rarely does)” (Hattie & Timperley, 2007, pp. 90-91).
- The quotation that best sums it up is this one: “The indispensable conditions for improvement are that the student comes to hold a concept of quality roughly similar to that held by the teacher, is able to monitor continuously the quality of what is being produced during the act of production itself, and has a repertoire of alternative moves or strategies from which to draw at any given point” (Sadler, 1989). I have seen and heard this quotation I don’t know how many times in the context of what it means for the improvement of classroom instruction. But it is just as true as applied to adults.
- The most successful people are the ones who self-regulate—they ask the three questions all the time, and they go and find answers. We know this from the work on self-regulation by Schunk and Zimmerman, and from the work on feedback-seeking behavior in successful individuals in the field of organizational development. Feedback and self-regulation are incontrovertibly and inextricably linked.
- Thus feedback geared towards scaffolding and promoting self-regulation doesn’t always look like what we often think of as feedback, because it almost always involves asking another question. What you have to understand here is that asking a question is itself an intervention. This is an idea I learned from the field of action research.
- Action research, and the cycle of continuous improvement, are also strongly tied to feedback, because the three questions are like a minimalist cycle of inquiry. Here is the cycle of inquiry as we are using it in our current work, and you can see the intersection:
These ideas make perfect sense to me. I can see all the connections in my head. I am working on making my thinking visible, so that I can talk about these ideas with others without feeling tongue-tied, and so that I can continue to poke holes in my own thinking.
March 30, 2013 by Isobel Stevenson · Student Achievement
Dylan Wiliam, the guru of formative assessment, has a new book out called Embedded Formative Assessment. Additionally, and more entertainingly I think, he collaborated with the BBC to produce a two-part documentary called The Classroom Experiment. This was shown in Britain quite a while ago, but I just found that the two episodes are available on YouTube at the moment. I don’t imagine this is legal, but I’m not complaining. I think it’s great stuff.
The documentary shows Professor Wiliam going in to a comprehensive school in Hertfordshire to work with the teachers there on the implementation of strategies for improving student achievement. Featured are: the use of randomizing techniques for questioning students, daily exercise, constant monitoring of student understanding, and feedback instead of grades.
There are several things I really like about The Classroom Experiment.
- It is really gripping. I initially was thinking of using clips from it to illustrate a couple of different projects I’m working on, but it really has most impact as a story, an unfolding of events. I got really caught up in the drama of how the various characters (teachers and students!) deal with the disruption to their routines, and more significantly, to their understanding of what school is about and what roles they play in that.
- It shows the teachers really struggling with implementation. This strikes me as much more real than most of what we generally get to hear or see about innovation, which always sounds so easy and straightforward (yes, all you have to do to meet the needs of all students is differentiate, or of course, you just need to teach using cooperative learning). These teachers would, I am sure, have given up their attempts to implement if Professor Wiliam had not been able to step in and tell them why they were struggling and suggest what to try next. The math teacher is particularly hard to watch. Her first attempt at using the colored cups is a disaster, as the kids are trying desperately to tell her that they have no clue what is going on in her classroom, and she gives up in disgust. I always think that our usual pathetic attempts at professional development for teachers are like giving them one skiing lesson and then asking them to ski down Everest.
- It shows the impact of changing classroom practices on the students. I always think of classroom formative assessment as benefiting the lowest achieving students most, but this documentary also spends quite a lot of time with the higher achieving students, and their reaction to the change is fascinating. Some of the interviews with the students show them being incredibly insightful about how classrooms and education in general work and what impact it has on them.
Please watch! Let me know what you think.
February 18, 2013 by Isobel Stevenson · Student Achievement
The number of educators working in pre-K-12 schools who read research articles is infinitesimal. The number who read books about education, or journals aimed at a practitioner audience, is much larger. Even this audience, though, subjects what it reads to a credibility test based on its own experience of what makes sense, what is likely to work, and what is simply not reasonable. The prevailing attitude is that they read to keep up, to know what is current; and at the same time, while they are reading, they are gauging the plausibility of what the author has to say. If they come to the conclusion that the author “has no idea what schools are really like” or is (the worst insult) “clueless”, then they pay the work no more mind.
There does not appear to be much doubt that there is a gap between research in education and the practice of educators. This topic has, in fact, been the subject of many books and articles. There is also widespread consensus, at least among educators, as to why this divide exists. Here is a summary, taken from my own experience but also including what others have written about the topic.
They don’t have time
Occasionally I read our local newspaper online, and an article about education, particularly one about contract negotiations and therefore money, is liable to spawn a colony of comments about how much teachers get paid considering how much time they get off in the summer. I would like to see students in school for more days during the year, and I would also like to see teachers paid more, but both of those opinions are beside the point: during the school year, teachers are pressed for time.
When I first considered moving into a job that was not a classroom teacher, the position I was contemplating was grant-funded, and therefore I faced the prospect of giving up my contract with my school district and the security that represented. I remember going for a walk with my husband and our dogs and trying to lay out the pros and cons for him. All he would say was, “yes, but will you have to grade?”. I don’t think it was until that walk that I fully understood the extent to which being a classroom teacher consumed my life outside school, and I would freely admit that all the jobs I have held since leaving the classroom have all been easier than being a classroom teacher.
Nobody should be surprised that, given the myriad demands on their time, educators do not make reading research a priority. As a principal once said to me, “I’m too busy doing it to read about it.”
There is too much research to keep track of
Even if teachers did have time to read the technical literature, how would they decide what to read? Mountains of articles are published every month. The American Educational Research Association alone publishes three journals, and then there are the more specialized publications in literacy, mathematics, and so on. The Social Science Citation Index (SSCI) catalogues over 150 education journals. A hundred and fifty! You couldn’t possibly keep track of all those, even if that was your full time job.
Researchers in academic positions have the luxury of considerable specialization; a historian will have a specialty like the Civil War or Ancient Egypt, and a geographer may be an expert on Arctic biomes or how tragedy affects a place. The same is true in education, where a researcher in mathematics education probably knows only a limited amount about second language acquisition. A fourth grade teacher, on the other hand, has to know something about everything.
The problem of volume leads teachers to a couple of practical solutions. First, there are publications that feature articles similar to news stories: they are current, they focus on personal experience, and they are written for a general audience. Educational Leadership is a great example of this kind of publication, and with a circulation of over 150,000 (not to mention those copies that are passed from teacher to administrator to teacher, or read by many teachers in the teachers’ lounge), it is obviously a very important source of information for educators. Second, there are books that take a very practical approach to making research accessible to educators; for example, Hattie (2009) and Marzano, Pickering, and Pollock (2001). These researchers have perfected the art of the meta-analysis: taking vast numbers of research studies and drawing conclusions about what each individual mote of research contributes to what we know about good practice.
Previous experience with the uselessness of research and theory
When I talk to principals about what they are looking for in terms of instructional practices when they visit classrooms, and how they know what to look for, they almost invariably cite their own experiences as teachers and as children as the reason. Equally, they are apt to talk about the lack of utility of teacher training and principal preparation programs as being overly focused on theory and insufficiently practical. One principal described his teacher training as being like learning to swim by being told what the back stroke is and when you would use it, without ever getting into the pool. Then when he got a job teaching in inner city Chicago, he jumped into the pool and, in his words, sank like a stone.
I know from personal experience that some teacher training programs are better than others at weaving together research and practice, for I have gone through student teaching twice. I imagine some readers’ first thought will be that I flunked out the first time, but happily, this was not the case. In fact, when I moved to the USA, my teaching license from England did not transfer, as I had had too little classroom experience as a full-time teacher. I recall that I needed two years of experience, and I had only one. I was required, therefore, to take certain classes and to repeat student teaching.
Although I didn’t know it at the time, my first teacher training experience was in a program that had been explicitly and thoughtfully designed to encourage reflective practice in its participants. It was a rather odd experience to read about my experience in quite clinical terms in a book about reflective practice a quarter century later (McIntyre, 1993). In contrast, my second teacher training experience was perfectly traditional: theory followed by application. That methodology was behind the times 25 years ago, yet is still the experience of many aspiring teachers.
The fact that educators do not associate research with practice is not the fault of the researchers. Nevertheless, the lack of connection seems to have led educators to think of research as existing almost in opposition to practice, and this of course is unfortunate for the profession as a whole.
Hattie, J. (2009). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. London: Routledge.
Marzano, R. J., Pickering, D. J., & Pollock, J. E. (2001). Classroom instruction that works: Research-based strategies for increasing student achievement. Alexandria, VA: ASCD.
McIntyre, D. (1993). Theory, theorizing and reflection in initial teacher education. In J. Calderhead & P. Gates (Eds.), Conceptualizing reflection in teacher development. London: The Falmer Press.
November 8, 2012 by Isobel Stevenson · Student Achievement
Here’s a nice little story from Tim Harford’s blog. He’s worth reading, not least because he writes a lot about learning from failure.
I recently spoke at Wired 2012 and I felt it went well (video to follow, when they put it up).
Afterwards, people came up, shook my hand, patted me on the back and told me I did a great job. That felt nice, but it won’t help me to do a better job next time.
Elsewhere in the building, other people gathered in corners and grumbled about all the things I did or got wrong. (I don’t know if this happened. I assume it did. You can’t please everyone.) That didn’t help me to do a better job next time, either.
But someone did something helpful. Bruno Giussani of TED, seeing someone praise me for speaking without slides, immediately got to the point. “You talked about the Spitfire,” he said, “But this is an international audience. Many people won’t know what you’re talking about. You should have shown just one slide: a photograph of a Spitfire. Then everyone would have understood.”
Next time I give a similar speech, I’ll be showing one slide: a photograph of a Spitfire.
It isn’t easy to get straight to the point and offer a single, focused suggestion for improvement. And the truth is, we rarely seek that kind of feedback. When we ask “what do you think?”, we’re usually looking for those confidence-boosting pats on the back. But giving such feedback – and seeking it out – is hugely important.
And here is a photograph of a Spitfire.
August 22, 2012 by Isobel Stevenson · Student Achievement
The mission of the school is the shorthand for the meaning that the people in the school attach to their work. Several writers have made the point that there is a difference between a mission statement and a mission. During a fabulous summer camping trip, I was in one of the gift shops at Mesa Verde, and one of the little cards in a display case described the significance of a fetish, and how fetishes are accorded a power that the physical object itself does not possess. If you don’t believe in the potency of a fetish, then the object is more accurately described as a carving. I think this is a great metaphor for a mission statement. If you don’t believe in the power of the mission that the statement represents, then the statement is a poster, not a mission.
July 31, 2012 by Isobel Stevenson · Student Achievement
I keep, and look at from time to time, a memo I wrote more than ten years ago to the higher-ups in the district where I was at that time a principal. I wrote it because I and the special education teachers with whom I worked knew that the special education system was not working the way we thought it should. The memo asked for FTE (full-time-equivalent staffing) to support students who needed extra help in reading without qualifying them for special education services.
We wanted to do this for two main reasons. First, many of the students who needed help were what we called “gray area” kids. They scored below average on IQ tests, and their achievement was also low, and therefore the magical one standard deviation discrepancy was not in evidence. Second, we didn’t like the idea of labeling students as broken even if they did have scores that met the magic formula. Ascribing that discrepancy to a disability seemed an inferential leap of daunting proportions that had implications for the student well beyond the initial staffing.
The meeting to talk about the memo didn’t last very long. We were told that that wasn’t the way the funding worked, and that there wasn’t anything to be done. I look back at that now and I’m a little disgusted with myself. I know so much more now about, well, everything. How special education functions as a safety valve in an unforgiving system. How to teach adolescents to read—we really didn’t do a good job with that 10 years ago, and we do a much better job now although we still have a long way to go. How deeply detached most regular education teachers are from special education, and vice versa. I should have tried harder, but I didn’t know what to do.
Times have changed. The discrepancy formula is on its way out, and my state, Colorado, is ahead of the curve in requiring that all districts have a plan that makes them ready to do away with the discrepancy formula altogether and replace it with RTI (Response to Intervention; I know that the American educators reading this will all know what I mean, but if not, see http://www.rti4success.org/). So are we in good shape? Not necessarily.
While working on RTI with a group of educators, all from the same district, I asked them to bring in RTI materials from their respective schools—these materials could be a formal plan for how the process operates in their school, forms to be filled out, or lists of assessments to be given or interventions to try. What I asked them to do was to sit in groups, review the materials brought by everyone in the group, and rank the schools according to which had the most successful RTI process in place.
Each group went through the materials from each school. From looking at the materials, what they knew to be good practice in RTI, and from their own experience, they created attribute charts of what an ideal RTI process would look like. From that, they were able to evaluate each school’s RTI plan, and produce a kind of league table of schools according to where their processes fell on the attribute chart.
When they were done, I asked them whether they had been able to produce a ranking, and they had. My follow up question was: “So, let me get this straight. The school that you’ve ranked number one has the process that creates the best outcomes for kids, right?” Silence.
The educators, being a thoughtful and experienced group, saw immediately that they had done what is often done in education, but that we are not supposed to do: use process as a proxy for outcome. I worry that, in setting up elaborate RTI models, we will miss bigger questions, like, are we doing this to fulfill a mandate, or will kids actually be better off? Will regular education teachers (and maybe also special education teachers) see RTI as simply the new process for getting kids labeled as special ed, or even the same process with a different name, instead of a chance to think differently about how we address differences in achievement among our students?
So while this is a story about RTI, it is also a story about other things, including this: we frequently do not hold ourselves accountable for tying our policies, procedures, and practices back to better outcomes for students. More importantly, we are often not in a position to see the systems of which we are a part, just like the fish who have never realized that the medium in which they swim is water.