Measuring mindset in my classes
Last year, I tried to create a metacognition curriculum, devoting some class time to learning about habits and thinking strategies that lead to deeper learning, less stress, and more success. The most important of these lessons is Carol Dweck’s Mindset. But one thing has always troubled me: is any of this having an effect on my students? In my weekly feedback and year-end evaluations, I get very positive responses, with one outlier every year who says that the time spent discussing mindset is a waste. I do notice a difference in how my students seem to approach assessments, especially when I proctor exams for other classes. But all of this data feels “soft,” and while I am OK with that (I think the stories of learning I hear back from students, which I need to do a better job of collecting and sharing, are far more important than standardized test scores), part of me is still curious. Could I actually measure the effect that a metacognition curriculum has on students? Carol Dweck certainly has, using carefully controlled experiments with middle school students to show that her Brainology curriculum leads to increased student motivation, higher grades, and better performance in mathematics.
This week, Bowman Dickson, an awesome math teacher at a boarding school in Jordan (how cool is it that Twitter has connected me with a teacher in Jordan?), wrote the phenomenal blog post What if Angry Birds Didn’t Grade with SBG?, which not only presents a great set of visuals to help students understand the power of SBG but also details a survey he created to gauge his students’ math mindsets.
Then it hit me: this could be a fantastic tool for measuring the impact of my metacognition curriculum. So I tweaked the questions a bit to be science-related and put out a new survey (link to Excel version of grid). But there was one problem: I was already three weeks into class, so I wouldn’t really be surveying students’ initial conceptions about intelligence and science ability. I decided to modify the survey to ask students to recall how they felt coming into the class and how they feel now (link to Word version of survey). Here’s the survey:
Now here’s where things get interesting. When I got the data back, I entered it all into Google Docs on a 1 (strongly agree) to 6 (strongly disagree) scale. I then modified my spreadsheet to calculate the average for each question and the change on each question on both a per-student and a whole-class basis. But I didn’t stop there. I set up conditional formatting to change cell colors based on whether a response correlated with growth or fixed mindset thinking, and then developed an (admittedly very crude) metric: a Growth Mindset Index (GMI), which is basically how close a student’s answers are to those of someone who would strongly agree with all the growth mindset statements and strongly disagree with all the fixed mindset statements. With a GMI, I could then calculate gain scores to see just how much a student or the class had shifted toward a growth mindset. Thanks to the magic of spreadsheets, all of this took only a couple of hours.
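To make the GMI idea concrete, here’s a minimal Python sketch of one way to compute it. The question labels (g1, f1, etc.) and the exact distance formula are my illustrative assumptions; my spreadsheet may slice it a bit differently:

```python
# Responses are on a 1 (strongly agree) to 6 (strongly disagree) scale.
# The "ideal" growth-mindset respondent answers 1 on every growth-mindset
# statement and 6 on every fixed-mindset statement; the GMI measures how
# close a student comes to that ideal, as a percentage.

GROWTH = {"g1", "g2"}  # hypothetical ids of growth-mindset statements
FIXED = {"f1", "f2"}   # hypothetical ids of fixed-mindset statements

def gmi(responses):
    """responses: dict mapping question id -> rating (1-6)."""
    total_distance = 0
    for question, rating in responses.items():
        ideal = 1 if question in GROWTH else 6
        total_distance += abs(rating - ideal)
    max_distance = 5 * len(responses)  # worst case: 5 points off on every item
    return 100 * (1 - total_distance / max_distance)

before = {"g1": 3, "g2": 2, "f1": 4, "f2": 5}  # made-up example student
after = {"g1": 1, "g2": 2, "f1": 6, "f2": 5}
print(round(gmi(before)), round(gmi(after)))  # prints "70 90"
```

Averaging each student’s before and after GMI across the class gives the class-level numbers reported below.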
And the results are stunning. Before going further, I must say that I also think these results are highly suspect. Just a couple of days earlier, all of my students read “How Not to Talk to Your Kids,” a wonderful article that I always use to start a discussion about growth vs. fixed mindset. And on the day before this survey, we discussed the article and growth mindset as a class. I can’t wait to try this survey again next year and give it to students before they ever come to class.
But here are summaries of some of the more interesting findings. Remember that students responded on the following rating scale (1 = strongly agree, 2 = agree, 3 = somewhat agree, 4 = somewhat disagree, 5 = disagree, 6 = strongly disagree):
- Before this class, on average, students somewhat agreed with the statement “How intelligent you are mostly determines how well you do in science” (average 3.75). Three weeks later, students are between somewhat disagree and disagree with this statement (average 4.68). That’s a whopping average change of 0.90.
- Before this class, students were between somewhat disagree and disagree on the statement “You have a certain amount of science ability and can’t do much to change it” (average 4.62). Now, students are between disagree and strongly disagree, with only two students reporting a rating below disagree (average 5.41). Another leap of 0.77 on this scale.
- Regarding the statement “The percent of correct answers on a test is a good measure of science ability,” we moved from somewhat agree before class (average 3.01) to between somewhat disagree and disagree (average 4.01) in three weeks.
- On my Growth Mindset Index, before class my students averaged 64% agreement with a pure growth mindset. After three weeks, we’re now at 75%, making our gain 29%.
- Finally, since I’ve color-coded these results, you can see the difference. Remember green is aligned with growth mindset, red is aligned with fixed mindset.
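On the gain figure: one common way to express a gain like this, borrowed from physics education research, is Hake’s normalized gain, the improvement achieved as a fraction of the maximum possible improvement. I can’t promise this is exactly how the 29% above was computed (the rounded class averages of 64% and 75% give roughly 31%; the exact figure depends on the unrounded data), but here’s the idea:

```python
def normalized_gain(pre, post, maximum=100.0):
    """Hake's normalized gain: improvement as a fraction of the
    maximum possible improvement from the pre score."""
    return (post - pre) / (maximum - pre)

# Class-average GMI went from 64% to 75% of a "pure" growth mindset.
print(round(normalized_gain(64, 75), 2))  # prints 0.31
```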
Because I think stories are as interesting as data, here’s what some of the students said when asked to respond to a single statement:
- “You can greatly change how intelligent you are.” Before science, I said I agree, but now I know it is not how smart you are, it is how much effort you put in.
- Over the past few weeks, I have come to understand that this class is about exploring new ideas and gaining understanding rather than amassing information.
- Now that we have been in science for three weeks, I strongly disagree with the statement “how fast you can get a correct answer is a measure of science ability”, because no matter how long it takes you, as long as you get the right answer, you succeed.
- I feel that I have been opened up to a new way of learning because of the different methods we have been exposed to.
- In between the ‘then’ and ‘now’ charts, I started to realize that grades don’t matter as much as the knowledge that grades are supposed to assess. I also realized that the way I learn best is by practicing, and trying new things, not necessarily by having a teacher do the problems, but by me doing them.
- I have always been a person who likes and is good at memorizing formulas and ideas. This changed when I realized in this class, you have to think why it works and have a more open approach.
- I realized that I learn better by trying and failing and then trying again—it is the best way to learn. I also realized I need to really learn stuff instead of memorization because it will stick with me longer.
- Before this class, I thought that grades mattered a lot, but I now realize that one’s comprehension of the material is more important than grades.
- From the first chart to the second is just a mindset change from fixed to growth.
And perhaps the simplest summary of all:
Finally, I’m sure you want to see all my data in gory detail, so here’s the link to the Google Doc.
Again, I’m still pretty blown away by these results. All told, we’ve done three things that I might consider “lessons” teaching some of these metacognitive ideas: we did the Marshmallow Challenge and debriefed it (30 minutes), I gave a presentation about Standards-Based Grading (30 minutes), and we discussed the “How Not to Talk to Your Kids” article (30 minutes). I’d love to say that my school is 100% behind growth mindset and that every class students go to seeks to teach these ideas. I’d love even more to say that every child’s parents are working to foster a growth mindset at home. But I don’t think this is true. There are plenty of places and moments where my students experience fixed mindset all around them (in our discussions, many of them say that their peers have fixed mindsets, or that some aspects of school encourage this thinking). The fact that 90 minutes of talking about these ideas could have this level of impact is shocking. I’m also still a bit skeptical, since I gave this survey right after the growth mindset talk.
But I don’t want to stop here. I need to do some statistical work to understand the real significance of these results. It also occurs to me that I have no idea how my data compares to the rest of the world, so here’s where I need your help. If you are a physics teacher and could give this survey to your class (it takes less than 10 minutes), you could then enter your data (anonymously, of course) into my public Google Doc, and we could develop some sort of mindset baseline for physics students across the world. I hope you see how incredible this could be. I’ll put detailed instructions in my next post.
One other point: this work really isn’t all that new. Physics education researchers at the University of Colorado thought of this long ago and developed a much more comprehensive survey called the Colorado Learning Attitudes about Science Survey (CLASS, pronounced “C-LASS”). It’s a fantastic and relatively quick survey that you can find on WebAssign (assignment ID: 1365806). The not-so-shocking finding, for anyone who has sat through a typical physics class, is that scores on the CLASS show measurable declines after instruction. But the CLASS has also shown that students with positive learning attitudes, as measured by the survey, do achieve better outcomes in terms of grades and FCI scores. I gave this survey to my students at the beginning of last year but did not follow up with a post-test (DOH!). I’m definitely going to give it out via WebAssign in the near future.
Now, when I step back, I wonder: isn’t this how data-driven education reform should be? By collaborating with other faculty, I’ve developed a tool to measure something I find interesting; I’m doing all the data analysis and interpretation myself; I’m going to use this tool to make changes to my instruction that lead to better student learning; and I’m sharing all my findings publicly with the world here on this blog. No one forced me to do this. The results aren’t tied to my salary, students’ grades, or graduation. Why can’t this be the model? Instead, we have a top-down model that calls for corporations to create tests that teachers have no voice in designing, whose results teachers never see until long after the students have moved on from their classrooms, and whose only impact on students is to hold them back from graduation and possibly strip their school of funding. Why can’t we create a model of data-driven reform that encourages teachers to innovate, rather than suspecting them of cheating at every turn? Oh, and one more thing: the cost for me to do all this? $0. I honestly don’t even want to know how much Pearson and other corporations and “non-profits” like the College Board are bilking my state and the federal government for their Potemkin village of education innovation and reform.
So I’m hooked. I’m starting to see that there are tons of things I can measure in my class that, when coupled with the rich narrative-based feedback I already gather, can give me a better idea of how to teach my students. I just wish I had one one-hundredth the staff of statisticians and psychometricians that the College Board employs to help process all this data.