[PT] Pseudoteaching by Inquiry
In the Pseudoteaching FAQ, I tried to stress that pseudoteaching isn’t an indictment of lecture or of any other particular style of teaching. Pseudoteaching can come in almost any form, from discovery-based learning to pure lecture. Pseudoteaching simply requires two things:

1. It looks like great teaching, so any outside observer, and even the teacher and students themselves, would think that the lesson is filled with learning.
2. Upon closer examination, it seems that no learning is taking place.
In essence, pseudoteaching is a tool I want to use in my own teaching: a reminder not to be satisfied with lessons that merely look great to all parties. I want to push myself to verify that these lessons really lead to learning, which requires me to clearly define what I want students to learn, and how I’m going to measure that learning, early enough that I have a chance to make necessary course corrections.
This next personal reflection on pseudoteaching is hard for me, since it covers almost a complete year of teaching, certainly one of my most enjoyable, and one that I thought was incredibly successful by every measure. It’s only now, 5 years later, that I think I’ve built up the courage necessary to say that it failed on the most important measure: student learning.
After 5 or 6 years of teaching, my school hired a wonderful new teacher, Mark Hammond. After a year marching through a very good textbook (Griffith’s Physics of Everyday Phenomena) with not much success (students were often confused, and showed little ability to retain key ideas from unit to unit), we wondered if the problem was that students were really struggling with more fundamental ideas: thinking scientifically, understanding how ideas link together, and conducting experiments. Seemingly simple things, like what it means to operationally define a quantity such as force or mass, were completely beyond our students, no matter how many problems they could solve.
At the same time, I had been taking a look at the wonderful Physics by Inquiry curriculum, created by the PER Goddess Lillian McDermott and her UW research group to give pre-service elementary school teachers a proper background in physical science.
I’m not ashamed to say I simply fell in love with this text. PBI is a series of inquiry-based activities that have students explore many of the major topics in physics (motion, heat, light and color, optics, magnetism, electric circuits). Notice that some notable topics are missing from this list, starting with dynamics, which I’ll discuss later.
PBI takes a typical topic, like understanding the ray model of light, and breaks it down into a series of experiments that students perform in order to develop the model on their own. Students start by exploring a single-filament bulb, then the shadows made by that bulb, followed by two bulbs, and eventually a frosted bulb. Next, students explore how you can form an image of the bulb by placing a very small hole in front of it and allowing the light from the bulb to fall upon a blank screen. From there, they explore different bulb arrangements and try changing the aperture: what happens if the aperture is made larger, shaped like a triangle, or even like the letter F? All of this leads up to students discovering what happens when you use a lens, and what results when you obscure half of it (where students resolve the classic misconception that covering half the lens will make half the image disappear).
Eventually, I convinced Mark that we should give this curriculum a try. It’s writing intensive, it focuses on developing logical and scientific reasoning, and these were the very skills we felt our introductory students most needed to work on.
I should also note that McDermott is very clear that PBI is intended to be used by pre-service elementary school teachers (read: adult learners well into their college careers); she does not recommend this curriculum for high school classes. We were warned.
So we kicked off the year with the very first unit in PBI, properties of matter. It begins with the development of an operational definition of mass, presenting students with a very simple pegboard balance with two baskets made of small plastic cups.
The first activity presents students with a deceptively simple question:
Develop a definition of what it means for the balance to be balanced.
This was a simply beautiful question. To see why, you only needed to wait about 5 seconds for a student to give the obvious reply: “The balance is balanced when there is equal mass on either side.” At this point, many of the students in my class (mostly juniors and seniors) were wondering if we’d somehow mixed up the science lab with the materials from the elementary school down the road.
OK, I said, then “what does it mean for the mass to be the same on either side?” They’d reply, “that it’s balanced.” And here was my first moment to try to push them toward a deeper realization.
You say that balance means the masses are equal on both sides, and equal masses on both sides mean the balance is balanced. This is a circular argument. What can you measure to see that the balance is balanced?
This throws them for a loop, and it’s only after 5-10 minutes of thinking that students notice there’s a meter stick sitting in front of them for no apparent reason. Eventually, they realize they can measure the height of the balance arm on either side, and that equal measurements mean it is balanced. The curriculum then painstakingly works through having students determine which variables affect balancing, asking questions like “Is it possible to balance 1 hex nut with 2 hex nuts?” and then working students past their initial “no” to realize that if you shift the position of the hex nuts, you can achieve this feat. The curriculum goes on step by step until students have evidence that the position of the mass affects balance, and the mass itself affects balance, but neither alone can predict balance. It was almost always around this time that someone would knock down my door during study hall (I used to work in a boarding school) to announce they’d cracked the equation for balancing. It was a truly awesome sight.
The next day, we’d often go into class with every student having a different equation, most of which looked something like this:
Of course every equation looked different, but as students worked, they were able to see how the equations were exactly the same, to say nothing of discovering the value of sigma notation (how’s that for an accomplishment?).
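For readers without the original image: the relation the students converged on is equivalent to the standard torque-balance condition. This is my reconstruction from the description above (the students’ symbols varied):

```latex
% Balance condition: masses m_i hung at distances d_i to the left of
% the pivot balance masses m_j hung at distances d_j to the right when
\sum_{i} m_i \, d_i = \sum_{j} m_j \, d_j
```

Neither the masses alone nor the positions alone predict balance; only the sum of the mass-position products on each side does, which is exactly what the hex-nut experiments showed.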
The curriculum went on from there. By attaching rubber bands to one side of the balance and masses to the other side, students could develop an operational definition of force, and it wasn’t long before they were using the balance to model your forearm trying to lift a mass, as part of an activity on why you wouldn’t want to arm wrestle a chimp. I’ve attached it below, since you can see I was also madly in love with worksheets that spelled everything out in detail at the time.
At this point students also completed a number of projects—making a balance to measure the mass of a bag of trash, or figuring out how a doctor’s scale works—why it is that moving around a few little masses that are smaller than your lunch can somehow “balance” you.
From there, we developed operational definitions of volume, and later worked to have students develop a way to predict sinking and floating. Here again, I was teaching mostly juniors and seniors, confident in their understanding, and as soon as we started to experiment with sinking and floating, they’d say something like:
student: I know the answer, it’s density.
me: Oh really? Can you prove it?
student: yes. See, this pencil floats, and this penny sinks. Density.
me: This doesn’t prove anything. I say that things with #2 imprinted on them will float, and things with Abe Lincoln on them will sink.
Again, my students did days of experiments, and came to the idea that both mass and volume affected sinking and floating, but neither alone could predict it. So we searched combinations of these variables, and finally, when students organized the items by the ratio of mass to volume (or volume to mass), they found that all items below the ratio for water behaved one way (floating) and all above behaved a different way (sinking). At this point, students saw why we suddenly need a name for the ratio of mass to volume, and I asked if there’s a way you could really prove the idea that density predicts sinking and floating using control of variables. They all saw that since density depends on mass and volume, there’s no way to hold 2 of the 3 quantities constant and change the third. Yes, we were getting pretty deep into the weeds of understanding the scientific process. In fact, I think the whole course sort of existed at a meta-level of science.
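The rule the students landed on is simple enough to sketch in code. This is purely an illustration (not part of the PBI curriculum), and the item masses and volumes below are made-up example values:

```python
# The students' rule: an object floats in water when its mass-to-volume
# ratio is below that of water, and sinks when it is above.
WATER_DENSITY = 1.0  # g/cm^3

def predicts_float(mass_g, volume_cm3):
    """Return True if the mass/volume ratio predicts floating."""
    return mass_g / volume_cm3 < WATER_DENSITY

# Made-up example items: name -> (mass in g, volume in cm^3)
items = {
    "wood pencil": (4.0, 7.0),    # ratio ~0.57
    "copper penny": (2.5, 0.35),  # ratio ~7.1
    "wax candle": (45.0, 50.0),   # ratio ~0.9
}

for name, (m, v) in items.items():
    print(f"{name}: {'floats' if predicts_float(m, v) else 'sinks'}")
```

Note that, exactly as the students argued, you cannot test this claim by control of variables: there is no way to change the ratio while holding both mass and volume fixed.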
How did we assess whether students understood this? We asked them to write 6-8 page papers explaining sinking and floating from first principles. Here we had seniors who were writing amazing essays about Hamlet struggling to explain sinking and floating, especially how something like a submarine is able to control its buoyancy.
After this, we finally got to Newton’s laws, using our idea of measuring forces with rubber bands to establish some rules for what happens when balanced forces act on an object, as shown in this picture below.
We then set up the same experiment on a long skateboard, and saw that when you pull the skateboard at a constant velocity, nothing changes, so a net force of zero must mean the velocity is constant. And we did something similar for N2 and N3.
I really loved teaching this way. There was only one huge drawback: time. This curriculum moved slow. As in glacially slow. Students spent almost a whole semester studying balancing, and because the topic could seem so basic, they sometimes had a hard time fully appreciating the difficulty of what they were learning; instead, they could see it as easy or babyish.
But the real problem came in trying to assess student learning. Physics has some great measures of conceptual understanding (the main goal of this introductory course), including the Force Concept Inventory (FCI), which tests basic concepts in Newtonian dynamics, and a test of scientific reasoning (the Formal Reasoning test), which tests basic understanding of experimental design. As I remember, we gave both of these multiple choice tests to our students and saw very little gain. The FCI result is easy to explain: we spent very little time on traditional physics concepts like force and acceleration. The results on the science reasoning test bothered me more. Why couldn’t students succeed on it? Was it because we never asked them to answer multiple choice questions in class? That seems too simple, and certainly they were very familiar with the MC format from other classes and standardized tests.
In hindsight, my most likely explanation is that they were not seeing how to extract bigger understandings from the inquiry work we were doing in class and apply them to the new situations that appeared on these tests. Sounds like pseudoteaching to me.
Ultimately, this is a story without a resolution, since soon thereafter, I moved on to teaching modeling physics, and sort of put these efforts out of my mind. Looking back, I think there were also problems of assessment: it would have been much better if I’d spent some time thinking about what the objectives were for having students do all that writing and reasoning, and come up with a more careful way to assess it. I would also be curious to know whether, if I had done that and implemented SBG at the same time, my students might have seen more success in applying these skills to new problems and venues.
I really didn’t have any standardized way of measuring understanding of operational definitions, balancing or buoyancy at the level we explored them. I wish I had taken the time to write some sort of assessment in advance to measure these things, and decide what I was willing to accept as proficiency. Perhaps if I had, I would have some measures of learning to counterbalance the FCI results.
But now it certainly gets me thinking that pseudoteaching can’t really be confined to any one particular style of teaching. It is much more than that: mainly, it is a way of going back and carefully examining even my best teaching moments to see if they were as successful as I thought, so that I can use that information to improve my own teaching.