In the Pseudoteaching FAQ, I tried to stress that pseudoteaching isn’t an indictment of lecture or of any other particular style of teaching. Pseudoteaching can come in almost any form, from discovery-based learning to pure lecture. Pseudoteaching simply requires two things:

1. It looks like great teaching, so any outside observer, and even the teacher and students themselves, would think that the lesson is filled with learning.
2. Upon closer examination, it seems that no learning is taking place.

In essence, pseudoteaching is a tool I want to use in my own teaching so that I’m not satisfied with lessons that merely look great to all parties. I want to push myself to verify that these lessons really lead to learning, which requires me to clearly define what I want students to learn, and to decide how I’m going to measure that learning early enough that I have a chance to make necessary course corrections.

This next personal reflection on pseudoteaching is hard for me, since it covers almost a complete year of teaching, certainly one of my most enjoyable, and one that I thought was incredibly successful by every measure. It’s only now, 5 years later, that I think I’ve built up the courage necessary to say that it failed on the most important measure: student learning.

## Scene

After 5 or 6 years of teaching, my school hired a wonderful new teacher, Mark Hammond. After a year of marching through a very good textbook (Griffith’s Physics of Everyday Phenomena) without much success (students were often confused, and showed little ability to retain key ideas from unit to unit), we wondered if the problem was that students were really struggling with more fundamental ideas—thinking scientifically, understanding how ideas link together, and conducting experiments. Seemingly simple things, like what it means to operationally define a quantity such as force or mass, were completely beyond our students, no matter how many problems they could solve.

At the same time, I had been looking at the wonderful Physics by Inquiry curriculum created by the PER Goddess Lillian McDermott and her UW research group, designed to give pre-service elementary school teachers a proper background in physical science.

I’m not ashamed to say I simply fell in love with this text. PBI is a series of inquiry-based activities that have students explore many of the major topics in physics (motion, heat, light and color, optics, magnetism, electric circuits). Notice there are some notable topics missing in this list, starting with dynamics, which I’ll discuss further.

PBI takes typical topics, like understanding the ray model of light, and breaks them down into a series of experiments that students perform in order to develop the model on their own. Students start by exploring a single filament bulb, then the shadows made by that bulb, followed by two bulbs, and eventually a frosted bulb. Students then explore how you can form an image of the bulb by placing a very small hole in front of the bulb and allowing the light from the bulb to fall upon a blank screen. From there, they explore different bulb arrangements and try changing the aperture—what happens if the aperture is made larger, shaped like a triangle, or even the letter F? All of this leads up to students discovering what happens when you use a lens, and what results when you obscure half the lens (where students resolve the classic misconception that covering half the lens will make half the image disappear).

Eventually, I convinced Mark that we should give this curriculum a try—it’s writing intensive, it focuses on developing logical and scientific reasoning, and these were the very skills we felt our introductory students needed the most work on.

I should also note that McDermott is very clear that PBI is intended to be used by pre-service elementary school teachers (read: adult learners well into their college careers); she does not suggest this curriculum for high school classes. We were warned.

So we kicked off the year with the very first unit in PBI, properties of matter. It begins with the development of an operational definition of mass, presenting students with a very simple pegboard balance with two baskets made of small plastic cups.

*(image: a pegboard balance)*

The first activity presents students with a deceptively simple question:

Develop a definition of what it means for the balance to be balanced.

This was a simply beautiful question. To see that, you only needed to wait about 5 seconds for a student to give the obvious reply: “The balance is balanced when there is equal mass on either side.” At this point, many of the students in my class (mostly juniors and seniors) were wondering if we’d somehow mixed up the science lab with the materials from the elementary school down the road.

OK, I said, then “what does it mean for the mass to be the same on either side?” They’d reply, “that it’s balanced.” And here was my first moment to try to push them toward a deeper realization.

You say that balance means the masses are equal on both sides, and equal masses on both sides mean the balance is balanced. This is a circular argument. What can you measure to see that the balance is balanced?

This throws them for a loop, and it’s only after about 5-10 minutes of thinking that students realize there’s a meter stick sitting in front of them for no apparent reason. Eventually, they see that they can measure the height of the balance arm on either side, and that equal measurements mean it is balanced. The curriculum then painstakingly works through having students determine what variables affect balancing, asking questions like “Is it possible to balance 1 hex nut with 2 hex nuts?” and then working students past their initial “no” to realize that if you shift the position of the hex nuts, you can achieve this feat. The curriculum goes on step by step, until students have evidence that the position of the mass affects balance, and the mass itself affects balance, but neither alone can predict balance. It was almost always around this time that someone would knock down my door during study hall (I used to work in a boarding school) to announce they’d cracked the equation for balancing. It was a truly awesome sight.

The next day, we’d often go into class with every student having a different equation, most of which looked something like this:

$m_{1l}x_{1l} + m_{2l}x_{2l} + m_{3l}x_{3l} + \dots = m_{1r}x_{1r} + m_{2r}x_{2r} + m_{3r}x_{3r} + \dots$

Of course every equation looked different, but as students worked, they were able to see how the equations were exactly the same, to say nothing of discovering the value of sigma notation (how’s that for an accomplishment?).
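The rule the students converged on—the sum of mass times distance from the pivot must match on both sides—can be sketched in a few lines of Python (a hypothetical helper of my own; the function name, units, and sample numbers are all made up for illustration):

```python
def is_balanced(left, right, tol=1e-9):
    """Check the students' balancing rule: the sum of
    mass * distance-from-pivot must be equal on both sides.

    left, right: lists of (mass, distance_from_pivot) pairs.
    """
    moment = lambda side: sum(m * x for m, x in side)
    return abs(moment(left) - moment(right)) < tol

# One hex nut placed twice as far out balances two hex nuts:
print(is_balanced([(1, 4)], [(2, 2)]))  # -> True
print(is_balanced([(1, 4)], [(2, 3)]))  # -> False
```

This captures the “aha” of the unit: neither the masses alone nor the positions alone predict balance, only their products summed over each side.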

The curriculum went on from there. By attaching rubber bands to one side of the balance and then masses on the other side, students could develop an operational definition of force, and it wasn’t long before they were using the balance to model your forearm trying to lift a mass, as part of an activity on why you wouldn’t want to arm wrestle a chimp. I’ve attached it below, since you can see I was also madly in love with worksheets that spelled everything out in detail at the time.

*(embedded worksheet on Scribd)*

At this point students also completed a number of projects—making a balance to measure the mass of a bag of trash, or figuring out how a doctor’s scale works—why it is that moving around a few little masses that are smaller than your lunch can somehow “balance” you.

From there, we developed operational definitions of volume, and later worked to have students develop a way to predict sinking and floating. Here again, I was teaching mostly juniors and seniors who were confident in their understanding, and as soon as we started to experiment with sinking and floating, they’d say something like:

student: I know the answer, it’s density.
me: Oh really? Can you prove it?
student: yes. See, this pencil floats, and this penny sinks. Density.
me: This doesn’t prove anything. I say that things with #2 imprinted on them will float, and things with Abe Lincoln on them will sink.

Again, my students did days of experiments, and came to the idea that both mass and volume affected sinking and floating, but neither alone could predict it. So we searched for combinations of these variables, and finally, when students organized the items by the ratio of mass to volume (or volume to mass), they found that all items with a ratio below that of water behaved one way (floating) and all above behaved a different way (sinking). At this point, students saw why we suddenly need a name for the ratio of mass to volume, and I asked if there was a way they could really prove the idea that density predicts sinking and floating using control of variables. They all saw that since density depends on mass and volume, there’s no way to hold 2 of the 3 quantities constant and change the third. Yes, we were getting pretty deep into the weeds of the scientific process. In fact, I think the whole course sort of existed at a meta-level of science.
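The pattern the students found—items with a mass-to-volume ratio below that of water float, items above it sink—can be sketched like this (the sample objects and their measurements are made up for illustration, not data from my class):

```python
WATER_DENSITY = 1.0  # g/cm^3

def predict(mass_g, volume_cm3):
    """Predict floating or sinking in water from the
    mass-to-volume ratio alone."""
    return "floats" if mass_g / volume_cm3 < WATER_DENSITY else "sinks"

# Hypothetical objects: (name, mass in g, volume in cm^3)
for name, m, v in [("pencil", 5.0, 7.0), ("penny", 2.5, 0.35), ("cork", 2.0, 8.0)]:
    print(f"{name}: m/V = {m / v:.2f} g/cm^3 -> {predict(m, v)}")
```

Notice that the prediction needs only the single ratio, not the mass or volume separately, which is exactly why the ratio earns its own name: density.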

How did we assess whether students understood this? We asked them to write 6-8 page papers explaining sinking and floating from first principles. Here we had seniors who could write amazing essays about Hamlet struggling to explain sinking and floating, especially how something like a submarine is able to control its buoyancy.

After this, we finally got to Newton’s laws, using our idea of measuring forces with rubber bands to establish some rules for what happens when balanced forces act on an object, as shown in this picture below.

We then set up the same experiment on a long skateboard, and saw that when you pull the skateboard at a constant velocity, nothing changes, so a net force of zero must mean the velocity is constant. And we did something similar for N2 and N3.

## Breakdown

I really loved teaching this way. There was only one huge drawback: time. This curriculum moved slowly. Glacially slowly. In their minds, students were studying balancing for almost a whole semester, and because the material could seem so basic, they sometimes had a hard time fully appreciating the difficulty of what they were learning, and could instead see it as easy or babyish.

But the real problem came in trying to assess student learning. Physics has some great measures of conceptual understanding (the main goal of this introductory course), including the Force Concept Inventory (FCI), which tests basic concepts in Newtonian dynamics, and a test of scientific reasoning (called the Formal Reasoning test), which tests basic understanding of experimental design. As I remember, we gave both of these multiple choice tests to our students and saw very little gain. The FCI result is easy to explain—we spent very little time on traditional physics concepts like force and acceleration. The results on the science reasoning test bothered me more. Why couldn’t students succeed on this? Was it because we were never asking them to answer multiple choice questions in class? That seems too simple, and certainly they were very familiar with the MC format from other classes and standardized tests.

In hindsight, my most likely explanation is that they were not seeing how to extract bigger understandings from the inquiry work we were doing in class and apply it to new situations that appeared on these tests. Sounds like pseudoteaching to me.

## Resolution

Ultimately, this is a story without a resolution, since soon thereafter I moved on to teaching modeling physics, and sort of put these efforts out of my mind. Looking back, I think there were also problems of assessment—it would have been much better if I’d spent some time thinking about what the objectives were for having students do all that writing and reasoning, and come up with a more careful way to assess it. I’d also be curious to know whether, if I had done that and implemented SBG (standards-based grading) at the same time, my students might have seen more success in applying these skills to new problems and venues.

I really didn’t have any standardized way of measuring understanding of operational definitions, balancing or buoyancy at the level we explored them. I wish I had taken the time to write some sort of assessment in advance to measure these things, and decide what I was willing to accept as proficiency. Perhaps if I had, I would have some measures of learning to counterbalance the FCI results.

But now it certainly gets me thinking that pseudoteaching can’t really be confined to any one particular style of teaching. It is much more than that: mainly, it is a way of going back and carefully examining even my best teaching moments to see if they were as successful as I thought, so that I can use this information to improve my own teaching.

1. March 1, 2011 9:44 am

I think SBG can help teachers choose primary goals for their classes, and then explicitly measure their classes’ progress towards those goals. With just a lesson plan – be it a lecture or an investigation – you’re bound to lose track of what you’re trying to accomplish.

Also, having explicit goals for the outcomes of a class enables a whole department or school or district to work together. Another problem I had when using CPM was not knowing whether I was preparing my students for the next class they would take. I love that SBG organizes a curriculum into manageable pieces – not just for the students, but for the people designing the curriculum!

• March 2, 2011 6:00 am

Riley, this is a great point, and we need to put together a post on everything we’ve learned about PT and how to avoid it, even in these few short weeks. Having explicit and measurable outcomes for the class is one clear tool for doing so.

2. March 1, 2011 11:38 am

As you know, I can reflect your joy for inquiry based teaching. I can also reflect your disappointment with measured student outcomes. The lesson here is about how important it is to have clearly defined learning goals and matching assessments. The key is that it isn’t enough to have the learning goals clearly defined in your own head or on your lesson plans, for that matter, but that the goals be clearly defined and in the forefront of the students’ minds as well. This is especially important when you talk about the need for students to transfer their understanding of broad fundamental goals like science reasoning.
When the classroom format is anomalous – like one inquiry based science class out of 10 years of other types of class formats – it is even more important that students are aware of the learning goals and utilize meta-cognitive strategies. When they are asked to call on that knowledge in the future, the format will likely be disparate.

• March 2, 2011 6:03 am

Stacy, excellent ideas. I want to explore the idea of anomalous formats of classes further. Clearly, this is something we want our students to be able to deal with, and I think you are right that clear goals and teaching meta-cognition can be part of helping them to thrive in classes in all sorts of formats, taught with a wide range of pedagogies.

3. March 2, 2011 9:29 am

I have similarly struggled with the anomalous class format or assessment style. I was watching comments fly by on the twitter edchat last night on effective and meaningful feedback. There was some talk of student self-assessment followed by comments to the effect of “how do you know they will assess themselves honestly.” This is the same problem of anomaly. When students have become accustomed to grades being a final pronouncement and suddenly they get a voice in that judgment, they might be tempted to inflate their abilities. They have had no experience with self-reflection. When I introduced self-assessment into my class, it was its own lesson in pseudoteaching. “Look at my students being all involved in their learning process and reflecting on it,” I thought self-congratulatorily (…that’s a word…). Later, when I saw the lack of depth in their self-analyses, I realized I hadn’t given them any framework or practice with this new tool.

So I think there’s a lot to be discussed about how, when we introduce some new awesome framework, whether it’s SBG or an inquiry-based curriculum or self-assessment, we can support the students so that the awesomeness stays awesome and doesn’t become PT.

I also like the idea of putting together all our notes and ideas about avoiding PT. I’d be happy to be a part of that discussion.

4. March 2, 2011 2:11 pm

John. I really enjoyed this post. I hope your post helps others step back and evaluate their own bigger-picture teaching strategies and not just individual lessons.

You mention it multiple times in the post, but it is a very important point that you were coming in with a mismatch between your students and the target audience of your curriculum, and it is interesting how this use of vanilla PBI on high-school students ended up creating this pseudoteaching situation. My understanding is that PBI is such a great curriculum for in- or pre-service elementary school teachers because one of its main strengths is that it moves a completely science-phobic person to a point where they feel like they can “explain the \$h!+ out of a scientific concept” (something I overheard a PBI student say when observing a class) and thus feel comfortable in their own skin when teaching science at a very basic level in elementary schools. Your students, on the other hand, simply didn’t need this push. They’re already comfortable with the idea of explaining a scientific concept (even if they are not necessarily good at it) and they are hungry for more concepts. A very interesting mismatch.

“In hindsight, my most likely explanation is that they were seeing how to extract bigger understandings from the inquiry work we were doing in class and apply it to new situations that appeared on these tests.”

This seems to be an argument for why they would do better on the reasoning test, but I thought you said that there was minimal gain.

• March 2, 2011 5:53 pm

Joss,
I think many of my students in my introductory physics course were very much like science phobic pre-service elementary teachers, and in that respect, the curriculum served them well. Almost no other curriculum I’ve seen gives students the same sense of joy and ownership of discovery. But I do think they were used to getting lots of information in most of their courses, and so PBI’s extreme focus on depth over breadth may have turned them off.

As for the scientific reasoning test, I can’t remember the exact scores, but I seem to recall that it didn’t produce the marked gains I was expecting. I think the sentence you quote was missing a NOT, which changes the meaning significantly. My kids were not seeing how to extract the bigger understandings about the scientific process from our inquiry work and apply it to new situations. And I think this is a very difficult skill for students to master, one I still struggle with today.

• March 2, 2011 6:49 pm

Aha, adding the missing “not” to that sentence makes it make so much more sense.

For some crazy reason I didn’t think it through at all and assumed that high school physics students were there because they were excited about physics, but of course it is no different from the university level, where the bulk of the students are there because it is a stepping stone for some specific education path that has nothing to do with physics.

5. March 3, 2011 12:03 am

OT from the rest of the comments, but your post brought up something I’ve been thinking about re: transfer to new situations. I’ve been wondering about the benefits of spending a long time on 1 content topic vs hitting multiple content topics but emphasizing the same inquiry/thinking habits. For a long time I was an advocate of the first option (like PBI). There’s a study somewhere about physics students doing better in college if their HS teacher spent more than a month on any 1 topic (I’m guessing you’re more familiar with that study than I am). Now? I’m less sure. Transfer is a bitch. In David Perkins’s Making Learning Whole, he calls it something like Playing Out of Town: basically, learn it/do it in a whole lot of different contexts. Obviously, this would promote interdisciplinary emphasis on certain habits, but would also seem to promote a degree of “coverage”…for lack of a better word and without the negative connotations.

Clearly there’s some sort of balance involved but I’m not sure about the “sweet spot.”

• March 3, 2011 11:33 pm

Jason,
I think you’re right. There’s a balance that has to be struck there somewhere between playing in a lot of different stadiums and getting to know the home field really well. I need to pick up Perkins’s book and read it some more. I’ve seen the study you mentioned, but I couldn’t find it right away. I’ll try to dig it out.

• March 8, 2011 3:58 am

Phil Sadler and Robert Tai did a study. Here are excerpts from a ’97 draft of “Success in College Physics: The Role of High School Preparation,” by Philip M. Sadler and Robert H. Tai, Harvard University. This is a study of the performance of almost 2000 students in introductory physics courses at 19 colleges and universities in the United States.

(p. 14) “We have found that taking high school physics is related to better performance in introductory physics at the college level. This size of this relationship is smaller than most students believe and also smaller than previous studies have found. Three efforts that a student can make in high school are associated with much better performance in later physics courses: taking a calculus course in high school, maintaining a high GPA, and taking a rigorous sequence of high school physics, for two years if possible.”

(p. 16) “Higher college grades appear to be associated with courses characterized as covering few topics in great depth, with teachers that explain problems in many different ways, and with teachers turning from the text as the major guiding force to that of a resource. A considerable portion of the text can be consulted over the course of the year, but there appears to be little advantage to spending large amounts of time reading it or completing a high proportion of text problems. A limited set of topics, dealing primarily with issues in mechanics, appears to be beneficial. This concentration on few topics should not exclude qualitative problems, but teachers should consider carefully the concepts that should be dealt with without mathematics and these issues should be included on tests and quizzes along with quantitative problems. Laboratory experiments as well should be carefully chosen to tie in with major themes and not be overdone. Fewer lab experiments can be very effective if they relate to critical issues and students have time to pursue them fully. Classroom demonstrations are a favorite activity of many teachers, however there appears to be little to recommend a high frequency. Tobias (1992) found that demonstrations may be entertaining, but are more often confusing. This study finds that extensive discussion after a demonstration appears to be counter productive.”

(p. 17) “ … those teachers who choose not to use a text appear to have a real advantage. Perhaps, they are freed from the most obvious rubric for measuring how much they have covered and may concentrate on only a few central ideas. Perhaps they use materials that they have written themselves or have been given to them by other teachers or researchers. In any event, avoiding reliance on a text appears to have real benefit.”

6. March 3, 2011 9:56 am

While I’m agreeing with everything being said here, I sure am hoping that we are not labeling great learning experiences for students as pseudoteaching because the kids can’t score high marks on a multiple choice test. Time is an issue only because we feel this need to “cover” so much new information that our students don’t have time to process and think deeply about a topic. I mean, it takes us years to learn some of the concepts we expect our students to learn in weeks. That is why we need to spiral science concepts throughout their K-16 and beyond education. So I don’t see what you did as pseudoteaching at all. That’s the kind of physics I’d want my own children to learn. Applying science is better for kids than learning which response to bubble. So keep looking to SBG to monitor what your students are learning and how they are making sense of the physics. That will serve them better in the long run than finding the correct response on a multiple choice test.

• March 3, 2011 11:36 pm

Al,
No, I’m not trying to dismiss everything I did with these students. I just wish I had taken more time to think about the major objectives of the course, and how I would assess them. The multiple choice tests we used aren’t your typical MC exams—they really are designed to uncover pre-Newtonian thinking and assess scientific reasoning. I think my pseudoteaching comes primarily from the fact that I didn’t have a lot of conclusive ways to demonstrate that my students had learned something that would be relevant beyond my course.

7. March 3, 2011 12:14 pm

I commend your attempt at implementing PBI in a HS setting. I am privileged to have attended Lillian’s summer institute for teachers once, and I hope very much to return. What I experienced was beyond transformative in my understanding of inquiry and of teaching science. I took the experiences I had there back to my middle school science classroom, where I rewrote and self-paced the curriculum to be like a PBI course. It was HUGELY successful with those students who engaged in the HARD work. Students who would not engage or even begin failed utterly (the learning for me was that many if not most of these students would have scraped by with minimally passing grades under the traditional way of teaching the curriculum).

If I may, I would like to suggest some things that might have differentiated your experience from the environment in which the PBI curriculum is taught.

1) The PBI curriculum is taught with MANY instructors in the room… about a 1:8 ratio of instructors to students… so about 3 instructors in the room.

2) The PBI “checkouts” are not captured well in the written materials and are far more dynamic than the written materials suggest. In the checkouts, students are challenged to defend their statements with novel situations and data (sounds like you challenged their data, but did they extend at these points?). Checkouts can take as long as 20 minutes and are a conversation between an instructor and just two students.

3) The PBI class for properties of matter meets for 3 hours a day, 5 days a week for 5 weeks. With only a standard 1 hour period, the intensity and amount of learning is going to be diffused with the rest of the students’ lives.

Again, I applaud your attempt at implementing PBI. My own experience, though powerful, led me to shy away from using those materials in my class because I was just one person with too many students, and I could not devote enough time or depth to the checkouts: students waited doing nothing while I rushed about from group to group doing poor checkouts.

You might check into Lillian’s PBI “Tutorials”. These are PBI-like sessions meant to accompany a standard physics course and provide critical background or instruction to students and classes that don’t have concepts firmly fleshed out in their minds. It’s kind of like a PBI-ace-bandage to get people through the rough spots and help them come out stronger in the end.

Regards,

Brian

• March 3, 2011 11:38 pm

Brian,
Thanks for these awesome insights about how PBI is done “for real.” I’ve always wanted to go to one of the summer sessions, but never found the time. I taught classes of 15, and I definitely felt overwhelmed; without constant guidance at first, students could easily get lost. I’ve seen the tutorials in physics, and I love them. I’ve incorporated a lot of their thinking into my own lessons. Is there some other PBI tutorials resource that she’s also published?

March 9, 2011 11:18 am

Brian,
I appreciated your response on PBI. I worked with John on creating the materials he (and I) used, and we taught together using them. I want to add a few things to John’s discussion. I was warned by McDermott’s group that the materials needed a LOT of modification to be used with high schoolers. It could very well be that John and I did not do enough (what we did took an awful lot of time… sometimes I felt we were running on empty but we sure were having fun putting the class together!)

I felt at times I needed a second teacher in the class (and occasionally John and I could and would help each other out that way). 13-15 students were an awful lot to keep track of using the PBI-ish approach. Some students were fantastic at working together with little assistance. But most needed a lot of prodding, questioning and guiding. And a few always felt that things were going too slowly, and they got bored.

The other factor that hampered us was a total shock to me: some of the experiments require a level of hand-eye coordination (fine motor skills) that teenagers do not yet possess! Like I said, this blew me away. At several points I was getting very frustrated with the students’ inability to get repeatable or reasonable results, only to look closer, shut my mouth and watch carefully… and find that they were too clumsy to do what I thought should be easy. This wasn’t an issue of the students paying attention to detail… it was an issue of simple lack of the degree of fine motor control that I take for granted. I guess this is related to why our faculty will take on the varsity squads in soccer, field hockey or squash, but you won’t ever find us challenging the swim team or cross country team to a faculty-varsity meet! Wile and skill are our strengths as old-timers.

• March 9, 2011 9:59 pm

Mark,
I totally forgot about how much trouble students had working with those small blue balances that, used correctly, could measure to the nearest milligram. I think you are on to something there, and it’s a good reminder of just how much background and skill are necessary to really understand a subject like physics.

8. March 6, 2011 7:56 am

What a powerful discussion; I have learned so much here. Thank you, all. I agree to an almost infinite degree with Stacy, and I believe that learning targets and assessments must be clearly defined and aligned.

I wonder…is planting seeds “pseudo-gardening?” To plant a seed results in no real immediate satisfaction if the desired outcome is immediate plant growth. No evidence of growth after a day of planting the seeds…must not have gardened. However, we know we will see signs of growth later. Could this have occurred with the students described in this post? Were you expecting visible signs of growth too soon in the cycle? Perhaps the learning evidence came later? Maybe the lenses to see the growth were just imperfect. I think that is what you meant by a misaligned assessment strategy. I would contend that the students learned about the process of hypothesizing and positing plausible, testable explanations for those questions.

You have all left me with a lot to think about. Thanks.

• March 6, 2011 11:20 pm

This gets me thinking a lot. I would describe much of what I value most as a teacher as planting seeds. Much of my meta-cognition “curriculum” focuses on this. If these are some of the most valuable things to me, shouldn’t I have some way to measure them? Or must everything be measured?

December 29, 2011 10:34 am

I just found this series on “pseudoteaching” and having had a fair amount of experience with inquiry as an astronomy grad student teaching high school students and undergraduates as part of the UC Santa Cruz ISEE (http://isee.ucsc.edu/participants/programs/pdp.php), I found this reflection on learning in inquiry particularly interesting. However, I think you were being a bit hard on yourself in labeling this pseudoteaching. From your description, it seems learning definitely took place, the problem might be (as you said) that your assessment was not well matched to what was learned. And, as someone else said already, transfer is hard. Even if the students have learned the concepts necessary for applying it in another context, without practice at actually “matching” existing knowledge to new situations, it might be too much to ask of them to suddenly do it on a test.

• December 29, 2011 12:27 pm

It’s good to hear from someone who’s been through the ISEE program—I know several grad students who have done that program and found it to be excellent preparation for later teaching (though I believe that it is intended more for training future professors than future high school teachers).

I agree with you that quantumprogress may be a little too self-critical here, but I think that the basic point is sound: inquiry can definitely be pseudoteaching. The main points of inquiry learning are that students remember better if they form the conclusions themselves and that they learn how to formulate and test hypotheses. If the second is not happening, then much of the purported benefit of inquiry learning is lost, and the students may be learning less than in a traditional class. Quantumprogress is right to worry about that, and to look for ways to make inquiry live up to its promise. Many teachers would just wave the “inquiry is good” flag and not check to see if it was really working.

• January 1, 2012 11:10 pm

Patrik,
Thanks so much for the comments. This, and reading a number of other posts by Brian Frank and Mylene about their experiences with full inquiry based classes have me aching to give something like this a go again.