As I watched my students taking the midterm, I was quite literally awestruck. Before me sat a room full of incredibly talented students, and I get the privilege of being part of their educational experience. The only other time I've been struck with a similar feeling was when I used to teach our (then) 300-person introductory computer science course (it's now about 750 students and growing strong thanks to the amazingly talented David Malan).
But this was different. These students have been working incredibly hard for the past three weeks getting their operating systems to fork and exec processes. A quick look at the data suggests that a typical student spent almost sixty hours working on the assignment over those three weeks. Yup, that's twenty hours a week for one of four courses. Ouch. I guess that explains the claims of students spending 60-hour weeks on this course -- historically, there were always students who didn't do anything the first couple of weeks and then tried to do it all in the last week. Things seemed a bit better this year, with many people making steady progress over the three weeks.
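For anyone who hasn't lived through an OS course: fork/exec is the classic Unix pattern for creating a process and replacing its image with a new program, and it's what the students' kernels have to make work. The details of their assignment aren't in this post, so the sketch below is just a generic userspace illustration of the pattern (the "ls -l" command is a placeholder), not their actual test program:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();          /* duplicate the calling process */
        if (pid < 0) {
            perror("fork");
            exit(EXIT_FAILURE);
        }
        if (pid == 0) {
            /* child: replace this process image with a new program */
            execlp("ls", "ls", "-l", (char *)NULL);
            perror("execlp");        /* only reached if exec fails */
            exit(EXIT_FAILURE);
        }
        /* parent: block until the child finishes, then report its status */
        int status;
        waitpid(pid, &status, 0);
        printf("child exited with status %d\n", WEXITSTATUS(status));
        return 0;
    }

Getting a kernel to support all three pieces -- duplicating a process, overlaying it with a new executable, and letting the parent wait on the child -- is where those sixty hours go.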
Anyway, back to the exam. Having loaned my laptop to a student who had somehow missed the discussion about taking the test online, I had nothing to do other than watch them. There they were, all intently focused on the exam: typing, thinking, shuffling through slides. It was an open-notes, open-book, open-course-materials (but not open-Internet) exam. I appreciated that they asked if they could read the course Q&A site during the exam -- I did draw the line there, even though they all certainly seemed to understand that posting questions was not going to be on the agenda. Towards the beginning of the test there was much shuffling through materials, but as the test progressed, there was less. My theory has always been that I can make tests open book, etc., as long as I write questions that require synthesizing information, so the materials aren't actually that useful. There were a couple of questions where a glance at a set of slides would prove useful, but in most cases there simply wasn't anywhere they could look for an easy answer, unless they already had a pretty good idea about the question.
I was quite curious to see the results. It's not possible to run a real apples-to-apples comparison because a) this class is twice as big as it was two years ago, b) it's a different exam, and c) it's a different way of teaching. So, any results are open to interpretation. The results are surprising -- the distribution is almost identical to that of two years ago. However, there were no truly bad grades -- last time, 2 of 23 students had grades on the midterm that were cause for alarm; this year, there were no grades that worried me (it appears that the grades that worry me are those more than two standard deviations below the mean).
I'm pretty sure I cannot conclude anything from this. So, we'll have to wait until the assignments are graded. In this case, the assignment is pretty much identical (I say pretty much because we asked different code-reading questions, but ideally I can dig up the grade breakdown and remove those). Time will tell.