Archive for the ‘Academia’ Category

Instructors should share self-assessments with students. Do they?

November 17, 2013

In the world of higher education, and perhaps the world of education in general, instructors are expected to engage in “reflective teaching.” That is, we think about and document what is working in our courses, what isn’t working, and what might be done differently next time. (Here are two end-of-course examples: my self-assessment for Music+STEM and my self-assessment for BIOL 485.) This is appropriate and useful. Furthermore, I believe that we should share these self-assessments with our students, for the following reasons.

1. Having to share self-assessments ensures that we actually do self-assessments. Even if most of us are already doing them, as we should be, a little extra motivation never hurts.

2. Sharing self-assessments models metacognition for students. “Metacognition” means thinking about one’s own thinking; in this context, we want students to reflect on which study habits do and do not work for them. Our explanations of how we optimize our teaching practices may offer parallels to how they can optimize their learning practices.

3. Sharing self-assessments counters the “instructor as policeman” stereotype. Course syllabi are often packed with rules that students must follow … or their grades will suffer! Those admonitions may be necessary, but they imply that the instructor is the students’ adversary — someone whose main job is to enforce the rules on attendance, lateness, plagiarism, etc. and punish any violators. In contrast, an instructor who shares self-assessments indicates that he/she actually cares about what the students learn and strives to support them as effectively as possible.

4. Sharing self-assessments provides accountability. Students and their parents/guardians periodically wonder whether their education is worth what they’re paying for it. Instructor self-assessments underscore that the students aren’t just getting the same old course that their predecessors had — they’re getting a new and improved version of it!

5. Sharing self-assessments encourages students to take course evaluations seriously. Students are more likely to give thoughtful feedback if they see evidence that the feedback is received and acted upon.

Some possible objections to instructors’ sharing of self-assessments with students, along with my rebuttals, are as follows.

1. “Sharing self-assessments requires extra work/time.” Not really. We should already be doing self-assessments and documenting them in some way, so posting a PDF file to a course website or spending 3 minutes of class time on them should not be a big deal.

2. “My course has some significant problems, and I don’t want to draw additional attention to those.” Students recognize and gossip about such problems anyway. Why not take the opportunity to explain how the problems can be addressed?

3. “I’ve gotten important feedback from students that I shouldn’t share publicly.” One should certainly respect students’ privacy and anonymity, but feedback can almost always be discussed at a general level without quoting or referring to individuals.

4. “Most students wouldn’t be interested in my self-assessments.” This may be true, but the ones who are interested are probably the ones who care most about the course. Don’t you want to serve those students well? As an analogy, think about the “cool links” on course websites that lead to sites where students can explore material in greater depth. Most students won’t care about those either, yet those who are most invested in the course may find them quite valuable.

5. “I do share this sort of information with my students! For example, when we went over the last exam, I pointed out the topics where confusion was most prevalent.” That’s good, but did you take the additional step of discussing how the confusion might be reduced in the future? Being explicit in this way makes it clear that you take personal responsibility for students’ difficulties.

Convinced of the strength of my argument, I’m now wondering whether other instructors actually do share self-assessments with students on a regular basis. As an attempt to find out, I performed a quick survey of websites of undergraduate biology courses offered at three institutions: Bellevue College, the University of Washington, and Western Washington University. Many course websites (or subsections thereof) were password-protected and thus off-limits to me, but I was still able to access 46 of them: 12 at Bellevue (Biology 100, 130, 160, 162, 199, 211, 213, 241, 242, 260, 275, and 312), 19 at UW (Biology 118, 119, 180, 200, 300, 317, 356, 401, 411, 417, 427, 433, 442, 452, 453, 454, 476, 480, and 488), and 15 at WWU (Biology 101, 140, 204, 322, 325, 326, 345, 346, 348, 349, 416, 432, 445, 462, and 497). For each of these courses, I found a home page with links to multiple files: syllabi, schedules, handouts, rubrics, slides, etc. (If all I saw was an online syllabus, I didn’t count that as a “course website.”)

As far as I could see, none of these 46 course websites included substantive comments on past or current successes and failures of the instructors’ teaching or changes that may be made in the present or future. The closest thing to self-assessment that I found was this note in an answer key for a Biology 401 exam: “I added 6 points to everyone’s score … to account for several locations in the exam where most of you were not clear on how specific you had to be (Q7), how to describe your experiment (Q8) or what type of control you should provide (Q8).” Here was a semi-admission of a problem, at least.

My survey results come with important caveats (beyond the obvious issue of sample size), though. First, I looked at these websites quickly and didn’t check every single file, so I certainly could have missed relevant tidbits. What I can say is that I didn’t notice any sections on assessment of course effectiveness (as opposed to assessment of individual students’ performance) in the syllabi, first-lecture-of-the-term slides, and quiz/exam keys that I did check. Second, instructor self-assessments may be more likely to be included on the password-protected pages that I couldn’t get to. Third, instructors might discuss self-assessments in class even if there isn’t online evidence of this. For example, I’ve heard that Scott Freeman tells his students how research on his courses drives changes in these courses, though the Fall 2013 UW Biology 180 website does not appear to address this. Likewise, an instructor who collects mid-quarter student feedback on notecards and then discusses the feedback during a later class session might not document this on the course website.

Given these caveats, I’m not sure what (if anything) can be concluded about college biology instructors’ frequency of sharing self-assessments with their students.

What do others think?

Misconceptions about misconceptions?

October 28, 2013

In checking my Google Scholar profile the other day, I was happy to find that a recent paper in CBE Life Sciences Education cited my 2012 review article on the use of music in science education. I was less happy to discover that my paper was cited as an example of bad pedagogy.

April Cordero Maskiewicz and Jennifer Evarts Lineback summarize their own paper as follows:

The goal of this paper is to inform the growing BER [Biology Education Research] community about the discussion within the learning sciences community surrounding misconceptions and to describe how the learning sciences community’s thinking about students’ conceptions has evolved over the past decade. We close by arguing that one’s views on how people learn will necessarily inform pedagogy. If we view students’ incorrect ideas as resources for refinement, rather than obstacles requiring replacement, then this model of student thinking may lead to more effective pedagogical strategies in the classroom.

I find this position interesting and sensible. I agree, of course, that how we teach should be based on research on how people learn. More specifically, I’m sympathetic to the viewpoint that (in the authors’ words) “Learning … is not the replacement of one concept or idea with another”; rather, “students learn by transforming and refining their prior knowledge into more sophisticated forms.”

The article provides two useful examples of how instructors can build upon students’ naive views of evolution, rather than simply rejecting them as wrong. So far so good.

Then comes the section “The use of the term misconceptions in current BER [Biology Education Research] Literature,” in which Maskiewicz & Lineback assert that many instructors have been slow to adopt this transform-and-refine-prior-knowledge view of learning. It’s a significant point because if everyone already holds this view and teaches according to it, there’s not much to discuss. Accordingly, Maskiewicz & Lineback searched the past three years of CBE Life Sciences Education for problematic as well as enlightened uses of the word “misconception.” Here’s what they found:

In some of these articles, the authors seemed to equate misconception with the more traditionally accepted definition of a deeply held conception that is contrary to scientific dogma (Baumler et al., 2012; Cox-Paulson et al., 2012; Crowther, 2012). Others, in contrast, seemed to use the term to reflect an ad hoc mistake or error in student understanding, one that exists prior to or emerges through instruction but, in either case, is not robust, nor does it interfere with learning (Jenkinson and McGill, 2011; Klisch et al., 2012). The authors who considered misconceptions to be “deeply rooted” spoke of instructional strategies designed to specifically elicit, confront, and replace students’ incorrect conceptions (i.e., Crowther, 2012). In contrast, authors for whom misconceptions were more tentatively held and/or emergent, suggested that students’ incorrect ideas can be amended through tailored instruction grounded in those ideas (i.e., Klisch et al., 2012). This latter perspective on learning is consistent with approaches supported by recent research in the learning sciences community (Carpenter et al., 1989; Ruiz-Primo and Furtak, 2007; Pierson, 2008).

Not only am I being dissed, but Baumler et al. (2012) and Cox-Paulson et al. (2012) are too! So, do we deserve it? Let’s look at the use of the term “misconception” in each of the articles cited.

From Baumler et al. (2012):

Questions of conservation lend themselves well to a “teachable moment” regarding the choice of nucleotide versus protein BLAST. In one group of 28 students, students were asked to provide a written response justifying their choice of using BLASTP or BLASTN. Twelve of the 14 pairs of students provided answers that were complete and exhibited clear comprehension of relevant concepts, including third position wobble. One pair gave an answer that was adequate, although not thorough, while the last pair’s response invoked introns, an informative answer, in that it revealed a misconception grounded in a basic understanding of the Central Dogma, concerning the absence of splicing in bacteria.

From Cox-Paulson et al. (2012):

Student misconceptions about DNA replication and PCR have been well documented by others (Phillips et al., 2008; Robertson and Phillips, 2008), and this exercise provided an opportunity to increase understanding of these topics.

From Crowther (2012):

My own opinion is that songs can be particularly useful for countering two types of student problems: conceptual misunderstandings and failures to grasp hierarchical layers of information. Prewritten songs may explain concepts in new ways that clash with students’ mental models and force revision of those models, or may organize information for improved clarity (e.g., general principles in the chorus, key details in the verses, other details omitted). Songwriting assignments could have similar benefits by forcing students to do the work of concisely restating concepts in their own words and organizing the information in a musical format. As an example of using music to counter misconceptions, I once team-taught a “biology for engineers” course in which my coinstructor complained that many students failed to internalize the difference between genotype and phenotype. I wrote and performed a song to drive home this distinction, the chorus being, “Genotype, ooh… It’s the genes you possess—nothing more, nothing less! Versus phenotype, ooh… Your appearance and health and reproductive success!”

Note that these were the sole instances of the word “misconception” in each article. Do they illustrate what Maskiewicz & Lineback say they illustrate? I don’t think so.

The first claim made by Maskiewicz & Lineback is that some papers (e.g., the three cited) consider misconceptions to be “deeply held” or “deeply rooted.” None of the papers cited uses either phrase, nor do I see any discussion in the passages above of how deeply the misconceptions in question are held.

The second claim is, “The authors who considered misconceptions to be ‘deeply rooted’ spoke of instructional strategies designed to specifically elicit, confront, and replace students’ incorrect conceptions (i.e., Crowther, 2012).” The “deeply rooted” business aside, is Crowther indeed advocating wholesale swapping of students’ incorrect conceptions for correct ones? No. “Prewritten songs may explain concepts in new ways that clash with students’ mental models and force revision of those models.” That is, the models should be revised — NOT discarded! As far as I can tell, this is consistent with Maskiewicz & Lineback’s recommendations up to this point. (Later in the article, they propose abandoning the term “misconceptions” altogether.)

I suspect that Maskiewicz & Lineback found the above wording (with its talk of clashing, forcing, and failures) overly adversarial, and I concede that the tone is not ideal. But the passage is basically agreeing with them!

Not being an expert on addressing misconceptions (or whatever they should be called), I was glad to get Maskiewicz & Lineback’s perspective. But if their best example of the problem is a paragraph that neglects to mention the positive aspects of one particular misconception, perhaps the problem is not as big as they are making it out to be.

Detecting plagiarism by comparing sequences of references

October 14, 2013

I recently reviewed a manuscript bearing a suspicious resemblance to an already-published paper. The exact wording of the previous paper had not been copied; however, in a couple of sections, the new manuscript made a series of points in the same order as the previous paper, using a very similar sequence of references. The sequence of greatest overlap is shown below (with references from Paper B renumbered according to the bibliography of Paper A).

[Figure: the sequence of references shared, in the same order, by the two papers]

While this similarity does not itself prove that plagiarism occurred, it certainly constitutes grounds for further investigation.

It occurred to me that comparing sequences of references might be a good way of detecting possible cases of “subtle plagiarism,” in which the original text has been rephrased. This would be sort of analogous to the BLAST (Basic Local Alignment Search Tool) algorithms used in biology to identify related nucleotide and amino acid sequences (as in the comparison below of Calcium-Dependent Protein Kinases from different organisms, taken from Ojo et al. 2010).

[Figure: BLAST comparison of calcium-dependent protein kinases from different organisms, from Ojo et al. 2010]
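
To make the analogy concrete, here is a minimal sketch, in Python, of one way such a comparison could work: it simply scores the longest run of references that two papers cite in the same relative order, using a standard longest-common-subsequence calculation. The function name, reference identifiers, and example lists are all hypothetical; this is a toy illustration of the general idea, not an implementation of any published method.

```python
# A toy sketch of citation-sequence comparison (hypothetical identifiers and
# names throughout; not the algorithm used by any published tool).
from typing import List, Tuple


def longest_shared_citation_run(refs_a: List[str], refs_b: List[str]) -> Tuple[int, List[str]]:
    """Length and content of the longest in-order subsequence of references shared by two papers."""
    m, n = len(refs_a), len(refs_b)
    # dp[i][j] = length of the longest common subsequence of refs_a[:i] and refs_b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if refs_a[i - 1] == refs_b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    # Trace back through the table to recover the shared references themselves.
    shared: List[str] = []
    i, j = m, n
    while i > 0 and j > 0:
        if refs_a[i - 1] == refs_b[j - 1]:
            shared.append(refs_a[i - 1])
            i -= 1
            j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return dp[m][n], list(reversed(shared))


if __name__ == "__main__":
    # Hypothetical reference lists, in the order the references are cited in each paper.
    paper_a = ["Smith 2001", "Jones 2004", "Lee 2007", "Chen 2009", "Park 2011"]
    paper_b = ["Gray 1999", "Jones 2004", "Lee 2007", "Chen 2009", "Diaz 2012"]
    count, shared = longest_shared_citation_run(paper_a, paper_b)
    print(f"{count} references cited in the same relative order: {shared}")
```

A long shared run, like the one in the figure above, would not prove plagiarism by itself, but it would flag a pair of documents for closer reading.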

In poking around online, I noticed that not only had others independently come up with “my” idea, but they had already implemented it and published papers about it. The two-page Gipp & Beel contribution to the 21st ACM Conference on Hypertext and Hypermedia (June 2010) provides a nice introduction to the approach.

Gipp and coworkers are also developing CitePlag.org, a website intended to let others perform document comparisons of their own. According to Gipp, the site currently struggles with long documents and with citation styles it does not yet recognize. My own testing indicates that it is not especially useful at the moment but is definitely on the right track.

I Hate My Hat

September 30, 2013

This weekend my 6-year-old son and I worked on pronouncing words that do and do not end in “e.” I wrote this poem to give him some extra practice.

[Illustration: the embarrassing hat]

I hate my hat. It made me mad
To get this present from my dad.

My pal was pale. “What’s that?” he said.
“That big, big gold thing on your head?”

He did not have to be so rude.
My hat’s a dud. I get it, dude!

I want to look my best — but nope!
My pop made me look like the Pope.

Are pie charts half-baked?

September 18, 2013

Some couples argue about money, intimacy, religion, and things like that. But for me and my girlfriend, our most contentious topic of the month has been the value of pie charts.

Leila’s position is that any data that can be presented as a pie chart can be presented better in some other format. A leading faculty member in her department — a truly brilliant woman — agrees wholeheartedly. But I’m sure they’re wrong, for reasons explained elegantly by Bruce Gabrielle.

Gabrielle’s advantage #2 of pie charts, “Communicates parts-to-whole relationships better,” is the one I consider most important.

At a glance, you know a pie chart is splitting a population into parts.

Bar charts do not have the same meaning. You can signal to the reader the bars add up to 100%, by adding a column or an annotation. But this requires some extra mental gymnastics by the reader to understand the bar chart represents 100%. Nothing beats a pie chart for instantly communicating 100%.

I’m sure this is why pie charts are routinely used to introduce fractions in elementary school. The format is easily grasped by anyone who has ever divided up a pie (or a cake). Does any other type of graph connect so well to a common visual from everyday life?

While I lack Gabrielle’s experience in presenting data to executives, I do present data all the time. And while I rarely include pie charts, I claim that there is a time and a place for them.

About a year ago, I put together some slides about my website SingAboutScience.org for a potential collaborator. One of my points was that recent improvements in the website had not caused an appreciable increase in the fraction of website visitors who returned for multiple visits. A pair of pie charts helped make the point quickly and clearly.

[Figure: a pair of pie charts comparing the fraction of returning visitors before and after the website improvements]
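
For anyone who wants to mock up a similar pair of pie charts, here is a minimal matplotlib sketch. The visitor counts below are made-up placeholders for illustration, not the actual SingAboutScience.org numbers.

```python
# A minimal sketch of a side-by-side pair of pie charts (illustrative data only).
import matplotlib.pyplot as plt

labels = ["One-time visitors", "Returning visitors"]
before = [8200, 1800]  # hypothetical visit counts before the site improvements
after = [8100, 1900]   # hypothetical visit counts after the site improvements

fig, axes = plt.subplots(1, 2, figsize=(8, 4))
for ax, counts, title in zip(axes, [before, after], ["Before improvements", "After improvements"]):
    ax.pie(counts, labels=labels, autopct="%1.0f%%", startangle=90)
    ax.set_title(title)

fig.suptitle("Share of visitors who return to the site (illustrative data)")
plt.tight_layout()
plt.show()
```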

Or so I thought. When I described this example to Leila, she did not find it compelling.

At any rate, this beats arguing about money.

Images from the lab and the classroom

July 15, 2013

Coworker Ryan Choi has started supplying us with “fun facts of the week.” (Source for this one: omg-facts.com.)
[Image: fun fact of the week: no time travel, says Hawking]

This painting (by prolific Seattle muralist Ryan Henry Ward) sits in the hallway on my floor. I believe it represents red blood cells infected with the malaria parasite.
[Photo: Ryan Henry Ward’s painting of malaria-infected red blood cells]

One of the proteins our lab is studying is human glucose 6-phosphate dehydrogenase. Today we received a 3D printer-generated model of this protein from Tami Caraballo and her students at Glacier Peak High School in Snohomish, Washington.
[Photo: 3D-printed model of glucose 6-phosphate dehydrogenase]

I love the fact that I’m teaching a course where a single page of a handout might cover both engineering of muscle tissue in vitro … and the lyrical and rhythmic structure of the 1950s hit “Hound Dog.”
[Image: page 2 of the course handout]

Strategies for reading the primary literature

July 2, 2013

Aside from my obsession with teaching science through music, one of my main pedagogical interests is helping students read primary literature, i.e., the original articles in which scientists report their research findings. These articles tend to be really hard to understand, yet (for me, at least) gathering information in this way feels a bit like unlocking the secrets of the universe. I’m not depending on a doctor who read a book which cited a review article which cited the primary paper; I have more direct access to the original, less filtered data.

Yesterday, my Biology 485 students and I made a list of strategies for comprehending these difficult but valuable papers. Here are some of my favorites.

• If the Introduction section of a paper doesn’t make sense, the rest of it probably won’t make sense either. Consult key references to help you digest the Introduction.

• Figure out which new vocabulary terms are absolutely central to the paper, and learn those.

• As a first pass through the article, just read the headers of the sections and subsections to get an overview. As a second pass, read the article QUICKLY, not getting bogged down in details. Then go back and read the article again.

• To understand the main point of a given figure or table, find where and how it is cited in the main text.

• As a test of whether you understand a given sentence or paragraph, rewrite it in your own words.

• Apply BQMOC analysis (Background, Question, Methods, Observations, Conclusions) to individual parts of the paper — individual figures, for example.

• Finally — and perhaps most importantly — read each paper for a specific purpose. There’s so much crammed into each one that filtering the information can be a formidable challenge. Read the paper with specific questions in mind (either provided by the instructor, or self-generated), and focus your attention on the parts that directly address those questions.

[Image: our list of strategies for reading the primary literature]

A fake mini-review of “In Search of Santa”

April 14, 2013

In Search of Santa (2004) appears to be a movie for young children. It’s a G-rated cartoon lasting 75 minutes and featuring an abundance of cuddly penguin characters like the protagonists — twin sister princesses voiced by Hilary and Haylie Duff. Yet beneath the crude CGI animation, cliched moral lessons, and preposterous plot twists is a seething, searing indictment of the academic world and its self-important, soulless inhabitants.

The movie’s villains are Agonysla, Derridommis, and Mortmottimes, a trio of royal advisers collectively known as the Terribly Deep Thinkers. As they explain in their theme song, “We’re the Terribly Deep Thinkers/We’re walking almanacs/Our bones are old and brittle/But our minds are sharp as tacks. We’re overeducated/We’re snooty brainiacs/ We possess an excess/Of many useless facts.”

Early in the film the triumvirate puts Princess Crystal on trial for the crime of believing in Santa Claus. Later they trap her and her sister, Princess Lucinda, in the Cave of Profundity while publicly declaring the sisters “lost at sea,” thus clearing their own path to the throne in the event of the king and queen’s demise. “You just use big words to hide small thoughts,” Crystal admonishes the trio, yet she and Lucinda appear doomed until — spoiler alert — a precocious baby leopard seal leads the other penguins to the imprisoned sisters, exposing the Terribly Deep Thinkers’ avarice and egotism.

So masterful is director William R. Kowalchuk’s disguise of his anti-intellectualist parable as kiddie entertainment that reviewers have called it “a cheap Saturday morning cartoon slapped together to cash in on the Hilary Duff wave” and “forgettable fare that may only satisfy the youngest and most undiscerning viewers.” Such opinions aside, the sisters’ improbable triumph over the Terribly Deep Thinkers is not simply an instance of Duff-mania gone awry, but rather a wit-seeking missile strike against ivory towers everywhere.

Baby’s first bar graph

April 11, 2013

Aside from the algebra issue, my kindergartener’s MAP test also made me wonder what he was learning in the area of “Statistics & Probability” (another MAP math sub-score). But here’s a recent homework assignment overseen by his mom — a bona fide bar graph of the numbers of red, white, and blue cars parked on the street. Note the expert color coding of the data.

[Image: the color-coded bar graph (“1 car 2 car, red car, blue car”)]

In praise of the MAP test

March 29, 2013

In January, the parents in Phil’s kindergarten class got an email from a fellow parent. She told us that a standardized test called MAP (Measures of Academic Progress) would be administered soon, but that we had the right to opt out. She included links to a couple of anti-MAP blog entries.

Not knowing anything about the MAP beyond this one parent’s views, and not being big boat-rockers in general, Phil’s mom and I ignored the opt-out option and promptly forgot about the test until this week, when his scores came back.

According to the MAP, Phil is about average in reading and well above average in math. That’s what I would have guessed, but it’s nice to have an independent confirmation.

The results came with reading sub-scores for Phonological Awareness, Phonics, Concepts of Print, Vocabulary & Word Structure, Comprehension, and Writing. Phil’s Comprehension score was “LoAvg” (21st to 40th percentile), so it was suggested that we ask more questions when reading with him — certainly a reasonable suggestion.

The math sub-scores were for Problem Solving, Number Sense, Computation, Measurement & Geometry, Statistics & Probability … and Algebra.

Algebra? The class I took in 8th grade?

Phil was rated “High” in all of the math subcategories, including Algebra, so I decided to explore the validity of the test by giving Phil my best approximation of a kindergarten-level Algebra problem.

“I’m thinking of a number,” I said. “You don’t know what that number is, so we’ll call it X. But what if I told you that X plus 2 equaled 3? Would you know what X was then?”

Phil thought for a moment, then correctly answered that X was 1.

He struggled with “X minus 6 equals 2,” but eventually solved that one too.

Maybe Algebra really IS taught in kindergarten these days!