I recently shared a video with an old friend who is an immunologist and statistics teacher. (That video will be published shortly, btw). In that video I argue we need to shift to a general linear model approach to teaching statistics. Here’s an old video that explains what I mean by teaching stats as a GLM:

[Embedded video]

Not surprisingly, she was skeptical. She sent me an email with her objections (and points of agreement). I figured I'd share it with y'all. (Her words are italicized, mine are indented.)

Okay, so here are the things I thought.

Overwhelmingly, you and I feel exactly the same about pedagogy. It should be teaching concepts and connections, because memorization is pointless, particularly in the age of Google. So, things I agree with:

1) Don’t memorize. I don’t give tests, but when I did, I allowed a 3×5 that they could write all the memorization stuff (like formulas) on. 

Nice to see we agree 🙂 Although, it's not the memorization I find problematic, per se. It's that the current approach requires too much intellectual effort just to run the analysis. By the time students figure out which analysis to use and actually run the thing, there's no mental energy left to interpret the results. Imagine if, in order to unlock your phone, you had to enter a password, enter your mother's date of birth, swipe a fingerprint scanner, do 30 jumping jacks, then eat a pint-sized bowl of cereal. Only then would your phone unlock. That's a massive human-factors nightmare. Software (and deciding which analyses to run) should not get in the way of giving us what we want: results. I'm just shortcutting the time from opening the software to interpreting the results.

2) It's not a protocol/procedure. It's a science that is changing constantly, and teaching it like a protocol is silly. Of course people will learn in chunks and not connect them. I use several main concepts, like the ratio, to connect everything. We talk about linear regression that way (how could you not, with an F?).

Yes, agreed. I actually wrote a paper about this: I propose an eight-step approach to data analysis. It's not a procedure; it's a framework. Here's the link, in case you're interested:

https://psyarxiv.com/r8g7c/

btw, it has Harry Potter references and a birthday cake metaphor. Some of my best work 🙂

3) Once the concepts are solid, use software. Don't endlessly make people calculate z scores.

YES! I used to spend hours doing hand calculations in front of students. What a waste of time!
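In that spirit, here's a tiny sketch (Python, with made-up exam scores; just one way the software could do it) of what I mean: let the computer do the arithmetic, and spend class time on what a z score means.

```python
# Compute z scores in one line instead of by hand.
import numpy as np
from scipy import stats

scores = np.array([55, 61, 72, 84, 90])  # hypothetical exam scores
print(stats.zscore(scores))              # (x - mean) / sd for each score
```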

Things I had concerns about:

1) We also don't teach biology to non-biologists in the way experts think about it. To experts, all biology is inclusive fitness, or what makes a population most likely to reproduce (for genes to be passed on). Complicated fields, like immunology, make so much more sense when you realize it's all about many cells competing for resources to be the ones that survive and generate copies of themselves. It's a truly elegant system. But learning about each system in and of itself, in little chunks, is quite difficult without the interwoven themes. BUT (and here's where I have trouble) it is easier, and more practical, to teach it how we teach it. The students don't get overwhelmed, and the ones that aren't too bright don't get totally lost.

Good point. I too have wondered whether learning in discrete chunks is necessary before one comes to see things as interrelated. But I don't think that's the case with statistics, or at least the discrete chunks we currently use aren't serving us well. Case in point: you (and many others) seem to struggle with the idea that everything is just the linear model, when it verifiably is. Again, this is not intended as an insult or condescension; it's just the way you were taught. The fact that it's so hard for you (and others) to accept says, to me at least, that the existing curriculum isn't serving us well.

After writing that, it sounds like I’m being harsh and argumentative. I’m not. Just making the point that the way we currently teach requires students to make a really hard mental transition, one that is entirely unnecessary.
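To make "verifiably" concrete, here's a minimal sketch in Python (simulated data; scipy and statsmodels are just the tools I'm reaching for here, nothing from her email): the classic independent-samples t-test and a linear model with a dummy-coded group predictor yield the same test statistic and p-value.

```python
# A t-test is just a linear model with a single dummy-coded predictor.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(42)
a = rng.normal(10, 2, 50)  # simulated scores, group A
b = rng.normal(12, 2, 50)  # simulated scores, group B

# The "separate chunk" version: an independent-samples t-test
t, p = stats.ttest_ind(a, b)

# The linear-model version: score ~ intercept + group dummy
df = pd.DataFrame({"score": np.concatenate([a, b]),
                   "group": ["A"] * 50 + ["B"] * 50})
fit = smf.ols("score ~ group", data=df).fit()

print(t, p)  # t statistic and p value from the t-test
print(fit.tvalues["group[T.B]"], fit.pvalues["group[T.B]"])  # same values (t flips sign)
```

The slope on the group dummy is just the difference between the two group means, which is exactly what the t-test tests.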

2) At the end of the day, sometimes we need a yes/no. Actually, usually we need a yes/no. We make a cutoff, even though we KNOW we are losing information, because we have to generate something that is manageable, understood quickly by others in the field, and summarized as an asterisk in a paper so it doesn't take up too much space.

I agree. But "sometimes" is different from "every time." Yes, sometimes making binary decisions is best. You'll see in some videos in the coming weeks that I demonstrate situations where I have to make binary decisions (in one case, whether we should keep an interaction term in our model). However, sometimes we do need to know something about the degree of an effect. The standard stats curriculum says little (though not nothing) about that. Mine does 🙂
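For the curious, here's a sketch (my own illustrative Python, not anything from those videos) of what that binary decision can look like: fit the model with and without the interaction, then let a nested-model F test decide whether the interaction earns its keep.

```python
# Compare nested linear models: does the interaction term improve the fit?
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({"x": rng.normal(size=n),
                   "group": rng.choice(["A", "B"], size=n)})
# Simulated outcome with a group effect but no true interaction
df["y"] = 2 * df["x"] + (df["group"] == "B") + rng.normal(size=n)

reduced = smf.ols("y ~ x + group", data=df).fit()  # main effects only
full = smf.ols("y ~ x * group", data=df).fit()     # adds the x:group interaction

print(anova_lm(reduced, full))  # F test for the added interaction term
```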

I don't disagree that your way makes sense; however, it requires a LOT of people changing how they think about statistics.

Totally. But are you familiar with the "replication crisis"? That, too, requires a LOT of people changing how they think about research, and it's happening (at least in psychology). Change is hard, but it's underway.

And just because it's hard doesn't mean it's not necessary 🙂

You're going to have to have buy-in from thousands (perhaps millions) of people, and you'll have to convince journals to give more space to statistics.

I'd recommend reading the article I mentioned above. What I recommend actually doesn't add that much, and what it does add is much more informative than a table of p-values. Yes, there is a cost, but the benefits far outweigh it.

3) You'll have to forgive my piecemeal training and the fact that I am not even close to as much of an expert as you, but nonparametrics? How do you deal with that?

Great question! But I need to clarify some things first. How I handle messy data now is very different from how I handled it as a biostatistician. From what I remember, it's very common to handle messy data with Mann-Whitney U or Friedman tests (among others). From what I've read (e.g., https://psycnet.apa.org/record/2008-14338-002), these are very dated ways of handling messy data. In fact, I don't even use "modern" robust procedures (as that article advocates). Instead of removing the parametric from statistics, I just assume a non-normal distribution. If a biomarker is super skewed and zero-inflated, maybe I'll model it with a Gamma, a Poisson, or a zero-inflated model. In other words, rather than sweep the messiness under the rug, as nonparametric procedures do, I'd rather model that messiness, and I do it with generalized linear models. Generalized linear models are just extensions of general linear models, so it's an easy transition to make. (Although I don't teach generalized linear models until the second statistics class.)
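To make that concrete, here's a minimal sketch (Python, with a simulated skewed "biomarker"; the variable names and effect sizes are all made up) of modeling the messiness instead of ranking it away:

```python
# A generalized linear model with a Gamma family: same linear-model machinery,
# just a different assumed distribution and link.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 300
df = pd.DataFrame({"treatment": rng.choice(["control", "drug"], size=n)})
mu = np.where(df["treatment"] == "drug", 3.0, 2.0)      # hypothetical group means
df["biomarker"] = rng.gamma(shape=2.0, scale=mu / 2.0)  # right-skewed outcome

fit = smf.glm("biomarker ~ treatment", data=df,
              family=sm.families.Gamma(link=sm.families.links.Log())).fit()
print(fit.summary())  # coefficients are on the log scale
```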
