Bat Cunnilingus and Spurious Results

Thanks to my brother I became aware of this article about bat cunnilingus today. My brother, knowing how entertained I had been by a bat fellating itself at Disney on my honeymoon, decided that this would be just the sort of article that I’d like. Knowing the kind of examples I put in my textbooks I suspect his email is the first of many. Anyway, this article suggests that male bats engage in cunnilingus with females. Female bats, that is. They have videos to prove it. Male bats like cunnilingus so much that they don’t just use it as foreplay, but they get stuck in afterwards as well. The article has two main findings:
  1. The more that males (bats) engage in cunnilingus, the longer the duration of intercourse, r = .91 (see their Figure 1). They tested this with a Pearson r, based on 8 observations (although each observation was the average of several data points, I think – it’s not entirely clear how the authors arrived at these averages). They conclude that this behaviour may be due to sperm competition, in that males sort of ‘clean out’ the area before they procreate. They argue that the longer intercourse duration supports this argument because longer duration = greater chance of paternity.
  2. The more that males engage in pre-intercourse cunnilingus, the less they engage in post-intercourse cunnilingus, r = -.79 (see their Figure 2). They again tested this with a Pearson r, based on 8 observations. They conclude “… the negative relationship between the duration of pre- and post-copulatory cunnilingus suggests the possibility of sperm competition. Thus, the male removes the sperm of competitors but apparently not its own – lick for longer before but less after copulation, if paternity uncertainty is high.”

There are several leaps of faith in their conclusions. Does longer duration necessarily imply a greater chance of paternity? Also, if sperm competition is at the root of all of this then I’m not sure post-coital cunnilingus is really ever an adaptive strategy: consuming your own sperm hardly seems like the best method of impregnating your mate. Yes, I’m still talking about bats here; what you choose to do in your own free time is none of my business.

Enough about Cunnilingus, what about Statistics?

Anyway, I want to make a statistical point, not an evolutionary one. It struck me when looking at this article that finding 2 is probably spurious. Figure 2 looks very much like two outliers are making a relationship out of nothing, and with only 8 observations they are likely to have a large impact. With 8 observations it is, of course, hard to say what’s an outlier and what isn’t; however, the figure was like a bat tongue to my brain: my juices were flowing. Fortunately, with so few observations we can approximate the data from the two figures. I emphasise that these are quick estimates of values based on the graphs and a ruler. We have three variables estimated from the paper: pre (pre-intercourse cunnilingus), post (post-intercourse cunnilingus) and duration (duration of intercourse).
We’re going to use R; if you don’t know how to use it then some imbecile called Andy Field has written a book on it that you could look at. We can input these approximate scores as:
pre<-c(46, 50, 51, 52, 53, 53.5, 55, 59)
post<-c(162, 140, 150, 140, 142, 138, 152, 122)
duration<-c(14.4, 14.4, 14.9, 15.5, 15.4, 15.2, 15.3, 16.4)
bat<-data.frame(pre, post, duration)
Ok, so that’s the three variables entered, and I’ve made a data frame from them called bat.
Let’s load some packages that will be useful (if you don’t have them installed then install them) and source Rand Wilcox’s very helpful robust functions:
library(ggplot2)
library(Hmisc)
library(boot)
source("http://dornsife.usc.edu/assets/sites/239/docs/Rallfun-v20")
Let’s draw some rough replicas of Figures 1 and 2 from the paper:
qplot(pre, post, data = bat, ylim = c(85, 200), xlim = c(44, 60)) + stat_smooth(method = "lm")
qplot(pre, duration, data = bat, ylim = c(13, 18), xlim = c(44, 60)) + stat_smooth(method = "lm")

Figure 1: Quick replica of Maruthupandian & Marimuthu (2013) Figure 1
Figure 2: Quick replica of Maruthupandian & Marimuthu (2013) Figure 2
As you can see the figures – more or less – replicate their data. OK, let’s replicate their correlation coefficient estimates by executing:
rcorr(as.matrix(bat), type = "pearson")
gives us
           pre  post duration
pre       1.00 -0.78     0.91
post     -0.78  1.00    -0.75
duration  0.91 -0.75     1.00
n= 8

P
         pre    post   duration
pre             0.0225 0.0018 
post     0.0225        0.0308 
duration 0.0018 0.0308
So, we replicate the pre-intercourse cunnilingus (pre)–duration correlation of .91 exactly (p = .0018), and we more or less get the right value for the pre-post correlation: we get r = -.78 (p = .0225) instead of -.79, but we’re within rounding error. So, it looks like our estimates of the raw data from the figures in the article aren’t far off the real values at all. Just goes to show how useful graphs in papers can be for extracting the raw data.
If I’m right and the data in Figure 2 are affected by the first and last case, we’d expect something different to happen if we calculate the correlation based on ranked scores (i.e. Spearman’s rho). Let’s try it:
rcorr(as.matrix(bat), type = "spearman")
           pre  post duration
pre       1.00 -0.50     0.78
post     -0.50  1.00    -0.51
duration  0.78 -0.51     1.00
n= 8
P
         pre    post   duration
pre             0.2039 0.0229 
post     0.2039        0.1945 
duration 0.0229 0.1945 
The pre-duration correlation has dropped from .91 to .78 (p = .0229) but it’s still very strong. However, the pre-post correlation has changed fairly dramatically from -.79 to r = -.50 (p = .204) and is now not significant (although I wouldn’t want to read too much into that with an n = 8).
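Incidentally, if you want to convince yourself that Spearman’s rho really is just Pearson’s r computed on the ranked scores, a quick sanity check (my aside, not part of the paper’s analysis) is:
# rank the scores, then compute an ordinary Pearson correlation on the ranks
cor(rank(bat$pre), rank(bat$post))        # should match the rho of -.50 above
cor(rank(bat$pre), rank(bat$duration))    # should match the rho of .78 above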
What about if we try a robust correlation, such as the percentile bend correlation described by Wilcox (2012)? Having sourced Wilcox’s functions we can do this easily enough by executing:
pbcor(bat$pre, bat$post)
pbcor(bat$pre, bat$duration)
For the pre-post correlation we get:
$cor
[1] -0.3686576
$test
[1] -0.9714467
$siglevel
[1] 0.368843
$n
[1] 8
It has changed even more dramatically, from -.79 to r = -.37 (p = .369), and is now not significant (again, I wouldn’t want to read too much into that with an n = 8, but it is now a much smaller effect than they’re reporting in the paper).
The pre-duration correlation fares much better:
$cor
[1] 0.852195
$test
[1] 3.989575
$siglevel
[1] 0.007204076
$n
[1] 8
This is still very strong and significant, r = .85 (p = .007). We could also try to estimate the population value of these two relationships by computing a robust confidence interval using bootstrapping. We’ll stick with Pearson r seeing as that’s the estimate that they used in the paper but you could try this with the other measures if you like:
bootr<-function(data, i){
cor(data[i, 1],data[i, 2])
}
bootprepost<-boot(bat[1:2], bootr, 2000)
boot.ci(bootprepost)
bootpreduration<-boot(bat[c(1, 3)], bootr, 2000)
boot.ci(bootpreduration)
If we look at the BCa intervals, we get this for the pre-post correlation:
Level     Percentile            BCa         
95%   (-0.9903,  0.5832 )   (-0.9909,  0.5670 )
and this for the pre-duration correlation:
Level     Percentile            BCa         
95%   ( 0.6042,  0.9843 )   ( 0.5022,  0.9773 )
In essence then, the true relationship between pre- and post-intercourse cunnilingus duration lies somewhere between about –1 and 0.6. In other words, it could be anything really, including zero. For the relationship between pre-intercourse bat cunnilingus and intercourse duration the true relationship is somewhere between r= .5 and .98. In other words, it’s likely to be – at the very worst – a strong and positive association.
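As an aside, if you wanted to take up the suggestion above and bootstrap one of the other measures instead, you only need to change the statistic function. A sketch for the percentile bend correlation (my variation, not anything from the paper) would be:
# same idea as bootr, but the statistic is now Wilcox's percentile bend correlation
bootpb <- function(data, i){
  pbcor(data[i, 1], data[i, 2])$cor
}
bootprepost.pb <- boot(bat[1:2], bootpb, 2000)
boot.ci(bootprepost.pb, type = c("perc", "bca"))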

Conclusions

What does this tell us? Well, I’m primarily interested in using this as a demonstration of why it’s important to look at your data and to employ robust tests. I’m not trying to knock the research. Of course, in the process I am, inevitably, sort of knocking it. So, I think there are three messages here:
  1. Their first finding, that the more that males (bats) engage in cunnilingus the longer the duration of intercourse, is very solid. The population value could be less than the estimate they report, but it could very well be that large too. At the very worst it is still a strong association.
  2. The less good news is that finding 2 (that the more that males engage in pre-intercourse cunnilingus, the less they engage in post-intercourse cunnilingus) is probably wrong. The best-case scenario is that the finding is real but the authors have overestimated it. The worst-case scenario is that the population effect is in the opposite direction to the one the authors report, and there is, of course, a middle ground: that there is simply no effect at all in the population (or one so small as to be not remotely important).
  3. It’s also worth pointing out that the two extreme cases that have created this spurious result may in themselves be highly interesting observations. There may be important reasons why individual bats engage in unusually long or short bouts of cunnilingus (either pre or post intercourse), and finding out what factors affect this would be an interesting thing for the authors to follow up.

References

Maruthupandian, J., & Marimuthu, G. (2013). Cunnilingus apparently increases duration of copulation in the Indian flying fox, Pteropus giganteus. PLoS ONE, 8(3).
Wilcox, R. R. (2012). Introduction to robust estimation and hypothesis testing (3rd ed.). Burlington, MA: Elsevier.

Bonferroni correcting lots of correlations

Someone posed me this question:
Some of my research, if not all of it (:-S) will use multiple correlations. I’m now only considering those correlations that are less than .001. However, having looked at bonferroni corrections today – testing 49 correlations require an alpha level of something lower than 0.001. So essentially meaning that correlations have to be significant at .000. Am I correct on this? The calculator that I am using from the internet says that with 49 correlational tests, with an alpha level of 0.001 – there is chance of finding a significant result in approximately 5% of the time.
Some people have said to me that in personality psychology this is okay – but I personally feel wary about publishing results that could essentially be regarded as meaningless. Knowing that you probably get hammered every day for answers to stats question, I can appreciate that you might not get back to me. However – if you can, could you give me your opinion on using multiple correlations? Just seems a clunky method for finding stuff out.
It seemed like the perfect opportunity for a rant, so here goes. My views on this might differ a bit from conventional wisdom, so might not get you published, but this is my take on it:
  1. Null hypothesis significance testing (i.e. looking at p-values) is a deeply flawed process. Stats people know it’s flawed, but everyone does it anyway. I won’t go into the whys and wherefores of it being flawed, but I touch on a few things here and to a lesser extent here. Basically, the whole idea of determining ‘significance’ based on an arbitrary cut-off for a p-value is stupid. Fisher didn’t think it was a good idea, Neyman and Pearson didn’t think it was a good idea, and the whole thing dates back to prehistoric times when we didn’t have computers to compute exact p-values for us.
  2. Because of the above, Bonferroni correcting when you’ve done a billion tests is even more ridiculous because your alpha level will be so small that you will almost certainly make Type II errors, and lots of them (the quick calculation after this list shows just how brutal the correction gets). Psychologists are so scared of Type I errors that they forget about Type II errors.
  3. Correlation coefficients are effect sizes. We don’t need a p-value to interpret them. The p-value adds precisely nothing of value to a correlation coefficient other than to potentially fool you into thinking that a small effect is meaningful or that a large effect is not (depending on your sample size). I don’t care how small your p-value is, an r = .02 or something is crap. If your sample size is fairly big then the correlation should be a precise estimate of the population effect (bigger sample = more precise). What does add value is a confidence interval for r, because it gives you limits within which the true (population) value is likely to lie.
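To put some numbers on the scenario from the question above (a quick back-of-the-envelope calculation, using the figures from the email rather than anything new):
# Bonferroni-corrected threshold for 49 tests at a familywise alpha of .05
.05/49                  # about .00102, i.e. 'significant at .000' once rounded to 3 decimal places
# even testing each of the 49 correlations at .001, the chance of at least one
# Type I error somewhere in the family is still roughly 5%
1 - (1 - .001)^49       # about .048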

So, in a nutshell, I would (personally) not even bother with p-values in this situation because, at best, they add nothing of any value and, at worst, they will mislead you. I would, however, get confidence intervals for your many correlations (and if you bootstrap the CIs, which you can do in SPSS, then all the better). I would then interpret effects based on the size of r and the likely size of the population effect (which the confidence interval tells you).
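If you happened to be working in R rather than SPSS, one way to get a bootstrapped confidence interval for a single correlation is sketched below (the data frame df and variables x and y are made-up names; repeat for whichever pairs you care about, and swap in method = "spearman" if you prefer rho):
library(boot)
# statistic: the correlation between columns x and y in a bootstrap resample
boot.r <- function(data, i) cor(data$x[i], data$y[i])
boot.out <- boot(df, boot.r, R = 2000)
boot.ci(boot.out, type = "bca")   # bias corrected and accelerated 95% CI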
Of course reviewers and PhD examiners might disagree with me on this, but they’re wrong:-)
Ahhhhhh, that feels better.

Bias in End of Year Album Polls?

So, in rolls 2012 and out rolls another year. I like new year: it’s a time to fill yourself with optimism about the exciting things that you will do. Will this be the year that I write something more interesting than a statistics book, for example? It’s also a time of year to reflect upon all of the things that you thought you’d do last year but didn’t. That’s a bit depressing, but luckily 2011 was yesterday and today is a new year and a new set of hopes that have yet to fail to come to fruition.
It’s also the time of year that magazines publish their end-of-year polls. Two magazines that I regularly read are Metal Hammer and Classic Rock (because, in case it isn’t obvious from my metal-themed website and podcasts, I’m pretty fond of heavy metal and classic rock). The ‘album of the year’ polls in these magazines are an end-of-year treat for me: it’s an opportunity to see what highly rated albums I overlooked, to wonder at how an album that I hate has turned up in the top 5, or to pull a bemused expression at how my favourite album hasn’t made it into the top 20. At my age, it’s good to get annoyed about pointless things.
Anyway, for years I have had the feeling that these end of year polls are biased. I don’t mean in any nefarious way, but simply that reviewers who contribute to these polls tend to rate recently released albums more highly than ones released earlier in the year. Primacy and recency effects are well established in cognitive psychology: if you ask people to remember a list of things, they find it easier to recall items at the start or end of the list. Music journalists are (mostly) human so it’s only reasonable that reviewers will succumb to these effects, isn’t it?
I decided to actually take some time off this winter Solstice, and what happens when you take time off? In my case, you get bored and start to wonder whether you can test your theory that end of year polls are biased. The next thing you know, you’re creating a spreadsheet with Metal Hammer and Classic Rock’s end of year polls in it, then you’re on Wikipedia looking up other useful information about these albums, then, when you should be watching the annual re-run of Mary Poppins you find that you’re getting R to do a nonparametric bootstrap of a mediation analysis. The festive period has never been so much fun.
Anyway, I took the lists of top 20 albums from both Metal Hammer and Classic Rock magazine. I noted down each album’s position in the poll (1 = best album of the year, 20 = 20th best album of the year), and I also found out what month each album was released. From this information I could calculate how many months before the poll the album came out (0 = the album came out the same month as the poll, 12 = the album came out 12 months before the poll). I called this variable Time.since.release.
My theory implies that an album’s position in the end-of-year list (position) will be predicted from how long before the poll the album was released. A recency effect would mean that albums released close to the end of the year (i.e. a low score on Time.since.release) will be higher up the end-of-year poll (remember that the lower the score, the higher up the poll the album is). So, we predict a positive relationship between position and Time.since.release.
Let’s look at the scatterplot:
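If you want to draw this sort of plot yourself, and have the data in a data frame called polls with columns position, Time.since.release and Magazine (my labels for the purposes of this sketch, not necessarily the ones I used at the time), something along these lines will do it:
library(ggplot2)
# one panel per magazine, raw points plus a linear trend line
ggplot(polls, aes(Time.since.release, position)) +
  geom_point() +
  stat_smooth(method = "lm") +
  facet_wrap(~ Magazine)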
Both magazines show a positive relationship: albums higher up the poll (i.e. low score on position) tend to have been released more recently (i.e., low score on the number of months ago that the album was released). This effect is much more pronounced though for Metal Hammer than for Classic Rock.
To quantify the relationship between position in the poll and time since the album was released we can look at a simple correlation coefficient. Our position data are a rank, not interval/ratio, so it makes sense to use Spearman’s rho, and we have a fairly small sample so it makes sense to bootstrap the confidence interval. For Metal Hammer we get (note that because of the bootstrapping you’ll get different results if you try to replicate this) rho = .428 (bias = –0.02, SE = 0.19) with a 95% bias corrected and accelerated confidence interval that does not cross zero (BCa 95% = .0092, .7466). The confidence interval is pretty wide, but tells us that the correlation in the population is unlikely to be zero (in other words, the true relationship between position in the poll and time since release is likely to be more than no relationship at all). Also, because rho is an effect size we can interpret its size directly, and .428 is a fairly large effect. In other words, Metal Hammer reviewers tend to rank recent albums higher than albums released a long time before the poll. They show a relatively large recency effect.
What about Classic Rock? rho = .038 (bias = –0.001, SE = 0.236) with a BCa 95% CI = –.3921, .5129. The confidence interval is again wide, but this time crosses zero (in fact, zero is more or less in the middle of it). This CI tells us that the true relationship between position in the poll and time since release could be zero, in other words no relationship at all. We can again interpret rho directly, and .038 is a very small effect (it’s close to zero). In other words, Classic Rock reviewers do not tend to rank recent albums higher than albums released a long time before the poll. They show virtually no recency effect. This difference is interesting (especially given that there is overlap between contributors to the two magazines!).
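(If you wanted to do this bit in R rather than SPSS, a bootstrapped confidence interval for Spearman’s rho works in much the same way as the bootstraps earlier; a sketch, assuming the Metal Hammer top 20 sits in a data frame called hammer, would be:)
library(boot)
# statistic: Spearman's rho between poll position and time since release in a resample
boot.rho <- function(data, i) cor(data$position[i], data$Time.since.release[i], method = "spearman")
boot.out <- boot(hammer, boot.rho, R = 2000)
boot.ci(boot.out, type = "bca")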
It then occurred to me, because I spend far too much time thinking about this sort of thing, that perhaps it’s simply the case that better albums come out later in the year and this explains why Metal Hammer reviewers rate them higher. ‘Better’ is far too subjective a variable to quantify, however, it might be reasonable to assume that bands that have been around for a long time will produce good material (not always true, as Metallica’s recent turd floating in the loo-loo demonstrates). Indeed, Metal Hammer’s top 10 contained Megadeth, Anthrax, Opeth, and Machine Head: all bands who have been around for 15-20 years or more. So, in the interests of fairness to Metal Hammer reviewers, let’s see whether ‘experience’ mediates the relationship between position in the poll and time since the album’s release. Another 30 minutes on the internet and I had collated the number of studio albums produced by each band in the top 20. The number of studio albums seems like a reasonable proxy for experience (and better than years as a band, because some bands produce an album every 8 years and others every 2). So, I did a mediation analysis with a nonparametric bootstrap (thanks to the mediation package in R). The indirect effect was 0.069 with a 95% CI = –0.275, 0.537. The proportion of the effect explained by mediation was about 1%. In other words, the recency bias in the Metal Hammer end of year poll could not be explained by the number of albums that bands in the poll had produced in the past (i.e. experience). Basically, despite my best efforts to give them the benefit of the doubt, Metal Hammer critics are actually biased towards giving high ratings to more recently released albums.
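For anyone who fancies trying the same thing, the general recipe with the mediation package looks something like the sketch below (it assumes a data frame called hammer with columns position, Time.since.release and albums for the number of studio albums; it’s the gist rather than the exact code I ran):
library(mediation)
# mediator model: does experience (number of studio albums) depend on time since release?
model.m <- lm(albums ~ Time.since.release, data = hammer)
# outcome model: poll position predicted from time since release and experience
model.y <- lm(position ~ Time.since.release + albums, data = hammer)
# nonparametric bootstrap of the indirect (mediated) effect
med <- mediate(model.m, model.y, treat = "Time.since.release", mediator = "albums", boot = TRUE, sims = 2000)
summary(med)   # reports the indirect effect, its CI and the proportion mediated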
These results might imply many things:
  • Classic Rock reviewers are more objective when creating their end of year polls (because they over-ride the natural tendency to remember more recent things, like albums).
  • Classic Rock reviewers are not human because they don’t show the recency effects that you expect to find in normal human beings. (An interesting possibility, but we need more data to test it …)
  • Metal Hammer should have a ‘let’s listen to albums released before June’ party each November to remind their critics of albums released earlier in the year. (Can I come please and have some free beer?)
  • Metal Hammer should inversely weight reviewers’ ranks by the time since release, so that albums released earlier in the year get weighted more heavily than recently released albums. (Obviously, I’m offering my services here …)
  • I should get a life.

Ok, that’s it. I’m sure this is of no interest to anyone other than me, but it does at least show how you can use statistics to answer pointless questions. A bit like what most of us scientists do for a large portion of our careers. Oh, and if I don’t get a ‘Spirit of the Hammer’ award for 15 years’ worth of infecting students of statistics with my heavy metal musings then there is no justice in the world. British Psychological Society awards and National Teaching Fellowships are all very well, but I need a Spirit of the Hammer on my CV (or at least Defender of the Faith).

Have a rockin’ 2012
andy
P.P.S. My top 11 (just to be different) albums of 2011 (the exact order is a bit rushed):
  1. Opeth: Heritage
  2. Wolves in the Throne Room: Celestial Lineage
  3. Anthrax: Worship Music
  4. Von Hertzen Brothers: Stars Aligned
  5. Liturgy: Split LP (although the Oval side is shit)
  6. Ancient Ascendent: The Grim Awakening
  7. Graveyard: Hisingen Blues
  8. Foo Fighters: Wasting Light
  9. Status Quo: Quid Pro Quo
  10. Manowar: Battle Hymns MMXI
  11. Mastodon: The Hunter