Useless citations

Milan Curcic
Oct 28, 2020

A few days ago I got a notification from Google Scholar about the first citation of my latest paper on the drag coefficient in hurricane winds:

https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2020GL087647

When I get these notifications, I go and look at the paper that cited mine. Perhaps my results supported theirs, or maybe the authors disagreed with something I wrote. The citing paper is Gauer (2020).

And the citing statement is:

As a conservative estimate, the drag coefficient cDa is assumed to be in the order of 0.01 (see for example Curcic and Haus, 2020).

There are two problems with this reference.

First, Gauer writes about the drag that calm air exerts on an avalanche. We measured drag over water and waves in hurricane-force winds. These are different physical processes. A reader who follows the reference hoping to learn more about air drag over moving snow will be disappointed, because our paper says nothing about it. This citation is useless.

Second, the citation to our paper immediately follows the statement that the drag coefficient is on the order of 0.01. Our paper finds a drag coefficient on the order of 0.001, a factor of ten smaller. A careless reader could reuse this reference in another paper without doing their due diligence. This citation is not only useless but potentially harmful.
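To see why a factor of ten matters, here is a minimal sketch using the conventional bulk definition of the drag coefficient from air-sea interaction work. The notation is mine and is only illustrative; it is not quoted from either paper.

```latex
\documentclass{article}
\begin{document}
% Conventional bulk definition of the drag coefficient $C_D$
% (standard air--sea interaction notation; illustrative, not quoted from either paper).
% $\tau$ is the wind stress, $\rho_a$ the air density, $U_{10}$ the 10-m wind speed.
\[
  \tau = \rho_a \, C_D \, U_{10}^{2}
\]
% The stress is linear in $C_D$, so, all else being equal, quoting
% $C_D \sim 0.01$ instead of $\sim 0.001$ overstates the stress by
% $0.01 / 0.001 = 10$ times.
\end{document}
```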

So, how does this happen when we have rigorous peer review?

Is the editor supposed to catch this? Editors want to work less, not more. Their plates are already full. They shouldn't have to worry about this level of detail; the reviewers will fix it.

Are the reviewers responsible for cross-checking all references? Maybe, but I doubt that most of them actually do it. I don’t in most cases. I dive only into the cited papers that interest me.

Are the authors, then, responsible for providing accurate and relevant references? Absolutely! But we authors are lazy. We want to get the paper out of the way and get back to work. Providing a reference for every statement we make can be a chore, something we do to meet the spec and move on. If a citation is problematic, the editor or the reviewers will catch it and fix it for us, right?

Papers nowadays tend to be 10% useful information and 90% filler. There's a long-standing bad habit among authors of dumping background literature and boring the reader to death, just because we're supposed to. Everybody else is doing it. Plus, it covers our backs in peer review: who could argue that "Some scientific claim (Jones 1973, 1974; Smith et al. 1985a, 1985b, 1992; Doe et al. 2004, 2009)" isn't backed by the literature? Because we're incentivized to write long, boring literature reviews and to back up every claim with a citation (the more the better), the risk of useless citations slipping into the literature grows whenever authors skip their due diligence.

Citing papers was meant to be "standing on the shoulders of giants" in practice. It gives us the building blocks to base our research on, without having to prove or describe them fully, because somebody else already did. Nowadays it's that, and more. We write to meet a spec, to please reviewers and editors. How many times have you searched Google Scholar for a keyword and cited the first result without reading the paper? It's okay; I've done this too. The logic goes like this: it was the number one search result for the keyword, and look, it's been cited thousands of times; it must be relevant, so I'll cite it!

So what are the takeaways here?

  1. Stop writing long, boring literature reviews that span 50+ years of scientific progress in every paper you write. Focus on the minimal essential literature review for your paper, and trash everything else. Most papers don't need an introduction longer than a page.
  2. Don’t assume that the editor or the reviewers will fix it for you. You’re responsible for your writing. Do the work.
  3. Use citations that are relevant, accurate, and that help the reader learn more.
