One of the most common types of feedback I get from clients is a question about data that doesn’t seem to make sense. Most commonly: “How can we have 8 out of 10 ‘dead’ at 4 hours and then only 2 out of 10 dead at 24 hours?”
Actually, I am quite proud of such anomalies in our data sets, since most real data sets SHOULD contain them, for several reasons:
- In some cases insects truly do recover (as exemplified by pyrethrin recovery) after an initial impact.
- In some cases insects are easily misclassified as ‘dead’ when really they are just stunned, or simply not twitching at the moment the researcher inspects that rep.
- Some species of arthropods (caterpillars, grubs, and the like) naturally show very little twitching or ‘knockdown’ behavior and hence are more likely to bounce back and forth between alive and dead classifications.
- In some cases our researcher simply didn’t look long enough at the rep to confirm the condition of all 10 insects (how long is ‘long enough’?).
- In some cases the insect may have been wedged against or tangled up with another insect, and movement was attributed to the wrong one of the two individuals.
- In some cases the insect was obscured from the researcher’s view and was misclassified.
- Maybe the researcher counted the same insect twice as ‘dead’ (or ‘alive’) while the insects moved around the arena. This is very easy to do when most of the insects are alive, since they are in constant motion, and in small arenas it can be difficult to keep track of each individual.
- And a sad reality is that not all people see things the same way. Weekend counts may be taken by a different person, and what one researcher perceives as dead, another may see slight twitching in and record as knockdown (KD). We had a researcher who could see the abdomen of a fly moving when the rest of us could see nothing, even with some magnification.
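Incidentally, these reversals are easy to surface mechanically. Below is a minimal sketch in Python, with an invented data layout rather than our actual record format, that flags any rep whose dead count drops between consecutive observation times:

```python
# Hypothetical sketch: surface "dead coming back to life" in repeated counts.
# The rep IDs, times, and counts below are illustrative, not real test data.

counts = {
    # rep id -> list of (hours after treatment, number dead out of 10)
    "rep_1": [(4, 8), (24, 2), (48, 9)],   # the anomaly from the question above
    "rep_2": [(4, 3), (24, 5), (48, 10)],  # a rep whose count only ever rises
}

def find_recoveries(observations):
    """Return (earlier_time, later_time, drop) wherever the dead count fell."""
    return [
        (t0, t1, d0 - d1)
        for (t0, d0), (t1, d1) in zip(observations, observations[1:])
        if d1 < d0
    ]

for rep, obs in counts.items():
    for t0, t1, drop in find_recoveries(obs):
        print(f"{rep}: dead count dropped by {drop} between {t0}h and {t1}h")
```

A flag like that should trigger a second look, not a correction: the drop may be a genuine pyrethrin-style recovery, and ‘fixing’ it would erase exactly the information worth keeping.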
And the part I like the most about the ‘dead coming back to life’ is that it shows Snell Sci has a trained staff… trained to count ‘one bug at a time’. The biggest problem I notice with front-line researchers taking counts is that they can easily look at the previous count, see that there are already 2 dead, and just check whether there are now 3 or more. Or, if the last count showed 10 dead, they don’t even bother picking up the arena to get a new count and just write down 10 dead again today. We strive to have our researchers ignore the previous counts and record only what they see in the current count. If that creates wacky data or confusing issues, I would rather deal with that than have a mistake carried from one count to the next. A mistake that keeps propagating sends data quality downhill fast.
Bottom line… things do happen with live bugs and live people. And when the data looks perfect, it is good to be suspicious. Small glitches are good because they tell us that the researchers are not trying to make everything look pretty. With thousands of tests each year, we just don’t see many perfect tests; there is something a little odd about nearly all of them. And with 3-6 people involved in many of our tests (rearing, prep, labeling, mixing, treating, etc.), it is good to have a few conflicting observations crop up to keep everyone on their toes. When only one person is involved in a single test, it is easy to get lazy and make assumptions.
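For the curious, that suspicion can itself be turned into a rough screen. This is only a sketch, with a made-up rep threshold and the same invented (hours, number dead) layout as the earlier example: a test with plenty of reps where the dead count never once dips anywhere is worth a second look.

```python
# Hypothetical sketch: flag a test as suspiciously clean. The threshold and
# the (hours, number dead) observation layout are invented for illustration.

def never_dips(observations):
    """True if the dead count never decreases between observations."""
    dead = [d for _, d in observations]
    return all(a <= b for a, b in zip(dead, dead[1:]))

def looks_too_perfect(test_counts, min_reps=8):
    """With enough reps, zero reversals anywhere merits a second look."""
    return len(test_counts) >= min_reps and all(
        never_dips(obs) for obs in test_counts.values()
    )

# Ten reps, every count marching neatly upward, not a single reversal:
test = {f"rep_{i}": [(4, 2), (24, 6), (48, 10)] for i in range(1, 11)}
print(looks_too_perfect(test))  # True: almost too tidy for live bugs
```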