My kids have developed a love for the show MythBusters. I can’t blame them. The show is well done, the experiments are often captivating, the personalities are engaging, and the questions they come up with are genuinely original.
For example, one recent episode put a few common ‘wise’ sayings to the test. Among them was the proverb that “You can’t polish a turd.”
Adam and Jaime set out to test this theory. Jaime intended to use actual polish, and Adam learned an old Japanese art involving polishing dirt into shiny smooth spheres using only water and elbow grease. Jaime even considered the diets of the animals in the “selection process” and chose a carnivore’s droppings because of the assumed “quality” of the material.
The end result? Myth busted. You can polish a turd.
There’s not much you can’t spin to sound good. You don’t even have to outright lie. The way we tell the truth, and how much of the truth we tell, can both contribute to a shiny result.
I immediately think of the various kinds of data and statistics I look at in my office job. Much like their corporate counterparts, our military units track all sorts of information to gauge the quality and quantity of whatever we’re doing. That can be really helpful.
But it offers a tempting opportunity. We can put a shine on poop and sell a lie to those above us.
What are the dangers of how we use data and metrics?
First, we can make something bad look good.
Our unit uses a system to track how many events aircrew accomplish each month, with a set quota based on experience level. Every crewmember gets checked to see who is staying current. The point of the program is to ensure commanders know the readiness of their aircrews. The guidance even specifies that the data are not meant to be used as a report card or grade.
So of course we use it as a grade, and everything becomes focused on getting good numbers instead of actually preparing aircrews for duty. We have people running training events designed only to make the numbers look good instead of getting crews ready for their mission in the real world. The numbers that get reported look great, at the expense of the aircrew expected to show up for a flight ready to do their best.
Similar to that, I recall an inspection of our programs where the entire squadron scrambled to put a standardized cover and spine on every binder in the building. The logic was (and still is) that if things look good at first glance, inspectors don’t dig as deep or ask as many questions. So we put pretty covers and spines on binders that hadn’t been reviewed in years, despite a requirement to review them annually. I questioned that logic, but the unit gambled that no one would check those particular binders, and they won that bet.
So what’s the answer? Focus efforts on doing the job, not on counting jobs done. Let the data serve its purpose. If it shows a problem, fix the problem, not the numbers.
Second, we might manipulate the data to suit our needs.
There are leaders who care about the mission and their people, and there are managers who care about the results they get from the mission and the people. The former make decisions based on the perceived best interest of the entire organization. The latter act based on what creates the best data for their area of responsibility, regardless of impact to the rest of the organization.
It’s really easy to tell when you’re dealing with one or the other.
If I care about a particular program or system only when I’m in charge of it, I fall into the selfish manager camp. Good results for the program make the manager look good too, so managers like this focus their effort on whatever sets them up for success. When something beneficial to the organization might hurt their own results, they shoot the suggestion down.
The easiest way to tell which type you’re dealing with is by witnessing a change in responsibility or authority. If all their priorities shift when they change positions, if what once got shot down now gets approved to make the new program results better, then *cue Jeff Foxworthy* they might be a manager.
Leaders care about doing what’s best for all concerned, even when it doesn’t benefit them, even when it hurts. They’re willing to sacrifice some results in order to keep the organization going. They’re willing to fight for what’s best regardless of whether it makes them look good. Of course they want good numbers and positive performance reports, but they want meaningful numbers and accurate performance reports more.
What do I do with this? I try to fight for what’s right even when it’s outside my area of responsibility. I care for the entire organization, not just my little corner of it. Caring about other people’s results usually leads to them returning the favor, in my experience. And that makes a healthier unit.
Finally, we may assume the data tells the whole story.
Changes in data help us identify trends. If production slips, we know, because we track how many things we produce. If accuracy slides below standards, or if we’re missing something important, the data are often the first indication.
As much as I rant about counting beans and our reliance on tracking numbers, there’s a reason we do it. Generally, it works great. The trouble is, some situations are unique, and it’s hard to capture everything in a number.
We had a student come through who dominated every phase of the course. From the beginning, he made his goal clear. “I’ve been at the top of every class so far in the military, and that’s important to me. I want to do that here.”
Out of, say, 600 questions in the academic course, he got one wrong. His marks were good on all the simulations. He went into his flight training with the same vigor and got marked well above standards in a couple of key areas. On his evaluation at the end of training, he received no discrepancies or markdowns, and got marked for “superior performance” in his main aircrew duty.
All those results go into a worksheet with a formula that figures an overall score. The trick is, almost everyone earns the same grades on sims and flights, so his points there were no better than anyone else’s who did decent work. And the eval grade that feeds the formula only counts the baseline grade an average student receives; it didn’t factor in his superior-performance marks at all.
So the math worked out to put him just below the cutoff for Distinguished Graduate, even though he literally did everything right. We could look at other students and see a clear distinction between his performance and theirs. The data are usually reliable, but in this case, the data were misleading.
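To make that concrete, here’s a toy sketch in Python. Every weight, grade, and cutoff below is made up, not our actual worksheet; the point is the mechanism. When most inputs are identical for every student and the superior-performance marks count for nothing, the one input that does vary can’t move the total enough to clear the line.

```python
# Toy model of a course scoring formula. All weights, grades, and the
# cutoff are hypothetical, chosen only to illustrate score compression.

def overall_score(academics_pct, sim_grade, flight_grade, eval_grade):
    """Weighted average of the course phases (hypothetical weights)."""
    return (0.40 * academics_pct
            + 0.20 * sim_grade
            + 0.20 * flight_grade
            + 0.20 * eval_grade)

DG_CUTOFF = 95.5     # hypothetical Distinguished Graduate line
COMMON_GRADE = 92.0  # nearly every student earns this on sims, flights, and evals

# "Superior performance" marks never enter the formula, so the only
# input that differs between students is the academic percentage.
standout = overall_score(599 / 600 * 100, COMMON_GRADE, COMMON_GRADE, COMMON_GRADE)
decent = overall_score(590 / 600 * 100, COMMON_GRADE, COMMON_GRADE, COMMON_GRADE)

print(f"standout: {standout:.2f}, DG: {standout >= DG_CUTOFF}")  # 95.13, False
print(f"decent:   {decent:.2f}, DG: {decent >= DG_CUTOFF}")      # 94.53, False
```

In this toy version, one missed question versus ten works out to just over half a point of separation, and the near-perfect student still lands under the line. The formula isn’t wrong, exactly; it just can’t see what made him exceptional.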
If we just look at numbers and don’t put thought into what they may or may not be telling us, we’ll walk away with an acceptable snapshot of the organization’s performance. But every so often, we’ll be wrong.
I know, this last bit doesn’t really fit the theme of polishing a turd. It’s more like Tolkien’s turn on the old proverb: “All that is gold does not glitter; not all those who wander are lost.” Sometimes what looks like poop might need a touch of attention and thought, maybe a quick polish, to find the gold hidden beneath the surface.
Like anything, data are fantastic when used properly, with integrity and care. The fact is, we can polish turds.
The question is, who wants to?