New Predator Control Review Exposes Flaws
Review is flawed as well
by Cat Urbigkit, Pinedale Online!
September 5, 2016
The headlines proclaimed the importance of a new research paper. KRWG reported, "Study debunks theory that killing predators reduces livestock losses." National Geographic ran with this headline: "The case for mass slaughter of predators just got weaker: A new study found that there’s little evidence that lethal predator control does anything to help ranchers." The piece was accompanied by a feature photo of four bloody wolf pelts hanging off the bed of a pickup truck.
Hogwash, once again. The new "study" was actually a literature review conducted by three researchers who examined published, peer-reviewed papers on predator control in North America and Europe. The review is entitled "Predator control should not be a shot in the dark," by reviewers Adrian Treves, Miha Krofel, and Jeannine McManus, and was published in Frontiers in Ecology and the Environment, the journal of the Ecological Society of America.
Treves’ Bias
I suspected the eventual outcome of the review simply from a few alarming statements in the review’s second paragraph: "Controversy and uncertainty about predator control generally persisted for decades in the absence of convincing evidence. Resolving this controversy will help to restore populations of predators and other species in largely undisturbed ecosystems as well as in more developed landscapes with people and domestic animals." That statement ("restore populations of predators") has the sound of a predetermined outcome in opposition to lethal control of predators.
I read papers written by primary reviewer Adrian Treves with a critical eye, since other papers he has authored seem agenda-driven. I glanced through this new review and noted that Treves cited one of his previous papers that didn’t say what the review claimed. The review claimed that lethal intervention is implemented "days after a predation event has occurred, sometimes far from where the attack occurred." But when I went back to the paper cited, I found that Treves had stated that agency action usually took place "near depredation sites" and that field investigations typically followed complaints "within 48 hours." Not exactly the "days later and far from the attacks" scenario described in the review.
In that same paper, Treves was critical of Wisconsin for providing compensation for owners of hunting dogs that are killed by wolves, stating that "Compensating owners for depredations on dogs running free on public land may encourage a practice that has negative ecological consequences (Bowers 1953)." That Bowers citation isn’t from a scientific paper but is actually an article ("The Free-Running Dog Menace") published more than 60 years ago by the associate editor of a state wildlife magazine, Virginia Wildlife. Things have changed substantially in 60 years, and to compare most of Wisconsin’s hunting dogs with the free-ranging dogs of Virginia 60 years ago isn’t a fair assessment.
The Review
After conducting a Google Scholar search for peer-reviewed papers on predator control, the reviewers excluded all papers that did not meet their standard for "experimental" design. That meant discarding any study that lacked a control (areas of no treatment), or in which there might have been "bias in sampling, treatment, measurement, or reporting." Papers were thrown out for a variety of reasons:
• they used sheep flocks of varied size instead of all the same size;
• they included captive predators; or
• they relied on livestock producers self-reporting their losses, or sharing their perceptions of the effectiveness of a control technique.
It appears that the reviewers discarded the majority of papers that disagreed with their pre-determined conclusions, and retained only a handful, most of which supported those conclusions.
In the end, the reviewers scrutinized just 12 papers, and only two of those met the "gold standard" for experimental design.
Review Based on Wrong Standard
The fact is that most predator control research is not conducted under an experimental design; it instead consists of reports of observations of actions and reactions under field conditions. The reviewers noted: "Often well-intentioned and highly competent researchers encounter flaws in research design because of inescapable challenges presented by field conditions." But that didn’t stop the authors from recommending that lethal control efforts be suspended "until gold standard tests are completed."
If we waited for gold standard research before taking action, we simply wouldn’t manage wild animal populations at all. The reviewers’ insistence that research must be experimentally designed to be valid is ludicrous. Wildlife research is primarily based on descriptive or observational studies – cases in which researchers cannot exercise tight control over every factor of an experiment (an impossibility in natural experiments and observational studies).
Although field research efforts may suffer from design flaws, I agree with scientists such as C.K. Catchpole, who wrote in 1989, "Most hypotheses are tested, not in the splendid isolation of one finely controlled ‘perfect’ experiment, but in the wider context of a whole series of experiments and observations. Surely a much more valuable form of validity comes from the independent repetition of experiments by colleagues in different parts of the world."
Douglas H. Johnson of the USGS Northern Prairie Wildlife Research Center authored a 2002 paper on "The Importance of Replication in Wildlife Research" in the Journal of Wildlife Management that describes the situation for wildlife researchers: "Wildlife ecologists sometimes face severe difficulties meeting the needs of control, randomization, and replication in manipulative experiments. Many systems are too large and complex for ecologists to manipulate."
Johnson’s resolution isn’t to throw out all research that fails to reach experimental design standards as the reviewers did, but to conduct studies of any phenomenon in different circumstances, with different methods, and by different investigators, a process known as metareplication.
Johnson wrote: "Metareplication involves the replication of studies, preferably in different years, at different sites, with different methodologies, and by different investigators. Conducting studies in different years and at different sites reduces the chance that some artifact associated with a particular time or place caused the observed results; it should be unlikely that an unusual set of circumstances would manifest itself several times or, especially, at several sites. Conducting studies with different methods similarly reassures us that the results were not simply due to the methods or equipment employed to get those results. And having more than one investigator perform studies of similar phenomena reduces the opportunity for the results to be due to some hidden bias or characteristic of that researcher."
Bingo! Many of the research papers thrown out by the reviewers examining predator control would serve as metareplication. As Johnson wrote, "although independent studies of some phenomenon each may suffer from various shortcomings, if they paint substantially similar pictures, we can have confidence in what we see."
What we often see is that some non-lethal methods of control are very effective, but all fail at some point. That doesn’t mean we should stop using them. We also often see that lethal control is the only effective method to stop a specific predator from killing livestock at a given location. As long as predators and livestock exist together, there will be conflict between the two. Some livestock will be killed, and some predators will be killed as well.
But for all the flaws in observational study design, wildlife managers must be doing something right: they have recovered endangered species, and now manage for over-abundance of many wildlife populations, including predator populations.
But the reviewers had a different view, concluding, "we believe the science of predator control lacks rigor generally – the resulting uncertainty about the functional effectiveness of killing predators should guide evidence-based policy to non-lethal methods until gold standard tests are completed." If gold standard experimental-design tests are the measure, lethal control would never occur.
The reviewers did find 12 papers they believed were well enough designed to be worth examining, although they found flaws with all but two. The seven lethal control papers actually examined by the reviewers included placing poison-filled collars on sheep, placing strychnine-laced baits to reduce wolf depredation on cattle, and trapping, snaring, fumigating, and shooting of coyotes and wolves.
The non-lethal control papers included chemical sterilization of coyotes, the use of light and sound devices to deter predators, the use of baits with an aversive chemical, the use of sheep herders, and utilizing livestock guardian dogs.
A press release from the University of Wisconsin-Madison (where primary reviewer Treves is affiliated) quoted him: "The majority of lethal methods appeared to waste time and resources, and threaten predators and livestock needlessly," Treves says. The release noted that the authors found non-lethal methods of predator control to be generally more reliable and effective in preventing carnivore predation on livestock.
And while the reviewers suggest that predator control policy should be consistent with "law, scientific evidence, and ethical standards of society," there was no mention of practicality for use in field conditions or economic feasibility. As expected, the review also failed to mention that various methods of non-lethal control are often utilized by livestock producers before lethal control is initiated. Livestock producers know the reality of the situation: When a predator kills your livestock, and then that predator is killed, that predator won’t be killing any more of your stock.
Related Links
Review - Read the full Treves paper here
Wolf Watch - by Cat Urbigkit, Pinedale Online!