Track Record.

Hugo is the leader in crowd-sourced jury testing and data analytics. Building on research begun at Harvard, with methods published in the New England Journal of Medicine and findings featured on NBC News, in the Wall Street Journal, and on NPR, our team uses randomized experiments and robust statistical methods to predict the best strategy for a case. Over nearly a decade of scholarly and commissioned research, we have tested dozens of cases with over 10,000 respondents.
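For the technically inclined reader, here is a minimal sketch, in Python, of the kind of randomized test this describes: respondents are randomly split between two case presentations and their verdict rates are compared. Every number, sample size, and variable name below is invented for illustration, not a result from our studies.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 500  # hypothetical respondents per randomized arm

    # Simulated verdicts: 1 = plaintiff verdict, 0 = defense verdict
    strategy_a = rng.binomial(1, 0.48, n)
    strategy_b = rng.binomial(1, 0.57, n)

    # Compare win rates across the two arms with a chi-squared test of the 2x2 table
    table = [[strategy_a.sum(), n - strategy_a.sum()],
             [strategy_b.sum(), n - strategy_b.sum()]]
    chi2, p, dof, expected = stats.chi2_contingency(table)
    print(f"Win rate A: {strategy_a.mean():.2f}, B: {strategy_b.mean():.2f}, p = {p:.3f}")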

Anchoring

A large body of research has shown that when plaintiffs make large damages demands, these “anchors” can powerfully inflate jury awards, even when the amount is unreasonable. We tested whether there is an upper limit. Essentially, no: any credibility cost to the win rate is small compared with the much larger recoveries overall. We also tested three different defense responses and found that countering with a more reasonable amount is a safe approach, because it did not create a concession effect. In a follow-up paper, we tested whether breaking damages awards into time units of suffering (so-called “per diems”) could create a similarly large anchoring effect on damages, and were surprised to find that it did not. The strategy did, however, improve plaintiff win rates.
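As an illustration of how such an anchoring experiment might be analyzed, the sketch below compares simulated award amounts under a low versus a high demand, using a rank-based test that is robust to the heavy right tail typical of damages data. The distributions and dollar figures are hypothetical assumptions, not findings.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    # Simulated awards (log-normal, since damages data tend to be heavy-tailed)
    awards_low = rng.lognormal(mean=15.0, sigma=1.0, size=400)   # e.g., lower demand
    awards_high = rng.lognormal(mean=15.8, sigma=1.0, size=400)  # e.g., higher demand

    # One-sided rank test: are awards larger under the higher anchor?
    u, p = stats.mannwhitneyu(awards_high, awards_low, alternative="greater")
    print(f"Median award, low anchor:  ${np.median(awards_low):,.0f}")
    print(f"Median award, high anchor: ${np.median(awards_high):,.0f} (p = {p:.4f})")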

Credibility

Jurors often rely less on the substance of what an expert says and more on peripheral cues about her credibility. With “blinding,” we have developed a particular model for preserving witness objectivity and telegraphing that fact to jurors, and our evidence shows that jurors dramatically reward litigants who use it. In this work, we also pioneered a continuous-response method that measures juror reactions at half-second intervals.
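For readers curious about the mechanics, the sketch below shows one plausible way to aggregate continuous-response (“dial”) data sampled at half-second intervals into a mean trace per condition. The column names, conditions, and readings are assumptions for illustration, not our actual data format.

    import pandas as pd

    # Hypothetical long-format dial data: one reading per respondent per half second
    df = pd.DataFrame({
        "respondent": [1, 1, 2, 2, 3, 3],
        "condition":  ["blinded"] * 4 + ["unblinded"] * 2,
        "t_seconds":  [0.0, 0.5, 0.0, 0.5, 0.0, 0.5],
        "rating":     [55, 60, 50, 58, 45, 44],
    })

    # Mean credibility rating at each half-second tick, by condition
    trace = df.groupby(["condition", "t_seconds"])["rating"].mean().unstack("condition")
    print(trace)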

A client needed to determine whether the plaintiff could enhance his credibility by admitting some responsibility for a bad outcome. Our test of both strategies revealed that it was better to simply hit the other side hard. We also noticed that the plaintiff’s $5M damages demand might be too small, so we tested another variation with $25M instead. Sure enough, the value of the case more than doubled. At trial, the plaintiff recovered over $17M.

Exposure

Clients often ask us to evaluate their exposure. For example, in a class action against a leading consumer electronics company, in-house counsel wanted a case evaluation, and the process helped the client determine how much time and money to invest in defending the case. In another case, involving a police shooting captured on video, we validated a custom scoring index for voir dire on the eve of trial, but our data predicted that the overall win rate would remain unfavorable; settlement was not feasible, and the trial result was indeed unfavorable. Another client took our scientific report to mediation. The mediator and opposing counsel were impressed with the rigorous approach and had no alternative basis for valuing the case, so it settled very close to the amount we estimated.
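A scoring index like the one mentioned above can be built in many ways; one common pattern is a logistic model fit on mock-juror data and then used to score prospective jurors. The sketch below, with invented attitude items and simulated verdicts, illustrates that pattern only; it is not our validated index.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)
    X = rng.normal(size=(300, 3))                  # three hypothetical attitude items
    logit = X @ np.array([0.8, -0.5, 0.3]) - 0.2
    y = rng.binomial(1, 1 / (1 + np.exp(-logit)))  # 1 = pro-plaintiff verdict (simulated)

    # Fit the index on mock-juror data, then score a prospective juror's answers
    model = LogisticRegression().fit(X, y)
    prospective_juror = np.array([[0.4, -1.1, 0.2]])  # hypothetical voir dire responses
    prob = model.predict_proba(prospective_juror)[0, 1]
    print(f"Predicted pro-plaintiff probability: {prob:.2f}")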

Instructions

Testing alternative jury instructions can help courts pick the right rule, and can also determine whether an error in a particular case made a difference. In 2012, one state adopted a new jury instruction on eyewitness testimony. In work featured on NPR, we tested the new instruction against the prior one and found that, while it helped defendants, it did not help jurors identify unreliable testimony in particular. Similarly, we tested a circuit split between two ways of instructing jurors on the meaning of “quid pro quo” corruption and found that neither mattered. On the other hand, when we tested a case both with and without erroneously admitted testimony, we found a huge impact. Likewise, in work featured in Slate and on NBC News, we found that exposure to pretrial publicity can strongly affect verdicts, and that asking jurors to “self-diagnose” whether they can be fair is no remedy.

Merits

Beyond jury behavior, we have also conducted a range of studies on the merits. In one paper, we tested 18 scenarios to determine public expectations of privacy and the impact of hindsight on those expectations. To determine whether a disclosure would have made a difference in a consumer fraud case, we tested consumer vignettes both with and without the withheld information. Here again, real experiments allow causal inference in a way that no amount of speculation, or even armchair expert testimony, can.
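A with/without vignette test of this kind reduces to a simple two-arm comparison. The sketch below uses hypothetical counts and a two-proportion z-test; the numbers are made up, and the statsmodels call is one reasonable analysis choice among several.

    import numpy as np
    from statsmodels.stats.proportion import proportions_ztest

    # Hypothetical counts of respondents who would still have purchased
    count = np.array([118, 167])  # with disclosure, without disclosure
    nobs = np.array([300, 300])   # respondents per randomized arm

    stat, p = proportions_ztest(count, nobs)
    print(f"Purchase rate with disclosure: {count[0]/nobs[0]:.2f}, "
          f"without: {count[1]/nobs[1]:.2f}, p = {p:.4f}")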

“HUGO Analytics provides a cost-effective and efficient alternative to traditional mock jury or focus group exercises. The resulting data were robust enough to allow us to reach significant conclusions regarding case themes and strategy.”

David Stortz, Drinker Biddle