Keeping Litigation out of Arbitration

Our Approach

The following articles provide detailed introductions to the philosophy and methodology of ProActive arbitration.

Scientific Support for Impartiality Practices

 

1. Common Objections and Reservations

 When I talk to people about efforts to enhance the impartiality of arbitrators and the arbitral process, the most common reservations fall into two basic categories. First, many arbitrators believe that their cognitive heuristics and biases are not powerful or impactful enough to merit such sustained energy in pursuit of mitigation. We would prefer to think that our powers of judgment are rational and fair enough to do our jobs well without so much extra effort. 

Second, while many people concede that their judgments may be influenced by unconscious processes and cognitive heuristics outside of their awareness, some are quick to chalk this up to an immutable “human nature,” and to doubt the efficacy of intervention. What proof do we have that we can, with any degree of predictability or reliability, alter our cognitive heuristics for the better?    

This article outlines some of the extensive scientific research that has been done to demonstrate 1) that our judgments and decision-making processes are influenced significantly by cognitive heuristics operating outside our awareness, and 2) that our mental models, associations, and biases are malleable, and can be significantly altered by a wide range of tools and practices.   

       

2. Research on Common Cognitive Errors and Biases in Decision-Making

Human judgment and decision-making have long been studied across many academic disciplines, including behavioral psychology, neuroscience, economics, and political science. Before the 1970s, much modeling of human decision-making was based on “rational choice theory,” which assumes that human beings are rational agents who make decisions based upon their self-interest, and that they can intuit and articulate their preferences coherently and consistently. 

One of the most influential and significant disruptions of this approach to understanding human judgment came from the psychologists Daniel Kahneman and Amos Tversky, whose landmark paper “Judgment Under Uncertainty: Heuristics and Biases” was published in Science in 1974 (work for which Kahneman was later awarded the Nobel Prize in Economics). In this paper, and in subsequent work over the past five decades, the researchers published results from scores of experiments demonstrating the numerous ways in which common cognitive heuristics lead us into mistaken assumptions and adversely affect our judgment.[1]  

A. Law of Small Numbers: One of the most comprehensively documented cognitive errors is the “law of small numbers.” Kahneman illustrates the phenomenon with an example from the world of education. The Gates Foundation, searching for successful educational models to replicate and fund, noticed that a disproportionately large share of the best schools were small ones. Many resources were then devoted to funding small schools and breaking up large ones. Unfortunately, a disproportionately large share of the worst schools were also small ones! Small schools do not lead to better or worse outcomes. They are just more likely to skew toward extreme results, for the simple reason that they represent smaller sample sizes.[2]
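The statistical effect behind the small-schools example is easy to reproduce. The following sketch is a hypothetical simulation (not drawn from Kahneman’s data, and the school sizes and cutoff are illustrative assumptions): every simulated student gets the same 50/50 chance of passing, so no school is genuinely better than any other, yet small schools still produce “extreme” pass rates far more often than large ones:

```python
import random

random.seed(0)

def share_extreme(n_students, n_schools=5000, cutoff=0.6):
    """Simulate schools whose students all draw from the SAME 50/50
    pass distribution, and count how often a school's pass rate
    looks 'extreme' (at or above the cutoff) purely by chance."""
    extreme = 0
    for _ in range(n_schools):
        passes = sum(random.random() < 0.5 for _ in range(n_students))
        if passes / n_students >= cutoff:
            extreme += 1
    return extreme / n_schools

small = share_extreme(20)    # hypothetical small school: 20 students
large = share_extreme(500)   # hypothetical large school: 500 students
print(small, large)
```

Even though every school draws from the identical distribution, the 20-student schools cross the 60% cutoff a substantial fraction of the time, while the 500-student schools almost never do; this is precisely the pattern that misled observers of the real school data.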

Most people learn early in life that larger sample sizes produce more reliable findings than smaller ones. But we are not very good at applying this knowledge to our daily lives. We are always looking subconsciously for causal connections and reasonable explanations, trying to make meaning out of the innumerable facts that come our way. We try to extract clear lessons from our own experiences, which may seem like ample evidence to ourselves, but which really form a very small sample in most cases. An arbitrator with a former career litigating construction cases has undoubtedly assembled a mental map with many cognitive heuristics or “rules of thumb” about the way things work in the construction industry. But many of those “rules of thumb” have been gleaned from what is really a small sample of cases (all of which include the arbitrator himself) and should not be applied to the whole industry—especially as it is the arbitrator’s duty to treat each case as unique (see: The Blank Slate).   

The Law of Small Numbers is not an affliction of the “man on the street” alone, but a cognitive error that has been shown to influence even trained statisticians and researchers who design experiments for a living. As arbitrators, we must beware of giving undue credence to the explanations, causal connections, and accepted truths offered in expert testimony, or gleaned from our own experiences with similar cases. We must become practiced at asking ourselves whether such explanations rely on small sample sizes, and whether they are the result of a strong predisposition to see causality where there may be only randomness, as in so much of life.  

B. Attachment to a Single Explanation: This natural human desire to find coherent causes and explanations amidst uncertainty also encourages us to see complex situations in overly simplistic terms. When we have some confidence that one explanation makes sense, we intuitively discount all the other explanations. This is known as the Discounting Principle, first demonstrated by the renowned social psychologist Harold Kelley in 1971 and confirmed by many experiments in the subsequent decades.[3] The problem with discounting additional explanations, of course, is that life is complex; events and behaviors are very often caused by many intertwined and competing forces.   

Furthermore, the single explanation we are most likely to rely on is not necessarily the one suggested by the facts of the case at hand, but rather the one we have found to be “true” or convenient in the past. Too often, we are swayed by the intuitions and pre-judgments suggested to us by our ingrained and simplified mental models of the world. Regarding the way our mental models influence our perceptions of reality, educational psychologist Efraim Fischbein writes:  

“the model very often imposes its constraints on the original and not vice versa! Consequently, a model is not simply a substitute, an auxiliary device (more simple, more familiar, more accessible). It is another reality, jealous of its independence, and often insufficiently permeable to genuine intimations of the original.”[4] [emphasis mine]

We are naturally inclined to overvalue those explanations that have been operative for us in our past experiences, to discount additional explanations that might be working in tandem, and even to mold our perception of the case at hand to fit the ingrained explanations in our mental models, discounting or ignoring contradictory evidence and circumstances. 

These are just two revealing cognitive errors (both widely documented by decades of research) that begin to demonstrate the ways in which the human brain is not naturally impartial. A former career in litigation or the judiciary does not immunize arbitrators against these common cognitive biases. It is imperative that arbitrators dispense with the idea that they are already impartial, or “impartial enough,” and avail themselves of new methods and tools for guarding against such widespread cognitive errors.          

 

3. Research on the Existence and Impact of Implicit Bias 

Another cognitive bias whose influence on human judgments has been thoroughly documented is known as In-Group Bias. In-group bias occurs when an individual’s allegiance to a group is activated and he shows favor or preferential treatment—consciously or unconsciously—to another member of that group. We all belong to countless groups or communities—families, neighborhoods, nations, ethnicities, religions, alma maters, professional associations—any of which can activate an In-Group Bias. In the context of an arbitration, this might look like giving more credence to an expert witness who graduated from your alma mater, harboring skepticism about the capabilities of an advocate who belongs to an organization you dislike, or listening less intently to the testimony of a witness of an ethnicity different from your own. 

 While rejecting any deliberate In-Group Bias is universally understood to be an essential part of an arbitrator’s duty, I am not familiar with a single arbitration organization that has directly acknowledged or responded to the substantial body of research demonstrating the pervasive effects of unconscious In-Group Bias operating outside the sphere of our awareness. If the presence of such implicit bias has been documented, shouldn’t a profession whose entire function is to provide impartial judgments make some effort to address these findings?     

 Scientific interest in implicit bias has increased dramatically in the past decade. Studies have been undertaken to understand the existence and effects of implicit bias, especially with regard to race, ethnicity, and gender, in a wide range of industries, from medicine to business to academia to law enforcement. Whether studies focus on race, gender, or some other identity category, what is clear from the mounting evidence is that, in the aggregate, people’s judgments and actions express in-group biases that they do not profess to hold. The qualifier “in the aggregate” is important here, because an implicit bias is not necessarily a rigid belief an individual holds about an out-group, but rather an attitude prevalent in a cultural or societal context that can become activated in a given moment or context to influence people’s snap-judgments and behaviors. One extremely useful resource for beginning to understand the depth and scope of scientific research on implicit bias is the regular report published by the Kirwan Institute of The Ohio State University, which aggregates many of the implicit bias studies published in peer-reviewed journals in the preceding years.[5]      

While many implicit bias studies do not rely on any advanced technology, our understanding of implicit bias has been aided by the widespread use of functional magnetic resonance imaging (fMRI) in neuroscientific studies in the 21st century. An fMRI maps brain activity by measuring blood flow to different, precise regions of the brain. In this way, we have been able to measure different levels and types of brain activity in test subjects as they respond to stimuli associated with both in-group and out-group members. Studies like these give us important clues for understanding how we process interactions with out-group members differently, as well as data for evaluating the impact of various attempts at attenuating such biases.[6] Though fMRI is still a relatively new technology, significant advances in the design of fMRI experiments and the interpretation of their results have been achieved in the last decade.  

There have also been increasing numbers of implicit bias studies focusing specifically on the judiciary and the legal system. To name but a few examples: Law Professor and former U.S. District Judge Mark Bennett, who has written and lectured widely on the topic, recently co-authored an article detailing the results of an empirical study investigating the implicit biases that sitting federal and state judges hold with regard to Jews and Asian-Americans.[7] Jerry Kang is a legal scholar at the University of California, Los Angeles who has researched, published, and lectured widely on the subject of implicit bias, with specific attention paid to implications in the fields of law and criminal justice. His website provides a useful collection of publications for anyone embarking on an investigation of the ways in which scientific findings are transforming old assumptions about the way justice is carried out in our society.[8]   

 

4. Can We Really Fix Our Biases and Cognitive Errors? 

Confronted with ample evidence attesting to the ubiquity of cognitive errors and implicit biases, many arbitrators are still reluctant to alter their outlook or approach to their profession. I believe much of this reluctance stems from the idea that to acknowledge the existence of cognitive errors and biases in oneself is to admit to a moral failing, and to undermine one’s professional reputation, if not the profession of arbitration itself. Expression of a bias is often equated with being a bad person, and freedom from bias is often equated with being a good person. Such a mindset is counterproductive to the arbitral process, as it encourages arbitrators to deny any manifestation of bias in their thinking. All humans commit cognitive errors. All humans make snap judgments based upon past experiences or faulty assumptions. All humans are influenced by their environments and develop biases as a result. The arbitrator’s goal is not to scrub his consciousness clean of all biases or cognitive errors, but rather to establish procedural protocols for mitigating such biases, and, through self-reflection and habitual exercises, to become better acquainted with the workings of his own mind and the ways that particular errors manifest themselves.

In addition to fears of being judged unethical or unfit, I believe resistance to change also stems from pessimism about our capacity to meaningfully mitigate cognitive errors and biases. The skeptics posit: If a cognitive error or bias is truly operating unconsciously, how can one be expected to identify it, much less consciously combat it? Whatever transpires in the human mind beyond our conscious control is simply too murky and too hard to measure or evaluate. We can’t go around policing people’s unconscious minds. All we can do is try our best to be neutral.

So goes a typical objection. On a single point, I am in complete agreement: All we can do is try our best. But what scientific research actually tells us is that our mental models and our unconscious biases are malleable and susceptible to targeted interventions. Thus “trying our best” actually means making a rigorous and consistent effort to improve the impartiality of our mental models through active strategies (both on the individual and the procedural level) which address common cognitive errors and implicit biases.

In their most recent roundup of scientific research on implicit bias, the Kirwan Institute affirms that one of the “key characteristics” of implicit biases is that they are malleable: 

“The biases and associations we have formed can be ‘unlearned’ and replaced with new mental associations.”[9]

Another stellar introduction to the scientific support for the malleability of mental models and the efficacy of targeted interventions is provided by the social psychologist Jennifer L. Eberhardt, PhD, in her recent book Biased. In addition to her decades of research, Eberhardt has served as an advisor to police forces across the country implementing bias-mitigation efforts. 

To illustrate the malleability of our mental models, Eberhardt summarizes the findings of a study on professional taxi drivers. Compared to the brain scans of a control group, the brain scans of professional taxi drivers showed striking enlargements of the brain regions crucial to spatial memory and navigation (the posterior hippocampal regions). Many studies like this one, looking at a wide variety of tasks and regions of the brain, have led researchers to conclude that we can indeed take specific actions that produce significant changes in specific brain functions.[10]     

5. Research on the Benefits of Priming Exercises

The Contact Hypothesis, first published by psychologist Gordon Allport in 1954, posited that, under favorable conditions, contact between members of different in-groups reduces negative biases towards the out-group. The hypothesis has been much studied and generally validated in the intervening decades, with much investigation and refinement of which contexts constitute “favorable conditions.” Recently, there have been many studies of “simulated contact,” whereby contact between different in-groups occurs by means other than face-to-face meeting, such as online contact, imagined contact, and virtual contact in a simulated reality or video game. Results have generally indicated that simulated contact is a highly effective means of reducing cognitive biases. In fact, it can often be more effective than face-to-face contact, because many unpredictable variables of in-person contact can be better controlled. In a recent meta-analysis of over 70 studies involving “imagined positive interactions with an out-group member,” psychologists Eleanor Miles and Richard Crisp found that these simulated interactions “significantly reduced intergroup bias across all four dependent variables”—the variables being: attitudes, emotions, intentions, and behaviors.[11] Jerry Kang has written frequently about the outsize influence that electronic media consumption has on implicit biases, by furnishing us with “vicarious experiences with outgroups.”[12]

Fortunately, these simulated or vicarious experiences are variables over which we can exert a large amount of deliberate control. This is why ProActive Arbitration advocates that arbitrators deliberately diversify their media consumption. In the case of a specific cognitive bias that an arbitrator has identified as part of his own mental model, a Twitter or other social media feed can be curated to deliberately include examples running counter to that bias.

 

6. Research on the Benefits of Procedural Practices 

As stated previously, implicit biases should not be understood solely as rigid personal attributes or beliefs, but more broadly as social phenomena that may shift according to the context in which one finds oneself. For this reason, impartiality enhancement efforts cannot be limited to the realm of cultivating individual awareness and increasing one’s individual contact with out-groups. Efforts must also include making changes in the industry-wide expectations of arbitration itself, such as implementing procedural practices that mitigate opportunities for cognitive errors and biases, and cultivating a culture among arbitrators that values transparent dialogue about cognitive errors and biases. 

Psychologists Keith Payne and Heidi Vuletich refer to this social aspect of implicit bias as the “Bias of Crowds,” and stress that procedural mitigation efforts are often more effective than those targeting the individual.[13] Many of the procedural practices they advocate echo those advanced by ProActive Arbitration. These include slowing down the process to decrease reliance on the quick snap-judgments in which implicit bias is most likely to be activated, and acknowledging cognitive errors and biases as a “baseline” or default element of human judgment. Only by creating an environment in which cognitive errors and biases can be discussed transparently among the arbitrators on a panel will arbitrators be able to make strides in mitigating them.      

7. Research on the Benefits of Diversity on Panels

When the case is being heard by a panel rather than a sole arbitrator, ProActive Arbitration advocates assembling a group of arbitrators who possess a diversity of professional and life experiences. There is ample evidence suggesting that such diversity on a panel can play an important role in enhancing impartiality. A comprehensive overview of the relevant research can be found in scientist Scott E. Page’s recent book, The Diversity Bonus: How Great Teams Pay Off in the Knowledge Economy. As Page states explicitly on the first page, his overview does not take into account any ideological arguments based upon the desirability of equity or social justice, but focuses solely on the quantifiably demonstrable pragmatic benefits of cognitive diversity, of which identity forms only a part. Within the constraints of this project, he finds considerable support for the theory that diverse teams add value to cognitive undertakings across a wide variety of industries.[14]      

8. Conclusion  

While no article on a topic as broad as the “science of impartiality” can hope to be comprehensive, I hope I have provided you with a rough sketch of the contemporary terrain, in terms of what is generally known and agreed upon in the realm of cognitive errors and biases, and the sorts of impartiality practices that appear fruitful in light of recent research. Though there is still much to learn about how to best mitigate cognitive errors and biases, what seems already clear is that old assumptions about baseline neutrality no longer serve our profession or our society. ProActive Arbitration is an early attempt to move our industry in the direction of active engagement with the issues of cognitive errors and biases, and my hope is that, together, we will make great strides in enhancing the overall impartiality of our profession, so that we will move ever closer to our ideal of justice under the law.   

 

 

 

 


[1] The original article, “Judgment Under Uncertainty: Heuristics and Biases,” can be found in Science 185, no. 4157 (1974): 1124-31. Daniel Kahneman’s most recent book is Thinking, Fast and Slow. New York: Farrar, Straus and Giroux, 2011. For a highly readable overview of Amos Tversky and Daniel Kahneman’s partnership and its revolutionary impact on the study of human judgment, I recommend Michael Lewis’s The Undoing Project: A Friendship That Changed the World. London: Allen Lane, 2017.

[2] Kahneman, Thinking, Fast and Slow, pp. 117–118.

[3] See H.H. Kelley’s work on Attribution. For instance: Attribution in social interaction. Morristown, NJ: General Learning Press, (1971);  Attribution: Perceiving the causes of behavior. Morristown, NJ: General Learning Press, (1972); and “The processes of causal attribution,” in American Psychologist, 28, 107–128 (1973).  

[4] E. Fischbein, D. Tirosh, R. Stavy and A. Oster, “The autonomy of mental models,” in For the Learning of Mathematics 10(1), (1990), 24.

[5] The full report is available for free download on the Kirwan Institute website: http://kirwaninstitute.osu.edu/wp-content/uploads/2016/07/implicit-bias-2016.pdf. Staats, C., Capatosto, K., Tenney, L., & Mamo, S. (2017). State of the Science: Implicit Bias Review (2017 ed.). Columbus, OH: Kirwan Institute for the Study of Race and Ethnicity, The Ohio State University.

[6] For a good overview with many elucidating examples of specific fMRI studies, see: Molenberghs, Pascal, and Winnifred R Louis. “Insights From fMRI Studies Into Ingroup Bias.” Frontiers in Psychology vol. 9 1868. 1 Oct. 2018.

[7] See: Justin D. Levinson, Mark W. Bennett, and Koichi Hioki, Judging Implicit Bias: A National Empirical Study of Judicial Stereotypes, 69 Fla. L. Rev. 63 (2017). I also recommend Bennett’s article: Unraveling the Gordian Knot of Implicit Bias in Jury Selection: The Problem of Judge-Dominated Voir Dire, the Failed Promise of Batson, and Proposed Solutions, Harvard Law & Policy Review, Vol. 4, p. 149, 2010.

[8] A list of some of Kang’s lectures and publications can be found at: http://jerrykang.net/2011/03/13/getting-up-to-speed-on-implicit-bias/. In particular, I recommend: Seeing through Colorblindness: Implicit Bias and the Law, 58 UCLA Law Review 465-520 (2010) (with Kristin Lane); and Implicit Bias in the Courtroom, 59 UCLA Law Review 1124 (2012).

[9] Staats, C., State of the Science: Implicit Bias Review, p. 10.

[10] Eberhardt, Jennifer L. Biased: Uncovering the Hidden Prejudice That Shapes What We See, Think, and Do. Penguin, 2019.

[11] Miles, E., & Crisp, R. J. (2014). A meta-analytic test of the imagined contact hypothesis. Group Processes & Intergroup Relations, 17(1), 3–26.

[12] See, for example: Kang, Jerry, Bits of Bias (December 4, 2011), in Implicit Bias Across the Law, Justin Levinson and Robert Smith, eds., Oxford University Press, 2012; UCLA School of Law Research Paper No. 11-40. 

[13] Payne, B. K., & Vuletich, H. A. (2018). Policy Insights From Advances in Implicit Bias Research. Policy Insights from the Behavioral and Brain Sciences, 5(1), 49–56.

[14] Page, Scott E. The Diversity Bonus: How Great Teams Pay Off in the Knowledge Economy. Princeton University Press, 2017. 

 
Rikki Wrightlaw, arbitration