Expectancy Biases In Critical Thinking

Chapter 2: The Research Enterprise in Psychology

Critical Thinking for Weiten's Psychology: Themes and Variations, Briefer Version
Critical Thinking Lesson 2b: The Need to Control for Expectations

My name is Taylor and I suffered from terrible lower back pain for almost 10 years. About a year ago, I bought a Somnocare Sleep System, which contains a revolutionary new mattress that is like nothing else on the market today. Within a month after I began sleeping on my Somnocare mattress, my back pain had decreased to a point where I was able to go back to the gym and do step aerobics, which I love. Today, I'm a new woman, with no back pain. I can pick up my young daughter without any problem, and my husband tells me that I look years younger. All thanks to the Somnocare Sleep System!

How often have you heard stories like this one on television commercials or in magazine advertisements? Such stories are called "testimonials." A testimonial is an anecdote in which the merits of a product or service are attested to. Although testimonials seem to many people to be very compelling evidence for the value of a product or service, they actually are virtually worthless in this regard. This is because they are anecdotes and, thus, do not control for the effects of important extraneous variables.

One very important extraneous variable that testimonial evidence does not control for is the set of expectations that the testimonial-giver has for the product or service. These expectations may result in a "placebo effect." A placebo is an inert substance or an irrelevant activity that is presented as a treatment for a physical or psychological disorder. Because no active treatment is being given, a placebo should have no effect on the symptoms of a disorder. Nevertheless, it often has been found that, for at least some disorders, placebos do cause improvement. As you learned in Chapter 2, the placebo effect is the reduction of a disorder's symptoms caused by the belief that one is receiving a treatment. The variable that seems to underlie placebo effects is the expectation that one will begin to feel better when receiving what one believes to be a treatment. This expectation not only causes a person to feel subjectively as if they are getting better; it also has been shown to cause real, objectively measured bodily changes in some cases (see, for example, Russo, 2002). Thus, expectations can have powerful influences on our minds and bodies. This is why most scientific research on treatments for various disorders includes placebo controls: it is important to determine how much effect the participants' expectations are having on the symptoms of a disorder.
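
To make the logic of a placebo control concrete, here is a minimal sketch in Python. It is an illustration only: the group names and improvement scores are invented, not data from any real study. The point is that the placebo group's average improvement estimates how much change expectations alone can produce, so the treatment is judged against that baseline rather than against zero.

```python
# Illustrative sketch only: invented numbers, hypothetical groups.
# A placebo control lets us separate improvement caused by expectations
# from improvement caused by the treatment itself.

def mean(values):
    return sum(values) / len(values)

# Hypothetical drops in symptom severity (before minus after) per participant.
drug_group    = [12, 9, 14, 11, 10, 13]   # received the real treatment
placebo_group = [7, 6, 9, 8, 5, 7]        # received an inert pill, same expectations

placebo_effect  = mean(placebo_group)               # improvement from expectation alone
apparent_effect = mean(drug_group)                  # what a testimonial would report
specific_effect = apparent_effect - placebo_effect  # improvement attributable to the drug

print(f"Improvement with placebo alone:  {placebo_effect:.1f}")
print(f"Improvement with the drug:       {apparent_effect:.1f}")
print(f"Improvement beyond expectations: {specific_effect:.1f}")
```

Without the placebo group, all of the drug group's improvement would look like a treatment effect, which is exactly the mistake a testimonial invites.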

But it is not only the participants' expectations that must be controlled for when performing research. Researchers also have expectations that may affect the results of a study. In Chapter 2, the effects of researchers' expectations on a study's results were referred to as experimenter bias. A well-known example of experimenter bias is found in a research program in physics that took place at the beginning of the twentieth century. In 1903, soon after the discovery of X-rays, a French physicist by the name of René Blondlot discovered what he called N-rays (named after the University of Nancy, where he did his work). Blondlot reported that, like X-rays, N-rays are not visible, but he claimed to be able to detect their presence through their ability to increase the brightness of luminous objects (such as electric sparks). Blondlot and other researchers found that N-rays are emitted by the sun, flames, and many natural objects. And Blondlot stated that some substances, such as the fluid within the eye, absorb N-rays and then emit them later, which allows people to see better in darkened rooms. Most of these findings were replicated by a number of researchers in different laboratories. The evidence seemed conclusive, and many physicists were certain that Blondlot had discovered something very important.

But others doubted the existence of N-rays because they were unable to replicate Blondlot's findings. They pointed to a basic flaw in Blondlot's method of detecting N-rays: the observer judged by sight whether or not an object bombarded with N-rays had increased in brightness. In other words, the observations used to detect N-rays were highly subjective. When observations are subjective, expectations can have a large influence on them. In this case, if a researcher believed that N-rays were present, and expected that the N-rays would increase the brightness of objects, he or she might perceive an increase in brightness even when no change actually had occurred. It is noteworthy in this respect that most of the studies that found evidence for the existence of N-rays were performed in France, where Blondlot was a preeminent scientific authority. When he stated that N-rays existed, this was likely to have a strong influence on the expectations of other French researchers.

How could one test the hypothesis (a hypothesis is a speculation about the relationship between two or more variables) that researchers' expectations affected the perceived brightness of objects in N-ray studies? There are two things that one could do: (a) tell observers that N-rays were present when they actually were absent, and (b) tell observers that N-rays were absent when they actually were present. If observers saw an increase in the brightness of objects only when they believed that N-rays were present, regardless of whether or not they actually were, then it could be concluded that their expectations caused their observations to be mistaken. In 1904, an American physicist by the name of Robert Wood performed this experiment in Blondlot's laboratory:

N-ray experiments had to be carried out in a darkened laboratory. This gave Wood an opportunity to make several observations that proved Blondlot's judgements of brightness changes were a function of his beliefs, and not of the presence or absence of N-rays. In one experiment, Wood was to block an N-ray source by inserting a sheet of lead between the source and a card with luminous paint on it [Blondlot had found that N-rays could not penetrate lead]. Without telling Blondlot, Wood changed the experiment in one slight but vitally important way. He would indicate to Blondlot that the lead sheet was blocking the N-ray source when it really wasn't, or vice versa. If N-rays really existed, Blondlot's judgements of the brightness of the luminous paint should be a function of whether the lead screen really was between the card and the N-ray source and should have no relationship to whether or not he believed the sheet was blocking the source. [Wood found that if Blondlot] believed the screen was present (blocking N-rays), but it wasn't, he reported the paint to be less luminous. If he was told the screen was not present (allowing N-rays to pass), but it really was, he reported the paint to be more luminous. (Hines, 1988, p. 10).

This study showed that observers' expectations determined their judgements of brightness in N-ray studies. Thus, Wood concluded that the results of previous studies supporting the existence of N-rays were contaminated by experimenter bias. Blondlot, however, never gave up his belief in N-rays: "convinced until the end that N-rays were real, [Blondlot] pursued his research on the topic until his death in 1930" (Hines, p. 11). This shows that researchers' expectations can have such a large influence on their observations that they will reject even undeniable evidence that a particular observation is mistaken. In other words, once we have developed a belief based on a compelling personal experience, we often are very reluctant to give up this belief. This also is a problem in the case of placebo effects: a sick person who feels better after taking what amounts to a placebo often will swear that the treatment was effective, even when told that it had no active ingredients in it.
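
The logic of Wood's manipulation can also be expressed as a small simulation. The sketch below is my own illustration in Python, not Wood's data: it models an observer whose brightness judgements are driven entirely by what they are told about the lead screen, and then checks whether the reports track belief or reality.

```python
# Illustrative simulation only (an invented observer, not Wood's data).
# Cross what the observer BELIEVES with what is REALLY the case: if reports
# follow belief rather than reality, expectation is doing the work.

import random

def biased_observer(believes_rays_present):
    """An observer whose judgement is driven entirely by expectation."""
    return "brighter" if believes_rays_present else "dimmer"

random.seed(0)
trials = []
for _ in range(20):
    rays_actually_unblocked = random.choice([True, False])  # is the lead screen really absent?
    observer_told_unblocked = random.choice([True, False])  # what the observer is told
    report = biased_observer(observer_told_unblocked)
    trials.append((rays_actually_unblocked, observer_told_unblocked, report))

matches_reality = sum(report == ("brighter" if actual else "dimmer")
                      for actual, told, report in trials)
matches_belief = sum(report == ("brighter" if told else "dimmer")
                     for actual, told, report in trials)

print(f"Reports consistent with the actual setup:      {matches_reality}/20")
print(f"Reports consistent with the observer's belief: {matches_belief}/20")
```

Because the simulated reports line up perfectly with belief but only at chance with the actual setup, the pattern mirrors what Wood found when he quietly decoupled Blondlot's beliefs from the apparatus.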

In general, the expectations that we bring into a situation can have powerful effects on what we perceive, how we behave, and what we remember later on about the situation. In Critical Thinking Lesson 4, we will return to the topic of personal experience and its limitations.

CRITICAL THINKING QUESTIONS FOR LESSON 2B
Question 2B-1

Professor Harrington decided to perform a study of maze learning in rats in order to learn more about a particular brain structure--the "corpus substantia"--that he thinks may be important for learning. He found two strains of rat that differ in the size of this brain structure: the corpus substantia is larger, on average, in rats from Strain A than it is in rats from Strain B. He developed the following hypothesis: rats from Strain A, because of the larger size of their corpus substantia, will learn the maze more quickly than will rats from Strain B.
Because he didn't have time to perform the experiment himself, Professor Harrington hired student volunteers to test the two strains of rats in the maze. He described the rationale of the experiment to the students in detail and informed them of his hypothesis. He then gave each student a cage labelled Strain A that contained several rats, and a second cage labelled Strain B that contained several rats. The students found that rats from Strain A learned the maze faster, on average, than did the rats from Strain B.

Based on what you have learned in this lesson, can we conclude that rats from Strain A, perhaps because of their larger corpus substantia, learn the maze more quickly than do rats from Strain B?

Question 2B-2

A group of researchers put advertisements in local newspapers asking for people to participate in a study investigating the effectiveness of a new anxiety-reducing (anxiolytic) medication. The advertisement stated that potential participants should be suffering from severe anxiety that is not being treated with any medication or psychotherapy. Sixty people meeting the criteria contacted the researchers. The 60 participants received the anxiolytic medication for a total of eight weeks. Their degree of anxiety was measured at the beginning of the study and then again at the end of the study. The researchers found a large average decrease in the degree of anxiety over the course of the eight-week treatment.
Is this good evidence for the effectiveness of the anxiolytic medication? Why or why not? If you stated that it was not good evidence, what would you do to improve the study?

Bibliography and References

Hines, T. (1988). Pseudoscience and the paranormal: A critical examination of the evidence. Amherst, NY: Prometheus.

Ricker, J. P. (2002). An introduction to the science of psychology. Boston: Pearson Custom Publishing.

Rosenthal, R., & Fode, K. L. (1963). The effect of experimenter bias on the performance of the albino rat. Behavioral Science, 8, 183-189.

Russo, E. (2002). The biological basis of the placebo effect. The Scientist, 16 (24). Retrieved March 17, 2003, from http://www.the-scientist.com/yr2002/dec/research_021209.html

Other articles

Confirmation Bias in Action: Critical Thinking While Parenting

If critical thinking were easy, everyone would do it!

Every day, I speak with customers regarding the importance of assessing and developing critical thinking skills in their employees. I share ways in which they can create a culture of critical thinking. I blog about the topic, speak to HR associations about the topic, and even coach critical thinking in the Critical Thinking University discussion forums.

So, you can imagine my embarrassment when I fell prey to a common cognitive bias recently.

It all began when I saw a dreaded note on my toddler’s preschool class door. “There are 4 confirmed cases of Hand Foot and Mouth Disease. Be on the lookout for a fever and a rash around the child’s hands, feet, and mouth.” My fears multiplied when I learned that my daughter’s best friend was patient zero. Hand Foot and Mouth Disease is a highly contagious virus that spreads easily and quickly among children. The child first develops a fever; then, a few days later, a rash and/or small blisters appear, which cause considerable discomfort. There is no treatment for the virus, and it takes several days for the symptoms to disappear. All you can do is treat the discomfort with Tylenol and encourage the child to drink liquids and eat popsicles to stay hydrated. Once I saw the note, I immediately launched into Worst Case Scenario mode.

So, the next morning, when my toddler felt a bit warm, I grabbed our infrared thermometer and let out a sigh as it flashed a bright red screen and read 102º. I actually scanned my daughter with the thermometer several times to ensure the reading was correct. Being thorough in my analysis, I scanned my own forehead with the thermometer, which read 98.8º. With that confirmation, I resigned myself to the fact that we were in for the long haul with Hand Foot and Mouth Disease.

We also have an infant at home, so we did our best over the next few days to quarantine the toddler, use hand sanitizer frequently, and Lysol everything in sight. We kept checking the toddler’s temp and it consistently registered between 101º and 102º day after day. Knowing it was only a matter of time before the rash/blister stage took hold, I checked the toddler’s hands and feet looking for red spots, but nothing appeared. I assumed the virus was just slow to show additional symptoms. After a few days, the toddler would occasionally show me her hands and say they hurt, but I couldn’t see any spots. My own hands began to feel like they were burning, so I wondered if the virus was just presenting differently for the two of us. Day after day went by…Thursday…Friday…Saturday…Sunday…Monday… Still no rash or blisters (or any other symptoms at all, really), but the fever remained consistent.

By Tuesday morning, when her fever registered at 102º again, I’d had enough. We immediately headed to the pediatrician’s office for advice. The nurse scanned my daughter’s head with the thermometer, and I saw her make a funny face. She scanned again and wrote something down on her notepad. As I was answering the other nurse’s questions about my daughter’s symptoms, I glanced down at the notepad…98.7º. Wait, what? I was in disbelief. How can a child go from 102º to 98.7º in a half hour? We discussed our options (blood work, chest scans, etc.), but since the fever was apparently gone, we decided to go home and monitor the situation to see if the fever returned before taking any action.

I left the doctor’s office still baffled. How could her fever disappear so quickly? Did the fever just happen to break on the way to the doctor’s office? As soon as we returned home, I used my own infrared thermometer to scan her. Unbelievably, it read 102º. It took 6 full days for my critical thinking skills to kick in. I grabbed our back-up digital thermometer and placed it under her arm. A few seconds later, I just shook my head as I read the screen: 98.6º.

In just a few seconds of reflection, I realized I had succumbed to a common cognitive bias. From the second I read the notice on the preschool door, I had mentally prepared for my daughter to catch the virus. From the first scan on, I interpreted any evidence as confirmation of my belief. I barely attempted to double-check the evidence. Now that I can reflect logically on the only symptoms we experienced, I realize our hands weren’t burning from an invisible rash; they were burning from excessive hand sanitizer use. That thought never occurred to me, thanks to confirmation bias.

Recognize Assumptions - Because my daughter’s best friend had a confirmed case of Hand Foot and Mouth Disease, I assumed it was only a matter of time before my daughter began presenting symptoms.

Evaluate Information - I never questioned the validity of my thermometer. I never sought out any data to the contrary because my assumptions were so strong. And when new data (burning hands) appeared, I never considered that the cause could be anything other than the virus. I also never questioned the lack of other tell-tale symptoms. I failed to objectively evaluate the evidence, or lack thereof.

Draw Conclusions - Because I had incorrectly interpreted the evidence, I drew the wrong conclusion and lost 4 days of work (and 4 days of preschool tuition) caring for a child who wasn’t actually sick.

Confirmation Bias is a very dangerous logical error. Imagine the scenario above, but replace the parent expecting a children’s virus with an individual expecting profit from a financial investment. The same way I waited and waited for symptoms to appear because I was expecting to see them any day, an investor may only seek out information that confirms their bias toward a certain investment and then wait too long to cut their losses because they anticipate returns any day.

To defend against Confirmation Bias, it’s important to:

  • Remain purposefully neutral when evaluating information
  • When you do form a hypothesis, seek out evidence to the contrary
  • Check your assumptions and evidence interpretation with a subject matter expert
  • Engage a trusted person to take on the role of Devil’s Advocate

Learn more about critical thinking by downloading the Think About It! eBook.

Editor’s Note: Breanne Harris is the Solutions Architect for Pearson TalentLens. She works with customers to design selection and development plans that incorporate critical thinking assessments and training. She has a Master’s degree in Organizational Psychology and has experience in recruiting, training, and HR consulting. She is the chief blogger for Critical Thinkers and occasionally posts at ThinkWatson. Connect with her on LinkedIn and Twitter for more of her thoughts.

Observer-expectancy effect

The observer-expectancy effect (also called the experimenter-expectancy effect, expectancy bias, observer effect, or experimenter effect) is a form of reactivity in which a researcher's cognitive bias causes them to unconsciously influence the participants of an experiment. Confirmation bias can lead to the experimenter interpreting results incorrectly because of the tendency to look for information that conforms to their hypothesis and to overlook information that argues against it. It is a significant threat to a study's internal validity, and is therefore typically controlled using a double-blind experimental design.
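
To show what a double-blind design involves in practice, here is a minimal bookkeeping sketch in Python. The participant IDs, group sizes, and coding scheme are invented for illustration; the essential idea is that everyone who interacts with participants sees only opaque codes, and the code-to-condition key is opened only after the data are collected.

```python
# Illustrative sketch only: hypothetical participants and an invented coding scheme.
# In a double-blind design, neither participants nor the experimenters who run the
# sessions know who is in which condition until the study is over.

import random

random.seed(42)
participants = [f"P{i:02d}" for i in range(1, 13)]  # hypothetical participant IDs

# Randomly assign half to the active treatment and half to the placebo.
conditions = ["active"] * 6 + ["placebo"] * 6
random.shuffle(conditions)

# Give every participant an opaque code; only a third party holds the key.
codes = random.sample(range(1000, 10000), len(participants))  # unique 4-digit codes
blind_labels = {pid: str(code) for pid, code in zip(participants, codes)}  # what experimenters see
blind_key = {str(code): cond for code, cond in zip(codes, conditions)}     # opened only at analysis

print("Experimenter sees:", blind_labels)   # no hint of condition anywhere
# ...run the study, record outcomes keyed by code...
print("Unblinded at analysis:", blind_key)
```

Keeping the key sealed until analysis is what prevents the experimenter's expectations from leaking into how sessions are run or how responses are recorded.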

An example of the observer-expectancy effect is demonstrated in music backmasking, in which hidden verbal messages are said to be audible when a recording is played backwards. Some people expect to hear hidden messages when reversing songs, and therefore hear the messages, but to others it sounds like nothing more than random sounds. Often when a song is played backwards, a listener will fail to notice the "hidden" lyrics until they are explicitly pointed out, after which they are obvious. Other prominent examples include facilitated communication and dowsing.

Learning to learn: fighting cognitive biases

In a world with more information available than ever, figuring out how to use the brain to its fullest potential, as well as filling it with as much knowledge as possible, is a major focus for a vast number of people.

I’ve made it clear on many occasions that I believe in the importance of being a perpetual learner. One of the key activities associated with learning is exploring and understanding the way the human brain functions, and using the results to properly hack the critical thinking process. For example, did you know that something called a cognitive bias exists? The term refers to a systematic tendency to think in particular ways, often at the expense of accurate judgment.

Cognitive biases range from the bandwagon effect – when claims are accepted because a large number of other people accept them – to the confirmation bias – when people favor information that confirms what they already think or believe. According to those who study psychology and behavioral economics, hundreds of cognitive biases exist. It’s necessary to educate ourselves on these biases so that we can overcome them and make sure we’re thinking as clearly and critically as possible when it comes to decision making and information processing.

Critical thinking is an increasingly important skill that has been overlooked by many as information becomes more accessible and more abundant. Today, a critical thinker is able to set him or herself apart by lending his or her brain to the many others who have not yet figured it out. Becoming this “thought leader,” if you will, is beneficial in many ways, including the ability to gain the trust of those with whom you wish to connect, as well as authority in the space in which you have established your expertise.

While it’s not possible to run through a list of every possible cognitive bias with each of life’s decisions, it is possible to take a few actions and train our brains to overcome these phenomena on a more general level. Here’s a list of five cognitive biases and suggestions for how to fight them.

The Bias. The Backfire Effect – the rejection of evidence that contradicts your point of view.

The Anti-Bias. Rather than treating your own points of view as fact, view them as hypotheses. Being proven wrong by data is not a bad thing; it just means you learned something new.

The application: Your boss just informed you of an article he read explaining a study in which orange call-to-action buttons garner the most clicks. Your first response is that this can’t be possible, because you hate orange – it’s the ugliest color! You step back, read the aforementioned article, and realize that this information actually does make sense. (You still don’t have to buy any orange clothes, though, don’t worry).

The Bias. The Hard-Easy Bias – the pattern of overconfidence in easy situations and underconfidence in difficult situations.

The Anti-Bias. Define and recognize your capabilities. If the issue at hand is within them, you will solve it; if you cannot do it on your own, you will be able to find another way to get it solved. Try to briefly remove yourself from the situation before you begin, and imagine what you would tell a close friend or colleague if they were the one faced with the problem. Then, take your own advice!

The application: Every morning, you arrive at work and look at your to-do list. You’ve been skipping over “write Q1 content strategy” all week in favor of “answer e-mails” and “tweet.” While the former is certainly the more daunting task, you are going to have to do it eventually, and you wouldn’t be tasked with it if you weren’t fully capable of completing it. Remember the last time you thought you wouldn’t be able to finish your strategy? You did it then, and you’ll do it again now.

The Bias. Irrational Escalation – compounding a bad investment, because “it’s already bad.”

The Anti-Bias. Bear with me – here comes a metaphor. This reminds me of a health tip I once read: If you have one piece of cake, this doesn’t mean your entire day is ruined. You don’t have to give up and eat the whole thing just because you started. The same goes for investing money in a sinking stock or failing on a part of a project. If one thing turns out badly, the best thing to do is to make the rest of it turn out well.

The application: “I just bought 50 shares of Facebook stock, and it tanked! May as well just buy 500 more, it can’t get any worse!” Slow down there, tiger. Losing $2,500 is certainly not the same as losing $30k. Think about that.

The Bias. The Observer-Expectancy Effect – when expectations influence outcome.

The Anti-Bias. Once again, remove yourself from the situation. When conducting a test or experiment, be open to the (50%) possibility that your hypothesis will be disproven. As previously mentioned, this will only mean that you learned something new. When experimenting or analyzing data, tainting or angling the results to support your hypothesis will only hurt you in the long run.

The application: Your colleague just finished two months’ worth of research trying to figure out what your customers see as valuable about your product. Just as he begins presenting, you exclaim, “I knew it! Our value prop is X!” Although the first number you saw might have supported this, your colleague hasn’t yet gotten to the second finding, which clearly disproves it. Your mind, however, is already closed. Acknowledge that, get in there, and open it back up. Remember that 50% chance that your hypothesis would be proved wrong? As great a critical thinker as you may be, data is still the smartest thing in the whole world – never forget it.

The Bias. Reactance – the desire to do the opposite of what you’re asked or advised, simply to prove your freedom of choice.

The Anti-Bias. In one of the more difficult bias avoidance situations, this calls for the swallowing of pride and recognition that doing what you’re asked and/or advised is probably in your best interest, and you probably would have been perfectly okay with doing it if someone else didn’t tell you to. Don’t worry, everyone will remain aware of your freedom of choice.

The application: Just as you’re about to start reading that book your friend sent you a few weeks ago, she calls to bug you about how you haven’t picked it up yet. Even though you were about to, you suddenly feel the need to let her know that you’ll get to it when you can and that you’re a very busy person! You know it would make her happy if you just read it, and trust me, she knows you’re an adult with free will. With that in mind, swallow your pride and thank her for the reminder, then pour yourself a cup of tea and dig into that (probably amazing) book.

The Bias. Bias Blind Spot – not recognizing the existence of cognitive biases.

The Anti-Bias. Read this post and open your mind!

The application: You just read a great article on cognitive biases and advice on how to overcome them. You want the rest of the world to be able to do this also, so you tweet it!

What is the most complex cognitive bias you’ve encountered? Let’s discuss in the comments.