A Primer on Analogical Reasoning
Systematizing one of our most common forms of argument
A chapter of Mill’s “System of Logic” was part of the inspiration for this post.
I recently came across one of the best Twitter accounts I’ve ever seen.
The account, @justsaysinmice, does one thing: it points out where journalists’ reports fail to mention that a study has only been performed on mice. For some reason — probably lack of flair — journalists are slow to advertise that only about 11% of drugs clinically tested in human beings will actually work after a successful trial in rodents [1].
Mouse clinical trials are a Swiss Army knife in medicine — especially oncology — yet they often fail when scientists try to replicate the results in human beings. As it turns out, mice aren’t tiny, furry humans. However, they are sometimes *analogous* to people.
Recently, problems like the reproducibility crisis in biomedical research have gotten me interested in the pitfalls of analogical thinking. In disciplines like medicine, business strategy, and law, there’s often no exact precedent for the plan, experiment, or case in question. That leaves techniques like analogy, first-principles thinking, and thought experiments as some of the only tools these professions can employ to create new knowledge or come to new conclusions. I’m going to do my best here to give you some guiding rules on what makes a good argument from analogy, first by mentioning some examples, then by trying to synthesize those into general rules.
If you’re just interested in the runbook on analogical thinking, feel free to skip to the end.
Have you ever come across someone telling you that their company is the Uber of X? Before I worked at a large corporation (Amazon), I used to work part-time with startups as an independent contractor. At one startup I worked with very briefly, I’d hear — and regurgitate — the Uber comparison fairly often. For this particular company, the explanation through analogy helped others understand the product much faster. However, the owner was also eager to look to Uber to decide *what* features to add. Adding tips for drivers, using your current location for delivery, and others I can’t remember were all on the list of features to build.
Not a lot of these features make sense for a small-time pre-prepared meal delivery service.
The company didn’t make it long enough to ship many of these ideas; in fact, I was only officially on the payroll for a couple of weeks before they went bust and I moved on to a different project. Looking back, I think one mistake the owner was making was sliding from an analogical explanation into an analogical argument. The analogical argument is something of the form: feature Y is great for Uber’s customers, so, by analogy, it will be good for ours. The analogy with Uber was doing explanatory work, but Uber wasn’t a good place to look for features.
We choose which drugs to try in human clinical trials a lot like how that owner chose which features to build in his app, except that in the case of mice the analogy is often good. When some treatment is successful on a mouse, it clears up some symptoms in that mouse, and someone is trying to claim that the treatment will also work in humans. The resolution of the symptoms in the mouse is what we’re hoping will translate to humans, much like how the aforementioned company was hopeful that a useful feature in Uber might translate to a useful feature in their app.
In principle, mice are actually great candidates for analogies with humans: we’re both mammals, and we have similar organs, nervous systems, etc. All this stems from the fact that, to an acceptable extent, mice are genetically similar to humans. And in some respects, mice are much more useful test subjects than we are: their life spans are much shorter, and they’re much easier to control and monitor. So what’s going wrong with mouse studies?
Statistical problems aside, there are only two general places where the mouse analogy can fail to provide us with “true positives”: the biology, and the experimental design. In other words, either the analogy between the actual mouse and a human being broke down or the analogy between the mouse trial and the clinical trial did.
In cases where the biological analogy broke down, one common mistake researchers made was misdiagnosing the genital glands that mice have as cancerous lesions [1]. Researchers’ ignorance of mouse biology is the problem here. If those performing the experiments had been more familiar with mouse biology, the glands might have been correctly ignored.
Rather than ignorance of either domain, experimental problems often stem from differences between the standards of human trials and those of mouse trials. In an NPR interview, Joseph Garner, an author of the Nature paper cited in this article, joked about the experimental design of mouse studies:
“Imagine you were doing a human drug trial and you said to the FDA, ‘OK, I’m going to do this trial in 43-year-old white males in one small town in California,’”
Oftentimes the mice included in studies will all share a grandparent, or experimenters will know which mice are in which test groups, inviting confirmation bias (this is known as failing to “blind” observers). These problems are rampant: a recent meta-analysis of animal experiments showed that 86% of experiments didn’t blind observers to the treatment of animals, and that 87% didn’t randomize animals to treatments [1]. Because these problems are so basic, they would rarely be repeated in a clinical trial on humans. The problem here isn’t that experimenters are missing some complex point, but that they’re expecting mouse results to translate to human ones without treating the mouse trial like they would a human one. If you want an experimental result to transfer from one species to another, you’d better keep the standards around the experiments as identical as you can.
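To make those two safeguards concrete, here’s a minimal sketch in Python. Everything in it (the group names, the seeds, the helper functions) is hypothetical, just to show the shape of randomizing animals to treatments and blinding observers:

```python
import random

def assign_treatments(animal_ids, treatments, seed=42):
    """Randomization: randomly assign each animal to a treatment group."""
    rng = random.Random(seed)
    shuffled = list(animal_ids)
    rng.shuffle(shuffled)
    # Deal the shuffled animals round-robin into the treatment groups.
    return {animal: treatments[i % len(treatments)]
            for i, animal in enumerate(shuffled)}

def blind_labels(assignment, seed=7):
    """Blinding: replace treatment names with opaque codes so the person
    scoring outcomes can't tell which group an animal belongs to."""
    rng = random.Random(seed)
    treatments = sorted(set(assignment.values()))
    codes = [f"group-{chr(ord('A') + i)}" for i in range(len(treatments))]
    rng.shuffle(codes)
    key = dict(zip(treatments, codes))  # held by someone who never scores outcomes
    return {animal: key[t] for animal, t in assignment.items()}, key

mice = [f"mouse-{i}" for i in range(12)]
assignment = assign_treatments(mice, ["drug", "placebo"])
blinded, key = blind_labels(assignment)
print(blinded["mouse-0"])  # the observer sees e.g. "group-B", never "drug"
```

The point isn’t the code; it’s the discipline. If a human trial would demand randomization and blinding, the mouse trial standing in for it should too.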
Similarly, in our Uber example, if we want a feature from Uber to translate to another market, the things that made that feature a hit for ridesharing should also be relevant to a prepared meal delivery service. A location-sharing feature should strike you as relevant for ridesharing — you might not know the address you’re standing at — but not for meal prep, which, for the most part, is simply delivered to your house.
So far, what we’ve talked about is common sense, but analogies really shine brightest when used to solve problems outside of one’s domain. In the 1980s, a group of cognitive psychologists out of the University of Michigan examined how college students solved problems with the help of analogies. The students were asked to solve a medical problem known as Duncker’s radiation problem. In it, a patient has a tumor deep inside their stomach that needs to be operated on. Direct surgery would be too dangerous, so the students’ only option is to use high-energy waves to destroy the tumor. The catch is that the high-energy waves cannot be applied directly without damaging the healthy tissue in their path [2].
The solution to the problem involves surrounding the patient with low-energy waves and having them converge on the tumor, producing a high-intensity dose there, but nowhere else. The students were asked to memorize a few stories prior to being given Duncker’s radiation problem. However, each story was an analogy in disguise, given to the students to hint at surrounding a target in some fashion. For example, in one story a general splits his troops into small groups that surround a fortress that would be impenetrable with a direct approach. Without being given the stories prior to the question, only about 10% of students solve the problem, but when presented with the analogy about a general seizing a fortress, and then prompted to use the story as an analogy, the figure jumps to 30%. A second analogy brings the figure to 50%, and after a third, 80%. The effect of analogies on problem-solving is remarkable.
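The arithmetic behind the solution is worth spelling out. Here’s a tiny sketch with made-up intensities and thresholds (none of these numbers come from the study):

```python
# Hypothetical numbers: healthy tissue is damaged above intensity 50,
# and the tumor is only destroyed at intensity 80 or more.
DAMAGE_THRESHOLD = 50
DESTROY_THRESHOLD = 80

n_beams = 8
beam_intensity = 10  # each low-energy beam is harmless on its own

tissue_dose = beam_intensity           # each path through tissue sees one beam
tumor_dose = n_beams * beam_intensity  # all the beams converge on the tumor

assert tissue_dose < DAMAGE_THRESHOLD    # healthy tissue is spared
assert tumor_dose >= DESTROY_THRESHOLD   # the tumor is destroyed
print(tissue_dose, tumor_dose)  # 10 80
```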
However, the study has a caveat: students needed prodding to utilize the analogies. In another experiment, students were first told to memorize the same stories along with two “distractor stories”. This time, the students were not explicitly told to use the stories as analogies. In this version of the experiment, the percentage of students finding the correct answer drops considerably (to around 20%). Clearly, analogies are useful, but hoping the right one comes to you spontaneously is wishful thinking.
This might be why executives are so quick to look for analogies explicitly when making high-risk business decisions. In a 2005 issue of Harvard Business Review, analogical problem solving is explored as a tool for corporate executives:
“We’ve explained the notion of analogical reasoning to executives responsible for strategy in a variety of industries, and virtually every one of them, after reflecting, could point to times when he or she relied heavily on analogies.” — HBR
The article goes on to explain that in the 90s, Circuit City executives made the surprising decision to jump into the used car business, opening what’s now the Fortune 500 company CarMax. In hindsight this was genius, but in the moment it must have seemed like a crazy move. However, to the Circuit City executives who had researched the used car space, the market they learned about looked too similar to the consumer electronics market of the 70s (which they had tamed successfully) to ignore. Trust in existing retailers was low, retailers weren’t utilizing economies of scale, information technologies were largely unused, etc. Building customer trust was an opportunity in the used car business, just like it had been in electronics: used-car salesmen are synonymous with swindling people, and CarMax could change that by offering fixed prices with no haggling. Because they were able to align an existing marketplace with one they had already made a killing in, they were able to navigate an incredibly risky decision.
Not everything between the used car business and the electronics business lined up, but the disanalogies between the markets could be mitigated. Unlike consumer electronics stores, car dealerships couldn’t present the large number of used cars they had available to customers. To remedy this, Circuit City utilized the web to show consumers their full inventory.
There’s a host of things that separate the Circuit Cities of the world from the hordes of Uber-for-Xs and failed mouse studies. I’d like to end with some principles that can be applied anywhere and used for judging your own analogies, as well as others’.
Principles of a good argument from analogy
This is the textbook definition for an analogical argument:
System A has properties a, b, and d. System B has properties a and b. So system B may also have property d.
In logic textbooks, system A is often known as the primary analogy, and system B as the secondary analogy. *d* is the conclusion you’re trying to reach.
The key when analyzing these arguments is figuring out whether the properties our systems have in common (a and b) are relevant to the property we’re trying to infer, d. If not, the similarities between our systems may just be superficial. This is why knowledge of the two domains in question is critical for applying the technique. There’s no good way to understand the complete causal chain between properties we observe in the real world, so having some broad knowledge of the domains is often the best we can do here (oncologists being aware of mouse anatomy, for instance).
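To make the schema and the relevance test concrete, here’s a minimal sketch in Python. The property sets, and the judgment of which properties count as “relevant”, are my own illustrative assumptions:

```python
def analogical_support(primary, secondary, relevant_to_conclusion):
    """Shared properties only count as evidence for the conclusion
    if they're causally relevant to it."""
    shared = primary & secondary
    return shared & relevant_to_conclusion

# A toy version of the Uber-for-meals example from earlier:
uber = {"two-sided market", "mobile app", "location sharing", "driver tips"}
meal_startup = {"two-sided market", "mobile app"}
# Which shared properties actually bear on "driver tips will work for us"?
relevant = {"two-sided market"}  # a judgment call that requires domain knowledge

print(analogical_support(uber, meal_startup, relevant))
# {'two-sided market'} -- thin support; the rest of the overlap is superficial
```

The hard part, of course, is the relevant set: deciding which shared properties actually bear on the conclusion is exactly where domain knowledge earns its keep.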
Notice the *may* qualifier in the above definition. No analogy is bulletproof, so it’s best to have multiple analogies to prove your point. This is practiced most frequently in law, where common law cases are enumerated to make a point about how the case in question should be treated.
I’ve tried to boil down much of what’s discussed in logic textbooks to 3 categories of heuristics to pay attention to when making analogical arguments, along with some questions you should consider for each category (a rough scoring sketch follows the list):
1. The set of analogies as a whole
Gather as many analogies as you can.
- How diverse are the analogies? Provided the analogies actually work, the more diversity, the better.
- How many analogies do you have? The more, the merrier.
2. The individual analogies
Learn as much about the spaces of those analogies as time permits.
- Could ignorance about a system’s domain be hiding whether a property is causally relevant? If you barely know anything about one of the domains, proceed with caution.
- How similar is the domain of the primary analogy to that of the secondary? The more similar they are, the more likely someone has looked into the analogy before.
3. The individual properties within those analogies
For each analogy, figure out if the properties in that analogy really translate.
- What are the similarities and differences between the primary and secondary analogies? Pay closer attention to the differences that could disprove your analogy. Chances are you’ve already picked out the primary analogy because you think it will work for the secondary, so you need to work hard to prove yourself wrong.
- How many properties are in favor of your conclusion? Again, the more the merrier.
- How many differences count against your conclusion? Can those differences be mitigated?
- Are the properties causally relevant? Be sure you can trace a story between the property and your conclusion.
- Is the property in the primary analogy only related to the conclusion because the conclusion is weak or ill-defined?
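As promised, here’s one way you might turn these heuristics into a rough scoring sketch. The fields, the formula, and every number below are my own invention, a back-of-the-envelope aid rather than a standard method:

```python
from dataclasses import dataclass

@dataclass
class Analogy:
    name: str
    shared_relevant: int     # shared properties causally relevant to the conclusion
    differences: int         # differences that bear on the conclusion
    mitigable: int           # how many of those differences can be mitigated
    domain_knowledge: float  # 0..1, how well you know both domains

def analogy_score(a: Analogy) -> float:
    """Crude heuristic: relevant similarities help, unmitigated differences
    hurt, and everything is discounted by your ignorance of the domains."""
    unmitigated = a.differences - a.mitigable
    return a.domain_knowledge * (a.shared_relevant - unmitigated)

def argument_strength(analogies: list[Analogy]) -> float:
    """More (working) analogies are better; sum the positive contributions."""
    return sum(max(analogy_score(a), 0.0) for a in analogies)

carmax = Analogy("electronics -> used cars", shared_relevant=3,
                 differences=2, mitigable=2, domain_knowledge=0.9)
uber_x = Analogy("Uber -> meal delivery", shared_relevant=1,
                 differences=3, mitigable=0, domain_knowledge=0.3)

print(analogy_score(carmax))                # 2.7: strong, differences mitigated
print(analogy_score(uber_x))                # -0.6: weak, superficial overlap
print(argument_strength([carmax, uber_x]))  # 2.7: the weak analogy adds nothing
```

Don’t take the numbers seriously; the exercise of filling in the fields is the point.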
Keeping things logical
I hesitated in even writing this blog post. There was an insecurity at the forefront of my mind about whether this was all too basic. But I think that’s exactly why it’s important: it’s so easy to make analogies between things without accounting for the limitations of the mechanism, especially when arguers have skin in the game (abortion law is the best example of this). We need some way to keep ourselves honest. I think this is why the philosophy literature on the topic is so exhaustive; philosophers are careful to avoid these mistakes.
Hopefully this will be helpful for you; it was super interesting for me to synthesize.
Notes
[1] Most of the facts in here stem from an article I found in the journal *Nature* here: https://www.nature.com/articles/laban.1224.pdf and another article on *Nature’s* website that you can find here: https://www.nature.com/news/preclinical-research-make-mouse-studies-work-1.14913#/b1
[2] I got this from chapter 5 of the book *Range* by David Epstein. The title of the chapter is “Thinking Outside Experience”, and it’s basically an advertisement for analogical reasoning.