What is the “Hard Problem” of Consciousness?
Thoughts on one of the most contentious problems in philosophy of mind today
The classic anime Ghost in the Shell takes its title from philosopher Gilbert Ryle's phrase "the ghost in the machine," coined in his critique of Cartesian dualism — the idea that the mind and body are separable entities.
A couple of months back I finished Daniel Dennett's 1991 book *Consciousness Explained*, which sped me down a multi-month rabbit hole in which I tried to learn as much about consciousness as I could. I pored over the latest in philosophy, computer science, neuroscience, and psychology, only to find that in 2019 the topic is as contentious as it was in '91. Everything I learned in those few months seemed to be glued together by one particular question, known as the "Hard Problem" of consciousness.
Consciousness is fascinating because everything you do is predicated on an understanding of it. The reason you treat a dog — or another human being for that matter — differently than a blade of grass is at least in part because you feel as though that dog has a first-person perspective similar to yours. As fundamental as consciousness is, you’d think we’d have a grasp on why we have it at all.
However, consciousness remains as elusive as it is fundamental. Set aside the verbal reports you and I give about our consciousness, and there is no evidence it exists at all — no location in the brain where it can be found. This isn't to say that neurologists don't know which regions of the brain are necessary for the various parts of conscious experience: the occipital lobe handles vision, the auditory cortex hearing. But there is nothing like a "theatre" of consciousness where we can find the data for the movie that seems to be shown to you.
Our primary motivation in life is to change our current state of consciousness, yet we have no idea how to find that consciousness physically. And even if we can “find” consciousness, is there any hope in asking why it’s there in the first place? Why would the cold, algorithmic nature of evolution give us conscious experience when it seems perfectly reasonable to imagine beings that could do everything we can without it?
The famous “hard problem” of consciousness aims to ask why we need to be conscious of anything at all. The hard problem is controversial; many people don’t think it’s actually a valid question. My hope in this post is to give you an overview of the hard problem, and some different stances on it. Along the way, we’ll also talk (in a non-technical manner) about whether new findings in artificial intelligence can help us come to terms with the hard problem. Finally, I’ll give my own take on the hard problem towards the end.
Author's note: I'm neither a philosopher nor a neuroscientist. If there are any mistakes, or something critical about the topic missing here, please let me know and I'll be sure to add it.
The Hard Problem
How do we arrive at the "feeling" of consciousness? The mechanistic narrative of neurons firing in the brain may account for why we behave the way we do, as well as for other "easy problems," but it leaves no room for the qualitative mental sensations you seem to experience. The way a cup of coffee tastes to you right now is excluded. Your "mind's eye" is excluded. Philosophers term these subjective, qualitative experiences "qualia" (singular: "quale"), and they play a central role in theories of consciousness.
"Consciousness" is also a loaded word, so it's better to be specific. Qualia are instances of consciousness and do not necessarily include the narrative structure that may be unique to *human* consciousness. A bat can be said to have qualia. This is controversial, but the existence of qualia can be taken as a threshold for saying that something is conscious.
One of the most famous thought experiments involving qualia is the Inverted Spectrum:
Suppose you and I look at the same apple. How can I be sure what I see as red isn’t what you see as green or vice versa?
This idea has its roots in John Locke's 1690 *Essay Concerning Human Understanding*, in which he distinguishes between the primary and secondary qualities of objects. To Locke, primary qualities are properties objects have independent of any viewer (think density, extension, etc.), while secondary qualities (color, taste, etc.) can't provide objective facts about objects — they're specific to your subjective experience. Secondary qualities can be thought of as the features of qualia.
Qualia sound like an undeniable part of your day-to-day life, but they're actually a contentious topic amongst philosophers. Much of the debate around the hard problem rests on our ability to treat qualia as conceptually different from the brain states they're associated with. If qualia are *the same thing* as the electrochemical activity that causes them, then there is no hard problem, because we know that electrochemical activity is a product of evolution. In the aforementioned essay, Locke uses the Inverted Spectrum to argue that qualia should be considered conceptually different from the underlying brain states they're associated with — provided you find the Inverted Spectrum *plausible*. The argument is theoretically interesting (it stems from what's known as modal logic), but I don't find it very convincing, and I don't want to cover it or the arguments around it here. Instead, if you can, I'd like you to remain agnostic about whether qualia are the same thing as whatever brain activity generates them, and, for the sake of some of the arguments below, entertain the possibility that they're conceptually different.
The hard problem of consciousness is tied to how we can connect our subjective experiences — our qualia — to our "low-level" explanations of phenomena. Can we reduce the subjective secondary quality of redness to a particular firing of neurons in the brain, and if we can, what necessitates the experience of redness? Couldn't we have all the same neurons fire, subtract the experience of redness, and have evolution work in exactly the same way? The state of the world without qualia seems like it would be the same; this is why the hard problem is so — for lack of a better word — hard, and why a boiled-down way to pose it is to ask: "Why do qualia exist?"
A quick detour: What do we mean by “exist”?
The hard problem seems to elude most reasonable explanations of consciousness. To illustrate: suppose someone gives a scientific account of when qualia arise — say, the idea that conscious experience arises when the brain's predictions fail. This might explain *when* consciousness arises but, as we've discussed, it still fails to explain why it's necessary. When we're asleep there seem to be periods where we have no qualia, yet we are perfectly alive and functioning. Why can't we be alive, but without experience, all of the time? The scientific account of why qualia are there is often called the "functional" reason for their existence; another way to phrase this is the "causal role" of qualia (what downstream effect do qualia have on behavior? Is it just planning?). A lot rides on our ability to distinguish between why qualia exist *as a function* and why qualia exist *as experience*. Many philosophers argue that a full explanation of consciousness must explain why qualia exist as experience.
The formulation of the hard problem I'm referencing was first set out by David Chalmers in his essay *Facing Up to the Problem of Consciousness*, where he describes it this way:
“It is undeniable that some organisms are subjects of experience, but the question of why it is that these systems are subjects of experience is perplexing. Why is it that when our cognitive systems engage in visual and auditory information processing, we have visual or auditory experience: the quality of deep blue, the sensation of middle C?”
In his essay, Chalmers endorses the intractability of the hard problem. He argues that the best we can do is take the existence of experience for granted and hope to find explanations for its structure. For example, if you're staring at your laptop, we have every reason to believe there are perfect neurological explanations for why the laptop's secondary qualities (its shape, color, etc.) appear to you, but why you need to *experience* those secondary qualities is still a mystery.
“Disqualifying” the Hard Problem
Some philosophers see Chalmers' hard problem as a trick, and one of the most influential people in philosophy of mind, Daniel Dennett, is amongst them. Dennett argues that the features of consciousness are akin to a set of magic tricks that collectively give us the illusion of consciousness, and that if you explained the function of every feature of consciousness, you would arrive at an explanation of why we feel like we have experiences. He thinks that if we continue to dismantle the pieces of conscious experience — as we have with déjà vu, for instance — the subjective character of experience will fall apart. Before Chalmers' essay, in his 1991 book *Consciousness Explained*, Dennett put forth a model of the brain that attempted to throw away the concept of qualia altogether.
Despite being written over 20 years ago, Dennett's model of consciousness — the Multiple Drafts Model (MDM) — remains influential. In the MDM he replaces what he considers an assumption about consciousness — namely, that it acts as a "stream" of information entering a "theatre" of experience — with a parallel model. Instead of one "frame" of consciousness arriving at a time, various "content fixations" are distributed around the brain simultaneously, with the fixations that "yell the loudest" winning the competition to enter consciousness. Viewed this way, the events you become aware of are analogous to the published version of many drafts.
These "drafts" are created rapidly, and fairly dramatic adjustments are made to them before they're published to consciousness. Here's a concrete example: the human eye darts around in rapid movements known as "saccades" about 5 times per second, with only a small, high-acuity patch of the retina (the fovea) taking in detailed perceptual data, yet you don't see the world as constantly jiggling around or missing information. The final image you see is the product of the brain combining the drafts created by the saccades: the raw information from each saccade plus a "best guess" for any missing information. Dennett provides similar explanations for the other features of consciousness, ultimately arriving at something like an outline of a full theory of consciousness.
The MDM Dennett presents in *Consciousness Explained* still only describes the functions of consciousness, but to Dennett, the functions of consciousness are what give us the impression the hard problem exists. In one of his essays, Dennett likens the hard problem to *The Tuned Deck*, a magic trick that is a modern twist on a classic: the magician asks the audience for a card, has them shuffle it back into a deck he has on hand, listens to the ripples of the deck as it's shuffled, and then finds the card they picked. At first pass *The Tuned Deck* sounds like a standard trick, yet even experienced magicians who saw it repeatedly were unable to decipher what was going on behind the scenes. Its inventor, Ralph Hull, only let other magicians in on the secret posthumously: each time he performed *The Tuned Deck*, he used a different — sometimes well-known — trick to accomplish it, but always included the act of listening to the ripples of the deck (or some other red herring) to make the audience think he had performed the same trick twice. By varying the technique while keeping the performance and the red herring constant, he kept his audience off the trail.
Dennett thinks Chalmers has pulled a *Tuned Deck* on many neurologists and philosophers by introducing the hard problem, when really the explanation of why we feel we have qualia lies within the "easy problems" of consciousness. What we're calling qualia are, after all, just the sum of the features of consciousness. Dennett is saying that by gathering all these features under the umbrella term "qualia," proponents of the hard problem may be giving themselves the impression that there's something additional to be accounted for, when really explaining the features of consciousness — the "easy problems" — will suffice.
“Real magic, in other words, refers to the magic that is not real, while the magic that is real, that can actually be done, is not real magic. I suggest that many, e.g., David Chalmers has (unintentionally) perpetrated the same feat of conceptual sleight-of-hand in declaring to the world that he has discovered The Hard Problem of consciousness.” -Daniel Dennett
Computing the “Easy Problems”
Whether or not you agree with Dennett's or Chalmers' stance on the hard problem, there's no doubt that algorithmic descriptions remain useful for explaining Chalmers' "easy problems." First, let's look at the easy problems that make up consciousness as Chalmers describes them; then we'll look at how some of these can be explained, at a high level, as functions. Taking the time to address some of these functional explanations is crucial for deciding whether they will ever suffice to fully explain experience.
Here are Chalmers’ easy problems:
The ability to discriminate, categorize, and react to environmental stimuli
The integration of information by a cognitive system
The reportability of mental states
The ability of a system to access its own internal states
The focus of attention
The deliberate control of behavior
The difference between wakefulness and sleep
In the essay, he notes that these are by no means meant to be exhaustive. However, they might be a good starting point for our analysis.
To scope things down a little, let's focus on one particular area of conscious experience: vision. If solving the easy problems of vision can get us to why the experience of vision exists, then we'll know whether Dennett is right that the hard problem is a "magic trick." Vision is particularly interesting for discussing theories of consciousness both because it's our most adept sense and because research on it has yielded some fairly dramatic advances in artificial intelligence. What can human vision, or computer vision, tell us about the easy problems?
The “Easy Problems” in (Computer) Vision
Right now, photons from this screen are entering your eye, striking the cones of your retina, and sending signals along the optic nerve to V1, the primary visual cortex of your brain, and onward to regions concerned with higher-order visual processing. Discoveries about special neurons in V1 that perform operations like edge detection inspired machine learning techniques like convolutional neural networks, in which the function of those V1 cells is imitated by mathematical operations on data such as image files. In a sense, convolutional neural networks try to get a computer to view a JPEG the way you, a human being, might. We can use these techniques to get computers to do image recognition, play video games from raw image input, and other things similar to the way we use our sensory information on a day-to-day basis.
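To make the edge-detection idea concrete, here's a minimal sketch in plain NumPy — toy data, not any particular model — of the core operation a convolutional layer performs: sliding an orientation-selective filter over an image, loosely analogous to the receptive fields of those V1 neurons.

```python
import numpy as np

# A tiny synthetic "image": dark on the left, bright on the right,
# with a vertical edge between columns 2 and 3.
image = np.zeros((5, 6))
image[:, 3:] = 1.0

# A Sobel-style kernel: responds to horizontal changes in brightness,
# i.e. vertical edges.
kernel = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)

def convolve2d(img, k):
    """'Valid' 2-D convolution (strictly, cross-correlation, as in most CNNs)."""
    kh, kw = k.shape
    out_h = img.shape[0] - kh + 1
    out_w = img.shape[1] - kw + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

response = convolve2d(image, kernel)
# The filter is silent over uniform regions and fires only where the
# brightness changes — a crude "edge map" of the input.
print(response)
```

The difference in a real convolutional network is that the kernels aren't hand-written like this Sobel filter; they're learned from data, and early layers routinely converge on edge and orientation detectors resembling V1 receptive fields.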
Can we use these findings to bridge the gap between the algorithmic explanation of consciousness and experience? The explanation we just put forth accounts for a portion of consciousness in the sense that it demonstrates how we're able to see. The narrative even contains clues as to how we can use the hard work evolution has done for us to get machines to recognize images. However, it's not clear how we can make the leap from understanding vision to why we need to be conscious of what we're seeing. We can imagine building a system out of things like convolutional neural networks that replicates human-like behavior, and still not be convinced the system we've built is conscious.
Building an AI that looks, acts, and observes much like a living creature is no longer very difficult to imagine. In the example below, Google's DeepMind team built an AI called AlphaStar to play the real-time strategy game *StarCraft II*. The game is fundamentally different from games like chess or Go because players have imperfect information — the AI can't see the entire map at once — which makes the classic computer science solutions that work for perfect-information games ineffective. The AI also needs to play in real time rather than whenever its turn comes up; this increases the number of decisions it must make, further complicating traditional approaches. These difficulties led the researchers at DeepMind to use many of the biologically inspired techniques described previously. In some experiments, the AI used the raw camera feed from the game to work out the locations of the troops in the match, much as we do with our eyes, while still achieving expert-level gameplay.
DeepMind's AlphaStar in action. Notice that the game is real time, and the AI needs to shift its attention constantly and build a model of the world around it (the map). Source
The AI DeepMind built here seems to give satisfying answers to all of Chalmers' easy problems; after all, we can define its operations mathematically, so we have complete functional explanations. It's clearly reacting to environmental stimuli, integrating information into a "state" of the game, and "reporting" its internal states. The AI also feels closer to conscious for reasons not on Chalmers' list. It acts in real time, just like conscious creatures do. It works with imperfect information, just as animals predict future outcomes without knowing everything about the state of the world. Its objectives may differ from a living creature's, but in many critical ways it doesn't seem so different. AlphaStar thus provides a useful model for bridging the easy problems and the hard problem: it suggests we can answer the easy questions without getting any clue as to the answer to the hard one.
However, I think most people would object to the idea that this AI is conscious. Some might say there is no evidence the AI is self-conscious, and refuse to grant it conscious status on those grounds, but this shifts the goalposts too far. What is meant by self-awareness here is too vague — are young children self-aware? We know they have conscious experiences: many of us have memories from ages 3–4, too young for metacognitive abilities, but still conscious in the sense we're concerned with. The AI's inability to express self-consciousness to us isn't grounds for thinking it isn't conscious.
Others may object when they look at this AI, or hear a technical description of what it does, because they don't feel like it has qualia. If we could put qualia on scientific footing, could we decide whether AlphaStar is conscious?
Let’s take a step back before we try to decipher if AlphaStar has qualia and should be considered conscious. Before we come to any conclusions about an AI, we should be sure why we think other people are conscious at all. After all, if you can’t be sure that anything other than yourself is conscious, there’s not much hope in deciding whether or not AlphaStar is conscious. The idea of a being that looks and behaves like a normal human being, but has no first-person experience is known as a philosophical zombie (or p-zombie). How do we know that others aren’t p-zombies? Verbal reports aside, the logic goes something like this: “You and I are made up of similar things — we both have brains for instance — and I feel like I have some subjective experience, so you must as well”.
This explanation isn't ideal. It would be better if we could point to something specific and say, "Look, this dog has this neuronal pattern, so it must be conscious!" and then — with a monitor or a headset — view the dog's consciousness for yourself. Unfortunately, as I've mentioned, no such location for consciousness has been found in the brain. But imagine there were a machine that could scan someone's brain and show you their visual experience. Then, with both of you looking at the same thing, you could scan your brain, they could scan theirs, and you could compare the results. When something similar came out of the machine, you could be more confident the other person wasn't a p-zombie. This method doesn't completely alleviate the p-zombie fear — you'd be scanning a correlate of qualia, not directly observing qualia (there's no way to do that) — but I think it's as good a third-person observation as you could hope to get.
This brain-scanning experiment isn't as far away as you might imagine. Researchers out of Kyoto University have had some success generating images of what subjects are seeing or imagining from fMRI scans. If you're unfamiliar, fMRI produces maps of blood flow in the brain; because blood flow tracks neuronal activity, it's a good proxy for electrical activity. The diagram below shows the result for a subject who viewed a picture of a cheetah under an fMRI while a machine learning model read the scan and output what the subject was seeing based on that fMRI data alone (I've included some technical details in the endnotes for those interested). The resulting image is noisy, but you can make out features such as the eyes and the rounded shape of the cheetah.
The model translates scans from an fMRI into images of the subject's perceptual experience. Source
Showing that others aren't p-zombies is no small task, but most of us are willing to accept that others are conscious. Personally, I think this is as convincing an experiment as you could get for this phenomenon. Here's a comparison of different subjects' reconstructed images — they're quite similar!
Different subjects' reconstructed conscious experiences of 6 different images. Note that these images are in the ImageNet dataset used to train the model, so the reconstructions are a bit higher quality than for arbitrary images (though arbitrary images work too). Source
This gives us some evidence that people seeing the same image really do see similar things in their mind's eye. Going back to the Inverted Spectrum question from before, one way we could imagine detecting an Inverted Spectrum is by exposing two subjects to the same stimulus and finding that their reconstructed images come out with inverted colors. This doesn't close the door on the Inverted Spectrum — the contents of one's consciousness are still private — but it gives us more confidence that our intuition about p-zombies is correct.
What if we ran the same experiment on AlphaStar? I'm unaware of anyone who has done this, but I'd imagine we'd get results similar to Kyoto University's. The experiment above works by first correlating fMRI data with the activations of an artificial neural network, then reconstructing an image from those activations. If correlating fMRI data from a real human with an artificial neural network works, it's easy to imagine that correlating two artificial neural networks would work as well, and that our image reconstruction would succeed. Even so, I don't think this would convince many people that AlphaStar is conscious. The Kyoto experiment only convinces us the other person has qualia because we also know that person is made of similar material to us. AlphaStar is materially dissimilar to us, so we can't reason about its consciousness by comparing ourselves to it. It seems we can only be sure that things are conscious by analogy. (If you're tempted to reach for an intelligence test like the famous Turing Test here and extend it to consciousness, you're also out of luck.)
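For what it's worth, the first step of that pipeline — learning a map between one system's activity and another's features — is simple enough to sketch. Everything below is synthetic data invented for illustration (it is not the Kyoto group's actual method, which is far more involved); it only demonstrates the premise the reconstruction approach rests on: that a least-squares map can recover one representation from a linearly related one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each row is one stimulus. `voxels` stands in for
# fMRI voxel activity, `features` for a vision network's activations
# on the same stimuli. We make the voxels a noisy linear function of
# the features, which is the working assumption of linear decoding.
n_stimuli, n_voxels, n_features = 200, 50, 10
features = rng.normal(size=(n_stimuli, n_features))
true_map = rng.normal(size=(n_features, n_voxels))
voxels = features @ true_map + 0.1 * rng.normal(size=(n_stimuli, n_voxels))

# Step 1: fit a linear decoder voxels -> features by least squares.
W, *_ = np.linalg.lstsq(voxels, features, rcond=None)

# Step 2: for a previously unseen scan, decode the predicted features.
# (A reconstruction method would then search for an image whose
# network features match these decoded values.)
new_features = rng.normal(size=(1, n_features))
new_scan = new_features @ true_map
decoded = new_scan @ W

# How well did decoding recover the held-out features?
similarity = np.corrcoef(decoded.ravel(), new_features.ravel())[0, 1]
print(similarity)
```

The same scaffolding would apply to the AlphaStar thought experiment: replace `voxels` with one network's activations and `features` with another's, and the regression step goes through unchanged — which is why I'd expect the reconstruction itself to work, whatever we conclude about consciousness.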
Hunting for Neural Correlates
An AI is unlikely to be materially similar to us anytime soon. Is there anything we could do to convince ourselves that other people or AI systems are conscious, other than arguing by analogy? It's often proposed that if we could locate a common physical substrate or "neural correlate" of consciousness in humans, we could check whether AlphaStar had that correlate (or some equivalent of it) and decide whether it was conscious that way.
One of the most confusing aspects of qualia is how they allow for illusions; a neural correlate for consciousness would have to give us grounding for figuring out when illusions are occurring in the brain.
The white circles at the intersections of this grid clearly don't correspond to reality, and they shift based on where on the grid we direct our attention. Our neural correlate must also explain the illusions a subject is seeing.
To explain away illusions, it's been posited that there is such a thing as *mental paint*: a figment of sorts that the brain creates in order to provide an overlay on reality. In illusions, perceptual errors at the cellular level bubble up to qualia by causing the brain to "paint" incorrectly. The theory is a posit — no one has found anything like mental paint in the brain. Ned Block, a prominent philosopher arguing in favor of mental paint, bases his case on several psychological experiments in which a subject's visual experience of an object changes with the attention paid to it. Because there's no physical location for mental paint, the argument puts Block in line with dualists — people who believe that mental properties are non-physical — like Chalmers, since he sees no obvious way to break mental paint down into physical events.
Despite this, Block considers himself a physicalist: he believes there is an "easy problem" explanation for mental paint, but he has no neurological account of its existence. In a recent paper, Dennett attempted to rescue Block's defense of mental paint by rebranding it as *virtual paint*, implying that it is a product of information processing in the brain — similar to how *virtual machines* work in computer science — and not something outside the physical world. Virtual paint is tempting because, should it exist, it gives us something that could be experimentally verified: find whatever neural correlate corresponds to virtual paint, and boom, we have some explanation of visual qualia. But virtual paint isn't special; any functional explanation of visual qualia might tell us whether AlphaStar is conscious by giving us a specific pattern to look for in its neural activations. Dennett sees virtual paint as a replacement for qualia in general, but questions whether we'll understand its downstream effects even if it is found.
One might hope that studying conditions in which people or animals lack the conscious experience of vision, then comparing their brains (via fMRI or other methods) to those who have it, would yield a neural correlate akin to virtual paint. Blindsight is a particularly interesting condition in which humans or animals who have received damage to the primary visual cortex, V1, become unable to consciously see a portion of their visual field. Despite their inability to consciously perceive visual stimuli, blindsight patients are still able to guess properties of stimuli, such as motion and shape, when prompted. This edge case indicates that conscious visual experience is not all there is to the classification of visual stimuli, and makes it sound as if V1 acts as a "gateway" to visual experience while animals are awake (though this has been disputed by some researchers). Research in the area looks promising, but conditions like blindsight have yet to yield obvious neural correlates of visual experience.
"Finding" consciousness is a precondition for figuring out its causal role. One could imagine that if something like virtual paint were isolated, first in animals and then in an AI, we could track down what downstream effect virtual paint's neural correlate was having on the system's actions, and find a causal role for qualia through statistical methods. Finding a neural correlate in animals sounds realistic, but I'm doubtful we would be able to find anything similar in an AI using what we find in animals.
It seems like the easy problems shed light on whether different things are conscious, but can this alone get us to an answer to the hard problem?
The Limits of (My) Understanding
When I started learning about philosophy of mind — the branch of philosophy that contains the hard problem — I was sympathetic to Dennett's illusion argument. After all, we know consciousness is a physical phenomenon, so how could it be anything other than a "user interface" brought on by evolutionary necessity? I was sure that experience was something akin to RAM in a computer: temporary, fast storage that occasionally gets moved to long-term storage on a hard disk. Chalmers' framing of the hard problem has changed my mind. Lately, I see Dennett's stance — that the hard problem is wrapped up in the easy problems — as a reframing of the question rather than a refutation of it. What changed when I started taking Chalmers' side was that I began thinking of qualia and the neuronal states that compose them as ontologically different things. My current stance is that I can't buy the idea that qualia and their neural correlates are logically identical, and that makes the hard problem intractable for all the reasons Chalmers and others suggest.
Let me unpack what I mean by "ontologically different." One argument put forth by those who see the hard problem as nonsensical is that qualia and their neural correlates share an identity, so a functional explanation of a neural correlate should suffice as an explanation of the qualia as well. I can't bring myself to agree. Fully fleshing out this point could be the subject of a separate post, but I see a couple of problems with equating first-person experience with third-person observations of some material (neurons, electrical charge, etc.). One issue is that when I say two words reference the same thing, it means more than that those two things are in the same place at the same time; it also means they share the same properties. And I can't help but see the properties of qualia as different from those of neurons or electrochemical charge in the brain.
Let's think about the properties of qualia versus those of neuronal states. When I think about qualia, the first thing that comes to mind is the perspective I see from on a day-to-day basis — whatever I'm looking at right now, for instance. The other thing is a wet, electrical pile of neurons. That pile of neurons may in fact be where my first-person perspective physically resides, but saying those two things are the same strikes me as a sleight of hand, because I can't see their properties as being the same. This is the core of why we can imagine p-zombies: you can imagine brains that work the same way ours do, without qualia. No such dichotomy arises for other physiological concepts that share an identity. Neuronal states cause changes in qualia, sure, but I'm not ready to admit that they're the same thing.
If we’re able to consider qualia as logically different from their neural correlates, they demand an explanation — yet even the best functional explanation of the “easy problems” can’t suffice. To illustrate this, let’s broaden the earlier description of virtual paint and suppose we had a precise mathematical definition of how consciousness exists and what it does or does not do. Suppose we had a description that answers all the easy problems, specifies consciousness’s causal role, and so on. In practice, this could look something like the activations of a neural network, or some equation representing potential states of consciousness, together with a description of the downstream effects this representation has. This would be something like what we talked about earlier with virtual paint, but in this case, I’m imagining a “perfect” functional explanation — something that looks more like it belongs in a physics textbook than correlations found via fMRI data. We could even use the description to provide a mind’s-eye image of what a human being was experiencing, like the Tokyo University study, but more accurate. If our mathematical description ends up limited to living organisms, and we’re able to find a causal role for it, it’s likely that we’d be able to motivate its existence from an evolutionary point of view as well. Yet even with a functional description like the one outlined above, I don’t think we can get to the “purpose” of qualia. Even if you’re able to grasp how each feature of qualia — vision, sound, and so on — comes into being, there doesn’t seem to be anything in a mathematical or verbal description that can explain why you aren’t a p-zombie. In other words, despite explaining qualia from a physical perspective, our “perfect explanation” leaves out a “mental explanation”.
Even if experience is tied to the way our brains utilize information — leaving philosophical zombies physically impossible — qualia are left hanging around as a useless artifact.
In lieu of any widely agreed-upon functional or experiential explanation of consciousness, the intractability of the hard problem has led many to believe that something radical needs to change for science to accommodate the role of conscious experience. Some posit consciousness as a fundamental property, like mass or energy, while others speculate that the answer is tied up in quantum mechanics or information theory (if you’re familiar with panpsychist theories like Integrated Information Theory, those theories and many others fit this description). But suppose our description ends with consciousness as fundamental instead. A description like this one *starts with* consciousness but does not attempt to explain why it exists in the first place. This approach shifts focus away from the hard problem but does not answer it. However, that might be alright. There are certain questions it’s okay to care very little about — for instance, few people think that asking why there are laws of nature is a fruitful discussion to have — and panpsychist theories place qualia in this realm of questioning.
Regardless of the nature of your answers to the easy problems, or your explanation of conscious experience, the hard problem remains immovable. All that changes between explanations is whether or not it remains an interesting question. I think this makes sense. If our experience and perception are a cave we’re inside, asking why that cave is there should be as inexplicable as asking why the totality of its contents exists.
Nearly every modern philosopher has something to say on the topic. It’s remarkable how the hard problem recurs over the course of history. Here’s an excerpt from Leibniz’s Monadology where he effectively anticipates Searle’s famous Chinese Room argument.
Suppose that there be a machine the structure of which produces thinking, feeling, and perceiving; imagine this machine is enlarged but preserving the same proportions, so that you could enter it as if it were a mill. This being supposed, you might visit its inside; but what would you observe there? Nothing but parts which push and move each other, and never anything that could explain perception.
This is more significant than it sounds at first. Historically, AI systems built to play games like chess have had access to the complete “state” of the game (they know where all the pieces are at any given point in time). This greatly simplifies the task of mapping the game to traditional computer-science problems like graph-search algorithms, because with the help of some human-encoded rules the number of scenarios a game like chess could undergo is predictable. Once the possible scenarios of interest have been computed, there’s no new information that could force the computer to quickly reassess its strategy. Basically, it just so happens that computers are inherently more suited to games like chess than StarCraft II. This is why older systems like IBM’s Deep Blue baffled the public at chess in 1996, but similarly built systems failed to succeed at games with hidden information, like poker. It’s also why systems like Deep Blue weren’t useful for other types of video game AI, like first-person shooters, where a convincing AI would act like a human with “imperfect information”. Calling something like Deep Blue “Artificial Intelligence” is somewhat contentious for this reason, but that’s a whole other topic.
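To make the “complete state” point concrete, here’s a minimal sketch of perfect-information game search. It uses a toy game of Nim rather than chess, and it is of course nothing like Deep Blue’s actual engine — just an illustration of how full visibility of the state reduces play to a deterministic tree search:

```python
# Toy perfect-information search: Nim, where players remove 1-3 stones
# and taking the last stone wins. Because the entire state (the stone
# count) is visible, the game tree can be searched exhaustively.

def minimax(stones, maximizing):
    """Return +1 if the maximizing player wins with best play, else -1."""
    if stones == 0:
        # The previous player took the last stone, so the side to move lost.
        return -1 if maximizing else 1
    values = [minimax(stones - take, not maximizing)
              for take in (1, 2, 3) if take <= stones]
    return max(values) if maximizing else min(values)

def best_move(stones):
    # Pick the removal with the best guaranteed outcome for the mover.
    return max((t for t in (1, 2, 3) if t <= stones),
               key=lambda t: minimax(stones - t, maximizing=False))
```

A game with hidden information (an opponent’s poker hand, unscouted StarCraft units) breaks this approach: there is no single known state to recurse on, so the search must instead range over beliefs about possible states.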
For machine learning readers: for me, this was probably the most interesting thing I learned while writing this post. The process the researchers used is a bit involved, but not too difficult to understand. The hallucinogenic-looking image is generated by searching for an arbitrary image that minimizes the difference between the activations of a known convolutional neural network (CNN) and the activations decoded from an fMRI. The fMRI activations are obtained by building another ML model that maps from fMRI data to CNN activations; the dataset to train this model is obtained by showing subjects images from ImageNet while recording their fMRIs, then showing the same images to a CNN (VGG19 in this case) and recording its activations.
An excerpt from the paper on the generative process for finding the perceptual image. Source
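Schematically, the search loop at the heart of the reconstruction looks like the sketch below. To keep it self-contained, the “CNN” is just a fixed random linear map standing in for VGG19, and the decoder’s output is faked from a known image — so this only illustrates the optimization idea, not the study’s actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a pretrained CNN: a fixed linear feature extractor
# mapping a 64-"pixel" image to 16 features. (The study used VGG19.)
W = rng.standard_normal((16, 64))

def features(image):
    return W @ image

# Step 1 (faked here): a decoder trained on (fMRI, CNN-activation) pairs
# predicts the activations evoked by whatever the subject was seeing.
true_image = rng.standard_normal(64)
decoded_activations = features(true_image)  # stand-in for decoder output

# Step 2: search for an image whose CNN features match the decoded
# activations, via gradient descent on the squared feature-space distance.
image = np.zeros(64)
for _ in range(2000):
    residual = features(image) - decoded_activations
    grad = W.T @ residual          # gradient of 0.5 * ||residual||^2
    image -= 0.005 * grad

loss = float(np.sum((features(image) - decoded_activations) ** 2))
```

Because the feature map is many-to-one, many images match the decoded activations equally well; the real method’s natural-image priors (and the CNN’s nonlinearity) are what push the search toward the dream-like reconstructions shown in the paper.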
It’d be really interesting to see if the same images could be reconstructed with models other than CNNs. If the images from different models were much weaker because the encoding from the fMRI data to the new model’s output wasn’t as good, it may be a good indicator that the CNN has a deeper correspondence to a real brain than some arbitrary model.
An acquaintance of mine likened this technique to a side-channel attack on the mind. The ethical implications of such technology are both interesting and terrifying.
There are plenty of thought experiments that attack the Turing Test as a measure of consciousness. Recall that in a Turing Test a computer is deemed intelligent if an onlooker cannot distinguish its responses from human ones in a natural-language conversation. The Blockhead experiment is one of the simplest arguments against the Turing Test as a measure of consciousness. You can imagine storing a bank of sensible responses to every permutation of words in a language used as a conversation starter; then storing a bank of responses to those responses, and so on. Clearly, a machine looking up a response from a bank of responses is not conscious.
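The Blockhead idea is easy to caricature in a few lines — conversation as pure table lookup. The tiny response bank below is invented for illustration; a “real” Blockhead would enumerate every possible conversation prefix, which is the point: it could pass a Turing Test while doing nothing but retrieval.

```python
# A toy Blockhead: canned replies keyed on the exact conversation so far.
# The entries here are made up; a full Blockhead would need an
# astronomically large table covering every possible exchange.
RESPONSES = {
    ("Hello",): "Hi there!",
    ("Hello", "Hi there!", "How are you?"): "Can't complain. You?",
}

def blockhead_reply(conversation):
    """Look up a reply for the conversation prefix; no thinking involved."""
    return RESPONSES.get(tuple(conversation), "Interesting. Tell me more.")
```

However convincing the transcript, every reply is a dictionary hit — which is why lookup-table behavior is taken to show that passing the test doesn’t entail consciousness.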
Interestingly, some patients with serious V1 damage (including those who’ve had this area of the brain surgically removed) are still able to dream visually. Because dreams are still considered qualia, this rules out V1 as an exclusive gateway to the “mind’s eye”.
If we were lucky enough to see the same correlate in an animal and an AI, I could imagine someone putting together an experiment that looks at the outputs of the AI and the activations of the neurons involved in the neural correlate, and finding correlations that way. I think the experiment is more realistic to imagine on an AI than on an animal, though, as the space of outputs of the AI is much more constrained. This is pure speculation, though.
I can imagine finding a neural correlate of consciousness in some distant future AI, but I have a hard time imagining this working for an AI like AlphaStar. If we did find a neural correlate, we’d first have to find it in living things; I think this limitation would make it unlikely to see the correlate in AIs that aren’t heavily biological as well. Some of the algorithms in AIs like AlphaStar are biologically inspired, but they’re based on fairly primitive or simplified representations of the brain. The convolutional neural network I mentioned previously is one such “biologically inspired” algorithm — its functions are based on cells in V1. However, the presence of V1 alone is not thought to yield the conscious experience of vision, so I find the idea of locating such a correlate in an AI to be a stretch.
This paragraph is probably too ambitious, as there’s a whole other set of philosophical problems around identity, but I’m trying my best.
The most bizarre, contentious, and fascinating of these theories I’ve come across is attributed to mathematical physicist Roger Penrose. Penrose suggests that the neurons themselves may be the gateway to consciousness; this stands in opposition to the widely held belief that consciousness is related to the pattern of neuronal activity in the brain. More surprisingly, he proposes that consciousness may play a role in an interpretation of quantum mechanics. If this sounds interesting to you, and you’re ready for a serious amount of computer science, physics, and neurology, I’d recommend you pick up his 1989 book *The Emperor’s New Mind*. Most of the book is spent explaining the requisite background, so even if you’re highly skeptical of the idea that quantum mechanics has anything to do with consciousness, the book still acts as an excellent high-level introduction to technical concepts that are important for philosophy, like Turing machines and the Second Law of Thermodynamics.