Cognitive Science Essay


In what way can cognitive science inform issues in the philosophy of mind?

Long before René Descartes wrote his Meditations or John Locke his treatises, humans wondered about themselves and their minds. We can go back as far as the fifth century BC, when Greek philosophers such as Socrates, and later Aristotle, questioned the human mind and behavior. Many Eastern traditions have also built on the idea of searching for the truth about oneself. The Buddha, for example, set out to discover who he truly was and founded Buddhism. Just prior to the Buddha, Mahavira of Jainism had sought enlightenment as well. In this regard, enlightenment was the act of knowing one's true self, perhaps the metaphysical self. There is a long history of inquiry into our own mind, and it is worth asking questions that seem ambiguous and unanswerable. The scientific realm has examined the mind by weighing several historical theories and seeking an underlying account of mental life. Cognitive scientists have explored their own areas to cover the blind spots left by philosophers. Acknowledging the hard problem of consciousness, the claim that subjective experience cannot be fully explained by the biological states of the brain, cognitive scientists have modeled human activity through computational methods. These models not only allow us to build efficient working robots but also to learn about our mental activity and its so-called 'algorithm.'

Consciousness is the central topic in the study of the mind. By definition, consciousness is the mental ability to perceive, recollect, and understand. As described above, the biological and neurological sciences can only go so far before hitting the wall of the hard problem of consciousness. Thus cognitive scientists join the philosophical and biological concepts of consciousness and try to create a coherent theory. For example, Daniel Dennett, a cognitive scientist, advanced a claim that shifts the whole paradigm of consciousness. Contrary to Ned Block's distinction between access consciousness and phenomenal consciousness, which holds that accessed information is different from phenomenal information with its subjective 'me-ness' to it, Dennett argues that phenomenal information is actually an effect of access information rather than its cause. What he means is that consciousness is simply the outcome of several sensory modalities coming together to create a coherent image that we give meaning to. The more skewed that image is, the more salient it becomes to us, as we try to grasp whatever it is we have missed. Consciousness is also shaped by our own cognitive biases and perceptions. That is why it is not inherently within us but rather a result of our interaction with the world. The rubber hand illusion shows how easy it is to feel ownership of a rubber hand that in reality has no connection to our nervous system. This suggests that our perceptions can be flawed, and that these flawed perceptions can give rise to a feeling that was never caused by the internal states of our brain or body.

Andy Clark takes this topic further in depth, explaining how our bodies and environment may shape the way we comprehend and perceive information. He gives examples of how humans have created such extensive infrastructure and economies that they are now being shaped by their own inventions. Our gadgets and technologies are as influential on us as our internal minds. This means we are increasingly part of an environment that we ourselves created. He backs this up with the example of the bluefin tuna, which exploits the vortices created by its own tail to swim at remarkable speeds. The tuna has neither the raw muscle to swim so fast nor any sheer motivation to exceed its limits; rather, through natural selection, it came to use its own body and the surrounding water to produce those motions. The fish is no different from us humans. We have learned to use our bodies so efficiently and passively that we often need not deliberate to carry out our daily tasks. For example, when we walk across changing terrain, we do not consciously compute signals for our bodies to move in a certain manner; rather, our body's sensory receptors passively pick up the right stimuli without our having to think about anything. All this suggests that our cognition is embodied, distributed across the environment and the body. The brain is just one agent of our conscious experience, and there are other agents in the physical world that can produce different states of cognition within us.

Questions that tormented philosophers for centuries have seen real progress within decades at the hands of cognitive scientists. We would have remained stuck at the hard problem had cognitive scientists never found ways to test our mental activity. Surely, biological states affect our thought, but Clark's research suggests that the integration of our biology and environment offers the best prospect of explaining our states of consciousness. Computational neuroscience has grown to support Clark's hypothesis and to test it on robots. The idea that a machine could function without biology struck all philosophers. Up until the late 20th century, many thought that robots could never come close to human cognition. Yet in the 21st century we have robots like ASIMO and BigDog that can mimic human and animal movement. Although higher levels of consciousness are still far from achievable in these robots, Marr's algorithmic level of understanding can take us further. By understanding a machine's task and the steps it takes to perform it, we are able to tweak and manipulate the machine's behavior. Combined with Marr's implementation level, which concerns how those steps are physically realized, we can figure out what changes a robot needs in order to function properly. All robots need a hierarchy of tasks. Like any species, a robot must know what to do when danger approaches and what to do when there is none. Thus, instead of using a single internal program to regulate the robot's behavior, scientists use a multilayer control system that takes inputs from several motor modalities and, when any of these modalities fails, adjusts accordingly. For example, the BigDog robot can recognize when its back legs are slipping on ice and adjust by putting more pressure on its front legs. In these ways, the robot can learn from its own environment. This raises the question: can robots really take up human tasks? Can we create machines that are independent agents of cognition and thought? Most likely not as of today. But research indicates that robots can certainly be used for locomotion and other motor tasks.
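The multilayer control idea described above can be sketched as a simple priority scheme, in which a higher-priority layer overrides a lower one whenever its sensors report trouble. The function names, sensor fields, and threshold below are all hypothetical, chosen only to illustrate the idea; this is a minimal sketch, not BigDog's actual controller.

```python
def default_gait():
    """Lowest layer: keep walking with weight spread evenly across the legs."""
    return {"front_pressure": 0.5, "rear_pressure": 0.5}

def slip_recovery(sensors):
    """Higher layer: if the rear legs are slipping, shift weight forward.

    The 'rear_slip' field and the 0.3 threshold are illustrative assumptions.
    Returning None means 'no override': defer to the layer below.
    """
    if sensors.get("rear_slip", 0.0) > 0.3:
        return {"front_pressure": 0.8, "rear_pressure": 0.2}
    return None

def control_step(sensors):
    """One control tick: a higher layer subsumes (overrides) a lower one."""
    return slip_recovery(sensors) or default_gait()
```

On each tick the robot runs `control_step` on its latest sensor readings: under normal conditions the default gait holds, but once the slip signal crosses the threshold the recovery layer takes over, mirroring how BigDog shifts pressure to its front legs when its rear legs slip.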

In conclusion, cognitive science has come a long way toward answering some of the most intriguing questions about our minds. We now know, thanks to cognitive scientists, that our consciousness is a result of our brain, body, and environment. We are able to create intelligent machines without biology because we can apply Marr's algorithmic level of understanding. Cognition includes a vast range of tasks such as perception, sensation, organization, and so forth. But these do not necessarily require a thinking agent; a robot with the ability to adjust to its present environment can still carry out such tasks. One limitation of robots is that they cannot feel emotions. Emotions influence our consciousness greatly, arguably more than mere cognition does. Whether robots can experience strong emotions may decide whether they can take over human productivity. Then again, that raises the question of whether emotion is even valid in a rational world, a computational world.