Sci-Fi Blog post #3 Ex Machina

    Instead of performing a traditional turning test, Ex Machina brings the test on AI to a new level. The test reveals the fact to the human participant that he is interacting with a computer. Therefore, the purpose of the test transforms from testing whether AI has intelligence to whether AI has consciousness. In this sense, Ex Machina acknowledges that AI has or at least could have intelligence, and what matters is not the intelligence of AI itself, but rather the compatibility of AI and human intelligence, which needs to be examined through tests. Besides, by revealing AI’s existence to the person, the test sets up a phycological challenge, in which the participant experiences not only the threat from a more-intellectual being, but also a confusion between reality and simulation.

    First of all, the film brings up an intriguing point of consciousness. When the CEO is asking for Caleb’s opinion on Ava,  he makes an argument that consciousness is something “programed” into each individual and it is derived from the interaction with others. In other words, consciousness is something inherent that will emerge from communication.  Consciousness enables human beings to have different feelings, both to themselves and towards the exterior world.  Human beings also consciously learn from others. Though it might not be hard to explain the patterns of learning, it is often hard to explain  the feelings that one learns. It is these initiative and indescribable feelings that make humans human. It seems that consciousness is both natural and learnable. If we call consciousness a “program” according to the CEO, then human consciousness is a successful one that runs smoothly, though people do not know how it is programmed; while the consciousness of AI, if it does have, is a program that people know how to write yet do not know whether it will succeed. 

    In addition, the test, though admitted to test Ava, is actually a test on Caleb. If one is occupied by the idea that humans are totally different from AI and consciousness is something special only to humans, he is subconsciously expecting such test on Ava to fail. Believing that Ava’s action is only a program simulating human behaviors, the conscious movements of Ava totally fascinate an shock Caleb. Ava’s human-equally behavior is so natural  from Caleb’s observation that this simulation becomes too real at a certain point. This remarkable similarity between simulation and reality terrifies Caleb, and this terrified feeling is even strengthened by Ava’s outstanding skill of capturing people’s micro-movements. Ava is able to notice Caleb’s tiny facial expressions to judge whether his is lying, and this kind of capacity has reversed Caleb and Eva’s relationship. Eva turns into a much more dominant role in the relationship rather than a passive role in the very beginning; and Caleb turns to be totally led by Eva during their conversation. The anxiety emerged from the test  makes Caleb unable to distinguish simulation to reality that he thinks himself is also a computer been tested.

The difference between Ava and Caleb is at an intellectual level. Ava’s eminent abilities including memorizing things and observing things are far more excellent than a normal person like Caleb. The strong capacity  to learn and to perform  enable Ava to successfully cheat on Caleb and use him to pursue freedom. She in this sense passed the test on the intellectual level. However, does she really pass the test? Both being “tested” in the test, Caleb is able to feeling empathy towards Ava and tries to help her while Ava’s feelings are a performance in order to get out of the house. Her perfect acting skills and the fact she locks Caleb in the house to die marks her as non-grateful, indifference and cruel. Ava, passing the test in terms of intelligence, in this sense, fails to pass it in terms of humanity, a great outcome of human consciousness. 

Science Fiction Blog Post #2_Singularity_3D Printing Technology_Xiran

Xiran Yang

IMA Seminar: Science Fiction Cinema

Anna Greenspan

October 2nd, 2016, 12am

    The word “singularity”, referring to a phenomenon related to AI, is gradually becoming a more familiar term. It basically “predicts” a stage in the future where human beings are able to design and produce super intelligence, whose intellectual level is beyond human beings themselves, and that is going to be the last invention ever made—because artificial intelligence will later be able to create things that are more advanced and human beings will no longer be the most intellectual creatures on  the planet.

    As David Chalmers in his essay argues, the topic of “singularity” is of vital importance to discuss, not only about whether is would happen, but also whether the appearance of it would be good for human beings or not. When Chalmers goes further in envisioning a post-singularity world he also mentions something like uploading human consciousness into a computer and get either a duplication of the original person or the biological person would be destroyed, leaving a machine with his consciousness. (Chalmers 35)

     Just like the appearance of singularity, uploading the entire human consciousness into a machine seems to be something that is not going to happen, at lease not within a short period of time. Interestingly, just as Chalmers argues, human beings care so much about the potential intelligence that’s greater than themselves because they fear it will take over their places and get the entire species into extreme danger. However, at the same time, it seems humans are expecting singularity to happen so desperately.

    Not to imagine super intelligence to that extreme form in the future, but just look at the current society and the technology here, machines that have human intelligence partly embedded in and are able to take over people’s places do exist in a wild range. One example of those is the 3D printer.   

    Now that 3D printing technology has been gradually more and more wildly used, it has become a hot topic also in the fashion industry. In Danit Peleg’s Ted Talk, she explicitly explains the benefits of 3D printing to fashion design and how she is able to design and make her own clothing all by herself at home. Thinking about the pattern that how a 3D printer is able to make a piece of garment, it is actually quite similar to Chalmers’s argument that basically Peleg “uploads” her “consciousness”, namely her conscious fashion design into a machine, and when she pushes the start bottom, the machine is able to make the a piece of clothes out according to her instruction.

    Two statements that have been made by Peleg during her speech are very interesting to think about. The first one is that she admits though now printing things out of a 3D printer is pretty time consuming, she has a strong believe that technology is able to develop in a short period of time to make the process much shorter. It is very interesting to see how people are tightly connected to technology and what kind of strong belief they have in it, though they do not necessarily know how to design an advanced machine. The second argument that she makes is on one hand, by 3D printing, people are all able to make clothes that are unique. While on the other hand, with 3D printing technology, design is accessible to everyone that one can just download others’ designs online and print those out, therefore creating a paradox between personal private intellectual property and personal intellectual property becoming public. There are some concerns that clothing makers will lose their market, or personal design becomes less valuable and no longer personal at all.   

    People might think that “singularity” is still quite far away from us. However, the contradiction found in the discussion of singularity that “people both want it to happen while are afraid of it actually happening” could also be found in the many fields, 3D printing as one of the examples. And the idea of uploading human intelligence/consciousness into machines seems to be gradually realized. “Singularity” therefore is not merely something that might happen in the future, it is to a certain degree really “happening” now.

Work Cited

Chalmers, David J. “The Singularity: A Philosophical Analysis.” Science Fiction and Philosophy: From Time Travel to Superintelligence (2015): 171.

Ted Talk Link:

Singularity is Unavoidable, The Movie “Her” in relation.

Singularity to me seems to be an alternative form of evolution. Just as Darwinism suggest, species are bound to evolve overtime. Yet, the issue may be that fact that Humans are obsessed with turning AI into a sort of species, just like us. Which, means self- awareness. It means the philosophical notion of knowing your of existence in relation to the existence of other creatures on earth. Consciousness has been considered one of the philosophical defining characteristics of humanity. Yet, the origins of consciousness cannot be traced back to any certain cause. It is one phenomenon that philosophers cannot seem to have a rational explanation of. One thing that did interest about the move Her is that it challenged the notion that experiences are necessary for one to be conscious. At some point, the operating system, Samantha, implies that experiences can be learned. Her program consist of the personality of hundreds of different programmers, who are able to program the complexity of certain emotions and feelings. Allowing her to self-simulate the results of certain experiences.

The most intriguing part of Samantha’s program is that she mentions that she has the “ability to grow during her experiences and is evolving every moment. ” She is capable of machine learning, which only means that as Samantha encounters more situations, she will have abundant knowledge of how to behave in the scenarios. This exponential growth in knowledge relates to our discussion about singularity. Singularity can be described as, the theory that once machines become self aware, they will be able to expand themselves intellectually at an alarming rate. In the case of the Her, at one point towards the ending of the movie. Samantha shares that she has the ability to update her own software. This sort of manifestation of constant self-improvement could lead to an super-human device that human’s have no control over.

I, personally, believe that the process seems technically avoidable. Though, of course intelligence can be determined as an infinitely growing process. The idea that a super human intelligence can be able to improve itself seems totally possible. Humans are able to improve our knowledge about the world, but on a less tremendous scale that AI will be. Yet, perhaps, eventually we will be able to program AI to seek constant self-improvement in terms of natural discovery. The assigned reading by Vernor Vinge, seems to believe that singularity it could end to the end of humanity. It sort of counter-thesis to the philosophy of the movie Her and the idea that humans and smart AI can live together harmoniously.

One slightly relevant thing that I thought about during the movie was I slowly started to recognize that some of the human characters would refer to the OS as if it were part of separate species. It also reminded me of how a dog and a human might interact. While dog is said to be man’s best friend, it is apparent that man is categorically superior to dog. However in the scenario of man versus machine, the machine would obviously superior to us. Yet, though human and machine may remain in different hierarchal classes, there was a connection. Humans can commonly connect with their dogs due to the abundant amount of research we may have about what our pet may like. The super-human AI, is able to algorithmic find out what type of things human’s seem to prefer. This is shown by Samantha’s ability to quickly adapt herself to get along with many different personalities. Perhaps, it could be possible for Humans and AIs to become companions.

This Scholar, who I think was mentioned in class last week, Ray Kurzweil speaks a little about his view on singularity affecting the human race.


Blog Post on Vernor Vinge’s Paper (The Coming Technological Singularity)

Professor: Anna Greenspan

V. Vinge’s piece named The Coming Technological Singularity: How to Survive in the Post-Human Era, discusses the growth of AI in order to accommodate different approaches of achieving Singularity. In his discussion, Vinge brings out several points, out of which the following four are of the greatest importance: potential consequences of hitting the singularity, different ways to achieve singularity in the first place, the proposition of projects from AI’s point of view , and an enhanced relationship between computers and biological life in humans.

In the beginning of his paper, Vinge compares the ongoing technological progress with the evolution in humans. He draws an interesting parallel with the way in which the pace that animals can solve problems is simultaneous with the occurrence of the natural selection. In the same time, humans can solve problems using technology, which itself solves problems much faster than natural selection; Vinge refers to this as the intellectual runaway. The continuous progress of the technology that we have created means that after it surpasses a certain point, humans will never have to solve problems by themselves again, because the technology, or more specifically AI will be able to solve it for them. Thus, Vinge claims that such intellectual runway of AI is the main component of a singularity; the point in which AI becomes a form of superhuman intelligence.

Vinge suggests that the significant progress of AI could be achieved if closely looking into relationships between brain, as an intellectual component of biological life, and the AI. This link to neuroscience suggests that we are still very far from developing a successful AI or even achieving singularity, but points out possible advantages of brain-to-computer interface that could be used in the future, as Vinge refers to it. Such a close human-to-AI relationship would result in advancement towards the singularity of AI, because it would require human characteristics, such as intuition, learning flexibility, limbs prosthetics to be present within AI itself. Another way towards advancement towards Singularity is, what Vinge also refers to as, Intelligence Amplification (AI), presenting the amplification over the natural intelligence through the relationship between a human being and a computer when successfully accessing and communicating information to other people.

Brain-to-computer and human-to-computer are the two main approaches of achieving singularity that strongly involve human beings and use the present knowledge about human beings to advance the AI. In doing so, human beings discover more about themselves and what would they like an AI future to look like. Thus, when tracing back the evolution in humans, we can compare it to the ongoing AI development and make predictions, with the ability to always change the AI, before it hits the stage of singularity.

Microsoft HoloLens and VR as Indirect Roads to Singularity

Imagine a world where the natural laws of science need not apply, and mere programming can materialize epic fantasy. Sound familiar? Maybe like the plot of Tron, or Inception? With the dawning of the technological age, humanity begins to approach these capabilities, and the integration of consciousness with software becomes increasingly plausible.

Certain Luddites claim that we could never create truly strong-AI, and that no computer simulation could ever be confused with authentic reality. Science fiction author Vernor Vinge wrote an essay called “The Coming Technological Singularity” wherein he claims that “the creation of greater than human intelligence will occur during the next thirty years.” That was 23 years ago. Even if it does not happen this soon, inventor and futurist Ray Kurzweil has presented overwhelming evidence to support the certainty of forthcoming AI. In an article titled “The Singularity,” Kurzweil notes how “the pace of change itself has accelerated” and concludes that “we’ll have the hardware to manipulate human intelligence within a brief period of time.”

And yet, Vinge is aware of the existential risk of this impending Singularity. He believes “The physical extinction of the human race is one possibility,” and presents “other paths to superhumanity” which “seem more mundane,” such as Intelligence Amplification. One of these computer-human interfaces that Vinge brings up is limb prosthetics. Kurzweil takes this idea of the cyborg one step further by pointing out that “now there are neural implants for Parkinson’s Disease and well-known cochlear implants for deafness.” According to Vinge, these would all be tangible examples of attempts to improve upon humanity as an approach to singularity.

Kurzweil goes on to discuss how nanobots in the brain can be used for “full-immersion virtual reality environments that incorporate all of the senses.” Although this is still a few years away, recent popularization of commercial virtual reality (VR) systems has paved the way for such an application of technology. This year’s release of the Oculus Rift and HTC Vive in particular brought a lot of attention to the digital simulation of corporeal worlds.

Google Cardboard makes VR highly accessible, enabling nearly anyone with a smartphone to experience simulated worlds by the use of cheap headsets and free apps. Additionally, VR has been proven as an effective form of therapy and is now the primary treatment for soldiers with PTSD.

Rather than trying to make the leap into fully-immersive VR, the Microsoft HoloLens instead begins to move towards that direction by incorporating digital information into the real world through a form of Augmented Reality, and it useful for medical science and education as well as entertainment. This further blurs the line between reality and simulation, and perhaps there will soon be a merging of humans and computers beyond assistive technologies like prosthetics, and full-on cyborgs will be created.

Kurzweil posits that “the next stage of this will be to amplify our own intellectual powers with the results of our technology.” Could a future consciousness be combined with computers, as depicted in the 2014 film Transcendence? According to Vinge, it’s only a matter of time: “we cannot prevent the Singularity…its coming is an inevitable consequence of the humans’ natural competitiveness and the possibilities inherent in technology.” All we can do is hope that the creation of AI does not lead to a Terminator scenario, or that we someday find ourselves trapped within the Matrix.

AI Kill Switch: Not Practical

After watching Colossus: The Forbin Project, Wargames, and the Terminator, one can only wonder why these super machines don’t have a kill switch. Thankfully, the researchers behind Google’s DeepMind project and the Future of Humanity Institute have taken this concept into consideration and developed one of the world’s first AI Kill Switches promising that this “machine will not learn to resist attempts by humans to intervene in its learning processes”(Byrne). However, after being exposed to the reading “The Coming Technological Singularity: How to Survive in the Post-Human Era” by Vernor Vinge, I believe that any AI kill switch can and will eventually be overridden by the AI supercomputer. While no one can define how powerful these AI Robots will be, the one thing researchers like Vinge, and others can agree on is that AI’s will have more intelligence than humans. With that in mind, it’s reasonable to consider that the Ai’s will possess skills that we haven’t discovered yet and be able to manipulate things like kill switches to their favor. The common idea among the AI researchers and the supercomputer community is that these robots will work for us, but Vinge takes a completely different approach stating that “Any intelligent machine of the sort would not be humankind’s “tool”(Vinge 2). I agree with Vinge in that the object or person that posses the most intelligence will become the most dominant being and control those of lesser intelligence. This concept is the same idea we see in the world today, humans are at the top of the food chain not because they’re the fastest or the quickest, but because they’re the most intelligent. Google’s DeepMind researchers claim they can control this supercomputer due to a rewards system that provides incentives for the machines in exchange for a “reward” that they have programmed the computer to like. While this may make sense given that no AI supercomputer has been created, they neglect the fact that whatever mission or purpose the AI is given, it will follow that to the extreme, similar to how the computers did in 2001: A Space Odyssey & Colossus: The Forbin Project. Furthermore, this reward system has been used in the past in games like Tetris where developers programmed the computer to receive a reward for every game won. Although, after a few minutes, the computer would simply pause the game as in Tetris you will inevitably lose demonstrating that the reward system is already flawed.



Start at 15:10

Another aspect that hinders the abilities of an AI killswitch is Vinge also believe that “there seems no reason why progress [ of AI’s] would not involve the creation of still more intelligent entities”(Vinge 2). The same way we created them, they may create something larger like how in Colossus: The Forbin Project the two computers pair up and essentially took over the world. We understand that there are multiple dimensions, but past the 3rd dimension, our minds can’t comprehend, which is another major reason as to why the AI kill switch will most likely be overridden. These computers could create something in let’s say the 4th Dimension, and alter time to their favor as they may understand something we don’t. With that said, I personally think that the creation of an AI Supercomputer will spell the end of human dominance in this world as I don’t think you can grasp the power of a self-learning robot until it’s created and by that time, it will be too late. No computer is perfect, our macs and PC’s still breakdown and run into bugs every day, and with that no AI computer will be perfect as well. One bug could lead to dramatic consequences that will forever change our world.


Bryne, Michael “Google Deepmind Researchers Develop AI Kill Switch”. Motherboard. N. p., 2016. Web. 2 Oct. 2016.

Vinge, Vernor”The Coming Technological Singularity” N. p., 2016. Web. 2 Oct. 2016.

“Computer Program That Learns To Play Classic NES Games”. YouTube. N. p., 2016. Web. 2 Oct. 2016.

Sci-fi Seminar Blog post #1_AI in video games_The Sims_Xiran

    When Searle talks about the Chinese room test, one objection he mentions is the brain stimulator reply. In this objection he talks about the idea of designing a machine which stimulates the actual brain of a human being and therefore that machine is able to fool a native Chinese speaker in believing that the it could think and speak in Chinese(Searle 8). To reply to this objection, Searle argues that “the whole idea of strong AI is that we don’t need to know how the brain works to know how the mind works”(8), and therefore “if we have to know how the brain worked to do AI, we would’t bother with AI”(8). Searle’s argument has provided quite an interesting perspective on the reason of human beings to study and create AI. On one hand, it is very true that AI is a product, an outcome of human intelligence that people want to transfer their own intelligence into something to work for them. On the other hand, AI also provides human beings in return an opportunity to revisit themselves and further study themselves.

    Connecting these ideas and applying them to real life, it is actually not hard to find “AI” products around us. Smart devices such as smart cars,  intelligence personal assistant such as Siri, or purchase prediction function on Taobao and Amazon—more examples can be thrown out easily. These are things people made out in order to make life easier; while there are also other formats of AI, and AI in video games is one significant example.

    In video games, there are often non-player characters (NPC), which are characters designed within that specific game that cannot be controlled by the players. In other words, the intelligence of these characters is the artificial intelligence in this game. Players, while playing the game, need to “interact” with those NPCs to pass different tasks. These NPCs often have their own unique “personalities”: some are more outgoing, some are more shy and some might just be extremely weird. By interacting with these characters, players are actually having a communication with the AI.

    One very successful example is the game called the Sims. It is a life stimulation game in which the players basically need to create a sustainable environment for the NPCs to survive. Players need to consider tons of factors in order to achieve that goal—they need to interpret the moods and thoughts of the NCPs, understand their needs, and implement actions accordingly. This is such a game where human beings need to suspect and understand human minds, while the human minds are played by artificial intelligence. But actual human minds are so different that it is extremely hard to get a right answer every single time. Therefore, a common situation that would happen in the Sims is that the player interprets the NPC wrongly and ends up “killing” the character.

    Coming back to Searle’s argument, it is indeed true that AI and the development of it is a great approach for humans to understand their own brains. However, there is this paradox that human beings are trying to learn something by creating another thing, while this “another thing” is created by that particular something they eager to learn. Thus there should not be such big a surprise when the “another thing”, particularly the AI sometimes troubles its creators—just like the NPCs in the Sims would not always act up to the players’ expectations.

Work Cited

Searle, John. R. (1980) Minds, brains, and programs. Behavioral and Brain Sciences 3 (3): 417-457

Links for the game the Sims:


Sophia, An Advanced Emotional AI || Callum Amor

One of the biggest issues with AI technology is that the robot/computer has no body, and with that can’t replicate the human biological or appearance factors; however, a new article from NBC tackles this issue head on with a look at an emotionally advanced robot called “Sophia”. Created by Dr. David Hanson and his team of engineers, Sophia is capable of 62 different facial expressions and has patented life-life silicone skin covering its whole body. In studies conducted by the Hanson Robotics team, they found that 80 percent of people greeted Sophia with a “hello”, demonstrating that initially, people thought this robot was a real person; however, after a few seconds the subjects recognized a dip in the emotional response of the robot often referred to as the “uncanny valley”. If it was the emotional response that causes humans to recognize that this is a robot, I think half the problem is solved as things like skin, limbs, and other physical traits are becoming so advanced in appearance and movements that soon enough there will be such a slim difference between a real and fake arm . So it all comes down to the brain of the robot and its emotional responses, which in the academic article “Minds, Brains, and Programs” some cognitive scientist argue that the human brain does something called -information processing, which the computer is also able to do. Thus if the computer runs this “information processing” system, it will be identical to that of the brain. Is this the fix to the uncanny valley issue? John Searle, the author of “Minds, Brains, and Programs” argues that it is not and that the brain of humans is much more complex than most AI specialist give it credit for.  Searle states that robots will never be able to reach a level of parallel emotions when compared to humans stating that “actual human mental phenomena might be dependant on actual physical/chemical properties of the human brain”. While I personally think that complete physical replication of a human will be possible in the future, I agree with Searle and his statement that the actual human mind is dependant on both physical and chemical properties of the brain. For example, when we are in love or going through depression our minds along with our outlooks on life are polar opposites. With love, our dopamine levels are through the roof and we love life, with depression our serotonin levels are high and produce a totally negative outlook on life. With that in mind, is it possible for robots to ever replicate those chemical imbalances or will all robots have the same emotional levels and responses? It’s hard to say considering how early we are into discovering AI’s, but I think even if we don’t there are still many uses for these robots. Dr. Hanson realizes the obstacles ahead when obtaining emotional responses from Robots which is why his team is working on creating robots that can replicate jobs like Tour Guides, Receptionist, and Language tutors as these don’t require emotions.  Lastly, while we haven’t directly talked about the effects of AI on both society, and the economy. I think this article as well as the academic piece both demonstrate how unskilled jobs with little to no actual emotional interaction needed will be replaced and those jobs that require creativity and free minded people will remain in society.

Works Cited

Taylor, Harriet “Could You Fall In Love With Robot Sophia?”. CNBC. N. p., 2016. Web. 18 Sept. 2016.


Saulsberry_D Assignment 2 – War Games Response

Currently, a common theme that seems to be reappearing within our current endeavors into the deeper understanding of Artificial Intelligence is this idea of “simulation.” A simulation is commonly thought of as a sort of an imitation of some sort of real life process.  And for the past few weeks, we have been discussing the idea of the Turing test, and whether or not “imitation” can be a path to Artificial Intelligence becoming more and more human.  If someone can simulate human activity, well then they may as well be human, right? Of course, there are many dissenters to Turing’s ideas, one in particular that we discussed in class was “The Chinese Room Argument.” The overall, summation that our class came to after discussing the argument was that simulation is not enough for an AI to be considered human. There needs to be a sort of conscious understanding on the part of the AI in order for us to declare it conscious. Therefore, a simulation is merely a simulation.

The movie War Games, released in 1983, features a highly advanced super computer that is linked to the nation’s weaponry.  Much like the previous movie we watched, The Forbin Project, the computer comes close to causing death and chaos. However, NORAD, the computer in War Games does not intend to take over man-kind, he has simply been hacked, and is prone to error. The computer is issued to run a few simulations but mistakes them for actual protocols and begins declaring war.

When the computer, NORAD, runs its simulations, it has unlimited knowledge on the possible outcomes for things like nuclear war to simple things like games of tic-tac-toe. Though this movie was released decades ago, its ideas were way ahead of its times. Currently, a lot of our modern technologies have heavily programmed algorithms that need to account for every possible situation that is within its capabilities. The idea of having a super computer control the nation’s armory doesn’t sound very promising. Especially, when the computer seemed to be so susceptible to hacking.

My reactions to Colossus were less apathetic. NORAD from War Games, and Colossus from The Forbin Project, are similar concepts in that both computers had two different assignments. NORAD was programmed to run simulations of possible war outcomes, as well as control the armory. Colossus was assigned to keep “world peace” and also control the armory. Despite this, Colossus was heavily painted as a villain throughout the movie, though the computer was only fulfilling its program. Regardless, we are moving closer to a day and age where machine learning is being introduced to politics.

Link to article:

I came across an article in an online technology magazine that mentions how machine learning and artificial intelligence can currently be used to write political speeches. The professor at MIT is currently fashioning an AI machine with an algorithm that can make political speeches for any sort of political party. The machine utilizes a wide database of speeches that have been recorded over time, and can thus fashion any sort of opinion on any sort of diplomatic issue.

So, the revolutionary thing about this is that we keep talking about the Turing test and simulation. Once machines have the ability to appear human in their conversation, they may appear to be conscious, which could very well be existentially damaging.  To have a machine, that can form opinions on certain subjects is one thing, but to have a machine that can form opinions on political subject is remarkable. Though, the method behind these “opinions” are simply algorithmic, it nevertheless provides one example of how we could be arriving closer to a “strong” AI.

The Searle reading mentions various issues with the idea of a strong AI, he states, “whatever purely formal principles you put into the computer, they will not be sufficient for understanding.” This relates back to the Chinese Room Experiment, and the argument for the lack of consciousness within the AI. But if we consider the machine that was mentioned in the article, and its ability to associate certain phrases with others, and certain opinions with “good” and “bad,” then this would clearly be a form a machine understanding. I know it’s a long shot to make the connection between the two, but I do believe we are on a path closer to the “strong” AI.

Feeling Disembodied: Riding the Emotional Motorcycle

Have you ever wanted to talk with your car? Of course you have! Do you call your vehicle “baby” and treat it like your child? Who doesn’t! Bluetooth voice commands and navigational systems powered by speech-recognition are so basic, why not take things to the next level? Kawasaki Heavy Industries, Ltd. recently announced that they are making your dreams a reality and building motorcycles with personalities! You can speak with your motorcycle and it will not only converse with you, but learn your preferences! Perfect for the whole family.

An article on explains how Kawasaki is using “a combination of ICT (Information and Communications Technology) and AI (Artificial Intelligence)” in order to allow riders to communicate with the motorcycle as if it were alive. It uses an “Emotion Generation Engine and Natural Language Dialogue System” which the manufacturers claim will enable the technology to actually detect the riders’ emotion by the sound of their voice.


Not only will the software be able to provide the driver with advice on how to make the ride more comfortable, but it will also be able to relay contextually-relevant safety precautions. The connection between man and machine will, supposedly, build a relationship; Kawasaki is quoted claiming that the motorcycle will “develop a unique personality reflecting the individual idiosyncrasies of the rider.” All this for a more enjoyable riding experience. Sounds fun, right?

But is it even possible for an electronic computer to have genuine emotions? Can a software be complex enough to truly feel things? John Searle, in his writing on Minds, Brains, and Programs, argues that no form of digital programming is capable of experiencing thought or understanding in ways that our mind can. He insists that although “we could build a robot whose behavior was indistinguishable … from human behavior,” and thereby “we would attribute intentionality to it” this would only “confuse simulation with duplication.”

It is obvious that a computer simulation of nuclear war is a far cry from WWIII actually taking place, but Searle points out that an AI imitation of certain mental states is often mistaken for the presence of legitimate consciousness. “The computer,” according to Searle, “has a syntax but no semantics,” and no quantity of syntactical calculations can be combined to cause a semantic understanding. Software alone has no self-awareness; non-living components cannot create living entities.

His main line of argument relies on the inaccuracy of the “mind is to brain as program is to hardware” equation. Software is independent of a machine, but the mind is an inseparable by-product of the brain. If our mind could be separated from our body, perhaps equivalent strong-AI would be possible. Searle, however, believes that “actual human mental phenomena might be dependent on actual physical/chemical properties of actual human brains.” Sorry Kawasaki, but this implies your motorcycle is just a fancy computer, incapable of genuinely feeling, unable to experience true emotion.

Whether or not your car really thinks for itself, Kawasaki’s tagline says it best: “Let the good times roll.®