Computer Simulation Theory Can't Tell You Why I Am Laughing

During today’s Istanbul lunch, my friend and I began a heated discussion about the future of humanity. I noted that I firmly believe the future human race will be cybernetic organisms, AKA cyborgs. Modern technology will allow the human brain to extend into the machines we use and in turn will make us “bionic” (thanks to Jeff Gomez’s TED talk for setting my brain in this direction). In a moment of ultimate geek-a-tude, we found ourselves discussing Nick Bostrom’s simulation theory.

Bostrom’s work is based on three assumptions:

  1. It is possible that an advanced civilization could create a computer simulation containing individuals with artificial intelligence.
  2. Such a civilization would likely run many of these simulations, billions of them perhaps, just for fun, for research, or for any number of other reasons.
  3. A simulated individual inside the simulation wouldn’t necessarily know that it is inside a simulation; it is just going about its daily business in what it considers to be the “real world.”

Now, based on these assumptions, Bostrom asks which of the following is more likely to be true: are we the one civilization that develops artificial-intelligence-powered simulations and happens not to be in one itself, or are we one of the many (potentially billions of) simulations currently running? (Remember point 3.)
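The arithmetic behind that question is simple enough to sketch. Here is a minimal, purely illustrative Python snippet; the simulation count is an invented parameter, not anything Bostrom specifies. If one base civilization runs N simulations, each with observers who (per point 3) can’t tell the difference, then a randomly chosen observer sits in base reality with probability 1/(N+1):

    # Toy arithmetic behind the simulation argument.
    # n_simulations is purely hypothetical; the argument only requires
    # that it could be very large.
    n_simulations = 1_000_000_000  # "billions" of simulated worlds

    # One base reality plus n_simulations simulated ones, assumed (for
    # simplicity) to hold comparable numbers of observers each.
    p_base = 1 / (n_simulations + 1)
    p_simulated = n_simulations / (n_simulations + 1)

    print(f"P(base civilization) ~ {p_base:.2e}")          # ~1e-09
    print(f"P(inside a simulation) ~ {p_simulated:.9f}")   # ~0.999999999

Under those (admittedly loaded) assumptions, the odds tilt overwhelmingly toward “simulated,” which is exactly why the question feels so uncomfortable.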

As I discussed this subject, I found myself unable to believe that a machine will ever fully comprehend human emotions and feelings, simply because we as humans are unable to understand them ourselves. Too many times I have entered a situation expecting to react and feel one way, only to be shocked by my emotional response. Day to day, I think that while elements remain somewhat cyclical, the majority of our attitudes and views are largely unpredictable. Billions of dollars are spent globally each year on attempts to understand our feelings through therapy, treatment, and self-discovery; how can we teach a machine to master the application of emotion when it so blatantly eludes us in our daily lives?

Additionally, if we could ever master the creation of synthetic emotions through artificial intelligence, how would we derive the proper weight of memories, previous experience, attachment, and other relational metrics on the recall of a certain emotion? When an individual gets angry, it is rarely the result of a single interaction. For example, if John Doe gets his wallet stolen, steps in a puddle, and forgets his umbrella, he is likely to react differently upon hearing that his assistant will be late to the office than he would after an error-free day. Perhaps this example is too basic, as it does not fully show the web of millions of events in both our short- and long-term history that morph and shape how we respond to stimuli over a lifetime. It also glosses over the seemingly random (I can see the argument that this is all part of a greater pattern) way in which these references are recalled.
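Just to make the weighting problem concrete, here is a deliberately naive sketch of the John Doe example; every event name and weight below is invented for illustration, and nothing about real emotion suggests it composes this linearly:

    # Deliberately naive model: mood as a weighted sum of recent events.
    # All names and weights here are invented for illustration only.
    morning_events = [
        ("wallet stolen",       -0.8),
        ("stepped in a puddle", -0.2),
        ("forgot umbrella",     -0.3),
    ]
    trigger = ("assistant running late", -0.1)

    baseline = 0.0
    bad_morning_mood = baseline + sum(w for _, w in morning_events) + trigger[1]
    clean_day_mood = baseline + trigger[1]

    # The identical trigger lands very differently depending on context;
    # and this toy model still ignores long-term memory and random recall.
    print(f"Reaction after a bad morning: {bad_morning_mood:.1f}")
    print(f"Reaction on an error-free day: {clean_day_mood:.1f}")

The gap between those two numbers is the easy part; the hard part, as argued above, is that the weights themselves shift with millions of prior events and are recalled in a seemingly random order.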

While many of the other roadblocks to pure artificial intelligence (such as goal setting, semantics, natural language processing, reasoning, and decision making) seem attainable with time and focused research, social intelligence seems forever out of our reach. Perhaps I am just waxing on in an attempt to justify my own inability to understand why I feel the way I feel half the time. More likely, it’s that I like to think there’s a piece of life that should and will remain mysterious, exciting, and unpredictably simple.
