Google v. Death: Can Google Win?

A FUTURE NEAR YOU?

A young child sits playing with an old Game Boy handheld system. He looks at it quizzically, confused by its antiquity, turning it over in his hands. He shakes it a few times and taps its screen, but the device remains unresponsive. A man in his 50s leans over and pushes the rigid plastic power switch. The switch gives way and the device comes to life, igniting a fuzzy green animated rendition of Super Mario in a Tanooki suit flying across a pixelated, brick-laced landscape of angry piranha plants. The young child looks up at the man beside him in fascination. “What do they call it?” “A Game Boy,” the man replies. “That exact model was given to me by your great-great-grandfather on my 16th birthday.” The child laughs, astonished at the man’s response. “That would make this old thing 115 years old!” The man works a smile. “You’re right. Your great-granddad here is a ripe old 126.” He laughs, pausing to reflect before continuing, “and that’s why we must take care of our things. If we take care of what matters to us, we can have it around for a very, very long time.” The man gazes off into the horizon through the dome of a futuristic city, immersed in a bio-habitat far removed from Earth. He looks down at a hologram of the present date and time: March 23, 2110, 24:00.

GOOGLE’S LATEST MOONSHOT

It sounds like a bizarre science fiction story, but radical life extension is actually Google’s latest venture. Google aims to take care of the thing that matters most to it and to many other entrepreneurs in Silicon Valley: the human condition. By investing in Calico, the latest company in its oddball “Moonshot” program, Google hopes to aid the human condition so dramatically that it plans to overcome death itself. It’s a controversial undertaking, even for a company that has always put itself on the cutting edge and among the far-out. But the program is very serious indeed, and it is steeped in a growing scientific movement that has long been gaining traction in Silicon Valley: the Singularity.

WHAT IS THE SINGULARITY?

The term was popularized by science fiction author Vernor Vinge and later proselytized by inventor and futurist Ray Kurzweil (currently a Director of Engineering at Google). While there is no single, officially recognized definition of the Technological Singularity, there are several key components of what must occur in order to “beat death.” Much of the science behind it comes from the marriage of human biology and technology: in order to “beat death,” Kurzweil and others argue, we must move beyond our traditional biological makeup. So what Google is actually investing in is Transhumanist science, the merging of the human condition with technology. Naturally, there are going to be plenty of skeptics, and that is what this article will explore.

I’ve spoken at length about the Singularity, namely why I am skeptical of the 2045 timeline but still support its merits and goals. I can think of nothing more morally respectable than seeking a scientific answer to the afterlife. Understandably, as a result, many in the Singularity movement come off as semi-religious. And in a way they are. They see humans as demigods, destined to solve the mystery of death by overcoming it. Singularitarians are atheists who are terrified of the prospect of death because they do not believe in a Judeo-Christian afterlife. And so their mission is to use science as an answer to religion, which requires preaching in much the same way religious conversion does. And while I have taken issue with how they brand their message when converting moderates to the cause of beating death, I still support their goals. There is no reason not to support such a noble venture.

But is it feasible? What are computer scientists, biologists and artificial intelligence researchers saying about Google and the Singularity?

MEET THE TECHNOLOGY NAYSAYERS

In response to the overwhelming interest in the Singularity, Stanford University’s Department of Computer Science created an informative guide examining, and largely debunking, its plausibility.

The guide notes several prominent scientists who do not believe the Singularity could ever occur, or at least not within our lifetime. The most notable critic is the physicist and forecaster Theodore Modis, whom Kurzweil actually cites in his own books. Modis takes issue with Kurzweil’s understanding of exponential growth as applied to new technology.

A key premise of the Singularity is that technology is growing at an exponential rate. Modis argues, however, that there is no such thing as a pure exponential in nature. Kurzweil agrees, noting that even exponential growth in computing power will eventually flatten into an S-curve when plotted. Where Modis and Kurzweil disagree is on when that will happen.
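To see why the timing is the whole argument, here is a minimal sketch (with purely illustrative numbers, not real chip data) comparing a pure exponential against a logistic S-curve. The two are nearly indistinguishable early on, which is exactly why Kurzweil and Modis can look at the same trend and disagree about when the flattening comes:

```python
import math

# Purely illustrative parameters (assumptions, not real chip data).
r = math.log(2) / 2.0   # growth rate equivalent to doubling every 2 years
t0 = 30.0               # year where the S-curve's "knee" gives way to saturation
L = math.exp(r * t0)    # logistic ceiling, chosen so both curves start near 1

for t in range(0, 61, 10):  # years since the start of the trend
    exponential = math.exp(r * t)
    logistic = L / (1 + math.exp(-r * (t - t0)))  # classic S-curve
    print(f"year {t:2d}: exponential = {exponential:14,.0f}   logistic = {logistic:10,.0f}")

# Early years: the two curves match almost perfectly.
# Late years: the exponential keeps soaring while the logistic saturates at L.
```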

A necessary assumption for the Singularity to occur is the continuation of Moore’s Law. Gordon Moore, who went on to co-found Intel, observed in 1965 that the processing power of chips doubles roughly every two years even as the chips themselves shrink, with costs falling as efficiency improves. This observation became known as Moore’s Law, and it is this exponential trend that Kurzweil relies on in arguing that the Singularity will occur.
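As back-of-the-envelope arithmetic, a fixed doubling period compounds dramatically. A minimal sketch (the date range is just an example):

```python
# Moore's Law as compound doubling (illustrative arithmetic only).
def moore_factor(years: float, doubling_period: float = 2.0) -> float:
    """Total growth factor after `years` with a fixed doubling period."""
    return 2.0 ** (years / doubling_period)

# Example: from Moore's 1965 observation to Calico's launch in 2013.
years = 2013 - 1965
print(f"{years} years of doubling every 2 years -> x{moore_factor(years):,.0f}")
# 48 years -> a factor of roughly 16.8 million
```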

So what the public gets in turn is faster computers and gadgets that keep shrinking in size and price (just think of cell phone improvements over the past decade). The problem is that this period of exponential growth is set to end, perhaps as soon as the 2020s. Silicon Valley must now look beyond silicon, because the laws of physics, thermodynamics in particular, cannot sustain the ever smaller, faster chips required for something like a brain implant or Kurzweil’s brain nanobots. Gordon Moore himself has noted that the exponential will end in the 2020s, and he does not believe in the Technological Singularity as Kurzweil and Google define it.

Modis argues that for Kurzweil’s exponential to continue to soar upward, the knee (the point where an exponential curve begins to soar before eventually flattening into an S-curve) would have to be occurring only now, which is not possible since Moore’s Law is at or near its end. In his paper “The Singularity Myth,” Modis uses several graphical illustrations of comparable growth patterns, including oil reserves and world population, to show his counter-theory in practice. The problem with Modis’s criticism is that, like Kurzweil’s argument that Moore’s Law will sustain itself, it rests entirely on an assumption. Modis fails to consider that Moore’s Law could continue to hold by replacing silicon with other materials.

Calico and Google are assuredly investigating this possibility, and it is not out of the realm of possibility. Just recently, Intel researchers unveiled the company’s tiny, low-power Quark chips. Beyond that invention, groups of researchers are using the body’s electrical impulses to mimic the electrical flow on a silicon-based chip, giving way to research on biological chipsets. More incredible still is the recent advent of a computer built on carbon nanotube processors. According to a publication in MIT Technology Review, researchers believe this could be the most practical step beyond the limits of silicon, thus extending Moore’s Law. If such an invention holds up, it would support Kurzweil’s assumption over Modis’s.

And so if we can create an alternative to silicon, there is still hope for a Singularity at some point. But this too rests largely on assumption rather than scientific fact.

MEET THE BIOLOGICAL NAYSAYERS

The other part of the Singularity is applying micro-technologies and advanced artificial intelligence to the human condition. Transhumanists, and presumably Google’s Calico, want to use technology to begin to overcome biological limitations. Recent feats in this arena include thought-controlled prostheses and pills containing microchips that provide a crude scan of the body’s internals. By delving further into the biological data behind what causes illness, doctors hope they can devise better cures.

Singularitarians and Transhumanist advocates propose substituting the organic with the inorganic. Humans have been obsessed with immortality since the era of Gilgamesh, but have failed to overcome death because they have focused almost exclusively on the biological side of the puzzle. Transhumanists hope that, as with any other technological problem, all we need is enough processing power (see Moore’s Law) and massive amounts of data. And so the next part of achieving the Singularity is to codify the human experience.

Kurzweil’s most recent book is titled “How to Create a Mind.” He argues that through complex brain nanobots that survey and scan the brain, we will be able to map and codify the human brain like computer code. By understanding the human experience at such a detailed level, we will be able to simulate it. Kurzweil goes as far as arguing for brain-machine uploading: we could create a host or machine (dependent upon strong general artificial intelligence) that replicates the entire consciousness of a real individual, even if that person is no longer living. By simulating the human experience, machines could act as hosts for the human mind. It’s an astronomical long shot by all scientific standards, and it is the part of the Singularity I doubt most.

But what do experts say about its feasibility? Can we codify the brain? Can we at the very least understand the way the body functions by way of technology?

“Absolutely not,” says David J. Linden, a neuroscientist, medical school professor, and chief editor of the Journal of Neurophysiology, who gave an interview to the technology publication Boing Boing on this very subject. Nanobots at the scale Kurzweil suggests (7 microns across) are biologically and physically incapable of navigating the brain, Linden argues. An excerpt from the Boing Boing article shows just how infeasible brain nanobots actually are:

You might imagine the nanobot as a car, something the size of a Volkswagen Beetle. It drives down the road, until it finds something the size of an SUV (a neuron). Here is the first of many problems in Kurzweil’s scenario: The brain is composed of neurons and glial cells—non-neuronal cells that outnumber neurons 10-to-1 and provide metabolic support and slow forms of information processing in the brain. These cells are packed together very tightly, leaving only miniscule gaps between them.

It is easy to look at the left panel of the figure that shows a computer-based reconstruction of the tip of a growing axon in the brain and imagine that there is plenty of space around it. However, the complete view of this same growing axon tip is shown in the panel on the right. This image is made with a transmission electron microscope and it shows how the same growing axon (marked with asterisks) is packed into a dense and complex matrix of tissue containing other neurons and glial cells. The scale bar in the left panel is 0.5 microns long, about 1/160th of the diameter of a human hair. So you can imagine Kurzweil’s brain nanobot, a structure about fourteen times larger in diameter than the scale bar, crashing through this delicate web of living, electrically-active connections.

What’s more, the tiny spaces between these cells are filled not just with salt solution, but with structural cables built of proteins and sugars, which have the important function of conveying signals to and from neighboring cells. So let’s imagine our nanobot-Volkswagen approaching the brain, where it encounters a parking lot of GMC Yukon SUVs stretching as far as the eye can see. The vehicles are all parked in a grid, with only one half-inch between them, and that half-inch is filled with crucial cables hooked to their mechanical systems. (To be accurate, we should picture the lot to be a three-dimensional matrix, a parking lot of SUVs soaring stories into the sky and stretching as far as the eye can see, but you get the idea).

Even if our intrepid nanobot were jet-powered and equipped with a powerful cutting laser, how would it move through the brain and not leave a trail of destruction in its wake?
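The scale arithmetic in that excerpt is easy to verify. A quick sketch using only the numbers quoted above (a 7-micron nanobot, a 0.5-micron scale bar, and a scale bar about 1/160th of a human hair):

```python
# Scale check using the figures quoted in the excerpt above.
nanobot_um = 7.0               # Kurzweil's proposed nanobot diameter, in microns
scale_bar_um = 0.5             # the figure's scale bar, in microns
hair_um = scale_bar_um * 160   # "about 1/160th of the diameter of a human hair"

print(f"human hair diameter  ~ {hair_um:.0f} microns")
print(f"nanobot / scale bar  = {nanobot_um / scale_bar_um:.0f}x")   # Linden's "fourteen times larger"
print(f"nanobot / human hair = {nanobot_um / hair_um:.1%}")         # nearly a tenth of a hair's width
```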

Beyond the present biological impossibility of this scenario, Linden argues, there is the sheer complexity of the human brain itself. Even if there were a scientifically plausible way of scanning the brain, it could take ages to codify and map it and all of its adjacent properties. And even assuming that could eventually be done, a codified brain is still of little consequence for what Kurzweil and mind-upload proponents advocate. Having a map of the brain does not imply that the map is sufficient to replicate and create the human mind. Kurzweil’s argument for mind-uploading rests on highly theoretical science and commits a logical error: it confuses necessary and sufficient conditions. Just because something is necessary to achieve an outcome does not mean it is sufficient on its own to bring that outcome about. To use an analogy: you need a key to start a car, but the key, while necessary, is not sufficient on its own to start it. And needless to say, the human brain is far more complex than the inner workings of an automobile.

THE FUTURE OF CALICO: ON THE PLAUSIBILITY OF THE SINGULARITY AND OVERCOMING DEATH

While it seems plausible that the evolution of technology will be of great significance in extending and aiding human life, many of the claims in favor of the Singularity are highly theoretical at best. While Moore’s Law may well be extended enough to deliver bio-technological augmentation of humans in our lifetime, the grander predictions of the Singularity, such as mind-uploading, are likely to occur only in the extremely distant future, if at all. In theoretical physics nothing is impossible, only improbable to some degree. As it stands now, there is little to suggest we will be able to replicate the human mind.

However, aiding the human body with technology is absolutely possible, to the point where we may witness a “Singularity-lite.” It is this that I, and many others interested in Transhumanist science, believe in. While brain implants and pills that scan for illness seem far off, there is a lot of promising research in this field.

Google’s investment in Calico is just the beginning of what will become a fascinating discussion about the technological and ethical implications of Transhumanism and any Singularity. Even if you seriously doubt a Singularity by 2045, as I do, we must start somewhere, by investigating now how to augment the human condition for the humans of the future. There is no reason not to support Calico. Google itself calls overcoming death a moonshot, but think of all the wonderful inventions that could come from its pursuit. The program could cast the fight against disease and human suffering in a whole new light, and that is something most anyone should appreciate. And if you look at all the inventions NASA gave us as a result of its moonshot, we can only assume much will come from this one as well.

In concluding this very long post, I will leave you with the wisdom of President Kennedy on why we should go to the moon: “We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard.” In the spirit of Kennedy, I believe this is a moonshot everyone should support, no matter how outlandish it may seem.

(Anti)Social Media?

And in a moment, in an instant, it was over. Six years after The Sopranos took our breath away, Breaking Bad did it again. But this finale would be markedly different, because despite watching the show end on our own televisions, in our own intimate space, we shared our viewing experience with millions around the nation at the very same time.

The phenomenon plays out across several similar scenarios, what are essentially cultural events. What makes these events so interesting is that they take on a whole new level of intensity. Social media, namely Twitter, has made sharing opinions and experiences, and relating to one another, easier than ever before. This week’s phenomenon is Breaking Bad. Next week’s may be something else, on a smaller scale. In February, we will watch another Super Bowl. Eventually, as DC does nothing about gun control, we are bound to come together via social media again as a nation to mourn another gun tragedy.

But have we really come together? Does sharing one’s opinion on a social network validate that feeling or experience? Does the even more informal act of retweeting someone’s opinion of an event create any sense of kinship or relationship with that individual? On a deeper level, it cannot possibly do so. It exists, to use Twitter’s slogan, as a way to “start the conversation.” But it’s not just about starting the conversation so much as how we actually converse.

The Breaking Bad finale was a moment that drew together millions, yet those millions would never share an actual word with one another. In fact, most people conversing about Breaking Bad on Twitter aren’t even directly talking to one another. Twitter is incredibly convenient in the way it brings people together, but it is also remarkable for the way that very convenience breeds isolation. In a sense, when we tweet, we are talking to ourselves. We are starting the conversation, but people rarely seem to converse back. We have no way of really knowing what people think of our thoughts, or whether they even read them. After a while it all just becomes noise amid the chaos. I can’t even say I read all the tweets in my timeline, for there’s too little time. And so we become selective about which events we discuss, whom we respond to, and what we engage with online.

Twitter and social media have transcended the traditional perception of space and time. We like to feel that tweeting about an event has fostered some sort of kinship or communal experience, but it has not. We like to think those 75 minutes watching Breaking Bad were spent as if millions were in our living rooms, but they were not. We exist as cyborg-like beings, communicating via cell phones and computerized extensions of ourselves about events from TV finales to national tragedies. This has psychologically altered our perception of space and time.

Of course, further analysis would have us realize that all this social behavior is actually rather anti-social. And yes, I know this point has been made before. Psychologists and the everyman alike swear that disconnecting is a good thing. On that, too, I disagree. As with everything in life, it is about balance. We must learn to correct our perception of space and time and put social media into context for what it actually is.

We have grasped onto these fleeting cultural experiences to try to be more communal, but have in effect reduced the meaningfulness of the events themselves, because we haven’t actually shared any experience with a physical person. That disconnect cannot be overcome, no matter how personal an online or phone exchange gets. Without actually chatting with a co-worker or speaking with a friend, there is no real way to contextualize a relationship or a direct sense of community. The community exists, but it is limited by constraints of space and time that a traditional community does not face.

This is just as true of online friendships. More and more, Twitter, Facebook and other social media have led to a wave of online-only friendships. Message boards, Twitter and other blogging services give us the chance to interact and even really get to know one another. But are we really interacting? At best, we are interacting through a brick wall. All the subtle nuance, body language, and tone of voice is eroded by online-only communication. And this sort of interaction shapes the way we perceive and exchange cultural events as well.

And so when The Sopranos left the air, I remember it. I remember it well because there was no social media to distort my perception of the space and time surrounding the actual event. It was a major cultural event, but one which, in comparison to tonight’s Breaking Bad finale, seems small. But is it small? No, absolutely not. In fact, Vince Gilligan referenced that finale in his appearance on the Talking Bad finale. The reason it seems small is that social media ultimately makes events like tonight’s so much larger than they necessarily are. And when these events happen often enough on this scale, their meaning and sense of importance get distorted.

And so from cultural events to interpersonal communication, social media enhances but also distorts the experience. I love Twitter; in fact, I owe it a great deal of gratitude. It has given me the opportunity to converse with a favorite filmmaker of mine and to showcase my writing. But at the same time, I’ve never met them face to face, so the exchange is limited and can even lead to misunderstandings. I could learn any number of things about an individual from conversation online, but there is so much more to communication and interaction than an online exchange, let alone a 140-character tweet. Much gets lost in translation.

And so I conclude this post with a question: do you really feel social on social media? If you’ve arrived at “no, not really,” follow that question with why. The answer is likely that social media can never replace classic human interaction. We humans are gregarious beings who need contact with one another beyond a virtual screen. As technology further distorts space and time, we must remember to stay grounded. If we spend too much time online, social media can quickly become anti-social media. And that defeats its purpose entirely, for social media exists as a convenient new way to foster classic human contact.