I've long been fascinated by "disruptive technologies"---technological innovations that seem to arise out of nowhere and have sea-changing effects. The Gutenberg printing press is an oft-mentioned example. Perhaps the best recent example is the internet---a technological disrupter that directly affected the course of my life. In 1992 I was working at a car wash and had only a dim concept of what the internet was. (I largely associated it with uninteresting, text-based digital bulletin boards.) Within five years I was employed as a web developer, making graphically rich web sites. I certainly never saw the internet coming, and to my knowledge no pundit in the pre-internet era was loudly proclaiming that the benefits of interconnected computers were just around the corner.
Recently, I've been musing on what I think will be the next batch of disruptive technologies. I suspect that we're on the cusp of seeing shifts in lifestyle and culture of even greater magnitude than those brought about (and still being brought about) by the internet. I'm hardly alone in saying this. It's the gist of Ray Kurzweil's (and others') theory of a technological singularity --- a moment when technology is advancing so quickly that future events become impossible to predict.
My focus for this piece is on four broad areas of human activity that I think will be affected by disruptive technology. They are: making stuff (manufacturing), building stuff (assembly), thinking about making and building stuff (planning), and the arts.
When one contemplates technological disruption, one can conjure up various dystopian scenarios involving Terminator style robots declaring war on humanity or virtual reality environments overtaking our brains. And, who knows? Maybe those scenarios will occur. But they are not what I'm focusing on here; I'm contemplating what I feel will be likely but still disruptive outcomes.
I should be upfront about a few things. I don't claim to be a scientist (unlike members of the band "We Are Scientists"). I don't claim to be a technologist (though I have worked in the computer software field). I'm just an ordinary guy (perhaps unusually good looking) with a certain curiosity about the world and a fondness for imagining what life will be like tomorrow.
Let's take a look...
The human race has been involved in the fine art of making stuff since the dawn of culture. A key difference between humans and other species is our predilection for making and using tools*. The manufacturing of stuff has driven trade and commerce and spurred much of our social interaction and exploration of the world.
* To be clear: other animals use tools, but not to the degree we do.
It's interesting to note that the way we obtain most stuff has remained basically unchanged for at least the past ten thousand years: we purchase it from others. (Before monetary systems we did it via barter and trade.) We simply can't make every item we need, so we turn to individuals who specialize in making or supplying what we seek. Two hundred years ago we would have gone to the blacksmith for horseshoes. Today we go to Wal-Mart for a Cuisinart. We buy things that are manufactured far from our homes.
But what if we could manufacture it all ourselves? That is the promise of 3D printing. 3D printing, as you've probably heard, is the process by which one can "print" various items from raw materials (like plastics, glass, and metals) on a device that can fit in a home or office. What types of items can be printed? Figurines, coffee cups, guitar bodies, shoes, prosthetics, guns, car parts... the list is ever growing. (Here's a Pinterest gallery displaying 3D printed items.) Even items with interlocking parts can be printed, and there's a related technology being developed to enable the printing of food!
On one hand, the promise of this technology sounds terrific. Who wouldn't want the ability to print out a Scrabble set when the nieces and nephews come over, or replace a broken knob on the oven without leaving the house? But the disruptive powers of 3D printing are easy to envision. If we are able to conveniently manufacture things locally, many people employed in the manufacturing sector are out of work. Why have a factory full of people making widgets when folks can just print widgets on demand? For that matter, why have shipping and distribution departments if people are printing things locally?
And what about piracy? When we are printing objects, it's no longer the objects themselves that have value, it's the designs of the objects. With 3D printers, the design is held in a downloadable computer file. If mp3s and digital movies can be pirated, there's little reason to think schematic files will not. I suspect that when it becomes easy and free to download and print stuff, we can expect profound ramifications for the economy. (Ignoring the issue of piracy, it also seems likely that schematics for many useful and entertaining objects will simply be offered for free by charitable or anarchistic designers.)
While 3D printing technology is impressive, there are items that cannot be printed (at least not now or in the near future). Certain objects are complex enough to require assembly. Cars are a good example; even if you print out the parts (there is a car whose parts are printed!), you still need to put them together. Traditionally this "putting stuff together" has been performed by humans. The assembly line has employed a significant percentage of the population over the past hundred-plus years.
That said, we've all anticipated the rise of a robot workforce. I think most of us presume that robots will one day do most manual assembly. It was upon watching the embedded video of Rethink Robotics' Baxter the Robot (click here to see how Baxter works) that I realized that day is closer than we think. Baxter can easily be trained to perform repetitive tasks, and he's affordable, coming in at a little over 20 grand. If he's doing the job of an employee who makes $18,000 a year, it doesn't take long for Baxter to become worth the investment. (He also doesn't need smoke breaks or health insurance.)
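The payback arithmetic here is simple enough to sketch. (A rough back-of-the-envelope calculation: the $22,000 price tag is my reading of "a little over 20 grand," and the zero-upkeep default is a simplifying assumption, not a real cost figure.)

```python
# Back-of-the-envelope payback period for a Baxter-style robot.
# Assumed figures: robot price ~$22,000, replaced worker's wage $18,000/yr.

def payback_years(robot_cost, annual_wage, annual_upkeep=0.0):
    """Years until the robot's purchase price is recouped in saved wages."""
    savings_per_year = annual_wage - annual_upkeep
    return robot_cost / savings_per_year

years = payback_years(robot_cost=22_000, annual_wage=18_000)
print(f"Baxter pays for himself in about {years:.1f} years")
```

Even if you pad the numbers with maintenance costs, the robot wins within a couple of years; that's the whole economic argument in one division.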
Another intriguing use for a repetitive task performing robot: the robot bartender! (But will he patiently listen to you weeping about your girlfriend leaving you?)
So far we're talking about computers and robots taking on rather blue-collar jobs: making things, putting parts together, and so on. What about tasks that require higher levels of thinking? What about jobs that require an element of planning, of applying disparate bits of information? Enabling computers to perform higher-level thinking has been the dream of artificial intelligence designers for some time, but so far their results have lacked a wow factor. I don't think teachers or programmers have a sense that their jobs are in immediate danger.
Part of the challenge in designing AI has been that we don't really understand how the world's greatest thinking machine---the brain---works. But AI architects, cognitive psychologists, and neuroscientists are starting to abstract out the processes the brain uses to make decisions and think. By duplicating those processes on non-biological thinking machines (e.g., computers and software), humans can conceivably make some serious leaps in creating artificial intelligence. For example, a recent Discover article (link goes to article teaser) reports on engineers designing neural computer chips that are modeled on the circuitry of the brain. These chips are substantially more energy efficient than ones in use today and thus hold the promise of increasing available computing power. The article notes that...
Cognitive computers... will weave together inputs from multiple sensory streams, form associations, encode memories, recognize patterns, make predictions and then interpret, perhaps even act---all using far less power than today's machines.
Once fine-tuned, artificially intelligent tools could start to do jobs that require mental heavy lifting. Software could start programming software. Computers could design and execute advertising and political campaigns. Programs could write news articles. (Oh, wait. That's already happening.) White collar workers and academics may join their blue collar brethren in the bread line.
Okay. We may accept that computers will be able to perform relatively brainless, repetitive tasks such as manufacturing and assembly. We can even concede that they can perform more logic-based tasks as long as they are programmed cleverly. But what about the arts? Will computers ever be able to paint? To compose music? To write novels? Some might say it's unlikely, even impossible, given that producing art requires dipping into a mysterious pool of inspiration and creativity---a process largely inaccessible to our conscious thoughts. If we can't convincingly explain to ourselves how we create art, how can we design a computer to do so?
In my view, the arts are not quite that mysterious. I think creating art is much like any other human ability; it involves learning and applying various skills that can be consciously understood. In the realm of novel writing, it's a process of learning the recurring structures of stories and the various techniques for building tension and drama (as well as grammar, vocabulary, and so on). In the world of composing music, it's a matter of understanding the "rules" by which a pleasing melody can be constructed or an intriguing harmony designed. (Some musicians go through their lives unaware these rules exist, but to others they are well known. Jimmy Webb's book "Tunesmith" is a good source for study.) While I know less about the visual arts, I suspect similar, learnable techniques are at work.
Some might say computers will not have the soul to create art. I can only note that computer-programmed music has already earned rave reviews. I'll also point out that lots of human-created music---and art---is rather soulless. To be competitive in the world of music making, computers don't need to write at the level of Mozart any more than humans do. And I suspect the same is true of other art forms.
That said, in the world of the arts, I presume the role of technology will be more supplementary than author-ly*. A program won't write a symphony but rather fill in the blanks for a composer who has sketched one out. Software won't write a screenplay but rather devise a template for a story and perhaps "remind" the human author when she is drifting from the core narrative and losing her dramatic edge.
* One art form I could see computers having a strong effect on is comic books. In video games and CGI-animated movies, a character or scene is defined by various computable dimensions (arm size, skin color, shape, etc.) and then rendered visually. Why not do the same in the world of comics and then render the still panels? Artwork could even be rendered in the style of great comic artists from history. (Imagine new comics done in the style of the late Jack Kirby.) If the reader is reading on a digital device like an iPad (which seems to be the direction comic consumption is headed in), he or she could even switch the style of the artist used to create the panels. ("I'd like this comic to have more of a Steve Ditko look.")
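To make the "computable dimensions plus swappable style" idea concrete, here's a toy sketch. (Everything here is hypothetical and illustrative; no real rendering engine or comics API works this way.)

```python
from dataclasses import dataclass

# Hypothetical sketch: a comic character as a bag of computable dimensions,
# rendered in a swappable artist style. All names are illustrative only.

@dataclass
class Character:
    name: str
    arm_length: float  # arbitrary units
    skin_tone: str
    build: str

def render_panel(character: Character, style: str) -> str:
    """Stand-in for a real renderer: describes the panel it would draw."""
    return f"{character.name} drawn in the style of {style}"

hero = Character("Captain Nimbus", arm_length=0.9, skin_tone="tan", build="broad")
print(render_panel(hero, "Jack Kirby"))
```

Swapping in a different `style` argument is the reader's "I'd like more of a Steve Ditko look" toggle: the character data stays fixed while the presentation layer changes.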
It's a changing world. Stay thirsty my friends.
After writing this article, I discovered this PBS video segment which explores many of the topics I talk about here and provides additional food for thought. Worth the time (about 7 minutes) it takes to watch if you can spare it. And you will once a robot takes your job.
Wil Forbis is a well-known international playboy who lives a fast-paced life attending chic parties, performing feats of derring-do and making love to the world's most beautiful women. Together with his partner, Scrotum-Boy, he is making the world safe for democracy. Email - firstname.lastname@example.org