Will Robots Really Create a Permanent Underclass?
Ever since science fiction writers first conceived of robots, they have fascinated humans and stirred many fears. One of the most widespread fears of late has been that robots will soon displace humans in most jobs, thus creating a permanent underclass. The most recent voice to add credence to this idea is the Adam Smith Institute’s Sam Bowman on Medium:
Eventually someone may invent a robot that is better than humans at virtually any given job — cheaper to produce and maintain, faster, smarter and better at learning. Before then, we will probably invent similar kinds of robots that are better than a significant portion of the labor force at any given job that they could do.
If a significant portion of the workforce cannot produce anything more efficiently than a robot they will become permanently unemployed. The market cannot be relied upon to ‘work something out’: human labor is an input and markets are about outputs. There are lots of basically-useless inputs that the market does not use much, like toxic waste and whale oil. There will probably come a time when the labor of a large number of human beings is included in this category.
I would argue that the widespread belief, approaching conventional wisdom, that robots are capable of permanently displacing human labor is based on a misunderstanding of what computers (and thus robots) actually do when they are used by their human creators to solve problems.
For most people, the functioning of computers is deeply mysterious. Thus, when computers achieve impressive results, like beating the world champion at chess, it indeed creates the impression that they will soon overtake us in intelligence. However, once we understand what they actually do, it becomes clear that they are incapable of genuinely understanding anything.
Computers rigidly execute the sets of instructions they are programmed with. The designers of a program may miss some of its logical implications, but that does not mean that a computer producing an unexpected result this way is an instance of Skynet suddenly gaining awareness and deciding it had better get rid of its masters. Such a computer would still be blindly implementing its code.
In his famous “Chinese room” thought experiment, philosopher John Searle convincingly demonstrated that even a human mindlessly following instructions does not thereby understand what she is doing. In Searle’s hypothetical set-up, a human is put in a room sealed off from the outside and given instructions on how to respond to Chinese characters passed to her from outside with other Chinese characters. Even if the subject manages to convince the Chinese speakers communicating with her this way that she understands Chinese, it is crystal clear that she is clueless about Chinese. In Searle’s words, she has “syntax but no semantics.” Later, Searle argued that computers do not intrinsically have any syntax either, but this need not concern us here.
Searle’s groundbreaking argument prompted a heated debate and many responses, but none of them seems to have succeeded. One of the most popular, often called the “systems reply,” holds that the analogue of the computer was misidentified: it is not the experimental subject alone who plays this role but the subject coupled with the character-handling instructions. It is sufficient to rebut this objection by considering what would happen if the subject encountered a character, or combination of characters, for which there is no instruction. If the subject coupled with the instructions had a genuine grasp of Chinese, the Chinese speakers outside could use other characters to convey the meaning of the problematic message to her. It is clear, though, that the instructions would be of no help whatsoever here.
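The rule-following at the heart of the thought experiment can be made concrete in a few lines of code. Below is a deliberately crude sketch (the messages are placeholder strings, not real Chinese): a pure lookup table that produces convincing replies for every message it was built for, yet has no resource at all for a message its authors did not anticipate.

```python
# A toy version of the rule book in the Chinese room: a lookup table
# mapping incoming messages to canned replies. The occupant follows it
# blindly; no understanding is involved at any point.
rule_book = {
    "greeting-symbol": "greeting-reply",
    "question-symbol": "question-reply",
}

def room_occupant(message: str) -> str:
    # Apply the instructions mechanically, nothing more.
    return rule_book[message]

print(room_occupant("greeting-symbol"))  # prints "greeting-reply"

# A message the instructions never anticipated: the system has no way
# to work out its meaning, which is the point of the rebuttal above.
try:
    room_occupant("novel-symbol")
except KeyError:
    print("no instruction found")
```

As long as every incoming message is covered by the table, the occupant passes for a Chinese speaker; one uncovered message exposes the absence of any grasp of meaning.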
The implication of this is that robots will never cope with situations that their programmers did not anticipate and take into account in the code. Thus, jobs that even occasionally involve reacting to genuinely new circumstances, where the ability to react to them matters, cannot generally be automated away. It is difficult to compile a detailed list of such jobs, but even seemingly simple ones may involve facing uncommon, high-stakes situations. For instance, I personally witnessed such a situation in a bar in Aix-en-Provence, where a violent brawl erupted outside and some of the brawlers suddenly tried to take refuge in the bar. The bartenders had to decide quickly what to do based on the unique context.
Supporters of the “permanent labor displacement” hypothesis may, however, still claim that handling uncommon, unpredictable situations is not essential for most jobs. It is difficult to conclusively prove this point one way or the other but there is a potentially more important hurdle for robots to overcome before they can displace human labor completely.
In order for a robot to execute a program, it must determine that the relevant elements are in place. If a robot performs the functions of a waiter, for example, it must recognize the objects it is handling, what clients tell it, and so on. One of the most important recent innovations relevant to such recognition has been the advent of neural networks and the associated deep learning techniques.
The precise ways in which neural networks achieve recognition are varied and highly technical, but they all share a basic approach. A neural network is a computer program with successive layers of weights applied to the inputs it has to deal with, together with a procedure for optimizing those weights, based on the examples on which it is “trained,” so as to minimize the recognition failure rate. In image recognition, the inputs are data arrays derived from the pixels constituting images. What a neural network essentially does when confronted with an image, then, is weight its small parts based on its past training.
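To make the “layers of weights plus an optimization procedure” description concrete, here is a minimal sketch of such a network, trained by gradient descent on the toy XOR function. The architecture, task and hyperparameters are illustrative choices of mine, not taken from any real recognition system, which would be vastly larger and trained on pixel arrays.

```python
import numpy as np

rng = np.random.default_rng(0)

# Four training examples of the XOR function, standing in for
# the "examples on which the network is trained"
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two successive layers of weights, randomly initialised
W1 = rng.normal(size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))
b2 = np.zeros(1)

lr = 0.1
for _ in range(10_000):
    # Forward pass: the input flows through the weighted layers
    h = np.tanh(X @ W1 + b1)
    out = h @ W2 + b2

    # Backward pass: nudge every weight to reduce the squared error,
    # i.e. the "recognition failure" on the training examples
    d_out = (out - y) / len(X)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

mse = float(np.mean((out - y) ** 2))
print(f"training error: {mse:.4f}")  # close to zero after training
```

Nothing here resembles understanding what XOR means: the program merely adjusts numbers until its outputs match the examples it was given.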
However successful this approach may be at particular tasks, researchers have shown that it is fundamentally different from how humans form concepts (see the YouTube illustration and the paper), which can result in peculiar errors. It turns out that neural networks sometimes misidentify images they previously identified correctly because of small modifications imperceptible to humans. In addition, images can be deliberately generated that are meaningless to the human eye but are classified by neural networks as familiar objects like guitars, bubbles and peacocks. Since this appears to be an intrinsic feature of the method, it is unclear whether tinkering with ever more sophisticated neural networks can bring the error rate down to levels acceptable in most contexts.
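The fragility described above can be illustrated with the standard linear-model intuition behind such adversarial examples: because a classifier’s score sums contributions from thousands of input components, a per-pixel change far too small to notice can add up to a large change in the score. The sketch below is an assumption-laden toy, using a random linear “classifier” rather than a real deep network, but the additive mechanism is the same.

```python
import numpy as np

rng = np.random.default_rng(1)

d = 10_000                    # roughly the pixel count of a small image
w = rng.normal(size=d)        # weights of a hypothetical trained linear classifier
x = rng.normal(size=d)        # an input, classified by the sign of the score

score = w @ x

# Shift every component a tiny, equal amount *against* the decision:
# the shifts align with the weights, so they accumulate across all
# 10,000 components into a change large enough to flip the sign.
eps = (abs(score) + 1.0) / np.abs(w).sum()
x_adv = x - eps * np.sign(score) * np.sign(w)

print(f"per-pixel change: {eps:.4f}")        # tiny, imperceptible-scale
print(np.sign(score) != np.sign(w @ x_adv))  # prints True: the label flips
```

A human looking at the two inputs side by side would see no difference, yet the classifier’s verdict reverses, which is exactly the kind of gross error the cited papers document.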
Similarly, a recent contest involving deep-learning-based chatbots demonstrated a profound problem with language understanding:
The Winograd Schema Challenge asks computers to make sense of sentences that are ambiguous but usually simple for humans to parse. Disambiguating Winograd Schema sentences requires some common-sense understanding. In the sentence “The city councilmen refused the demonstrators a permit because they feared violence,” it is logically unclear who the word “they” refers to, although humans understand because of the broader context.
The programs entered into the challenge were a little better than random at choosing the correct meaning of sentences. The best two entrants were correct 48 percent of the time, compared to 45 percent if the answers are chosen at random. To be eligible to claim the grand prize of $25,000, entrants would need to achieve at least 90 percent accuracy. The joint best entries came from Quan Liu, a researcher at the University of Science and Technology of China, and Nicos Issak, a researcher from the Open University of Cyprus.
Again, it appears that the problem here is intrinsic to the method. The way that neural networks handle text is fundamentally different from genuine understanding, and no heroic coding effort can guarantee that it is actually possible to radically reduce the error rate.
These deficiencies of even the most advanced recognition techniques matter because, in many contexts, gross errors that humans would never make may cause unacceptable damage, even if a robot is more efficient than humans in routine respects. Consider the example of waiters. Robots may be more efficient than humans in that they will not forget or misremember ordered items, for instance. But if a robot misunderstands a client’s clear request to leave nuts out of a dish because of a severe allergy, it could endanger the client’s life and do irreparable harm to the establishment’s reputation.
All things considered, it seems more plausible that, rather than displacing humans from work en masse, robots and humans will tend to be employed side by side, each specializing in the kinds of things it does better. Instead of the catastrophe so many envisage, this could well allow people to work less, be more creative, and so on.
Picture: Creative Commons JD Hancock
This piece solely expresses the opinion of the author and not necessarily the organization as a whole. Students For Liberty is committed to facilitating a broad dialogue for liberty, representing a variety of opinions. If you’re a student interested in presenting your perspective on this blog, you can submit your own piece to firstname.lastname@example.org.