In this series of posts:
Carl B. Frey & Michael A. Osborne. The Future of Employment: How Susceptible Are Jobs to Computerization? Technological Forecasting and Social Change. 2017. Published version here. Open access working paper here.
Melanie Arntz, Terry Gregory & Ulrich Zierahn. The Risk of Automation for Jobs in OECD Countries. OECD working paper available here.
David H. Autor. Why Are There Still So Many Jobs? The History and Future of Workplace Automation. Journal of Economic Perspectives. 2015. Published, open access version available here.
Susan F. Lu, Huaxia Rui & Abraham Seidmann. Does Technology Substitute for Nurses? Staffing Decisions in Nursing Homes. Management Science. Forthcoming. Published open access version available here.
I wanted to pick up where we left off with ‘The End of Scarcity (maybe)’ and the discussion that Brynjolfsson & McAfee have about the employment effects of machines (artificial intelligence, robots, etc.). There are (at least) two ways to think about this. The classic way is to think that machines substitute for labor, that machines replace human workers in order to increase productivity. The other way is to think that machines make human workers more efficient and therefore enhance the value of human work, leading to increased demand for human labor. Machines, in this view, complement labor. Of course, the answer to this question is not either substitution or complementarity, but rather what conditions lead to substitution and complementarity respectively. Context matters and the world is complex, but theory (perhaps) helps. There’s lots to read about this, but this initial series is going to just sample some illustrative work.
Frey & Osborne focus on the substitution part, specifically calculating the number of jobs at risk of being replaced by machines. Basically, they take descriptions of 702 occupations from the US Department of Labor and evaluate the degree to which each occupation is automatable (combining subjective ‘eye-balling’ by machine learning experts with an objective classification algorithm). It’s important to note that they focus on nine engineering bottlenecks to automation. These are the nine things that are currently hard to automate: finger dexterity, manual dexterity, working in cramped spaces or awkward positions, originality, fine arts, social perceptiveness, negotiation, persuasion, and assisting and caring for others. If a job requires high levels of dexterity while working in a cramped space (working as a plumber, for instance), then that is a bottleneck – it’s very hard to automate, at least right now.
This allows them to show what share of current jobs is at high, medium and low risk of computerization. It also shows what classes of jobs fall in each category (i.e. what kind of job you want to pick if you don’t want to be replaced – look at the full list in appendix A). They find that 47% of jobs in the US labor market are in the high risk group, 19% in the medium and 33% in the low. In themselves, these are impressive (distressing!) numbers. Their interesting interpretation, I think, is that this really means that 47% are automatable in the short term, 19% in the medium term and 33% in the long term, because things that are hard to automate today (implying low risk of automation) will be easier to automate in the future, with more advanced technology. This carries the further implication that we will see two ‘waves’ of automation over the coming fifty years or so, separated by a plateau (say, two decades) where little further automation happens.
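As a rough illustration of that classification logic – and only an illustration: Frey & Osborne actually train a probabilistic classifier on hand-labelled occupations and US occupational data, and all the scores below are invented – here is a sketch that maps each occupation’s reliance on the nine bottlenecks to a risk bucket, using the paper’s 0.7/0.3 probability cut-offs:

```python
# Toy sketch, NOT Frey & Osborne's actual model. Hypothetical bottleneck
# scores in [0, 1]: higher = more reliant on hard-to-automate skills
# (dexterity, creativity, social intelligence, caring for others).
occupations = {
    "telemarketer": 0.05,
    "bookkeeping clerk": 0.20,
    "construction worker": 0.55,
    "plumber": 0.70,
    "registered nurse": 0.85,
}

def risk_group(bottleneck_score):
    """Map a bottleneck score to a Frey & Osborne-style risk bucket.

    Low reliance on bottlenecks implies a high probability of
    computerization. The 0.7/0.3 cut-offs follow the paper's
    high/medium/low definition; the inverse mapping is illustrative.
    """
    p_automation = 1.0 - bottleneck_score  # crude inverse mapping
    if p_automation > 0.7:
        return "high"
    elif p_automation >= 0.3:
        return "medium"
    else:
        return "low"

for job, score in occupations.items():
    print(f"{job}: {risk_group(score)} risk")
```

The point of the sketch is just the shape of the argument: the risk estimate is driven entirely by how bottleneck-intensive an occupation is assumed to be, which is why the bottleneck list matters so much.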
The OECD report on the risk of automation uses the same method as Frey & Osborne, but with a slight tweak: They take into account that parts of a job, not the entire occupation, could be automated. If you think about construction, parts of the job may be automatable, even if the entire job is not. Casting the concrete that forms part of a wall could be done by a machine in a factory away from the construction site, with Lego-like assembly of the pre-cast elements happening on site. Some of the construction worker’s tasks are therefore automatable, but the construction worker cannot be entirely replaced (right now). There is too much ‘ground-level complexity’ to deal with. You would think that this would yield a higher estimate of automatability than Frey & Osborne get. Take nurses as an example: it’s hard to replace nurses (the occupation) entirely, but it’s relatively easy to automate some tasks involved in nursing work. However, they also use different data on what work involves – they use surveys of people doing the work. When you do this, it turns out that people within the same occupation do different things (an obvious truth to an organization scholar) and that people in ‘high risk of automation’ occupations often do some things that cannot be automated. Bookkeeping clerks, for instance, don’t just do bookkeeping. They also have face-to-face interactions with the people working in their organization. If you put those two methodological choices together, it creates a very different picture of the risk of automation. Averaged across the OECD, the share of jobs at high risk is only 9%*. And the ‘two wave’ future that Frey & Osborne predict actually goes away, becoming instead a ‘one hump’ future with automation kicking in some two decades from now and then gradually eliminating more and more jobs.
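The gap between the two approaches can be sketched with toy numbers (invented here, not taken from the OECD’s survey data; the 70% threshold for ‘high risk’ follows the report’s definition). The key move is that each worker is judged on their own reported task mix rather than on an occupation-wide label:

```python
# Illustrative only: three bookkeeping clerks, each with a different
# (made-up) share of automatable tasks, as a survey might reveal.
workers = {
    "bookkeeping clerk A": 0.90,  # mostly routine data entry
    "bookkeeping clerk B": 0.60,  # lots of face-to-face interaction
    "bookkeeping clerk C": 0.65,
}

# Occupation-level view (Frey & Osborne style): the occupation gets one
# score, here the average, so all three jobs inherit the same label.
occupation_high_risk = sum(workers.values()) / len(workers) > 0.7

# Task-level view (OECD style): each job is judged on its own task mix.
jobs_at_high_risk = [w for w, share in workers.items() if share > 0.7]

print(occupation_high_risk)  # the occupation as a whole crosses 0.7...
print(jobs_at_high_risk)     # ...but only one of the three jobs does
```

In this toy case the occupation-level view puts all three jobs at high risk while the task-level view puts only one of three there – which is the mechanism behind the drop from 47% to 9%.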
All of this is caveated by points made in the report, two of which jump out at me. First, it will not necessarily be worth it to automate certain occupations and tasks – it depends on the relative price of machines and labor. Second, customers may have a preference for human, rather than robotic, work. This is obvious in nursing and caring, but also applies elsewhere. If you look at something banal like coffee, people still seem willing to pay a premium for having a person rather than a machine make their coffee. So in ‘reality’, less than 9% might end up being automated. In a pretty profound way, in other words, automation (observed at the societal level) is tied to decisions and strategy (emergent at the organizational level). That said, of course, the OECD report may be setting the number too low. If you did automate bookkeeping, the interactions that bookkeepers have would go away, and only time would expose the unintended consequences of that (even though we could, in the present, hypothesize about those consequences).
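Both caveats amount to a simple break-even comparison. A minimal sketch with invented figures (the `human_premium` term, standing in for customers’ willingness to pay extra for human work, is my hypothetical addition, not a quantity from the report):

```python
def worth_automating(annual_wage, annual_machine_cost, human_premium=0.0):
    """Return True if replacing the worker with a machine saves money.

    human_premium: hypothetical extra revenue per year that customers
    pay for human-made output (e.g. barista-made coffee), which
    effectively subsidizes keeping the human worker.
    """
    effective_labor_cost = annual_wage - human_premium
    return annual_machine_cost < effective_labor_cost

# A barista with invented numbers: the machine beats the wage alone...
print(worth_automating(30_000, 25_000))                       # True
# ...but not once customers' premium for a human barista counts.
print(worth_automating(30_000, 25_000, human_premium=8_000))  # False
```

Same wage, same machine, opposite decision – which is why a technical automatability estimate is an upper bound on what firms will actually automate.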
But this is only the beginning of the discussion, of course. Both Frey & Osborne and the OECD focus on projections into the future and in doing so have to rely on job descriptions. Admittedly, this makes the whole affair somewhat hypothetical – it’s not real jobs being lost (or not) – but it does give some cause for ‘automation anxiety’. Robots may or may not take our jobs, but it is becoming increasingly clear that adjusting to this and other kinds of (creative) job destruction is really very hard. That last link will take you to a recent paper by Daron Acemoglu and Pascual Restrepo (which we may cover in a later post), showing how robots reduce employment and wages without any real offsetting effects in local labor markets. Those authors look only at a small subset of labor-replacing technology, but the pattern seems broadly consistent with other observations and predictions related to employment in manufacturing (which is falling even as the sector grows, in the US at least) and wage growth (which is stagnating, because both low-wage and well-paid jobs are being taken over by technology). Adjustment is when people who lose jobs in one industry get more training, move to new places and ultimately find new work. Adjustment is what makes creative destruction a socially acceptable part of capitalism. Innovation is destructive and in that sense painful, but it increases the bounty (for the many, over time) and things adjust so that the pain is temporary. But if adjustment is hard, maybe we need to deal better with the destructive part of creative destruction.
*) I think this difference is going to be my go-to case for why you need to read quantitative (and qualitative, for that matter) studies carefully and with much greater reservations than is commonly the case. There are so many forking paths and implicit decisions that shape the outcome, and you can’t always theorize what alternative choices would have meant. This is a great (technical) paper on that issue, and this is a great podcast (accessible to the non-specialist).