Where AI, robots, IoT and the so-called Fourth Industrial Revolution are taking us, and how we should prepare for them, are some of the hottest topics being discussed today. Perhaps the most striking thing about these discussions is how different people's conclusions are.
Some picture a utopia where machines do all work, where all people receive a universal basic income from the revenues machines generate and where, freed from the need to work for wages, all people devote their time to altruism, art and culture. Others picture a dystopia where a tiny elite class uses its control of AI to hoard all the world's wealth and trap everyone else in inescapable poverty. Still others take a broad view that sees minimal disruption beyond adopting new workplace paradigms.
What all sides agree on
All sides agree that AI will take over more work tasks, and that this will affect many current jobs and occupations.
AI's effect is not limited to menial labor, either. Its ability to learn as it encounters new situations and to draw on an almost limitless amount of data in its analysis raises questions about how far it will encroach on knowledge-worker jobs previously considered safe from automation.
The disagreements
Those with a dystopian view point to studies that show a vast number of jobs that could be affected by AI and other emerging technologies. Those studies show that almost all jobs contain tasks that could be done by AI and suggest frightening levels of worldwide job losses, ranging from 47% to 97% of jobs.
Others take a broader outlook. They point out how, for each occupation minimized or rendered obsolete by past disruptions, the new technology created new needs that generated new occupations. Makers of handcrafted goods were displaced by the First Industrial Revolution, but factories provided new jobs, and so on through each successive revolution. Many of today's occupations were not even imagined one or two industrial revolutions ago. Holders of this view therefore suggest that everything will work out, as new occupations arise to provide jobs for those whose work AI renders obsolete.
Still others extrapolate both views into a single, utopian future. They take the figures of near-universal job loss and combine them with the belief that disruptions will work themselves out. But they push the idea of disruptions working themselves out to its extreme: a work-free society in which the pursuit of culture is humankind's universal goal.
The problem with these views
Neither the dystopian view of AI leaving 97% of the population unemployed nor the utopian view of AI bringing worldwide prosperity, peace and harmony is likely.
The dystopian view ignores the fact that such an insulated elite is unsustainable. Without viable markets to which to sell goods or services, such a minuscule upper class would have no ongoing source of income and would ultimately collapse.
On the other hand, the utopian view rehashes the age-old fantasy that technology will be the catalyst that helps humankind overcome its baser nature and evolve into fully actualized human beings. Yet with each new technology, human nature has eventually won out.
At their inceptions, radio, television, computers, cable TV and the internet were each trumpeted as technologies that would bring enhanced communication and understanding between people, or an increased love of the arts and culture. None delivered on those lofty promises. In fact, they often became vehicles that allowed individuals to further isolate themselves and reinforce their basest impulses. AI is no more likely to push humankind to a more highly evolved level than past technologies were.
The most likely reality is the broad view, which suggests that AI disruptions will produce new workplace paradigms. Even this view, however, is lacking. It takes a laissez-faire approach to AI's disruption rather than a proactive one. Let's look at what we can reasonably expect about the path AI will take, so we can determine what actions to take to keep its negative effects to a minimum.
Benefits of AI
A 2017 PwC report, Sizing the prize: What's the real value of AI for your business and how can you capitalise?, predicts that AI could contribute up to $15.7 trillion to the global economy in 2030. To put this in perspective, that amount is more than the current output of China and India combined. Furthermore, less than half of that increase, $6.6 trillion, is likely to come from increased productivity. The other $9.1 trillion should come from increased consumption.
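To make that split explicit, here is a quick back-of-the-envelope sketch in Python. The dollar figures are the ones from the PwC report cited above; the percentage shares are simply derived from them for illustration.

```python
# Back-of-the-envelope breakdown of the PwC figures cited above.
total_uplift = 15.7   # projected AI contribution to global GDP by 2030, in $ trillions
productivity = 6.6    # portion from increased productivity
consumption = 9.1     # portion from increased consumption

for label, value in (("productivity", productivity), ("consumption", consumption)):
    print(f"{label:>12}: ${value:.1f}T ({value / total_uplift:.0%} of the projected uplift)")
```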
The report suggests that initial growth will be driven by automating routine tasks, by improving productivity through augmented intelligence systems made available to employees, and by freeing employees from routine tasks to focus on higher value-adding work.
Whereas most predictions about AI look only as far as automation's effects, PwC's assessment goes further. It assesses AI not only as a cost-cutting tool, but also as a tool to drive the development of new offerings and penetrate new markets. That's where the predicted $9.1 trillion uplift comes in.
It envisions AI enabling higher quality and greater personalization for consumers. With AI and other emerging technologies reducing the time consumers spend on activities that today are routine, such as driving themselves to work, PwC envisions them spending more time on activities that generate more data touchpoints. This should lead to better insights that enable companies to provide products better suited to consumers' desires and, therefore, drive more consumption.
With improved ability to tap into consumer preferences, AI front-runners will be better able to tailor their products to match consumer demands and capture more of the market. Add to this the likelihood that AI will become increasingly commoditized as the technology matures. This would shift the focus away from the technology itself to the need for humans who can develop innovative ways to use it more effectively and in ways that are, at this time, beyond our imagination.
What has driven all past technology disruptions is creativity. When people respond to the gaps that new technologies create, growth occurs, both in the form of new industries and jobs and in helping disrupted organizations emerge from the pack and thrive under new conditions.
That means the ultimate benefit of AI will be realized only through the application of human qualities that AI cannot duplicate. Human creativity and insight will be needed for AI to achieve its full potential. This suggests that future job prospects are not as bleak as the dystopian predictions claim.
How AI will affect jobs in general
The main weakness in predictions of massive, or near universal, job loss to AI is the methodology that most who take that view have used. Their approach has been to assume that if a job contains any tasks that could be automated, that whole job would cease to exist. Only by using that assumption can you arrive at job loss figures of 47% to 97%.
McKinsey's assessment that less than 5% of current occupations are candidates for full automation is more realistic. That is not to say that disruption of current occupations is that limited. McKinsey's report also says that approximately half of all activities done by the current workforce could be automated. It notes, however, that this points not to massive job losses so much as to a workforce that will need to shift its skill sets to fit the new needs of transformed business processes.
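To see why the counting method matters so much, here is a toy illustration in Python. The occupations and automatable-task shares are entirely hypothetical and not drawn from the studies cited; the point is only to show how counting a job as lost whenever any of its tasks is automatable inflates the headline figure, while counting only jobs that are almost fully automatable yields a far smaller one.

```python
# Toy comparison of the two counting methods discussed above.
# Occupations and their automatable-task shares are hypothetical.
occupations = {
    "data entry clerk": 0.90,   # share of the job's tasks that could be automated
    "paralegal":        0.45,
    "nurse":            0.25,
    "sales rep":        0.30,
    "teacher":          0.15,
}

# Method behind the alarming headlines: a job counts as "lost" if it
# contains *any* automatable tasks.
lost_if_any_task = sum(1 for share in occupations.values() if share > 0)

# Closer to McKinsey's framing: only jobs that are almost entirely
# automatable are candidates for full automation.
fully_automatable = sum(1 for share in occupations.values() if share >= 0.90)

# Task-level view: how much of all work activity could be automated.
avg_task_share = sum(occupations.values()) / len(occupations)

print(f"Jobs 'lost' if any task is automatable: {lost_if_any_task} of {len(occupations)}")
print(f"Jobs that are candidates for full automation: {fully_automatable} of {len(occupations)}")
print(f"Share of all work activities that could be automated: {avg_task_share:.0%}")
```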
Studies in Germany, where automation in manufacturing has progressed far faster than in the U.S., found no reduction in overall employment from automation. Rather, they showed a shift in entry-level jobs from the manufacturing sector to the service sector. Unfortunately, they also showed stagnation or even reduction in wages for those who remained in manufacturing, which underscores the need for workers to enhance their skill sets if they want to thrive in an AI-enabled workplace.
Another study showed that manufacturing companies that moved toward automation did not greatly reduce their employee base. They largely offset the loss of production jobs by investing in new jobs in sales and marketing.
AI as a job creator
An example of this is the adoption of ATMs by banks over the past three decades. If any technology seemed poised to kill an entire job category, ATMs fit the description. While the average number of tellers per branch decreased from 20 to 13 between 1988 and 2004, increased automation brought down the cost of operating a branch. It also enabled banks to redeploy personnel into more sales-oriented functions. This reduction in cost and increase in revenues enabled banks to open 43% more branches. As a result, banks were able to employ far more people, including more tellers.
AI is likely to continue this trend. The PwC report mentioned earlier contained an extensive analysis to identify new use cases that AI likely will drive, from near-term use cases that are currently emerging to long-term use cases that are likely to emerge over the next decade. The analysis identified nearly 300 use cases, and PwC's separate, business-segment-by-business-segment analyses promise to go into even greater depth.
At the very least, AI will require two new types of workers. One will focus on thinking creatively to identify and facilitate new ways to develop and deploy AI. The other will build, maintain, operate, and regulate AI and the emerging technologies that accompany it.
So, rather than AI and other emerging technologies replacing existing occupations and jobs, those technologies likely will replace only specific tasks within those jobs. This will free workers to focus on higher-value tasks, as occurred with bank tellers.
The need for new skill sets
This, however, is where the disruption will most likely occur. These higher-value tasks will require workers to enhance their skill sets to keep up with the technology. Some predictions expect a rapid shift: skill sets not considered crucial to many jobs today could make up more than a third of the skill sets considered essential as soon as 2020.
A 2016 KPMG report describes some of the technological skill sets that AI will require:
If youโre building and maintaining robots for transaction processing and repetitive tasks, you need people with strong analytical skills who understand how to translate business rules into logic statements. While a programming background is not required, it does help shorten the learning curve of the new technology and ultimately decreases the payback period for the investment.
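As a concrete, purely hypothetical illustration of translating a business rule into a logic statement, here is a minimal Python sketch of an invoice-approval rule such a transaction-processing robot might apply. The thresholds and field names are invented for the example.

```python
# Hypothetical business rule translated into a logic statement that a
# transaction-processing robot could apply. Thresholds and field names
# are invented for illustration.
def auto_approve(invoice: dict) -> bool:
    """Approve automatically only when every condition of the rule holds."""
    return (
        invoice["amount"] <= 5_000            # small enough to skip manual review
        and invoice["vendor_verified"]        # vendor is already on the approved list
        and invoice["po_number"] is not None  # tied to an existing purchase order
    )

invoice = {"amount": 1_250, "vendor_verified": True, "po_number": "PO-0042"}
print("auto-approved" if auto_approve(invoice) else "route to a human reviewer")
```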
On the other hand, if you are pursuing robots for cognitive technology, you need people with deep subject matter expertise to provide the robot's initial knowledge base, validate that knowledge base over time, and respond to cases when the robot does not know the answer. You will also need people who can codify the robots' knowledge base, which may require some technical expertise depending on the product.
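For the cognitive case, here is an equally minimal sketch of the pattern described above: answers come from an expert-curated knowledge base, and anything the robot does not know is escalated to a person. The knowledge-base entries are invented for illustration.

```python
# Minimal sketch of a cognitive robot backed by an expert-curated
# knowledge base. Entries are invented; unknown questions are escalated.
KNOWLEDGE_BASE = {
    "what is the refund window": "Refunds are accepted within 30 days of purchase.",
    "how do i reset my password": "Use the self-service password portal.",
}

def answer(question: str) -> str:
    key = question.strip().lower().rstrip("?")
    if key in KNOWLEDGE_BASE:
        return KNOWLEDGE_BASE[key]
    # The robot does not know the answer: hand off to a subject matter
    # expert, whose validated response can later be added to the base.
    return "I don't know yet. Escalating this to a human expert."

print(answer("What is the refund window?"))
print(answer("Can I pay by wire transfer?"))
```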
Either kind of robot will require people to set up and maintain the technologyโs infrastructure, identify opportunities for adoption throughout the business, and mitigate risks.
Many of the skill sets needed to keep up with the technology are not technological skills. Instead, they are distinctly human ones.
AI's need for the human touch
Whereas machines shine when it comes to situations that call for data and rules, they come nowhere close to matching humans when it comes to social skills, emotional intelligence, persuasion, collaboration or, perhaps most important of all, creativity.
We saw this clearly in the ATM example. Rather than replacing humans, the technology enabled banks to redeploy humans into jobs that required a distinctly human touch: sales, customer service, collaboration.
It also powered the explosion of new products and services that banks launched during that period. Banks not only deployed more personnel to sell and service these offerings; they also needed people who could envision and develop them.
For all the capabilities that AI is trumpeted as having, it is incapable of determining how best to use its own capabilities. It can mimic human creativity, but it cannot move beyond what humans have equipped it to learn. In other words, humans remain essential to the process of determining how to use AI to best advantage.
Where we go from here
AI will bring neither a utopian nor a dystopian future. It will bring a distinctly human one. It will not bring a future where machines run all things and make all decisions, but one where humans remain fully in charge, directing AI's development.
We must not rush headlong and blindly into this technology, assuming it will automatically work out all its own kinks. As I've described in the first chapter of my book on cyber-kinetic attacks, many mistakes have been made by rushing technology adoption without thoughtfully assessing all aspects of deployment. Mistakes that led to cyber-kinetic impacts in the past will only multiply as we continue adopting ever more advanced technologies.
But with steady human guidance, AI and other emerging technologies can be a stepping stone to a new and higher level of technology development that brings leaders, workers and consumers new levels of utility.
Marin Ivezic
For over 30 years, Marin Ivezic has been protecting critical infrastructure and financial services against cyber, financial crime and regulatory risks posed by complex and emerging technologies.
He has held multiple interim CISO and technology leadership roles in Global 2000 companies.