If you've read the many predictions about the future of AI, you've likely found them to be wildly different. They range from AI spelling doom for humanity, to AI ushering in a Golden Age of peace, harmony and culture, to AI producing barely a blip on society's path toward ever-greater technological achievement.

Those three views – dystopian, utopian and organic – present issues we need to consider as we move deeper into an AI-integrated future. Yet they also contain exaggerations and false assumptions that we need to separate from reality.

The Dystopian View of the AI Future

Those with a dystopian view of emerging technologies point to studies such as the often-quoted 2013 Oxford report on the susceptibility of more than 700 job categories to automation. This report predicts that 47% of U.S. jobs are under threat of automation.

Other predictions are even more dire, forecasting up to 97% future unemployment as a result of AI. All these studies focus on tasks within jobs that AI could do. By assuming that an entire job will be eliminated if AI can perform any of its tasks, those with dystopian views arrive at such frightening job-loss numbers.

The world that those with dystopian views of AI envision features all power consolidated in the hands of a minuscule class of the super-rich, who have seized control of AI and forced the remainder of society into impoverished servitude. It pictures these elites enjoying untold riches and lives of ease.

A second form of the dystopian view advances it to positively apocalyptic status. It suggests that AI will eventually evolve to surpass humankind's abilities in every way, itself becoming the ruling elite that either enslaves or exterminates all humans as inferior and obsolete. Aside from the obvious sci-fi overtones of this view, the idea of such an evolution of AI relies on assumptions about AI's capabilities that we will examine more closely later in this chapter.

False assumptions in dystopian views

For now, let's focus on the idea that massive job losses will create a super-rich elite that forces the vast majority of humanity into poverty. The problem with this view is that it ignores the fact that such an insulated elite is unsustainable. Without a viable market to which it could sell its goods or services, such a minuscule upper class would have no source of income to fuel its ongoing wealth. It would ultimately collapse upon itself.

As for the idea of near-universal job loss, AI professor Toby Walsh tempers such predictions with two examples:

[W]e can pretty much automate the job of an airline pilot today. Indeed, most of the time, a computer is flying your plane. But society is likely to continue to demand the reassurance of having a pilot on board even if they are just reading their iPad most of the time.

As a second example, the Oxford report gives a 94% chance for bicycle repairer to be automated. But it is likely to be very expensive and difficult to automate this job, and therefore uneconomic to do so.

In other words, Walsh suggests in the first example that there are some jobs humans will always feel more comfortable knowing are being done by other humans, even if those doing them merely oversee the automated systems to ensure that they operate properly. And in the second example, he suggests that just because a job could be automated doesn't mean it will always be economically feasible to do so.

Walsh also mentions that the Oxford report gives a 63% chance of the jobs of geoscientists being automated, but he claims that any such automation would only offer geoscientists the opportunity to do more geoscience and less administrative work. He supports his statement with predictions from the U.S. Department of Labor that the number of geoscientists will increase by 10% over the next decade due to increased demand for people with the skills to find more of the earth's resources as known deposits diminish.

What bears consideration in dystopian views

Despite the evidence that shows the dire conclusions of those who promote the dystopian view to be overblown, it would be irresponsible to dismiss the issues they raise. Some of their points, although taken to extremes, are very valid.

There will be job losses, even if they are not as extreme as those with a dystopian view claim. We'll examine that in more detail later in this chapter. Also valid is the warning against rushing into new technologies without adequate forethought about their possible side effects.

AI will produce a significant disruption to society, one we must plan for thoughtfully to reduce the negative effects it will inevitably produce. The more care we put into planning the direction of AI in both our industry's future and our personal futures, the better we will be able to limit its disruption and keep it from coming anywhere near the doom and gloom predicted in the dystopian view.

The Utopian View of the AI Future

The second popular view of AI has it leading humanity into a utopian future. Those who take this view accept the figures of near-universal job loss as not only true, but a cause for celebration. They picture a society in which AI frees humankind from the need to work for a living, thus permitting humanity to pursue the advancement of altruism and culture.

In the world they envision, all work is done by AI-controlled automation. Rather than leading to poverty for those who no longer have jobs, the utopian view sees this as a boon. With no one needing to be paid to produce the world's goods, the profits from goods produced without human input could be distributed equally to all people as a Universal Basic Income (UBI).

This UBI would provide for everyone's basic needs and free people to devote their lives to the betterment of society. The idea assumes that those freed from working for a living would use their time to volunteer to help others or to pursue artistic excellence, thus enhancing civilization.

False assumptions in utopian views

The utopian view of AI bringing worldwide prosperity, peace and harmony rehashes the age-old fantasy that each new form of technology will be the catalyst that enables humankind to overcome its baser nature and evolve into fully actualized human beings. At their inceptions, radio, television, computers, cable TV and the internet each were trumpeted as technologies that would bring enhanced communication and greater understanding between people, or increased love of the arts and culture. Yet, somewhere along the way, each of them failed to deliver those lofty promises. Humankind's baser nature has always co-opted those technologies to serve the lowest common denominator.

Rather than leading to greater understanding of others, they have often become vehicles that help people isolate themselves even further and reaffirm their tendencies toward self-absorption, insensitivity, anger and even violence. Ponder this for a moment: knowing the attitudes and actions that people display in our world today, how many people, if released from the need to work for a living, would respond by seeking ways to better society? Even those who are convinced that they would seek society's greater good would likely find it hard to agree that the masses would spontaneously do the same.

Can AI really surpass human capabilities?

AI is not likely to push humankind to a more highly evolved level any more than any of those other technologies did. Moreover, contrary to the claims of many proponents of both the dystopian and utopian views, AI remains far from matching the full range of human capabilities that both views presuppose.

Those who believe that AI will eventually surpass human intellectual capability look only at AI's ability to speedily process and analyze data. They treat AI's ability to learn from the data it processes as if it were the only element involved in human intelligence. In doing so, they overlook the essential distinction between AI and the human brain.

Any AI system is essentially what we would call, among humans, a savant: someone who possesses far more advanced mental ability in a tightly limited sphere of expertise, at the expense of diminished ability in all other areas. Like a savant, AI systems are designed for a single purpose or a limited set of purposes.

They can retrieve and use the information stored in them more quickly than human brains can, enabling them to surpass grand masters in games like chess or Go that are based on structured rules and probabilities. They fall woefully short of human capability, though, when it comes to applying knowledge of one task to a task that lies outside the scope of their programming.

The human brain, on the other hand, is capable of successfully using its experiences and understanding across an almost unlimited set of situations. By virtue of its multi-use capability, the brain is far more capable of connecting unrelated ideas into a new creation – intuitive leaps of understanding – than an AI system is.

A 150-ton supercomputer can process 93 trillion operations per second; the human brain can process 1 million trillion – staggeringly more. An AI system can be programmed to process and learn from a defined set of data; the human brain not only processes and learns from whatever data it is given, but intuitively incorporates all data to which it is exposed, with no limits on the kind or variety of data that enters a person's sensory range.

Even in storage capacity, an area that AI proponents frequently cite as proof of AI's superiority to the brain, the comparison is not as clear-cut as they suggest. Estimates put the amount of data the brain can store at the equivalent of 2.5 million billion gigabytes. Granted, an AI system is far quicker at retrieving data than the brain is, but raw retrieval speed is offset by two other significant advantages that the brain holds:

  • The data that the brain stores is far richer than what a digital system stores. It can include any sights, sounds, sensations, smells or emotions related to a piece of data – and the tools to creatively reshape and connect them in different forms.
  • The brain, with access to such an enormous and rich store of memories, data and current sensory input, plus the ability to manipulate those elements creatively and intuitively, has an auto-focus feature that locks onto the information most relevant to the current situation and limits the conscious mind's focus to what matters at the moment. It pushes data irrelevant to the current situation into the background so it can deal more efficiently with present needs.

When you look at all the ways the brain is superior to AI, it's clear that AI's computational – and even its machine learning – capabilities, while impressive, leave it far from surpassing those of humans.

The risks in overconfidence in AI

Even some at the forefront of AI, like Elon Musk, founder and CEO of Tesla and SpaceX, have found AI less advanced than they gave it credit for. Musk, confident that the most robot-intensive assembly line in the auto industry would be able to produce 5,000 of his latest model per week, set delivery dates for preordered vehicles accordingly. Despite his most strenuous efforts, however, he could not get the line to produce more than 2,000 per week, and customers were predictably dissatisfied. In response to the delays, he tweeted, "Yes, excessive automation at Tesla was a mistake. To be precise, my mistake. Humans are underrated." Although he continues to approach his problems by trying to improve the automation, his admission is spot-on.

Another reason we should not expect AI to displace humans is the old "garbage in, garbage out" maxim. The judgments that AI systems make are only as accurate as the data fed into them. People need to remain involved to ensure that conclusions reached by AI systems are not based on bad data.

One AI system designed to decide which patients should be hospitalized for pneumonia delivered a startling recommendation. It determined that patients diagnosed as asthmatic were less likely to die from pneumonia than those who were not, and therefore should not be prioritized for hospitalization. This shocked the medical professionals who received the recommendation, because it directly contradicted common medical wisdom about the danger of pneumonia to asthmatic patients.

Statistically, the AI system's recommendation was entirely accurate based on the data fed into it; a smaller percentage of asthmatic patients died than of their non-asthmatic counterparts. But the explanation lay in a piece of data that had not been fed into the system: fewer asthmatic patients died because doctors were much quicker to hospitalize them than non-asthmatic patients. Had the AI recommendation not been checked by doctors with real-life experience of the issue, a deadly policy of not prioritizing asthmatic pneumonia patients for hospitalization would have been adopted.

Again, the superiority of the human brain reveals itself here. The doctors who determined what data should be fed into the AI system possessed such a wide body of knowledge that they didn't even think to include details so basic that they took them for granted as common knowledge. They overlooked this crucial piece of data, and the AI system came back with a recommendation that would have been tragic if people hadn't caught it in time.
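To make the pattern concrete, here is a minimal sketch in Python using invented numbers (not the figures from the actual study). It shows how an omitted variable (in this case, rapid hospitalization) can make asthma look protective to a system that only sees outcomes:

```python
# Hypothetical illustration of the pneumonia example above.
# All numbers are invented for demonstration; they are not from the actual study.

# Each record: (group, hospitalized_quickly, died)
patients = (
    [("asthma", True, False)] * 95
    + [("asthma", True, True)] * 5
    + [("no_asthma", False, False)] * 85
    + [("no_asthma", False, True)] * 15
)

def death_rate(records):
    """Fraction of records whose 'died' flag is True."""
    return sum(1 for *_, died in records if died) / len(records)

asthma = [p for p in patients if p[0] == "asthma"]
no_asthma = [p for p in patients if p[0] == "no_asthma"]

# Looking only at outcomes, asthmatic patients appear to be lower risk ...
print(f"Asthma death rate:     {death_rate(asthma):.0%}")       # 5%
print(f"Non-asthma death rate: {death_rate(no_asthma):.0%}")    # 15%

# ... but the hidden variable explains the gap: every asthmatic patient in this
# toy data set was hospitalized quickly. A system never shown that column would
# "learn" that asthma is protective and deprioritize exactly the wrong patients.
asthma_fast = sum(1 for p in asthma if p[1]) / len(asthma)
print(f"Asthmatics hospitalized quickly: {asthma_fast:.0%}")     # 100%
```

In this toy data, the raw death rates point one way and the confounding variable points the other; only someone who knows to ask about hospitalization speed can interpret the numbers correctly.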

We dare not overestimate the capabilities of AI. It will remain a tool that requires human input and guidance for it to benefit humanity.

The Organic View of the AI Future

That brings us to the view of AI that is perhaps the most tempting to adopt: the organic view, which holds that jobs lost to AI will be offset by jobs that AI creates. Here, too, the underlying assumptions are dangerous and must be tempered with reality if we are to face AI's growth with minimal disruption.

Those who advocate the organic view point to past industrial revolutions to support their belief that the effects of AI's disruption will be minimal. They relate how, for each occupation minimized or rendered obsolete by past disruptions, new occupations developed to fill the needs generated by whatever new technology caused the disruption. Makers of handcrafted goods were displaced by the First Industrial Revolution, but the rapid growth of factories provided new jobs, and so on through each successive revolution.

Granted, many occupations available today had not even been imagined only one or two industrial revolutions ago. Who would have envisioned such occupations as video game designers or cybersecurity specialists before the technology behind them existed? Thus, holders of this organic view suggest that everything will work itself out as new occupations arise to provide jobs for those displaced from jobs that AI renders obsolete.

False assumptions in the organic view

That assumption, however, ignores the rough, and sometimes violent, transitions that past industrial revolutions spawned before the labor force could adapt to them. It took time – and sometimes bloodshed – before the transitions to new job categories in some of those revolutions worked themselves out.

The move from goods produced by craftsmen to goods produced by machine led to riots as displaced craftsmen sought to preserve their familiar way of life. The rise of the assembly line led to widespread exploitation of workers under inhumane working conditions, which, in turn, led again to labor riots. In both cases, it took governments decades before legal protections finally afforded displaced workers the basic safeguards that made the newly created jobs desirable.

And, although the digital revolution of the late 20th century did not result in a violent response from those who were displaced, entire job categories were wiped out. Workers found themselves scrambling to obtain new skills that would qualify them for jobs in an increasingly digital marketplace. The disruption to their lives that they suffered is incalculable.

The danger of overconfidence in the organic view

Taking a laissez-faire approach to the growing AI disruption would be, at best, ill-advised and, at worst, callous. A real threat to jobs exists. In some places, labor statistics already show as many job openings as there are unemployed workers.

In other words, people in those locations are failing to find jobs even though plenty are available, because the available jobs require different skills than the job hunters have. Such conditions are only likely to become more common as AI replaces workers in lower- and middle-skill jobs while creating jobs that require skills our current education and training systems are not preparing workers to fill.

For example, the previously quoted prediction of a need for 10% more geoscientists over the next decade presupposes that 10% more people trained in this specialty will be available. That increase will not come from insurance underwriters, loan officers, cashiers and data analysts – displaced by AI – effortlessly shifting into jobs as geoscientists. Future geoscientists will need specialized training, and most displaced workers will not have the skills that AI-created jobs require.

Consider also that AI will disrupt jobs all the way up to the C-level of management as it becomes more commonly employed in data analysis and process management. Companies will turn to AI to perform many tasks currently associated with upper-level management positions. If leaders do not prepare themselves for the encroachment of AI on their positions, many will find themselves at as much risk as the workers mentioned in the previous paragraph.

Takeaways

The three common views of AI's future picture wildly different scenarios. But they agree on one key point: AI will cause massive disruption to today's workforce. Many tasks that we are used to seeing done by people will be done by AI.

The history of past industrial revolutions suggests that this transition will follow a path similar to the one the organic view foresees. But that same history suggests that the transition will not be without pain and disruption for many people. The nature of what AI can do, in fact, suggests that this pain and disruption will likely extend much farther up the ladder of skill levels than in past revolutions.

As we'll see in future chapters, AI is poised to have an unprecedented effect on society and commerce. We'll look also at specific ways in which it will likely shift needed job skills, and we'll focus on how today's leaders can best position themselves for the expansion of AI.

Marin Ivezic

For over 30 years, Marin Ivezic has been protecting critical infrastructure and financial services against cyber, financial crime and regulatory risks posed by complex and emerging technologies.

He held multiple interim CISO and technology leadership roles in Global 2000 companies.