Artificial Intelligence and the IT Professional
Published on October 22, 2019
By Rohit Talwar, Steve Wells, Alexandra Whittington, and Maria Romero
How might the IT profession be reshaped by intelligent machines?
A Catalyst for Change
Technology workers are on the front lines of a major breakthrough in work productivity and business performance: artificial intelligence (AI). The central role of AI in the future of almost every sector is practically a given. Business and technology analysts the world over agree that AI will have an impact across all industries. For IT professionals, the future could involve being called upon to work with AI, develop AI solutions, and help their customers strike the right balance between technology and people in work design.
Almost every new technology arrives with a fanbase claiming it will revolutionize life on Earth. For some, AI is just one more in a long list of over-hyped technologies that won’t live up to its promise. At the other end of the spectrum are those who believe it could be the game-changing innovation that reshapes our world. They argue that humanity faces a directional choice: do we want the transformation to unleash human potential, or to lead us toward the effective end of life on the planet? We believe AI is like no technology that has gone before, but we also think it is far too early in its evolution to know how far and how rapidly this Fourth Industrial Revolution, powered by smart machines, might spread. So, what broader issues do IT professionals need to be mindful of to ensure that we go beyond genuine stupidity in preparing for artificial intelligence?
Unquantifiable Economic Impact?
Numerous attempts are being made to predict the overall impact of AI on employment at national and global levels, and where skill shortages and surpluses might lie in the coming decades. In practice, the employment outlook will be shaped by the combination of the Fourth Industrial Revolution, the decisions of powerful corporations and investors, the requirements of current and “yet to be born” future industries and businesses, an unpredictable number of economic cycles, and the policies of national governments and supra-national institutions.
Collectively, the diverse economic factors at play mean it is simply too complex a challenge to predict with any certainty how job creation and displacement will progress across the planet over the next two decades. Many of the analysts, forecasters, economists, developers, scientists, and technology providers involved in the jobs debate are also largely missing or avoiding a key point. They either don’t understand, or deliberately fail to emphasize, the self-evolving and accelerated learning capability of AI and its potentially dramatic impact on society. If we do get to true artificial general intelligence or artificial superintelligence, it is hard to see what jobs might be left for humans. Hence, in our recent book, Beyond Genuine Stupidity: Ensuring AI Serves Humanity, we argue that a more intelligent approach is to start preparing for a range of possible scenarios.
Emergence of New Societal Structures?
The potential scale and spread of AI’s impacts raise issues for IT professionals that simply haven’t been a major consideration with previous technologies. Right now, many in society are blissfully unaware of how AI could alter key social structures. For example, if the legal system could be administered and enforced by AI, would this mean we had achieved the ideals of fair access, objectivity, and impartiality? Or, on the contrary, would the inherent and unintended biases of its creators define the new order? If no one had to work for a living, would children still need to go to school? How would people spend their newfound permanent free time? Without traditional notions of employment, how would people pay for housing, goods, and services?
For wider society, what might large-scale redundancies across all professions mean for the prevalence of mental health issues? Would societies become more human or more techno-centric as a result of the pervasiveness of AI? How would we deal with privacy and security concerns? What are the implications for notions such as family, community, and the rule of law? These are just a few of the areas where the application of AI could have direct and unintended consequences that challenge our current assumptions and working models, and which will therefore need to be addressed in the not-so-distant future. An inclusive, experimental, and proactive response to these challenges would help ensure that we are not blindsided by the impacts of change and that no segment of society gets left behind. These issues give a sense of how the focus and nature of IT roles could evolve over the next decade.
New Challenges for Business and Government?
With many technologies in recent history, businesses have had the luxury of knowing that they could wait until they were ready to pursue adoption. Most firms could be relatively safe in the assumption that being late to market wouldn’t necessarily mean their demise, so they are treating AI the same way. Furthermore, a predominantly short-term, results-driven focus and culture has led many to ignore or trivialize AI because it is “too soon to know,” or worse, to suggest “it will never happen.” Finally, those at the top of larger firms are rarely that excited by any technology and can struggle to appreciate the truly disruptive potential of AI.
However, the exponential speed of AI developments means that the pause for thought may have to be a lot shorter. There is a core issue of digital literacy here: the more data-centric our businesses become, the greater the imperative to invest time in understanding and analyzing the technology. From the top down, we need to appreciate how AI compares to and differs from previous disruptive advances and grasp its capability to enable new and previously unimaginable ideas and business models. Three domains of application are already emerging: processing data on a scale beyond human capability, for example scanning thousands of faces in seconds to identify potential security risks in a busy shopping mall; automating entire tasks, such as processing an insurance claim; and augmenting human decision support in areas like medical diagnosis by identifying the statistically most likely causes of a patient’s symptoms. Within our businesses, we need to understand the potential for AI to unlock value from the vast arrays of data we are amassing by the second. We also need to become far more conscious of the longer-term societal impact and the broader role of business in society.
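To make the third domain concrete, here is a minimal, purely illustrative sketch of decision-support augmentation: a naive scoring routine that ranks candidate conditions by how well they explain a set of observed symptoms, leaving the final judgment to a human expert. The condition names, symptoms, and probabilities are invented for the example and are not drawn from any real diagnostic system or from the approaches discussed in this article.

```python
# Toy illustration of AI-assisted decision support: rank hypothetical
# conditions by a naive likelihood score so a clinician can review the
# most probable causes first. All names and numbers below are invented.

OBSERVED_SYMPTOMS = {"fever", "cough"}

# Hypothetical prior prevalence and per-symptom likelihoods.
CONDITIONS = {
    "common_cold": {"prior": 0.30, "symptoms": {"cough": 0.8, "fever": 0.3}},
    "influenza":   {"prior": 0.10, "symptoms": {"cough": 0.7, "fever": 0.9}},
    "allergy":     {"prior": 0.20, "symptoms": {"cough": 0.4, "fever": 0.05}},
}

def rank_conditions(observed, conditions):
    """Score each condition as prior * product of symptom likelihoods."""
    scores = {}
    for name, data in conditions.items():
        score = data["prior"]
        for symptom in observed:
            # Symptoms not listed for a condition get a small default likelihood.
            score *= data["symptoms"].get(symptom, 0.01)
        scores[name] = score
    # Normalize so the scores are easier to compare at a glance.
    total = sum(scores.values()) or 1.0
    return sorted(
        ((name, score / total) for name, score in scores.items()),
        key=lambda pair: pair[1],
        reverse=True,
    )

if __name__ == "__main__":
    for name, probability in rank_conditions(OBSERVED_SYMPTOMS, CONDITIONS):
        print(f"{name}: {probability:.2f}")
```

The point of the sketch is the division of labor: the machine does the statistical ranking at scale, while the human retains responsibility for the decision itself.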
Call it corporate social responsibility or enlightened self-interest, but either way, businesses will have to think much more strategically about the broader societal ramifications of operational decisions. Where will the money come from for people to buy our goods and services if firms in every sector are reducing their headcounts in favor of automation? What is our responsibility to the people we lay off? How should we respond to the notion of robot taxes? How can we ensure the right balance between humans and machines, so that the technology serves people?
Clearly there is some desire in business today to augment human capability and free up the time of our best talent through the application of AI. However, the evidence suggests that the vast majority of AI projects are backed by a business case predicated on reducing operational costs, largely in the form of headcount. Some are already raising concerns that such a narrowly focused pursuit of cost efficiency through automation may limit our capacity to respond to problems and changing customer needs. Humans are still our best option when it comes to adapting to new developments, learning about emerging industries, pursuing new opportunities, and innovating to stay abreast of or ahead of the competition in a fast-changing world. Business leaders must weigh the benefits of near-term cost savings against the risks of taking humanity out of the business and automating to the point of commoditizing their offerings.
Governments are clearly seeing the potential of AI, along with some of its risks and consequences. For example, the Chinese government is estimated to be investing US$429 billion across national, regional, and local government to ensure it becomes a global leader in AI. The Finnish government has provided an online platform for all its citizens to learn about AI for free. The UK government has announced plans to invest over US$1 billion in AI, broadband, and 5G technology, and a further US$530 million to support the introduction of electric autonomous vehicles.
However, governments are also confronted by tough choices on how to deal with the myriad issues that are already starting to arise: Who should own the technology and direct its likely power? What measures will be needed to deal with the potential rise of unemployment? Should we be running pilot projects for guaranteed basic incomes and services? Should we be considering robot taxes? What changes will be required to the academic curriculum? What support is required by adult learners to retrain for new roles? How can we increase the accessibility and provision of training, knowledge, and economic support for new ventures?
How IT Professionals Can Ensure AI Serves Humanity
The ability of smart machines to undermine human workers is a valid threat, but it doesn’t have to be a death sentence, especially if the tech worker of tomorrow is enlightened about AI. One of the best ways to ensure that AI serves humanity is to keep it beneficial and benign: exploit the benefits but reject the aspects that threaten the greater good. If the choice is made to ensure that AI does not unravel the basic support systems of society, future IT staff might find themselves in a social profession providing a public service. By 2030, could the exercise of technological expertise come to be seen as an act of humanity rather than a commercial transaction? Such a drastic transformation would be a startling development, yet it resonates with previous technological breakthroughs, like the internet, which led to entirely new economic systems, business models, and jobs, most notably creating the IT profession itself. In what ways will AI have similar ramifications? Information and ideas about the potential futures of AI act as an antibody, giving businesses a jolt of immunity against genuine stupidity about technological disruption.
- Does the IT worker of the future have an obligation to defend humanity?
- Which forms of AI seem to pose the biggest existential threat?
- How will the IT industry of the future use AI?