Hope is Not a Strategy – Retention, Engagement, and Productivity in the Era of Artificial Intelligence
Published on August 28, 2019
By Rohit Talwar, Steve Wells, April Koury, Alexandra Whittington, and Maria Romero
With regular warnings that technologies such as artificial intelligence will replace most of the workforce, what could this mean for the future of employee retention, engagement, and productivity?
Artificial intelligence (AI) is the latest manifestation of a technological revolution that started over a century ago. Each phase of workplace revolution has stirred things up and disrupted jobs and the way they are rewarded. This time round is no exception, and the smart machines of this Fourth Industrial Revolution open up new possibilities for how we employ, engage, motivate, retain, and reward people.
Where Do We See AI Going?
AI and related exponentially advancing technologies such as cloud computing are giving rise to new industry sectors, disrupting old ones, upturning business models, and reinventing workplaces and the nature of work itself. Consequently, jobs, professional roles, management, motivation, and rewards are all being revisited to ensure they remain relevant to the next world of work. Whilst disruption creates threats, significant opportunities are also emerging in relation to employee retention, engagement, and productivity.
With a life expectancy of 110 or more for today’s 11-year-olds, the idea of lifelong employment becomes more mythical by the day. Indeed, the definition of a job as one’s sole source of financial security might become obsolete sooner than we’d expect. Over time, and possibly quite rapidly, the proportion of people receiving universal basic income (UBI) might rise relative to the proportion on salaries. We may rely increasingly on governmental or corporate-sponsored funds that provide cash payments, potentially unconditionally, to all. As employees become more sensitized to the job risks posed by automation, they could increasingly evaluate employers on the provisions offered to those displaced by technology. These might include physical and mental health support, skills retraining, and assistance with small business creation.
As salaries and pensions come under threat, employees might use new criteria to size up their job prospects. For example, employers might offer to help create and maintain individuals’ social media profiles and show them how to monetize their networks and generate additional income streams, such as securing advertisers for the digital screen on the back of an employee’s jacket. Employers could also aggregate the personal data of employees who have opted in, selling it on and sharing the revenues with them.
One emerging possibility is that employees could be paid extra for sharing their own cognitive assets. Uploading thoughts to a digital AI cloud, or even authorizing company ownership of one’s mind, might become a path to a raise or to job security. AI thus poses new tests of organizational ethics: some companies might prohibit such practices outright, others might exploit employees through a form of “thought slavery,” and still others might reward this level of commitment generously. The ethics guiding a company’s use of AI in the workplace could determine its attractiveness as a place to work or invest in.
We’ve all seen the comedy caricatures of tech firms where the employee literally gives their soul to the firm—available 24/7, participating in firm-sponsored social activities, and being a total “brand ambassador.” With AI, this becomes more of a reality, with the technology in the workplace and on our phones monitoring every aspect of our engagement from the words we use to our purchase of rival brands.
The use of AI also means the future of employment may involve benefits unlike anything available today.
In this potentially disturbing future, insurance companies might treat a person’s data and digital assets as commodities; with the transparency provided by AI and the vast data it can amass and analyze, risk as we know it could become obsolete. We could literally predict every activity, choice, and outcome, down to the most likely time, cause, and place of death. Health benefit providers would think differently, too, since they could proactively monitor health and other behavioral factors. Furthermore, in the absence of cash from a steady lifelong job, people could trade their own data to extract value from it, much as one might borrow against a pension. With AI, employee benefits could become more personalized, with rewards taken in forms such as discount vouchers and personal services.
With AI, some expect economic productivity to skyrocket. This is the main appeal for companies, of course: it is much cheaper to write one algorithm than to support an office full of employees. So, will humans seek to enhance their minds and bodies to compete with robots? One rather strange outcome might be enhanced employees using bionic, pharmaceutical, or digital augmentation to perform their jobs. Some companies might offer human augmentations, support groups, or inclusivity training.
Others might come up with “Augmentationships” or “Enhancementships” where candidates could try enhancements for a limited time. Augmentation could be an employment benefit and an attractive quality in a potential employee.
The Gifts of AI
The gifts from AI to society include smarter decision making, the capacity to draw new insights from vast arrays of data, the potential for cost-saving replacement of humans, and efficiency beyond human capacity. However, a sweeping implementation of AI without regard for the impact on employees could have devastating consequences. In the here and now, organizations might explore radical concepts like a pension designed around the phased automation of jobs, knowing certain work will be performed by AI in five to ten years. The best-case scenario is the future where AI emerges as a benefit to workers, organizations, and society; however, this requires careful planning as hope is not a strategy.
- What sort of approach should governments and companies take towards regulating or mandating the use of human augmentations?
- What might a company reasonably be expected to provide as a “safety net” for its employees?
- How can we best teach people about the value of their data and the potential commercial uses and misuses of what they share?