Automation is one of many phenomena that drive change and impact the job market. Aging populations, the rise of the middle class and climate change all disrupt the labor market to some degree.
But contrary to popular belief, during the last century technology has actually created more jobs than it has taken. Although losses in the agricultural and manufacturing sectors were particularly significant, so was the growth in other areas such as creative, care, tech and business services. Only one position was, in fact, eliminated by machines rather than by lack of demand or technological obsolescence: the elevator operator.
In January, a McKinsey & Company study found that about 30% of tasks in 60% of occupations could be taken over by robots. However, not all of these jobs are at the same risk, as exposed doesn’t necessarily mean replaced. Every occupation includes varied activities, and each of those has different requirements for automation. The majority of these occupations will most likely be redefined instead of replaced.
If there is a set of at-risk professions, however, it’s those with a repetitive and predictable routine. Telemarketing, for example, has a 99% chance of being automated. So do tax preparation, paralegal and legal assistance, and fast food work.
But when it comes to jobs, the areas least likely to be replaced are those linked to creativity and those that involve building complex relationships with people, such as nursing, clergy or business roles, as well as jobs that are considered highly unpredictable, like plumbing, repair and handy work, supervision or social work. Alongside the redefinition of roles, another answer to the prospect of unemployment is redeployment.
Still, automation will not happen overnight. As processes get transformed by the automation of individual activities, people will begin to perform activities that complement the work that machines do, and vice versa. The impact of such change will undoubtedly affect all levels of society.
The ultimate goal is to complement human objectives. Machines enable people, and people guide machines and provide them with motivation.
Half of today’s work activities could be redefined by 2055, but it will take years for automation to reach its full effect on our work activities. Luckily, humans are uniquely talented at adapting to dynamic situations, as we evolved to think creatively and imagine novel solutions to survival threats.
A robot is no more than a machine capable of carrying out a complex series of actions automatically. However, when we think of one, and this is especially the case for those we imagine will accompany or replace us in the workplace, we tend to picture humanoid robots. Are these the droids we are looking for?
Very humanlike robots can be perceived as a realistic threat to human jobs and resources, and as a threat to human identity and uniqueness — especially if the robot outperforms humans. The way people perceive a robot can also affect their support for robotics research, and whether robots become fully integrated into human society depends on their acceptance by the general public.
Experimental data revealed that participants perceived robots to be significantly more threatening to humans after watching a video of an android that could allegedly outperform humans on various physical and mental tasks relative to a humanoid robot that could do the same. Yogeeswaran et al., 2016
The above quote was extracted from a study by Kumar Yogeeswaran from the University of Canterbury, where a group of participants were randomly assigned to watch a one-minute video clip of an interview with either an anthropomorphic robot with hair, skin and clothing (the Geminoid HI-2 robot) or an identical interview with a humanoid robot with shared characteristics but easily distinguishable from a human (the NAO robot).
Although the Geminoid was perceived as ‘more human’, both robots were considered equally good at their tasks. However, a significant interaction was discovered between the anthropomorphic robot and its perception as a threat. The Geminoid was seen as a bigger threat to human jobs, safety and resources compared to the NAO, but only if the participants were told that the new generation of robots would outperform humans on various physical and mental tasks. This wasn’t the case with the humanoid robot, regardless of what participants were told about its abilities. The same results were found with regard to the perceived threat to human identity.
We’ve seen how a robot’s anthropomorphism affects people’s evaluations of it. Exposure to an android can backfire when the person is simultaneously told that these robots are capable of not just matching but outperforming humans in physical and mental tasks, while humanoids are perceived as more functional and assumed to be under human control, posing less of a threat.
For good or bad, we are extremely familiar with what humanlike behaviour looks like. As robot behaviour is still not fully humanlike, the bar it must meet keeps rising. That’s why cartoonized or even non-anthropomorphic designs are more convenient than human-like ones.
Different contexts can benefit from robots that have more machine-like or more human-like characteristics. We’ll explore two cases where this is taken into consideration to provide successful robot-human interactions: healthcare and police work.
Artificial companions (or social robots) are gradually becoming part of healthcare environments, triggered by an aging population and the need to decrease the burden on social and primary care systems.
Social robots hold the promise of extending life expectancies and improving health and quality of life. They allow elderly people to stay fit and live alone in their homes autonomously for a longer period of time, and help them feel less lonely.
Healthcare robots are usually classified into three groups: those that provide physical assistance, those that provide companionship, and those that monitor health and safety (although in the context of an older person’s home, the division between assistive robots and health robots can be blurred).
Robots like the dog-like AIBO have been shown to increase communication among dementia patients and reduce loneliness in elderly long-term care facilities.
In a study by Hayley Robinson from the University of Auckland (The Psychosocial Effects of a Companion Robot: A Randomized Controlled Trial), the assumption that animals can have a positive effect on health is complemented by the incorporation of a (this time robotic) pet into a residential care facility. Robot pets have similar benefits to real pets: they create a positive social atmosphere and produce beneficial psychological outcomes — for both the patients and the staff.
When Robinson introduced PARO, an advanced interactive robot developed by Japan’s Intelligent Systems Research Institute (ISRI) and modelled after a Canadian harp seal, the results were encouraging. Both diastolic and systolic blood pressure decreased from baseline when the residents interacted with or stroked the robot (the touch and feel of the robot is important in this population, where older people might be experiencing personal discomfort). A robot in this environment may even be more beneficial than a real pet — the robot reacts to the person talking or touching it, while a pet might not.
However, healthcare robots are still not widely accepted for this kind of therapy. People’s willingness to adopt robotic systems is intertwined with cultural responses. In Japan, where Shinto animism gives objects, animals and people a common “spirit”, people are more predisposed to look at robots as helpmates and equals, imbued with something akin to a soul.
Americans, on the other hand, tend to see robots as dangerous and willful constructs, eager to bring about the death of their makers.
The sun, the moon, mountains and trees each have their own spirits, or gods. Each god is given a name, has characteristics, and is believed to have control over natural and human phenomena. This thought has continued to be believed and influences the Japanese relationship with nature and spiritual existence. This belief later expanded to include artificial objects, so that spirits are thought to exist in all the articles and utensils of daily use, and it is believed that these spirits of daily-use tools are in harmony with human beings. Naho Kitano in Animism, Rinri, Modernization; the Base of Japanese Robotics
Healthcare robots have been criticised as being inferior to human caring and contact. The sad reality is, however, that human contact is in short supply — or of poor quality. A healthcare robot is primarily intended to improve or protect the health and lifestyle of the human user. Lifestyle is a keyword: It’s not just about the way we feel physically. Most older people in western cultures are living independently in their own homes, and would like to stay in them as long as possible. Governments and care funders also favour what they call ‘aging in place’. Robots can also help socially impaired people to relate to others, practise empathetic behaviours and act as a stepping stone towards human contact.
While some social robots are perceived as mere utilitarian systems, only able to perform tasks, others can also be recognized as hedonic systems that offer opportunities for social interaction. When users see these robots as companions (and build a relationship with them), they are more likely to continue interacting with them. If such a relationship is not established, use of the robot is frequently discontinued (Kanda et al., 2007).
One way to appropriate a robot is to take the technology and give it a place in the person’s own domestic environment, in their everyday life.
In general, older age negatively impacts how willing a person is to use a robot. Older people are less comfortable with computers and new technology (compared to younger people), and less confident in the abilities of robots. They tend to prefer robots with a female voice, small size, less autonomy and a generally serious predisposition. Lack of familiarity with a technology can make people feel more uncertain; direct experience of the usefulness of assistive devices can change older people’s attitudes from ‘unnecessary’ to ‘useful’.
The needs of the person strongly affect how they feel towards social (and in particular healthcare) robots. People are prepared to accept assistive devices if these can help them maintain their independence.
The appearance of a healthcare robot also contributes to the older person’s sense of identity. It’s not just independence that we value, but also the appearance of it. People will tend not to use an assistive robotic device if they feel it portrays them as dependent, disabled or weak. It’s not so important that the robot display facial expressions or have a personality, as long as it is easy to use, safe and reliable. As long as the personality of the robot matches its role, the way it looks can vary considerably. In general: not too serious, not too human-like, and not patronizing or stigmatizing.
An estimated 20% of the world’s population experience difficulties with physical, cognitive, or sensory functioning, mental health, or behavioral health. These experiences may be temporary or permanent, acute or chronic, and may change throughout one’s lifespan. Of these individuals, 190 million experience severe difficulties with activities of daily living tasks (ADL). Laurel D. Riek — Association for Computing Machinery
What truly matters is that the robot meets the person’s needs. Adaptability is paramount for older people, who will have a range of individual problems. As the individual’s abilities decline over time, the robot will need to adapt to these changes. One way to facilitate this is by allowing users to personalise the robot to their own needs, accommodating individual differences.
Robots have great potential to improve health outcomes and aid independence in aging populations, relieving the burden on caregivers. A single design for a healthcare robot is unlikely to suit everyone; carefully assessing individual needs and preferences will produce greater acceptance. One of the biggest advantages of social robotics is that we can create robots that cater to our specific needs, which is invaluable when it comes to healthcare.
Another context in which robots can be used effectively is in mediated interviews.
The NAO robot was also used in a 2013 study by Cindy Bethel at Mississippi State University, which examined how eyewitnesses were misled by human but not robot interviewers.
Although eyewitness testimony is often the most compelling evidence used by jurors in a trial, time (and technological advances such as DNA analysis) has shown that such confidence is often misplaced. The memories of witnessed events can be impaired when followed by misleading information. In other words, a witness might change their testimony as he or she absorbs the social demands and goals of the interviewer.
This misinformation effect produces particularly unreliable memories when someone is subjected to misleading conditions: the memory of a witnessed event can be distorted by exposure to misleading information after the event.
When a position of authority is added, the likelihood of compliance with social demands increases. The human interviewer was perceived as more of an authority figure than the robot (the robot consistently received lower ratings for trustworthiness, honesty, believability and comfort).
Robots have been used in a variety of educational, therapeutic and entertainment contexts. Robot-mediated interviews were also utilised by Luke Jai Wood in a study that examines how children’s responses towards the humanoid robot KASPAR differ in comparison to their interaction with a human in a similar setting. Previous studies had shown that children responded very well to the size and appearance of the robot and its human-like, but very simplified features.
Each child (aged between 7 and 9) participated in two interviews, one with an adult and one with the humanoid robot KASPAR, about a special event that had recently taken place in the school. Although there were differences in the duration of the interviews, the eye gaze directed towards the different interviewers, and the response times, results revealed that the children interacted with KASPAR very similarly to how they interacted with a human interviewer. The goal of this study was not to replace human interviewers, but to provide professionals with a robotic tool that acts as an interface, creating an enjoyable and comfortable setting for children to talk about their experiences.
Because humans are prone to inexact interviewing techniques, the use of robots to gather eyewitness testimony and sensitive information could be beneficial in social work and criminal investigation. This could have a dramatic impact on how veridical information can be obtained from victims of crime, in particular children.
In her Carnegie Mellon University paper “How Humans Respond to Robots” (2014), Heather Knight proposes dividing human-robot partnerships into three categories, each with industrial and consumer applications: telepresence robots, collaborative robots, and autonomous vehicles.
Humans give higher level commands to a remote system in an environment that is difficult, dangerous or inconvenient. Think remote-piloting when looking for survivors in a search and rescue scenario.
Different telepresence robots can have different levels of autonomy; rapid response situations call for higher autonomy, while greater levels of control should be assigned to the person when human expertise is required or there are liability considerations. The key when designing an interface for such a system is to enable a balance between human and machine capabilities.
Telepresence robots can also be used in the workplace: imagine a remote employee driving one into a conference room. In the case of the elderly, while most had a negative reaction to their grandchildren visiting in robotic form, the idea of having a robot in their children’s home that they could log into at will was appealing.
Robots can also work directly with people and share a common environment. They are usually specialized for a particular domain but can operate with some flexibility.
This category includes robotic toys and robots on stage, for example in the entertainment industry. The Disney theme parks have incorporated mechanized character motions into rides and attractions. A robot created by Disney Research uses a hybrid air and hydraulics system that lets it mirror movements with precision. The robot is designed to handle delicate objects properly and to be safe to use among people.
Autonomous cars have seen a rapid increase in production and testing, although they haven’t been spared a steady flow of criticism. Their benefits include safety and convenience, but their presence requires a change in the distribution of decision making.
Vehicle technology should empower people, and as with social robots, particular robotic driving styles might cater to different human preferences and make for broader acceptance. By placing this type of robot in shared human environments, other things will need to be accounted for. For example, pedestrians frequently look at the person driving a car when they are about to cross the street. Autonomous vehicles would need to replace this interaction, somehow signaling that the person can cross.
Automation is somewhat inevitable, but it’s what we do about it that matters. Proactive policymaking and ethical design should be at the center.
The goal of creating a partnership between humans and robots is to bring the best of both worlds.
These partnerships will be supported by valuing human capabilities and positive human impact, while also taking advantage of higher-level systems thinking and the ability to deal with novel or unexpected phenomena.
It’s important that we design clear interfaces and readable behaviours — robots that can partner with us — to allow social and collaborative robotics to flourish and have a positive impact on our lives.
This article was written by UX Designer & Anthropologist Yisela Alvarez Trentini and first published on Wevolver.com