This story is adapted from The Love Makers, by Aifric Campbell.
Childcare is the most intimate of activities. Evolution has generated drives so powerful that we will risk our lives to protect not only our own children, but quite often any child, and even the young of other species. Robots, by contrast, are products created by commercial entities with commercial goals, which may—and should—include the well-being of their customers, but will never be limited to such. Robots, corporations, and other legal or non-legal entities do not possess the instinctual nature of humans to care for the young—even if our anthropomorphic tendencies may prompt some children and adults to overlook this fact.
As a result, it is important to take into account the likelihood of deception, both commercial deception through advertising and self-deception on the part of parents, even though robots are unlikely to cause significant psychological damage to children or to others who may come to love them.
Neither television manufacturers, broadcasters, nor online game makers are deemed liable when children are left too long in front of their screens. Robotics companies will want to be in the same position: no company wants to be liable for damage to children, so manufacturers will likely undersell the artificial intelligence (AI) and interactive capacities of their robots. It is therefore likely that any robots (and certainly those in jurisdictions with strong consumer protection) will be marketed primarily as toys, surveillance devices, and possibly household utilities. They will be brightly colored and deliberately designed to appeal to parents and children. We expect a variety of products, some with advanced capabilities and some with humanoid features. Parents will quickly discover a robot’s ability to engage and distract their child. Robotics companies will program experiences geared toward parents and children, just as television broadcasters do. But robots will always carry disclaimers, such as “This device is not a toy and should only be used with adult supervision” or “This device is provided for entertainment only. It should not be considered educational.”
Nevertheless, parents will notice that they can leave their children alone with robots, just as they can leave them to watch television or to play with other children. Humans are phenomenal learners and very good at detecting regularities and exploiting affordances. Parents will quickly notice the educational benefits of robot nannies that have advanced AI and communication skills. Occasional horror stories, such as the robot nanny and toddler tragedy in the novel Scarlett and Gurl, will make headline news and remind parents how to use robots responsibly.
This will likely continue until or unless the incidence of injuries necessitates redesign, a revision of consumer safety standards, statutory notice requirements, and/or risk-based uninsurability, all of which will further refine the industry. Meanwhile, the media will also seize on stories of robots saving children in unexpected ways, as it does now when children (or adults) are saved by other young children and dogs. This should not make people think that they should leave children alone with robots, but given the propensity we already have to anthropomorphize robots, it may make parents feel that little bit more comfortable—until the next horror story makes headlines.
When it comes to liability, we should be able to communicate the same model of liability applied to toys to the manufacturers of robot nannies: Make your robots reliable, describe what they do accurately, and provide sufficient notice of reasonably foreseeable danger from misuse. Then, apart from the exceptional situation of errors in design or manufacture, such as parts that come off and choke children, legal liability will rest entirely with the parent or responsible adult, as it does now, and as it should under existing product liability law.
What robotics manufacturers will need to worry about is that robots will be banned due to incredibly rare cases of neglect or misuse, as with extremely popular children’s toys such as assisted walkers and lawn darts. The failure of a small number of parents to adequately safeguard stairways or properly supervise their children using such products has resulted in injuries or deaths. However, rather than the parents being blamed, these products were quickly banned. No legislator wants to be associated with dead babies.
But legislative banning is not inevitable. Not all products are created equal. No one has yet banned guns or automobiles, which are far more significant causes of child injury and death. Automobiles are seen as too critical to our economy and our individual freedom to be banned, despite the horrific cost in loss of life resulting from negligent use or design. Guns are (in some countries) afforded political protection for fulfilling their primary function—dispensing death.
These kinds of immunities from legislative bans could very likely extend to robots as they become more essential economically and politically and more embedded in our daily lives.
Robotic childcare will have significant implications for childcare workers and wages. With AI assistance, nurseries may be able to support more children, and more people may be drawn to the field or prove competent in it. This could reduce earnings in what is already a low-paid occupation, though it could also improve the quality, and therefore the perceived value, of the service. Eventually, most societies should come to value important labor even when it was previously provided for free, primarily by women in domestic households, but this is a wider problem for many service industries. We would hope that parents and taxpayers would value childcare and other AI-augmented human services and recognize that good wages and investment in good technology are essential. Society needs to become more attentive to technology’s potential to improve the human condition rather than focusing on more immediate payoffs, like wealth and consumption.
Robots are very different from television and dogs in that robots provide interactivity of a highly reliable sort. While this extreme reliability can be partially offset by artificial emotions and noisy sensing, ultimately children will realize that robots are more predictable than humans. Robot nannies will not be irritable; they will not lose their temper; they will only very rarely (and then catastrophically) forget or ignore a child; and they are available 24/7. Robots may therefore increase the probability that children develop bonding issues with their parents and friends. Some children will prefer the more reliable style of interaction they find with machines, just as some prefer simpler interactions with animals or the high-bandwidth, low-risk stimulation of books.
We know that a child’s ability to bond with parents has a long-term impact on their ability to form friendships, romantic relationships, and generally integrate with society. Our concern is that children who prefer the predictable interactions with robots may be setting themselves up for a life-long preference for machines over humans. This important possibility can already be explored experimentally, by looking at children and adults who prefer AI versus human opponents in online gaming. Yet, to our knowledge, this research has yet to be done. Nevertheless, another legislative direction that might plausibly emerge with respect to robot nannies would be mandatory warnings about how addictive these technologies can be for some children, or recommendations about time limits for robotic exposure and engagement. However, despite many academic studies recommending limits on children’s exposure to television and computer games, no such legislation has yet been written for either (though China has instituted strict limits on young gamers).
Of course, even if we discover correlations between children with a preference for interaction with AI and robots and the expression of other forms of introverted behavior, this does not mean that AI and robots are necessarily bad for these children. Indeed, these devices may provide stability and comfort that reinforce some children’s sense of self-worth.
More generally, we should not overlook the possibility that robot nannies and AI toys might be beneficial in unexpected ways. There is, for example, the chance that protracted experience of AI might enhance a child’s understanding of themselves and what it means to be human.
The United Nations Convention on the Rights of the Child, which has been ratified by every nation in the world with the notable exception of the United States, enumerates certain basic human rights for each child. A primary right of this all-encompassing social, political, economic, and cultural treaty is the concept of a child’s agency and right to participation. This right to be heard ensures that children’s opinions are considered in matters that affect them. Gauging the individual capability of each child to engage in certain activities may be augmented, as it is now, by technology, but it will remain a parental duty. Real-time, ongoing evaluation of a child cannot be delegated to a robot or to the corporation that manufactured it.
Information derived from interaction with the robot may inform the child or the parent. Robotics and AI have already entered the domestic domain in the form of smartphones and AI assistants, and research reveals their impact on traditional family dynamics. For example, it will be more difficult to protect or restrict children from perceived danger if a child can use their AI to present empirical data to the contrary. As children mature and grow in their understanding of AI and robotics, we can expect they may convert these devices into personalized legal and advocacy tools. This could empower a child or young person to assert and defend their rights, and enable them to challenge and override parent or guardian restrictions. Such a development would depend on legal protections ensuring that the transparency and accountability of AI products apply equally through all stages of childhood.
This is not to suggest children will necessarily be wiser than their parents. Children will always be at least as susceptible as adults to advertising and political manipulation. Still, our goal must be that AI empowers not only the adults who love children, but the children themselves. While this may challenge traditional parenting techniques, it will likely not harm the child or young adult. Rather, or perhaps also, AI may help build resilience and prepare them for the challenges of the human condition. We believe it is unlikely that robots will cause significant psychological damage.
The bottom line is that the robots themselves will not love our children. Robots are manufactured devices that, if they represent anything, represent the entities that develop and market them. With adequate regulatory oversight, they should represent the needs and desires of parents and children as well. And as with many far less engaging toys, we must always remember that children will love their robots.
'Robot Nannies Will Not Love' by Joanna J. Bryson and Ronny Bogani from The Love Makers by Aifric Campbell. Published by Goldsmiths Press, London, 2021. Distributed by MIT Press.