There is no shortage of doomsday accounts of technology triumphing over humans in film, literature, blogs, and interviews with famous tech personalities. An episode of "Black Mirror" imagines a future in which babies have an AI monitor wired to their brains. The episode was meant to horrify us about how badly parenting could go in the age of technological acceleration: the mother uses the AI monitor to literally see what the child sees and to be alerted to emotional distress and potential physiological dangers (drugs, poisoning, etc.). In the name of child protection, this kind of technologically aided hyper-surveillance fuels the mother's obsession with controlling the child. In the end, the child rebels by smashing the device repeatedly against her mother, destroying it. The episode reflects a commonly held fear of technology as destructive. But is that really the case? Instead of predicting the constructive or destructive effects of AI on Generation Alpha, it is more important to first understand how AI is developing and how exactly it will function in everyday life, healthcare, and education.

Robot nannies and AI toys

Source: heykuri.com

Home robots are the hot new thing on the market. Take Kuri, for example. Made by Mayfield Robotics, Kuri is an extremely lovable robot designed to ease and brighten your everyday life. It's like having a pet that helps around the household. Kuri has a built-in camera that not only sends you videos of "suspicious activity" when you are not home but also automatically captures short moments of you and your child, serving as a smart camera for both entertainment and security. The cool thing about Kuri is that it learns to recognize who is who, what is where, and what to do. It wakes up when you wake up and can be programmed to wake your child. It changes color to communicate moods, looks at you when addressed, and responds to your touch. The allure of Kuri lies in its cute factor, manifested mainly through "romojis" (robot emojis). But who are we kidding? Kuri is a luxury, not a necessity.

Source: cnet.com

On the other hand, we have Mattel's Aristotle, a voice-controlled platform much like Alexa or Google Home but catering to children from infancy to adolescence. Aristotle was designed to soothe babies to sleep at night, alert you when diapers are running low, place automatic orders online, play games with toddlers, reinforce good manners, help with homework, and more. While more generic home robots and voice-controlled platforms have been celebrated, Aristotle was met with complaints and was consequently pulled from the market. The main concerns were about child privacy. People were also worried about "pretend empathy": the fear that, in a nutshell, a child who cannot tell robots from real humans will have their empathic capabilities stunted.

In an article for the World Economic Forum, Sandra Chang-Kredl, Associate Professor in Education at Concordia University, warns of objects mimicking empathy. Her target, however, is the Luvabella AI doll developed by Spin Master. An AI doll is essentially a physical toy blended with mobile devices, apps, or technological sensors. Luvabella blinks, moves her cheeks and lips, responds to voice, laughs when tickled, and learns words.

There are also fears that, because these products are online, corporations can exploit the recorded data for commercial gain. "My Friend Cayla", an interactive doll, was recently banned in Germany because its connection can easily be intercepted by any third party.

Source: youtube.com

Raised on algorithms

The debate about pretend empathy seems a bit far-fetched, especially in the case of Aristotle, which doesn't in any way resemble a human being. Besides, there is nothing wrong with learning to empathize with the non-human beings that will co-exist with us. Parents should also be aware that their children's privacy is already vulnerable through other, less obvious means: any wireless device with a camera, a recording function, and personal data can be hacked nowadays. We know from Facebook's recent data-harvesting scandal that our online transactions and conversations can easily be misused for the benefit of corporations and institutions. Moreover, while child advocacy groups can stop devices like Aristotle from ever reaching the market, they cannot reverse the flow of technology. We depend on the internet for everyday transactions, and the younger generations will be no strangers to social media. They will be raised on algorithms, and the question is not how to stop the technologizing of childhood but how to teach children what happens behind the scenes, such as algorithmic bias.

AI isn’t just about robots

What is both intriguing and alarming is the far reach and influence of machine learning and big data. AI is advancing rapidly through machine learning, in which a machine is programmed to learn from data on its own. Take, for example, the AI-powered app Muse, billed as the technology that teaches children to be more human. The app measures a child's developmental growth using "unstructured" data: photos, audio and video recordings, and the child's answers. With this information, the app then suggests daily activities to foster 50 developmental traits, such as divergent thinking, fluid intelligence, empathy, and ethical judgement. The problem with this technology is that it measures a person against 50 arbitrarily chosen traits presented as objective. On what basis, for instance, does it measure empathy? These are highly subjective traits that manifest in divergent ways. Furthermore, big data is rife with class, race, and gender bias. As much as the researchers claim to scrutinize results across demographics and cultures, the technology itself is not capable of critical thinking. Why should anyone trust a technology without critical thinking to teach children critical thinking?

On the other hand, it makes more sense to trust AI to teach children more objective subjects like math and science. In fact, many companies specialize in adaptive learning platforms that identify the specific modules of knowledge a child is weak in. One example is DreamBox, an online K-8 math program. DreamBox is adaptive: it collects more than 48,000 data points every hour a student uses the program. Its machine learning technology then evaluates each student interaction and problem-solving strategy in order to immediately adapt the lesson to the student. This allows students not only to progress at their own pace but also to deepen their conceptual understanding. There are even AI programs, such as the one by Content Technologies, Inc. (CTI), that generate new materials like custom textbooks, chapter summaries, and multiple-choice questions.

Source: spectrum.ieee.org

Hanson Robotics has also come up with a miniature Professor Einstein robot that talks to kids and challenges them with brain games. The robot is extremely expressive and programmed to share his passion for science and answer questions on a range of topics. Not only that, Professor Einstein is connected to the cloud to take kids on a journey of learning. Smart robots like this are another way of personalizing education. Kids can even tap into Dr Roboto, an open-source AI platform, to program new movements or expressions for the Einstein robot, opening up many new learning opportunities for the coming generation.

Better treatments for preemies and prosthetic limbs for kids

People disagree intensely on how AI should support parents and teachers in the upbringing of kids. When it comes to AI-led medical advancements, however, the opposing camps tend to join hands. The BABA Center, the first clinical research center in Finland dedicated to studying baby brain activity, recently published findings on using AI software to interpret EEG readings from premature babies. The software tracks a premature baby's brain development in real time, which allows researchers to determine the effects of therapy within days or weeks rather than years. It also means that highly trained neurophysiologists may no longer be needed to tell when something might be wrong, enabling the faster treatment that is often critical for preemies. Most importantly, the project could lead to new research and interventions.

Source: community.wevolver.com

Elsewhere in the world, Open Bionics has made prosthetic hands for kids. Many child amputees do not have access to prosthetics, since they are extremely expensive and do not usually come in smaller sizes. The nice thing about Open Bionics is that its bionic hands are developed at low cost with 3D scanning and printing technology: each hand is made to fit. Not only that, Open Bionics prototypes are open source, which means that anyone can contribute to their development.

With the recent launch of SingularityNET, a decentralized protocol for AI algorithms, services, and agents, AI tools and services are being democratized, which means that over time they will become more and more accessible. The acceleration of AI cannot be stopped; the ball is rolling, so we had better embrace that fact and learn to live intimately with AI. There is only so much parents can do to "prepare" their little ones for the coming wave of technology; much will also depend on how well each child adapts.