Three Myths about Children and Algorithms
What Do YouTube Kids and ClassDojo Have in Common?
Posted Dec 16, 2018
As more and more developers refine their learning analytics, classroom management systems, and personalized reading recommendations, let us think critically about the wider assumptions underlying the current design of algorithms.
There’s no doubt that algorithms are vital to the Internet’s functioning. Through machine learning, algorithms can automate searches and draw on big data to provide personalized recommendations that could make children’s learning more effective. But the phrase “BIBO” (Bias In, Bias Out) aptly captures the fact that even the most powerful algorithms developed by the FANG group (Facebook, Amazon, Netflix and Google) are not free from bias and error.
The industry’s desire to educate children and remove bias is there, but the issue is far too big to fix overnight. Evidence of unconscious, and even intentional, bias propagated by AI systems built for adults should be warning enough before we rush into algorithmic education. Here are three common misconceptions, and some questions adults could pose, to help us evaluate the role of algorithms in children’s learning:
1. Introducing algorithms to our class/family will improve children’s learning.
The problem with personalized recommendations was fully exposed by online filter bubbles. Because algorithms are designed to aggregate similar content, they create echo chambers on social media and in news feeds. Adopting the same design for learning means that children are less exposed to cognitive challenges and to things they don’t like. This might benefit their motivation and engagement, but for learning to stick, children need concepts that stretch their minds, not content that always matches their preferences.
Some providers recognize these limitations: instead of personalizing the content itself, their algorithms recommend the same topics at a different pace or in a different sequence. More advanced algorithms begin to expand children’s horizons by recommending content of gradually increasing difficulty. This, however, can only work in a suitable learning environment, with a database solid enough to accommodate both the highest and the lowest achievers. It would be far too ambitious, at this stage, to expect such a sophisticated algorithm for the entire education system without running into the problem of fake news.
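The difference between preference-matching and difficulty-progression can be sketched in a few lines of code. This is a deliberately simplified illustration, not any provider’s actual system; the library, difficulty scores, and `stretch` parameter are all invented for the example.

```python
# Hypothetical sketch: recommend the item slightly ABOVE a child's
# current level, rather than the item closest to past preferences.

def recommend_next(items, child_level, stretch=0.1):
    """Pick the item whose difficulty is nearest a target just above
    the child's current mastery level (a 'stretch' target)."""
    target = child_level + stretch
    return min(items, key=lambda item: abs(item["difficulty"] - target))

# Invented example library with difficulty scores in [0, 1].
library = [
    {"title": "Counting to 10", "difficulty": 0.2},
    {"title": "Simple addition", "difficulty": 0.45},
    {"title": "Two-digit addition", "difficulty": 0.6},
    {"title": "Multiplication tables", "difficulty": 0.8},
]

print(recommend_next(library, child_level=0.4)["title"])  # → Simple addition
```

Note that the sketch depends entirely on the library spanning a wide range of difficulty scores; with a thin database, the “stretch” target collapses back onto whatever content happens to exist, which is the limitation described above.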
2. Personalized recommendations democratize children’s learning.
When we access the Internet, we are all using algorithms, and our sustained use of them makes them smarter. Notably, the user base of algorithms is disproportionately larger than the developer base. A lot of decision-making that impacts billions of people worldwide is made by just a few thousand people. As a result, the technology giants are way ahead of the general public’s understanding of how algorithms work. Even some US Senators do not seem to know how the algorithmic economy works, as demonstrated during Mark Zuckerberg’s Senate committee hearing.
Put simply, the current design of algorithms is meritocratic, not democratic. Applied to education, this means that those with an initial advantage, namely those who have some prior knowledge, will benefit and can develop that knowledge further. This is because the algorithms adapt to the child; they do not instruct the child. Had algorithms been designed on democratic principles in the first place, they would be transparent by design and more community-oriented. Anyone would be able to see what gets recommended, and why. Anyone would be able to alter the mechanisms, and their power would be fairly and equally distributed.
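What “transparent by design” could mean in practice can be sketched as a recommender that returns a human-readable reason alongside every recommendation. This is a toy illustration under invented assumptions (the lesson list, the exposure counts, and the least-seen-topic rule are all hypothetical), not a description of any existing product.

```python
# Toy sketch of transparency by design: every recommendation carries
# an explanation anyone could inspect. All data here is invented.

def recommend_with_reason(items, exposure_counts):
    """Recommend an item from the child's least-seen topic,
    together with a plain-language reason for the choice."""
    topic = min(exposure_counts, key=exposure_counts.get)  # least-exposed topic
    item = next(i for i in items if i["topic"] == topic)
    reason = (f"Recommended because '{topic}' has been seen only "
              f"{exposure_counts[topic]} time(s) so far.")
    return item, reason

lessons = [
    {"topic": "maths", "title": "Fractions"},
    {"topic": "art", "title": "Colour mixing"},
    {"topic": "science", "title": "Plant life cycles"},
]
exposure = {"maths": 5, "art": 1, "science": 3}

item, reason = recommend_with_reason(lessons, exposure)
print(item["title"], "-", reason)
```

The design choice worth noticing is that the reason is produced by the same rule that makes the recommendation, so it cannot drift out of sync with the system’s actual behaviour, which is what inspection-friendly, community-alterable algorithms would require.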
3. Algorithms ensure children’s online safety.
To stem the proliferation of disturbing videos aimed at young children, both Google and Facebook have invested in large numbers of moderators and human content checkers. However, even the administrators of YouTube Kids admit that ‘no filter is 100% accurate’. So relying fully on YouTube’s “restricted mode” will not guarantee that your child never sees disturbing Peppa Pig videos. It is simply a fact of life that there are more potential content creators, regrettably many with bad intentions, than people who would flag abusive videos. Prohibition and censorship by national and international regulatory bodies are not a full solution, either: it would be a step back to embrace a model that ensures children’s safety at the expense of their freedom to explore. A new suite of algorithms, quality checks, and community regulations will need to be developed to balance creative contribution against the protection of vulnerable users.
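The layered approach described above, automated filtering backed by human checks, can be sketched as a simple decision function. The thresholds and categories here are invented for illustration; real moderation pipelines are far more complex, and as noted, none is 100% accurate.

```python
# Hypothetical sketch of layered moderation: an imperfect automated
# score combined with community flags, neither sufficient on its own.
# All thresholds below are invented for illustration.

def moderation_decision(auto_score, human_flags, min_flags=3):
    """Combine a classifier's 0-1 risk score with human reports."""
    if auto_score > 0.9:          # high-confidence automatic block
        return "blocked"
    if human_flags >= min_flags:  # enough community reports: escalate
        return "human review"
    if auto_score > 0.5:          # uncertain: restrict until checked
        return "restricted"
    return "allowed"

print(moderation_decision(auto_score=0.3, human_flags=5))  # → human review
```

The point of the sketch is the gap it leaves open: content scoring below every threshold and attracting too few flags passes through, which is exactly why ‘no filter is 100% accurate’ and why community regulation remains part of the answer.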
We are often told that ‘AI is the future’, but also that a ‘handful of tech companies control millions of minds’. If we want AI algorithms to improve the future for all children, we need to tap the pause button on algorithmic childhood and think critically about what children’s attention is being captured for. We need transparent algorithms that match carefully selected content to children’s preferences and needs, and to those of the wider community. This is not the idealized, technophile view of a few academics. It is an essential condition for ensuring that personalized education does not turn into commercialized education.
The Office of the Children’s Commissioner (2018). Who Knows What about Me? A Children’s Commissioner report into the collection and sharing of children’s data. https://www.childrenscommissioner.gov.uk/wp-content/uploads/2018/11/who-knows-what-about-me.pdf
Kucirkova, N., Fails, J., Pera, S. & Huibers, T. (2018). Algorithms for children: what parents and educators need to know. DigiLitEY, UK: http://digilitey.eu/publications/digilitey-publications
Manolev, J., Sullivan, A. & Slee, R. (2018). The datafication of discipline: ClassDojo, surveillance and a performative classroom culture. Learning, Media and Technology. DOI: 10.1080/17439884.2018.1558237