2022/23: My annual review

I have a tradition, loosely kept since 2009, of writing a short annual review — a look back and a look ahead (something like what UNICEF calls a “strategic moment of reflection”). At the intersection of digital, children and policy, what have I done and how have I tried to provide thought leadership?

Biggest areas of interest: Working with amazing colleagues and experts, I’m analyzing issues that could profoundly impact the future of humanity, especially children and youth, who are the largest online cohort and a driving force of connectivity:

  • Achieving digital equality — how can we better address all the issues (including the non-tech ones) that prevent every child from being able to seize digital opportunities and avoid risks?
  • Shaping the next evolutionary step of the internet — will we move into the virtual reality metaverse, or stay IRL but laden with wearable and embedded technologies in every aspect of our lives (or both)? How can we ensure AI best facilitates how we interact with information and each other? Our Office’s previous work on AI, data governance and personalized learning has been hugely valuable here.
  • Tracking neurotechnology — while it offers unprecedented health benefits for people, like helping those with paralysis move again, will it signal the death of privacy if our thoughts are no longer our own?

In his opening speech at the 2022 UN General Assembly, Secretary-General António Guterres called the “lack of guardrails around promising new technologies to heal disease, connect people and expand opportunity” a “crisis”. The need to unpack what frontier technologies mean for children, and to build those policy guardrails, is pressing today. Positioning UNICEF as a leading organization in this role to help policymakers get ahead of emerging issues is both critical and exciting.

Most impactful moment: Engaging the first cohort of UNICEF Youth Foresight Fellows, a group of bright and talented young futurists, to anticipate global trends. Their insights and ability to see opportunity in crisis were immensely instructive (and refreshing in the current climate of ‘techlash’).

Most brag-worthy: Being a member of the World Economic Forum’s Global Future Council on Artificial Intelligence for Humanity, its Metaverse Governance Working Group, and a contributing expert to MIT Sloan Management Review’s responsible AI initiative.

Most fun: Playing around with AI tools like Dall-E and ChatGPT that generate images and text (see below).

And in 2023 …

Mantra (inspired by the fellows): With young people, shaping the digital future we want beyond putting out fires in the internet we have.

Looking forward to: Working with colleagues in the newly merged UNICEF Innocenti – Global Office of Research and Foresight to bring together the best of research, foresight and policy to better anticipate and direct frontier technologies. Our ambition is nothing less than having children’s rights at the heart of global digital discourse and enabling a future-ready UNICEF. Contributing to the forthcoming Global Digital Compact will be an important moment for influence.

I was hoping for the original Grumpy Cat, but this is pretty cool
Quickly generated during a workshop with the Youth Foresight Fellows
Not bad, ChatGPT

#DigitalBits

As part of my job I get to read a lot, benefiting from links shared by colleagues and listed in great newsletters. Rather than just sharing them on internal Slack channels, in Outlook and in sporadic tweets, I’m starting an experiment to share them here. This is a list of curated links at the intersection of digital, children and policy.

1. Google, Meta, and others will have to explain their algorithms under new EU legislation (The Verge)
Great summary of the Digital Services Act’s key issues, including that minors cannot be subject to targeted advertising

2. Does a Toddler Need an NFT? (NY Times)
“The new frontier of children’s entertainment is internet-native cartoon characters selling nonfungible tokens on social-media apps for tots”

3. Snap CEO Evan Spiegel thinks the metaverse is ‘ambiguous and hypothetical’ (The Verge)
“Our big bet is on the real world, and that people really enjoy spending time together in reality.” He notes that 250 million people engage with AR every day in the Snapchat application alone.

4. Inside the Metaverse: Are You Safe? (Channel 4) (Video, only available in the UK)
“In the metaverse, reporter Yinka Bokinni encounters a thrilling new world, but also a dangerous one, in which some apps expose users to racism, sexually explicit behaviour, and even sexual assault”

5. Apple to roll out child safety feature that scans messages for nudity to UK iPhones (The Guardian)
“Feature that searches messages will go ahead after delays over privacy and safety concerns”

6. Declaration for the Future of the Internet (The White House)
“United States and 60 Global Partners Launch Declaration” – includes safety for children, data protection, fair competition and a trusted digital ecosystem

Two podcast interviews on UNICEF’s AI work

I was recently interviewed on two podcasts about UNICEF’s AI work:

The Lid is On by UN News, along with Jasmina Byrne. It is short — 11 minutes — and has a distinctly UN angle to the conversation. A nice summary of the big issues around AI and children.


Data Science Mixer podcast, which is designed for data scientists and takes a deep dive into well-curated data topics, often with a human interest angle. A bit longer — 32 minutes — this dives more into the technology/data aspect of AI and children.

Three years at UNICEF: Looking back

After over three years at UNICEF, it is time to reflect on achievements and learnings and write a “brag pack” (this looking back is a tradition of mine — see my previous reviews).

I’m the Digital Policy Specialist for UNICEF, based in New York in the Office of Global Insight and Policy (OGIP). The Office serves as an internal think-tank, investigating issues with implications for children, equipping the Organization to more effectively shape global discourse, and preparing it for the future by scanning the horizon for frontier issues and ways of working.

I have tried to do two things since joining UNICEF: focus on key emerging digital issues for children, such as AI, digital literacy, and mis/disinformation, and position the Organization as a thought leader on digital issues for children. Below are some highlights:

Project leadership and innovation on emerging digital issues

AI for children

While AI is a hot topic, not enough attention is paid to its impact on children in policies and systems (see the report, which I co-authored, reviewing how little national AI strategies say about children). I thus helped set up and lead the AI for Children Policy Project, a two-year initiative in partnership with the Ministry of Foreign Affairs (MFA), Finland, that aims to bring about more child-centred AI systems and policies in the world. Working with a stellar team (Melanie Penagos and consultants Prof Virginia Dignum, Dr Klara Pigmans and Eleonore Pauwels, and under the guidance of Jasmina Byrne and Laurence Chandy), I:

  • Developed the work plan for the project, raised the funds for it (the largest external funding for OGIP) and managed the partnership with the MFA.
  • Co-authored the Policy Guidance on AI for Children (a world first).
  • Pioneered a user-centred design approach to policy development within the UN: first we held consultations with experts around the world to inform and ground the guidance; then we released an official draft version, held public consultations on it and — here’s the interesting bit — invited governments and companies to pilot it (acknowledging that we don’t have all the answers in moving from AI policy to practice). From the field learnings, we wrote 8 case studies about what works and what doesn’t, which informed version 2.0 (non-draft) of the policy guidance, released a year later.
  • Oversaw the first UN global consultation with children on AI, led by rock star colleague Kate Pawelczyk, to inform the development of the guidance. Adolescent perspectives on AI documents the findings from nine workshops with 245 children in five countries. A major contribution here is the workshop methodology on how to consult children on AI.
  • Helped to grow and manage an external advisory group for the AI project, including the World Economic Forum, Berkman Klein Center for Internet & Society (Harvard University), IEEE Standards Association, PwC UK and Cetic.br.
  • Hosted the world’s first Global Forum on AI for Children, with 450 participants, to raise awareness of AI’s impact on children and help plot a better AI future.

Achievements: the Government of Scotland has officially adopted the draft policy guidance in its national AI strategy. The policy guidance was shortlisted as a promising responsible AI initiative by the Global Partnership on AI and the Future Society, was nominated for a Harvard Kennedy School Tech Spotlight recognition, and is our Office’s most popular download.

Teen workshop in São Paulo. Credit: © Leandro Martins and Ricardo Matsukawa/NIC Brazil

Digital literacy for children

While many excellent digital literacy initiatives were being driven at UNICEF, the efforts were often ad hoc and not situated within a coherent framework for the Organization. Working with Dr Fabio Nascimbeni and colleagues, we mapped the current digital literacy policy and practice landscape; highlighted existing competence frameworks and how they could be adapted to UNICEF’s needs; surveyed the needs and efforts of UNICEF country offices (a first across the Organization); and offered policy and programme recommendations, including a new definition of digital literacy for UNICEF. Our resulting paper tells all.

Digital mis/disinformation and children

As with AI, mis/disinformation are current and crucially important topics — but the discourse offers little insight into how children are affected. In navigating the digital world, with their cognitive capacities still in development, children are particularly vulnerable to the risks of mis/disinformation. At the same time, they are capable of playing a role in actively countering its flow and in mitigating its adverse effects through online fact-checking and myth-busting. Working with Prof Philip N. Howard, Lisa-Maria Neudert and Nayana Prakash of the Oxford Internet Institute, we authored a report (and 10 Things you need to know) that go beyond simply trying to understand the phenomenon of false and misleading information, to explain how policymakers, civil society, tech companies and parents and caregivers can act to support children as they grow up in a digital world rife with mis/disinformation.

Thought leadership

Helping to share knowledge and steer discourse on key issues:

What’s my big idea?

Digital is only a force for good when it serves all of humanity’s interests, not just those of a privileged few. Meaningful technology use must be for everyone, provide opportunities for development and livelihoods, and support well-being. Technology cannot be only for those who can control and afford it, and it should not constrain opportunity or undermine well-being.

These are not new ideas, but what I have come to believe is that the best way to achieve meaningful digital inclusion is to focus on children and youth. A digital world that works for children works best for everyone. Children under 18 make up one-third of all internet users, and youth (here, 15-24 year olds) are the most online age cohort (globally, 71% use the internet, compared with 57% across all other age groups). And yet, despite being significant user groups, they are the unseen teens. Digital platforms are not sufficiently designed or regulated with or for them.

A focus on children and youth will force platform creators and digital regulators to be more conscious of a range of different user needs – not just privilege the adult experience. It will help them take online child protection more seriously, reduce digital surveillance of children, and think creatively and co-operatively about digital experiences that support children’s well-being. It does not mean “dumbing down” the internet to the lowest common denominator — not every part of the internet is appropriate for children — but rather holding inclusion, protection and empowerment for all as guiding principles.

So far it has been an incredible journey at UNICEF: stimulating, challenging and rewarding, working with amazing people on issues that really impact on children. I look forward to continuing to do work that is pioneering and relevant in the coming years.

New paper: Digital misinformation / disinformation and children

Mis/disinformation are current and crucially important topics — but the discourse offers little insight into how children are affected. In navigating the digital world, with their cognitive capacities still in development, children are particularly vulnerable to the risks of mis/disinformation. At the same time, they are capable of playing a role in actively countering its flow and in mitigating its adverse effects.

Working with Prof Philip N. Howard, Lisa-Maria Neudert and Nayana Prakash of the Oxford Internet Institute, we authored a report that goes beyond simply trying to understand the phenomenon of false and misleading information, to explain how policymakers, civil society, tech companies and parents and caregivers can act to support children as they grow up in a digital world rife with mis/disinformation.

Read the report

Read 10 Things you need to know

Keynote address: Why we need child-centred AI and how we can achieve it

I recently delivered one of the opening keynotes at the Beijing AI Conference in the stream on AI Ethics and Sustainable Development, along with distinguished guests Danit Gal (Cambridge University and former Technology Advisor to the UN), Wendell Wallach (Yale University) and Arisa Ema (University of Tokyo).

I mostly presented the UNICEF work on policy guidance on AI for children – slides here. I tried to convey three key messages:

1. We need AI policies and systems to be child-centred
2. We have ways to do this
3. We all need to get involved

Children now feature in the Proposal for a Regulation on a European approach for AI

On Wednesday, 21 April, the European Commission released its Proposal for a Regulation on a European approach for Artificial Intelligence, the first legal attempt to govern AI. It is wide-ranging, ambitious, and grounded in human rights. The EC likely wants this to set the tone for regulating AI not just within the EU but globally, in the same way GDPR did for data privacy.

A draft version leaked a week before did not mention children once. The Office of Global Insight and Policy (OGIP) was contacted by UNICEF Brussels to see if we could quickly give reactive inputs. Because of our Policy Guidance on AI for Children, and with support from our rock star AI consultant, Prof Virginia Dignum, we could respond within a day. Last Monday UNICEF Brussels submitted the unsolicited inputs to the European Commission. 5Rights Foundation also submitted their asks around children, as did others, I’m sure. We were thrilled to see that the version released that Wednesday has 10 mentions of children – well done EC:

“Furthermore, as applicable in certain domains, the proposal will positively affect the rights of a number of special groups, such as … the rights of the child (Article 24)”

Prohibited practices of AI: “The prohibitions covers practices that have a significant potential to manipulate persons through subliminal techniques beyond their consciousness or exploit vulnerabilities of specific vulnerable groups such as children or persons with disabilities in order to materially distort their behaviour in a manner that is likely to cause them or another person psychological or physical harm.”

“The use of those systems for the purpose of law enforcement should therefore be prohibited, except in three exhaustively listed and narrowly defined situations, where the use is strictly necessary to achieve a substantial public interest, the importance of which outweighs the risks. Those situations involve the search for potential victims of crime, including missing children; …”

“it is important to highlight that children have specific rights as enshrined in Article 24 of the EU Charter and in the United Nations Convention on the Rights of the Child (further elaborated in the UNCRC General Comment No. 25 as regards the digital environment), both of which require consideration of the children’s vulnerabilities and provision of such protection and care as necessary for their well-being.”

“The following artificial intelligence practices shall be prohibited … the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, unless and in as far as such use is strictly necessary for one of the following objectives: (i) the targeted search for specific potential victims of crime, including missing children;”

“When implementing the risk management system described in paragraphs 1 to 7, specific consideration shall be given to whether the high-risk AI system is likely to be accessed by or have an impact on children.”

Note that the regulation will be discussed at the European Parliament and among Member States, and it will take months before final approval. We will continue to engage with the process and advocate, with the support of the Ministry of Foreign Affairs, Finland (and Ambassador Jarmo Sareva), for the inclusion of children and their rights in this AI regulation.

Can education systems anticipate the challenges of AI? (IIEP-UNESCO strategic debate)

I was honoured to be a discussant in the IIEP-UNESCO strategic debate in Paris on the question “Can education systems anticipate the challenges of AI?”

Stuart Elliot, author of the OECD report ‘Computers and the Future of Skill Demand’, was the main presenter, with an exciting framework to help understand the impact of AI on skills and education using the OECD’s PIAAC data.

Stuart’s slides are here; my slides and the full video recording are also available.

Voice for Development: Your Weekend Long Reads

While ICT4D innovates from the ground up, most tech we use comes from the top. Yes, it takes a little time for the prices of commercial services in Silicon Valley to drop sufficiently, and for the tech to diffuse to the audiences we work with, but the internet and mobile have made that wait short indeed.

Next Big Wave After the Keyboard and Touch: Voice

One such innovation is natural language processing, which draws on AI and machine learning to understand human language and to react and respond appropriately.

While this is not a new field, the quality of understanding and speaking has improved dramatically in recent years. The Economist predicts that voice computing, which enables hands-off communication with machines, is the next fundamental wave of human-machine interaction, after the keyboard and then touch.

The prediction is driven by tech advances as well as increasing uptake in the consumer market (note: in developed markets). Last year Apple’s Siri was handling over 2bn commands a week, and 20% of Google searches on Android-powered handsets in America were input by voice.

Alexa Everywhere

Alexa is Amazon’s voice assistant that lives in Amazon devices like Echo and Dot. Well, actually, Alexa lives in the cloud and provides speech recognition and machine learning services to all Alexa-enabled devices.

Unlike Google and Apple, Amazon wants to open up Alexa and have it (her?) embedded into any product, not just those from Amazon. If you’re a manufacturer, you can now buy one of a range of Alexa Development Kits for a few hundred dollars to construct your own voice-controlled products.

Skills Skills Skills

While Amazon works hard to get Alexa into every home, car and device, you can in the meantime start creating Alexa skills. There’s a short Codecademy course on how to do this. It explains that Alexa provides a set of built-in capabilities, referred to as skills, that define how you can interact with the device. For example, Alexa’s built-in skills include playing music, reading the news, getting a weather forecast, and querying Wikipedia. So, you could say things like: “Alexa, what’s the weather in Timbuktu?”

Anyone can develop their own custom skills by using the Alexa Skills Kit (ASK). (The skills can only be used in the UK, US and Germany, presumably for now.) An Amazon user “enables” the skill, after which it works on any of her Alexa-enabled devices. Et voilà, she simply says the wake phrase to access the skill. This is pretty cool.
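To make this concrete, here is a minimal sketch of what a custom skill’s backend can look like: an AWS Lambda handler in Python that parses the JSON request the Alexa service sends and replies with spoken text. The GreetingIntent name and the spoken replies are hypothetical, invented for illustration; a real skill would define its intents in its interaction model in the Alexa developer console.

```python
# Minimal sketch of a custom Alexa skill backend as an AWS Lambda handler.
# "GreetingIntent" is a hypothetical intent name used for illustration only.

def build_response(text, end_session=True):
    """Wrap spoken text in the response envelope the Alexa service expects."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": end_session,
        },
    }

def lambda_handler(event, context):
    request = event["request"]

    # Fired when the user opens the skill by name without asking anything yet.
    if request["type"] == "LaunchRequest":
        return build_response("Welcome! Ask me for a greeting.", end_session=False)

    # Fired when an utterance matches an intent in the skill's interaction model.
    if request["type"] == "IntentRequest" and request["intent"]["name"] == "GreetingIntent":
        return build_response("Hello from your first custom skill!")

    # Fallback for anything this sketch does not handle.
    return build_response("Sorry, I didn't catch that.")
```

Point the skill’s endpoint at a function like this, enable the skill on your account, and the wake phrase routes the user’s words straight to your code.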

What Does This Mean for ICT4D?

Is the day coming, not long from now, when machine-based voice assistants are ICT4D’s greatest helpers? Will it open doors of convenience for all and doors of inclusion for people with low digital skills or literacy? Hmmm. There’s a lot of ground to cover before that happens.

While natural language processing has come a looooong way, it’s still far from perfect. Comments about this abound — this one concerning Question of the Day, a popular Alexa skill:

Alexa sometimes does not hear the answer correctly, even though I try very hard to enunciate. It’s frustrating when I’ve gotten the answer right — not even by guessing, but actually knew it — and Alexa comes back and tells me I’ve gotten it wrong!

In ICT4D, there isn’t always room for error. What about sensitive content and interactions that can easily go awry? Is it likely that soon someone will say Alexa, send 100 dollars to my mother in the Philippines? What if she sends the money to her brother in New Orleans?

Other challenges include Alexa’s language range, cost, the need for online connectivity and, the big one, privacy. There is a risk in being tied to one provider, one tech giant. This stuff should be based on open standards.

Still, it is interesting and exciting to see this move from Amazon and contemplate how it could affect ICT4D. What are your thoughts for how voice for development (V4D) could make a social impact?

Here’s a parting challenge to ICTWorks readers: Try out Alexa skills and tell us whether they’ve got legs for development. An ICT4D skill, if you will. (It can be something simple for now, not “Alexa, eliminate world poverty”.)

Image: CC-BY-NC by Rob Albright

Five Traits of Low-literate Users: Your Weekend Long Reads

We know that the first step to good human-centered design is understanding your user. IDEO calls this having an empathy mindset, the “capacity to step into other people’s shoes, to understand their lives, and start to solve problems from their perspectives.”

Having empathy can be especially challenging in ICT4D since the people we develop solutions for often live in completely different worlds to us, literally and figuratively.

I’m currently drafting a set of guidelines for more inclusive design of digital solutions for low-literate and low-skilled people. (Your expert input on it will be requested soon!) There are many excellent guides to good ICT4D, and the point is not to duplicate efforts here. Rather, it is to focus the lens on the 750 million people who cannot read or write and the 2 billion people who are semi-literate. In other words, likely a significant portion of your target users.

Globally, the offline population is disproportionately rural, poor, elderly and female. They have limited education and low literacy. Of course people who are low-literate and low-skilled do not constitute a homogeneous group, and differences abound across and within communities.

Despite these variances, and while every user is unique, research has revealed certain traits that are common enough to pull out and be useful in developing empathy for this audience. Each has implications for user-centered design processes and the design of digital solutions (the subject of future posts).

Note: much of the research below comes from Indrani Medhi Thies and the teams she has worked with (including Kentaro Toyama) at Microsoft Research India, developing job boards, maps, agri video libraries and more, for low-literates. If you do nothing else, watch her presentation at HCI 2017, an excellent summary of twelve years of research.

Not Just an Inability to Read

Research suggests that low exposure to education means the cognitive skills needed for digital interaction can be underdeveloped. For example, low-literate users can struggle to transfer learning from one setting to another, such as from instructional videos to implementation in real life. They can also find conceptualising and navigating information hierarchies more challenging than well-educated users do (another paper here).

Low-literate Users Are Scared and Sceptical of Tech

Unsurprisingly, low-literate users are not confident in their use of ICTs. This means they can be scared of touching the tech for fear of breaking it. (There are many schools in low-income, rural areas where brand new donated computers are locked up so that nobody uses and damages them!)

Further, even if they don’t break it, they might be seen as not knowing how to use it, causing embarrassment. When they do use tech, they can be easily confused by the UI.

Low-literate users can lack awareness of what digital can deliver, mistrust the technology and doubt that it holds information relevant to their lives.

One of Multiple Users

Low-income people often live in close-knit communities. Social norms and hierarchies influence who has access to what technology, how information flows between community members and who is trusted.

Within families, devices are often shared. And when low-literates use the device, it may be necessary to involve “infomediaries” to assist, for example by reading messages, navigating the UI or troubleshooting the tech. Infomediaries can also hinder the experience when their “filtering and funnelling decisions limit the low-literate users’ information-seeking behaviour.”

The implication is that the “target user” is really plural — the node and all the people around him/her. Your digital solution is really for multiple users and is used in mediated scenarios.

Divided by Gender

Two-thirds of the world’s illiterate population are women. They generally use fewer mobile services than men. In South Asia women are 38% less likely than men to own a mobile phone, and are therefore more likely to be “sharing” users. Cultural, social or religious norms can restrict digital access for women, deepening the gender digital divide. In short, for low-literate and low-income users, gender matters.

Driven by Motivation (Which Can Trump Bad UI)

While we often attribute successful digital usage to good UI, research has shown that motivation is a strong driver of task completion. Despite minimal technical knowledge, urban youth in India hungry for entertainment content traversed as many as 19 steps to Bluetooth music, videos and comedy clips between phones and PCs.

In terms of livelihoods and living, the desires to sell crops for more, have healthier children, access government grants or apply for a visa are the motivators that we need to tap to engage low-literate users.

If “sufficient user motivation towards a goal turns UI barriers into mere speed bumps,” do we pay enough attention to how much our users want what we’re offering? This can make or break a project.

Image: © CC-BY-NC-ND by Simone D. McCourtie / World Bank