2022/23: My annual review

I have a tradition, loosely kept since 2009, of writing a short annual review — a looking back and ahead (something like what UNICEF calls a “strategic moment of reflection”). At the intersection of digital, children and policy, what have I done and how have I tried to provide thought leadership?

Biggest areas of interest: Working with amazing colleagues and experts, I’m analyzing issues that could profoundly impact the future of humanity, especially children and youth who are the largest online cohort and driving force of connectivity:

  • Achieving digital equality — how can we better address all the issues (including the non-tech ones) that prevent every child from being able to seize digital opportunities and avoid risks?
  • Shaping the next evolutionary step of the internet — will it see us moving into the virtual reality metaverse or us staying IRL but laden with wearable and embedded technologies in every aspect of our lives (or both)? How can we ensure AI best facilitates how we interact with information and each other? Our Office’s previous work on AI, data governance and personalized learning has been hugely valuable here.
  • Tracking neurotechnology — while it offers unprecedented health benefits for people, like helping those with paralysis move again, will it signal the death of privacy if our thoughts are no longer our own?

In his opening speech at the 2022 UN General Assembly, Secretary-General António Guterres called the “lack of guardrails around promising new technologies to heal disease, connect people and expand opportunity” a “crisis”. The need to unpack what frontier technologies mean for children, and to build those policy guardrails, is pressing today. Positioning UNICEF as a leading organization in this role to help policymakers get ahead of emerging issues is both critical and exciting.

Most impactful moment: Engaging the first cohort of UNICEF Youth Foresight Fellows, a group of bright and talented young futurists, to anticipate global trends. Their insights and ability to see opportunity in crisis were immensely instructive (and refreshing in the current climate of ‘techlash’).

Most brag-worthy: Being a member of the World Economic Forum’s Global Future Council on Artificial Intelligence for Humanity, its Metaverse Governance Working Group, and a contributing expert to MIT Sloan Management Review’s responsible AI initiative.

Most fun: Playing around with AI tools like Dall-E and ChatGPT that generate images and text (see below).

And in 2023 …

Mantra (inspired by the fellows): With young people, shaping the digital future we want beyond putting out fires in the internet we have.

Looking forward to: Working with colleagues in the newly merged UNICEF Innocenti – Global Office of Research and Foresight to bring together the best of research, foresight and policy to better anticipate and direct frontier technologies. Our ambition is nothing less than having children’s rights at the heart of global digital discourse and enabling a future-ready UNICEF. Contributing to the forthcoming Global Digital Compact will be an important moment for influence.

I was hoping for the original Grumpy Cat, but this is pretty cool
Quickly generated during a workshop with the Youth Foresight Fellows
Not bad, ChatGPT


As part of my job I get to read a lot, benefiting from links shared by colleagues and listed in great newsletters. Rather than just posting them on internal Slack channels, Outlook and sporadic tweets, I’m starting an experiment to share them here. This is a list of curated links at the intersection of digital, children and policy.

1. Google, Meta, and others will have to explain their algorithms under new EU legislation (The Verge)
Great summary of the Digital Services Act’s key issues, including that minors cannot be subject to targeted advertising

2. Does a Toddler Need an NFT? (NY Times)
“The new frontier of children’s entertainment is internet-native cartoon characters selling nonfungible tokens on social-media apps for tots”

3. Snap CEO Evan Spiegel thinks the metaverse is ‘ambiguous and hypothetical’ (The Verge)
“Our big bet is on the real world, and that people really enjoy spending time together in reality.” 250 million people engage with AR every day in the Snapchat application alone.

4. Inside the Metaverse: Are You Safe? (Channel 4) (Video, only available in the UK)
“In the metaverse, reporter Yinka Bokinni encounters a thrilling new world, but also a dangerous one, in which some apps expose users to racism, sexually explicit behaviour, and even sexual assault”

5. Apple to roll out child safety feature that scans messages for nudity to UK iPhones (the Guardian)
“Feature that searches messages will go ahead after delays over privacy and safety concerns”

6. Declaration for the Future of the Internet (The White House)
“United States and 60 Global Partners Launch Declaration” – includes safety for children, data protection, fair competition and a trusted digital ecosystem

Two podcast interviews on UNICEF’s AI work

I was recently interviewed on two podcasts about UNICEF’s AI work:

The Lid is On by UN News, along with Jasmina Byrne. It is short — 11 minutes — and has a distinctly UN angle to the conversation. A nice summary of the big issues around AI and children.


Data Science Mixer podcast, which is designed for data scientists and takes a deep dive into well-curated data topics, often with a human interest angle. A bit longer — 32 minutes — this dives more into the technology/data aspect of AI and children.

Three years at UNICEF: Looking back

After over three years at UNICEF, it is time to reflect on achievements and learnings and write a “brag pack” (this looking back is a tradition of mine — see my previous reviews).

I’m the Digital Policy Specialist for UNICEF, based in New York in the Office of Global Insight and Policy (OGIP). The Office serves as an internal think-tank, investigating issues with implications for children, equipping the Organization to more effectively shape global discourse, and preparing it for the future by scanning the horizon for frontier issues and ways of working.

I have tried to do two things since joining UNICEF: focus on key emerging digital issues for children, such as AI, digital literacy, and mis/disinformation, and position the Organization as a thought leader on digital issues for children. Below are some highlights:

Project leadership and innovation on emerging digital issues

AI for children

While AI is a hot topic, not enough attention is paid in policies and systems to how it impacts children (see the report, which I co-authored, that reviewed how little national AI strategies say about children). I thus helped set up and lead the AI for Children Policy Project, a 2-year initiative in partnership with the Ministry of Foreign Affairs (MFA), Finland, that aims to see more child-centred AI systems and policies in the world. Working with a stellar team (Melanie Penagos and consultants Prof Virginia Dignum, Dr Klara Pigmans and Eleonore Pauwels, and under the guidance of Jasmina Byrne and Laurence Chandy), I:

  • Developed the work plan for the project, raised the funds for it (largest external funding for OGIP) and manage the partnership with the MFA.
  • Co-authored the Policy Guidance on AI for Children (a world first).
  • Pioneered a user-centred design approach to policy development within the UN: first we held consultations with experts around the world to inform and ground the guidance, then we released an official draft version and held public consultations on it as well as — here’s the interesting bit — invited governments and companies to pilot it (acknowledging that we don’t have all the answers in moving from AI policy to practice). From the field learnings we wrote 8 case studies about what works and what doesn’t, which informed version 2.0 (non-draft) of the policy guidance – released a year later.
  • Oversaw the first UN global consultation with children on AI, led by rock star colleague Kate Pawelczyk, to inform the development of the guidance. Adolescent perspectives on AI documents the findings from nine workshops with 245 children in five countries. A major contribution here is the workshop methodology on how to consult children on AI.
  • Helped to grow and manage an external advisory group for the AI project, including the World Economic Forum, Berkman Klein Center for Internet & Society (Harvard University), IEEE Standards Association, PwC UK and Cetic.br.
  • Hosted the world’s first Global Forum on AI for Children with 450 participants to raise awareness of children and AI and help plot a better AI future.

Achievements: the Government of Scotland has officially adopted the draft policy guidance in its national AI strategy. The policy guidance was shortlisted as a promising responsible AI initiative by the Global Partnership on AI and the Future Society, was nominated for a Harvard Kennedy School Tech Spotlight recognition, and is our Office’s most popular download.

Teen workshop in São Paulo. Credit: (c) Leandro Martins and Ricardo Matsukawa/NIC Brazil

Digital literacy for children

While many excellent digital literacy initiatives were being driven at UNICEF, the efforts were often ad hoc and not situated within a coherent framework for the Organization. Working with Dr Fabio Nascimbeni and colleagues, we mapped the current digital literacy policy and practice landscape; highlighted existing competence frameworks and how they could be adapted to UNICEF’s needs; surveyed the needs and efforts of UNICEF country offices (a first across the Organization); and offered policy and programme recommendations, including a new definition of digital literacy for UNICEF. Our resulting paper tells all.

Digital mis/disinformation and children

As with AI, mis/disinformation are current and crucially important topics — but the discourse offers little insight into how children are affected. In navigating the digital world, with their cognitive capacities still in development, children are particularly vulnerable to the risks of mis/disinformation. At the same time, they are capable of playing a role in actively countering its flow and in mitigating its adverse effects through online fact-checking and myth-busting. Working with Prof Philip N. Howard, Lisa-Maria Neudert and Nayana Prakash of the Oxford Internet Institute, we authored a report (and 10 Things you need to know) that goes beyond simply trying to understand the phenomenon of false and misleading information, to explain how policymakers, civil society, tech companies and parents and caregivers can act to support children as they grow up in a digital world rife with mis/disinformation.

Thought leadership

Helping to share knowledge and steer discourse on key issues:

What’s my big idea?

Digital is only a force for good when it serves all of humanity’s interests, not just those of a privileged few. Meaningful technology use must be for everyone, provide opportunities for development and livelihoods, and support well-being. Technology cannot be only for those who can control and afford it, nor should it constrain opportunity or undermine well-being.

These are not new ideas, but what I have come to believe is that the best way to achieve meaningful digital inclusion is to focus on children and youth. A digital world that works for children works best for everyone. Children under 18 make up one-third of all internet users, and youth (here, 15-24 year olds) are the most online age cohort (globally, 71% use the internet, compared with 57% of the other age groups). And yet, despite being significant user groups, they are the unseen teens. Digital platforms are not sufficiently designed or regulated with or for them.

A focus on children and youth will force platform creators and digital regulators to be more conscious of a range of different user needs – not just privilege the adult experience. It will help them take online child protection more seriously, reduce digital surveillance of children, and think creatively and co-operatively about digital experiences that support the well-being of children. It does not mean “dumbing down” the internet to the lowest common denominator — not every part of the internet is appropriate for children — but rather holding inclusion, protection and empowerment for all as guiding principles.

So far it has been an incredible journey at UNICEF: stimulating, challenging and rewarding, working with amazing people on issues that really impact children. I look forward to continuing to do work that is pioneering and relevant in the coming years.

New paper: Digital misinformation / disinformation and children

Mis/disinformation are current and crucially important topics — but the discourse offers little insight into how children are affected. In navigating the digital world, with their cognitive capacities still in development, children are particularly vulnerable to the risks of mis/disinformation. At the same time, they are capable of playing a role in actively countering its flow and in mitigating its adverse effects.

Working with Prof Philip N. Howard, Lisa-Maria Neudert and Nayana Prakash of the Oxford Internet Institute, we authored a report that goes beyond simply trying to understand the phenomenon of false and misleading information, to explain how policymakers, civil society, tech companies and parents and caregivers can act to support children as they grow up in a digital world rife with mis/disinformation.

Read the report

Read 10 Things you need to know

Keynote address: Why we need child-centred AI and how we can achieve it

I recently delivered one of the opening keynotes at the Beijing AI Conference in the stream on AI Ethics and Sustainable Development, along with distinguished guests Danit Gal (Cambridge University and previous Technology Advisor to the UN), Wendell Wallach (Yale University) and Arisa Ema (University of Tokyo).

I mostly presented the UNICEF work on policy guidance on AI for children – slides here. I tried to convey three key messages:

  1. We need AI policies and systems to be child-centred
  2. We have ways to do this
  3. We all need to get involved

Children now feature in the Proposal for a Regulation on a European approach for AI

On Wednesday, 21 April, the European Commission released its Proposal for a Regulation on a European approach for Artificial Intelligence, the first legal attempt to govern AI. It is wide-ranging, ambitious, and grounded in human rights. The EC likely wants this to set the tone for regulating AI not just within the EU but globally, in the same way GDPR did for data privacy.

A draft version leaked a week before did not mention children once. The Office of Global Insight and Policy (OGIP) was contacted by UNICEF Brussels to see if we could quickly give reactive inputs. Because of our Policy Guidance on AI for Children, and with support from our rock star AI consultant, Prof Virginia Dignum, we could respond within a day. Last Monday UNICEF Brussels submitted the unsolicited inputs to the European Commission. 5Rights Foundation also submitted their asks around children, as did others, I’m sure. We were thrilled to see that the version released that Wednesday has 10 mentions of children – well done EC:

“Furthermore, as applicable in certain domains, the proposal will positively affect the rights of a number of special groups, such as … the rights of the child (Article 24)”

Prohibited practices of AI: “The prohibitions covers practices that have a significant potential to manipulate persons through subliminal techniques beyond their consciousness or exploit vulnerabilities of specific vulnerable groups such as children or persons with disabilities in order to materially distort their behaviour in a manner that is likely to cause them or another person psychological or physical harm.”

“The use of those systems for the purpose of law enforcement should therefore be prohibited, except in three exhaustively listed and narrowly defined situations, where the use is strictly necessary to achieve a substantial public interest, the importance of which outweighs the risks. Those situations involve the search for potential victims of crime, including missing children; …”

“it is important to highlight that children have specific rights as enshrined in Article 24 of the EU Charter and in the United Nations Convention on the Rights of the Child (further elaborated in the UNCRC General Comment No. 25 as regards the digital environment), both of which require consideration of the children’s vulnerabilities and provision of such protection and care as necessary for their well-being.”

“The following artificial intelligence practices shall be prohibited … the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, unless and in as far as such use is strictly necessary for one of the following objectives: (i) the targeted search for specific potential victims of crime, including missing children;”

“When implementing the risk management system described in paragraphs 1 to 7, specific consideration shall be given to whether the high-risk AI system is likely to be accessed by or have an impact on children.”

Note that the regulation will be discussed at the European Parliament and among Member States, and it will take months before final approval. We will continue to engage with the process and advocate, with the support of the Ministry of Foreign Affairs, Finland (and Ambassador Jarmo Sareva), for the inclusion of children and their rights in this AI regulation.

Can education systems anticipate the challenges of AI? (IIEP-UNESCO strategic debate)

I was honoured to be a discussant in the IIEP-UNESCO strategic debate in Paris on the question Can education systems anticipate the challenges of AI?

Stuart Elliot, author of the OECD report ‘Computers and the Future of Skill Demand’, was the main presenter, with an exciting framework to help understand the impact of AI on skills and education using the OECD’s PIAAC data.

Stuart’s slides are here. My slides and the full video recording are also available.

The Pros and Cons of Digitization on Jobs – Your Weekend Long Reads

Digitization will have profound effects on the world of work. According to McKinsey, while technology will lift productivity and economic growth, up to 375 million people may need to switch occupations or upgrade their skills. Predictions such as this cause much optimism as well as anxiety and head scratching for policymakers, trainers and employees.

Should the impacts of digitization concern the ICT4D community? Yes, for two reasons. Firstly, there will be new opportunities for our work and, secondly, we may unwittingly become part of the negative impacts that are projected.

The issue of technology and the future of work is one of the hottest topics in development right now. Last year the OECD published the book Computers and the Future of Skill Demand. The Brookings Institution is about to release a book titled The Future of Work: Robots, AI, and Automation. In 2019 the World Bank’s flagship World Development Report will be on the changing nature of work (working draft here).

Two themes cut across all global discussions: firstly, that technology is changing the workplace by replacing some human jobs or, as is more often the case, some parts of a person’s job. Automation, robotics and AI are the most cited advances.

Secondly, this change is affecting the supply and demand of skills in an economy. New ways of working require new skills, which training institutions are not yet providing to students. And as the training institutions catch up and supply the market with newly skilled workers, there may be new and different demands again.

Skills mismatch between industry and education providers is not new, but the issue has become more pronounced with accelerating technological change. What is undisputed is that digital skills are becoming essential for many people living and working today – and will only become more so in the future.

Opportunities for Work Improvement

The ICT4D community essentially uses technology for social good:

  • To empower community health workers (CHWs) with just-in-time information and digital data collection to help them do their jobs better.
  • To develop farmers’ knowledge on what new crops to grow and when.
  • To transport lifesaving medical supplies faster using drones.
  • To analyse mobile phone records as a predictor of literacy levels, informing policy decisions.

In the process, we increase access to technology and – hopefully – provide necessary training. Digital technologies and skills are the tools of our trade and increased digitization is a rising tide that will lift all the boats, especially ours.

We often measure impact in positive behaviour change, increased quality of work or more efficient processes. What we mostly don’t think about is ICT4D’s role in the creation of new businesses and jobs.

A few years ago, when considering what the post-2015 agenda should entail, Richard Heeks proposed that “ICT4D needs to link to the growth and jobs agenda in a much larger and much more direct manner around ICTs and income growth, ICTs and productivity, and ICTs and job creation.”

Heeks noted that ICTs have a central role in all of these areas in the 21st century – but also that you would be hard-pressed to notice from the ICT4D domain.

As the global debate on ICTs, work and education rages on, we need to engage and reflect more on our positive contribution. Supporting local tech entrepreneurs and teaching coding is one obvious example. There are many more.

Risks of Job Destruction

While the ICT4D field is different from the profit-driven marketplace (our operating motives are not usually revenue-based), we would be naive to think that our reliance on bringing technology solutions to development problems cannot have similar impacts to what is happening in the workplaces of the world. The underlying principle is the same: use technology to improve efficiency, increase efficacy and reduce redundancy.

Now we know that we need to be conscious of the potential negative effects of ICT4D. The principle of understanding the existing ecosystem says that we should:

Evaluate for intended and unintended outcomes, as well as other contributing factors that may account for results. Unintended consequences could be either positive or negative and could provide useful ecosystem insights to carry forward to future deployments.

But, frankly, there’s very little said about negative consequences. The default position in ICT4D is that good will be done, development will be effected. Taking this view can blind us to the reality of unintended negative consequences.

It is worth asking:

  • Has our intervention upended a livelihoods ecosystem in some way? Has it replaced agricultural extension workers? Or required them to retrain in new skills, but without having the learning opportunities to do so?
  • Have some jobs been destroyed, or at least some tasks of some jobs — that weren’t mirrored with the creation of new jobs? What has been the impact of that?

The classic case of job destruction, or at least job adjustment, by ICT4D is that of the middle man/person who buys crops at deflated prices, essentially squeezing the farmer suppliers as much as possible.

An ICT4D team provides an m-agri service that offers actual market prices to these farmers, helping them to get a fairer price from the evil middle man/person. His or her working practice changes, or perhaps he or she is cut out of the value chain altogether, unable to earn enough from being a middle man/person.

Because getting a fair price is a just cause, we don’t really care about the middle man/person. But consider another example about an mHealth intervention that uses data analysis and AI to predict where a CHW is most needed on her circuit of villages. So accurate is the model that she now doesn’t need to visit every village on every round, only about half. The intervention means only half as many CHWs will be needed. 50% of the team is laid off.
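To make the scenario concrete, here is a minimal sketch of the kind of model it imagines: villages on a CHW’s circuit are scored by predicted need, and only the highest-need half are visited each round. The village names, the weights and the scoring rule are all invented for illustration — a real system would learn these from health data.

```python
def need_score(village):
    """Crude predicted-need score: recent cases are weighted more
    heavily than time since the last visit (weights are illustrative)."""
    return 2.0 * village["recent_cases"] + 0.5 * village["days_since_visit"]

def plan_round(villages, visit_fraction=0.5):
    """Rank villages by predicted need and return only the top
    fraction to visit this round -- the rest are skipped."""
    ranked = sorted(villages, key=need_score, reverse=True)
    cutoff = max(1, round(len(ranked) * visit_fraction))
    return [v["name"] for v in ranked[:cutoff]]

# A hypothetical four-village circuit
villages = [
    {"name": "A", "recent_cases": 4, "days_since_visit": 10},
    {"name": "B", "recent_cases": 0, "days_since_visit": 3},
    {"name": "C", "recent_cases": 1, "days_since_visit": 30},
    {"name": "D", "recent_cases": 0, "days_since_visit": 7},
]

print(plan_round(villages))  # half the circuit is skipped this round
```

Even this toy version shows the labour effect in the essay: a `visit_fraction` of 0.5 halves the visits required, which is exactly the efficiency gain that can translate into fewer CHW jobs.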

Automation and new technologies destroy and create jobs. The Gutenberg press put many hand copiers out of work but created an industry of typesetters. It is not yet clear what the net effect will be on employment numbers. But what is certain is major change, and ICT4D practitioners are active drivers of it.

Really Considering Unintended Consequences

If an ICT4D intervention does replace the activities of someone it’s trying to help, we need to think through the longer-term impact on his or her role. In a worst-case scenario, could it eventually put them out of a job? Doing a risk/benefit analysis, even a basic one, can help map out the potential outcomes. Involving the users in the discussions is ideal.

One of the best approaches to mitigate such a risk is to support lifelong learning. By helping people to retrain in new skills, we can help them stay relevant.

Image: CC Nicolas Bertrand / Taimani Films / World Bank