#DigitalBits

As part of my job I get to read a lot, benefiting from links shared by colleagues and listed in great newsletters. Rather than just sharing them on internal Slack channels, in Outlook threads and in sporadic tweets, I’m starting an experiment to share them here. This is a list of curated links at the intersection of digital, children and policy.

1. Google, Meta, and others will have to explain their algorithms under new EU legislation (The Verge)
A great summary of the Digital Services Act’s key issues, including that minors cannot be subject to targeted advertising.

2. Does a Toddler Need an NFT? (NY Times)
“The new frontier of children’s entertainment is internet-native cartoon characters selling nonfungible tokens on social-media apps for tots”

3. Snap CEO Evan Spiegel thinks the metaverse is ‘ambiguous and hypothetical’ (The Verge)
“Our big bet is on the real world, and that people really enjoy spending time together in reality.” Some 250 million people engage with AR every day in the Snapchat application alone.

4. Inside the Metaverse: Are You Safe? (Channel 4) (Video, only available in the UK)
“In the metaverse, reporter Yinka Bokinni encounters a thrilling new world, but also a dangerous one, in which some apps expose users to racism, sexually explicit behaviour, and even sexual assault”

5. Apple to roll out child safety feature that scans messages for nudity to UK iPhones (The Guardian)
“Feature that searches messages will go ahead after delays over privacy and safety concerns”

6. Declaration for the Future of the Internet (The White House)
“United States and 60 Global Partners Launch Declaration” – includes safety for children, data protection, fair competition and a trusted digital ecosystem

Two podcast interviews on UNICEF’s AI work

I was recently interviewed on two podcasts about UNICEF’s AI work:

The Lid is On by UN News, along with Jasmina Byrne. It is short — 11 minutes — and has a distinctly UN angle to the conversation. A nice summary of the big issues around AI and children.


Data Science Mixer podcast, which is designed for data scientists and takes a deep dive into well-curated data topics, often with a human interest angle. A bit longer — 32 minutes — this dives more into the technology/data aspect of AI and children.

Three years at UNICEF: Looking back

After over three years at UNICEF, it is time to reflect on achievements and learnings and write a “brag pack” (this looking back is a tradition of mine — see my previous reviews).

I’m the Digital Policy Specialist for UNICEF, based in New York in the Office of Global Insight and Policy (OGIP). The Office serves as an internal think-tank, investigating issues with implications for children, equipping the Organization to more effectively shape global discourse, and preparing it for the future by scanning the horizon for frontier issues and ways of working.

I have tried to do two things since joining UNICEF: focus on key emerging digital issues for children, such as AI, digital literacy, and mis/disinformation, and position the Organization as a thought leader on digital issues for children. Below are some highlights:

Project leadership and innovation on emerging digital issues

AI for children

While AI is a hot topic, not enough attention is paid to how it impacts children in policies and systems (see the report, which I co-authored, that reviewed how little national AI strategies say about children). I thus helped set up and lead the AI for Children Policy Project, a 2-year initiative in partnership with the Ministry of Foreign Affairs (MFA), Finland, that aims to see more child-centred AI systems and policies in the world. Working with a stellar team (Melanie Penagos and consultants Prof Virginia Dignum, Dr Klara Pigmans and Eleonore Pauwels, and under the guidance of Jasmina Byrne and Laurence Chandy), I:

  • Developed the work plan for the project, raised the funds for it (the largest external funding for OGIP) and manage the ongoing partnership with the MFA.
  • Co-authored the Policy Guidance on AI for Children (a world first).
  • Pioneered a user-centred design approach to policy development within the UN: first we held consultations with experts around the world to inform and ground the guidance; then we released an official draft version, held public consultations on it and — here’s the interesting bit — invited governments and companies to pilot it (acknowledging that we don’t have all the answers in moving from AI policy to practice). From the field learnings we wrote 8 case studies about what works and what doesn’t, which informed version 2.0 (non-draft) of the policy guidance, released a year later.
  • Oversaw the first UN global consultation with children on AI, led by rock star colleague Kate Pawelczyk, to inform the development of the guidance. Adolescent perspectives on AI documents the findings from nine workshops with 245 children in five countries. A major contribution here is the workshop methodology on how to consult children about AI.
  • Helped to grow and manage an external advisory group for the AI project, including the World Economic Forum, Berkman Klein Center for Internet & Society (Harvard University), IEEE Standards Association, PwC UK and Cetic.br.
  • Hosted the world’s first Global Forum on AI for Children, with 450 participants, to raise awareness of AI’s impacts on children and to help plot a better AI future.

Achievements: the Government of Scotland has officially adopted the draft policy guidance in its national AI strategy. The policy guidance was shortlisted as a promising responsible AI initiative by the Global Partnership on AI and The Future Society, was nominated for a Harvard Kennedy School Tech Spotlight recognition, and is our Office’s most popular download.

Teen workshop on artificial intelligence in São Paulo. Credit: © Leandro Martins and Ricardo Matsukawa/NIC Brazil

Digital literacy for children

While many excellent digital literacy initiatives were being driven at UNICEF, the efforts were often ad hoc and not situated within a coherent framework for the Organization. Working with Dr Fabio Nascimbeni and colleagues, we mapped the current digital literacy policy and practice landscape; highlighted existing competence frameworks and how they could be adapted to UNICEF’s needs; surveyed the needs and efforts of UNICEF country offices (a first across the Organization); and offered policy and programme recommendations, including a new definition of digital literacy for UNICEF. Our resulting paper tells all.

Digital mis/disinformation and children

As with AI, mis/disinformation are current and crucially important topics — but the discourse offers little insight into how children are affected. In navigating the digital world, with their cognitive capacities still in development, children are particularly vulnerable to the risks of mis/disinformation. At the same time, they are capable of playing a role in actively countering its flow and in mitigating its adverse effects through online fact-checking and myth-busting. Working with Prof Philip N. Howard, Lisa-Maria Neudert and Nayana Prakash of the Oxford Internet Institute, we authored a report (and an accompanying 10 Things you need to know) that goes beyond simply trying to understand the phenomenon of false and misleading information, to explain how policymakers, civil society, tech companies, and parents and caregivers can act to support children as they grow up in a digital world rife with mis/disinformation.

Thought leadership

Helping to share knowledge and steer discourse on key issues:

What’s my big idea?

Digital is only a force for good when it serves all of humanity’s interests, not just those of a privileged few. Meaningful technology use must be for everyone, provide opportunities for development and livelihoods, and support well-being. Technology cannot be only for those who can control and afford it; it should not constrain opportunity or undermine well-being.

These are not new ideas, but what I have come to believe is that the best way to achieve meaningful digital inclusion is to focus on children and youth. A digital world that works for children works best for everyone. Children under 18 make up one-third of all internet users, and youth (here, 15-24 year olds) are the most online age cohort (globally, 71% use the internet, compared with 57% of the other age groups). And yet, despite being significant user groups, they are the unseen teens. Digital platforms are not sufficiently designed or regulated with or for them.

A focus on children and youth will force platform creators and digital regulators to be more conscious of a range of different user needs, not just privilege the adult experience. It will help them take online child protection more seriously, reduce digital surveillance of children, and think creatively and co-operatively about digital experiences that support the well-being of children. It does not mean “dumbing down” the internet to the lowest common denominator — not every part of the internet is appropriate for children — but rather holding inclusion, protection and empowerment for all as guiding principles.

So far it has been an incredible journey at UNICEF: stimulating, challenging and rewarding, working with amazing people on issues that really impact children. I look forward to continuing to do pioneering and relevant work in the coming years.

New paper: Digital misinformation / disinformation and children

Mis/disinformation are current and crucially important topics — but the discourse offers little insight into how children are affected. In navigating the digital world, with their cognitive capacities still in development, children are particularly vulnerable to the risks of mis/disinformation. At the same time, they are capable of playing a role in actively countering its flow and in mitigating its adverse effects.

Working with Prof Philip N. Howard, Lisa-Maria Neudert and Nayana Prakash of the Oxford Internet Institute, we authored a report that goes beyond simply trying to understand the phenomenon of false and misleading information, to explain how policymakers, civil society, tech companies and parents and caregivers can act to support children as they grow up in a digital world rife with mis/disinformation.

Read the report

Read 10 Things you need to know

Keynote address: Why we need child-centred AI and how we can achieve it

I recently delivered one of the opening keynotes at the Beijing AI Conference in the stream on AI Ethics and Sustainable Development, along with distinguished guests Danit Gal (Cambridge University and previous Technology Advisor to the UN), Wendell Wallach (Yale University) and Arisa Ema (University of Tokyo).

I mostly presented the UNICEF work on policy guidance on AI for children – slides here. I tried to convey three key messages:

1. We need AI policies and systems to be child-centred
2. We have ways to do this
3. We all need to get involved

Children now feature in the Proposal for a Regulation on a European approach for AI

On Wednesday, 21 April, the European Commission released its Proposal for a Regulation on a European approach for Artificial Intelligence, the first legal attempt to govern AI. It is wide-ranging, ambitious, and grounded in human rights. The EC likely wants this to set the tone for regulating AI not just within the EU but globally, in the same way GDPR did for data privacy.

A draft version leaked a week before did not mention children once. The Office of Global Insight and Policy (OGIP) was contacted by UNICEF Brussels to see if we could quickly give reactive inputs. Because of our Policy Guidance on AI for Children, and with support from our rock star AI consultant, Prof Virginia Dignum, we could respond within a day. Last Monday UNICEF Brussels submitted the unsolicited inputs to the European Commission. 5Rights Foundation also submitted their asks around children, as did others, I’m sure. We were thrilled to see that the version released that Wednesday has 10 mentions of children – well done EC:

“Furthermore, as applicable in certain domains, the proposal will positively affect the rights of a number of special groups, such as … the rights of the child (Article 24)”

Prohibited practices of AI: “The prohibitions covers practices that have a significant potential to manipulate persons through subliminal techniques beyond their consciousness or exploit vulnerabilities of specific vulnerable groups such as children or persons with disabilities in order to materially distort their behaviour in a manner that is likely to cause them or another person psychological or physical harm.”

“The use of those systems for the purpose of law enforcement should therefore be prohibited, except in three exhaustively listed and narrowly defined situations, where the use is strictly necessary to achieve a substantial public interest, the importance of which outweighs the risks. Those situations involve the search for potential victims of crime, including missing children; …”

“it is important to highlight that children have specific rights as enshrined in Article 24 of the EU Charter and in the United Nations Convention on the Rights of the Child (further elaborated in the UNCRC General Comment No. 25 as regards the digital environment), both of which require consideration of the children’s vulnerabilities and provision of such protection and care as necessary for their well-being.”

“The following artificial intelligence practices shall be prohibited … the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, unless and in as far as such use is strictly necessary for one of the following objectives: (i) the targeted search for specific potential victims of crime, including missing children;”

“When implementing the risk management system described in paragraphs 1 to 7, specific consideration shall be given to whether the high-risk AI system is likely to be accessed by or have an impact on children.”

Note that the regulation will be discussed in the European Parliament and among Member States, and final approval will take months. We will continue to engage with the process and advocate, with the support of the Ministry of Foreign Affairs, Finland (and Ambassador Jarmo Sareva), for the inclusion of children and their rights in this AI regulation.

Design Journey of a Mobile Learning Tool

The Design Journey: Creating a Mobile Test Prep Application in South Africa is an ICTworks post I wrote with Nicola du Toit. The post describes the development of a mobile-based test and exam revision tool for high school learners called X-kit Achieve! Mobile. Nicola was the UX Designer at Pearson and I was the project lead.

Can education systems anticipate the challenges of AI? (IIEP-UNESCO strategic debate)

I was honoured to be a discussant in the IIEP-UNESCO strategic debate in Paris on the question Can education systems anticipate the challenges of AI?

Stuart Elliott, author of the OECD report ‘Computers and the Future of Skill Demand’, was the main presenter, with an exciting framework to help understand the impact of AI on skills and education using the OECD’s PIAAC data.

Stuart’s slides are here, as are my slides and the full video recording.

The Pros and Cons of Digitization on Jobs – Your Weekend Long Reads

Digitization will have profound effects on the world of work. According to McKinsey, while technology will lift productivity and economic growth, up to 375 million people may need to switch occupations or upgrade their skills. Predictions such as this cause much optimism as well as anxiety and head scratching for policymakers, trainers and employees.

Should the impacts of digitization concern the ICT4D community? Yes, for two reasons: firstly, there will be new opportunities for our work and, secondly, we may unwittingly become part of the negative impacts that are projected.

The issue of technology and the future of work is one of the hottest topics in development right now. Last year the OECD published the book Computers and the Future of Skill Demand. The Brookings Institution is about to release a book titled The Future of Work: Robots, AI, and Automation. In 2019 the World Bank’s flagship World Development Report will be on the changing nature of work (working draft here).

Two themes cut across all global discussions: firstly, technology is changing the workplace by replacing some human jobs or, as is more often the case, some parts of a person’s job. Automation, robotics and AI are the most cited advances.

Secondly, this change is affecting the supply and demand of skills in an economy. New ways of working require new skills, which training providers are not yet teaching students. By the time training institutions catch up and supply the market with newly skilled workers, demand may have shifted yet again.

A skills mismatch between industry and education providers is not new, but the issue has become more pronounced with accelerating technological change. What is undisputed is that digital skills are becoming essential for many people living and working today – and will only become more so in the future.

Opportunities for Work Improvement

The ICT4D community essentially uses technology for social good:

  • To empower community health workers (CHWs) with just-in-time information and digital data collection to help them do their jobs better.
  • To develop farmers’ knowledge on what new crops to grow and when.
  • To transport lifesaving medical supplies faster using drones.
  • To analyse mobile phone records as a predictor of literacy levels, informing policy decisions.

In the process, we increase access to technology and – hopefully – provide the necessary training. Digital technologies and skills are the tools of our trade, and increased digitization is a rising tide that will lift all boats, especially ours.

We often measure impact in positive behaviour change, increased quality of work or more efficient processes. What we mostly don’t think about is ICT4D’s role in the creation of new businesses and jobs.

A few years ago, when considering what the post-2015 agenda should entail, Richard Heeks proposed that “ICT4D needs to link to the growth and jobs agenda in a much larger and much more direct manner around ICTs and income growth, ICTs and productivity, and ICTs and job creation.”

Heeks noted that ICTs have a central role in all of these areas in the 21st century – but also that you would be hard-pressed to notice from the ICT4D domain.

As the global debate on ICTs, work and education rages on, we need to engage and reflect more on our positive contribution. Supporting local tech entrepreneurs and teaching coding is one obvious example. There are many more.

Risks of Job Destruction

While the ICT4D field is different from the profit-driven marketplace (our operating motives are not usually revenue-based), we would be naive to think that our reliance on bringing technology solutions to development problems cannot have impacts similar to what is happening in the workplaces of the world. The underlying principle is the same: use technology to improve efficiency, increase efficacy and reduce redundancy.

Now we know that we need to be conscious of the potential negative effects of ICT4D. The principle of understanding the existing ecosystem says that we should:

Evaluate for intended and unintended outcomes, as well as other contributing factors that may account for results. Unintended consequences could be either positive or negative and could provide useful ecosystem insights to carry forward to future deployments.

But, frankly, there’s very little said about negative consequences. The default position in ICT4D is that good will be done, development will be effected. Taking this view can blind us to the reality of unintended negative consequences.

It is worth asking:

  • Has our intervention upended a livelihoods ecosystem in some way? Has it replaced agricultural extension workers? Or required them to retrain in new skills, but without having the learning opportunities to do so?
  • Have some jobs been destroyed, or at least some tasks of some jobs, without being mirrored by the creation of new jobs? What has been the impact of that?

The classic case of job destruction, or at least job adjustment, by ICT4D is that of the middle man/person who buys crops at deflated prices, essentially squeezing the farmer suppliers as much as possible.

An ICT4D team provides an m-agri service that offers actual market prices to these farmers, helping them get a fairer price from the evil middle man/person. His or her working practice changes, or perhaps he or she is cut out of the value chain altogether, unable to earn enough from being a middle man/person.

Because getting a fair price is a just cause, we don’t really care about the middle man/person. But consider another example: an mHealth intervention that uses data analysis and AI to predict where a CHW is most needed on her circuit of villages. So accurate is the model that she no longer needs to visit every village on every round, only about half. The intervention means only half as many CHWs will be needed. 50% of the team is laid off.
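To make the mechanism concrete, here is a minimal sketch, in Python, of how such a model could cut visits: assume it outputs a predicted-need score per village, and the CHW only visits villages above a threshold. The scores, threshold and village names are invented for illustration; no real mHealth deployment is being described.

```python
# Hypothetical illustration: a model scores each village on the CHW's
# circuit with the predicted probability that a visit is needed this round.
villages = {
    "A": 0.91, "B": 0.15, "C": 0.78, "D": 0.22, "E": 0.67, "F": 0.09,
}

THRESHOLD = 0.5  # invented cut-off; a real system would tune this carefully

# Visit only the villages the model flags as needing attention.
to_visit = sorted(v for v, need in villages.items() if need >= THRESHOLD)
print(f"Visit {len(to_visit)} of {len(villages)} villages: {to_visit}")
# Output: Visit 3 of 6 villages: ['A', 'C', 'E']
```

Halving the visits per round is precisely what halves the demand for CHWs; the efficiency gain and the job loss are the same arithmetic.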

Automation and new technologies destroy and create jobs. The Gutenberg press put many hand copiers out of work but created an industry of typesetters. It is not yet clear what the net effect will be on employment numbers. But what is certain is major change, and ICT4D practitioners are active drivers of it.

Really Considering Unintended Consequences

If an ICT4D intervention does replace the activities of someone it’s trying to help, we need to think through the longer-term impact on his or her role. In a worst-case scenario, could it eventually put them out of a job? Doing a risk/benefit analysis, even a basic one, can help map out the possibilities. Involving the users in the discussions is ideal.

One of the best approaches to mitigating such a risk is to support lifelong learning. By helping people retrain in new skills, we can help them stay relevant.

Image: CC Nicolas Bertrand / Taimani Films / World Bank

Can ICT4D Have a Cambridge Analytica-Facebook Moment? Your Weekend Long Reads


Facebook currently has a Cambridge Analytica problem. It is under severe pressure to explain how 87 million users had their personal data leaked, and to offer assurances that it will not happen again. Beyond the US, Cambridge Analytica has been a player in multiple elections in Kenya and Nigeria.

This month Mark Zuckerberg testified before the US Congress and the biggest revelation of that episode was that America’s lawmakers have very little understanding of how Facebook works, and missed a key opportunity to engage deeply with the problems at the heart of Facebook’s business model and practices.

Thanks to the overall weak line of questioning, Zuckerberg’s net worth rose $3 billion during the testimony.

Deleting Isn’t An Option
Users are outraged, and some are deleting their accounts as part of the #DeleteFacebook movement. It seems, though, that even while many people get angry, most don’t do much more than utter a tut-tut.

It’s worth remembering that to actually delete your Facebook account is a privilege, as New York Times reporter Sheera Frenkel tweeted. “For much of the world, Facebook is the internet and only way to connect to family/friend/business.”

From an ICT4D perspective, the people we serve, who count on us to know how the tech and the data work, need Facebook. And indeed, so do we in our ICT4D offerings through WhatsApp, Messenger and Groups.

Many ICT4D orgs continue to ride the wave of the stellar uptake of Facebook and the services it owns, utilising the reach, communication and engagement opportunities these offer, for example through Facebook’s Free Basics.

We Do No Harm, Right?
Can the ICT4D movement have its own Facebook-Cambridge Analytica moment? The answer is yes, of course, and to prevent it, or at least delay it, we need to focus vigilantly on data privacy and interrogate the choices we make in offering our services.

Knowing that using external platforms that vacuum up data can be potentially hazardous, the ICT4D community needs to reaffirm its commitment to do no harm, to ensure data privacy and security.

We’re the good guys: we are transparent with individuals whose data are collected by explaining how our initiatives will use and protect their data; we protect their data; our consent forms are written in the local language and are easily understood by the individuals whose data are being collected.

Nice words, but do we really implement them?

How Careful Are We?
Below are a few questions to ponder in the context of Cambridge Analytica-Facebook.

  • Access: WIRED magazine shows you how to download and read your Facebook data. Does your app or service allow users to do the same?
  • Clarity: Come 25 May 2018, the General Data Protection Regulation (GDPR) will require any company serving EU citizens to be very clear about what data they are collecting and what it will be used for. Users will be able to have their data removed or changed, or demand an explanation of how it’s being used to profile them. This is a major law for the rights of the user (well done, European Commission!): How do we comply? How clear are our research ethics forms, or the terms of use on our websites? How about comics to explain Ts&Cs?
  • Recourse: Again, drawing on the GDPR (you can tell I’m a big fan), how easy is it for our users to contact us, request that their data be removed, or ask for the algorithm that profiles them to be explained? Do we have the capacity to meet these demands? (A minimal sketch of what these operations could look like in code follows this list.)
  • Protection: Where is the data that you collect about users? What measures have you put in place to safeguard it?
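To make the Access and Recourse questions concrete, below is a minimal sketch, in Python, of the two operations any data-collecting service should be able to perform: export everything held about a user, and erase it on request, with both actions logged. The DataStore class, its fields and the in-memory storage are hypothetical stand-ins invented for illustration; having these methods is necessary but nowhere near sufficient for GDPR compliance.

```python
import json
from datetime import datetime, timezone


class DataStore:
    """Hypothetical in-memory store; a real service would use a database."""

    def __init__(self):
        self._records = {}    # user_id -> dict of personal data held
        self._audit_log = []  # every access/erasure request, timestamped

    def store_record(self, user_id: str, data: dict) -> None:
        """Collect personal data (gathered with informed consent)."""
        self._records[user_id] = data

    def export_user_data(self, user_id: str) -> str:
        """'Access': return everything held about a user, machine-readable."""
        self._log(user_id, "export")
        record = self._records.get(user_id, {})
        return json.dumps({"user_id": user_id, "data": record}, indent=2)

    def erase_user_data(self, user_id: str) -> bool:
        """'Recourse': honour a deletion request and log that we did so."""
        existed = self._records.pop(user_id, None) is not None
        self._log(user_id, "erase")
        return existed

    def _log(self, user_id: str, action: str) -> None:
        self._audit_log.append({
            "user": user_id,
            "action": action,
            "at": datetime.now(timezone.utc).isoformat(),
        })


if __name__ == "__main__":
    store = DataStore()
    store.store_record("farmer-042", {"village": "X", "price_queries": 17})
    print(store.export_user_data("farmer-042"))  # the user sees what we hold
    print(store.erase_user_data("farmer-042"))   # True: the data is gone
```

Even this toy version surfaces the real questions in the list above: who answers the requests, where the audit trail lives, and whether the Protection question (where the data physically sits and how it is safeguarded) has an answer at all.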

Terms and conditions are long documents. If US users were to read every privacy policy on every website they visited in a year, it would take them 25 days to complete. Unsurprisingly, most people don’t read the damned things. How much less, then, can we expect of someone who signs with their thumbprint?

We really need to be very creative in solving these challenges.

How Are You Transparent and Safe?
So, how is your project practicing radical transparency? Have you had to explain your actions to your users? Have you been asked to delete data? Pre-emptively, in what ways have you engaged the community to explain exactly what you are doing?

Please do share your experiences.

There is value in creating templates for radically understandable ethics forms, data-download processes and explanations.

While the scale of risk is lower for us than for Facebook, based on the sheer number of affected users, the issues are no less grave. Perhaps in ICT4D, because we often come as non-profits and development agents rather than as commercial entities, data protection matters even more than it does for Facebook. We come as people who are there to help. If we fail to do no harm, how terrible is that!?

We need to make sure our house is in order before it’s too late.