Can education systems anticipate the challenges of AI? (IIEP-UNESCO strategic debate)

I was honoured to be a discussant in the IIEP-UNESCO strategic debate in Paris on the question ‘Can education systems anticipate the challenges of AI?’

Stuart Elliot, author of the OECD report ‘Computers and the Future of Skill Demand’, was the main presenter, offering an exciting framework for understanding the impact of AI on skills and education using the OECD’s PIAAC data.

Stuart’s slides are here. My slides and the full video recording are also available.

Algorithmic Accountability is Possible in ICT4D

As we saw recently, when it comes to big data for public services, there needs to be algorithmic accountability. People need to understand not only what data is being used, but also what analysis is being performed on it and for what purpose.

Further, complementing big data with thick, adjacent and lean data also helps to tell a more complete story of analysis. These posts piqued much interest and so this third and final instalment on data offers a social welfare case study of how to be transparent with algorithms.

A Predictive Data Tool for a Big Problem

The Allegheny County Department of Human Services (DHS) in Pennsylvania, USA, screens calls about the welfare of local children. The DHS receives around 15,000 calls per year for a county of 1.2 million people. With limited resources to deal with this volume of calls, limited data to work with, and each decision a tough and important one to make, it is critical to prioritize the highest-need cases for investigation.

To help, the Allegheny Family Screening Tool was developed. It’s a predictive-risk modeling algorithm built to make better use of data already available in order to help improve decision-making by social workers.

For each call, the tool draws on a number of different data sources, including databases from local housing authorities, the criminal justice system and local school districts, to produce a Family Screening Score. The score is a prediction of the likelihood of future abuse.

The tool is there to help analyse and connect a large number of data points to better inform human decisions. Importantly, the algorithm doesn’t replace clinical judgement by social workers – except when the score is at the highest levels, in which case the call must be investigated.
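
To make the mechanics concrete, here’s a toy sketch in Python of a predictive-risk score with a mandatory screen-in threshold. To be clear: the features, weights and threshold below are invented for illustration and are not the county’s actual model, which learns its weights from historical outcome data and is described in its own public methodology paper.

```python
# Toy sketch only: hypothetical features, weights and threshold.
# A real predictive-risk model learns its weights from historical
# outcome data across many linked databases.
from dataclasses import dataclass

@dataclass
class CallRecord:
    prior_referrals: int        # hypothetical feature
    school_absences: int        # hypothetical feature
    housing_instability: bool   # hypothetical feature

# Hand-picked illustrative weights, standing in for learned ones.
WEIGHTS = {
    "prior_referrals": 1.0,
    "school_absences": 0.1,
    "housing_instability": 2.0,
}

def screening_score(call: CallRecord) -> int:
    """Combine the features into a single risk score, clamped to 1-20."""
    raw = (WEIGHTS["prior_referrals"] * call.prior_referrals
           + WEIGHTS["school_absences"] * call.school_absences
           + WEIGHTS["housing_instability"] * call.housing_instability)
    return max(1, min(20, round(raw)))

def must_investigate(score: int, threshold: int = 18) -> bool:
    """At the top of the range the call must be screened in; below
    that, the score only informs the social worker's judgement."""
    return score >= threshold

call = CallRecord(prior_referrals=3, school_absences=40, housing_instability=True)
score = screening_score(call)
print(score, must_investigate(score))  # prints: 9 False
```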

As the New York Times reports, before the tool was introduced, 48% of the lowest-risk families were being flagged for investigation, while 27% of the highest-risk families were not. At best, decisions like this put an unnecessary strain on limited resources and, at worst, result in severe child abuse.

How to Be Algorithmically Accountable

Given the sensitivity of screening child welfare calls, the system had to be as robust and transparent as possible. Mozilla reports how the tool was designed, over multiple years, to achieve this:

  • A rigorous public procurement process.
  • A public paper describing all data going into the algorithm.
  • Public meetings to explain the tool, where community members could ask questions, provide input and influence the process. Professor Rhema Vaithianathan is the rock star data storyteller on the project.
  • An independent ethical review of implementing, or failing to implement, a tool such as this.
  • A validation study.

The algorithm is open to scrutiny, owned by the county and constantly being reviewed for improvement. According to the Wall Street Journal, the trailblazing approach and the tech are being watched with much interest by other counties.

It Takes Extreme Transparency

It takes boldness to build and use a tool in this way. Erin Dalton, a deputy director of the county’s DHS and leader of its data-analysis department, says that “nobody else is willing to be this transparent.” The exercise is obviously an expensive and time-consuming one, but it’s possible.

During recent discussions on AI at the World Bank the point was raised that because some big data analysis methods are opaque, policymakers may need a lot of convincing to use them. Policymakers may be afraid of the media fallout when algorithms get it badly wrong.

It’s not just the opaqueness; the whole data chain is complex. In education, Michael Trucano of the World Bank asks: “What is the net impact on transparency within an education system when we advocate for open data but then analyze these data (and make related decisions) with the aid of ‘closed’ algorithms?”

In short, it’s complicated and it’s sensitive. A lot of convincing is needed for those at the top, and at the bottom. But, as Allegheny County DHS has shown, it’s possible. For ICT4D, their tool demonstrates that public-service algorithms can be developed ethically, openly and with the community.

Stanford University is currently examining the impact of the tool on the accuracy of decisions, overall referral rates and workload, and more. Like many others, we should keep a close watch on this case.

Every Big Data Algorithm Needs a Storyteller – Your Weekend Long Reads

The use of big data by public institutions is increasingly shaping people’s lives. In the USA, algorithms influence the criminal justice system through risk assessment and predictive policing systems, drive energy allocation and change education systems through new teacher evaluation tools.

The belief is that the data knows best, that you can’t argue with the math, and that the algorithms ensure the work of public agencies is more efficient and effective. And, often, we simply have to maintain this trust because nobody can examine the algorithms.

But what happens when – not if – the data works against us? What is the consequence of the algorithms being “black boxed” and outside of public scrutiny? Behind this are two implications for ICT4D.

The Data Don’t Lie, Right?

Data scientist and Harvard PhD in mathematics Cathy O’Neil says that clever marketing has tricked us into being intimidated by algorithms, making us trust and fear them simply because, in general, we trust and fear math.

O’Neil’s 2016 book, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, shows how, when big data goes wrong, teachers lose jobs, women don’t get promoted and global financial systems crash. Her key message: the era of blind faith in big data must end, and the black boxes must be opened.

Demand Algorithmic Accountability

It is very interesting, then, that New York City has a new law on the books to do just that and demand “algorithmic accountability” (presumably drawing on the Web Foundation’s report of the same name). According to MIT Technology Review, the city’s council passed America’s first bill to ban algorithmic discrimination in city government. The bill calls for a task force to study how city agencies use algorithms and to produce a report on how to make algorithms more easily understandable to the public.

AI Now, a research institute at New York University focused on the social impact of AI, has offered a framework centered on what it calls Algorithmic Impact Assessments. Essentially, this calls for greater openness around algorithms, strengthening of agencies’ capacities to evaluate the systems they procure, and increased public opportunity to dispute the numbers and the math behind them.

Data Storytellers

So, what does this mean for ICT4D? Two things, based on our commitment to being transparent and accountable for the data we collect. Firstly, organisations that mine big data need to become interpreters of their algorithms. Someone on the data science team needs to be able to explain the math to the public.

Back in 2014 the UN Secretary General proposed that “communities of ‘information intermediaries’ should be fostered to develop new tools that can translate raw data into information for a broader constituency of non-technical potential users and enable citizens and other data users to provide feedback.” You’ve noticed the increase in jobs for data scientists and data visualisation designers, right?

But it goes beyond that. With every report and outcome that draws on big data, there needs to be a “how we got here” explanation. Not just making the data understandable, but also telling the story behind it. Maybe the data visualiser does this, but maybe there’s a new role of data storyteller in the making.

The UN Global Pulse principle says we should “design, carry out, report and document our activities with adequate accuracy and openness.” At the same time, Forbes says data storytelling is an essential skill. There is clearly a connection here. Design and UI thinking will be needed to make sure the heavy lifting behind the data scenes can be easily explained, like you would to your grandmother. Is this an impossible ask? Well, the alternative is simply not an option anymore.

Data Activists

Secondly, organisations that use someone else’s big data analysis – like many ICT4D orgs these days – need to take an activist approach. They need to ask where the data comes from, what steps were taken to audit it for inherent bias, and for an explanation of the “secret sauce” in the analysis. We need to demand algorithmic accountability. We are creators and arbiters of big data.

The issue extends beyond protecting user data and privacy, important as this is. It relates to transparency and comprehension. Now is the time, before it’s too late, to lay down the practices that ensure we all know how big data gets cooked up.

Image: CC by kris krüg

Voice for Development: Your Weekend Long Reads

While ICT4D innovates from the ground up, most tech we use comes from the top. Yes, it takes a little time for the prices of commercial services in Silicon Valley to drop sufficiently, and the tech to diffuse to the audiences we work with, but the internet and mobile have made that wait short indeed.

Next Big Wave After the Keyboard and Touch: Voice

One such innovation is natural language processing, which draws on AI and machine learning to attempt to understand human language communication and to react and respond appropriately.

While this is not a new field, the quality of understanding and speaking has improved dramatically in recent years. The Economist predicts that voice computing, which enables hands-off communication with machines, is the next and fundamental wave of human-machine interaction, after the keyboard and then touch.

The prediction is driven by tech advances as well as increasing uptake in the consumer market (note: in developed markets): last year Apple’s Siri was handling over 2bn commands a week, and 20% of Google searches on Android-powered handsets in America were input by voice.

Alexa Everywhere

Alexa is Amazon’s voice assistant that lives in Amazon devices like Echo and Dot. Well, actually, Alexa lives in the cloud and provides speech recognition and machine learning services to all Alexa-enabled devices.

Unlike Google and Apple, Amazon wants to open up Alexa and have it (her?) embedded into any product, not just those from Amazon. If you’re into manufacturing, you can now buy one of a range of Alexa Development Kits for a few hundred dollars to construct your own voice-controlled products.

Skills Skills Skills

While Amazon works hard to get Alexa into every home, car and device, you can in the meantime start creating Alexa skills. There’s a short Codecademy course on how to do this. It explains that Alexa provides a set of built-in capabilities, referred to as skills, that define how you can interact with the device. For example, Alexa’s built-in skills include playing music, reading the news, getting a weather forecast, and querying Wikipedia. So, you could say things like: “Alexa, what’s the weather in Timbuktu?”

Anyone can develop their own custom skills by using the Alexa Skills Kit (ASK). (The skills can only be used in the UK, US and Germany, presumably for now.) An Amazon user “enables” the skill, after which it works on any of her Alexa-enabled devices. Et voilà, she simply says the wake phrase to access the skill. This is pretty cool.
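
Curious what a custom skill looks like under the hood? Here’s a minimal sketch using the ASK SDK for Python. The intent name WeatherIntent and the replies are invented for illustration; in a real skill the intents and sample utterances are defined in the Alexa developer console, and code like this typically runs on AWS Lambda.

```python
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_request_type, is_intent_name

sb = SkillBuilder()

class LaunchHandler(AbstractRequestHandler):
    """Runs when the user opens the skill without asking for anything yet."""
    def can_handle(self, handler_input):
        return is_request_type("LaunchRequest")(handler_input)

    def handle(self, handler_input):
        speech = "Welcome. Ask me for a weather report."
        return handler_input.response_builder.speak(speech).ask(speech).response

class WeatherIntentHandler(AbstractRequestHandler):
    """Handles the hypothetical WeatherIntent from the interaction model."""
    def can_handle(self, handler_input):
        return is_intent_name("WeatherIntent")(handler_input)

    def handle(self, handler_input):
        # A real skill would look up a weather API here.
        speech = "It is sunny in Timbuktu today."
        return handler_input.response_builder.speak(speech).response

sb.add_request_handler(LaunchHandler())
sb.add_request_handler(WeatherIntentHandler())

# Entry point when the skill is hosted on AWS Lambda.
handler = sb.lambda_handler()
```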

What Does This Mean for ICT4D?

Is the day coming, not long from now, when machine-based voice assistants are ICT4D’s greatest helpers? Will it open doors of convenience for all and doors of inclusion for people with low digital skills or literacy? Hmmm. There’s a lot of ground to cover before that happens.

While natural language processing has come a looooong way, it’s still far from perfect. Comments about this abound — this one concerning Question of the Day, a popular Alexa skill:

Alexa sometimes does not hear the answer correctly, even though I try very hard to enunciate. It’s frustrating when I’ve gotten the answer right — not even by guessing, but actually knew it — and Alexa comes back and tells me I’ve gotten it wrong!

In ICT4D, there isn’t always room for error. What about sensitive content and interactions that can easily go awry? Is it likely that soon someone will say “Alexa, send 100 dollars to my mother in the Philippines”? What if she sends the money to a brother in New Orleans instead?

Other challenges include Alexa’s language range, cost, the need for online connectivity and, the big one, privacy. There is a risk in being tied to one provider, one tech giant. This stuff should be based on open standards.

Still, it is interesting and exciting to see this move from Amazon and contemplate how it could affect ICT4D. What are your thoughts for how voice for development (V4D) could make a social impact?

Here’s a parting challenge to ICTWorks readers: try out Alexa skills and tell us whether they’ve got legs for development. An ICT4D skill, if you will. (It can be something simple for now, not “Alexa, eliminate world poverty”.)

Image: CC-BY-NC by Rob Albright

Artificial Intelligence in Education: Your Weekend Long Reads

Continuing the focus on artificial intelligence (AI), this weekend looks at AI in education. In general, there are many fanciful AI in Ed possibilities proposed to help people teach and learn, some of which are genuinely exciting and others that look much like what we already do today.

One encouraging consensus from the readings below is that, while there is concern that AI and robots will ultimately take over certain human jobs, teachers are safe. The role relies too much on the skills that AI is not good at, such as creativity and emotional intelligence.

An Argument for AI in Education

A 2016 report (two-page summary) from Pearson and University College London’s Knowledge Lab offers a very readable and coherent argument for AI in education. It describes what is possible today, for example, one-on-one digital tutoring for every student, and what is potentially possible in the future, such as lifelong learning companions powered by AI that can accompany and support individual learners throughout their studies – in and beyond school. Or, one day, there could be new forms of assessment that measure learning while it is taking place, shaping the learning experience in real time. It also proposes three actions to help us get from here to there.

AI and People, Not AI Instead of People

There is an argument that rather than focusing solely on building more intelligent AI to take humans out of the loop, we should focus just as much on intelligence amplification/augmentation. This is the use of technology – including AI – to provide people with information that helps them make better decisions and learn more effectively. So, for instance, rather than automating the grading of student essays, some researchers are focusing on how they can provide intelligent feedback to students that helps them better assess their own writing.
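
As a toy illustration of that amplification idea in Python: the sketch below returns feedback a student can act on rather than a grade. The metrics and thresholds are invented and far simpler than what any real research system uses; the point is the design choice of keeping the human in the loop.

```python
# Toy sketch: feedback instead of a grade. All metrics and thresholds
# here are hypothetical and much simpler than real research systems.
import re

def essay_feedback(text: str) -> list[str]:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = text.split()
    tips = []

    # Flag long-winded sentences rather than scoring them.
    avg_len = len(words) / max(1, len(sentences))
    if avg_len > 25:
        tips.append(f"Your sentences average {avg_len:.0f} words; try splitting the longest ones.")

    # Flag low word variety rather than penalising it.
    unique = {w.lower().strip(",.;:!?") for w in words}
    if len(unique) / max(1, len(words)) < 0.4:
        tips.append("You repeat many words; look for places to vary your vocabulary.")

    if not tips:
        tips.append("No issues flagged by these simple checks; now review your structure and evidence.")
    return tips

print("\n".join(essay_feedback("This is a short example essay. It keeps its sentences brief.")))
```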

The “Human Touch” as Value Proposition

At Online Educa Berlin last month, I heard Dr. Tarek R. Besold, lecturer in Data Science at City, University of London, talk about AI in Ed (my rough notes are here). He built on the idea that we need to think more carefully about what AI does well and what humans do well.

For example, AI can provide intelligent tutoring, but only on well-defined, narrow domains for which we have lots of data. Learning analytics can analyse learner behaviour and teacher activities … so as to identify individual needs and preferences to inform human intervention. Humans, while inefficient at searching, sorting and mining data, for example, are good at understanding, empathy and relationships.

In fact, of all the sectors McKinsey & Company examined in a report on where machines could replace humans, the technical feasibility of automation is lowest in education, at least for now. Why? Because the essence of teaching is deep expertise and complex interactions with other people, things that AI is not yet good at. Besold proposed the “human touch” as our value proposition.

Figuring out how humans and AI can bring out the best in each other to improve education, now that is an exciting proposal. Actually creating this teacher-machine symbiosis in the classroom will be a major challenge, though, given the perception of job loss from technology.

The Future of AI Will Be Female

Emotional intelligence is increasingly in demand in the workplace, and will be even more so in the future, when AI will have replaced predictable, repetitive jobs. This means that cultivating emotional intelligence and social skills should be critical components of education today. But there’s a fascinating angle here: in general, women score much higher than men in emotional intelligence. Thus, Quartz claims, women are far better prepared for an AI future.

Image: © CC-BY-NC-ND by Ericsson

Artificial Intelligence: Your Weekend Long Reads

Artificial intelligence (AI) was one of the hottest topics of 2017. Gartner called it a “mega trend,” with research director Mike J. Walker proposing that “AI technologies will be the most disruptive class of technologies over the next 10 years due to radical computational power, near-endless amounts of data and unprecedented advances in deep neural networks.”

But as much as it is trendy and bursting with promise, it is also controversial, overhyped and misunderstood. In fact, it has yet to enjoy a widely accepted definition.

AI underpins many of the emerging technologies on Gartner’s 2017 hype cycle, where smart robots, deep learning and machine learning were all cresting the Peak of Inflated Expectations. Of course, after that comes the Trough of Disillusionment. Collectively, these technologies will take two to ten years to reach the Plateau of Productivity.

AI is both a long game and already in our lives. Your Amazon or Netflix recommendations are partly AI-based. So are speech recognition and translation, such as in Google Home and Google Translate. But, as you know from using these services, they are far from perfect. Closer to ICT4D, within monitoring and evaluation we know the opportunities and limitations of AI.

In 2018 we can expect to hear a lot more about AI, along with promises and disappointments. Almost anyone whose software has an algorithm will claim they’re harnessing AI. There will suddenly be more adaptive, intelligent platforms in edtech, and more talk of smart robots and AI hollowing out the global job market.

While there will be some truth to the AI claims and powerful new platforms, we need to learn to read between the lines. The potential of AI is exciting and will be realised over the coming years and decades, but in varying degrees and unevenly spread. For now, a balanced view is needed to discern between what is hype or on the long horizon, and what we can use today for greater social impact. Only in this way can we fully get to grips with the technological, social and ethical impact of AI. Below are a few articles to pique our interest in 2018.

The Next Fifteen Years

To get the big picture, an excellent place to start is the Stanford University report Artificial Intelligence and Life in 2030. A panel of experts focussed the AI lens on eight domains they considered most salient: transportation; service robots; healthcare; education; low-resource communities; public safety and security; employment and workplace; and entertainment. In each of these domains, the report both reflects on progress in the past fifteen years and anticipates developments in the coming fifteen years.

AI for Good

Last year the ITU hosted the AI for Good Global Summit, which brought together a host of international NGOs, UN bodies, academia and the private sector to consider the opportunities and limitations of AI for good. The conference report offers a summary of the key takeaways and applications cited in the event. A number of webcasts are also available.

AI Moves into the Cloud

While most ICT4D tech outfits simply don’t have access to the computing power and expertise to fully utilise AI, this is starting to change. In 2017, AI floated into the cloud. Amazon, Google and Microsoft have introduced large-scale cloud-based AI. This includes open-source AI software as well as AI services for turning speech in audio files into time-stamped text, translating between various languages and tracking people, activities, and objects in video. I’m looking forward to seeing these tools used in ICT4D soon.
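
To give a sense of how accessible this has become, here’s a minimal sketch calling Amazon Translate through the boto3 Python library. It assumes AWS credentials and a default region are already configured in your environment; the example sentence is, of course, made up.

```python
# Minimal sketch of a cloud AI call: Amazon Translate via boto3.
# Assumes AWS credentials and a region are configured in the environment.
import boto3

translate = boto3.client("translate")

result = translate.translate_text(
    Text="Where is the nearest health clinic?",
    SourceLanguageCode="en",
    TargetLanguageCode="fr",
)
print(result["TranslatedText"])
```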

Growing Up with Alexa

Considering the interaction between her four-year-old niece and Amazon Echo’s Alexa, a reporter asked the following questions: What will it do to kids to have digital butlers they can boss around? What is the impact of growing up with Alexa? Will it make kids better adjusted and educated — or the opposite? This piece offers interesting questions on the social impact of AI on children.

The Ethical Dimension

The World Commission on the Ethics of Scientific Knowledge and Technology of UNESCO (COMEST) last year released a report on the ethical issues surrounding the use of contemporary robotic technologies — underpinned by AI — in society (there is a 2-minute video summary). The bottom line: some decisions always require meaningful human control.

Amidst the growing role of robots in our world there are new responsibilities for humans to ensure that people and machines can live in productive co-existence. As AI impacts our world in greater ways, the ethical dimension will equally become more important, bringing philosophers, technologists and policy-makers around the same table. Being in the ICT4D space, our role as technologists and development agents will be critical here.

Image: © CC-BY-SA by Silver Blue