Every Big Data Algorithm Needs a Storyteller – Your Weekend Long Reads

The use of big data by public institutions is increasingly shaping people’s lives. In the USA, algorithms influence the criminal justice system through risk assessment and predictive policing systems, drive energy allocation, and change the educational system through new teacher evaluation tools.

The belief is that the data knows best, that you can’t argue with the math, and that the algorithms ensure the work of public agencies is more efficient and effective. And, often, we simply have to maintain this trust because nobody can examine the algorithms.

But what happens when – not if – the data works against us? What are the consequences of algorithms being “black boxed” and kept outside of public scrutiny? This has two implications for ICT4D.

The Data Don’t Lie, Right?

Data scientist and Harvard PhD in mathematics Cathy O’Neil says that clever marketing has tricked us into being intimidated by algorithms, making us trust and fear them simply because, in general, we trust and fear math.

O’Neil’s 2016 book, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, shows how, when big data goes wrong, teachers lose jobs, women don’t get promoted and global financial systems crash. Her key message: the era of blind faith in big data must end, and the black boxes must be opened.

Demand Algorithmic Accountability

It is very interesting, then, that New York City has a new law on the books to do just that and demand “algorithmic accountability” (presumably drawing on the Web Foundation’s report of the same name). According to MIT Technology Review, the city’s council passed America’s first bill to ban algorithmic discrimination in city government. The bill calls for a task force to study how city agencies use algorithms and to produce a report on how to make those algorithms more easily understandable to the public.

AI Now, a research institute at New York University focused on the social impact of AI, has offered a framework centered on what it calls Algorithmic Impact Assessments. Essentially, this calls for greater openness around algorithms, strengthening of agencies’ capacities to evaluate the systems they procure, and increased public opportunity to dispute the numbers and the math behind them.

Data Storytellers

So, what does this mean for ICT4D? Two things, based on our commitment to being transparent and accountable for the data we collect. Firstly, organisations that mine big data need to become interpreters of their algorithms. Someone on the data science team needs to be able to explain the math to the public.
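What might that explaining look like in practice? Here is a minimal sketch – the model, feature names and weights below are entirely invented for illustration – of how an interpreter on the team could surface how each input moved a score:

```python
# Hypothetical example: turning a simple scoring model's weights into
# a plain-language explanation. The feature names and weights are
# invented for illustration, not taken from any real system.

WEIGHTS = {
    "school attendance rate": 0.6,
    "prior test scores": 0.3,
    "class size": -0.1,
}

def explain_score(features):
    """Return the score plus a sentence per feature showing its contribution."""
    contributions = {
        name: WEIGHTS[name] * value for name, value in features.items()
    }
    score = sum(contributions.values())
    lines = [f"Overall score: {score:.2f}"]
    # List the biggest influences first, so the story leads with what mattered.
    for name, contrib in sorted(
        contributions.items(), key=lambda kv: -abs(kv[1])
    ):
        direction = "raised" if contrib > 0 else "lowered"
        lines.append(f"- {name} {direction} the score by {abs(contrib):.2f}")
    return "\n".join(lines)

print(explain_score({
    "school attendance rate": 0.9,
    "prior test scores": 0.8,
    "class size": 0.5,
}))
```

Even a toy readout like this turns “the algorithm decided” into a line-by-line account that a non-technical audience can interrogate.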

Back in 2014 the UN Secretary General proposed that “communities of ‘information intermediaries’ should be fostered to develop new tools that can translate raw data into information for a broader constituency of non-technical potential users and enable citizens and other data users to provide feedback.” You’ve noticed the increase in jobs for data scientists and data visualisation designers, right?

But it goes beyond that. With every report and outcome that draws on big data, there needs to be a “how we got here” explanation. Not just making the data understandable, but the story behind that data. Maybe the data visualiser does this, but maybe there’s a new role of data storyteller in the making.

The UN Global Pulse principle says we should “design, carry out, report and document our activities with adequate accuracy and openness.” At the same time, Forbes says data storytelling is an essential skill. There is clearly a connection here. Design and UI thinking will be needed to make sure the heavy lifting behind the data scenes can be easily explained, like you would to your grandmother. Is this an impossible ask? Well, the alternative is simply not an option anymore.

Data Activists

Secondly, organisations that use someone else’s big data analysis – like many ICT4D orgs these days – need to take an activist approach. They need to ask where the data comes from, what steps were taken to audit it for inherent bias, and for an explanation of the “secret sauce” in the analysis. We need to demand algorithmic accountability because we are creators and arbiters of big data.

The issue extends beyond protecting user data and privacy, important as this is. It relates to transparency and comprehension. Now is the time, before it’s too late, to lay down the practices that ensure we all know how big data gets cooked up.

Image: CC by kris krüg

Across Africa the Feature Phone is Not Dead – Your Weekend Long Reads


Quartz Africa reports that last year feature phones took back market share from smartphones in Africa. The market share of smartphones fell to 39% in 2017 (from 45%), while feature phones rose to 61% (from 55%).

Quartz Africa sees the reasons as likely to be twofold: first, the growth of big markets, like Ethiopia and DR Congo, which until recently have had relatively low penetration. Second, low price.

Transsion, a little-known Chinese handset manufacturer, now sells more phones than any other company in Africa. Its three big brands together exceed Samsung’s market share there. The devices are cheap and appealing to new users.

The FT reports that Transsion’s phones are designed specifically for the African market: they have multiple SIM card slots, camera software adapted to better capture darker skin tones, and speakers with enhanced bass (seriously). Many of the feature phone models have messaging apps. The batteries last on standby for up to 13 days!

What does this mean? That you should freeze your flashy new app project? No! There’s no need to stop planning and developing for a smartphone-enabled Africa. The trend is clear: smartphones become cheaper over time and their uptake increases.

But we know that in Africa, especially, mobile usage is unevenly distributed and these stats are a good reminder that the non-smartphone user base is still huge. Many of us need to remain true to that reality if we want our ICT to be 4D.

The age-old question – which mobile channel should we focus on? – has not gone away. And the answer remains the same: it depends. What is your service? What devices do your users have? What are their usage preferences? Do they have data coverage and, if yes, can they afford data?

Low tech, like IVR and radio, can be beautiful and extremely effective. In a meta-study of education initiatives in Africa, the Brookings Institution found that most technology-based innovations utilize existing tools in new ways. It gives Eneza Education as an example, which built its service on SMS (even though an Android app is now available).

At the same time, apps are certainly rising in the development sector. While not in Africa, the Inventory of Digital Technologies for Resilience in Asia-Pacific found apps to be the dominant channel. From my own experience I’m seeing more apps, often as one part of a mix of delivery channels.

A forthcoming case study in the UNESCO-Pearson initiative is MOPA, a platform for participatory monitoring of waste management services in Maputo, Mozambique. Citizens report issues via USSD, website and, most recently, via Android app.

Usage patterns show that 96% of reports are still sent through USSD, 3% via mobile app, and only 1% through the website. Given that specific user base, and the quick-and-dirty nature of the transaction, it’s not surprising that USSD is a clear winner.
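To see why USSD suits this kind of quick-and-dirty transaction, here is a minimal sketch of a USSD-style reporting menu. The options and wording are invented, loosely inspired by MOPA’s flow; a real service would sit behind a mobile operator’s USSD gateway:

```python
# A minimal sketch of a USSD-style menu for reporting waste-management
# issues. Menu options and wording are invented for illustration; a
# real deployment would run behind a mobile operator's USSD gateway.

MENU = {
    "": "Report a problem:\n1. Uncollected waste\n2. Full container\n3. Other",
    "1": "Uncollected waste reported. Thank you.",
    "2": "Full container reported. Thank you.",
    "3": "Please describe the problem at your nearest office.",
}

def handle_ussd(session_input):
    """Return the screen text for the user's input so far ('' = first screen)."""
    return MENU.get(session_input, "Invalid choice. Reply 1, 2 or 3.")

print(handle_ussd(""))   # first screen: the menu
print(handle_ussd("1"))  # confirmation after one keypress
```

A session like this needs nothing more than a basic handset and a couple of keypresses – no data bundle, no app install – which goes a long way to explaining the 96%.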

Another example of a channel mix is Fundza, the South African mobile novel library. It started life as a mobisite and now also has an app, which largely provides a window into the same content just in a nice Android skin.

The app is used by less than 1% of users, with the mobisite taking the lion’s share of traffic (via feature phone and smartphone). Fundza is also on Free Basics, where the breakdown is quite different: 65% mobisite, 35% app (perhaps pointing to the benefits of being bundled into someone else’s very well-marketed app).

There are many reasons why individual apps may or may not succeed, and these examples are not meant to downplay their utility. Overall, the world is going to smartphones.

However, the bottom line is that you should not write off the humble feature phone in Africa just yet. It does old tech – internet messaging and the mobile web – very well, and for many ICT4D projects that is still the bread-and-butter access channel.

In ICT4D We’re Principled, But Are We Practiced Enough? – Your Weekend Long Reads

Last month CO.DESIGN published the 10 New Principles of Good Design (thanks, Airbel Center, for the link). The article, which is based on a set of industrial design principles from the 1970s, makes for important reading.

According to the author, and concerning commercial digital solutions, 2017 was “a year of reckoning for the design community. UX became a weapon, AI posed new challenges, and debate erupted over once rock-solid design paradigms.” What is most interesting – and wonderful to boot – is that many of the “new” principles are ones we, the ICT4D community, have endorsed for years.

Good Design is Transparent

For example, the article calls for transparency in design. Apparently today, “amid a string of high-profile data breaches and opaque algorithms that threaten the very bedrock of democracy, consumers have grown wary of slick interfaces that hide their inner workings.”

We know that user-centered design is participatory and that we should expose the important parts of digital solutions to our users. We believe in telling our users what we’ll do with their data.

Good Design Considers Broad Consequences and is Mindful of Systems

The article warns that in focusing on the immediate needs of users, user-friendly design often fails to consider long-term consequences. “Take Facebook’s echo chamber, Airbnb’s deleterious impact on affordable housing,” as examples. Not for us: we understand the existing ecosystem, are conscious of long-term consequences and design for sustainability.

A Little History Lesson

Today we have principles for sectors – such as refugees, health and government (US or UK version?); for cross-cutting themes – such as identity, gender and mobile money; for research; and the granddaddy of them all, for digital development.

These principles have been developed over a long time. Fifteen years ago I wrote a literature survey on the best practices of ICT4D projects. It was based on the work of the then research pioneer Bridges.org, drawing on a range of projects from the early 2000s.

In my paper, Bridges.org put forward seven habits of highly effective ICT-enabled development initiatives. By 2007 the list had grown to 12 habits — many of which didn’t look that different from today’s principles.

Do We Practice What We Preach?

But if these principles are not new to us, are we practicing them enough? Don’t get me wrong, the ICT4D community has come a long way in enlisting tech for social good, and the lessons — many learned the hard way — have matured our various guidelines and recommendations. But should we be further down the line by now?

The principles mostly outline what we should do, and some work has been done on the how side, to help us move from principles to practice. But I think we need to do more to unpack the “why don’t we” aspect.

Consider this data point from a recent Brookings Institution report, Can We Leapfrog? The Potential of Education Innovations to Rapidly Accelerate Progress (more on this report in a future post). Brookings analysed almost 3,000 education innovations around the world (not all tech-based, just so you know) and found that:

… only 16 percent of cataloged interventions regularly use data to drive learning and program outcomes. In fact, most innovations share no information about their data practices.

We know that we should be data-driven and share our practices. So what is going on here? Do the project managers behind these interventions not know that they should do these things? Do they not have the capacity in their teams? Do they not want to because they believe it exposes their non-compliance with such principles? Or perhaps they feel data is their competitive edge and they should hide their practices?

Time for ‘Fess Faires?

Fail faires are an excellent way to share what we tried and what didn’t work. But what about ‘fess faires, where we confess why we can’t or – shock horror – won’t follow certain principles? Maybe it’s not our fault; think of funding cycles that ICT4D startups can’t survive. But maybe we should be honest and say we won’t collaborate because the funding pie is too small.

If fail faires are more concerned with operational issues, then ‘fess faires look at structural barriers. We need to ask these big questions in safe spaces. Many ICT4D interventions are concerned with behavior change. If we’re to change our own behavior we need to be open about why we do or don’t do things.

Good Design is Honest

So, on the one hand we really can pat ourselves on the back. We’ve had good design principles for almost twenty years. The level of adherence to them has increased, and they have matured over time.

On the other hand, there is still much work to be done. We need to deeply interrogate why we don’t always practice our principles, honestly and openly. Only in this way will we really pursue a key new principle: good design is honest.

Why Digital Skills Really Matter for ICT4D – Your Weekend Long Reads

In an increasingly online world, people need digital skills to work and live productively. One of the major barriers to digital uptake is a lack of these skills.

Across Africa, seven in ten people who don’t use the Internet say they simply don’t know how to use it. This is not only a developing-country problem: 44% of the European Union population has low digital skills or none at all (19% have none)!

It is no surprise, therefore, that the theme for this year’s UNESCO Mobile Learning Week is “Skills for a connected world”. (It runs from 26-30 March in Paris — don’t miss it!)

Global Target for Digital Skills

At Davos last month, the UN Broadband Commission set global broadband targets to bring online the 3.8 billion people not yet connected to the Internet. Goal 4 is that, by 2025, 60% of youth and adults should have achieved at least a minimum level of proficiency in sustainable digital skills.

(I’m not quite sure what the difference is between digital skills and sustainable digital skills.) Having a target such as this is good for focusing global efforts towards skilling up.

The Spectrum of Digital Skills

Digital skills is a broad term. While definitions vary, the Broadband Commission report proposes seeing digital skills and competences on a spectrum, including: 

  • Basic functional digital skills, which allow users to access and conduct basic operations on digital technologies;
  • Generic digital skills, which include using digital technologies in meaningful and beneficial ways, such as content creation and online collaboration; and 
  • Higher-level skills, which mean using digital technology in empowering and transformative ways, for example for software development. These skills include 21st century skills and critical digital literacies.

Beyond skills, digital competences include awareness and attitudes concerning technology use. Most of the people served in ICT4D projects fall into the first and second categories. Understanding where your users are and need to be is important, and a spectrum lens helps in that exercise.

Why Skills Really Matter

Beyond the global stats, goals and definitions, why should you really care about the digital skills of your users, other than that they know enough to navigate your IVR menu or your app?

The answers come from the GSMA’s recent pilot evaluation of its Mobile Internet Skills Training Toolkit (MISTT), implemented last year in Rwanda.

Over 300 sales agents from Tigo, the mobile network operator, were trained on MISTT, and they in turn trained over 83,000 customers. The evaluation found that MISTT training:

  • Gives users confidence and helps them overcome the belief that “the Internet is not for me”;
  • Has the potential to help customers move beyond application “islands” — and get them using more applications/services;
  • Has a ripple effect, as customers are training other people on what they have learned (a study in Cape Town also found this); and
  • Increased data usage among trained customers, which led to increased data revenues for Tigo.

In short, more digital skills (beyond just what you need from your users) presents the opportunity for increased engagement, higher numbers of users and, if services are paid-for or data drives revenue, greater earnings. Now those are compelling ICT4D motivators.

Skills as Strategy

Therefore, we need to see skills development as one of the core components of our:

  • Product development strategy (leveraging users who can interact more deeply with features);
  • Growth strategy (leveraging users who train and recruit other users);
  • Revenue strategy (leveraging users who click, share, and maybe even buy).

But what about the cost, you might wonder? As Alex Smith of the GSMA points out, with the data revenues, the MISTT pilot returned Tigo’s investment within a month and delivered an ROI of 240% within a quarter. That’s for a mobile operator – it would be fascinating to measure ROI for non-profits.
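For what it’s worth, the ROI arithmetic itself is simple. As a hypothetical illustration – the cost and revenue figures below are invented, only the formula matters:

```python
# Hypothetical illustration of the ROI arithmetic behind figures like
# those quoted for the MISTT pilot. The cost and revenue numbers here
# are invented; only the formula is the point.

def roi(gain, cost):
    """Return on investment: net gain as a fraction of cost."""
    return (gain - cost) / cost

training_cost = 10_000            # assumed one-off training spend
extra_quarterly_revenue = 34_000  # assumed extra data revenue over a quarter

print(f"ROI: {roi(extra_quarterly_revenue, training_cost):.0%}")  # ROI: 240%
```

A non-profit would swap revenue for whatever outcome metric it can monetise or defend, which is exactly why measuring ROI outside a mobile operator is the harder, more interesting exercise.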

For training resources, the Mobile Information Literacy Curriculum from TASCHA is also worth checking out, as is the older GSMA Mobile Literacy Toolkit.

Image: CC by Lau Rey


Creating Killer ICT4D Content – Your Weekend Long Reads

Creating killer content is critical to ICT4D success. One of the major barriers to digital uptake is a lack of incentives to go online because of a lack of relevant or attractive content.

This weekend we look at resources for creating great content, drawing on lessons from the mhealth and mAgri sectors. If you are not an mhealth or mAgri practitioner, don’t stop reading now. While professions and sectors like to silo, in reality the ICT4D fields overlap enormously. For example, does a programme that educates nurses in improved obstetrics practices fall under mhealth or meducation? The details may differ, but the approaches, lessons and tech may well be the same. Each sector has much that can be learned and transferred to the other m-sectors.

Let’s Get Practical and Make Some Content

Not long ago Dr. Peris Kagotho left medical practice to focus on mhealth. Since then she has successfully categorized, edited and contextualized over 10,000 health tips for Kenyans. In a four-part blog series, she highlights techniques and learnings for effective and impactful content development. Read about the prerequisites for quality mhealth content; the principles of behaviour change messaging; creating content that is fit for purpose; and scheduling content for impactful delivery.

Making Content Meaningful Without Re-inventing the Wheel

While there is apparently an abundance of openly accessible health content, this alone is insufficient to make the world healthy and happy. The Knowledge for Health (K4Health) project knows the importance of providing the content in the appropriate context and the language of the people who will use it.

K4Health and USAID have therefore created a guide to adapting existing global health content for different audiences with the goal of expanding the reach, usefulness, and use of evidence-based global health content. Fantastic.

+ The Nutrition Knowledge Bank is an open access library of free-to-use nutrition content.

Lessons on Content Placement, Format, Data and More

The Talking Book, a ruggedized audio player and recorder by Literacy Bridge, offers agricultural, health and livelihoods education to deep rural communities in four African countries. The UNESCO-Pearson case study on the project highlights key content development approaches and lessons, drawn from over ten years of experience. For example, it’s important not to overload users with too much content; the first few messages in a content category get played the most, so those are the best slots for the most important messages; and these rural audiences prefer content delivered as songs and dramas over lectures. The content strategy is highly data-driven.

Content Isn’t Delivered in a Vacuum

In 2016, the Government of India launched a nation-wide mobile health programme called ‘Kilkari’ to benefit 10 million new and expecting mothers by providing audio-based maternal and child health messages on a weekly basis. The service was designed by BBC Media Action and the GSMA case study describes its evolution, learnings and best practices, covering content and more. It is useful to zoom out and see the bigger picture of an mhealth initiative, and how content forms one part of the whole.

Image: CC by TTCMobile

Voice for Development: Your Weekend Long Reads

While ICT4D innovates from the ground up, most tech we use comes from the top. Yes, it takes a little time for the prices of commercial services in Silicon Valley to drop sufficiently, and the tech to diffuse to the audiences we work with, but the internet and mobile have made that wait short indeed.

Next Big Wave After the Keyboard and Touch: Voice

One such innovation is natural language processing, which draws on AI and machine learning to attempt to understand human language communication and to react and respond appropriately.

While this is not a new field, the quality of understanding and speaking has improved dramatically in recent years. The Economist predicts that voice computing, which enables hands-off communication with machines, is the next and fundamental wave of human-machine interaction, after the keyboard and then touch.

The prediction is driven by tech advances as well as increasing uptake in the consumer market (note: in developed markets): last year Apple’s Siri was handling over 2bn commands a week, and 20% of Google searches on Android-powered handsets in America were input by voice.

Alexa Everywhere

Alexa is Amazon’s voice assistant that lives in Amazon devices like Echo and Dot. Well, actually, Alexa lives in the cloud and provides speech recognition and machine learning services to all Alexa-enabled devices.

Unlike Google and Apple, Amazon wants to open up Alexa and have it (her?) embedded into any product, not just those from Amazon. If you’re a manufacturer, you can now buy one of a range of Alexa Development Kits for a few hundred dollars to construct your own voice-controlled products.

Skills Skills Skills

While Amazon works hard to get Alexa into every home, car and device, you can in the meantime start creating Alexa skills. There’s a short Codecademy course on how to do this. It explains that Alexa provides a set of built-in capabilities, referred to as skills, that define how you can interact with the device. For example, Alexa’s built-in skills include playing music, reading the news, getting a weather forecast, and querying Wikipedia. So, you could say things like: “Alexa, what’s the weather in Timbuktu?”

Anyone can develop their own custom skills by using the Alexa Skills Kit (ASK). (The skills can only be used in the UK, US and Germany, presumably for now.) An Amazon user “enables” the skill after which it works on any of her Alexa-enabled devices. Et voilà, she simply says the wake phrase to access the skill. This is pretty cool.
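Under the hood, a custom skill is typically backed by an HTTPS endpoint or AWS Lambda function that receives a JSON request and returns a JSON speech response. Here is a rough sketch using the basic request/response shapes from the Alexa Skills Kit; the intent name and wording are invented for illustration:

```python
# A minimal sketch of a custom Alexa skill handler, using the basic
# request/response JSON shapes from the Alexa Skills Kit. The intent
# name and the greeting are invented; a real skill would be hosted as
# an HTTPS endpoint or AWS Lambda function and registered via ASK.

def handle_request(event):
    """Map an incoming ASK request to a plain-text speech response."""
    request = event.get("request", {})
    if request.get("type") == "IntentRequest" and \
       request["intent"]["name"] == "HelloIntent":  # hypothetical intent
        text = "Hello from our ICT4D skill."
    else:
        text = "Sorry, I did not understand that."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }

reply = handle_request({
    "request": {"type": "IntentRequest", "intent": {"name": "HelloIntent"}}
})
print(reply["response"]["outputSpeech"]["text"])
```

The heavy lifting of speech recognition and intent matching all happens in Amazon’s cloud; your code only ever sees structured JSON, which is what makes skills so quick to prototype.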

What Does This Mean for ICT4D?

Is the day coming, not long from now, when machine-based voice assistants are ICT4D’s greatest helpers? Will it open doors of convenience for all and doors of inclusion for people with low digital skills or literacy? Hmmm. There’s a lot of ground to cover before that happens.

While natural language processing has come a looooong way, it’s still far from perfect. Comments about this abound – this one concerning Question of the Day, a popular Alexa skill:

Alexa sometimes does not hear the answer correctly, even though I try very hard to enunciate. It’s frustrating when I’ve gotten the answer right — not even by guessing, but actually knew it — and Alexa comes back and tells me I’ve gotten it wrong!

In ICT4D, there isn’t always room for error. What about sensitive content and interactions that can easily go awry? Is it likely that soon someone will say “Alexa, send 100 dollars to my mother in the Philippines”? What if Alexa sends the money to her brother in New Orleans instead?

Other challenges include Alexa’s language range, cost, the need for online connectivity and, the big one, privacy. There is also a risk in being tied to one provider, one tech giant. This stuff should be based on open standards.

Still, it is interesting and exciting to see this move from Amazon and contemplate how it could affect ICT4D. What are your thoughts for how voice for development (V4D) could make a social impact?

Here’s a parting challenge to ICTWorks readers: try out Alexa skills and tell us whether they’ve got legs for development. An ICT4D skill, if you will. (It can be something simple for now, not “Alexa, eliminate world poverty”.)

Image: CC-BY-NC by Rob Albright

Five Traits of Low-literate Users: Your Weekend Long Reads

We know that the first step to good human-centered design is understanding your user. IDEO calls this having an empathy mindset, the “capacity to step into other people’s shoes, to understand their lives, and start to solve problems from their perspectives.”

Having empathy can be especially challenging in ICT4D since the people we develop solutions for often live in completely different worlds to us, literally and figuratively.

I’m currently drafting a set of guidelines for more inclusive design of digital solutions for low-literate and low-skilled people. (Your expert input on it will be requested soon!) There are many excellent guides to good ICT4D, and the point is not to duplicate efforts here. Rather, it is to focus the lens on the 750 million people who cannot read or write and the 2 billion people who are semi-literate. In other words, likely a significant portion of your target users.

Globally, the offline population is disproportionately rural, poor, elderly and female. They have limited education and low literacy. Of course people who are low-literate and low-skilled do not constitute a homogeneous group, and differences abound across and within communities.

Despite these variances, and while every user is unique, research has revealed certain traits that are common enough to pull out and be useful in developing empathy for this audience. Each has implications for user-centered design processes and the design of digital solutions (the subject of future posts).

Note: much of the research below comes from Indrani Medhi Thies and the teams she has worked with (including Kentaro Toyama) at Microsoft Research India, developing job boards, maps, agri video libraries and more, for low-literates. If you do nothing else, watch her presentation at HCI 2017, an excellent summary of twelve years of research.

Not Just an Inability to Read

Research suggests that low exposure to education means the cognitive skills needed for digital interaction can be underdeveloped. For example, low-literate users can struggle to transfer learning from one setting to another, such as from instructional videos to implementation in real life. They can also find conceptualising and navigating information hierarchies more challenging than well-educated users do (another paper here).

Low-literate Users Are Scared and Sceptical of Tech

Unsurprisingly, low-literate users are not confident in their use of ICTs. This means they can be scared of touching the tech for fear of breaking it. (There are many schools in low-income, rural areas where brand-new donated computers are locked up so that nobody uses and damages them!)

Further, even if they don’t break it, they might be seen as not knowing how to use it, causing embarrassment. When they do use tech, they can be easily confused by the UI.

Low-literate users can lack awareness of what digital can deliver, mistrust the technology and doubt that it holds information relevant to their lives.

One of Multiple Users

Low-income people often live in close-knit communities. Social norms and hierarchies influence who has access to what technology, how information flows between community members and who is trusted.

Within families, devices are often shared. And when low-literate users do use a device, it may be necessary for “infomediaries” to assist, for example by reading messages, navigating the UI or troubleshooting the tech. Infomediaries can also hinder the experience when their “filtering and funnelling decisions limit the low-literate users’ information-seeking behaviour.”

The implication is that the “target user” is really plural – the node and all the people around him or her. Your digital solution is really for multiple users and is used in mediated scenarios.

Divided by Gender

Two thirds of the world’s illiterate population are women. They generally use fewer mobile services than men. In South Asia women are 38% less likely than men to own a mobile phone, and are therefore more likely to be “sharing” users. Cultural, social or religious norms can restrict digital access for women, deepening the gender digital divide. In short, for low-literate and low-income users, gender matters.

Driven by Motivation (Which Can Trump Bad UI)

While we often attribute successful digital usage to good UI, research has shown that motivation is a strong driver of task completion. Despite minimal technical knowledge, urban youth in India hungry for entertainment content traversed as many as 19 steps to Bluetooth music, videos and comedy clips between phones and PCs.

In terms of livelihoods and living, the desire to sell crops for more, have healthier children, access government grants or apply for a visa, are the motivators that we need to tap to engage low-literate users.

If “sufficient user motivation towards a goal turns UI barriers into mere speed bumps,” do we pay enough attention to how much our users want what we’re offering? This can make or break a project.

Image: © CC-BY-NC-ND by Simone D. McCourtie / World Bank

Artificial Intelligence in Education: Your Weekend Long Reads


Continuing the focus on artificial intelligence (AI), this weekend we look at AI in education. In general, many fanciful AI-in-Ed possibilities have been proposed to help people teach and learn, some of which are genuinely exciting and others that just look much like today.

One encouraging consensus from the readings below is that, while there is concern that AI and robots will ultimately take over certain human jobs, teachers are safe. The role relies too much on the skills that AI is not good at, such as creativity and emotional intelligence.

An Argument for AI in Education

A 2016 report (two-page summary) from Pearson and University College London’s Knowledge Lab offers a very readable and coherent argument for AI in education. It describes what is possible today, for example one-on-one digital tutoring to every student, and what is potentially possible in the future, such as lifelong learning companions powered by AI that can accompany and support individual learners throughout their studies – in and beyond school. Or, one day, there could be new forms of assessment that measure learning while it is taking place, shaping the learning experience in real time. It also proposes three actions to help us get from here to there.

AI and People, Not AI Instead of People

There is an argument that rather than focusing solely on building more intelligent AI to take humans out of the loop, we should focus just as much on intelligence amplification/augmentation. This is the use of technology – including AI – to provide people with information that helps them make better decisions and learn more effectively. So, for instance, rather than automating the grading of student essays, some researchers are focusing on how they can provide intelligent feedback to students that helps them better assess their own writing.

The “Human Touch” as Value Proposition

At Online Educa Berlin last month, I heard Dr. Tarek R. Besold, lecturer in Data Science at City, University of London, talk about AI in Ed (my rough notes are here). He built on the idea that we need to think more carefully about what AI does well and what humans do well.

For example, AI can provide intelligent tutoring, but only on well-defined, narrow domains for which we have lots of data. Learning analytics can analyse learner behaviour and teacher activities … so as to identify individual needs and preferences to inform human intervention. Humans, while inefficient at searching, sorting and mining data, for example, are good at understanding, empathy and relationships.

In fact, of all the sectors McKinsey & Company examined in a report on where machines could replace humans, the technical feasibility of automation is lowest in education, at least for now. Why? Because the essence of teaching is deep expertise and complex interactions with other people, things that AI are not yet good at. Besold proposed the “human touch” as our value proposition.

Figuring out how humans and AI can bring out the best in each other to improve education, now that is an exciting proposal. Actually creating this teacher-machine symbiosis in the classroom will be a major challenge, though, given the perception of job loss from technology.

The Future of AI Will Be Female

Emotional intelligence is increasingly in demand in the workplace, and will only be more so in the future when AI will have replaced predictable, repetitive jobs. This means that cultivating emotional intelligence and social skills should be critical components of education today. But there’s a fascinating angle here: in general, women score much higher than men in emotional intelligence. Thus, Quartz claims, women are far better prepared for an AI future.

Image: © CC-BY-NC-ND by Ericsson

Artificial Intelligence: Your Weekend Long Reads

Artificial intelligence (AI) was one of the hottest topics of 2017. Gartner calls it a “mega trend,” and its research director, Mike J. Walker, proposed that “AI technologies will be the most disruptive class of technologies over the next 10 years due to radical computational power, near-endless amounts of data and unprecedented advances in deep neural networks.”

But as much as it is trendy and bursting with promise, it is also controversial, overhyped and misunderstood. In fact, it has yet to enjoy a widely accepted definition.

AI underpins many of Gartner’s emerging technologies on its 2017 hype cycle. However, smart robots, deep learning and machine learning were all cresting the Peak of Inflated Expectations. Of course, after that comes the Trough of Disillusionment. Collectively they will take two to ten years to reach the Plateau of Productivity.

AI is both a long game and already in our lives. Your Amazon or Netflix recommendations are partly AI-based. So are speech recognition and translation, such as in Google Home and Google Translate. But, as you know from using these services, they are far from perfect. Closer to ICT4D, within monitoring and evaluation we know the opportunities and limitations of AI.

In 2018 we can expect to hear a lot more about AI, along with promises and disappointments. Almost anyone whose software has an algorithm will claim to be harnessing AI. There will suddenly be more adaptive, intelligent platforms in edtech, and more talk of smart robots and AI hollowing out the global job market.

While there will be some truth to the AI claims and powerful new platforms, we need to learn to read between the lines. The potential of AI is exciting and will be realised over the coming years and decades, but in varying degrees and unevenly spread. For now, a balanced view is needed to discern between what is hype or on the long horizon, and what we can use today for greater social impact. Only in this way can we fully get to grips with the technological, social and ethical impact of AI. Below are a few articles to pique our interest in 2018.

The Next Fifteen Years

To get the big picture, an excellent place to start is the Stanford University report Artificial Intelligence and Life in 2030. A panel of experts focussed the AI lens on eight domains they considered most salient: transportation; service robots; healthcare; education; low-resource communities; public safety and security; employment and workplace; and entertainment. In each of these domains, the report both reflects on progress in the past fifteen years and anticipates developments in the coming fifteen years.

AI for Good

Last year the ITU hosted the AI for Good Global Summit, which brought together a host of international NGOs, UN bodies, academia and the private sector to consider the opportunities and limitations of AI for good. The conference report offers a summary of the key takeaways and applications cited at the event. A number of webcasts are also available.

AI Moves into the Cloud

While most ICT4D tech outfits simply don’t have access to the computing power and expertise to fully utilise AI, this is starting to change. In 2017, AI floated into the cloud. Amazon, Google and Microsoft have introduced large-scale cloud-based AI. This includes open-source AI software as well as AI services for turning speech in audio files into time-stamped text, translating between various languages and tracking people, activities, and objects in video. I’m looking forward to seeing these tools used in ICT4D soon.

Growing Up with Alexa

Considering the interaction between her four-year-old niece and Amazon Echo’s Alexa, a reporter asked the following questions: What will it do to kids to have digital butlers they can boss around? What is the impact of growing up with Alexa? Will it make kids better adjusted and educated — or the opposite? This piece offers interesting questions on the social impact of AI on children.

The Ethical Dimension

The World Commission on the Ethics of Scientific Knowledge and Technology of UNESCO (COMEST) last year released a report on the ethical issues surrounding the use of contemporary robotic technologies — underpinned by AI — in society (there is a 2-minute video summary). The bottom line: some decisions always require meaningful human control.

Amidst the growing role of robots in our world there are new responsibilities for humans to ensure that people and machines can live in productive co-existence. As AI impacts our world in greater ways, the ethical dimension will only become more important, bringing philosophers, technologists and policy-makers around the same table. Being in the ICT4D space, our role as technologists and development agents will be critical here.

Image: © CC-BY-SA by Silver Blue

Fake News – Weekend Long Reads

We live, according to the Economist, in a post-truth era, one that sees, especially in politics, “a reliance on assertions that ‘feel true’ but have no basis in fact.” In 2016, post-truth was the Oxford Dictionaries Word of the Year.

Untruths have always been with us, but the internet is the medium that changed everything. The scale with which “alternative facts“, untruths and blatant lies can be created and spread — by people and algorithms — can, for the first time ever, threaten democracy and social cohesion at a global scale.

For those of us who have long believed in the power of the internet to break down barriers between people and cultures, to foster dialogue, and to sharpen truth through increased transparency and access to information, post-truth’s most dangerous weapon, “fake news“, is a bitter pill to swallow. While fake news has been around since the late 19th century, it is now a headline phenomenon, the Collins’ Word of the Year for 2017. What happened to the grand internet dream of the democratisation of knowledge?

All of us have a duty to engage with these complex issues, to understand them, take a position, and reclaim the dream. Most importantly, we need to constantly question whether the digital tools we built, and continue to build, are part of the problem.

The Birth of a Word

It is useful to go back only a year and a half to remind ourselves how fake news became a household word. WIRED’s article traces its birth and, it claims, its death. How did it die? It quickly became so diluted in meaning, so claimed by those shouting the loudest, that it has become meaningless in many ways.

Fake News, or Information Disorder?

In an attempt to bring structure to the discussions, the Council of Europe produced a report on what it calls information disorder. The authors refrain from using the term fake news, for two reasons. First, they believe it is “woefully inadequate” to describe a very complex issue, and, secondly, it has been appropriated by politicians to slam any news or organisation they find disagreeable, thus becoming a mechanism for repression — what the New York Times calls “a cudgel for strongmen”.

The authors introduce a new conceptual framework for examining information disorder, identifying three different types:

  • Mis-information is when false information is shared, but no harm is meant. (According to Open University research, misinformation is rife among refugee populations.)
  • Dis-information is when false information is knowingly shared to cause harm.
  • Mal-information is when genuine information is shared to cause harm, often by moving information designed to stay private into the public sphere.
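The three types reduce to two questions: is the information false, and is it shared with intent to harm? A minimal sketch of that two-axis reading (my own illustration, not code from the report):

```python
def classify(is_false: bool, intent_to_harm: bool) -> str:
    """Map the two axes of the information-disorder framework
    onto its three categories (plus ordinary information)."""
    if is_false and not intent_to_harm:
        return "mis-information"   # false, but no harm meant
    if is_false and intent_to_harm:
        return "dis-information"   # false and knowingly harmful
    if not is_false and intent_to_harm:
        return "mal-information"   # genuine, but weaponised
    return "information"           # true and benign: ordinary news

print(classify(True, False))   # mis-information
print(classify(False, True))   # mal-information
```

Seen this way, the framework's value is that it separates falsity from intent, which a blanket label like "fake news" collapses.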

The report concludes with excellent recommendations for technology companies, as well as a range of other stakeholders. If the report is too long for you, be sure just to read the recommendations.

Fight It With Software

Tom Wheeler at the Brookings Institution offers a history of information sharing, control and news curation. He laments that today the “algorithms that decide our news feed are programmed to prioritize user attention over truth to optimize for engagement, which means optimizing for outrage, anger and awe.” But, he proposes: “it was software algorithms that put us in this situation, and it is software algorithms that can get us out of it.”

The idea is “public interest algorithms” that interface with social network platforms to, at an aggregate level, track information sources, spread and influence. Such software could help public interest groups monitor social media in the same way they do for broadcast media.
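What might such aggregate tracking look like in its simplest form? Here is a toy sketch of one piece of it, ranking sources by total spread from a stream of (source, shares) observations; the function name and data shape are my own illustration, not part of Wheeler's proposal:

```python
from collections import Counter

def rank_sources(observations):
    """Aggregate share counts per source and rank by total spread,
    highest first -- the kind of summary a watchdog group might
    monitor, without touching any individual user's data."""
    totals = Counter()
    for source, shares in observations:
        totals[source] += shares
    return totals.most_common()

feed = [("siteA", 120), ("siteB", 40), ("siteA", 60)]
print(rank_sources(feed))  # [('siteA', 180), ('siteB', 40)]
```

The point of working at the aggregate level is exactly what the proposal stresses: public interest groups need spread and influence patterns, not individual feeds.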

Fight It With Education

While I believe in software as part of the solution, the Wheeler article seems to miss a key point: information spread is a dance between algorithms and people. Every like, share and comment by you and me feeds the beast. Without us, the algorithm starves.

We need to change the way we behave online; media and information literacy are crucial to this. There are many excellent resources for teens, adults and teachers to help us all be more circumspect online. I like the Five Key Questions That Can Change the World (from 2005!).

Want To Understand It Better? Fake Some

Finally, long before fake news became popular, in 2008, Professor T. Mills Kelly got his students at George Mason University to create fake Wikipedia pages to teach them the fallibility of the internet. At Google’s Newsgeist unconference last month, a similar exercise involved strategising a fake news campaign aimed at discrediting a certain US politician. Both instances force us to get into the minds of fakesters and see how the internet can be used to spread the badness. While creating fake Wikipedia pages doesn’t help the internet information pollution problem, the heart of the exercises is useful: perhaps they should be part of media literacy curricula?

Thanks to Guy Berger for suggesting some of these articles.

Image: © CC-BY-NC .jeff.