Why Digital Skills Really Matter for ICT4D – Your Weekend Long Reads

In an increasingly online world, people need digital skills to work and live productively. One of the major barriers to digital uptake is a lack of these skills.

Across Africa, seven in ten people who don’t use the Internet say they just don’t know how to use it. This is not only a developing country problem: 44% of the European Union population has low or no digital skills (19% have none at all)!

It is no surprise, therefore, that the theme for this year’s UNESCO Mobile Learning Week is “Skills for a connected world”. (It runs from 26-30 March in Paris — don’t miss it!)

Global Target for Digital Skills

At Davos last month, the UN Broadband Commission set global broadband targets to bring online the 3.8 billion people not yet connected to the Internet. Goal 4 is that by 2025: 60% of youth and adults should have achieved at least a minimum level of proficiency in sustainable digital skills.

(I’m not quite sure what the difference is between digital skills and sustainable digital skills.) Having a target such as this is good for focusing global efforts towards skilling up.

The Spectrum of Digital Skills

Digital skills is a broad term. While definitions vary, the Broadband Commission report proposes seeing digital skills and competences on a spectrum, including: 

  • Basic functional digital skills, which allow users to access and conduct basic operations on digital technologies;
  • Generic digital skills, which include using digital technologies in meaningful and beneficial ways, such as content creation and online collaboration; and 
  • Higher-level skills, which mean using digital technology in empowering and transformative ways, for example for software development. These skills include 21st century skills and critical digital literacies.

Beyond skills, digital competences include awareness and attitudes concerning technology use. Most of the people served in ICT4D projects fall into the first and second categories. Understanding where your users are and need to be is important, and a spectrum lens helps in that exercise.

Why Skills Really Matter

Beyond the global stats, goals and definitions, why should you really care about the digital skills of your users, other than that they know enough to navigate your IVR menu or your app?

The answers come from the GSMA’s recent pilot evaluation of its Mobile Internet Skills Training Toolkit (MISTT), implemented last year in Rwanda.

Over 300 sales agents from Tigo, the mobile network operator, were trained on MISTT, and they in turn trained over 83,000 customers. The evaluation found that MISTT training:

  • Gives users confidence and helps them overcome the belief that “the Internet is not for me”;
  • Has the potential to help customers move beyond application “islands” — and get them using more applications/services;
  • Has a ripple effect, as customers are training other people on what they have learned (a study in Cape Town also found this); and
  • Increased data usage among trained customers, which led to increased data revenues for Tigo.

In short, more digital skills (beyond just what you need from your users) present the opportunity for increased engagement, higher numbers of users and, if services are paid-for or data drives revenue, greater earnings. Now those are compelling ICT4D motivators.

Skills as Strategy

Therefore, we need to see skills development as one of the core components of our:

  • Product development strategy (leveraging users who can interact more deeply with features);
  • Growth strategy (leveraging users who train and recruit other users);
  • Revenue strategy (leveraging users who click, share, and maybe even buy).

But what about the cost, you might wonder? As Alex Smith of the GSMA points out, thanks to the data revenues, the MISTT pilot paid back Tigo’s investment within a month and delivered an ROI of 240% within a quarter. That’s for a mobile operator — it would be fascinating to measure ROI for non-profits.
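For readers who want to sanity-check figures like these, ROI is simply (gain - cost) / cost. Here is a quick sketch in Python using hypothetical numbers; the pilot’s actual costs and revenues are not published in this post:

```python
def roi(gain: float, cost: float) -> float:
    """Return on investment as a fraction: (gain - cost) / cost."""
    return (gain - cost) / cost

# Hypothetical figures: a $10,000 training pilot that drives
# $34,000 in extra data revenue over a quarter.
cost = 10_000
revenue = 34_000
print(f"ROI: {roi(revenue, cost):.0%}")  # ROI: 240%
```

Note that a 240% ROI means the gain was 3.4 times the cost, which is why the payback period can be so short.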

To get training, the Mobile Information Literacy Curriculum from TASCHA is also worth checking out, as is the older GSMA Mobile Literacy Toolkit.

Image: CC by Lau Rey

 


Creating Killer ICT4D Content – Your Weekend Long Reads

Creating killer content is critical to ICT4D success. One of the major barriers to digital uptake is a lack of incentives to go online because of a lack of relevant or attractive content.

This weekend we look at resources for creating great content, drawing on lessons from the mhealth and mAgri sectors. If you are not an mhealth or mAgri practitioner, don’t stop reading now. While professions and sectors like to silo, in reality the ICT4D fields overlap enormously. For example, does a programme that educates nurses for improved obstetrics practices fall under mhealth or meducation? The details may differ, but the approaches, lessons and tech might as well be the same. From each sector there is much to learn and transfer to other m-sectors.

Let’s Get Practical and Make Some Content

Not long ago Dr. Peris Kagotho left medical practice to focus on mhealth. Since then she has successfully categorized, edited and contextualized over 10,000 health tips for Kenyans. In a four-part blog series, she highlights techniques and learnings for effective and impactful content development. Read about the prerequisites for quality mhealth content; the principles of behaviour change messaging; creating content that is fit for purpose; and scheduling content for impactful delivery.

Making Content Meaningful Without Re-inventing the Wheel

While there is apparently an abundance of openly accessible health content, this alone is insufficient to make the world healthy and happy. The Knowledge for Health (K4Health) project knows the importance of providing the content in the appropriate context and the language of the people who will use it.

K4Health and USAID have therefore created a guide to adapting existing global health content for different audiences with the goal of expanding the reach, usefulness, and use of evidence-based global health content. Fantastic.

+ The Nutrition Knowledge Bank is an open-access library of free-to-use nutrition content.

Lessons on Content Placement, Format, Data and More

The Talking Book, a ruggedized audio player and recorder by Literacy Bridge, offers agricultural, health and livelihoods education to deep rural communities in four African countries. The UNESCO-Pearson case study on the project highlights key content development approaches and lessons, drawn from over ten years of experience. For example, it’s important not to overload users with too much content; the first few messages in a content category get played the most, so those are the best slots for the most important messages; and these rural audiences prefer content as songs and dramas over lectures. The content strategy is highly data-driven.

Content Isn’t Delivered in a Vacuum

In 2016, the Government of India launched a nation-wide mobile health programme called ‘Kilkari’ to benefit 10 million new and expecting mothers by providing audio-based maternal and child health messages on a weekly basis. The service was designed by BBC Media Action and the GSMA case study describes its evolution, learnings and best practices, covering content and more. It is useful to zoom out and see the bigger picture of an mhealth initiative, and how content forms one part of the whole.

Image: CC by TTCMobile

Voice for Development: Your Weekend Long Reads

While ICT4D innovates from the ground up, most tech we use comes from the top. Yes, it takes a little time for the prices of commercial services in Silicon Valley to drop sufficiently, and the tech to diffuse to the audiences we work with, but the internet and mobile have made that wait short indeed.

Next Big Wave After the Keyboard and Touch: Voice

One such innovation is natural language processing, which draws on AI and machine learning to attempt to understand human language communication and to react and respond appropriately.

While this is not a new field, the quality of understanding and speaking has improved dramatically in recent years. The Economist predicts that voice computing, which enables hands-off communication with machines, is the next and fundamental wave of human-machine interaction, after the keyboard and then touch.

The prediction is driven by tech advances as well as increasing uptake in the consumer market (note: in developed markets): last year Apple’s Siri was handling over 2bn commands a week, and 20% of Google searches on Android-powered handsets in America were input by voice.

Alexa Everywhere

Alexa is Amazon’s voice assistant that lives in Amazon devices like Echo and Dot. Well, actually, Alexa lives in the cloud and provides speech recognition and machine learning services to all Alexa-enabled devices.

Unlike Google and Apple, Amazon wants to open up Alexa and have it (her?) embedded into any product, not just those from Amazon. If you’re into manufacturing, you can now buy one of a range of Alexa Development Kits for a few hundred dollars to construct your own voice-controlled products.

Skills Skills Skills

While Amazon works hard to get Alexa into every home, car and device, you can in the meantime start creating Alexa skills. There’s a short Codecademy course on how to do this. It explains that Alexa provides a set of built-in capabilities, referred to as skills, that define how you can interact with the device. For example, Alexa’s built-in skills include playing music, reading the news, getting a weather forecast, and querying Wikipedia. So, you could say things like: Alexa, what’s the weather in Timbuktu.

Anyone can develop their own custom skills by using the Alexa Skills Kit (ASK). (The skills can only be used in the UK, US and Germany, presumably for now.) An Amazon user “enables” the skill after which it works on any of her Alexa-enabled devices. Et voilà, she simply says the wake phrase to access the skill. This is pretty cool.
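To make the mechanics concrete, here is a minimal sketch in Python of the kind of JSON payload a custom skill’s endpoint sends back to Alexa. The structure follows the basic shape of the Alexa Skills Kit response format, simplified for illustration, and the weather text is hypothetical:

```python
def build_alexa_response(speech_text: str, end_session: bool = True) -> dict:
    """Build a minimal JSON payload an Alexa skill endpoint returns.

    Alexa reads `speech_text` aloud; `end_session` controls whether
    the skill keeps listening for a follow-up.
    """
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            "shouldEndSession": end_session,
        },
    }

# A hypothetical reply to "Alexa, what's the weather in Timbuktu?"
response = build_alexa_response("It is sunny in Timbuktu today.")
print(response["response"]["outputSpeech"]["text"])
```

A real skill also parses the incoming request to identify the user’s intent and any slots (parameters) before building a response like this; the ASK SDKs handle most of that plumbing.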

What Does This Mean for ICT4D?

Is the day coming, not long from now, when machine-based voice assistants are ICT4D’s greatest helpers? Will it open doors of convenience for all and doors of inclusion for people with low digital skills or literacy? Hmmm. There’s a lot of ground to cover before that happens.

While natural language processing has come a looooong way, it’s still far from perfect. Comments about this abound — this one concerning Question of the Day, a popular Alexa skill:

Alexa sometimes does not hear the answer correctly, even though I try very hard to enunciate. It’s frustrating when I’ve gotten the answer right — not even by guessing, but actually knew it — and Alexa comes back and tells me I’ve gotten it wrong!

In ICT4D, there isn’t always room for error. What about sensitive content and interactions that can easily go awry? Is it likely that soon someone will say Alexa, send 100 dollars to my mother in the Philippines? What if she sends the money to the brother in New Orleans?

Other challenges include Alexa’s language range, cost, the need for online connectivity and, a big one, privacy. There is a risk in being tied to one provider, one tech giant. This stuff should be based on open standards.

Still, it is interesting and exciting to see this move from Amazon and contemplate how it could affect ICT4D. What are your thoughts for how voice for development (V4D) could make a social impact?

Here’s a parting challenge to ICTWorks readers: try out Alexa skills and tell us whether they have legs for development. An ICT4D skill, if you will. (It can be something simple for now, not “Alexa, eliminate world poverty”.)

Image: CC-BY-NC by Rob Albright

Five Traits of Low-literate Users: Your Weekend Long Reads

We know that the first step to good human-centered design is understanding your user. IDEO calls this having an empathy mindset, the “capacity to step into other people’s shoes, to understand their lives, and start to solve problems from their perspectives.”

Having empathy can be especially challenging in ICT4D since the people we develop solutions for often live in completely different worlds to us, literally and figuratively.

I’m currently drafting a set of guidelines for more inclusive design of digital solutions for low-literate and low-skilled people. (Your expert input on it will be requested soon!) There are many excellent guides to good ICT4D, and the point is not to duplicate efforts here. Rather, it is to focus the lens on the 750 million people who cannot read or write and the 2 billion people who are semi-literate. In other words, likely a significant portion of your target users.

Globally, the offline population is disproportionately rural, poor, elderly and female. They have limited education and low literacy. Of course people who are low-literate and low-skilled do not constitute a homogeneous group, and differences abound across and within communities.

Despite these variances, and while every user is unique, research has revealed certain traits that are common enough to pull out and be useful in developing empathy for this audience. Each has implications for user-centered design processes and the design of digital solutions (the subject of future posts).

Note: much of the research below comes from Indrani Medhi Thies and the teams she has worked with (including Kentaro Toyama) at Microsoft Research India, developing job boards, maps, agri video libraries and more, for low-literates. If you do nothing else, watch her presentation at HCI 2017, an excellent summary of twelve years of research.

Not Just an Inability to Read

Research suggests that low exposure to education means the cognitive skills needed for digital interaction can be underdeveloped. For example, low-literate users can struggle to transfer learning from one setting to another, such as from instructional videos to implementation in real life. Secondly, conceptualising and navigating information hierarchies can be more challenging than for well-educated users (another paper here).

Low-literate Users Are Scared and Sceptical of Tech

Unsurprisingly, low-literate users are not confident in their use of ICTs. In practice, they are scared of touching the tech for fear of breaking it. (There are many schools in low-income, rural areas where brand new donated computers are locked up so that nobody uses and damages them!)

Further, even if they don’t break it, they might be seen as not knowing how to use it, causing embarrassment. When they do use tech, they can be easily confused by the UI.

Low-literate users can lack awareness of what digital can deliver, mistrust the technology and doubt that it holds information relevant to their lives.

One of Multiple Users

Low-income people often live in close-knit communities. Social norms and hierarchies influence who has access to what technology, how information flows between community members and who is trusted.

Within families, devices are often shared. And when low-literate users engage with a device, “infomediaries” may need to assist, such as by reading messages, navigating the UI or troubleshooting the tech. Infomediaries can also hinder the experience when their “filtering and funnelling decisions limit the low-literate users’ information-seeking behaviour.”

The implication is that the “target user” is really plural — the node and all the people around him/her. Your digital solution is really for multiple users and used in mediated scenarios.

Divided by Gender

Two thirds of the world’s illiterate population are women. They generally use fewer mobile services than men. In South Asia women are 38% less likely than men to own a mobile phone, and are therefore more likely to be “sharing” users. Cultural, social or religious norms can restrict digital access for women, deepening the gender digital divide. In short, for low-literate and low-income users, gender matters.

Driven by Motivation (Which Can Trump Bad UI)

While we often attribute successful digital usage to good UI, research has shown that motivation is a strong driver of task completion. Despite minimal technical knowledge, urban youth in India hungry for entertainment content traversed as many as 19 steps to Bluetooth music, videos and comedy clips between phones and PCs.

In terms of livelihoods and living, the desires to sell crops for more, have healthier children, access government grants or apply for a visa are the motivators we need to tap to engage low-literate users.

If “sufficient user motivation towards a goal turns UI barriers into mere speed bumps,” do we pay enough attention to how much our users want what we’re offering? This can make or break a project.

Image: © CC-BY-NC-ND by Simone D. McCourtie / World Bank

Artificial Intelligence in Education: Your Weekend Long Reads


Continuing the focus on artificial intelligence (AI), this weekend we look at AI in education. Many possibilities have been proposed for AI to help people teach and learn, some genuinely exciting and others that look much like today’s tools.

One encouraging consensus from the readings below is that, while there is concern that AI and robots will ultimately take over certain human jobs, teachers are safe. The role relies too much on the skills that AI is not good at, such as creativity and emotional intelligence.

An Argument for AI in Education

A 2016 report (two-page summary) from Pearson and University College London’s Knowledge Lab offers a very readable and coherent argument for AI in education. It describes what is possible today, for example one-on-one digital tutoring to every student, and what is potentially possible in the future, such as lifelong learning companions powered by AI that can accompany and support individual learners throughout their studies – in and beyond school. Or, one day, there could be new forms of assessment that measure learning while it is taking place, shaping the learning experience in real time. It also proposes three actions to help us get from here to there.

AI and People, Not AI Instead of People

There is an argument that rather than focusing solely on building more intelligent AI to take humans out of the loop, we should focus just as much on intelligence amplification/augmentation. This is the use of technology – including AI – to provide people with information that helps them make better decisions and learn more effectively. So, for instance, rather than automating the grading of student essays, some researchers are focusing on how they can provide intelligent feedback to students that helps them better assess their own writing.

The “Human Touch” as Value Proposition

At Online Educa Berlin last month, I heard Dr. Tarek R. Besold, lecturer in Data Science at City, University of London, talk about AI in Ed (my rough notes are here). He built on the idea that we need to think more carefully about what AI does well and what humans do well.

For example, AI can provide intelligent tutoring, but only on well-defined, narrow domains for which we have lots of data. Learning analytics can analyse learner behaviour and teacher activities … so as to identify individual needs and preferences to inform human intervention. Humans, while inefficient at searching, sorting and mining data, for example, are good at understanding, empathy and relationships.

In fact, of all the sectors McKinsey & Company examined in a report on where machines could replace humans, the technical feasibility of automation is lowest in education, at least for now. Why? Because the essence of teaching is deep expertise and complex interactions with other people, things that AI is not yet good at. Besold proposed the “human touch” as our value proposition.

Figuring out how humans and AI can bring out the best in each other to improve education, now that is an exciting proposal. Actually creating this teacher-machine symbiosis in the classroom will be a major challenge, though, given the perception of job loss from technology.

The Future of AI Will Be Female

Emotional intelligence is increasingly in demand in the workplace, and will only be more so in the future when AI will have replaced predictable, repetitive jobs. This means that cultivating emotional intelligence and social skills should be critical components of education today. But there’s a fascinating angle here: in general, women score much higher than men in emotional intelligence. Thus, Quartz claims, women are far better prepared for an AI future.

Image: © CC-BY-NC-ND by Ericsson

Artificial Intelligence: Your Weekend Long Reads

Artificial intelligence (AI) was one of the hottest topics of 2017. Gartner calls it a “mega trend”: its research director, Mike J. Walker, proposed that “AI technologies will be the most disruptive class of technologies over the next 10 years due to radical computational power, near-endless amounts of data and unprecedented advances in deep neural networks.”

But as much as it is trendy and bursting with promise, it is also controversial, overhyped and misunderstood. In fact, it has yet to enjoy a widely accepted definition.

AI underpins many of Gartner’s emerging technologies on its 2017 hype cycle. However, smart robots, deep learning and machine learning were all cresting the Peak of Inflated Expectations. Of course, after that comes the Trough of Disillusionment. Collectively they will take two to ten years to reach the Plateau of Productivity.

AI is both a long game and already in our lives. Your Amazon or Netflix recommendations are partly AI-based. So are speech recognition and translation, such as in Google Home and Google Translate. But, as you know from using these services, they are far from perfect. Closer to ICT4D, within monitoring and evaluation we know the opportunities and limitations of AI.

In 2018 we can expect to hear a lot more about AI, along with promises and disappointments. Almost anyone whose software has an algorithm will claim they’re harnessing AI. There will suddenly be more adaptive, intelligent platforms in edtech, and more talk of smart robots and AI hollowing out the global job market.

While there will be some truth to the AI claims and powerful new platforms, we need to learn to read between the lines. The potential of AI is exciting and will be realised over the coming years and decades, but in varying degrees and unevenly spread. For now, a balanced view is needed to discern between what is hype or on the long horizon, and what we can use today for greater social impact. Only in this way can we fully get to grips with the technological, social and ethical impact of AI. Below are a few articles to pique our interest in 2018.

The Next Fifteen Years

To get the big picture, an excellent place to start is the Stanford University report Artificial Intelligence and Life in 2030. A panel of experts focussed the AI lens on eight domains they considered most salient: transportation; service robots; healthcare; education; low-resource communities; public safety and security; employment and workplace; and entertainment. In each of these domains, the report both reflects on progress in the past fifteen years and anticipates developments in the coming fifteen years.

AI for Good

Last year the ITU hosted the AI for Good Global Summit, which brought together a host of international NGOs, UN bodies, academia and the private sector to consider the opportunities and limitations of AI for good. The conference report offers a summary of the key takeaways and applications cited at the event. A number of webcasts are also available.

AI Moves into the Cloud

While most ICT4D tech outfits simply don’t have access to the computing power and expertise to fully utilise AI, this is starting to change. In 2017, AI floated into the cloud. Amazon, Google and Microsoft have introduced large-scale cloud-based AI. This includes open-source AI software as well as AI services for turning speech in audio files into time-stamped text, translating between various languages and tracking people, activities, and objects in video. I’m looking forward to seeing these tools used in ICT4D soon.

Growing Up with Alexa

Considering the interaction between her four-year-old niece and Amazon Echo’s Alexa, a reporter asked the following question: What will it do to kids to have digital butlers they can boss around? What is the impact of growing up with Alexa? Will it make kids better adjusted and educated — or the opposite? This piece offers interesting questions on the social impact of AI on children.

The Ethical Dimension

The World Commission on the Ethics of Scientific Knowledge and Technology of UNESCO (COMEST) last year released a report on the ethical issues surrounding the use of contemporary robotic technologies — underpinned by AI — in society (there is a 2-minute video summary). The bottom line: some decisions always require meaningful human control.

Amidst the growing role of robots in our world there are new responsibilities for humans to ensure that people and machines can live in productive co-existence. As AI impacts our world in greater ways, the ethical dimension will equally become more important, bringing philosophers, technologists and policy-makers around the same table. Being in the ICT4D space, our role as technologists and development agents will be critical here.

Image: © CC-BY-SA by Silver Blue

Fake News – Weekend Long Reads

We live in an era, according to the Economist, that is post-truth. Especially in politics, this time sees “a reliance on assertions that ‘feel true’ but have no basis in fact.” In 2016, post-truth was the Oxford Dictionaries Word of the Year.

Untruths have always been with us, but the internet is the medium that changed everything. The scale with which “alternative facts“, untruths and blatant lies can be created and spread — by people and algorithms — can, for the first time ever, threaten democracy and social cohesion at a global scale.

For those of us who have long believed in the power of the internet to break down barriers between people and cultures, foster dialogue and sharpen truth through increased transparency and access to information, post-truth’s most dangerous weapon, “fake news“, is a bitter pill to swallow. While fake news has been around since the late 19th century, it is now a headline phenomenon, the Collins Word of the Year for 2017. What happened to the grand internet dream of the democratisation of knowledge?

All of us have a duty to engage with these complex issues, to understand them, take a position, and reclaim the dream. Most importantly, we need to constantly question whether the digital tools we built, and continue to build, are part of the problem.

The Birth of a Word

It is useful to go back only a year and a half to remind ourselves how fake news became a household word. WIRED’s article traces the birth — and, it claims, the death — of the term. How did it die? It quickly became so diluted in meaning, so claimed by those shouting the loudest, that it is now meaningless in many ways.

Fake News, or Information Disorder?

In an attempt to bring structure to the discussions, the Council of Europe produced a report on what it calls information disorder. The authors refrain from using the term fake news, for two reasons. First, they believe it is “woefully inadequate” to describe a very complex issue, and, secondly, it has been appropriated by politicians to slam any news or organisation they find disagreeable, thus becoming a mechanism for repression — what the New York Times calls “a cudgel for strongmen”.

The authors introduce a new conceptual framework for examining information disorder, identifying three different types:

  • Mis-information is when false information is shared, but no harm is meant. (According to Open University research, misinformation is rife among refugee populations.)
  • Dis-information is when false information is knowingly shared to cause harm.
  • Mal-information is when genuine information is shared to cause harm, often by moving information designed to stay private into the public sphere.
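The framework rests on two axes: whether the information is false, and whether harm is intended. As a toy sketch (my own Python mapping of the report’s definitions, not code from the report), the three categories fall out of those two booleans:

```python
def classify_information_disorder(is_false: bool, intends_harm: bool) -> str:
    """Map the two axes of the Council of Europe framework
    (falseness, intent to harm) onto its three categories."""
    if is_false and not intends_harm:
        return "mis-information"   # false, but shared in good faith
    if is_false and intends_harm:
        return "dis-information"   # false, shared to cause harm
    if not is_false and intends_harm:
        return "mal-information"   # true, weaponised to cause harm
    return "information"           # true and harmless: no disorder

# A leaked private document shared to damage someone is genuine but harmful:
print(classify_information_disorder(is_false=False, intends_harm=True))  # mal-information
```

Thinking in terms of these two axes, rather than the catch-all “fake news”, makes it clearer which interventions fit which problem.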

The report concludes with excellent recommendations for technology companies, as well as a range of other stakeholders. If the report is too long for you, be sure to at least read the recommendations.

Fight It With Software

Tom Wheeler at the Brookings Institution offers a history of information sharing, control and news curation. He laments that today the “algorithms that decide our news feed are programmed to prioritize user attention over truth to optimize for engagement, which means optimizing for outrage, anger and awe.” But, he proposes: “it was software algorithms that put us in this situation, and it is software algorithms that can get us out of it.”

The idea is “public interest algorithms” that interface with social network platforms to, at an aggregate level, track information sources, spread and influence. Such software could help public interest groups monitor social media in the same way they do for broadcast media.

Fight It With Education

While I believe in the idea of software as part of the solution, the Wheeler article seems to miss a key point: information spread is a dance between algorithms and people. Every like, share and comment by you and me feeds the beast. Without us, the algorithm starves.

We need to change the way we behave online; media and information literacy are crucial to this. There are many excellent resources for teens, adults and teachers to help us all be more circumspect online. I like the Five Key Questions That Can Change the World (from 2005!).

Want To Understand It Better? Fake Some

Finally, long before fake news became popular, in 2008, Professor T. Mills Kelly got his students at George Mason University to create fake Wikipedia pages to teach them the fallibility of the internet. At Google’s Newsgeist unconference last month, a similar exercise involved strategising a fake news campaign aimed at discrediting a certain US politician. Both instances force us into the minds of fakesters to see how the internet can be used to spread the badness. While creating fake Wikipedia pages doesn’t help the internet’s information pollution problem, the heart of the exercises is useful — perhaps they should be part of media literacy curricula?

Thanks to Guy Berger for suggesting some of these articles.

Image: © CC-BY-NC .jeff.