In ICT4D We’re Principled, But Are We Practiced Enough? – Your Weekend Long Reads

Last month CO.DESIGN published the 10 New Principles of Good Design (thanks Airbel Center for the link). The article, which is based on a set of industrial design principles from the 1970s, makes for important reading.

According to the author, and concerning commercial digital solutions, 2017 was “a year of reckoning for the design community. UX became a weapon, AI posed new challenges, and debate erupted over once rock-solid design paradigms.” What is most interesting — and wonderful to boot — is that many of the “new” principles are ones we, the ICT4D community, have endorsed for years.

Good Design is Transparent

For example, the article calls for transparency in design. Apparently today, “amid a string of high-profile data breaches and opaque algorithms that threaten the very bedrock of democracy, consumers have grown wary of slick interfaces that hide their inner workings.”

We know that user-centered design is participatory and that we should expose the important parts of digital solutions to our users. We believe in telling our users what we’ll do with their data.

Good Design Considers Broad Consequences and is Mindful of Systems

The article warns that in focusing on the immediate needs of users, user-friendly design often fails to consider long-term consequences. “Take Facebook’s echo chamber, Airbnb’s deleterious impact on affordable housing,” as examples. Not for us: we understand the existing ecosystem, are conscious of long-term consequences and design for sustainability.

A Little History Lesson

Today we have principles for sectors — such as refugees, health and government (US or UK version?); for cross-cutting themes — such as identity, gender and mobile money; for research; and the granddaddy of them all, for digital development.

These principles have been developed over a long time. Fifteen years ago I wrote a literature survey on the best practices of ICT4D projects. It was based on the work of the then research pioneer Bridges.org, drawing on a range of projects from the early 2000s.

At the time of my paper, Bridges.org had put forward seven habits of highly effective ICT-enabled development initiatives. By 2007 the list had grown to 12 habits — many of which didn’t look that different from today’s principles.

Do We Practice What We Preach?

But if these principles are not new to us, are we practicing them enough? Don’t get me wrong, the ICT4D community has come a long way in enlisting tech for social good, and the lessons — many learned the hard way — have matured our various guidelines and recommendations. But should we be further down the line by now?

The principles mostly outline what we should do, and some work has been done on the how side, to help us move from principles to practice. But I think that we need to do more to unpack the “why don’t we” aspect.

Consider this data point from a recent Brookings Institution report, Can We Leapfrog? The Potential of Education Innovations to Rapidly Accelerate Progress (more on this report in a future post). Brookings analysed almost 3,000 education innovations around the world (not all tech-based, just so you know) and found that:

… only 16 percent of cataloged interventions regularly use data to drive learning and program outcomes. In fact, most innovations share no information about their data practices.

We know that we should be data-driven and share our practices. So what is going on here? Do the project managers behind these interventions not know that they should do these things? Do they not have the capacity in their teams? Do they not want to because they believe it exposes their non-compliance with such principles? Or perhaps they feel data is their competitive edge and they should hide their practices?

Time for ‘Fess Faires?

Fail faires are an excellent way to share what we tried and what didn’t work. But what about ‘Fess Faires, where we confess why we can’t or — shock horror — won’t follow certain principles? Maybe it’s not our fault, like funding cycles that ICT4D startups can’t survive. But maybe we should be honest and say we won’t collaborate because the funding pie is too small.

If fail faires are more concerned with operational issues, then ‘fess faires look at structural barriers. We need to ask these big questions in safe spaces. Many ICT4D interventions are concerned with behavior change. If we’re to change our own behavior we need to be open about why we do or don’t do things.

Good Design is Honest

So, on the one hand we really can pat ourselves on the back. We’ve had good design principles for almost twenty years. The level of adherence to them has increased, and they have matured over time.

On the other hand, there is still much work to be done. We need to deeply interrogate why we don’t always practice our principles, honestly and openly. Only in this way will we really pursue a key new principle: good design is honest.

Why Digital Skills Really Matter for ICT4D – Your Weekend Long Reads

In an increasingly online world, people need digital skills to work and live productively. One of the major barriers to digital uptake is a lack of these skills.

Across Africa, seven in ten people who don’t use the Internet say they just don’t know how to use it. This is not only a developing country problem: 44% of the European Union population has low or no digital skills (19% has none at all)!

It is no surprise, therefore, that the theme for this year’s UNESCO Mobile Learning Week is “Skills for a connected world”. (It runs from 26-30 March in Paris — don’t miss it!)

Global Target for Digital Skills

At Davos last month, the UN Broadband Commission set global broadband targets to bring online the 3.8 billion people not yet connected to the Internet. Goal 4 is that by 2025: 60% of youth and adults should have achieved at least a minimum level of proficiency in sustainable digital skills.

(I’m not quite sure what the difference is between digital skills and sustainable digital skills.) Having a target such as this is good for focusing global efforts towards skilling up.

The Spectrum of Digital Skills

Digital skills is a broad term. While definitions vary, the Broadband Commission report proposes seeing digital skills and competences on a spectrum, including: 

  • Basic functional digital skills, which allow users to access and conduct basic operations on digital technologies;
  • Generic digital skills, which include using digital technologies in meaningful and beneficial ways, such as content creation and online collaboration; and 
  • Higher-level skills, which mean using digital technology in empowering and transformative ways, for example for software development. These skills include 21st century skills and critical digital literacies.

Beyond skills, digital competences include awareness and attitudes concerning technology use. Most of the people served in ICT4D projects fall into the first and second categories. Understanding where your users are and need to be is important, and a spectrum lens helps in that exercise.

Why Skills Really Matter

Beyond the global stats, goals and definitions, why should you really care about the digital skills of your users, other than that they know enough to navigate your IVR menu or your app?

The answers come from the GSMA’s recent pilot evaluation of its Mobile Internet Skills Training Toolkit (MISTT), implemented last year in Rwanda.

Over 300 sales agents from Tigo, the mobile network operator, were trained on MISTT, and they in turn trained over 83,000 customers. The evaluation found that MISTT training:

  • Gives users confidence and helps them overcome the belief that “the Internet is not for me”;
  • Has the potential to help customers move beyond application “islands” — and get them using more applications/services;
  • Has a ripple effect, as customers are training other people on what they have learned (a study in Cape Town also found this); and
  • Increases data usage among trained customers, which leads to increased data revenues for Tigo.

In short, more digital skills (beyond just what you need from your users) present the opportunity for increased engagement, higher numbers of users and, if services are paid-for or data drives revenue, greater earnings. Now those are compelling ICT4D motivators.

Skills as Strategy

Therefore, we need to see skills development as one of the core components of our:

  • Product development strategy (leveraging users who can interact more deeply with features);
  • Growth strategy (leveraging users who train and recruit other users);
  • Revenue strategy (leveraging users who click, share, and maybe even buy).

But what about the cost, you might wonder? As Alex Smith of the GSMA points out, thanks to the data revenues the MISTT pilot returned Tigo’s investment within a month and delivered an ROI of 240% within a quarter. That’s for a mobile operator — it would be fascinating to measure ROI for non-profits.
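
To make the arithmetic behind that claim concrete, here is a minimal sketch of the payback and ROI calculations, using made-up figures rather than Tigo’s actual costs and revenues:

    # Hypothetical figures to illustrate the ROI logic, not GSMA/Tigo data.
    training_cost = 10_000                 # one-off cost of an MISTT-style training (USD)
    extra_monthly_data_revenue = 12_000    # additional data revenue attributed to trained users (USD/month)

    payback_months = training_cost / extra_monthly_data_revenue
    quarterly_roi = (extra_monthly_data_revenue * 3 - training_cost) / training_cost

    print(f"Payback period: {payback_months:.1f} months")  # under one month here
    print(f"Quarterly ROI: {quarterly_roi:.0%}")           # 260% with these invented numbers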

For training materials, the Mobile Information Literacy Curriculum from TASCHA is also worth checking out, as is the older GSMA Mobile Literacy Toolkit.

Image: CC by Lau Rey

 

Creating Killer ICT4D Content – Your Weekend Long Reads

Creating killer content is critical to ICT4D success. One of the major barriers to digital uptake is a lack of incentive to go online, driven by a shortage of relevant or attractive content.

This weekend we look at resources for creating great content, drawing on lessons from the mhealth and mAgri sectors. If you are not an mhealth or mAgri practitioner, don’t stop reading now. While professions and sectors like to silo, in reality the ICT4D fields overlap enormously. For example, does a programme that educates nurses for improved obstetrics practices fall under mhealth or meducation? The details may differ, but the approaches, lessons and tech might as well be the same. From each sector there is much to learn and transfer to other m-sectors.

Let’s Get Practical and Make Some Content

Not long ago Dr. Peris Kagotho left medical practice to focus on mhealth. Since then she has successfully categorized, edited and contextualized over 10,000 health tips for Kenyans. In a four-part blog series, she highlights techniques and learnings for effective and impactful content development. Read about the prerequisites for quality mhealth content; the principles of behaviour change messaging; creating content that is fit for purpose; and scheduling content for impactful delivery.

Making Content Meaningful Without Re-inventing the Wheel

While there is apparently an abundance of openly accessible health content, this alone is insufficient to make the world healthy and happy. The Knowledge for Health (K4Health) project knows the importance of providing the content in the appropriate context and the language of the people who will use it.

K4Health and USAID have therefore created a guide to adapting existing global health content for different audiences with the goal of expanding the reach, usefulness, and use of evidence-based global health content. Fantastic.

+ The Nutrition Knowledge Bank is an open access library of free-to-use nutrition content.

Lessons on Content Placement, Format, Data and More

The Talking Book, a ruggedized audio player and recorder by Literacy Bridge, offers agricultural, health and livelihoods education to deep rural communities in four African countries. The UNESCO-Pearson case study on the project highlights key content development approaches and lessons, drawn from over ten years of experience. For example, it’s important not to overload users with too much content; the first few messages in a content category get played the most, so those are the best slots for the most important messages; and these rural audiences prefer content as songs and dramas over lectures. The content strategy is highly data-driven.

Content Isn’t Delivered in a Vacuum

In 2016, the Government of India launched a nation-wide mobile health programme called ‘Kilkari’ to benefit 10 million new and expecting mothers by providing audio-based maternal and child health messages on a weekly basis. The service was designed by BBC Media Action and the GSMA case study describes its evolution, learnings and best practices, covering content and more. It is useful to zoom out and see the bigger picture of an mhealth initiative, and how content forms one part of the whole.

Image: CC by TTCMobile

Voice for Development: Your Weekend Long Reads

While ICT4D innovates from the ground up, most tech we use comes from the top. Yes, it takes a little time for the prices of commercial services in Silicon Valley to drop sufficiently, and the tech to diffuse to the audiences we work with, but the internet and mobile have made that wait short indeed.

Next Big Wave After the Keyboard and Touch: Voice

One such innovation is natural language processing, which draws on AI and machine learning to attempt to understand human language communication and to react and respond appropriately.

While this is not a new field, the quality of understanding and speaking has improved dramatically in recent years. The Economist predicts that voice computing, which enables hands-off communication with machines, is the next and fundamental wave of human-machine interaction, after the keyboard and then touch.

The prediction is driven by tech advances as well as increasing uptake in the consumer market (note: in developed markets): last year Apple’s Siri was handling over 2bn commands a week, and 20% of Google searches on Android-powered handsets in America were input by voice.

Alexa Everywhere

Alexa is Amazon’s voice assistant that lives in Amazon devices like Echo and Dot. Well, actually, Alexa lives in the cloud and provides speech recognition and machine learning services to all Alexa-enabled devices.

Unlike Google and Apple, Amazon wants to open up Alexa and have it (her?) embedded into products beyond Amazon’s own. If you’re into manufacturing, you can now buy one of a range of Alexa Development Kits for a few hundred dollars to construct your own voice-controlled products.

Skills Skills Skills

While Amazon works hard to get Alexa into every home, car and device, you can in the meantime start creating Alexa skills. There’s a short Codecademy course on how to do this. It explains that Alexa provides a set of built-in capabilities, referred to as skills, that define how you can interact with the device. For example, Alexa’s built-in skills include playing music, reading the news, getting a weather forecast, and querying Wikipedia. So, you could say things like: Alexa, what’s the weather in Timbuktu?

Anyone can develop their own custom skills by using the Alexa Skills Kit (ASK). (The skills can only be used in the UK, US and Germany, presumably for now.) An Amazon user “enables” the skill after which it works on any of her Alexa-enabled devices. Et voilà, she simply says the wake phrase to access the skill. This is pretty cool.
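
To make this concrete, here is a minimal sketch of what a custom skill’s backend can look like as an AWS Lambda function in Python. The intent name and the reply text are invented for illustration; a real skill also needs an interaction model configured in the Alexa Skills Kit developer console.

    # A minimal, illustrative Alexa custom skill handler (AWS Lambda, Python).
    # The intent name "GetHealthTipIntent" and the replies are hypothetical examples.

    def build_response(speech_text, end_session=True):
        """Wrap plain text in the JSON structure Alexa expects back from a skill."""
        return {
            "version": "1.0",
            "response": {
                "outputSpeech": {"type": "PlainText", "text": speech_text},
                "shouldEndSession": end_session,
            },
        }

    def lambda_handler(event, context):
        request = event["request"]
        if request["type"] == "LaunchRequest":
            return build_response("Welcome. Ask me for today's health tip.", end_session=False)
        if request["type"] == "IntentRequest" and request["intent"]["name"] == "GetHealthTipIntent":
            return build_response("Wash your hands with soap before preparing food.")
        return build_response("Sorry, I didn't catch that.")

Once the skill is enabled, the user says the wake word plus an utterance mapped to the intent, and Alexa posts a request like the one handled above to the skill’s endpoint.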

What Does This Mean for ICT4D?

Is the day coming, not long from now, when machine-based voice assistants are ICT4D’s greatest helpers? Will it open doors of convenience for all and doors of inclusion for people with low digital skills or literacy? Hmmm. There’s a lot of ground to cover before that happens.

While natural language processing has come a looooong way, it’s still far from perfect. Comments about this abound — this one concerning Question of the Day, a popular Alexa skill:

Alexa sometimes does not hear the answer correctly, even though I try very hard to enunciate. It’s frustrating when I’ve gotten the answer right — not even by guessing, but actually knew it — and Alexa comes back and tells me I’ve gotten it wrong!

In ICT4D, there isn’t always room for error. What about sensitive content and interactions that can easily go awry? Is it likely that soon someone will say Alexa, send 100 dollars to my mother in the Philippines? What if she sends the money to her brother in New Orleans?

Other challenges include Alexa’s language range, cost, the need for online connectivity and, the big one, privacy. There is a risk in being tied to one provider, one tech giant. This stuff should be based on open standards.

Still, it is interesting and exciting to see this move from Amazon and contemplate how it could affect ICT4D. What are your thoughts for how voice for development (V4D) could make a social impact?

Here’s a parting challenge to ICTWorks readers: Try out Alexa skills and tell us whether they’ve got legs for development. An ICT4D skill, if you will. (It can be something simple for now, not “Alexa, eliminate world poverty”.)

Image: CC-BY-NC by Rob Albright

Five Traits of Low-literate Users: Your Weekend Long Reads

We know that the first step to good human-centered design is understanding your user. IDEO calls this having an empathy mindset, the “capacity to step into other people’s shoes, to understand their lives, and start to solve problems from their perspectives.”

Having empathy can be especially challenging in ICT4D since the people we develop solutions for often live in completely different worlds to us, literally and figuratively.

I’m currently drafting a set of guidelines for more inclusive design of digital solutions for low-literate and low-skilled people. (Your expert input on it will be requested soon!) There are many excellent guides to good ICT4D, and the point is not to duplicate efforts here. Rather, it is to focus the lens on the 750 million people who cannot read or write and the 2 billion people who are semi-literate. In other words, likely a significant portion of your target users.

Globally, the offline population is disproportionately rural, poor, elderly and female. They have limited education and low literacy. Of course people who are low-literate and low-skilled do not constitute a homogeneous group, and differences abound across and within communities.

Despite these variances, and while every user is unique, research has revealed certain traits that are common enough to pull out and be useful in developing empathy for this audience. Each has implications for user-centered design processes and the design of digital solutions (the subject of future posts).

Note: much of the research below comes from Indrani Medhi Thies and the teams she has worked with (including Kentaro Toyama) at Microsoft Research India, developing job boards, maps, agri video libraries and more, for low-literates. If you do nothing else, watch her presentation at HCI 2017, an excellent summary of twelve years of research.

Not Just an Inability to Read

Research suggests that low exposure to education means cognitive skills needed for digital interaction can be underdeveloped. For example, low-literate users can struggle with transferring learning from one setting to another, such as from instructional videos to implementation in real life. Conceptualising and navigating information hierarchies can also be more challenging than it is for well-educated users (another paper here).

Low-literate Users Are Scared and Sceptical of Tech

Unsurprisingly, low-literate users are not confident in their use of ICTs. What this means is that they are scared of touching the tech for fear of breaking it. (There are many schools in low-income, rural areas where brand new donated computers are locked up so that nobody uses and damages them!)

Further, even if they don’t break it, they might be seen as not knowing how to use it, causing embarrassment. When they do use tech, they can be easily confused by the UI.

Low-literate users can lack awareness of what digital can deliver, mistrust the technology and doubt that it holds information relevant to their lives.

One of Multiple Users

Low-income people often live in close-knit communities. Social norms and hierarchies influence who has access to what technology, how information flows between community members and who is trusted.

Within families, devices are often shared. And when low-literates use the device it may be necessary to involve “infomediaries” to assist, for example by reading messages, navigating the UI or troubleshooting the tech. Infomediaries can also hinder the experience when their “filtering and funnelling decisions limit the low-literate users’ information-seeking behaviour.”

The implication is that the “target user” is really plural — the node and all the people around him/her. Your digital solution is really for multiple users and used in mediated scenarios.

Divided by Gender

Two thirds of the world’s illiterate population are women. They generally use fewer mobile services than men. In South Asia women are 38% less likely than men to own a mobile phone, and are therefore more likely to be “sharing” users. Cultural, social or religious norms can restrict digital access for women, deepening the gender digital divide. In short, for low-literate and low-income users, gender matters.

Driven by Motivation (Which Can Trump Bad UI)

While we often attribute successful digital usage to good UI, research has shown that motivation is a strong driver for task completion. Despite minimal technical knowledge, urban youth in India hungry for entertainment content traversed as many as 19 steps to Bluetooth music, videos and comedy clips between phones and PCs.

In terms of livelihoods and living, the desire to sell crops for more, have healthier children, access government grants or apply for a visa is the kind of motivator that we need to tap to engage low-literate users.

If “sufficient user motivation towards a goal turns UI barriers into mere speed bumps,” do we pay enough attention to how much our users want what we’re offering? This can make or break a project.

Image: © CC-BY-NC-ND by Simone D. McCourtie / World Bank

Artificial Intelligence in Education: Your Weekend Long Reads


Continuing the focus on artificial intelligence (AI), this weekend looks at it in education. In general, there are many fanciful AI in Ed possibilities proposed to help people teach and learn, some of which are genuinely exciting and others that just look much like what we already have today.

One encouraging consensus from the readings below is that, while there is concern that AI and robots will ultimately take over certain human jobs, teachers are safe. The role relies too much on the skills that AI is not good at, such as creativity and emotional intelligence.

An Argument for AI in Education

A 2016 report (two-page summary) from Pearson and University College London’s Knowledge Lab offers a very readable and coherent argument for AI in education. It describes what is possible today, for example one-on-one digital tutoring to every student, and what is potentially possible in the future, such as lifelong learning companions powered by AI that can accompany and support individual learners throughout their studies – in and beyond school. Or, one day, there could be new forms of assessment that measure learning while it is taking place, shaping the learning experience in real time. It also proposes three actions to help us get from here to there.

AI and People, Not AI Instead of People

There is an argument that rather than focusing solely on building more intelligent AI to take humans out of the loop, we should focus just as much on intelligence amplification/augmentation. This is the use of technology – including AI – to provide people with information that helps them make better decisions and learn more effectively. So, for instance, rather than automating the grading of student essays, some researchers are focusing on how they can provide intelligent feedback to students that helps them better assess their own writing.

The “Human Touch” as Value Proposition

At Online Educa Berlin last month, I heard Dr. Tarek R. Besold, lecturer in Data Science at City, University of London, talk about AI in Ed (my rough notes are here). He built on the idea that we need to think more carefully about what AI does well and what humans do well.

For example, AI can provide intelligent tutoring, but only on well-defined, narrow domains for which we have lots of data. Learning analytics can analyse learner behaviour and teacher activities … so as to identify individual needs and preferences to inform human intervention. Humans, while inefficient at searching, sorting and mining data, for example, are good at understanding, empathy and relationships.

In fact, of all the sectors McKinsey & Company examined in a report on where machines could replace humans, the technical feasibility of automation is lowest in education, at least for now. Why? Because the essence of teaching is deep expertise and complex interactions with other people, things that AI is not yet good at. Besold proposed the “human touch” as our value proposition.

Figuring out how humans and AI can bring out the best in each other to improve education, now that is an exciting proposal. Actually creating this teacher-machine symbiosis in the classroom will be a major challenge, though, given the perception of job loss from technology.

The Future of AI Will Be Female

Emotional intelligence is increasingly in demand in the workplace, and will only be more so in the future when AI will have replaced predictable, repetitive jobs. This means that cultivating emotional intelligence and social skills should be critical components of education today. But there’s a fascinating angle here: in general, women score much higher than men in emotional intelligence. Thus, Quartz claims, women are far better prepared for an AI future.

Image: © CC-BY-NC-ND by Ericsson

Artificial Intelligence: Your Weekend Long Reads

Artificial intelligence (AI) was one of the hottest topics of 2017. A Gartner “mega trend”: the firm’s research director, Mike J. Walker, proposed that “AI technologies will be the most disruptive class of technologies over the next 10 years due to radical computational power, near-endless amounts of data and unprecedented advances in deep neural networks.”

But as much as it is trendy and bursting with promise, it is also controversial, overhyped and misunderstood. In fact, it has yet to enjoy a widely accepted definition.

AI underpins many of Gartner’s emerging technologies on its 2017 hype cycle. However, smart robots, deep learning and machine learning were all cresting the Peak of Inflated Expectations. Of course, after that comes the Trough of Disillusionment. Collectively they will take two to ten years to reach the Plateau of Productivity.

AI is both a long game and already in our lives. Your Amazon or Netflix recommendations are partly AI-based. So are speech recognition and translation, such as in Google Home and Google Translate. But, as you know from using these services, they are far from perfect. Closer to ICT4D, within monitoring and evaluation we know the opportunities and limitations of AI.

In 2018 we can expect to hear a lot more about AI, along with promises and disappointments. Almost anyone whose software has an algorithm will claim they’re harnessing AI. There will suddenly be more adaptive, intelligent platforms in edtech, and more talk of smart robots and AI hollowing out the global job market.

While there will be some truth to the AI claims and powerful new platforms, we need to learn to read between the lines. The potential of AI is exciting and will be realised over the coming years and decades, but in varying degrees and unevenly spread. For now, a balanced view is needed to discern between what is hype or on the long horizon, and what we can use today for greater social impact. Only in this way can we fully get to grips with the technological, social and ethical impact of AI. Below are a few articles to get our interest piqued in 2018.

The Next Fifteen Years

To get the big picture, an excellent place to start is the Stanford University report Artificial Intelligence and Life in 2030. A panel of experts focussed the AI lens on eight domains they considered most salient: transportation; service robots; healthcare; education; low-resource communities; public safety and security; employment and workplace; and entertainment. In each of these domains, the report both reflects on progress in the past fifteen years and anticipates developments in the coming fifteen years.

AI for Good

Last year the ITU hosted the AI for Good Global Summit, which brought together a host of international NGOs, UN bodies, academia and the private sector to consider the opportunities and limitations of AI for good. The conference report offers a summary of the key takeaways and applications cited in the event. A number of webcasts are also available.

AI Moves into the Cloud

While most ICT4D tech outfits simply don’t have access to the computing power and expertise to fully utilise AI, this is starting to change. In 2017, AI floated into the cloud. Amazon, Google and Microsoft have introduced large-scale cloud-based AI. This includes open-source AI software as well as AI services for turning speech in audio files into time-stamped text, translating between various languages and tracking people, activities, and objects in video. I’m looking forward to seeing these tools used in ICT4D soon.

Growing Up with Alexa

Considering the interaction between her four-year-old niece and Amazon Echo’s Alexa, a reporter asked the following question: What will it do to kids to have digital butlers they can boss around? What is the impact of growing up with Alexa? Will it make kids better adjusted and educated — or the opposite? This piece offers interesting questions on the social impact of AI on children.

The Ethical Dimension

The World Commission on the Ethics of Scientific Knowledge and Technology of UNESCO (COMEST) last year released a report on the ethical issues surrounding the use of contemporary robotic technologies — underpinned by AI — in society (there is a 2-minute video summary). The bottom line: some decisions always require meaningful human control.

Amidst the growing role of robots in our world there are new responsibilities for humans to ensure that people and machines can live in productive co-existence. As AI impacts our world in greater ways, the ethical dimension will equally become more important, bringing philosophers, technologists and policy-makers around the same table. Being in the ICT4D space, our role as technologists and development agents will be critical here.

Image: © CC-BY-SA by Silver Blue

Fake News – Weekend Long Reads

We live in an era, according to the Economist, that is post-truth. Especially in politics, this time sees “a reliance on assertions that ‘feel true’ but have no basis in fact.” In 2016, post-truth was the Oxford Dictionaries Word of the Year.

Untruths have always been with us, but the internet is the medium that changed everything. The scale with which “alternative facts“, untruths and blatant lies can be created and spread — by people and algorithms — can, for the first time ever, threaten democracy and social cohesion at a global scale.

For those of us who have, for a long time, believed in the power of the internet to break down barriers between people and cultures, foster dialogue, have a sharpening effect on truth through increased transparency and access to information, post-truth’s most dangerous weapon, “fake news“, is a bitter pill to swallow. While fake news has been around since the late 19th century, it is now a headline phenomenon, the Collins’ Word of the Year for 2017. What happened to the grand internet dream of the democratisation of knowledge?

All of us have a duty to engage with these complex issues, to understand them, take a position, and reclaim the dream. Most importantly, we need to constantly question whether the digital tools we built, and continue to build, are part of the problem.

The Birth of a Word

It is useful to go back only a year and a half to remind ourselves how fake news became a household word. WIRED’s article traces the birth — and, it claims, the death — of the term. How did it die? It quickly became so diluted in meaning, so claimed by those shouting the loudest, that it has become meaningless in many ways.

Fake News, or Information Disorder?

In an attempt to bring structure to the discussions, the Council of Europe produced a report on what it calls information disorder. The authors refrain from using the term fake news, for two reasons. First, they believe it is “woefully inadequate” to describe a very complex issue, and, secondly, it has been appropriated by politicians to slam any news or organisation they find disagreeable, thus becoming a mechanism for repression — what the New York Times calls “a cudgel for strongmen”.

The authors introduce a new conceptual framework for examining information disorder, identifying three different types:

  • Mis-information is when false information is shared, but no harm is meant. (According to Open University research, misinformation is rife among refugee populations.)
  • Dis-information is when false information is knowingly shared to cause harm.
  • Mal-information is when genuine information is shared to cause harm, often by moving information designed to stay private into the public sphere.

The report concludes with excellent recommendations for technology companies, as well as a range of other stakeholders. If the report is too long for you, be sure just to read the recommendations.

Fight It With Software

Tom Wheeler at the Brookings Institution offers a history of information sharing, control and news curation. He laments that today the “algorithms that decide our news feed are programmed to prioritize user attention over truth to optimize for engagement, which means optimizing for outrage, anger and awe.” But, he proposes: “it was software algorithms that put us in this situation, and it is software algorithms that can get us out of it.”

The idea is “public interest algorithms” that interface with social network platforms to, at an aggregate level, track information sources, spread and influence. Such software could help public interest groups monitor social media in the same way they do for broadcast media.
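
Wheeler does not spell out what such a public interest algorithm would look like in practice, but a toy sketch might simply aggregate, per source domain, how widely items are spreading. All of the records, field names and numbers below are hypothetical; a real tool would need aggregate data from the platforms themselves.

    # Toy sketch of a "public interest algorithm": rank source domains by total spread.
    # The input records are invented; a real tool would draw on platform-provided data.
    from collections import defaultdict
    from urllib.parse import urlparse

    shared_items = [
        {"url": "http://example-news.com/story-1", "shares": 12000},
        {"url": "http://example-news.com/story-2", "shares": 300},
        {"url": "http://obscure-site.net/claim", "shares": 45000},
    ]

    spread_by_source = defaultdict(int)
    for item in shared_items:
        domain = urlparse(item["url"]).netloc
        spread_by_source[domain] += item["shares"]

    # Rank sources by total spread so watchdog groups can see which ones dominate.
    for domain, total in sorted(spread_by_source.items(), key=lambda kv: -kv[1]):
        print(f"{domain}: {total} shares")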

Fight It With Education

While I believe in the idea of software as the solution, the Wheeler article seems to miss a key point: information spread is a dance between algorithms and people. Every like, share and comment by you and me feeds the beast. Without us, the algorithm starves.

We need to change the way we behave online; media and information literacy are crucial to this. There are many excellent resources for teens, adults and teachers to help us all be more circumspect online. I like the Five Key Questions That Can Change the World (from 2005!).

Want To Understand It Better? Fake Some

Finally, long before fake news became popular, in 2008, Professor T. Mills Kelly got his students at George Mason University to create fake Wikipedia pages to teach them the fallibility of the internet. At Google’s Newsgeist unconference last month, a similar exercise involved strategising a fake news campaign aimed at discrediting a certain US politician. Both instances force us to get into the minds of fakesters and see how they use the internet to spread the badness. While creating fake Wikipedia pages doesn’t help the internet information pollution problem, the heart of the exercises is useful — perhaps they should be part of media literacy curricula?

Thanks to Guy Berger for suggesting some of these articles.

Image: © CC-BY-NC .jeff.

Online Educa Berlin 2017 – rough notes

I recently attended my first Online Educa Berlin conference and found it to be very interesting. With over 2,000 attendees there are enough sessions for you to really dive into whatever is your particular edtech passion. There are also a large number of exhibitors. The focus of the event is largely US and European, but for me this was a breath of fresh air after almost always attending developing country events.

I presented the key findings from the forthcoming landscape review Digital Inclusion for Low-skilled and Low-literate People.

Below are my rough notes from the event, with key takeaways highlighted.

Learning and Working Alongside AI in Everyday Life

Donald Clark, Plan B Learning, UK

  • Recommended book: Janesville: An American Story.
  • 47% of jobs will be automated in next 20 years — claimed by Frey and Osborne, 2013. He says it’s not true.
  • Top 10 market cap companies in the world: 8 of 10 use AI or tech.
  • AI is already in our lives. We have all watched a Netflix show or bought a book on Amazon because of a software-based recommendation.
  • AI in learning:
    • One of the biggest uses of AI in ed is to check for plagiarism. We can do more.
    • Coursera uses AI for online assessments (face recognition).
    • Check out: Wildfire (automated learning content creation), PhotoMath (scan a maths problem for an instant result plus working out), Cogbooks (advanced adaptive learning platform).
    • Opportunity: AI can analyse and assess data without bias (unlike humans).
  • AI affects what we should teach, how we teach it, why we teach it. We need to rethink the education offering in the age of AI.

Tarek R. Besold, City, University of London, UK

Key message: AI is useful, but not everything is AI and AI is not good at all things. We need to think more carefully about what AI does well, what humans do well, and how we can work together.

  • Not all tech is AI, e.g. VR is not AI.
  • Intelligent tutoring only works well on well-defined, narrow domains for which we have lots of data.
  • Learning analytics is best used to track learner and teacher activities so as to identify individual needs and preferences to inform human intervention.
  • He pushes back against the popular call for all young people to learn coding. He says they don’t need to all learn programming, but rather logical thinking, procedural thinking, reasoning.
  • AI will not create equal access to education because of inequality in ICT infrastructure.
  • AI is good at taking over the “declarative knowledge” part of teaching, which can give human teachers/educators more time to focus on skills and the social aspects.
  • See the “human touch” as a value proposition beyond AI.
  • In automation, AI can take over mechanistic and repetitive tasks, giving human workers time to focus on decision-making, creative tasks.
  • Impact of AI on labour market: We need a societal decision: fewer workers, or a shorter working week for everyone (we can push back at tech companies)?
  • Must read for the AI-savvy decision maker: Artificial Intelligence and Life in 2030 (Stanford University report).

I asked: If AI should augment teaching and learning, with both humans and AI having strengths, how do we move AI into education (that could perceive it as a threat)?

  • Donald: Practical level: introduce spaced learning, adaptive learning, content creation to demonstrate the benefit.
  • Tarek: Political level: take the market approach out of education. Ensure humans will not lose jobs because of technology, shift societal perspectives on putting humans first.

Exhibitor: 360AI provides Artificial Intelligence building blocks delivered as APIs, aimed at accelerating the development of innovative teaching and learning products. 

LMS

  • Rethinking Learning Management Systems as Next Generation Digital Learning Environments (NGDLE) — see this article.
    Not an LMS but an LMX (Learning Method Experience).
  • It can be difficult to choose the right standards. Marieke de Wit of SURFnet B.V. shared, for example, that the LTI standard is apparently rather poor on documentation right now.

Jeff Merriman, Associate Director of MIT’s Office of Educational Innovation and Technology and co-founder of the DXtera Institute:

  • MIT has a growing open-source Educational Infrastructure Service (EIS).
  • No UI, they are “headless”.
  • Integration challenge is huge.
  • How can chatbots interface with an LMS? Use existing software, for example Slack, as the LMX. Chatbot integration is then back-ended by, for example, an assessment service.

 

Chatbots in teaching and learning

Donald Clark, Plan B Learning, UK

  • Learning bots:
    • Onboarding bots: chiefonboarding
    • Find stuff: Invisible LMS. Engagement not management
    • Learner engagement: differ.chat
    • Learner support: Deakin’s campus genie student services / IBM and “Jill Watson”
    • Teach courses: Duolingo bots for language learning
    • Practice: a ‘difficult learner’ bot for teachers
    • Well-being: Woebot
  • 7 interface benefits:
    • Natural, easy to use interface
    • Frictionless interface
    • Less cognitive overload
    • In line with current online interfaces
    • Suitable for younger audiences
    • Less formal but still structured
    • Presentation separate from AI drivers

 

Emerging Technology to Develop Learner Engagement and Increase Impact on Language Learning Outcomes

Geoff Stead, Cambridge Assessment English, UK

  • Cambridge English Beta
    • “Curious about how to shape our future products? Cambridge English Beta is the place to find out about our latest digital developments and get early access to trial versions of our English language learning products.”
  • Quiz Your English live challenge
    • Free game
    • 2.5m games played
    • 70k players
    • Top players play over 7k games per month
    • 70% of installed users drop off in first week
    • Features:
      • Social clues, people challenging you
      • Leaderboard
      • Next steps
  • The Digital Teacher
    • Resources to help you build your confidence and develop the skills you need to take your next step in digital language teaching.
  • Cambridge English MOOCs
    • 6 MOOCs, run 14 times
    • 132,000 active students
    • Partner with FutureLearn (UNESCO is also a partner)
    • Successes:
      • Lots of use of video clips of real teachers, real learners and real lessons
      • Tasks to accompany each video
      • Community of learners

Chris Cavey, British Council

  • British Council MOOCs
    • 13 MOOCs run 50 times
    • Again, using video for teaching
    • Lots of tasks, e.g. “What do you think of …?” and “Tell us about your day.”
    • Facebook Live sessions. Lots of discussion and interaction during MOOC. At end, send students to FB group. For BC their FB group has 200,000 users. Throughout, lots of social media interaction and sharing.
    • MOOCs do not assess language skills; they help prepare users for traditional assessments. It is a springboard and platform for peer sharing and learning. MOOCs can measure time on task, engagement level, etc.

 

UNV e-Campus

https://learning.unv.org/

  • Moodle-based.
  • Mandatory training (ethics, volunteering, etc.) as well as supplementary courses, e.g. language learning, life skills, business development (courses are bought by UNV for the users).
  • Engagement: Communities-of-practice, remote coaching, online chat events, webinars.

 

Myths And Facts About the Future of Schooling

Pasi Sahlberg, director general of the Centre for International Mobility in Helsinki, Finland

He compared …

Unsuccessful education policies (Global Education Reform Movement) – England, USA, Australia, Chile:

  • Competition
  • Standardisation
  • De-professionalisation (anyone can become a teacher, as long as you love children)
  • Test-based accountability
  • Market-based privatization

Successful education policies – Japan, Canada, Estonia, Finland:

  • Co-operation
  • Encourage risk-taking and creativity
  • Professionalism
  • Trust-based responsibility
  • Equitable public education for all

Next predicted indicator group:

  • Health and well-being of children, not only equity and excellence

 

Longevity, Learning, Technology

Abigail Trafford, author and leader in the movement to fight ageism, USA:

  • Not years at the end, but healthy decades in the middle.
  • 50s and 60s: second adolescence. What do I want to do?
  • We need infrastructure to help older people for their next career, e.g. training, learnerships, internships.
  • Wide open opportunity for new curriculum development and part-time learning and work.
  • Over 1/3 of US is over 50.
  • Brain is plastic, it keeps learning.
  • Older people are better at analysis and strategy; younger better at quick learning and short term memory.
  • A long way to go to mainstream education and learning across the lifespan.
  • Old think: 9-5 work until retirement.
  • New think: 24/7 production and services.
  • We all need to expose and fight ageism.

Six Digital Inclusion Takeaways – Your Weekend Long Reads

UNESCO, in partnership with Pearson, has released ten case studies of digital solutions that are inclusive for people with low skills and low literacy, helping them to participate in the knowledge society in innovative ways. Of interest to UNESCO and Pearson is how through technology use, users’ skills are developed and, ultimately, their livelihoods are improved.

The case studies, authored by Dr Nathan Castillo and myself, span sectors such as health, agriculture, the environment and civic participation. Each case study reveals how the inclusive digital solutions were designed with users, the skills needed to effectively use the solutions, the reach and result of usage and, most importantly, key lessons learned and recommendations. The case studies are rich in detail and make for stimulating reading.

After releasing all fourteen case studies – the last four coming at UNESCO Mobile Learning Week 2018 – UNESCO and Pearson will then develop a set of guidelines for more inclusive digital development. In the meantime, below are six takeaways that will hopefully inform your ICT4D journey to greater inclusion.

Skills Benchmarking is Important

A key argument of the UNESCO-Pearson work is that, while good examples of user-centred design exist, not enough attention is given to users’ digital skills and literacy, present and future. In addition to designing around users’ needs, benchmarking their capabilities means we can see users as learners and create solutions that suit them today, but also help them develop the skills to use a richer feature set tomorrow. More features equals more complex interactions, increased possibility for learning and deeper usage, and potential revenue for solution providers. Understanding user capabilities also means that the right training can be delivered. Benchmarking can happen through specific assessments and also by using international frameworks, such as DigComp2.1: The Digital Competence Framework for Citizens.

Medic Mobile is an integrated mobile system for improving maternal and neonatal health. While it operates in twenty-three countries, the case study focuses on the rural Nepal implementation. The community health workers (CHWs) — trusted members in the local human social network — that use the system on the ground have needed initial and ongoing training.

Medic Mobile routinely runs pre- and post-training skills tests. Post-test results from a training conducted with 500 CHWs revealed the strongest overall gains in the more complex mobile phone operations that CHWs initially struggled with most. There were 40–45 per cent gains in the ability to use SMS functions, including retrieving specific SMSs and accessing the phone’s inbox.

By benchmarking the users pre- and post-training, Medic Mobile is able to track development. It also informs their practice of pairing low-literate with higher-literate CHWs, so they can provide peer support to each other.

Basic Usage, Rich Data

The fact that end users are low-skilled and low-literate, and interface with appropriately simple solutions, doesn’t exclude the opportunity for data collection and complex analysis by solution providers. By tracking farmer usage of each of the Crop Specific Mobile Apps in rural India, the company behind them can identify in which districts farmers need to diversify their crops, where they are diversifying but need guidance, and where new disease outbreaks are likely happening. Such usage data can be sent to the cloud via SMS, if needed, to ensure collection in low-connectivity districts. The farmers thus become rich data sources for interventions triggered at a district or state level by government – and in the process create a potential revenue stream for the solution provider holding the analysed data.
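
As a purely illustrative sketch of that low-connectivity pathway, a handful of usage counters can be packed into a single 160-character text message and unpacked on the server in the same field order. The field names, ordering and separator below are my own invention, not the actual protocol used by these apps.

    # Illustrative only: pack a few usage counters into one 160-character SMS.
    usage = {"district": "D042", "app_opens": 37, "advisories_viewed": 21, "disease_reports": 2}

    payload = "|".join(str(v) for v in usage.values())   # e.g. "D042|37|21|2"
    assert len(payload) <= 160, "must fit in a single SMS"

    # Server side: split the payload and map the fields back in the agreed order.
    district, app_opens, advisories_viewed, disease_reports = payload.split("|")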

Users unwittingly informing digital interventions is not new: through liking or posting on Facebook, they inform the algorithms for targeted advertising. However, in this case the users are particularly low-literate, and such real-time data gathering has not been possible before. Previously, extension workers would be relied upon to gather local information, but the process would be slow.

Another example is Khushi Baby, a digital service in India that supports effective tracking of maternal and child healthcare data by CHWs – often low-literate and with low digital skills. Mothers are also users as they ensure their babies wear their medical records in the form of a digital necklace. As data is collected, it is aggregated and analysed for district-level decision-making related to health administration. Low-literate users are active participants in data generation for programmatic and policy interventions — in real time.

Each of the three user groups (mothers, CHWs and district officials) interfaces with appropriately designed technology: wearable necklaces, mobile data collection apps and web-based dashboards, respectively.

Let the Tech Help With Quality Control for Inclusion of Low-skilled and Low-literate Users

In some of the case studies low-skilled and low-literate users are active participants in mHealth support interventions. How do we know that they are not mistakenly doing harm? The tech helps.

hearScreen™ allows anyone with very limited training, the app and a headphone set to conduct hearing tests (in developing countries there is a dearth of trained professionals to ensure that all children receive such tests). By sending false positives to the person administering the test (the screener), and tracking whether he or she records these as legitimate responses from the patient, an individual screener quality index is created. The index acts as a measure for quality control, and system reports inform supervisors about screeners that need further training.
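
A quality index of this kind is straightforward to compute: insert a known set of deliberately inaudible (false-positive) probes into the test and count how often the screener records them as heard. The sketch below is my own reconstruction of the idea, not hearScreen’s actual algorithm.

    # Reconstruction of the idea behind a screener quality index, not hearScreen's code.
    # Each probe is a deliberately inaudible (false-positive) tone inserted into the test.
    def screener_quality_index(probes_wrongly_accepted):
        """probes_wrongly_accepted: list of booleans, True if the screener recorded
        the inaudible probe as a legitimate response from the patient."""
        if not probes_wrongly_accepted:
            return None
        error_rate = sum(probes_wrongly_accepted) / len(probes_wrongly_accepted)
        return 1.0 - error_rate  # 1.0 = no false positives accepted, 0.0 = all accepted

    # Example: 2 of 20 probes wrongly accepted gives an index of 0.90.
    print(screener_quality_index([True, True] + [False] * 18))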

The Chipatala cha pa Foni (CCPF) health information service, delivered in Malawi via a call centre and text messages, allows supervisors to monitor the quality of hotline operators. At least ten calls per operator are reviewed and scored and, if needed, an individualised improvement plan is developed.

Content (Testing) is King

In 1996 Bill Gates famously said: Content is king. (How about queen?!) At the time he wouldn’t have been thinking of low-skilled and low-literate users. And yet, for these groups, content is even more important than for others. It needs to be perfect: understandable, accessible, context-specific and, often, actionable. Tone, voice, perspective, message length and medium are all important.

In fact, he should have said, content testing is king. In almost every case study  there is a solid focus on ensuring that the delivered content is appropriate. The 3-2-1 Service by HNI and Viamo, which offers a range of audio and text content in fourteen countries, is based on rigorous and ongoing content testing. For HNI, an “a-ha” moment came when they realised their target audience in Zambia couldn’t read the health SMSs being sent. Illiteracy gave rise to the addition of the audio service.

Low-literate Users Can Also Be Content Creators

For people from the developed world the general picture of digital content creation is the teen producing YouTube videos, the amateur expert updating Wikipedia pages, or the teacher creating openly licensed interactive lessons for her class. But in rural Ghana or media-dark (read: internet- or radio-free) parts of India, the case studies reveal digital content creation in very different forms and by people with very low or no literacy.

In Ghana, the Talking Book audio device allows rural farmers to not only browse and listen to livelihoods content, but to record and share their own content. In India, Mobile Vaani is an audio-based community-media platform for offline populations, accessed and added to with even basic mobile phones for community mobilisation and social campaigns.

I have noted this before, eight years ago, when seeing low-literate teens in South Africa comment on mobile novels from their phones. What is interesting is how the case study users, like the teens, do not fit the traditional content creator persona.

Leverage Infomediaries and Build Local Capacity

Low-skilled and low-literate users, more than others, encounter and use technology with the help of intermediaries, or as ICTWorks calls them, infomediaries. MIRA Channel, which seeks to reduce maternal and child mortality in rural India, Afghanistan and Uganda, struggled with mothers’ limited experience of using mobile phones. Their target audience just didn’t have the necessary, even if simple, tech skills.

The adolescent children of the mothers, who generally had more experience in using mobile phones, were enlisted to assist in training and support when using MIRA Channel. In fact, as a result a health programme directed at adolescent girls was developed.

Nano Ganesh allows even low-literate farmers to remotely control their irrigation water pumps via mobile phone, saving water and electricity, and reducing soil erosion. The pump devices need to be installed and maintained — rural farmers and local technicians are trained for this purpose. The technicians provide on-the-ground support and earn wages in the process. They, in turn, are supported remotely via Skype and live video from the Nano Ganesh service centre, and via offline training videos. Digital support skills are embedded within the community.

Mobile Vaani has also grown through a model that is firmly community-based. Because the content is hyper-local, a network of local clubs with community reporters ensures that awareness raising, training, support and curation of user-generated content happens by and with the community.

Working through a human network seems to be the only way to genuinely win the trust of the local users, provide ongoing support and ensure communal ownership. Digital solutions serving low-literate and low-skilled populations cannot operate outside of the community. Indeed, you could argue that the success of m-PESA is not the tech, but rather its human agent network that registers and manages user activity.

Collectively the case studies hold many more insights, so dive in and start reading the 171-page pack.

Image: © ZMQ/Hilmi Quraishi of MIRA Channel