Every Big Data Algorithm Needs a Storyteller – Your Weekend Long Reads

The use of big data by public institutions is increasingly shaping people's lives. In the USA, algorithms influence the criminal justice system through risk assessment and predictive policing, drive energy allocation, and are changing the educational system through new teacher evaluation tools.

The belief is that the data knows best, that you can’t argue with the math, and that the algorithms ensure the work of public agencies is more efficient and effective. And, often, we simply have to maintain this trust because nobody can examine the algorithms.

But what happens when – not if – the data works against us? What is the consequence of the algorithms being "black boxed" and kept outside of public scrutiny? This raises two implications for ICT4D.

The Data Don’t Lie, Right?

Cathy O'Neil, a data scientist with a Harvard PhD in mathematics, says that clever marketing has tricked us into being intimidated by algorithms – into trusting and fearing algorithms simply because, in general, we trust and fear math.

O'Neil's 2016 book, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, shows how, when big data goes wrong, teachers lose jobs, women don't get promoted and global financial systems crash. Her key message: the era of blind faith in big data must end, and the black boxes must be opened.

Demand Algorithmic Accountability

It is very interesting, then, that New York City has a new law on the books to do just that and demand "algorithmic accountability" (presumably drawing on the Web Foundation's report of the same name). According to MIT Technology Review, the city's council passed America's first bill to ban algorithmic discrimination in city government. The bill calls for a task force to study how city agencies use algorithms and to produce a report on how to make those algorithms more easily understandable to the public.

AI Now, a research institute at New York University focused on the social impact of AI, has offered a framework centered on what it calls Algorithmic Impact Assessments. Essentially, this calls for greater openness around algorithms, strengthening of agencies’ capacities to evaluate the systems they procure, and increased public opportunity to dispute the numbers and the math behind them.

Data Storytellers

So, what does this mean for ICT4D? Two things, based on our commitment to being transparent and accountable for the data we collect. Firstly, organisations that mine big data need to become interpreters of their algorithms. Someone on the data science team needs to be able to explain the math to the public.

Back in 2014 the UN Secretary General proposed that “communities of ‘information intermediaries’ should be fostered to develop new tools that can translate raw data into information for a broader constituency of non-technical potential users and enable citizens and other data users to provide feedback.” You’ve noticed the increase in jobs for data scientists and data visualisation designers, right?

But it goes beyond that. With every report and outcome that draws on big data, there needs to be a “how we got here” explanation. Not just making the data understandable, but the story behind that data. Maybe the data visualiser does this, but maybe there’s a new role of data storyteller in the making.

The UN Global Pulse principle says we should “design, carry out, report and document our activities with adequate accuracy and openness.” At the same time, Forbes says data storytelling is an essential skill. There is clearly a connection here. Design and UI thinking will be needed to make sure the heavy lifting behind the data scenes can be easily explained, like you would to your grandmother. Is this an impossible ask? Well, the alternative is simply not an option anymore.

Data Activists

Secondly, organisations that use someone else's big data analysis – like many ICT4D orgs these days – need to take an activist approach. They need to ask where the data comes from, what steps were taken to audit it for inherent bias, and what "secret sauce" went into the analysis. We need to demand algorithmic accountability because we are creators and arbiters of big data.

The issue extends beyond protecting user data and privacy, important as this is. It relates to transparency and comprehension. Now is the time, before it’s too late, to lay down the practices that ensure we all know how big data gets cooked up.

Image: CC by kris krüg


In ICT4D We’re Principled, But Are We Practiced Enough? – Your Weekend Long Reads

Last month CO.DESIGN published the 10 New Principles of Good Design (thanks Airbel Center for the link). The article, which is based on a set of industrial design principles from the 1970s, makes for important reading.

According to the author, and concerning commercial digital solutions, 2017 was "a year of reckoning for the design community. UX became a weapon, AI posed new challenges, and debate erupted over once rock-solid design paradigms." What is most interesting (and wonderful to boot) is that we, the ICT4D community, have endorsed many of these "new" principles for years.

Good Design is Transparent

For example, the article calls for transparency in design. Apparently today, “amid a string of high-profile data breaches and opaque algorithms that threaten the very bedrock of democracy, consumers have grown wary of slick interfaces that hide their inner workings.”

We know that user-centered design is participatory and that we should expose the important parts of digital solutions to our users. We believe in telling our users what we’ll do with their data.

Good Design Considers Broad Consequences and is Mindful of Systems

The article warns that in focusing on the immediate needs of users, user-friendly design often fails to consider long-term consequences. “Take Facebook’s echo chamber, Airbnb’s deleterious impact on affordable housing,” as examples. Not for us: we understand the existing ecosystem, are conscious of long-term consequences and design for sustainability.

A Little History Lesson

Today we have principles for sectors — such as refugees, health and government (US or UK version?); for cross-cutting themes — such as identity, gender and mobile money; for research; and the granddaddy of them all, for digital development.

These principles have been developed over a long time. Fifteen years ago I wrote a literature survey on the best practices of ICT4D projects. It was based on the work of the research pioneer of the time, Bridges.org, drawing on a range of projects from the early 2000s.

As documented in my paper, Bridges.org put forward seven habits of highly effective ICT-enabled development initiatives. By 2007 the list had grown to 12 habits — many of which don't look that different from today's principles.

Do We Practice What We Preach?

But if these principles are not new to us, are we practicing them enough? Don’t get me wrong, the ICT4D community has come a long way in enlisting tech for social good, and the lessons — many learned the hard way — have matured our various guidelines and recommendations. But should we be further down the line by now?

The principles mostly outline what we should do, and some work has been done on the how side, to help us move from principles to practice. But I think we need to do more to unpack the "why don't we" aspect.

Consider this data point from a recent Brookings Institution report, Can We Leapfrog? The Potential of Education Innovations to Rapidly Accelerate Progress (more on this report in a future post). Brookings analysed almost 3,000 education innovations around the world (not all tech-based, just so you know) and found that:

… only 16 percent of cataloged interventions regularly use data to drive learning and program outcomes. In fact, most innovations share no information about their data practices.

We know that we should be data-driven and share our practices. So what is going on here? Do the project managers behind these interventions not know that they should do these things? Do they not have the capacity in their teams? Do they not want to because they believe it exposes their non-compliance with such principles? Or perhaps they feel data is their competitive edge and they should hide their practices?

Time for ‘Fess Faires?

Fail faires are an excellent way to share what we tried and what didn't work. But what about 'Fess Faires, where we confess why we can't or (shock horror) won't follow certain principles? Maybe it's not our fault, like funding cycles that ICT4D startups can't survive. But maybe we should be honest and say we won't collaborate because the funding pie is too small.

If fail faires are more concerned with operational issues, then ‘fess faires look at structural barriers. We need to ask these big questions in safe spaces. Many ICT4D interventions are concerned with behavior change. If we’re to change our own behavior we need to be open about why we do or don’t do things.

Good Design is Honest

So, on the one hand we really can pat ourselves on the back. We’ve had good design principles for almost twenty years. The level of adherence to them has increased, and they have matured over time.

On the other hand, there is still much work to be done. We need to deeply interrogate why we don’t always practice our principles, honestly and openly. Only in this way will we really pursue a key new principle: good design is honest.