Algorithmic Accountability is Possible in ICT4D

As we saw recently, when it comes to big data for public services there needs to be algorithmic accountability. People need to understand not only what data is being used, but also what analysis is being performed on it and for what purpose.

Further, complementing big data with thick, adjacent and lean data helps to tell a more complete analytical story. Those posts piqued much interest, so this third and final instalment on data offers a social-welfare case study of how to be transparent with algorithms.

A Predictive Data Tool for a Big Problem

The Allegheny County Department of Human Services (DHS) in Pennsylvania, USA, screens calls about the welfare of local children. The DHS receives around 15,000 calls per year for a county of 1.2 million people. With limited resources to handle this volume, limited data to work with, and each decision a tough and important one to make, it is critical to prioritize the highest-need cases for investigation.

To help, the Allegheny Family Screening Tool was developed. It’s a predictive-risk modeling algorithm built to make better use of data already available in order to help improve decision-making by social workers.

For each call, the tool draws on a number of different data sources – including databases from local housing authorities, the criminal justice system and local school districts – to produce a Family Screening Score. The score is a prediction of the likelihood of future abuse.

The tool is there to help analyse and connect a large number of data points to better inform human decisions. Importantly, the algorithm doesn’t replace clinical judgement by social workers – except when the score is at the highest levels, in which case the call must be investigated.
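
To make that decision flow concrete, here is a minimal sketch in Python. Everything in it – the feature names, weights, the 1–20 scale and the cut-off – is a hypothetical illustration, not the county’s actual model, which is described in the public paper on the tool.

```python
# Illustrative sketch of a predictive-risk screening score.
# Feature names, weights, scale and threshold are hypothetical --
# the real Allegheny Family Screening Tool is documented in the
# county's public methodology paper and validation study.
from dataclasses import dataclass

@dataclass
class CallFeatures:
    """Data points joined from county systems for one screened call."""
    prior_referrals: int        # e.g. from DHS case records
    housing_instability: bool   # e.g. from housing authority databases
    justice_involvement: bool   # e.g. from criminal justice records
    school_absences: int        # e.g. from school district data

def screening_score(f: CallFeatures) -> int:
    """Map the joined data points to a single 1-20 risk score."""
    raw = (2 * f.prior_referrals
           + (4 if f.housing_instability else 0)
           + (4 if f.justice_involvement else 0)
           + f.school_absences // 5)
    return max(1, min(20, raw))

MANDATORY_SCREEN_IN = 18  # hypothetical cut-off for mandatory investigation

def decision(f: CallFeatures, worker_judgement: str) -> str:
    """The score informs, but does not replace, clinical judgement --
    except at the highest levels, where investigation is mandatory."""
    score = screening_score(f)
    if score >= MANDATORY_SCREEN_IN:
        return f"investigate (mandatory, score={score})"
    return f"{worker_judgement} (advisory score={score})"
```

The point of the sketch is the flow, not the arithmetic: data joined across systems produces one advisory score, and only at the top of the scale does the score override the social worker’s call.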

As the New York Times reports, before the tool 48% of the lowest-risk families were being flagged for investigation, while 27% of the highest-risk families were not. At best, decisions like this put an unnecessary strain on limited resources and, at worst, result in severe child abuse.

How to Be Algorithmically Accountable

Given the sensitivity of screening child welfare calls, the system had to be as robust and transparent as possible. Mozilla reports the ways in which the tool was designed, over multiple years, to achieve this:

  • A rigorous public procurement process.
  • A public paper describing all data going into the algorithm.
  • Public meetings to explain the tool, where community members could ask questions, provide input and influence the process. Professor Rhema Vaithianathan is the rock star data storyteller on the project.
  • An independent ethical review of implementing, or failing to implement, a tool such as this.
  • A validation study.

The algorithm is open to scrutiny, owned by the county and constantly being reviewed for improvement. According to the Wall Street Journal, the trailblazing approach and the tech are being watched with much interest by other counties.

It Takes Extreme Transparency

It takes boldness to build and use a tool in this way. Erin Dalton, a deputy director of the county’s DHS and leader of its data-analysis department, says that “nobody else is willing to be this transparent.” The exercise is obviously an expensive and time-consuming one, but it’s possible.

During recent discussions on AI at the World Bank, the point was raised that because some big data analysis methods are opaque, policymakers may need a lot of convincing to use them. They may also be afraid of the media fallout when algorithms get it badly wrong.

It’s not just the opaqueness; the whole data chain is complex. In education, Michael Trucano of the World Bank asks: “What is the net impact on transparency within an education system when we advocate for open data but then analyze these data (and make related decisions) with the aid of ‘closed’ algorithms?”

In short, it’s complicated and it’s sensitive. A lot of convincing is needed for those at the top, and at the bottom. But, as Allegheny County DHS has shown, it’s possible. For ICT4D, their tool demonstrates that public-service algorithms can be developed ethically, openly and with the community.

Stanford University is currently examining the impact of the tool on the accuracy of decisions, overall referral rates and workload, and more. Like many others, we should keep a close watch on this case.

3 Data Types Every ICT4D Organization Needs – Your Weekend Long Reads

After five years researching the effectiveness of non-profit organizations (NPOs) in the USA, Stanford University lecturer Kathleen Kelly Janus found that while 75% of NPOs collect data, only 6% feel they are using it effectively. (Just to be clear, these were not all tech organizations.)

She suggests the reason is that they don’t have a data culture. In other words, they need to cultivate “a deep, organization-wide comfort level with using metrics to maximize social impact.” Or, in ICT4D speak, they need to be data-driven.

Perhaps NPOs feel that if they start collecting, analysing and using big data, that need will be satisfied. But one cloud server of big data does not a data culture make. While big data can be a powerful tool for development, there are three other data types that could significantly improve the impact of any ICT4D intervention.

Thick data

Technology ethnographer Tricia Wang warns us about the dangers of looking only to big data for answers – of trusting only large sets of quantitative data, without a human perspective. She proposes that big data must be supplemented with “thick data,” which is qualitative data gathered by spending time with people.

Big data excels at quantifying very specific environments – like delivery logistics or genetic code – and doing so at scale. But humans are complex and so are the changing contexts in which they live (especially true for ICT4D constituents). Big data can miss the nuances of the human factor and portray an incomplete picture.

As a real-life example, in 2009 Wang joined Nokia to try to understand the mobile phone market in China. She observed, talked to, and lived amongst low-income people and quickly realised that – despite their financial constraints – they were aspiring to own a smartphone. Some of them would spend half of their monthly income to buy one.

But the sample was small, the data not big, and Nokia was not convinced. Nokia’s own big data was not telling the full story – it was missing thick data, which led to catastrophic consequences for the company.

Adjacent data

Sometimes there is value in overlaying data from other sources onto your own to provide new insights. Let’s call this “adjacent data”. Janus provides the case of Row New York, an organization that pairs rigorous athletic training with tutoring and other academic support to empower youth from under-resourced communities.

To measure success, Row started by tracking metrics like the number of participants, growth, and fitness levels. But how could they track determination or “grit” – attributes of resilient people?

They started recording both attendance and daily weather conditions to show which students were still showing up to row even when it was 4°C and raining. “Those indicators of grit tracked with students who were demonstrating academic and life success, proving that [Row’s] intervention was improving those students’ outcomes.”
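
As a rough sketch of how such an overlay could work, the Python example below joins attendance records with daily weather to count each student’s “bad-weather” sessions. The record formats, names and thresholds are assumptions for illustration, not Row New York’s actual pipeline.

```python
# Illustrative sketch: overlaying "adjacent" weather data onto
# attendance records to surface a grit indicator. Field names,
# dates and thresholds are hypothetical.
from collections import defaultdict

attendance = [  # (student, date) pairs from the register
    ("amara", "2018-03-02"), ("amara", "2018-03-09"), ("ben", "2018-03-09"),
]
weather = {  # date -> (temperature in C, was it raining?) from a weather feed
    "2018-03-02": (4.0, True),
    "2018-03-09": (18.0, False),
}

def grit_counts(attendance, weather, max_temp_c=5.0):
    """Count sessions each student attended in cold or rainy conditions."""
    counts = defaultdict(int)
    for student, date in attendance:
        temp_c, raining = weather[date]
        if temp_c <= max_temp_c or raining:
            counts[student] += 1
    return dict(counts)

print(grit_counts(attendance, weather))  # {'amara': 1}
```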

Pinpointing adjacent data requires thinking outside the box. Maybe reading Malcolm Gladwell or Freakonomics will provide creative inspiration for finding those hidden data connectors.

Lean data

Lastly, there is a real risk in hoovering up every possible data point in the hope that answers about increased impact and operational efficiency will emerge. The issue is not only the data security and privacy risks of that sponge approach; it’s that it is easy to drown in data.

Most ICT4D initiatives don’t have the technology or the people to meaningfully process it all. Too much data can overwhelm rather than reveal insights. The challenge is gathering just enough data – just the data we need. Let’s call this “lean data”. When it comes to data, more is not better; just right is better. In fact, big data can be lean: it’s not about quantity but about selectiveness.

Lean data is defined by the goals of the initiative and its success metrics. Measure enough to meet those needs. When I was head of mobile at Pearson South Africa’s Innovation Lab, we were developing an assessment app for high school learners called X-kit Achieve Mobile.

With the team we brainstormed the data we needed to serve our goals and those of the student and teacher users. We threw in quite a lot of extra bits based on “Hmm, that would be cool to know, let’s put it in a dashboard.”

The company was also preparing to report publicly on its educational impact, so certain data points were being collected by all digital products. Having a common data dictionary and reporting matrix is something worth considering if you’re implementing more than one product.
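
As an illustration of that idea, a shared data dictionary can be as simple as a mapping from each collected field to the goal and report it serves – any field that cannot point to a goal is a candidate for cutting. The fields below are hypothetical, not Pearson’s actual dictionary.

```python
# Minimal sketch of a shared data dictionary: every field collected
# must justify itself against a goal and a report. Entries are hypothetical.
DATA_DICTIONARY = {
    "quiz_score":       {"goal": "improve learner mastery",
                         "report": "teacher dashboard"},
    "time_on_task_sec": {"goal": "company-wide impact reporting",
                         "report": "annual efficacy report"},
    "device_model":     {"goal": None,   # "cool to know" -- no goal behind it
                         "report": None},
}

def lean_fields(dictionary):
    """Keep only the fields that serve a stated goal."""
    return [name for name, meta in dictionary.items() if meta["goal"]]

print(lean_fields(DATA_DICTIONARY))  # ['quiz_score', 'time_on_task_sec']
```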

After building the app, we only really used about 20% of all the reports and dashboards. Only as we iterated did we discover new reports that we actually needed. The fact is that data is seductive; it brings out the hoarder in all of us. We should resist and take only what we need.

So, perhaps the path to building a data culture is to always have thick data, be creative about using adjacent data, and keep all data lean.

Image: CC by janholmquist