What do Netflix movie recommendations have in common with how government is making big decisions?

March 29, 2017

As people share more and more information online, public and private institutions increasingly rely on algorithms to analyze that information quickly, with the aim of making more efficient and better decisions. Alongside the rise of Big Data, policymakers should consider how this growing reliance on algorithms affects key policy objectives.

Many popular businesses, particularly those enabled by the rise of the digital economy, rely heavily on algorithms. For instance, Netflix uses algorithms to recommend movies and Tinder uses them to match potential romantic partners. In the case of Netflix, data is collected on factors ranging from specific viewing habits to the time of day, all of which can be fed into algorithms and used to develop recommendations for customers.
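To give a flavour of how such a recommendation might be computed, here is a minimal sketch of one common approach, collaborative filtering: suggest titles enjoyed by users whose viewing history resembles yours. The users, titles and ratings below are hypothetical, and real systems like Netflix's are far more sophisticated.

```python
# A toy collaborative-filtering recommender (all data hypothetical).
viewing = {
    "alice": {"Drama A": 5, "Thriller B": 4},
    "bob":   {"Drama A": 5, "Thriller B": 5, "Comedy C": 2},
    "carol": {"Comedy C": 5},
}

def similarity(a, b):
    """A crude similarity: how many titles two users have both watched."""
    return len(viewing[a].keys() & viewing[b].keys())

def recommend(user):
    """Suggest titles the most similar other user watched but this user hasn't."""
    others = [u for u in viewing if u != user]
    nearest = max(others, key=lambda other: similarity(user, other))
    return [title for title in viewing[nearest] if title not in viewing[user]]

print(recommend("alice"))  # ['Comedy C'] -- borrowed from bob's history
```

Even this toy version shows why inputs matter: change what counts as "similar" or which habits are recorded, and the recommendations change with them.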

During last year’s U.S. election, media headlines were suddenly alive with concerns over the power of algorithms to affect voter decisions. Social media platforms such as Facebook and Twitter were criticized for contributing to the polarization of the electorate by using algorithms to filter and serve up content to users based on their activity and preferences. According to critics, this exposed voters only to certain viewpoints and excluded others, narrowing the range of information accessible to users and potentially making it much easier to spread false or misleading information.

Algorithms can offer significant improvements in efficiency and optimization, but placing too much confidence in them also carries considerable risk. Algorithms are essentially a series of instructions for processing data in ways that will accomplish specific tasks. While these instructions can be as simple as sorting a series of numbers from smallest to largest, they can also get quite complex and opaque – especially when dealing with significant amounts of data. If data is not sufficiently reliable or if variables are not chosen carefully enough, important context can be left out or overlooked – which can have far-reaching effects.
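To make the simple end of that spectrum concrete, here is a short Python sketch of selection sort, one standard series of instructions for ordering numbers from smallest to largest:

```python
def selection_sort(numbers):
    """Sort a list of numbers from smallest to largest, in place."""
    for i in range(len(numbers)):
        # Find the position of the smallest value in the unsorted remainder...
        smallest = min(range(i, len(numbers)), key=numbers.__getitem__)
        # ...and swap it into place.
        numbers[i], numbers[smallest] = numbers[smallest], numbers[i]
    return numbers

print(selection_sort([42, 7, 19, 3]))  # [3, 7, 19, 42]
```

An algorithm making decisions about people follows the same basic logic, only with many more steps, and with inputs whose quality is much harder to verify.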

Emerging technologies that will have an impact on public policy will likely rely heavily on algorithms, such as the split-second decisions made by autonomous vehicles. And in a growing list of complex policy areas, including public health surveillance, prescription drug monitoring, law enforcement and immigration, governments across Canada are experimenting with algorithms to make decisions.

Policymakers need to think carefully about oversight to ensure that algorithms don’t – inadvertently or otherwise – allow for discrimination, inaccuracies or manipulation. Algorithms are often touted by governments for their innovative potential, but there has generally been less discussion by policymakers in Canada about how to mitigate negative impacts. Nevertheless, there are indications that interest may be growing in examining the effects of algorithms. For instance, the CRTC has been involved in consultations about how algorithms impact the way Canadian audiences connect with content. Ontario’s Office of the Information and Privacy Commissioner has also noted some privacy issues associated with Big Data and the use of algorithms.

Policymakers should now begin to think more about the impacts of an increased reliance on algorithms, both within and outside of government, including the ways they could undermine key policy objectives, particularly fairness and equity, transparency and accountability.

Fairness and equity

While the concept of using data-based algorithms to make decisions implies a certain level of objectivity, research has found that algorithms can discriminate and, in some cases, unintentionally reinforce human biases and prejudices. For instance, an investigation by ProPublica found that an algorithm used to predict the likelihood of reoffending, which in turn influences sentencing, was not only unreliable but also biased against black defendants compared to white defendants. Another study found that the ads associated with Google search results can often be biased and discriminatory, with searches for “racially associated” names more likely to be accompanied by ads that appear to link those names with criminal backgrounds.

Algorithms are only as effective as the data on which they rely, so biases in data collection can carry over into the results algorithms produce. Nevertheless, efforts are underway at companies like Google to develop systems that can identify biases in algorithms, and there are other indications that data used in algorithms can be tested to ensure the information isn’t discriminatory. In the U.S., the Association for Computing Machinery recently released a series of principles aimed at addressing potential biases that may emerge when deploying algorithms.
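A toy sketch, with entirely hypothetical data, makes the mechanism plain: if one group is over-represented among recorded reoffences, a naive "risk score" learned from those records simply reproduces the skew.

```python
# Hypothetical records: group B is over-represented among recorded
# reoffences (for example, because it was more heavily policed).
historical_records = [
    ("A", False), ("A", False), ("A", True), ("A", False),
    ("B", True), ("B", True), ("B", False), ("B", True),
]

def learned_risk_score(group):
    """Score = observed reoffence rate for the group in the records."""
    outcomes = [reoffended for g, reoffended in historical_records if g == group]
    return sum(outcomes) / len(outcomes)

for group in ("A", "B"):
    print(group, learned_risk_score(group))
# A 0.25
# B 0.75 -- the score faithfully reproduces the bias in data collection.
```

No step in the calculation is malicious; the discrimination enters entirely through what was recorded in the first place.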


Transparency

A lack of transparency can limit public trust in institutions. In the case of algorithms, transparency is critical to ensure they aren’t being misused, misapplied or manipulated. This is especially important because the data algorithms use may not always be reliable, and problematic practices may lie hidden within complex algorithms that are considered proprietary. For instance, one study found that Uber’s unwillingness to reveal how its algorithms determine pricing made it susceptible to manipulation (e.g., drivers working together to create surges). Meanwhile, Google’s algorithm for determining search results is a closely guarded secret with significant commercial value. This reluctance to share details means algorithms can often function as “black boxes.”

Governments have also been hesitant to provide information on the algorithms they use. In some cases, official requests to access information on specific algorithms have not resulted in disclosure. In the U.S., government responses to requests for specific information about algorithms were limited for a variety of reasons, ranging from a lack of understanding about what was being requested to outright denial on the grounds that algorithms did not qualify for public disclosure. However, some governments are more open about the algorithms they use. France, for example, has made the source code for calculating taxes and benefits publicly available. One suggestion for ensuring transparency without revealing proprietary information is review by an independent third party, comparable to the Office of the Privacy Commissioner of Canada.

Accountability

Algorithms are likely to play larger roles very soon, not only in government decision-making but in a myriad of daily human activities. However, there are currently few accountability mechanisms to identify potential problems and provide remedies when they arise.

We now know it’s possible for algorithms not only to discriminate, but also to produce completely incorrect results. An algorithm used by Centrelink, the Australian government agency that delivers welfare services, issued welfare recipients with erroneous debt notices. The algorithm reportedly averaged annual tax data evenly across the year, overlooking fluctuations in recipients’ actual incomes, and so flagged apparent overpayments that never occurred. There are several other examples where algorithms used by governments produced incorrect results. As a result, people have had drivers’ licences revoked, been denied disability benefits and been falsely identified as criminals.
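A hypothetical sketch shows how that kind of averaging can go wrong. Suppose someone earned all of their income in the first half of the year and received benefits only in the second half, when they earned nothing; smearing the annual figure evenly across every fortnight makes it look as though they had income while on benefits. All figures below, including the income threshold, are invented for illustration.

```python
FORTNIGHTS = 26
annual_income = 26_000                    # all earned before going on benefits
on_benefits = [False] * 13 + [True] * 13  # unemployed for the second half-year
income_threshold = 150                    # hypothetical allowed income per fortnight

# Naive check: smear the annual figure evenly over every fortnight.
averaged = annual_income / FORTNIGHTS     # 1000.0 per fortnight

false_flags = sum(
    1 for receiving in on_benefits
    if receiving and averaged > income_threshold
)
print(false_flags)  # 13 fortnights wrongly flagged, despite zero real income
```

The person’s true income in every benefit fortnight was zero, yet the averaged figure triggers a flag in all thirteen of them, which is exactly the kind of error behind the erroneous debt notices.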

High-profile algorithm failures, like the one that caused a major but brief stock market crash in 2010, have led to questions about who is responsible for ensuring algorithms work effectively. Such examples have also prompted suggestions to bring algorithms under existing transparency mechanisms, to subject them to audits, and calls for new regulation. Despite concerns about potentially hindering innovation, regulations are expected to be on the way in some countries, including the U.S.

Algorithms have enormous potential to improve daily life for consumers and citizens. But among the many technological developments that have arisen in recent decades, algorithms are particularly vulnerable to the flaws that underpin the very human inputs on which they depend. Like any system, what goes into them will determine the outcomes algorithms produce. Or, as engineers like to say, “garbage in, garbage out.”

Author

Sara Ditta
