In Tech We Trust?

Stripe Partners
9 min read · Aug 6, 2020


It has often been said that technology, particularly emerging technologies such as AI, will be inhibited by a lack of public trust. As one example, Roger Taylor, chair of the UK’s Centre for Data Ethics and Innovation, has stated that low levels of trust are a barrier to the effective use of AI tools. Failure to address this, he said, would make it harder to operate an effective, free, and open society.¹

We might, however, question the assumption that it is a good thing for people to adopt these emerging technologies at all. In response, I point to the 2019 report by the European Commission’s High-Level Expert Group on AI (AI HLEG), ‘Ethics Guidelines for Trustworthy Artificial Intelligence’.² The report argues that AI can aid human flourishing and help achieve the UN’s Sustainable Development Goals. The Center for Data Innovation has likewise argued that AI can “boost competitiveness, increase productivity, protect national security, and help solve societal challenges”.³ Yet despite these benefits, the AI HLEG believes that it is the issue of trust that will most challenge the adoption of AI.

Trust is firmly on the agenda, but the question is: trust in whom? And if there is a trust crisis, what is to be done about it?

Before answering these questions, I will first explore the conditions in which people invest their trust.

What are the conditions for trust?

The Edelman Trust Barometer is an online survey that measures trust in institutions (business, government, media, and NGOs) across 28 markets. Its 2020 report found that:

“People today grant their trust based on two distinct attributes: competence (delivering on promises) and ethical behavior (doing the right thing and working to improve society).”

This can serve as a framework for understanding when people grant trust, at least to institutions.

An institution is trusted when:

(1) it is competent, and

(2) it behaves ethically.

With this in mind, I will now turn to the first question I raised in the introduction: trust in whom?

It is the system the public does not trust

The problem is not a lack of trust in tech companies. In fact, for the past eight years of the Edelman Trust Barometer, technology has come out on top as the industry most trusted “to do what is right” across 23 markets.⁴

Furthermore, most Americans surveyed by The Verge in 2019 said they would trust the tech companies responsible for building and deploying AI with their data, with the notable exception of Facebook (which we can plausibly put down to Cambridge Analytica⁵). Between 69% and 75% of respondents trust Google, Apple, Amazon, and Microsoft with their information.

If trust in tech companies is high (or at least higher than we might expect), we should look elsewhere. And when we do, it appears that the trust deficit lies with governments and other institutions.

In the 2020 Edelman report, across 26 markets, the UK is second only to Russia for the lowest trust in NGOs, businesses, government, and the media “to do what is right”. Moreover, trust is in short supply across all markets surveyed.

“This year’s Trust Barometer reveals that none of the four institutions is seen as both competent and ethical. Business ranks highest in competence, holding a massive 54-point edge over government as an institution that is good at what it does (64 percent vs. 10 percent). NGOs lead on ethical behavior over government (a 31-point gap) and business (a 25-point gap). Government and media are perceived as both incompetent and unethical.”⁶

Why is trust in governments so low? Or, to narrow the domain of inquiry, why are they not trusted with regard to emerging technologies?

The public lacks confidence in government competence

The public, at least in the UK and USA, does not view its government’s ability to understand and regulate technology favourably. One reason is a fundamental concern about whether policymakers even understand technology: the Edelman Barometer found that 72% of the UK public believe the government does not understand emerging technologies well enough to regulate them effectively.⁷

Cast your mind back to 2018, when Facebook was due to get a congressional lambasting. Instead, Zuckerberg faced a barrage of confused and embarrassing questions.⁸ (My favourite: “Is Twitter the same as what you do?”; runner-up: “How do you sustain a business model in which users don’t pay for your service?”.) This was an episode on the world stage that left both parties, Facebook and the US government, worse off.

The Cambridge Analytica scandal meant that, for many people, Facebook failed #2 of the trust framework above, and it is now the least trusted tech company as a result. In its response, the US government failed #1 by revealing its own naivety and incompetence regarding the subject matter.

That brings us to the answer to my first question: if governments are not trusted to regulate technology and to create the frameworks in which the industry operates, perhaps that is where the problem lies.

Lack of trust in the system is reflected in a lack of trust in technology

Consider the Covid-19 tracing app that was due to be launched by the UK government (under the auspices of NHSX) before being unceremoniously axed in favour of Google and Apple’s alternative. Aside from the technical challenges that beset the app’s development, there was an even bigger issue of trust and uptake.

For the app to work effectively, it would have required about 60% of the UK’s population to download and use it.⁹ However, the public already had doubts about the government’s competence to develop such technology.

“[N]early half (48%) of the UK public surveyed about the NHSX COVID-19 tracing app do not trust the UK government to keep their information safe from hackers.”¹⁰

More concerning still, a survey reported by The Conversation revealed that 60% of people believed their data might be used for purposes other than tracing COVID-19. In other words, the government’s ethical behaviour was in question.

We can easily imagine some of the fears that sections of the public may have about the centralised collection of their data and its subsequent sharing with other departments. For example, immigrants or refugees might fear that their location history would be accessed by the Home Office and prejudice their naturalisation or asylum case. Benefit claimants or the self-employed may fear how their data would be used against them, and whether they would be caught out, rightly or wrongly, by the Department for Work and Pensions.

Concerns about the state collecting huge swathes of personal information, particularly sensitive location history, combined with a lack of faith in the state’s motives, create a recipe for distrust.

This shows that even before the app was developed, the government was failing both #1 and #2. UK citizens did not believe that the government would be competent or ethical in its efforts. The efficacy of such an app was always going to be questionable when such a significant proportion of people felt that using it would carry such high risks. As the Ada Lovelace Institute argued in its ‘Exit through the App Store?’ report:

“The effectiveness of a digital contact tracing app will be contingent on widespread public trust and confidence, which must translate into broad adoption of the app.”

What this example shows is that attitudes towards the state (and its motives or competency) are reflected in the adoption rate of technology. When people don’t trust the state, or the system, they won’t trust the technology either.

Or, as Emmeline Taylor et al. put it:

“When the public trusts authorities, their concerns about privacy are mitigated. They can feel reassured that new technologies, laws and powers will be used in the correct way and not be abused.”¹¹

Here we have a crisis of trust, and I will now answer my second question: what is to be done about it?

Building trust in the NHSX app

Since the UK government did not satisfy #1, and because time was of the essence (ruling out any solution that depended on building trust in its technological competence over time), it should have leveraged existing solutions from suppliers that are generally considered competent.

Indeed, across Europe, this is exactly what happened, as nations turned to the tech giants, who touted a decentralised, privacy-first approach to the app’s data.¹² This approach has helped tech companies satisfy #2: people can see that they are going to great lengths to behave ethically and to allay fears of surveillance, in stark contrast to the UK government’s initial, centralised design.
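To make the contrast concrete, here is a minimal sketch of the decentralised matching idea, written in the spirit of the Google/Apple approach rather than as the real protocol. The function names, key sizes, rotation schedule, and HMAC-based derivation below are simplifying assumptions for illustration. The architectural point it demonstrates is that broadcast identifiers are derived from a key that stays on the phone, and exposure matching happens on the device, so no contact graph or location history is ever uploaded to a central server.

```python
import hashlib
import hmac
import secrets

def daily_key() -> bytes:
    # Each phone generates a random daily key that never leaves the device
    # unless its owner tests positive and chooses to share it.
    return secrets.token_bytes(16)

def rolling_ids(key: bytes, intervals: int = 144) -> list[bytes]:
    # Derive short-lived broadcast identifiers from the daily key
    # (144 ten-minute intervals per day is an assumption for this sketch).
    # Observers cannot link the IDs back to the key or to each other.
    return [hmac.new(key, i.to_bytes(4, "big"), hashlib.sha256).digest()[:16]
            for i in range(intervals)]

# Phone A broadcasts its rolling IDs over Bluetooth; Phone B records what it hears.
key_a = daily_key()
ids_a = rolling_ids(key_a)
heard_by_b = set(ids_a[40:50])  # B was near A for roughly 100 minutes

# If A tests positive, only A's daily key is published by the health authority.
published_diagnosis_keys = [key_a]

# Matching happens entirely on B's device: re-derive the IDs from the
# published keys and intersect them with the locally stored observations.
exposed = any(set(rolling_ids(k)) & heard_by_b
              for k in published_diagnosis_keys)
print("Exposure detected:", exposed)  # True
```

The design choice this illustrates is exactly the one that mattered for trust: in the decentralised model, the server only ever learns the diagnosis keys of people who volunteer them, never who met whom.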

There are several measures the UK government could have implemented to satisfy #2. The Ada Lovelace Institute outlined one:

“Government should support and foster public trust in symptom tracking efforts by strengthening the governance landscape in which they are being deployed.”¹³

First, they recommended establishing an expert advisory group to advise on and oversee the implementation of the app (which would also have helped satisfy #1). Second, they proposed an independent oversight mechanism to allow scrutiny of policy formulation.

Furthermore, evidence suggests that the public would respond well to greater regulation and oversight. Research by the Nuffield Foundation and the University of Sheffield found that people believe “better communication and the existence of safeguards, accountability and transparency would make organisations more trustworthy”.¹⁴

This combination of regulation, oversight, accountability, and safeguards would have allowed the government to satisfy #2.

Returning to the topic of emerging technologies in general, the idea that trust can be built through a stronger governance landscape is also shared by Ashley Casovan, Executive Director of AI Global and previously Director of Data and Digital for the Government of Canada. She has argued that to build society’s trust in AI tools, we need to make sure these tools have known and accepted guardrails around them, providing clarity on the criteria and parameters involved in designing these systems.¹⁵

It is in tech companies’ interests to foster trust in the system

While people may have few problems with a technology itself, they can have concerns about wider social problems. This can, in turn, depress the adoption of technologies, as the failure of the NHSX tracing app clearly demonstrated. So it is in the interest of tech companies to help foster trust in the state and the wider system.

We should focus less on building trust in technology and spend our efforts on building trust in society and systems.

If we accept that uptake of emerging technologies will be inhibited by a lack of trust, and that mistrust is directed not towards tech companies but towards the wider system in which they operate, then it appears to be in the interest of those developing emerging technologies to foster trust in that wider system.

If the best way to build this trust is by strengthening the governance landscape, expanding regulation, and increasing oversight and accountability, then tech companies have a strong reason to support these measures. In fact, it is in their ultimate interest to assist in this process and to ensure that regulation has teeth. By abiding by the spirit and letter of the law, and accepting penalties when they don’t, they will foster public trust in the systems on which their continued success depends.

As the case of the Covid-19 tracing app has also shown, tech companies should continue to advise government on best practice. The tech sector and government need to hold each other to higher standards, so that the public believes both are competent and ethical. This will ultimately build trust in technology, the state, institutions, and the system.

// Morgan Williams

¹ ‘CDEI: Trust in Technology: Keeping the flame alive’ 09/06/20 @ CogX

² https://www.aithics.co/post/ethics-guidelines-for-trustworthy-ai

³ https://www.datainnovation.org/2019/08/who-is-winning-the-ai-race-china-the-eu-or-the-united-states/

⁴ https://www.edelman.com/sites/g/files/aatuss191/files/2020-01/2020%20Edelman%20Trust%20Barometer%20Global%20Report.pdf

⁵ https://www.nbcnews.com/business/consumer/trust-facebook-has-dropped-51-percent-cambridge-analytica-scandal-n867011

⁶ https://www.edelman.com/trustbarometer

⁷ https://www.edelman.com/trustbarometer

⁸ https://www.vox.com/policy-and-politics/2018/4/10/17222062/mark-zuckerberg-testimony-graham-facebook-regulations

⁹ https://www.bdi.ox.ac.uk/news/digital-contact-tracing-can-slow-or-even-stop-coronavirus-transmission-and-ease-us-out-of-lockdown

¹⁰ https://www.globenewswire.com/news-release/2020/05/27/2039120/0/en/UK-Fears-Cybercriminals-Will-Use-NHSX-COVID-19-Tracing-App-to-Launch-Cyber-Attacks.html

¹¹ https://theconversation.com/coronavirus-survey-reveals-what-the-public-wants-from-a-contact-tracing-app-138574

¹² https://www.politico.eu/article/google-apple-coronavirus-app-privacy-uk-france-germany/

¹³ https://www.adalovelaceinstitute.org/exit-through-the-app-store-how-the-uk-government-should-use-technology-to-transition-from-the-covid-19-global-public-health-crisis/

¹⁴ https://livingwithdata.org/project/wp-content/uploads/2020/05/living-with-data-2020-summary-review-of-existing-research.pdf

¹⁵ ‘CDEI: Trust in Technology: Keeping the flame alive’ 09/06/20 @ CogX



Written by Stripe Partners

We work with businesses to give them the know-how they need to identify opportunities and make decisions. Know-how to invent the future. stripepartners.com
