Striking a Balance: What Tech Companies Must Do To Win Back Their Customers’ Trust
AI ethics and the nature of trust in an age of information explosion
Experts urge tech companies to balance the power of AI and other disruptive technologies with responsible and ethical growth.
As the digital age unfolds and the pace of change quickens, cynicism has been growing in the public sphere about the true benefits of technology and the practices and motives of technology companies. Fujitsu addressed this weighty issue in a recent installment of its Executive Discussion Evening series, titled, “Are We Running Out of Trust? Power and Responsibility in a Digital Age,” held in March 2019 at the Queen Elizabeth II Centre in London.
The event featured two guest speakers: William Tunstall-Pedoe, founder of UK voice-assistant company Evi, which provided much of the artificial intelligence technology for Amazon’s Alexa; and Margaret Heffernan, entrepreneur, CEO and author of the well-known book, Wilful Blindness: Why We Ignore the Obvious at Our Peril. The event also featured a presentation by David Gentle, Director of Foresight and Planning at Fujitsu, who served as host and moderator for the evening.
Duncan Tait, Fujitsu’s Director and CEO of EMEIA, kicked off the evening by noting that trust is such an important topic because it is interwoven into the fabric of society, government and business. He then discussed Fujitsu’s own efforts to build and maintain trust, especially among employees, and urged other companies to do the same: “It’s so important that we have brilliant people in our companies, and that we can inspire them to do things they have never done before. But this is only possible when you create a culture of trust within the organization.”
A Question of Trust in an Uncertain, Complex World - David Gentle
Regarding the topic for this evening, really what we’re asking is: Do we have that balance right between power and responsibility? And how is technology influencing this? Perhaps a good place to start is the topic of trust. Clearly trust is a very important thing. All of our personal relationships are built around trust. Certainly in business, you can’t really get anywhere without trust. So why are we choosing this topic today? A core reason is simply the level of change that we’re seeing in the world today.
We passed a milestone last year with little fanfare. Over 50% of the world’s population is now online. So now being connected, being digital has become the norm. And being offline, being non-digital has become the exception. I think it’s noteworthy how immersive the online world is becoming, how deeply engaged we’re becoming with it. In the Western world, we’re spending a third of our waking time online.
And maybe even more significant is the sort of commercial real estate that has opened up in the digital world. We’re seeing rapidly advancing digital companies. It’s nothing new to have organizations with hundreds of millions of customers. What really is new is the speed and scale at which digital companies have been advancing. Coupled with the depth of knowledge they have about their customers and users, this is something genuinely new.
It feels like we’re at an inflection point. It feels like we’re moving into act 2. Act 1 was all about exciting new things coming along. Act 2 is all about implications and complications.
So let’s look at some of the implications. We’ve got a piece of research coming out in which 50 to 100 business leaders were surveyed: 70% said they find it difficult to know whether information is trustworthy, and 72% said they have concerns about their personal data being monetized. As trust expert Rachel Botsman said: “Trust is a confident relationship with the unknown.” If we accept that the world is becoming more uncertain, more complex, then what does that mean in terms of our level of confidence? And what, in turn, does that mean for our level of trust? And if we turn that around, how can we build confidence in this more complex world? And how can we get a better level of trust?
Huge Changes Altering Nature of Trust - William Tunstall-Pedoe
We’re here to talk about what’s new. Trust has been a constant throughout human history, but what is changing the nature of trust? There are many drivers. One is the explosion of information. We are now bombarded with information in a way that we have never been in human history. As was mentioned, we’re online a large chunk of our time. A lot of our interactions with businesses, with other people are now virtual ones.
Another huge change is the amount of data that we’re exposed to. In the last two or three years, the human race has generated more data than in all of its previous history. And so things are happening at scale, but we don’t necessarily have technology that can tell us whether information is valid or good or truthful.
And the advance of technology is patchy, and the beneficiaries of it are also changing dramatically. The five biggest companies in the U.S. are all tech companies. They’re all companies that have benefited from these trends. They’re all companies that have taken the network effects that come from this and have grown to enormous size.
And how does this affect trends in trust? Social media is a very big part of it. Anybody can produce content, and content can be seen by millions of people. Much more of life is recorded. If something goes wrong, people usually see it.
There are examples on YouTube, as well, of crazy things happening, some of them true and some completely fictitious, that have been seen tens of millions of times. Other things have to do with machine learning. Bias is something you hear a lot about when it comes to AI. These systems are built on data from the past. If that data is biased, the machine-learning system learns to emulate it. So in many ways, machine-learning systems embody society’s biases.
In addition, modern machine-learning systems intrinsically cannot explain their decisions in a way that makes sense to humans. If a lender's AI system refuses your loan application, it is possible that the bank will not be able to tell you why.
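To make both mechanisms concrete, here is a minimal sketch in Python, using entirely synthetic data and invented numbers; nothing in it comes from any system discussed at the event. A model is trained only on apparently neutral features, yet a proxy variable lets it reproduce the historical disparity, and, being an ensemble of hundreds of trees, it offers no human-readable reason for any individual refusal.

```python
# Purely illustrative: synthetic "historical lending" data, invented numbers.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 20_000

group = rng.integers(0, 2, n)             # protected attribute, 0 or 1
income = rng.normal(50, 10, n)            # a legitimate feature
postcode = group + rng.normal(0, 0.3, n)  # a proxy correlated with group

# Historical decisions: income mattered, but group 1 was penalized.
approved = income + 10 * (group == 0) + rng.normal(0, 5, n) > 52

# Train only on apparently neutral features; the proxy smuggles the bias in.
X = np.column_stack([income, postcode])
model = GradientBoostingClassifier().fit(X, approved)
pred = model.predict(X)

for g in (0, 1):
    mask = group == g
    print(f"group {g}: historical approval {approved[mask].mean():.2f}, "
          f"model approval {pred[mask].mean():.2f}")

# Note there is no per-decision explanation to hand a refused applicant:
# the model exposes a probability from hundreds of trees, not a reason.
```

With these invented numbers, the model’s approval rates track the historical gap almost exactly, even though the protected attribute never appears in its inputs.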
But I want to end on a positive point. There are examples of where AI or other trends in technology have dramatically improved trust. I would say online reviews are one of those. I can go to eBay now and see a seller that’s sold 10,000 things and has a 99% positive rating. I can be almost 100% sure that the 30-quid item I’m buying from them is going to be exactly as described. You would never have been able to generate that amount of trust previously.
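Why does volume generate this kind of trust? A rough statistical sketch, illustrative only and using hypothetical rating counts, makes the point: the Wilson score lower bound, a standard 95% confidence bound on a true rate, tightens dramatically as ratings accumulate.

```python
# Illustrative only: why rating volume creates confidence.
from math import sqrt

def wilson_lower_bound(positives: int, total: int, z: float = 1.96) -> float:
    """95% lower confidence bound on a seller's true positive-rating rate."""
    if total == 0:
        return 0.0
    p = positives / total
    denom = 1 + z * z / total
    centre = p + z * z / (2 * total)
    spread = z * sqrt(p * (1 - p) / total + z * z / (4 * total * total))
    return (centre - spread) / denom

# A seller with 10,000 ratings at 99% positive vs. one with 10 at 100%:
print(wilson_lower_bound(9_900, 10_000))  # ~0.988 -- near-certainty
print(wilson_lower_bound(10, 10))         # ~0.72  -- still a gamble
```

A perfect score over ten ratings still leaves real doubt; a 99% score over ten thousand leaves almost none.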
So although technology is creating problems, I’m an optimist. Technology will eventually solve many of the problems that we’re seeing.
Rebuilding Trust Requires Transparency, Public Participation - Margaret Heffernan
We’ve seen a couple of definitions of trust. For me, what really matters is what trust is built from. And I think it really has four ingredients. Benevolence: you wish the best for me, for us. Competency: you know what you’re doing. Consistency: you don’t change your mind all the time. Integrity: you act the same way whether anybody’s watching or not.
But even in these early days of AI implementation, the behavior of some systems has proved erratic and unreliable. And what we’re seeing is a whole bunch of problems that, thanks to a lot of diligent investigators, are being brought to light.
Consent: Pearson, a major AI education vendor, inserted social-psychological interventions into one of its commercial learning software programs without the consent of the students, their parents or their teachers. That is hardly consistent with Pearson’s stated values. It doesn’t exude integrity.
Bias: (Businesswoman and philanthropist) Martha Lane Fox said recently that 96% of the world’s code has been written by men. Of course, what we make is a reflection of the people who make it. So that means we’re starting in a place that’s already deeply troubled because it’s wildly unrepresentative.
Datasets: At the moment, at least 14 UK police forces have signed up for AI that claims to be able to predict crime, some using facial recognition software. Yet claims that mood, psychological state, sexual orientation, intelligence, or a likelihood of pedophilia or terrorism can be deduced by analyzing facial affect keep being shown to be inaccurate, outdated, biased and based on flawed datasets.
Now these are real cases. There are more. Each one can be nitpicked apart, but the key issue is this: Much of AI crosses a fundamental boundary between objective analysis and moral judgment. Who’s making these judgments? In whose interests? According to whose values?
I want technology to live up to its promise, because if it can, trust won’t be an issue. But to get there, a whole bunch of things have to happen.
The first is that this propaganda of inevitability, which is a kind of strange brew of salesmanship, hype and determinism, has got to stop. It’s misleading. It’s dishonest, and it’s contemptuous of consumers and citizens.
All AI needs to be designed so that it can easily and readily be audited to ensure that it complies with the law and with all civil rights. The only way the public will trust AI is if it’s invited and involved in deciding where the limits and boundaries lie.
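What might such an audit look like in practice? One simple illustration, hypothetical and not a method proposed at the event, is a disparate-impact check of the kind used as a rule of thumb in US employment law, applied here to an invented log of model decisions.

```python
# Hypothetical audit: the "four-fifths" disparate-impact rule of thumb,
# applied to an invented log of (group, decision) pairs.
from collections import defaultdict

def disparate_impact(decisions):
    """decisions: iterable of (group, approved) pairs.
    Returns (ratio of lowest to highest group approval rate, rates)."""
    approved = defaultdict(int)
    total = defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += bool(ok)
    rates = {g: approved[g] / total[g] for g in total}
    return min(rates.values()) / max(rates.values()), rates

# Invented audit log of model decisions:
log = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 55 + [("B", 0)] * 45
ratio, rates = disparate_impact(log)
print(rates)           # {'A': 0.8, 'B': 0.55}
print(f"{ratio:.2f}")  # 0.69 -- below the 0.8 rule-of-thumb threshold
```

A check like this settles nothing by itself, but a system designed so that outsiders can run it at all is exactly the kind of auditability being demanded here.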
Now the medical community has understood this for decades. They have invested millions in the public understanding of science and in public consultations and patient relationships. They did that because they knew that medical decisions are almost always ethical decisions.
Yes, people can understand issues of great complexity if they’re given the right opportunity and information. They do listen and do their homework. The price of legitimacy isn’t ad spend or market dominance or growth rate or propaganda. The price of legitimacy is participation.
The speed at which AI is developing makes this a matter of real urgency for all of us: for citizens who fear manipulation and exclusion, for employees who are worried and wary about what they’re being made responsible for, and for companies that can only flourish where trust is high.
The Dutch have a brilliant saying: Trust arrives on foot; it leaves on horseback.
Panel Discussion - How to Make AI Technology More Responsible
How can tech companies develop AI systems that deliver accurate, unbiased results? And how should they address demands from an increasingly skeptical public for more ethical technology processes and business practices? Mr. Tunstall-Pedoe, Ms. Heffernan and Mr. Tait hashed out these and other issues as part of a concluding panel. They took questions from the audience, with Mr. Gentle serving as moderator for the discussion.
One member of the audience asked whether trust is age-related, that is, whether older people tend to take a more cynical view of AI than younger people.
Mr. Tunstall-Pedoe said he suspects that children who have been brought up with tech are likely to be more trusting of new technology than somebody who has been exposed to it much later in life. But Ms. Heffernan wasn’t so sure.
“I would say that kids in their 20s have a greater ability to see through it than many people in my own generation, because they’ve been brought up with it and lived with it. They’re really canny. And I would say both of my kids and many of their friends and colleagues are kind of angry, to be honest, because they often feel quite exploited.”
Another audience member commented that while AI developers have good intentions, they might not have the right datasets to work with. He asked whether the industry needs to dramatically increase the amount of data it uses when it builds AI products.
“Building machine learning is often about finding the right data and finding as much of it as you need,” responded Mr. Tunstall-Pedoe. “And often that is very, very large amounts of high-quality data. So I don’t think it’s simply about not getting enough data. What you actually need is data that is free from bias.”
Mr. Gentle followed up by asking whether one of the things the industry is learning through AI is about the nature of data and the nature of bias.
“Quite often you have a situation where actually launching the product is what gets you the data,” said Mr. Tunstall-Pedoe. “You start with a model, which is imperfect and with the data that you manage to cobble together. And then by having customers use the product, you actually have more data to improve it.”
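The loop he describes can be sketched in a few lines. The following is a toy illustration with synthetic data and an off-the-shelf incremental learner, not a description of any real product’s pipeline: a model launches on the small dataset that could be cobbled together, then improves as customer usage yields fresh labeled batches.

```python
# Toy illustration of the launch-then-improve loop, with synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)
X_test, y_test = X[5_000:], y[5_000:]  # held out as "future traffic"

# Launch: a model trained on the small set you managed to cobble together.
model = SGDClassifier(random_state=0)
model.partial_fit(X[:200], y[:200], classes=np.unique(y))
print("accuracy at launch:", round(model.score(X_test, y_test), 3))

# Post-launch: each batch of customer usage yields new labeled examples.
for week, start in enumerate(range(200, 5_000, 800), start=1):
    model.partial_fit(X[start:start + 800], y[start:start + 800])
    print(f"after week {week}:", round(model.score(X_test, y_test), 3))
```

The accuracy climbs batch by batch, which is the whole commercial logic of shipping an imperfect model: the product itself becomes the data-collection mechanism.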
Mr. Tait added that tech companies need to ask the question about available data early on in the process of developing a product, because the consequences of using bad data can be serious. “I can give you two examples. On the one hand, we worked with Siemens to improve the safety of their wind turbines using machine learning,” he said. “We have relatively perfect data, with the impact to society being safer wind turbine systems. If you take the work that we did with Madrid’s San Carlos Hospital, where we’re developing AI to help the physicians in that hospital reduce suicide risk, drug abuse and alcohol dependency, you need to be really, really careful with the data. But we have found that our data is at least as accurate as that which had been used by the physicians.”
One audience member wondered if the panel would recommend that companies collect as much data as possible, even if they don’t necessarily know how they are going to use it in the future.
“Yes, storage is cheap,” said Mr. Tunstall-Pedoe. “Obviously, you have to be a little careful if it’s personal data, but throwing data away doesn’t make sense. As you say, that data may power a machine-learning system for your business in the future.”
Other audience members were curious about the prospects for holding more debates about AI processes and ethics within the tech and business communities and also within the public sphere.
“It needs structure. It needs a diverse range of participants that are seen as legitimately representing the relevant stakeholders,” said Ms. Heffernan, adding that it’s important not to have only companies talking among themselves.
“You have to be extremely thoughtful and meticulous in how you select the people involved in the process,” she said. “I think everybody here has agreed that trust takes time to build. And you can lose it in an instant.”