Humans benefit from AI in several ways. But AI also has the potential to reproduce or exacerbate long-held prejudices. As machine learning has matured beyond simpler task-based algorithms, it has come to rely more on deep learning architectures that pick out relationships that no human could see or predict. These algorithms can be extremely powerful, but they are also “black boxes” where the inputs and outputs may be visible, but exactly how the two are connected is not transparent.
Given the sheer complexity of algorithms, bias can creep into their results without their designers intending it or even knowing that bias exists. So perhaps it’s no surprise that many people are wary of the power of machine learning algorithms.
Inhi Cho Suh, general manager of IBM Watson Customer Engagement, and Florian Zettelmeyer, professor of marketing at Kellogg and chair of the school’s marketing department, are both invested in understanding how deep learning algorithms can detect, account for, and reduce bias.
The pair discuss the social and ethical challenges posed by machine learning, as well as the broader question of how developers and companies can go about building artificial intelligence that is transparent, fair and socially responsible.
This interview has been edited for length and clarity.
Florian ZETTELMEYER: So let me start this off with an example of bias in algorithms, which is the quality of facial recognition. The subjects used to train the algorithm are much more likely to be non-minority than minority members. As a result, the quality of facial recognition turns out to be better if you happen to look more conventionally Western than if you are of some other ethnicity.
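Zettelmeyer’s point can be made concrete with a small sketch. The numbers below are purely illustrative (not from any real benchmark): when one group dominates the training data, accuracy measured per group on a labeled test set often diverges.

```python
from collections import defaultdict

def per_group_accuracy(records):
    """Compute recognition accuracy separately for each group.

    records: iterable of (group, correct) pairs from a labeled test set.
    """
    hits, totals = defaultdict(int), defaultdict(int)
    for group, correct in records:
        totals[group] += 1
        hits[group] += int(correct)
    return {g: hits[g] / totals[g] for g in totals}

# Illustrative outcomes: the under-represented group fares worse.
results = ([("majority", True)] * 95 + [("majority", False)] * 5
           + [("minority", True)] * 70 + [("minority", False)] * 30)
accuracy = per_group_accuracy(results)  # {'majority': 0.95, 'minority': 0.7}
```

Aggregate accuracy here is 82.5 percent, which looks acceptable until it is broken down by group, which is why per-group evaluation is the first step in detecting this kind of bias.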
Inhi Cho SUH: Yes, this is an example of missing-data bias. Another very good example of this bias is loan approval. If you look at the financial services sector, there are fewer women-owned businesses. Therefore, as a woman business owner, you may be arbitrarily denied a loan rather than approved, because the lack of sufficient data adds too much uncertainty.
ZETTELMEYER: You don’t want to approve a loan unless you have some level of certainty [in the accuracy of your algorithm], but the lack of data does not allow your statistics to reach that level of certainty.
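This statistical point can be sketched quantitatively. The following is a minimal illustration with hypothetical numbers, using the normal approximation to the binomial: the same observed repayment rate is far less certain when estimated from a small, under-represented group.

```python
import math

def repayment_estimate(repaid: int, n: int, z: float = 1.96):
    """Estimated repayment rate and the half-width of its ~95%
    confidence interval (normal approximation to the binomial)."""
    p = repaid / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p, half_width

# Same 90% observed repayment rate, very different sample sizes.
rate_big, ci_big = repayment_estimate(9000, 10000)  # well-represented group
rate_small, ci_small = repayment_estimate(90, 100)  # under-represented group
# ci_small is 10x wider: the lender faces far more uncertainty,
# even though the observed rates are identical.
```

A lender with a fixed certainty threshold would approve the first group and reject the second on uncertainty alone, which is exactly the arbitrary denial Suh describes.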
What do you think about the Microsoft bot example on Twitter [where the bot quickly mirrored other users’ sexist and racist language]? This is another source of bias: it seems to be a case where an algorithm gets carried away because the people it’s learning from aren’t very good.
SUH: There are certain social and cultural norms that are more acceptable than others. Each of us as individuals knows and learns the difference between what is acceptable and what is not through experience. For an AI system, this will require an enormous amount of deliberate training. Otherwise, it won’t catch sarcasm. It will apply the wrong frame in the wrong context.
ZETTELMEYER: Right. In a way, we deal with this with our children: they live in a world full of profanity, but we would like them not to use that language. That is hard to learn on their own. They need a set of value guidelines; they can’t just glean everything from what’s around them.
SUH: Absolutely. And Western culture is very different from Eastern culture or Middle Eastern culture. So the culture and value code [that the algorithm is trained with] must be taken into account; it must be deliberately designed. And you do this by bringing together policymakers, academics, designers, and researchers who understand user values in a variety of contexts.
ZETTELMEYER: I think there’s actually a larger point here that goes beyond even the concept of bias.
I’m trained as an economist, and all too often economics has not done a very good job of incorporating the concept of “values” into economic analysis. There is this very strong sense of wanting to strive for efficiency, and as long as things are efficient, you can avoid thinking about whether the results are beneficial to society.
What I find interesting is that in this whole AI and analytics space, the discussion around values is front and center. I think it has to do with the fact that analytics and artificial intelligence are very powerful weapons that can be used in very strategic, very targeted ways. As a result, it seems absolutely critical for an organization that chooses to implement these techniques to have a code of conduct or a set of values that governs them. Right? I mean, just because you can do something doesn’t mean you should.
When you have these very powerful tools available that can really move things, you have an obligation to understand their broader impact.
SUH: Accountability is one of five areas we focus on to build trust in AI.
Many businesses are applying AI not only to create better experiences for consumers but also to generate revenue. They may do so in ways where, say, data rights are not properly balanced against the return of economic value or efficiency. So it’s an important debate: Who is accountable when there are risks in addition to benefits?
ZETTELMEYER: Do you think this is new?
SUH: I do a little, because in previous scenarios, programs and applications were programmable. You had to put in the logic and the rules [explicitly]. When you get into machine learning, you’re not going to have direct human intervention at every step. So what are the design principles you intended?
ZETTELMEYER: So a fair way of saying this is, basically, we’ve always had this ownership issue, except with machine learning, you can probably get away with thinking you don’t need it.
But you say that’s a fallacy, because you need accountability at the end of the day when something blows up.
SUH: Exactly. And this goes back to [training an algorithm to have] a fundamental understanding of right and wrong in a wide range of contexts. You can’t just put your chatbot into the public realm and say, “Here, just go learn,” without understanding the implications of how that system actually learns and the consequences that follow.
ZETTELMEYER: Okay, accountability. What is your second focus area for building trust in AI?
SUH: The second is a focus on values. What is the common set of core principles by which you operate? And, depending on different cultural norms, whom do you bring into the process [of creating these principles]?
There’s a third area of focus around data rights and data privacy, mainly around consumer protection, because there are companies that offer a free service of some kind in exchange for data sharing, and the consumer may not realize that they’re actually giving permission not just for this occasion, but in perpetuity.
ZETTELMEYER: Do you think it’s realistic today to think that consumers still have some degree of ownership over their data?
SUH: I think there is a way to solve this. I don’t think we’ve solved it yet, but I think the potential is there to allow people to understand what information is being used by whom and when.
Part of this puts a burden of explanation on institutions. That’s number four: be able to explain your algorithm. Explain the data sets that were used, explain the approach holistically, so you can identify where you might have biases. This is why explainability and fairness, which is number five, go hand in hand.
ZETTELMEYER: In an academic context, I refer to this as execution transparency.
I thought you were going to say something slightly different, that we need to move to a place where some of the more flexible algorithms like neural networks or deep learning can be interpreted.
It’s a difficult problem because, in some ways, the very thing that makes these algorithms work so well is what makes them so hard to explain. In other words, the problem with these algorithms is not that you can’t write them. You can always write them down. The problem is that it’s very difficult to create some easily understood correlation between inputs and outputs, because everything depends on everything else.
But I think the point you’re making is: okay, even if we have a so-called “black box” algorithm, a lot of biases arise not necessarily from the algorithm itself, but from the fact that we’re applying that algorithm to a particular setting and data set, and it’s just not clear to people how it’s implemented.
SUH: When and for what purpose do we actually apply AI? What are the main sources of this data? And how do we work toward, if not eliminating bias, perhaps mitigating it?
ZETTELMEYER: I think a lot of the trust issues that have emerged in the tech industry—and advertising in particular—in recent years are directly related to this type of lack of transparency. I’m always amazed that when you go to the big ad platforms and you approach them purely as a consumer and then you approach them as a customer, it’s like you’re dealing with two different universes. As a consumer, I’m not sure you have the same sense of exactly what’s going on behind the scenes as you do if you happen to be an advertiser and have exposure to all the digital tools you can use for targeting.
I think transparency, as you call it, is not practiced very well in a lot of tech companies.
SUH: No. And there’s not even a common language to talk about it, in terms of explicitly saying, “We’re only using data that we have access to and rights to, and that’s how we collect it, and you’ve given us permission to do that.” Standards around the language itself are still developing.
ZETTELMEYER: What do you do about all this at IBM?
SUH: We actually developed an AI Fairness 360 toolkit as part of our wider AI OpenScale initiative. AI OpenScale is an open technology platform that gives your business visibility into and control over AI deployments, helps explain AI results, and scales the use of AI with automated neural network design and deployment, all in a unified management console. It includes an open-source toolkit to check for unwanted biases in data sets and machine learning models. It checks for biases and offers explanations around your data sets, providing feedback on different aspects of your models.
It’s the first open-platform, open-source toolkit that starts to get developers thinking proactively about bias.
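The kind of check such a toolkit performs can be illustrated with a minimal sketch. This is not IBM’s actual API; the function and the loan-approval counts below are hypothetical. Disparate impact compares favorable-outcome rates across groups, and values below roughly 0.8, the so-called four-fifths rule, are commonly flagged for review.

```python
def disparate_impact(fav_unpriv: int, n_unpriv: int,
                     fav_priv: int, n_priv: int) -> float:
    """Ratio of favorable-outcome rates: unprivileged / privileged.
    A ratio of 1.0 means parity; below ~0.8 is commonly flagged."""
    return (fav_unpriv / n_unpriv) / (fav_priv / n_priv)

# Hypothetical loan-approval counts for two groups of equal size.
ratio = disparate_impact(fav_unpriv=300, n_unpriv=1000,
                         fav_priv=600, n_priv=1000)
# ratio == 0.5, well under the 0.8 threshold: flag the model for review.
```

Running a metric like this over a model’s outputs, before and after deployment, is one concrete way developers can “think proactively about bias” rather than discovering it after harm is done.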