Creating Inclusive and Equitable Workplaces with AI with Dr. Joy Buolamwini

Dr. Joy Buolamwini, Founder of the Algorithmic Justice League and author of the national bestseller “Unmasking AI”


Join Dr. Joy Buolamwini, a pioneering researcher, artist, and advocate for ethical AI, as she delves into the critical issue of algorithmic bias and its impact on workplace inclusivity. In this session, Dr. Joy shares insights from her groundbreaking work with the Algorithmic Justice League, highlighting how biased algorithms can undermine efforts to create a great workplace for all and the responsibility leaders have to shape a future that is inclusive and full of possibilities.

Drawing from her extensive experience, Dr. Joy provides a compelling narrative on the importance of ethical AI. She also shares insights from her book, "Unmasking AI: My Mission to Protect What Is Human in a World of Machines," and lessons learned through her work with global business leaders on preventing AI harms.

This keynote is a must for leaders committed to leveraging AI ethically and creating a workplace and world where every person can thrive, regardless of their background or role.

 


Transcript

Dr. Joy Buolamwini (00:00):

Hello? Hello. I said hello. Hello, hello. How's everybody doing? Well, I'm so excited to be in Vegas. I just came back from Kigali, and it's a beautiful time in many ways, but also a very difficult time as well. So today I'm going to share with you my work as the founder of the Algorithmic Justice League. I wear many different hats, right? There's that of being an advocate, an academic, and author of the bestselling book Unmasking AI. Thank you. But one of my favorite hats to wear is the hat of being an artist, and in particular being a poet of code. So I'd like to start this presentation with a spoken word poem that's also an algorithmic audit of various AI systems, called “AI, Ain't I a Woman?”, which was inspired by Sojourner Truth's 19th-century speech for the women's rights movement in Akron, Ohio, “Ain't I a Woman?” So are you ready? All right, let's cue that video and play.

(01:36):

My heart smiles as I bask in their legacies, knowing their lives have altered many destinies. In her eyes, I see my mother's poise. In her face, I glimpse my auntie's grace. In this case of deja vu, a 19th-century question comes into view. In a time when Sojourner Truth asked, "Ain't I a woman?", today we pose this question to new powers making bets on artificial intelligence, hope towers. The Amazonians peek through windows blocking deep blues as faces increment scars. Old burns, new urns, collecting data, chronicling our past, often forgetting to deal with gender, race, and class. Again, I ask, ain't I a woman? Face by face, the answers seem uncertain. Young and old, proud icons are dismissed. Can machines ever see my queens as I view them? Can machines ever see our grandmothers as we knew them? Ida B. Wells, data science pioneer, hanging back, stacking stats on the lynching of humanity, teaching truths hidden in data, each entry and omission a person worthy of respect. Shirley Chisholm, unbought and unbossed, the first Black congresswoman, but not the first to be misunderstood by machines well-versed in data-driven mistakes. Michelle Obama, unabashed and unafraid to wear her crown of history, yet her crown seems a mystery to systems unsure of her hair. A wig, a bouffant, a toupee? Maybe not. Are there no words for our braids and our locks? Does relaxed hair and sunny skin make Oprah the first lady? Even for her face, well-known, some algorithms fault her, echoing sentiments that strong women are men. We laugh, celebrating the successes of our sisters with Serena smiles. No label is worthy of our beauty.

(04:29):

All right, thank you. So as we see, AI systems aren't always neutral, and what we see there is something that I've come to call the coded gaze. Now, some of you may have heard of the male gaze, the white gaze, the post-colonial gaze. Well, to that lexicon I add the coded gaze, and it is very much a reflection of power: who has the power to shape the priorities, the preferences, and yes, also at times the prejudices of the technologies that shape our lives. And as you see in this animation, I encountered the coded gaze in a very visceral way. While I was a student at MIT, I was working on an art project that used face detection, and you can see here it didn't quite work on my face until I put on this white mask. And so this led me to start asking some questions in terms of how neutral AI systems really are.

(05:26):

And I shared my experience of coding in a white mask on the TED platform. It got many views, and I thought, uh-oh, people might want to check my claims. Let me check myself. So I took my profile photo for the TED Talk, and I started running that photo through different online demos of various facial analysis systems. And I found that some didn't detect my face at all, and the ones that did misgendered me as male. Now, this became even more concerning once we started seeing the use of different types of facial recognition technologies in the real world, leading to false arrests, or companies like Clearview AI that have scraped billions of our photos from social media and other online platforms.

(06:26):

And because of this, we are now all part of a group I call the excoded: those condemned, convicted, exploited, or otherwise harmed by AI systems. And we can all become excoded in different types of ways. So you might have algorithms of exploitation: now that we have generative AI capabilities and the ability to produce deepfakes, no one is immune. Your race won't save you; celebrity won't save you. And here we have an example of an AI-generated depiction of Tom Hanks, portraying him endorsing a dental product he'd never even heard of. And then we move to endorsing political views you might not necessarily support. And behind these AI systems as well, most of these foundational models are built on a foundation of contested data, generally taken without consent, compensation, credit, or control for the artists, as we've seen with strikes from writers and strikes from actors as well.

(07:35):

And then we have algorithms of surveillance. Some of you probably flew here and went through airport security, and in that context it's not just how well the systems might work, but how they might be abused, that we have to consider. And then we have algorithms of distortion. So sometimes we hear that AI bias is a mirror, a reflection of the bias in society. But as you see with these high-paying occupations, we're seeing men represented in these images. These are images generated by Stable Diffusion where you put in a prompt for a depiction of a high-paying job, and these are the images that came out. So we see a male lean here. For low-paying occupations, we start to see a bit more female representation, and when it comes to criminal stereotypes, we're seeing darker-skinned men represented. But I will pose that we are not seeing a mirror of society.

(08:37):

What we're actually seeing is a kaleidoscope of distortion. And this is what I mean. So let's take, for example, judges in the U.S. We're not quite at parity, but we've made some progress, right around 34% representation of women. But these AI systems, in this case the Stable Diffusion example from Bloomberg, represented women as judges less than 3% of the time. So what we're seeing is that the technologies that are meant to take us into the future are actually bringing us back to the discrimination of the past while robbing us of our humanity in the present. And because of that, that's why I started the Algorithmic Justice League. And it also sounds cool, but more so the first reason. We are known for a variety of things, but probably most well-known for our research, and in particular my MIT research called Gender Shades.

(09:37):

And in this particular research exploration, I really wanted to start interrogating the neutrality, or maybe the bias and discrimination, within various AI systems from well-known tech companies. And so my research question was really: how accurate are some of these systems when it comes to guessing the binary gender of a face? And when we did the overall analysis, it looked all right. For example, Microsoft, 94%, they get an A. IBM, 88%, let's say they get a B on the entire dataset. Face++, 90%, I'm a nice professor, I'll give them the A. So where it starts to get interesting is when we begin to disaggregate the numbers. When we look at accuracy by gender, we're seeing an 8 to 21% gap in error rates, and across all of the systems, they work better on male-labeled faces than female-labeled faces. Intersex and transgender weren't even part of the outputs from these systems.

(10:49):

And then accuracy by skin type, probably not so surprising, right? Overall, they worked better on lighter skin than darker skin. But we took the analysis a little bit further and did an intersectional analysis, inspired by anti-discrimination research showing that if you looked at only one axis of discrimination, you oftentimes didn't get the full picture. So we broke it up into these four categories. So now let's go to Microsoft. Perfection is possible: lighter males (pale males, as a term of endearment), complete accuracy. Then we look over at lighter females, darker males, and darker females in last place. And these were the good numbers.
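For readers who want to see what this kind of disaggregated, intersectional accuracy breakdown looks like in practice, here is a minimal sketch. It assumes a benchmark table with gender and skin-type labels; the column names and toy rows are hypothetical illustrations, not the actual Gender Shades pipeline.

# Minimal sketch of a disaggregated, intersectional accuracy audit.
# The columns and toy rows below are hypothetical, for illustration only.
import pandas as pd

# Each row: one face image, its benchmark labels, and the model's guess.
df = pd.DataFrame({
    "gender":    ["female", "male",    "female", "male",   "female",  "male"],
    "skin_type": ["darker", "lighter", "darker", "darker", "lighter", "lighter"],
    "predicted": ["male",   "male",    "female", "male",   "female",  "male"],
})
df["correct"] = df["predicted"] == df["gender"]

# Overall accuracy can look fine...
print("overall:", df["correct"].mean())

# ...so disaggregate along a single axis...
print(df.groupby("gender")["correct"].mean())
print(df.groupby("skin_type")["correct"].mean())

# ...and then intersectionally, across both axes at once, where the
# largest gaps (for example, on darker-skinned women) tend to show up.
print(df.groupby(["skin_type", "gender"])["correct"].mean())

The same grouping can be repeated for each vendor or model version being audited, which is what makes spot checks after a vendor announces a fix straightforward to rerun.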

(11:40):

All right, let's go to Face++. The reason I chose Face++ is that this was a tech company based in China, and they had access to over a billion face photos. Oftentimes the heuristic is that if you have more data, you'll get better results, but of course the type of data matters. And in this case, we actually saw that the best performance, just marginally, was on darker males. I also point this out because it's important that we test systems individually for the populations we care about. We can't just assume that a general trend is going to be the same for each implementation or integration of an AI tool you might adopt. But one story stays the same: worst performance for the women like me. All right, IBM, Big Blue, how are we doing? Well, lighter males take the lead again, then lighter females, then darker males.

(12:38):

And in this case, as in all the other cases, the worst performance was on darker females. So I ran this test as a grad student at MIT, and before we published the results, I thought the companies might want to know. So I sent them the paper pre-release once it was certain it would go to the conference. And I got a variety of responses, which I think we can learn from. The first response was no response. They're like: no response, no guarantees, use as you will. And while that might be okay for the company, if you are adopting these sorts of tools, you yourself might have some liabilities to be concerned about. We got another kind of response, which was: we know, we know about bias. In fact, we've taken steps to address it. And there's nothing wrong with taking those steps or making those announcements, but I think it's always important to continuously check yourselves.

(13:39):

So the day Microsoft announced that they'd fixed the problem and released a new model, I thought I'd do a quick spot check. So I took this image of myself and I ran it on one of their computer vision systems, and in this case, I was both misgendered and aged at 50. I hope to one day have that gravitas, but I'm not there yet. So there are a number of things going on here, and again, we see even with the Michelle Obama example that we're still having issues, again with same-day spot checks for some of these things. And so the reason I bring this up is this car recall reminder: just because there's an issue on the roads and you have issued that recall doesn't mean all of the cars have come off of the road, or that the new system you've introduced doesn't have problems.

(14:36):

So check persistently. IBM had another response: IBM invited me to their headquarters, and I talked shop with their software engineering team and some of their head researchers. They released a new version and gave me results, self-reported results, which are also interesting. As a researcher, I'm like: your self-reported results are great; we're going to need to do a follow-up. So we did a follow-up study, and in that follow-up study we did see improvement. So I give that to IBM, but our methods were a bit different, and this is why I encourage third-party auditing and an outside perspective. So think of first-party auditing, like what IBM did, as checking your homework yourself. Then think of a second-party audit as having your friend check your homework, somebody you have a relationship with, maybe you paid them to do it. And then third-party auditing is what we advocate for alongside all of those, right?

(15:44):

Which is having an outside entity, without the same kind of ties, check those systems. And so when we did this third-party audit, this was the result. But there was another result: in addition to including IBM, because we had them in the first study, we decided to take a look at Amazon and also Kairos. Amazon was selling Rekognition at the time to law enforcement agencies, so we wanted to see how they were doing. And to our surprise, even though the paper had been out for over a year, with thousands of articles and lots of citations, Amazon was where their peers had been the year before. So imagine having the test results freely available and still falling short. So we shared it with Amazon. Amazon had, let's say, not the nicest response at the time, but I am happy to say that all of the US-based companies we audited no longer sell facial recognition to law enforcement.

(16:56):

And despite that initial pushback on the research, I was most surprised when Unmasking AI came up as an editors' best pick on the Amazon platform. So I guess people come around from time to time. So this is part of what we do with the Algorithmic Justice League: we're about putting research into action, and that can look like advocacy, whether it's at the federal level or with government agencies, depending on what's going on at the federal level, and also looking at policy change within companies as well. We also truly believe in the power of art. The opening piece, “AI, Ain't I a Woman?”: the performance metrics that I've walked you through have their place, but so does performance art, right? Where we see what that visceral feeling is when you see some of the misclassifications and mislabeling that can go on. This year, I am an Oxford University accelerator fellow, and I'll have an opportunity to create more art projects and installations.

(18:05):

So if anyone has ideas or wants to collaborate, reach out to me. I'm on LinkedIn way too much, so check me out there. And then we also think a lot about media and storytelling. I'm so fortunate to be the protagonist of the Emmy-nominated documentary Coded Bias. It had a nice run on Netflix that actually just ended April 5th, but we started a world tour with the film. We've done screenings in Paris. We were just in Taiwan, and I literally got back a few days ago from Kigali, where we did the first screening of Coded Bias on the African continent. So if I'm a little off, that's why. Thank you.

(18:50):

And this was during the Global AI Summit on Africa, where it was announced that there will be a $60 billion AI fund for the continent, as well as 10,000 Nvidia GPUs that have been purchased so that there can be GPUs as a service. And for me, this was really exciting, because when we're talking about the future of AI, we want all the voices involved. Now, we can't be called the Algorithmic Justice League if we don't have a little fight. So we're always fighting for something. In this case, we've been thinking about the fight for biometric rights, and it's one that you all can be a part of, right? Earlier, in talking about algorithms of surveillance, I did share that the TSA is expanding facial recognition, with a plan to expand it to over 400 domestic airports. And so we've been doing a campaign, the Freedom Flyers campaign, at fly.ajl.org, where people can fill out a TSA scorecard and share their experiences: whether or not they're able to opt out, whether they see signage, whether they feel intimidated.

(20:07):

The results so far have been that many people don't even know they have a right to refuse, because oftentimes you're just told to step up to the camera, and your flight might be about to take off, and you have a huge line behind you. So it's oftentimes coercive. Recently we've been pushing to have "you have the right to opt out" actually appear on the screen. So some people see that, some people don't. But we do encourage those who feel that they are in a privileged position to opt out each and every time you travel, because it's not just about the airport; there are different areas where we're going to start seeing facial recognition rolling out. And so this is one opportunity, right, to vote for consent culture. And it's not that hard: you avoid the camera, you present your normal ID. For me, I find it's actually faster, because by the time the kiosk gets to my height and you take off the glasses and all of that, they could have just done this, which is oftentimes what happens instead.

(21:13):

So I would encourage all of you to opt out if you feel that you're in a privileged position to do so, and let us know your experience at fly.ajl.org. It also comes with cool swag, right? So you can be part of the opt-out club as well. Whatever your reason, this is one step you can take. Now, another step you can take, since this is a conference about great places to work: with the Algorithmic Justice League, we do have an online excoded experiences platform where you can share harms that you're experiencing, witnessing, or seeing other people experience when it comes to AI technologies, and you can also share triumphs as well. But since we're on the justice side today, we're looking at harms. So I would encourage you to also check out this website so that you can learn the different ways in which AI systems are being used in the workplace and share your experiences as well. And then finally, I will end where we began, with “Unmasking AI.” It is a book I am so proud of, because it shares not just the technical side of the Algorithmic Justice League, but the human side of what it takes to have courage in trying times. And I think all of us could use a little more courage right now. So thank you.