The US Department of State’s first science envoy for artificial intelligence, Dr Rumman Chowdhury, discusses driving responsible AI, understanding AI’s limitations and how we should be working towards a positive AI future

In 2018, Dr Rumman Chowdhury launched the groundbreaking Fairness Tool, an industry-first solution to identify and reduce bias in artificial intelligence (AI) systems.

Chowdhury had joined Accenture the previous year as global lead for responsible AI. While there had been significant academic research on how to make AI models fairer, those solutions had not yet been implemented in the real world. Chowdhury’s tool changed that, demonstrating how organisations could correct for bias and deploy more ethical and responsible AI.

“Simply put, [responsible AI] is ensuring that AI works for everybody, every human being, every perspective,” explains Chowdhury. “If, truly, we are going to usher in a global change, an economic change, people even say a political and social change, then it is our responsibility to ensure that people are not left behind, that no one is erased by this.”


Above: Chowdhury was behind the groundbreaking Fairness Tool, aimed at identifying and reducing bias in AI (photo: courtesy of Rumman Chowdhury)

Ensuring that AI is responsible would continue to be at the forefront of Chowdhury’s work. She went on to found her own startup, Parity, focused on building ethical AI solutions, before joining Twitter in 2021, prior to Elon Musk’s acquisition, as engineering director of the platform’s Machine Learning, Ethics, Transparency and Accountability (META) team.

“It was a powerful role within social media to determine whether or not the algorithms that we were using were biased, were in some way promoting misinformation, were in some way directing people towards radicalisation. It was quite a responsibility, and I enjoyed every minute of it,” says Chowdhury. 

Needless to say, when Musk took over, Chowdhury was among the subsequent layoffs and the META team was disbanded. Twitter became X, and the platform began a rapid descent into what Chowdhury now describes as a “cesspool”. “Between crypto, light pornography and complete misinformation, I just feel like people are screaming all the time,” she says.

That was 2022, and since then Chowdhury has been working on other ways to drive ethical and transparent AI. As well as consulting on responsible AI assessment and implementation through Parity, whose clients include the European Commission, the UK Office of Communications, DeepMind and Meta, she became a responsible AI fellow at Harvard University’s Berkman Klein Center and launched the non-profit Humane Intelligence in 2023. Through the latter, she wants to “create the community of practice around algorithmic assessments. That means that I want to train technical people in the practice of auditing and understanding whether or not algorithms are working.”

I worry very much that people think AI is alive. It’s just a technology that’s really good at mimicking how we work. It’s not thinking. It’s not contextualising. It doesn’t feel anything.

- Rumman Chowdhury -

In March this year, Chowdhury was also appointed the US Department of State’s first science envoy for AI, a role that involves representing the department in a civilian capacity, identifying and communicating opportunities to extend the potential and values of the democratic use of AI, and having open and honest conversations on the topic with people around the world. It was in this capacity that Chowdhury visited Singapore in November, a place she highlights as being ahead of the game when it comes to the use of AI.

“I think Singapore has had a very pragmatic view of how AI can be useful,” says Chowdhury. “I’m interested in seeing how AI is making things more efficient [here], especially in public services, but also looking at how the government is thinking about how AI can be helpful for people.” 

While in Singapore, Chowdhury also ran an exercise she frequently carries out through Humane Intelligence: red-teaming, in which individuals with specific expertise, whether in particular professions or with certain ethnic backgrounds or languages, test AI models for weaknesses. She partnered with Singapore’s Infocomm Media Development Authority (IMDA) to assess multilingual and multicultural biases, employing individuals from nine countries in the region to evaluate whether AI models worked both in their native language and in their country’s cultural context.

“What’s important about my non-profit is that we work with the companies. We are providing this feedback for these model companies to improve how their models perform,” explains Chowdhury. “We want everyone to benefit from AI, and we understand that it’s a huge remit and a huge task for these companies to try to capture every perspective. And, to an extent, I see it as educational for the people engaging with it, because they’re understanding what AI models can’t do, but also it’s helpful in ensuring that we have a more inclusive AI future.”

Above: Chowdhury identifies Singapore as being ahead of the game when it comes to the use of AI (photo: courtesy of Rumman Chowdhury)

Chowdhury has gathered a number of takeaways from the many red-teaming exercises she has run to date. One interesting observation concerns the way humans behave when interacting with AI.

“We found that, because this technology is so humanised and personalised, we talk to it, we give it information about ourselves, and we interact with it. What we type into Google Search is very factual. When we interact with AI, we give personal information,” she says. “I think it speaks to the fact that people don’t necessarily view this as a technology. We anthropomorphise it. I worry very much that people think AI is alive. It’s just a technology that’s really good at mimicking how we work. It’s not thinking. It’s not contextualising. It doesn’t feel anything. But, the way it communicates mimics our behaviour.”

It’s one of the things that Chowdhury most wants people to know about AI. “It’s not magic. It is simply built on the data that we have created. So the upper limit of imagination and creativity potential for AI is the same as the upper limit of the data we’ve given it. Human beings are able to do things that are completely novel, new and different. AI cannot. That’s something incredibly special about us as a species, and not something we can build into a technology.”

The upper limit of imagination and creativity potential for AI is the same as the upper limit of the data we’ve given it. Human beings are able to do things that are completely novel, new and different. AI cannot.

- Rumman Chowdhury -

Here, Chowdhury shares the importance of demystifying AI, its role in her own life, and how we should be working towards a positive AI future.

How do you use AI, personally and/or professionally?

If you had asked me this question three months ago, the answer would have been that I don’t really use AI. But in the last few months I have started using Google NotebookLM. You can give it a bunch of articles or pieces of information, and you get a personalised chatbot that you can ask anything about the content in the documents you gave it. I use it as a tool to digest information that’s quite dense.

AI is used in many ways to make decisions for us, but that’s the only way I use it by choice. I like to go home and read books. The most impactful technology in my life is my Kindle Paperwhite.

If you could go back in time, what would you change or do differently with regard to how AI has developed?

I would focus more on the science of understanding how AI models perform. And I would change the narrative of how we talk about AI so that it is not a proper noun; I would do more to stop the anthropomorphising of AI. So much of the hype and so many of the issues around AI today are built around this mystique that AI is somehow an abstraction, when it is a product. It is a technology. It is built by humans. If someone should be accountable, humans should be accountable.

Is it too late to do that?

I actually think that as people interact with AI more, it demystifies it for them. One of the goals of our red-teaming exercises is to give people critical thinking skills to look at the technology that’s being built and say, ‘Oh, this is just a chatbot. It’s not alive. It’s just interacting with me. It sounds like a human, but that doesn’t mean it’s human’. So actually, I think that some of the practices that I’m working on help educate people to discern if something is good or bad quality, if something is good or bad information, if something is real or fake.

Above: In June 2024, Chowdhury gave a TED talk advocating for the right to repair AI systems

What are your priorities for the future of AI? 

My priorities are making sure that everyone’s voice is heard in how these AI models are built. I think it is also important for us to be able to opt out of it being used. What are ways in which we could say, I don’t really want this in my life, or I want some oversight into how this is built? I gave a TED talk earlier this year where I advocated that everybody needs a right to repair their AI systems. It’s such a strange concept because we don’t have that with most of our technology. We don’t have a right to repair our iPhones. But, shouldn’t that be the case if these things are deciding if you get a job, if you get a loan, where your kids should go to school? Don’t you have a right to say something’s not working and it should be fixed? 

How do you most want to see AI used?

I think there’s this vision, which CEOs perpetuate, that AI is going to solve all these big problems in the world, whether it’s climate change, disease, ageing… But in that language, it sounds as if AI has free will and takes action. None of that is true.

I would love to see more intentionality in achieving a positive AI future. What we have today are a lot of dystopian narratives, and as a result we don’t have the imagination to understand what a positive AI future looks like. If everything we imagine about AI is [television show] Black Mirror, and if every version of the world is a dystopia where AI is evil and wants to kill us, then even if that’s not the world we want, it’s the world we build for, because it’s the only one we’ve imagined.

So, what I’d love to see is not just us working towards a positive AI future, but almost working backwards: defining what a positive AI future is. What does it look like? Maybe people can opt out and still be functioning members of society. That is a characteristic we would have to want to see, and then work backwards and build towards that future. So, if you wanted a future in which you could opt out of AI but still get jobs and meet people, then, by design, we have to think about that now. And we’re not.

Front & Female Changemakers celebrates the extraordinary journeys of inspiring women who have emerged as powerful changemakers in a range of fields, offering a glimpse into their lives and showcasing their courage, vision and relentless pursuit of change and progress. From social entrepreneurs and business leaders to educators, artists, activists and scientists, Front & Female Changemakers exemplify the ability to challenge the status quo and demonstrate the power of women to effect change.
