The US Department of State’s first science envoy for artificial intelligence, Dr Rumman Chowdhury, discusses driving responsible AI, understanding AI’s limitations and how we should be working towards a positive AI future
In 2018, Dr Rumman Chowdhury launched the groundbreaking Fairness Tool. This was a solution to identify and reduce bias in artificial intelligence (AI) systems and was an industry first at the time.
Chowdhury had joined Accenture the previous year as global lead for responsible AI. While there had been significant academic research on how to make AI models fairer, these solutions had not yet been implemented in the real world. That changed with Chowdhury’s tool, which demonstrated how organisations could correct for fairness and deploy more ethical and responsible AI.
“Simply put, [responsible AI] is ensuring that AI works for everybody, every human being, every perspective,” explains Chowdhury. “If, truly, we are going to usher in a global change, an economic change, people even say a political and social change, then it is our responsibility to ensure that people are not left behind, that no one is erased by this.”

Ensuring that AI is responsible would remain at the forefront of Chowdhury’s work. She went on to found her own startup, Parity, focused on building ethical AI solutions, before joining Twitter in 2021, prior to Elon Musk’s acquisition, as engineering director for the platform’s Machine Learning, Ethics, Transparency and Accountability (META) team.
“It was a powerful role within social media to determine whether or not the algorithms that we were using were biased, were in some way promoting misinformation, were in some way directing people towards radicalisation. It was quite a responsibility, and I enjoyed every minute of it,” says Chowdhury.
Needless to say, when Musk took over, Chowdhury was among the subsequent layoffs and the META team was disbanded. Twitter became X, and the platform began a rapid descent into what Chowdhury now describes as a “cesspool. Between crypto, light pornography and complete misinformation, I just feel like people are screaming all the time.”
That was 2022, and since then Chowdhury has been working on other ways to drive ethical and transparent AI. As well as consulting on responsible AI assessment and implementation through Parity, whose clients include the European Commission, the UK Office of Communications (Ofcom), DeepMind and Meta, she became a responsible AI fellow at Harvard University’s Berkman Klein Center and, in 2023, launched the non-profit Humane Intelligence. Through the latter she wants to “create the community of practice around algorithmic assessments. That means that I want to train technical people in the practice of auditing and understanding whether or not algorithms are working.”