Nonprofits

NYU McSilver’s first AI chief on using the innovation for nonprofits

Minerva Tantoco discusses making artificial intelligence a positive for all communities.

Patrick Nugent, Courtesy of RMA Associates

Minerva Tantoco arrived earlier this year at the McSilver Institute for Poverty Policy and Research at New York University to serve as its first artificial intelligence officer. Previously, she was New York City’s first chief technology officer, serving in Mayor Bill de Blasio’s administration from 2014 to 2016. As the city’s CTO, she helped launch initiatives such as LinkNYC, which provided New Yorkers with free public Wi-Fi across the five boroughs.

Outside of public service, she has worked at companies including Merrill, UBS, and Palm. She also serves on the board of trustees of the New York Hall of Science and on the technology council of the Alliance for Innovative Regulation, among other boards and committees.

Tantoco spoke with NYN Media about how nonprofits can use technology and innovations like AI to produce optimal and ethical outcomes for all.

This interview has been edited for length and clarity.

Why do you believe there is a need to emphasize or highlight ethics in the tech industry?

There is a belief that technology is neutral, or that somehow existing laws and ethics don't apply to it (e.g., car service regulations [not] applying to Uber, or social media sites [not] thinking they're responsible for the content their users post; now we're seeing the impact of misinformation and hate speech on social media). From my perspective, software is not exempt from that, and technology platforms are not exempt from that. What we're finding now is that automation and algorithms have some unintended consequences, including biased results. The misuse of technology can cause harm, and bias can be baked into the design. Bias, from my perspective, is a bug: it's the software not doing what it's supposed to do. So when I think about ethics in the tech industry, it really is about how we look at the impact of what technology is doing on the people who use it, on the people it's impacting.

In your experience, have you observed any differences in the uses for AI between private companies and the public sector? 

Yes, there are differences. Generally speaking, [in] the private sector, AI is motivated by commercial needs: making money or saving money. In the financial sector, for example, AI is used to improve regulatory compliance or to detect financial crimes. The public sector has a real opportunity to apply AI techniques to meet the needs of the public. In other words, the public sector is motivated not by commercial needs but by the public good: things like safety and health outcomes, measuring the effectiveness of policies and regulations, making better decisions about where to put services, making internal operations more efficient, and delivering public services more effectively. The difference is commercial versus public good. Obviously, it's not a hard line between the two, but there's an enormous opportunity for the public sector, governments, nonprofits, and academic institutions to use AI for good.

How did your previous experience as New York City’s first chief technology officer prepare you for your role as chief AI officer at NYU McSilver, particularly in addressing equity, poverty, and the connections between race and public health?

I spent decades in technology and finance before becoming the first CTO of New York City. What I realized when I became part of the government [was] that as more government services became digitized, it became clear that the digital divide would have a very negative impact on underserved and underrepresented communities. Communities of color [and] those impacted by poverty would not have access to an ever more digital world. While I was CTO, we initiated the first major free public Wi-Fi service through LinkNYC. Then we launched CS for All, bringing computer science courses to all public schools. I'd say back in 2014, we knew there was a tech divide [in New York]. But this was totally laid bare during the pandemic, when attending a “free public school” actually required [an at-home] computer and internet access. Access to services (such as health services) during the pandemic required technology. [This] really did prepare me for how equity in technology is directly connected to poverty and has an enormous impact on public health.

What impact has your upbringing and background had on your career so far and how do you plan on bringing it with you into your new role?

Now, you've hit on something I'm so passionate about! I was born in the Philippines and emigrated to New York with my family when I was four years old. I learned English from Sesame Street and went to New York City public schools all the way through. I've benefited greatly from my career in technology and the opportunities I got growing up. I was also inspired by my family, where the women and men are doctors and engineers and lawyers and entrepreneurs. In fact, my mother learned programming in her 40s. [I come from a] strong family background in medicine, technology, and public service, so to me, there's real potential for technology, artificial intelligence, and automation to tackle some of our most complex problems. It's proven that diverse and inclusive teams, with diverse perspectives, produce better software and better results. We need more diversity in tech, and I'm privileged to bring that perspective into my new role. It's an honor to work toward a better future for all at NYU McSilver.

As a woman of color and a woman of many firsts in your field, how do you view your role as a leader in your industry, particularly in the eyes of young, up-and-coming female technologists of color?

When you look at how technologists are depicted on TV and in the news, it can feel like this is not a career for a young woman of color. But young women of color should know that careers in technology, [especially] data science and cybersecurity, are incredible and rewarding. Don't let the stereotypes stop you from exploring your interests and creating your own path. Don't limit yourself just because someone has not done it before. That just means you'll be the first.

What do you hope readers will gain from your upcoming book, “Ethical AI: A Practitioner’s Guide”?

Writing this guide was inspired by a speech I gave about tech for good at MIT back in 2019. Afterwards, I talked about the concept of fairness in algorithms, and one of the hackathon participants asked me, “What is fairness?” And I realized there was a gap where designers and makers of technology could benefit from an ethical framework for their work. I also realized that those using AI, whether they be in the private sector or the public sector, or legislators working on policy and regulation, may not have the same understanding of how algorithms work and, most importantly, how to test them for fairness. If you have this idea that bias is a bug, how do you test for that bug? The book serves as a guide for both sets of practitioners on what they should be thinking about, and [assists in] evaluating the ethical impact of both creating and using AI.

For nonprofit organizations that are unfamiliar with AI or have limited experience with it, and that lack an understanding of its positive and negative effects, what would you advise them to do to become familiar with AI and adopt an ethical approach toward it?

Nonprofit organizations can greatly benefit from data science and AI if they use it appropriately, evaluat[ing] the inclusiveness and fairness of the data sets that were used to train the AI, since many data sets [are] not gathered with that in mind. Those biased data sets will produce biased results. Facial recognition is a perfect example of that: [it is an] AI based on recognizing faces, but [if the] test faces were all white men, it produce[s] incorrect results for women and people of color. That's what I mean about the data really impacting how effective the software is. If we start using that in the real world, guess what? It's going to misidentify women and people of color, and that could have [a] disastrous impact. Another thing nonprofits can benefit from is sharing information and data amongst themselves, and working with tech councils and academic institutions to become more familiar with AI [and] educate themselves on what it is. In the private sector, some companies are actually creating new roles specifically around ethics and AI. The key thing that nonprofits, in particular, [can do] is make sure community voices are heard and that we convene, so that we can educate not only ourselves but each other, and the makers of those technologies, on the concerns of the communities they're going to be used in.

As you were talking about community awareness with equity and poverty in mind, do you feel as though there is a gap between the AI/tech community and the general public?

I think many of the communities that nonprofits serve aren't even aware of how algorithms are impacting them. [I’ll] give you an example: if you apply for a job or apply for a loan or credit card, algorithms were used to decide that, right? It's already been found that in several instances, AI algorithms used for job seekers turned out to be biased because of the patterns used; those algorithms actually removed names that didn't fit the pattern. New York City has passed legislation that requires an audit of these job search algorithms, and that's just one example. So I think nonprofits can play a big role in not only raising awareness among the communities they serve but, on the other side, also representing those communities to the folks who are making and using technology. Community input is absolutely a great place for nonprofits to provide this bi-directional awareness.

In the spirit of lifelong learning, is there anything you would like to learn more about relating to your industry? A new niche you’d like to explore or research?

I went into college as a pre-med, then started touching computers and realized that's what I wanted to do with my career. Up until that point, I had been doing science projects and genetics projects and internships at hospitals and things like that, [before] my career took me to tech and finance. Now, I'm bringing that back in a way, because much of what I learned in tech and finance can be applied to issues of poverty, health, and racial inequity. It's these intersections that really fascinate me, so I'd love to dig deeper into how new approaches to public health, mental health, and poverty studies can be reimagined with the new technology tools we can bring to them, and I hope to convene different groups to explore those intersections.