
5 Q’s for Tess Posner, CEO of AI4All

by Joshua New

The Center for Data Innovation spoke with Tess Posner, CEO of AI4All, a San Francisco-based education nonprofit focused on increasing diversity and inclusion in AI. Posner discussed why diversity matters so much for AI, and what educators and policymakers should consider as they help prepare workers for an AI-driven economy.

This interview has been lightly edited.

Joshua New: Can you explain AI4All’s mission? Why are diversity and inclusion important for artificial intelligence?

Tess Posner: AI4All’s mission is to increase diversity and inclusion in artificial intelligence. We think this is really critical for several reasons. One is that AI is such an important technology and we’re just starting to see its impact as adoption increases. A recent Gallup poll showed that 85 percent of Americans use AI already. So we know this impact will be profound and we’re only at the tip of the iceberg. At the same time, there is what we call a diversity crisis in computer science, and in AI in particular. A recent study showed that only 13 percent of AI and machine learning CEOs are women, and a new report from Melinda Gates and McKinsey on the gender gap in computer science showed that in grades K through 12, almost 50 percent of girls are still interested in computer science. But only 19 percent of computing degrees are awarded to women, and women make up only 7 percent of leadership roles in the field. It’s even worse for women of color: Hispanic women make up 1 percent, and black women make up less than 0.5 percent, of technology leadership roles.

This really is a crisis that we’re facing and it poses several problems for AI. The first is that diversity is good for business. It’s good for innovation, it’s good for growth, and a lot of research shows that diversity leads to better functioning, more profitable companies and teams. There’s new research that shows if we include women, people of color, and low-income people in the innovation economy, the rate of innovation could quadruple. So from a business perspective, diversity and inclusion should be an absolute priority, especially for technologies like AI where we don’t want to miss out on untapped talent.

For AI specifically, a lack of diversity poses a greater danger. For example, we’re seeing societal biases like sexism and racism creeping into AI and machine learning systems. Research shows that some of the most widely used facial recognition systems are substantially more accurate for white male faces and less accurate for women and people of color, women of color especially. Because these systems can control key decision-making, such as who gets approved for a loan or access to parole, this is really concerning. If we don’t have diversity on the teams building and testing these systems, we are not going to be able to address questions about bias effectively.

New: Many areas of STEM education have significant demographic disparities due to challenging cultural reasons such as a lack of representation. Does that hold true for AI, or are there other causes?

Posner: I think the same problems that affect STEM as a whole certainly apply to AI. Homogenous cultures in the field, direct issues of discrimination, a lack of exposure to technical concepts early on and the sense of discouragement from pursuing technical education, a lack of relatable role models, and plenty of other systemic barriers are all still there. But I think AI also has an image problem. The way it’s talked about in the media portrays it as really exclusive, which is something we hear a lot about from our students who are encountering AI for the first time. The impression is that you need to have a PhD from an elite university or be at the absolute top of your field to be part of this technology, or even to start thinking about it. The current experts that are lifted up are a pretty homogenous group of these kinds of individuals. This paints a picture of AI that is exclusive, but on top of that, we see these narratives in the United States about The Terminator, about AI taking away jobs, and other very overhyped concerns that paint an even more dismal picture. So a lot of people get discouraged, both because AI seems exclusive and because of the notion that AI is dangerous and scary, and they think, “why would I want to be a part of that?”

We work really hard both to lift up diverse experts and role models and to focus on using AI for good. These tools can be dangerous, but they can also be used to solve important problems and enhance human capabilities to improve outcomes for ourselves and the planet. One of our students said after our program that “you don’t have to be an AI expert to know that in this day and age, we are in the middle of something almost magical, infinitely creative, and beautifully applicable in a variety of settings.” That really demonstrates the kind of attitude that we can have if we take a more human-centered, inclusive view of this technology.

New: Why do AI4All’s education programs target high school students? Why is AI education so important at that stage?

Posner: It’s important to address different stages of the career trajectory for AI because we see drop-offs at each stage. We decided to target high school based on research showing that underrepresented populations already start to get discouraged from going into STEM fields around age 15, in high school. Engaging students at this early stage, cultivating their interest, exposing them to these technologies, and connecting them with mentors is really important. We see that after our programs, 90 percent of our students leave wanting to go into the field of AI. It’s incredible what happens when you start at these younger ages.

We also have two other initiatives that aren’t on our website yet. We’ve seen that our students are so passionate about AI that they actually educate their peers and younger students. On average, every student we educate through our program goes on to educate 14 more of their peers. So we fund grants of up to $1,000 to support these efforts so they can start clubs and do additional outreach.

We also recently announced a new open learning program, which takes what we’ve learned from our AI summer camp and creates a curriculum that’s hosted online for free. It covers all the basics for people who are totally new to AI to develop AI literacy. It also focuses closely on the social and ethical implications, as well as building skills. We want this knowledge to be as accessible as possible, both because it’s a gateway to high-growth, high-paying careers and because this literacy is important for AI users. We’re releasing our open curriculum program early next year.

New: Have AI4All’s efforts led to any interesting developments in AI research, or is it too early to tell?

Posner: We have numerous examples of our students doing amazing things with AI even though they’re still in high school. Some of our students have even won awards at the NIPS conference, which is one of the biggest AI conferences. We run summer camps hosted in partnership with AI labs at different universities, including Stanford, UC Berkeley, Carnegie Mellon, Princeton, and others. Students learn pretty rigorous technical AI skills and get to work on projects using AI for good. One of our students became interested in criminal justice reform and this summer worked as an intern researching algorithmic decision-making in the criminal justice system. That’s really impressive for someone who is still in high school; she’s just 15 or 16 years old. Another student worked on a project using machine learning to detect multiple sclerosis (MS) lesions in MRI scans to help radiologists more effectively diagnose MS. She first got interested in AI at our summer camp, got to do research at the Quantitative Imaging lab at Stanford, and presented her research at a major bioinformatics conference this year. Our students are amazingly talented and are already making waves in the field. We’re very excited to see what they do next.

New: The United States faces a much broader challenge of ensuring that its future workforce has the skills necessary to work with AI and remain competitive. Based on what you’ve seen in AI4All’s work, what would you recommend educators and policymakers focus on to address this challenge?

Posner: There are several different ideas that I would recommend. First, AI is becoming more prevalent, so AI literacy is going to be more and more important. You need to be able to understand how these systems work well enough to work with them, since they will be incorporated into every industry. If you’re going into health care, for example, there are going to be so many different use cases for AI that you need to be aware of its potential issues even if you’re not an AI developer. AI4All is part of an effort called AI for K-12, a working group trying to develop standards for teaching AI in the K-12 education system. It’s one of the first efforts to try to understand how to incorporate AI into our schools.

On a higher level, we’re seeing rapid technological changes impacting our economy, and the shelf life of skills is decreasing. That means we need people who are adaptable, lifelong learners who can update their skill sets as the economy changes. I think we need to focus on “future-proof skills,” like the ability to teach yourself new skills, creativity, problem solving, and so on, because those will stay with you. Education can’t be something you think about only in the K-12 and university setting. We have a responsibility to update our education system to reflect this new environment and prepare people effectively.
