Creating Systems That Can Diagnose Heart Conditions Using Videos

Researchers from Stanford University have released a dataset of 10,000 labeled echocardiogram videos to advance the development of machine learning systems that can analyze videos in clinical settings. The videos come from patients who underwent echocardiography between 2016 and 2018 at Stanford University Hospital and include labels such as the volume of blood in a ventricle after the heart contracts. AI systems that can analyze echocardiogram videos could help diagnose conditions that make it more difficult for the heart to pump blood.
Michael McLaughlin is a research analyst at the Center for Data Innovation. He researches and writes about a variety of issues related to information technology and Internet policy, including digital platforms, e-government, and artificial intelligence. Michael graduated from Wake Forest University, where he majored in Communication with minors in Politics and International Affairs and Journalism. He received his Master's in Communication from Stanford University, specializing in Data Journalism.