
Comments to NIST on Explainable AI

by Hodan Omaar

The Center for Data Innovation has responded to a request for comment from the National Institute of Standards and Technology (NIST) on its draft white paper, “Four Principles of Explainable Artificial Intelligence,” which seeks to develop principles encompassing the core concepts of explainable AI. In these comments, the Center explains that the accuracy and reliability of an AI system are likely to matter more to user trust than explainability. Moreover, while trust is useful, it is not the only factor that influences AI adoption; consumers generally care more about price and quality when making purchasing decisions.

The Center recommends that NIST clarify the multiple factors that affect trust, particularly accuracy. Further, NIST should note the relative dearth of empirical data quantifying the degree to which explainability affects user trust in, and adoption and acceptance of, AI technologies. Finally, because developers lack the context-specific knowledge to know what will cause harm in a given domain application, NIST should revise its suggestion that systems should be responsible for assessing when they are likely to cause harm.

Read the filing.
