
Published on September 16th, 2016 | by Andrew Kitchel


Seven Years After Bernie Madoff, Congress Still Has Not Fixed Financial Data Reporting Requirements

By defrauding thousands of his firm’s clients out of billions of dollars, Bernie Madoff operated the largest Ponzi scheme in history, aided by the disorganized and inconsistent financial reporting methods of the U.S. government. Over a 16-year period, the Securities and Exchange Commission (SEC) and other federal regulators inspected Madoff’s firm eight times without finding evidence of misconduct; in some cases, separate offices examined the firm simultaneously without knowing of each other’s investigations. More than seven years after this fraud was uncovered, federal regulators have made little progress in addressing the problems that allowed Madoff’s scheme to go undetected for so long: obsolete reporting methods, inconsistent data standards, and poor communication between agencies. To resolve this, Congress should require financial regulators to collect and report data using standardized electronic fields and machine-readable formats, and to make this data readily available to all federal financial regulators.

Established in 2010, the Financial Stability Oversight Council (FSOC) brings together nine U.S. federal financial regulators to address concerns over regulation and oversight in the wake of the Great Recession. Unfortunately, these agencies operate without data standards for compiling and publishing the information they collect from the entities they regulate. As a result, agencies cannot easily share this information with one another, because their data fields differ and they identify regulated entities inconsistently. These inconsistencies exist within single agencies as well. For instance, the U.S. Commodity Futures Trading Commission (CFTC), which regulates the futures and swaps markets, cannot efficiently identify risk in these markets because it does not require regulated entities to report data in a standard format.
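To illustrate the identification problem, consider a minimal sketch. Suppose two agencies record the same firm under slightly different name strings: a join on those names finds nothing, while a join on a shared standard identifier, such as the Legal Entity Identifier (LEI) already used in parts of the financial industry, matches the records immediately. The firm, names, identifier, and figures below are all invented for illustration.

```python
# Illustrative only: invented records showing why a shared identifier matters.
# Two agencies describe the same (hypothetical) firm with different name
# strings, so a name-based join fails; a standard identifier matches cleanly.

sec_records = [
    {"name": "Acme Capital Mgmt LLC", "lei": "ZZ9900EXAMPLE0000001", "assets": 5200},
]
cftc_records = [
    {"name": "ACME Capital Management", "lei": "ZZ9900EXAMPLE0000001", "swaps": 310},
]

# Join on the free-text name: no matches, because the strings differ.
by_name = [
    (s, c) for s in sec_records for c in cftc_records if s["name"] == c["name"]
]

# Join on the standard identifier: the same firm is matched immediately.
by_lei = [
    (s, c) for s in sec_records for c in cftc_records if s["lei"] == c["lei"]
]

print(len(by_name))  # 0 matches by name
print(len(by_lei))   # 1 match by identifier
```

The point is not the code itself but the data design: once every agency tags each entity with the same identifier, cross-agency analysis becomes a routine join rather than an error-prone name-matching exercise.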

Moreover, few agencies publish information in a machine-readable format, instead preferring to collect and publish information as PDF documents, which makes the data difficult to extract. If agencies published machine-readable data, it could be compiled and analyzed far more easily, as is already the case for the many financial institutions that file using the XBRL format.
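As a sketch of why this matters: data published in a tagged, structured format (XBRL filings, for example, are XML-based) can be parsed and aggregated in a few lines of code, whereas the same figures locked inside a PDF must be scraped or rekeyed by hand. The element names and values below are simplified stand-ins, not real XBRL, which involves taxonomies, contexts, and units.

```python
# Simplified sketch of reading a tagged, machine-readable filing.
# Real XBRL is far richer; these element names and values are invented.
import xml.etree.ElementTree as ET

filing = """
<filing>
  <Assets>5200000</Assets>
  <Liabilities>3100000</Liabilities>
</filing>
"""

root = ET.fromstring(filing)
assets = int(root.findtext("Assets"))
liabilities = int(root.findtext("Liabilities"))

# Because every figure is tagged, derived values can be computed directly.
equity = assets - liabilities
print(equity)  # 2100000
```

Extracting the same two numbers from a scanned or even a text-based PDF would require layout-aware scraping and manual validation, which is exactly the cost that standardized, machine-readable publication eliminates.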

Finally, because agencies do not sufficiently share the data they collect with other government agencies, they do not coordinate on what information each collects. As a result, reporting forms overlap and agencies often collect redundant data; the SEC, FDIC, and Federal Reserve, for example, each collect the same public financial statement information via different forms. Duplicative reporting requirements are unnecessarily costly, especially for regulated entities, which must navigate hundreds of different forms and comply with multiple requests for the same data.

These problems make it difficult for federal agencies to regulate the industry and for individuals to access and use regulatory data for investment, research, or analysis. Having access to all of the regulatory data collected about the private sector in a standardized, machine-readable format would allow regulators to more easily use data analytics to better understand trends and issues in the industry.

The solution to these problems comes in two major parts: standardize the data and publish it as open data. The first part requires that FSOC promulgate a consistent data standard, including a format that is both human- and machine-readable, so that records can be compiled and verified. The second part would make the data readily available to all regulatory agencies, financial institutions, investors, and others, thereby making it easier to access and analyze and increasing oversight and accountability. In addition, better coordination would reduce the burden on both regulators and regulated entities by eliminating duplicative data collection and increasing the transparency of financial information.

Unfortunately, federal financial regulators cannot implement these reforms simply or quickly. They amount to a major overhaul requiring the cooperation of nine federal agencies and the entities they regulate. However, the federal government is already laying the groundwork for such changes and has made strides in its commitment to data innovation in the service of better governance. Examples include the passage of the DATA Act, the recognition that big data can be an instrument for social good, the new Evidence-Based Policymaking Commission, and recent efforts to replace the proprietary data standards used in federal procurement with open standards. Moreover, the current system is not only inefficient but unsustainable: without substantial change, financial regulatory agencies will continue to inadequately regulate a rapidly innovating financial industry.

This is why Congress should pass the Financial Transparency Act (FTA). Introduced by Rep. Darrell Issa (R-CA) and cosponsored by 35 representatives from both sides of the aisle, the Financial Transparency Act addresses the two major aspects of the proposed solution. The FTA directs the Treasury Department’s Office of Financial Research to set data standards for member agencies of the FSOC that are human and machine readable, nonproprietary, and consistent. In addition, it directs these agencies to publish all data in an open and searchable format.

In order to meet the challenges of an increasingly technological society, the agencies that regulate financial institutions need to overhaul their data standards to best oversee the industry and increase accountability. Congress should pass the Financial Transparency Act as soon as possible to best equip financial regulators to do their jobs efficiently and effectively and stop the next Bernie Madoff.

Image: U.S. Department of Justice



About the Author

Andrew Kitchel is a graduate policy fellow at the Center for Data Innovation. He is also a full-time Master of Public Policy student at the McCourt School of Public Policy at Georgetown University. Andrew is passionate about energy, science, and technology policy and the use of big data to drive evidence-based initiatives in these policy areas. His background and studies include data analysis, survey methodology, and political campaign work. Prior to coming to the Center for Data Innovation, he worked as a survey statistician at the U.S. Census Bureau. Andrew holds a B.S. in biology from the University of Puget Sound and a B.S. in political science from the University of Oregon.


