6 January, 2017
How Hadoop Helps Experian Crunch Credit Reports
Experian is crunching massive amounts of data and making it available to customers quickly, thanks to open source software along with microservices and API technologies.
Experian has implemented a new data analytics system designed to shrink from months to hours the time it takes to process petabytes of data from hundreds of millions of customers worldwide. The information services company is deploying the software, a data fabric layer based on the Hadoop distributed processing framework, in tandem with microservices and an API platform, enabling both corporate customers and consumers to access credit reports and information more quickly.
“We believe it’s a really big game-changer for customers because it gives them real-time access to information that they would normally have to wait for as it was ingested,” says Experian CIO Barry Libenson.
Once an open source tool used mainly for piloting big data projects, Hadoop has become a necessary component of many analytics strategies as CIOs seek to make information-based products and services available to customers. The technology uses parallel processing techniques to help software engineers churn through large amounts of data more quickly than SQL-based data management tools.
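The parallel processing idea behind Hadoop can be sketched in miniature: data is partitioned, each partition is processed independently (the "map" step), and the partial results are merged (the "reduce" step). The toy word-count below is a hypothetical illustration of that split-apply-combine pattern, not a depiction of Experian's actual system or of Hadoop's own APIs.

```python
from collections import Counter
from multiprocessing import Pool


def map_chunk(lines):
    """Map step: count words within one partition of the data."""
    counts = Counter()
    for line in lines:
        counts.update(line.split())
    return counts


def reduce_counts(partials):
    """Reduce step: merge the per-partition counts into one result."""
    total = Counter()
    for partial in partials:
        total.update(partial)
    return total


def parallel_word_count(lines, workers=4):
    # Partition the input into roughly equal chunks, one per worker,
    # then map over the chunks in parallel and reduce the results.
    size = max(1, len(lines) // workers)
    chunks = [lines[i:i + size] for i in range(0, len(lines), size)]
    with Pool(workers) as pool:
        partials = pool.map(map_chunk, chunks)
    return reduce_counts(partials)


if __name__ == "__main__":
    data = ["credit report data", "credit score data", "report data"]
    print(parallel_word_count(data)["data"])  # "data" appears once per line
```

Because each partition is processed independently, work scales out across cores here and, in Hadoop's case, across whole clusters of machines, which is what makes petabyte-scale jobs tractable.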