For a change (?) it is nice (?) to see stories about healthcare IQ Trainwrecks that don’t necessarily involve loss of life, injury, tears, or trauma.
Today’s Irish Examiner newspaper carries a story on the financial impact of poor quality data in healthcare administration. At a time when budgets for the delivery of healthcare in Ireland are under increasing pressure due to the terms of the EU/IMF bailout, it is essential that payment processes operate efficiently. It seems they do not:
- Staff who retired from one role and then re-entered the Health Service in a different role continued to be paid their pensions (HSE South)
- An absence of controls meant that staff on sick leave who were being paid pension entitlements continued to receive those payments after they returned to work (HSE South)
- Pensions were calculated on incorrect bases for staff who were seconded to or shared with other agencies (HSE South)
- Inaccurate data about the ages of dependents resulted in overpayments of death-in-service benefits (HSE South)
- “Inappropriate” filing systems were “needlessly incurring wastage of scarce resources” (HSE Dublin/Mid-Leinster)
Poor quality information costs between 10% and 35% of turnover in the average organisation. By that yardstick the HSE may not be too bad. But the pattern of failed controls and processes producing poor quality data, which in turn leads to financial impacts, is all too familiar.
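To make that 10%–35% range concrete, here is a minimal sketch of the arithmetic. The turnover figure in the usage example is purely hypothetical, not an HSE statistic:

```python
def non_quality_cost_range(turnover, low=0.10, high=0.35):
    """Estimate the annual cost of poor quality information as a
    fraction of turnover, using the commonly cited 10%-35% range."""
    return turnover * low, turnover * high

# Hypothetical example: an organisation with €100m annual turnover
low, high = non_quality_cost_range(100_000_000)
print(f"Estimated cost of poor quality information: €{low:,.0f} to €{high:,.0f}")
```

Even at the bottom of the range, the sums involved dwarf the individual overpayments reported in the article.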
“Poor quality information costs between 10% and 35% of turnover in the average organisation.”
Do you have a source for this? I am interested in learning more about the impact of poor information controls and this sounds promising. Was this claim based on public/government departments or private companies? When was this research produced? Is it still considered valid?
Bruce
This figure has been presented as the cost of non-quality by a number of authors and researchers in the information quality field over the past few years, and it is largely consistent with the cost of non-quality experienced in manufacturing. It is also backed up by anecdotal comment at information quality conferences globally over the past decade at least. I’ve been to dozens, and the only time the figure was challenged was by a manager from a government agency who considered the upper limit too low.
Larry P. English refers to it in his 1999 book “Improving Business Information and Data Warehouse Quality”. His 2010 book “Information Quality Applied” gives an exhaustive listing of the cost impacts of various information quality problems across a range of industries, including public sector.
In chapter 2 of his 2008 book Data Driven, Dr. Tom Redman cites a number of studies by IBM, Accenture, Gartner, and others relating to the costs of poor quality information, whether as a result of staff spending time searching for data and reconciling differing results (30% of time according to IBM) or the costs arising from incomplete data.
This article from Pitney Bowes cites Gartner research in 2009 that quantifies the cost of poor quality.
On the point of public vs private sector, I believe much of the research is private sector based, but anecdotally the hidden costs of non-quality data in the public sector tend to be in line with, if not slightly higher than, those in the private sector – this would be a question worthy of further in-depth research.
As for whether the findings are still valid, all I can say is that:
1) The underlying impact of poor quality information and data has not gone away. In fact, as organisations cut costs by downsizing staff numbers, the hidden costs of poor quality emerge as manual workarounds to data issues cease to operate.
2) Adoption of cloud computing solutions does not reduce the need to ensure quality of data. In fact, in many cases it increases the need to have data ‘fit for purpose’, as you may be passing data through APIs between different platforms to make use of different services.