Seemingly from nowhere, “Robotic Process Automation” has become one of the must-have technologies for businesses that operate at scale. Despite being one of a set of tools that have been around for years, it has had a serious makeover with the sprinkling of some Artificial Intelligence fairy dust and a new-found love for the ‘bot’ in business.
There are a few case studies around. In 2015 the London School of Economics produced some research into an implementation at Telefonica O2. This suggested a 650%-800% ROI in three years - who wouldn’t want that result?
It’s undoubtedly true that many back-office tasks are repetitive and costly. Where systems don’t have suitable interfaces for integration, data is often double-keyed into multiple systems. How many times as consumers do we end up giving the same information about ourselves to different parts of the same organisation (especially government!) because of organisational, process and technological silos?
If this re-keying can be accomplished by an automated script, it will obviously be faster than a human operator, and once it’s working well it can be scaled more or less ‘for free’ to meet peaks in demand. This is a simplistic view, of course. Many of the things we take for granted in human operators - the ability to notice unusual or erroneous information, or to resolve spelling and context errors (understanding that New Quay and Newquay could be the same place if they’re working on data from Cornwall, but not the same place if the data also includes Wales) - are not present in automated processes unless very explicitly expressed. Machine Learning tools are beginning to improve matters here, but it’s very early days for those.
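To make the point concrete, here is a minimal sketch of what “very explicitly expressed” means in practice. The function name, the place names and the region flag are all illustrative, not taken from any real RPA product: the contextual knowledge a human operator applies for free has to be hand-written as a rule, one case at a time.

```python
# Hypothetical sketch: an automated re-keying script has no notion of
# geographic context unless someone writes an explicit rule for it.

def normalise_town(raw: str, dataset_region: str) -> str:
    """Resolve a spelling variant that a human operator would catch implicitly."""
    town = raw.strip().lower()
    # 'New Quay' (Wales) and 'Newquay' (Cornwall) are different places,
    # so this merge is only safe when the data set is known to be
    # Cornwall-only - a context flag the script must be told about.
    if town in {"new quay", "newquay"} and dataset_region == "cornwall":
        return "Newquay"
    return raw.strip()

print(normalise_town("new quay", "cornwall"))            # safely merged
print(normalise_town("New Quay", "wales_and_cornwall"))  # left untouched
```

Every such rule has to be discovered, written and maintained - which is exactly the hidden cost the simple business case tends to ignore.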
The LSE study is interesting in many ways. One that stands out is the clear antagonism between IT and Business Operations - reading between the lines there is an evident frustration at what was seen as a condescending and negative approach by the IT team, a “not invented here” mindset.
This is definitely an issue in many IT teams. I lost count of the number of surveys and panels involving IT leaders bemoaning ‘shadow IT’ as the biggest risk or problem they faced. The same people would also gripe about the lack of innovation in their company. My take on it now is exactly the same as it was then: shadow IT is what happens when the IT department is not talking to the business in the right way. This is a structural problem that has to be addressed. In the LSE example, a comparison was set up between the IT department’s method of automation and the RPA vendor’s approach. The RPA way won the day, and the cost difference came down to the IT resource the former required. The RPA project didn’t suffer from these costs, as
“The Head of Back Office would just reassign some people from a process improvement team to a process automation team with zero effect on the Back Office budget.”
I love companies that can reallocate people from one task to another with ‘zero effect’. Doesn’t say much for what they were doing beforehand, does it?
So there are often organisational issues that can prevent a sensible adoption of these tools - or, looking at it from the other side, can prevent people looking at the bigger picture because they fear their Big Idea will be shot down by an IT team. When IT teams are sidelined in this way, very important issues can be missed. For example, problems occur with RPA when Software-as-a-Service (SaaS) interfaces are changed. Such changes should be communicated and tested, but if the RPA technology has been implemented without the knowledge of the IT team that most often runs those change processes, they are missed. This can introduce data quality problems, or a complete breakdown of a process that may take days or weeks to resolve. That is a thorny issue if the company has let go of all the staff who could have taken up the strain.
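The failure mode is easy to illustrate. The sketch below is hypothetical (the screen labels and bot logic are invented, not from any real RPA tool), but it captures the mechanism: a bot that locates data by on-screen label stops working the moment the SaaS vendor renames that label - and if nobody in IT knew the bot existed, nobody tested for the change.

```python
# Hypothetical sketch of why an unmanaged SaaS change breaks an RPA bot.
# The bot finds a value by its on-screen label; the vendor renames it.

def scrape_field(screen: dict, label: str) -> str:
    try:
        return screen[label]
    except KeyError:
        # Without IT-managed change processes, this is discovered in
        # production, possibly days or weeks after the interface changed.
        raise RuntimeError(f"RPA bot broken: expected field '{label}' not found")

old_screen = {"Customer Ref": "C-1042"}
new_screen = {"Customer Reference": "C-1042"}  # vendor renamed the label

print(scrape_field(old_screen, "Customer Ref"))  # works today
try:
    scrape_field(new_screen, "Customer Ref")     # fails after the release
except RuntimeError as err:
    print(err)
```

The subtler (and worse) variant is when the renamed field still exists but now holds different data, so the bot carries on running and quietly corrupts records instead of failing loudly.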
If we’re honest with ourselves, we know that business processes are often poorly documented, and this can lead to false expectations of RPA. If I had a pound for the number of times I’ve found that a field in a system is being used for different reasons by different departments, I’d have, well, several pounds. Day to day this isn’t a problem, because it’s being managed by people who can recognise it. Introduce an RPA tool (especially through a third party that doesn’t know the process) and it can blow up in your face.
There are some good use cases for RPA. Legacy systems with an existing retirement plan that do not have any API or interoperability capabilities can have their lives extended or the cost of ownership reduced by a tactical introduction of RPA. These systems are unlikely to undergo changes to their UIs and the data sets are usually well understood. But again this is not a simple business case. It is not simply a matter of removing a swathe of manual operators. Governance and quality management must be introduced to ensure the robotic process is achieving its objectives.
The bottom line is that cost savings from RPA quietly accrue further technical debt that incurs much greater costs in future. The business cases often assume the removal of a large number of operators in a very short space of time, and they can be manipulated - as in the O2 example, where people were reassigned at ‘zero’ cost. Wider issues such as security and business continuity are rarely considered; each of those RPA considerations could be the subject of a blog post of its own.
My conclusion is that RPA is not a replacement for a digital strategy that builds architectures in systems and data that allow interoperation between platforms and understands the value chain of an organisation. It is a useful sticking plaster, but care should be taken - temporary structures have an alarming habit of becoming permanent.
There are undoubtedly some interesting developments in the machine learning field that will improve this tech, so it remains one to watch.