Increased competition and changing customer expectations have organizations rethinking how they operate today. Speed and efficiency are more important than ever, and businesses are turning to technology to automate mundane tasks, reduce manual burden, and free up resources to focus on initiatives that can drive the business forward.
According to a 2017 McKinsey Global Institute report, one-third of the time spent in the workplace involves collecting and processing data – two tasks with high (60%) potential for automation. (This number is even higher for the financial services and insurance industries, where workers spend roughly half their time aggregating and processing data.) What’s more, it’s not just entry-level workers or clerks who spend time on manual entry; individuals making over $200,000 each year also spend around 31% of their time on data entry. Within financial services, for example, mortgage brokers spend as much as 90% of their time processing applications. With the right intelligent tools to automatically identify, classify, and extract data from those documents and applications, however, that number could decrease by 33%, freeing up employee capacity to focus on revenue-generating customer interactions.
While automation has great potential to transform how organizations think about work (as well as how people work), the rapid pace of technological change makes harnessing its power and avoiding its pitfalls particularly challenging.
We often hear from customers who have begun their automation initiatives, only to discover that a solution doesn’t perform as advertised or cannot integrate with their existing processes.
This Halloween, we’re uncovering the hidden monsters that may be lurking within your automation projects, so that you can steer clear of any unexpected roadblocks.
Beware unstructured monsters lurking in the shadows.
Most agree that an organization’s ability to mine and leverage its data will be a key differentiator in remaining competitive into the future. The vast majority of an enterprise’s data, however, remains trapped in paper forms, PDFs, images, and more, and is unavailable for practical use and analysis. According to IDC, 80% of data worldwide will be unstructured by 2025, which means the problem will only get worse. At Hyperscience, we regularly work with companies that have started automation projects, only to discover that a large portion of their processes involves document processing and data extraction. Without the right solution in place, they cannot move forward to address these challenges. RPA, for example, works well on rules-based processes with structured data inputs, but it needs a solution like Hyperscience to unlock and lift the data so that automation can happen.
Beware rules-based monsters that require pristine conditions to work.
Documents are messy and vary in layout, quality, and complexity. They can contain handwriting or cursive; they can be mailed or faxed into a central mailroom and scanned, or photographed by a smartphone and uploaded to a portal. These realities (e.g., low-resolution images, blank pages, or crossed-out lines on an application) need to be accounted for. While older, rules-based technologies might promise the same results as automation (better, faster, less expensive), they only work under perfect conditions or with specific, structured inputs, and fail when faced with skew, distortion, or handwritten text. This leaves you in the same position as before: manually identifying and correcting errors, and keying in information.
Before embarking on an automation journey, put a solution to the test: have it perform on real-world documents so that you can see for yourself whether it can reliably process handwriting, detect signatures and checkboxes, and so on. Prioritize robust solutions that can process documents as they exist in the real world.
Beware unsupervised monsters who make empty promises.
People aren’t perfect, and neither are machines. To solve challenging, real-world business problems, machines need supervision. Unlike stopping a werewolf, there is no silver-bullet solution for process automation. Once you accept that, the key is to understand how a solution involves people “in the loop” to drive performance improvements. Hyperscience, for example, has built-in quality assurance mechanisms to measure when it’s likely to be right, as well as when it’s likely to make a mistake, sending edge cases to data entry teams to review and resolve. This feedback fine-tunes the underlying model, leading to lower error rates and higher automation rates.
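The human-in-the-loop pattern described above can be sketched in a few lines. This is a minimal, hypothetical illustration – the function name, threshold, and data are our own assumptions, not Hyperscience’s actual API – showing the core idea: auto-accept high-confidence extractions and route the rest to a human reviewer.

```python
# Illustrative sketch (not a real product API): route each extracted field
# by model confidence, so low-confidence cases go to human review.

CONFIDENCE_THRESHOLD = 0.95  # assumed cutoff; in practice tuned per use case


def route_extraction(field_name, value, confidence):
    """Auto-accept high-confidence extractions; flag the rest for review."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return {"field": field_name, "value": value, "status": "auto_accepted"}
    return {"field": field_name, "value": value, "status": "needs_review"}


# Hypothetical output from a document-extraction model.
extractions = [
    ("applicant_name", "Jane Doe", 0.99),
    ("loan_amount", "25O000", 0.62),  # likely OCR confusion: letter O vs zero
]
results = [route_extraction(*e) for e in extractions]
```

The corrections reviewers make on flagged cases become new labeled examples, which is what allows the underlying model to be fine-tuned over time.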
In addition, it’s important to keep in mind that even the most advanced solutions require up-front work. Look for modern-day simplicity, but be skeptical of anyone who promises results without setup. Ask what is required to deploy a solution and get it fully operational. What hardware is needed? How does it fit within existing workflows? How easy is it for business users to work with? When tech takes months and hundreds of development hours to get new use cases up and running, bringing on new lines of business and scaling the benefits can be resource-prohibitive.