Before undertaking a project to improve operational processes, you need to fully understand the specific problem or task you want to improve. The word ‘specific’ in the previous sentence is important. We see a lot of time wasted by organizations that either have vague, ill-defined requirements or have bundled multiple steps into one, which complicates analysis and makes identifying automation opportunities difficult. Success comes from establishing measurable and achievable goals at the start.
As mentioned above, understand what data or action is being used to trigger the process.
There is a lot to consider when it comes to data. At Federos, we look at data as a multi-dimensional entity. The key dimensions we consider are listed below, followed by a short illustrative sketch:
Quantity (height) – Generally, the more data points you have, the more confident you can be that analysis and subsequent automation will achieve the required results. However, having a lot of poor-quality data is problematic (see Quality below).
Breadth (width) – Having a lot of data from one source is useful, but if you can bring in multiple related sources, your ability to perform more intelligent correlation increases significantly.
Quality (depth) – As mentioned above, having a lot of poor-quality data can cause problems. We always recommend that unmodified ‘raw’ data be the objective. Data that has already been modified by downstream systems or processes can make root cause analysis and correlation less accurate and, in some cases, cause them to fail entirely.
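As a rough illustration (not a Federos tool or API), the sketch below profiles an incoming event feed along these three dimensions; the field names ‘source’ and ‘raw’ are assumptions made purely for the example:

    # Hypothetical sketch: profile an event feed along quantity, breadth, and quality.
    # Field names ("source", "raw") are assumptions for illustration only.
    from dataclasses import dataclass
    from typing import Iterable, Mapping

    @dataclass
    class FeedProfile:
        quantity: int   # total data points (height)
        breadth: int    # distinct contributing sources (width)
        quality: float  # fraction of records still unmodified/raw (depth)

    def profile_feed(events: Iterable[Mapping]) -> FeedProfile:
        events = list(events)
        sources = {e.get("source") for e in events if e.get("source")}
        raw_count = sum(1 for e in events if e.get("raw", False))
        quality = raw_count / len(events) if events else 0.0
        return FeedProfile(len(events), len(sources), quality)

    # A feed with many events from a single source, most already modified,
    # scores well on quantity but poorly on breadth and quality.
    print(profile_feed([
        {"source": "snmp-trap", "raw": True},
        {"source": "snmp-trap", "raw": False},
        {"source": "syslog", "raw": True},
    ]))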
We recommend performing two separate analysis steps at this point. Now that you understand the input or trigger(s) for the process you are defining, examine what you are currently doing with the data (for instance, raising a ticket or producing an event or alarm) compared with what you would like to do with it.
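One simple way to capture this comparison is a per-trigger gap list. The triggers and actions below are hypothetical examples only, not recommendations:

    # Hypothetical gap analysis: for each trigger, what happens today versus
    # what you would like to happen.
    gap_analysis = {
        "link-down trap": {
            "current": "raise a ticket manually",
            "desired": "correlate with planned maintenance and auto-close if expected",
        },
        "disk-usage threshold": {
            "current": "produce an alarm",
            "desired": "run a cleanup job and escalate only if usage stays high",
        },
    }

    for trigger, actions in gap_analysis.items():
        print(f"{trigger}: today -> {actions['current']}; target -> {actions['desired']}")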
We are often surprised that, when thinking about implementing automation, organizations miss the opportunity to look for process improvements that increase infrastructure stability and availability. In many cases, processes have been in use for many years and, when we ask why a process is performed in a specific way, we often hear ‘it’s always been done this way.’ Now is the time to review whether the process can be improved. Here’s why doing so is important: you can automate an existing inefficient process and get some business value, but not as much as automating an optimized and efficient one.
It is also worth highlighting the importance of being able to manage and change the processing performed on your data at this step. Environments change and data is dynamic. A ‘closed’ or ‘black box’ solution that can’t easily be changed to keep pace with the environment does not make sense.
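As a minimal sketch of what ‘open’ processing can look like, assuming nothing about any particular product, the example below keeps enrichment and suppression rules in a declarative structure that operators can edit as the environment changes, rather than hard-coding them:

    # Hypothetical example: processing rules live in editable data, not in code.
    # Rule and event field names are assumptions for illustration only.
    import json

    RULES_JSON = """
    [
      {"match": {"source": "syslog", "severity": "info"}, "action": "suppress"},
      {"match": {"source": "snmp-trap"}, "action": "enrich", "add": {"team": "network-ops"}}
    ]
    """

    def apply_rules(event, rules):
        # Return the (possibly enriched) event, or None if a rule suppresses it.
        for rule in rules:
            if all(event.get(k) == v for k, v in rule["match"].items()):
                if rule["action"] == "suppress":
                    return None
                if rule["action"] == "enrich":
                    event = {**event, **rule.get("add", {})}
        return event

    rules = json.loads(RULES_JSON)
    print(apply_rules({"source": "snmp-trap", "severity": "major"}, rules))
    print(apply_rules({"source": "syslog", "severity": "info"}, rules))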