Field Service Management Systems: Helping with data governance

When it comes to field service management systems, especially for high-volume organisations, automation is the key to efficiency and cost savings. Automation can reduce back-office processing costs, but the benefits go well beyond that: automating scheduling can save on second visits and wasted truck rolls; automating resource planning can reduce inventory holding and purchasing costs; and automating work closeout makes lessons learned and performance and financial analytics far more accurate.

As I wrote in the previous article, though, the efficiency of the automation itself drops very quickly as the quality of the data it relies on drops. Automations cannot “guess” information that is missing or that only exists in people’s heads. The field technicians’ skills, their availability and the planned duration of the work all must be accurate for the system to produce a schedule that will not have to be reworked as soon as it’s generated.

The governance loop

So how can high-volume organisations ensure their data is “clean” enough to allow for automation? The answer is often summarised in the term “Data Governance”. This can mean many things to many people, but in general it means having the tools, processes and people in place to ensure the data remains consistent and meaningful.

And how can a field service management system, or any business support system for that matter (the “tools” part of that trifecta), help an organisation keep its data consistent and meaningful? The answer to that is quite simple: by automating.

You enable automation by governing your data to a high standard of quality, and you govern your data by automating your system processes. It’s a circular approach that accelerates the better you get at it.

The user

The crux of this approach is getting users out of the loop. Not entirely – systems are for people and by people – but enough to bring people back to roles of review and approval and let the system do the grunt work of the data manipulation.

In many low-automation systems, users have to do a lot of the data updates themselves. When an event occurs on a client request, users update the status of a record. When a material part is required, they create the material reservation and ensure it’s allocated to the right storeroom with the right expected date. And if the work is rescheduled, they need to remember to change the material reservation date as well. When work is complete, they manually create an invoice and reconcile it against expenses that were themselves also manually entered in the system.

Having so many touch points on the system gives users many opportunities to make mistakes or enter things in slightly inconsistent ways. They can also attach different meanings to the data, which robs reporting of its comparative value. One work administrator adds contractor-used material parts as a service actual on the work order, while another simply adds them as notes in a text field.

Not only is this very effort-intensive, it also prevents standard cost reporting from being anywhere near accurate. A financial controller would have to spend days at month’s end trawling through old work order notes to figure out which parts were actually used, and then manually enter these as expenses in the financial system.

The automation

In a high-volume automated system, users are kept out of the loop for most data entry actions. This not only reduces administration cost but, more importantly, increases the overall quality and consistency of the data. It helps govern the data.

When the technician arrives on site, the mobile device triggers the change of status via geofencing. When a material part is used, the RFID or barcode scan triggers the generation of a material actual record, with the system knowing how to enter contractor-used material consistently. If the auto-scheduler reschedules a work order, the same automation takes care of the related material reservations, reallocating them to another store and date if appropriate. And at month’s end, the reconciliation and all necessary corrective actions have long since been taken care of: a system has no need to wait for month’s end, and can act immediately.
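
For the sake of illustration, here is a rough sketch of what this kind of event-driven data upkeep might look like behind the scenes. The class and function names are invented for the example and do not correspond to any particular product’s API.

```python
# A minimal sketch of event-driven data upkeep in a hypothetical field
# service platform; all names here are illustrative assumptions.
from dataclasses import dataclass
from datetime import date


@dataclass
class MaterialReservation:
    part_number: str
    storeroom: str
    required_date: date


@dataclass
class WorkOrder:
    id: str
    status: str
    scheduled_date: date
    material_reservations: list  # of MaterialReservation


def on_geofence_entry(work_order: WorkOrder) -> None:
    """Technician's device crosses the site geofence: the status moves itself."""
    work_order.status = "IN_PROGRESS"


def on_part_scanned(work_order: WorkOrder, part_number: str, source: str) -> None:
    """An RFID/barcode scan creates the material actual the same way every time,
    whether the part came from a company store or from a contractor."""
    record_material_actual(work_order.id, part_number, source)


def on_reschedule(work_order: WorkOrder, new_date: date, new_storeroom: str) -> None:
    """The auto-scheduler moved the work: related reservations follow automatically."""
    work_order.scheduled_date = new_date
    for reservation in work_order.material_reservations:
        reservation.required_date = new_date
        reservation.storeroom = new_storeroom


def record_material_actual(work_order_id: str, part_number: str, source: str) -> None:
    # Placeholder for the platform's transaction API.
    print(f"{work_order_id}: used {part_number} ({source})")
```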

Users simply become approvers of the automated process. When a material usage is out of tolerance, a notification is sent to the right user, asking them to review and approve the transaction. When the technician leaves site, the mobile device asks them to confirm the automatically recorded time. There is no need to enter timesheets; they’re already generated, and users simply need to confirm them.
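
Again purely as an illustration, an approval-by-exception rule might look something like the sketch below; the tolerance value and helper names are assumptions made for the example.

```python
# A sketch of "approval by exception"; the tolerance and notify() helper
# are illustrative, not part of any real product.
PLANNED_QTY_TOLERANCE = 0.10  # usage more than 10% over plan triggers a review


def post_material_usage(work_order_id: str, part_number: str,
                        used_qty: float, planned_qty: float) -> str:
    """Post usage automatically; only out-of-tolerance cases reach a person."""
    if planned_qty and (used_qty - planned_qty) / planned_qty > PLANNED_QTY_TOLERANCE:
        notify(role="work_administrator",
               message=f"{work_order_id}: {part_number} usage {used_qty} exceeds "
                       f"plan {planned_qty}. Review and approve.")
        return "PENDING_APPROVAL"
    return "POSTED"


def close_out_timesheet(technician_id: str, arrival, departure) -> dict:
    """The timesheet is generated from geofence timestamps; the technician
    only has to confirm it."""
    return {"technician": technician_id,
            "hours": round((departure - arrival).total_seconds() / 3600, 2),
            "status": "AWAITING_CONFIRMATION"}


def notify(role: str, message: str) -> None:
    # Placeholder for the platform's notification channel.
    print(f"[{role}] {message}")
```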

In this way, users remain in control, but only interact where it matters. The automations take care of the grunt work and the data quality remains high.

The master data

This approach also applies to master data management. An asset installed in the field is always the outcome of a piece of work being carried out, which is itself tracked in the system by a work order and automated through its lifecycle accordingly. If the equipment is scanned at the time of issue, the system can automatically update the asset hierarchy, swapping assets and returning the previous one to store. The master data is correct because the automated process was completed as programmed.
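
As a final illustrative sketch, an asset swap triggered by a scan could be handled along these lines, again with made-up names rather than a real system’s data model.

```python
# A sketch of master-data upkeep during an asset swap; the data model
# below is an assumption for illustration only.
from dataclasses import dataclass


@dataclass
class Asset:
    serial: str
    location: str          # a site position or a storeroom
    status: str = "IN_SERVICE"


@dataclass
class SitePosition:
    position_id: str
    installed: Asset | None = None


def swap_asset_on_issue(position: SitePosition, new_asset: Asset,
                        return_storeroom: str) -> None:
    """Scanning the new unit at issue updates the hierarchy in one step:
    the old asset goes back to store, the new asset takes its place."""
    old = position.installed
    if old is not None:
        old.location = return_storeroom
        old.status = "RETURNED_TO_STORE"
    new_asset.location = position.position_id
    new_asset.status = "IN_SERVICE"
    position.installed = new_asset
```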

In this way, by fully automating its operational processes, an organisation can achieve far greater confidence in the quality of its master data. And this, in turn, allows the automation to produce higher-quality operational outputs such as schedules, material reservations and contractor purchase orders.

It’s a virtuous cycle that can quickly lead to cost savings on human effort.

But the main yield of good data governance, beyond quality data, is the organisation’s ability to leverage its data as an asset, that is, to produce more revenue.

Ready to get started?

Contact us for a free consultation.

We don’t just sell features; we sell complete, customer-centric software.