Your operations create and accrue massive amounts of data, yet you may still be unable to answer simple questions about them. Here are 7 questions you should be asking.
(Prefer the audio version? Click here to access the podcast episode.)
At 3AG, we’ve spent over a decade helping companies clean up their data and make the most of their reporting activities. We've seen almost every kind of data dysfunction: from people copying handwritten notes into spreadsheets, then having others copy those spreadsheets into PDFs to share with management, to operations being held up by “gatekeepers” unwilling to share their data with others because of a siloed mindset or a concern that their jobs will become obsolete. Whatever the cause, such data breakdowns can cause serious damage to manufacturers’ bottom lines.
When it comes to keeping people informed so they can work more effectively, we don't sugar-coat our observations or advice. The hard fact is, manufacturers generally make these 7 errors with their data:
Everyone now understands the importance of data-driven operations. What many still miss is the benefit of integrating this data across the company.
One of the most common mistakes operations teams make is focusing only on how organizational data can improve their part of the operations. There is nothing wrong, for example, with the maintenance team keeping detailed equipment records; but if they don’t share this data with other teams, the benefits to the company are limited. Another example: a planning team that has mastered weekly work schedules but doesn’t communicate the need to work around wildly variable incoming orders will actively undermine the sales team’s effectiveness.
The bottom line is this: keeping operations data confined to the factory floor hurts everyone, including the factory floor itself, which ends up running on incomplete or downright incorrect information.
Ironically, the very reason manufacturers might save most of their data on-site rather than in the more robust and secure cloud is their rich history of recording data…on-site. Traditional supervisory control and data acquisition (SCADA) systems and operational historians are not new, first appearing in factories and industrial control systems decades ago.
Most manufacturers now consult real-time dashboards on the floor, which means they can manipulate data on-site. This also means there is much less pressure to move data to the cloud; and if your team can review and analyze data on-site, and only on-site staff need access to this information, uploading to the cloud may not seem necessary at all.
Apart from the fact that uploading to the cloud is key to secure backup and maintenance, there is the larger issue of being able to easily share corporate data. Data locked away with the team that generates it likely won’t be integrated with data from other departments—which means insights extracted from it will be at best limited and at worst incorrect and potentially damaging to your business interests.
An incredibly powerful tool, Excel is arguably the world’s number-one tool for databases, data engineering, analytics, reporting, and dashboarding. And it will continue to hold this position for the foreseeable future because it’s easy to use, serves many purposes, and is available to pretty much anyone who works with a computer.
This flexibility has a cost, however: layers of unnecessary data complexity, which can limit company growth. Every spreadsheet your team creates represents another isolated data source other departments are likely unaware of. Any insight generated by brilliant pivot-tabling, even if shared with the whole team, is still trapped in that spreadsheet. Think about it this way: Every unshared Excel spreadsheet is basically a fully self-contained database. It would be insane to build and maintain thousands of such independent databases, yet this is standard practice in most companies relying on Excel for data collection and collation.
Excel can be useful for ad-hoc analysis, but it should never be used as an industrial data management tool. When used for massive data-driven operations, Excel will absolutely work against corporate best interests: The more it’s deployed, the less control and understanding you’ll have of your data.
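To make the point concrete, here is a minimal sketch, not 3AG's tooling, of what pulling those isolated workbooks into a single shared store might look like; the folder, sheet, table, and column names are hypothetical:

```python
# A minimal sketch (illustrative only): gather scattered Excel workbooks
# into one shared, queryable table. Folder, sheet, and column names are
# assumptions for this example.
import sqlite3
from pathlib import Path

import pandas as pd

frames = []
for workbook in Path("shared_drive/production_reports").glob("*.xlsx"):
    df = pd.read_excel(workbook, sheet_name="Daily")  # assumed sheet name
    df["source_file"] = workbook.name                 # keep provenance
    frames.append(df)

combined = pd.concat(frames, ignore_index=True)

# One central store other teams can query, instead of N isolated spreadsheets.
with sqlite3.connect("plant_data.db") as conn:
    combined.to_sql("daily_production", conn, if_exists="replace", index=False)
```

The point is not the specific libraries; it is that data scattered across personal workbooks only becomes useful to the wider business once it lands somewhere every team can reach.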
Almost every company we start working with has one person who builds out their reports and dashboards, often someone the GM tapped years ago to create “weekly reports.” These weekly reports require near-heroic effort because they remain a labor-intensive, manual task: pulling data from different systems, cleaning it up, integrating it into a master spreadsheet, making adjustments for charts, then pasting the charts into a PowerPoint that goes into the final document. Depending on how complicated this operation is (and how messy the data), it’s not uncommon for this one person to spend 25% of their working hours preparing this report, every week!
These hours could be put to far better use, especially since much of this process is wasted effort. Manual data processing is boring as hell for the unlucky employees tasked with doing it, and it introduces errors into company reporting. Indeed, manual data processing actively distances organizations still relying on it from the real-time insight required to make strategic, savvy business decisions. In our experience, managing data this way creates mistrust of the very data companies require to thrive, especially when results are unfavorable.
There are many ways to address manual reporting, which we discuss in our articles on ad-hoc analysis, data warehouses, and data engineering. But generally speaking, it’s wisest to use manual reporting as a backup or emergency measure, rather than a core activity.
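As a rough illustration of the alternative, here is a hedged sketch of a scheduled script standing in for the manual weekly report, assuming production data already lands in a central store like the one sketched above; the table and column names are invented for this example:

```python
# A hedged sketch of replacing the manual weekly report with a scheduled
# script. Table and column names are assumptions, not a real schema.
import sqlite3

import pandas as pd
import matplotlib.pyplot as plt

with sqlite3.connect("plant_data.db") as conn:
    df = pd.read_sql_query(
        "SELECT shift_date, workstation, units_produced FROM daily_production",
        conn,
        parse_dates=["shift_date"],
    )

# Aggregate to weekly totals per workstation -- the same numbers the manual
# spreadsheet-and-PowerPoint process would have produced by hand.
weekly = (
    df.set_index("shift_date")
      .groupby("workstation")["units_produced"]
      .resample("W")
      .sum()
      .unstack("workstation")
)

weekly.plot(kind="bar", figsize=(10, 5), title="Units per workstation per week")
plt.tight_layout()
plt.savefig("weekly_report.png")  # drop into the report template or email it
```

Run on a schedule, something like this turns a quarter of someone's week into a few minutes of review.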
Organizations that manually build reports also likely face key person risk. Continuing with the previous example, consider what would happen if the staff member responsible for building weekly reports resigned or retired. The short-term business impact would be significant, both because someone new would have to be assigned the task and because they wouldn’t know all the little tricks their predecessor learned over the years to make wrangling data easier.
Unfortunately, key person risk doesn’t stop at report generation. Organizations with legacy data systems often end up fully dependent on the one remaining employee who understands how to use them. This creates a catch-22: since they’re the only one who can navigate the old system, they spend a disproportionate amount of time with it, often at the expense of upgrading their skills. Alternatively, they may see removing or revamping the system as a threat to their job; if retiring these old systems comes up, your report builder may offer a very long list of risks associated with such a change.
This is such a common issue that even NASA fell prey to it, using code developed in the 1970s in their Space Shuttle program right up until it was retired in 2011. It’s a trap today’s data-driven organizations need to avoid if they’re to succeed.
Having a manual override in place, regardless of system type, is generally a good idea in spite of the risks of overdependence. Automatic cars come with a manual override option to allow gearing down in extreme conditions. Similarly, manufacturing plants sometimes have to overwrite collected data when, for example, a sensor malfunctions.
It’s when companies regularly override data that challenges arise. Overuse of what is essentially an emergency function can cause serious business havoc: misapplying manual overrides to primary source data, massaging aggregate data to fit a desired conclusion or set of results, or dropping outliers from a report can all shake what should be firm ground for company planning. Now imagine this occurring every week, the mistakes and omissions piling up…
Any data collection process that depends on manual entry, whether into a notebook or a spreadsheet, can become a major business liability. Whether through entry and transcription errors, or inconsistencies in types of data recorded and reporting frequency, manual data entry will include inaccuracies.
The issue is not that companies override data or enter it manually in the first place. The problem is twofold: first, too many overuse this tool to begin with; second, they either don’t limit its usage or don’t fix the compounding errors it produces quickly enough, or at all.
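One practical way to keep overrides from quietly corrupting the record is to treat them as logged exceptions rather than silent edits. The sketch below is illustrative only, with an assumed schema; it keeps the raw value and attaches a reason, an author, and a timestamp to every correction:

```python
# A minimal sketch of an override audit trail: the raw value is preserved
# and every manual correction is logged. Field names are illustrative.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Override:
    tag: str            # sensor or KPI identifier
    raw_value: float    # what the system actually recorded
    new_value: float    # what it was corrected to
    reason: str         # e.g. "sensor drift", "double-counted pallet"
    author: str
    at: datetime

audit_log: list[Override] = []

def override_reading(tag: str, raw_value: float, new_value: float,
                     reason: str, author: str) -> float:
    """Apply a manual correction but keep the original on record."""
    audit_log.append(Override(tag, raw_value, new_value, reason, author,
                              datetime.now(timezone.utc)))
    return new_value

# Usage: correct a reading from a malfunctioning flow sensor.
corrected = override_reading("line3_flow", raw_value=0.0, new_value=42.7,
                             reason="sensor offline during calibration",
                             author="j.smith")
```

With a record like this, overrides stay visible and reviewable instead of compounding silently week after week.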
There is a critical difference between reporting data and reporting insights. For example, a dashboard that tracks units per workstation per shift, and then aggregates all workstations for a week, is only reporting data. This is true even if it reports the information in real time. Similarly, adding another data source to the report, such as worker absenteeism, simply adds more reported data. If a dashboard requires the user to connect the dots, then that dashboard is not providing insight; and if it’s not doing that, it’s not earning its metaphorical keep.
Manufacturing insights should be more detailed, sophisticated, and meaningful than simple KPIs. Ideally, a manufacturer will have a manufacturing database coupled with manufacturing analytics processes that can regularly surface new, accurate insights from its operations. The key word here is process: continuous improvement is not a one-time event.
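To illustrate the distinction, here is a small, hypothetical sketch: rather than displaying units produced and absenteeism side by side, it derives a relationship between them and states it in plain language. The numbers and column names are invented for the example:

```python
# A hedged sketch of turning reported data into a stated insight.
# The dataset and threshold are illustrative, not a recommended method.
import pandas as pd

shifts = pd.DataFrame({
    "shift_date": pd.date_range("2024-01-01", periods=8, freq="D"),
    "units_produced": [940, 910, 980, 760, 955, 930, 770, 990],
    "absent_workers": [1, 2, 0, 6, 1, 2, 5, 0],
})

corr = shifts["units_produced"].corr(shifts["absent_workers"])

if corr < -0.5:
    print(f"Insight: output drops on high-absenteeism shifts (r = {corr:.2f}); "
          "review staffing cover for those lines.")
else:
    print(f"No strong link between absenteeism and output (r = {corr:.2f}).")
```

A dashboard that connects the dots this way, however simply, is reporting an insight; one that merely charts both series is still just reporting data.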
At 3AG, we’ve developed Optimizer specifically for the purpose of identifying opportunities for plant and process optimization (obviously). Our IoT Synchro service identifies and reports leading indicators impacting production—in real time. And we have a full set of complementary data engineering services to fix the kinds of data issues manufacturers regularly face.
If you’re having trouble answering basic questions about your operations, get in touch. We can help you begin making the most of your organization’s abundant data—as soon as you’re ready.
Looking to learn more about data engineering? Check out our Guide to Data Engineering, with helpful resources on this topic.
Speak to Our Experts
Connect with a 3AG Systems expert today and start your journey towards efficient and effective data management.