GRAND RAPIDS — In an era when so many businesses rely on information technology systems, seven out of 10 customers that US Signal Co. LLC surveyed say they expect to have at least one unplanned disruption over the next year.
Nearly two-thirds of respondents believe their I.T. systems are vulnerable to disruptions, while 10 percent believe they are “very” vulnerable.
If something does happen, fewer than one-quarter of the businesses said they feel “very prepared” to handle it, and a little more than two-thirds feel “somewhat prepared.”
The wide-ranging responses US Signal got back from I.T. professionals at 50 middle-market clients in the Midwest illustrate that many businesses still are not fully prepared to recover quickly should disaster strike their systems.
The Grand Rapids-based I.T. service provider has many clients with mature practices for managing, protecting and recovering data, “but there’s still enough out there” that do not, said Amanda Regnerus, executive vice president of products and services at US Signal.
“Customers, it seems, are willing to take the risk of not having a fully planned, tested and executed disaster recovery plan,” Regnerus said. “That’s everyone’s goal, but like any of us in our day-to-day, we get wrapped up into our daily paths. The planning and the testing and a lot of the details around updating (disaster recovery) plans and putting playbooks together, we find that customers just have a hard time getting back to that detail.
“They either can’t afford it, don’t have the staff to pull it off, or they’re willing to take the risk and take a wait-and-see-what-happens approach.”
That approach can prove costly for businesses if their operations are down for an extended period because they cannot readily access or recover their data following a fire, natural disaster or a cybersecurity breach. Regnerus advises companies to compare the cost of regularly updating and testing their recovery plans against the cost of extended downtime.
In 2017, data breaches alone cost affected U.S. companies an average of $7.91 million, according to an annual analysis by tech giant IBM and the Traverse City-based Ponemon Institute LLC.
The total cost includes $1.76 million spent on post-breach responses such as help desk and investigative activities, communications, legal expenditures, and identity protection services for customers, according to the July 2018 Cost of a Data Breach Study, which analyzed 477 data breaches globally. Lost business resulting from a data breach cost companies an average of $4.2 million, an amount that includes “abnormal turnover of customers, reputation losses, and diminished goodwill,” according to the report.
The report for the first time counted the costs of mega data breaches, or those that involved the loss of 1 million to 50 million records.
UPDATE AND TEST
Among respondents to US Signal’s survey, 30 percent said they have disaster recovery plans in place in case of a disruption or outage in their I.T. system. Another 58 percent indicated they have a plan “with room for improvement.” The remaining companies either did not have a plan, had only discussed one, or were unsure.
Of respondents with a recovery plan, 30 percent update it every six months and 34 percent update it annually. Another 14 percent update their plans every two to three years, while 18 percent were unsure of the frequency of updates.
Every business that relies on an I.T. system for data management and storage should have a disaster recovery plan that it regularly updates and tests, said Regnerus at US Signal. The ability to recover data from an off-site data center or from the cloud is just as important as protecting it, if not more so, she added.
Companies contracting with an off-site data center or an I.T. service provider that uses cloud storage should know what their service contracts state about how quickly their vendor can recover their data and how often the system gets tested, Regnerus said.
“That testing is so important because replicating to an off-site (data center) is great, but if you haven’t figured out the networking and the playbook and the run book for what needs to be recovered first, what the networking needs to look like to be able to actually access that data when there is a disaster, and you haven’t tested that, then that’s a problem,” she said. “You have to be able to recover that.”
Of the companies that experienced an I.T. outage in the prior year, more than half said it was caused by a natural disaster and one-quarter cited an error while implementing new technology. I.T. overload and ransomware were each blamed for 21 percent of outages.