Does your organization have a Data Recovery Strategy vs Backup Plan?


With the increasing frequency of ransomware attacks, it is crucial to ensure your organization has a solid data recovery strategy. Almost every news article about ransomware mentions the need for good backups, and maintaining a good set of backups is one of the key strategies for mitigating the damage these attacks cause. But backups alone are not enough.

One topic I find is rarely discussed is the recovery plan itself. Pre-planning is critical to a successful data recovery strategy. Many organizations design their systems to meet a specific "backup window." When I talk to clients, often the first thing they want to discuss is "how fast can the data be backed up?" In my experience, what many organizations fail to do is plan for the recovery window. Backups are worthless if you cannot recover, or cannot recover quickly enough.
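As a rough illustration, a recovery window can be estimated from data volume and effective restore throughput. The numbers below are hypothetical, but the arithmetic shows why a plan sized only for the backup window can fall short:

```python
def recovery_hours(data_tb: float, throughput_gb_per_s: float) -> float:
    """Estimate hours needed to restore data_tb terabytes at a sustained throughput."""
    seconds = (data_tb * 1024) / throughput_gb_per_s
    return seconds / 3600

# 50 TB restored at an effective 0.5 GB/s takes more than a full day:
print(round(recovery_hours(50, 0.5), 1))  # → 28.4
```

If that 28-hour figure exceeds what the business can tolerate, the architecture, not just the backup schedule, needs to change.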

Below are a few topics to consider when reviewing your organization's recovery plan:

  • Do you have defined data types with Service Level Agreements? Not all data is equal. Prioritizing your recovery is not an easy task, so it is crucial for organizations to understand which restores must be completed first and to set appropriate expectations.
  • How much data of each type do you have? The size or amount of data that needs to be recovered will have an impact on the solution architecture.
  • How fast does your data need to be restored? Often there are systems that are more critical than others to the daily operations. A list of systems with priorities should be created.
  • Does your primary storage and applications support snapshot technologies? A recovery from a snapshot will be faster.
  • Does your backup/recovery management software provide snapshot management? Utilizing a product that integrates with your primary storage vendor and software will greatly enhance both backup and recovery times.
  • Does your backup plan include "air-gapped" backup storage? Should your organization experience a security breach, it is possible for the attacker to render some restore technologies useless (snapshots, disk array targets or appliances, cloud, etc.). A solid disaster recovery strategy will include some level of offline or air-gapped backup storage (tape or offline disk).
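The prioritization questions above can be captured in something as simple as a ranked system inventory. This is a minimal sketch with a hypothetical inventory, not any particular backup product's format:

```python
# Hypothetical system inventory: (name, priority tier, size in TB).
# Tier 1 = most critical to daily operations.
systems = [
    ("file shares", 3, 40),
    ("ERP database", 1, 5),
    ("email", 2, 12),
    ("web frontend", 1, 2),
]

# Restore the most critical (lowest) tier first; within a tier, restore
# smaller systems first so something is back online as soon as possible.
restore_order = sorted(systems, key=lambda s: (s[1], s[2]))
for name, tier, size_tb in restore_order:
    print(f"tier {tier}: {name} ({size_tb} TB)")
```

Even a list this simple, agreed on before an incident, removes the hardest decision from the middle of a crisis.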

When your organization has planned and executed a solid recovery strategy, it is quite possible that it will take longer to make the decision to recover than the recovery takes to complete.

Gartner’s 2016 backup software report: Commvault and Veeam are standouts


Posted June 21, 2016, 3:59 PM PST

Original article: "Commvault and Veeam are standouts"

Commvault is the top-ranked enterprise backup company, but it and its rivals will have to stay agile as customers feel increasingly comfortable using new technologies from smaller providers, research firm Gartner explained in its 2016 Magic Quadrant for Data Center Backup and Recovery Software.

SEE: Data backups: The smart person’s guide

Gartner storage analyst Dave Russell noted that his team's annual report has some changes from the previous few editions. It no longer covers integrated backup appliances, instead returning to its roots as a pure software discussion. Only one company, FalconStor, was dropped after being part of the 2015 report, because FalconStor refocused on its own storage products and stopped selling backup software separately. There are no new companies listed in the report, not counting Veritas Technologies, which was formerly part of Symantec.

Commvault ranked first for completeness of its vision and its ability to execute that vision. That company’s products cover just about every major backup software feature throughout the widest selection of storage hardware arrays, Gartner stated. However, there’s a steep curve for installation/training, and Commvault’s sales channels outside the United States are not top-tier.

Everyone knows about EMC, IBM, and Veritas, which fill out the rest of Gartner’s top category for backup software; that tier also includes lesser-known Veeam, which formed in 2006.

“Veeam has jumped up to be the fifth-largest backup vendor, according to Gartner market share. Veeam now takes in more backup revenue than [Hewlett-Packard Enterprise], although in early 2016 HPE acquired a virtual machine backup vendor (Trilead) to in part compete better with Veeam. Curiously, Veeam is #5 in terms of market share, and Veeam is also the fifth-most mentioned backup vendor in Gartner end user inquiries,” Russell said.

Three more takeaways from the report

What else is new or different about the enterprise backup software market? Russell made three observations based on the report:

  • “The market overall is continuing to show a willingness to consider new vendors and new approaches, and in the last year this has accelerated. One interesting difference is that more and more organizations are willing to deploy multiple solutions, for VM backup or to protect new workloads, in an attempt to improve cost, obtain new capabilities, or to reduce complexity associated with backup and recovery. One other item that is new is that, overall, pricing has decreased on a cost/TB basis.”
  • “While the four largest vendors remain dominant with just over 70% of the market, the market looks like it will continue to fragment. There are many ways to spend your data protection dollar. Not only are there many different backup vendors, with new emerging vendors, but hyperconverged solutions, self-protecting storage (e.g., snapshot, integrated array software for key application, and replication), etc.”
  • “After almost 27 years in backup, it’s amazing how very differently the same product can be perceived by different administrators and organizations. After all these years, backup is often still very complex and sometimes still quite brittle.”

What to expect in the backup software market by 2020

Russell added that enterprise storage customers are changing, so backup software providers must be ready to adapt. For example, his team expects that by 2020 more than 40% of organizations will supplant long-term backup with archiving systems, double the amount that did so last year, and that by 2018 more than 50% of organizations will consider purchasing from vendors formed less than five years ago, up from less than 30% today.

What Is Your Cloud Disaster Recovery Plan?


Posted by Penny Gralewski | 23 March 2017

Original article: "What Is Your Cloud Disaster Recovery Plan?"

Do you have a cloud-based disaster recovery plan, or is your plan to always have an updated resume in case of an actual disaster? Talk with your peers; the results may be surprising.

For many IT professionals, disaster recovery planning is a pain – and a job requirement. Literally dozens of job descriptions outline DR as part of the job, yet it remains a challenge for many.

Many organizations struggle with the cost, preparation and accountability of disaster recovery. At the same time, it is essential to any organization that wants to recover from any minor or major emergency and stay in business.

Cloud Disaster Recovery: What’s your strategy?

Talk with current and former IT leaders about cloud-based disaster recovery and you will hear some truly painful common stories:

  • A telco IT architect spent hundreds of thousands of dollars a year on a co-location facility, reserving the resources for disaster recovery testing and any actual emergency. Working with multiple backup systems and many storage locations, the telco tested on weekends, reformatted plans and still struggled to make DR a smooth process.
  • An enterprise had a written DR plan, but never tested it. A 7.0 earthquake hit the facility and the plan was put into action. The IT leader had an updated resume, but lucked out and didn’t need to use it.
  • Some IT leaders just have the updated resume. DR is perceived as painful and several IT leaders will confidentially tell you they avoid DR testing at all costs, yet keep their resume current in case of emergency.

Intrigued by these IT themes, Commvault ran a Twitter poll about disaster recovery plans. Cloud disaster recovery is in place at many organizations, but for some, a resume is still the DR plan.

Resilient, consistent and affordable

Disaster recovery can be a successful, reliable and less painful process if you incorporate two key resources: cloud storage and Commvault.

Cloud is a flexible and (usually) affordable option for testing disaster recovery plans. With pay-as-you-go pricing, it is more affordable to spin up cloud resources for a DR test and then turn them off when testing is complete.
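The cost argument is easy to see with back-of-the-envelope numbers. All figures below are hypothetical assumptions, purely to illustrate the pay-as-you-go math against a standing reservation:

```python
# Hypothetical figures: a reserved co-location DR facility vs. pay-as-you-go
# cloud resources spun up only for quarterly weekend DR tests.
colo_annual = 250_000   # assumed yearly cost of a reserved DR facility

hourly_rate = 40        # assumed hourly cost of the cloud recovery environment
test_hours = 48         # one weekend-long DR test
tests_per_year = 4
cloud_annual = hourly_rate * test_hours * tests_per_year

print(cloud_annual)  # → 7680: the environment costs nothing when switched off
```

The exact rates will vary widely by provider and workload; the point is that idle reserved capacity is paid for whether or not a disaster ever occurs.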

Commvault has full cloud disaster recovery data management support – automated and accessible for IT professionals. With Commvault, you can:

  • Automate disaster recovery workflows with tested cloud management policies.
  • Recover data and apps directly in the cloud.
  • Streamline work – from a single interface, manage the control and insight for disaster recovery on-premises and in the cloud.
  • Understand current status with built-in alerting, monitoring and reporting.
  • Automate activating cloud services and shutting them down again once a DR test or actual emergency is complete.
  • Manage it all from a single platform to consistently apply the right policies, backup locations and restore points for different levels of data stored in multiple sources.
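The last point, consistently applying the right policy to each class of data, can be sketched as a simple lookup table. This is a generic illustration with hypothetical tier names and values, not the Commvault API:

```python
# Hypothetical policy table mapping data tiers to backup locations and
# the number of restore points to retain.
policies = {
    "tier1": {"location": "cloud-region-a", "restore_points": 24},
    "tier2": {"location": "cloud-region-b", "restore_points": 8},
    "archive": {"location": "object-storage", "restore_points": 1},
}

def policy_for(data_class: str) -> dict:
    """Look up the policy for a data class, defaulting to the archive tier."""
    return policies.get(data_class, policies["archive"])

print(policy_for("tier1")["restore_points"])  # → 24
print(policy_for("unknown")["location"])      # → object-storage
```

Centralizing the mapping in one place is what makes the policies consistent: every backup job and restore consults the same table rather than its own hard-coded settings.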

More reliable than an updated resume

Before you update your resume, check out what Commvault can do to automate disaster recovery to the cloud, and to simplify your DR plan.

Learn more about Commvault Cloud Disaster Recovery via demos, videos, whitepapers and additional resources.

Flash in the Enterprise Data Center

Posted February 2, 2017

Organizations are constantly seeking new ways to address workload-specific storage demands in terms of performance and capacity while also meeting service-level agreements, response-time objectives and recovery-point objectives.

Many information technology operations are inspired by successful hyperscale organizations such as Facebook, Google and Amazon. However, most enterprises lack the scale and substantial development and operations commitment necessary to deploy software-defined storage infrastructure in the same ways. Hyperscale economics also typically don’t work out at smaller scale, resulting in poor utilization or unacceptable reliability issues.

Another hurdle for enterprise information technology is that hyperscale organizations typically have a very small, tightly controlled application environment that facilitates these economies of scale. In contrast, most large enterprises must deploy storage infrastructure to serve a heterogeneous application workload with diverse requirements.

The common thread for both hyperscale and enterprise information technology is the compelling rise of software-defined storage. Though early adopters of software-defined storage technologies were cloud builders, the advantages of this approach are now quickly spreading to enterprise data centers with an increased focus on cost reduction, automation and lock-in avoidance.

Software-defined platforms are enabling a wealth of diverse and scalable storage solutions based on industry-standard servers. Flash technology has emerged as a key component of these strategies, as organizations encounter the physical limitations of hard disk drives and even hybrid approaches.

Unfortunately, conflicting information about how flash is best used in the enterprise persists throughout the industry, as widely diverse architectural approaches are already in place.