Josh Mazgelis

Amazon, Inc. is a remarkable business.  Not only has it transformed the internet shopping experience for most of us, but its hosting business, AWS, has also thrown down the gauntlet to commercial IT shops by showing how dramatically the cost of IT operations can be lowered.  Indeed, there is a parallel with adoption behavior during the early phases of the virtualization market: customers test out the potential benefits by migrating non-critical workloads that remain under IT's direct control, such as test/dev, into the public cloud.  In parallel, the lessons of the public cloud are being learned in earnest within corporate IT, with the twin goals of driving down delivery costs whilst transforming a technology-driven culture into one of flexible, agile service delivery.  Welcome to the private cloud!

Each customer's journey to the private cloud involves a number of steps.  Workload migrations consolidate IT operations into a smaller number of large-scale datacenters to reap economies of scale.  Technology vendors are now delivering a new way to scale out capacity via low-cost, pre-integrated modular building blocks of compute and storage.  Virtual infrastructure on the server side is now ubiquitous, which drives deployment density and enables agile service provisioning and configuration changes.  And a new generation of operations management software is targeting the human element of the IT cost base with increased levels of automation.  Yet despite all this innovation, there are still obstacles along the way that load cost back into cloud projects and put the economics at risk.

The first challenge presents itself at the point of cloud migration planning.  How many IT professionals have an accurate, up-to-date, easy-to-interpret blueprint of which apps live where, who uses them, and what dependencies (known and unknown) exist with other apps, VMs, and hosts?  Determining which assets to migrate, and in what sequence, presents the next challenge.  How does this complex web of interdependent assets roll up into the discrete set of IT services consumed by the business?  How does the business rank those services in order of priority?  When the time comes to actually migrate a service, could some unknown, unforeseen interdependency break overall service delivery if a critical component is omitted from the migration plan?  And what infrastructure makes it possible to migrate multiple interdependent applications, VMs, and hosts, along with the datasets and databases they rely upon, with zero interruption to service delivery?
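To make the sequencing problem concrete, here is a minimal sketch, assuming an inventory tool has already discovered the dependency map.  The asset names and the layered topological sort are purely illustrative, not any particular vendor's method; the point is that once dependencies are known, migration waves can be derived rather than guessed:

```python
# Hypothetical dependency map discovered by an inventory tool:
# each asset lists the assets it depends on. If A depends on B,
# B must be migrated in the same wave as A or earlier.
DEPENDS_ON = {
    "web-frontend": ["app-server"],
    "app-server":   ["orders-db", "auth-service"],
    "auth-service": ["auth-db"],
    "orders-db":    [],
    "auth-db":      [],
    "report-batch": ["orders-db"],
}

def migration_waves(depends_on):
    """Group assets into waves: everything in wave N depends only
    on assets already migrated in waves 0..N-1 (a layered topological sort)."""
    remaining = dict(depends_on)
    migrated, waves = set(), []
    while remaining:
        wave = [a for a, deps in remaining.items()
                if all(d in migrated for d in deps)]
        if not wave:
            # A cycle means these assets form one unit that must move together.
            raise ValueError(f"circular dependency among: {sorted(remaining)}")
        waves.append(sorted(wave))
        migrated.update(wave)
        for a in wave:
            del remaining[a]
    return waves

for i, wave in enumerate(migration_waves(DEPENDS_ON)):
    print(f"wave {i}: {', '.join(wave)}")
```

Even in this toy form, the cycle check earns its keep: a circular dependency is exactly the "unknown interdependency" that breaks service delivery when one of its members is left out of a migration plan.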

Thereafter, once services are up and running in the cloud, the business naturally expects ultra-reliable IT service delivery that complies with stringent service levels.  But with cloud computing characterized by dramatically increased scale and density of operations, coupled with unprecedented agility, it is, ironically, harder than ever to assure that service protection strategies will work as intended across complex business services.  In the face of frequent configuration change within the platform, not to mention the "normal" gamut of threats (power failures, platform and software failures, operator errors, and the occasional whole-site outage), how will the most critical, complex business services recover?  Is it possible to detect gaps in a seemingly coherent service protection strategy and remediate those exposures automatically?  Is it possible to provide complete, ongoing assurance that cloud service delivery will meet service-level targets, so that operational compliance can be demonstrated at all times, especially in regulated sectors?  In short, can the private cloud diagnose its own gaps in protection infrastructure and heal them without human intervention?  This question is driving a change in the approach CIOs need to take to IT as more and more services move to service providers and the cloud.
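As a sketch of what such self-diagnosis could look like, the following hypothetical audit loop checks each service's replication state against its stated recovery point objective (RPO) and flags dependencies with looser targets.  The service model and the remediation hook are assumptions for illustration, not Neverfail's or FalconStor's actual APIs:

```python
from dataclasses import dataclass, field

@dataclass
class Service:
    name: str
    rpo_target_min: int                 # max tolerable data loss, in minutes
    replicated: bool = False
    last_replica_lag_min: int = 0
    components: list = field(default_factory=list)  # services it depends on

def audit(services):
    """Return (service, gap) findings: any way the protection strategy
    fails to cover a service's stated RPO target."""
    findings = []
    by_name = {s.name: s for s in services}
    for s in services:
        if not s.replicated:
            findings.append((s.name, "no replication configured"))
        elif s.last_replica_lag_min > s.rpo_target_min:
            findings.append((s.name,
                f"replica lag {s.last_replica_lag_min}m exceeds RPO {s.rpo_target_min}m"))
        # A service is only as protected as its weakest dependency.
        for c in s.components:
            dep = by_name.get(c)
            if dep and dep.rpo_target_min > s.rpo_target_min:
                findings.append((s.name,
                    f"dependency {c} has a looser RPO ({dep.rpo_target_min}m)"))
    return findings

def remediate(finding):
    # Placeholder: a real platform would call its replication or backup
    # API here; this sketch just logs the action that would be taken.
    name, gap = finding
    print(f"REMEDIATE {name}: {gap}")

services = [
    Service("orders-db", rpo_target_min=5, replicated=True, last_replica_lag_min=12),
    Service("web-frontend", rpo_target_min=15, components=["orders-db"]),
]

for f in audit(services):
    remediate(f)
```

Run continuously against live configuration data rather than a static model, a loop of this shape is one way a private cloud could detect protection gaps as they open and close them before an outage exposes them.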

FalconStor

To help our customers rise to these challenges, Neverfail and FalconStor are working in partnership.  Neverfail brings to the table its award-winning IT Continuity Architect product, and FalconStor brings its proven, enterprise-scale suite of storage virtualization and workload/data migration infrastructure.  Together, we bring some fresh thinking, implemented in modern technology, to help address these demands.  Come see us at the Gartner Datacenter Conference in Las Vegas, December 9th – 12th, for more information.  Thanks for reading!

More Stories By Josh Mazgelis

Josh Mazgelis is senior product marketing manager at Neverfail. He has been working in the storage and disaster recovery industries for close to two decades and brings a wide array of knowledge and insight to any technology conversation.

Prior to joining Neverfail, Josh worked as a product manager and senior support engineer at Computer Associates. Before working at CA, he was a senior systems engineer at technology companies such as XOsoft, Netflix, and Quantum Corporation. Josh graduated from Plymouth State University with a bachelor’s degree in applied computer science and enjoys working with virtualization and disaster recovery.