Disaster Recovery


A lot of people talk about disaster recovery, but not many understand how it fits into a business or why it matters, so we thought we would share a recent chain of events that worked so well most users in the business didn’t know anything was going on. Sometimes, if you do your job well enough, no one knows you’re doing it!

One of our clients is a Tier 1 automotive manufacturer supplying internationally known car manufacturers. These companies operate 24/7, so there is a strict zero-downtime policy. Every minute of downtime can mean hundreds of thousands of pounds in losses, plus extra staff costs to catch up on hours lost in the factories.

We had been planning a virtualisation and disaster recovery project for all the physical servers, so when we learned the client was also moving to a new, larger factory and office, we tied the projects together. We virtualised all physical servers and even some routers. Once virtualised, we enabled off-site replication to a secure datacentre. This allowed us to power off the physical hardware and move the office without interrupting the key ERP software or backbone services like email, even while the original hardware was in the back of a lorry being moved!
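To picture the shape of such a switchover, here is a minimal sketch of a planned failover along the lines described above. It is purely illustrative: the VM names and the simulated replication state are hypothetical, and a real environment would drive its own hypervisor and replication tooling rather than a Python dictionary.

```python
# Purely illustrative sketch of a planned failover to an off-site replica.
replicas = {
    "erp-app": {"lag_seconds": 12, "running_at": "head-office"},
    "erp-db":  {"lag_seconds": 30, "running_at": "head-office"},
    "mail":    {"lag_seconds": 5,  "running_at": "head-office"},
}

MAX_LAG_SECONDS = 60  # only fail over when the off-site copies are near-current

def planned_failover(vms):
    # 1. Refuse to start unless every replica is close to current.
    for name, state in vms.items():
        if state["lag_seconds"] > MAX_LAG_SECONDS:
            raise RuntimeError(f"{name} replica is {state['lag_seconds']}s behind; resync it first")
    # 2. Quiesce each primary, ship the final changes, then run from the datacentre copy.
    for name, state in vms.items():
        state["lag_seconds"] = 0            # final sync completed
        state["running_at"] = "datacentre"  # replica promoted; users now point here
        print(f"{name}: now running from the {state['running_at']}")

planned_failover(replicas)
```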

After the original physical servers arrived at the new location, we switched back to running on the main hardware at its new home. Now that the disaster recovery plan had been tested in a controlled manner, we were confident, after additional testing, that in the event of a disaster at the main site the business could ride out major problems that previously could have crippled the company.

The disaster recovery plan was tested for real shortly after the move to the new factory, when the site’s power lines failed due to circumstances outside the client’s control. Generators kept the factory running, while the company’s disaster recovery plan kept all key ERP software online, and externally all email and VoIP phone lines stayed up. From the outside nobody knew anything was wrong, and more importantly production was able to keep running.

If the downtime had lasted longer, the plan would have allowed all employees to go home, log into their sessions via remote desktop and carry on working where they left off. Phone extensions could have been diverted to users’ mobiles, but luckily for all involved the downtime was not prolonged.

The one thing a good plan always seems to lack is staff training. Having a plan is great, but people need to know it exists, when it kicks in and what they need to do, so drumming in basics like how to access email and phones is key.

With ERP software becoming so important to large businesses, having multiple plans to survive basic incidents is a must. SaaS is something we expect to see more of in the future, and while some people are sceptical of SaaS, cloud and disaster recovery as buzzwords, they can boil down to some very simple procedures built on what you may already be using. Businesses are starting to see that disaster recovery combined with virtualisation and the cloud is a smart, powerful tool that can be applied to businesses large and small.

Cameron’s Cryptic Encrypted Problem


Politics and IT are clashing again. This time David Cameron has been talking about encrypted traffic over widely used public apps and platforms such as WhatsApp, Snapchat, Skype, etc.

David Cameron
“Are we going to allow a means of communications which it simply isn’t possible to read? My answer to that question is: ‘No, we must not.’”

The underlying mechanics of what he said could have a very serious effect on all businesses. Private businesses are not countries or political parties, and many companies span multiple countries, so what’s OK in one is not OK in another. Governments getting involved in apps to stop terrorism is a very slippery slope towards businesses having to jump through more hoops just to keep a politician happy.

WhatsApp is an app, but it’s also a private business. Cameron seems to be mistaking an app for a tool: 99.9% of people will use a tool for good; sadly, some will use it for bad. Monitoring traffic will not prevent bad things; people will just use something else.

Either something is encrypted or it’s not. If a government is sitting in the middle, then it’s not secure, so when you as a business say “our systems are secure” you are not actually telling the truth, because of the government monitoring in the middle! The government, after all, is just people; what’s to stop rogue staff from giving away data?

Companies will have a real-world cost attached to allowing government access to their data, not to mention all the paperwork that comes with it. Apple and Microsoft have decided to remove the costly problem of government and law-enforcement data requests by simply removing their own ability to access the data, and by the same token it is a huge selling point to businesses. Microsoft and Apple cannot give away your data, whether you are an individual or a business, so if you use them it is even safer!

Microsoft and Apple have made a very clever business decision to cut themselves out of a costly and bureaucratic problem and turn it into a marketing bonus! Please don’t mistake this for saying people should be allowed to hide, but the definition of “encrypted” that underpins our business fundamentals and common sense is on the same path as a ban on secure personal chat apps. Secure should be secure; private should be private.

Future of Data Centers

Data centers are to be found in the most unlikely places — from the icy tundra of Antarctica or the belly of a converted 19th century church, to a retrofitted nuclear bunker or a 32-story colossus.

In 2009, Google received a patent for the idea of building data center platforms that would sit some miles offshore. Picture an oil platform for compute and storage and the whole thing running off wind and solar power.

The jury is still out about the future of the data center. Will it become a modular shipping container and be shipped out to sea, one stacked on top of the other (similar to Google’s idea), or will it become an enormous, ultra-efficient warehouse out in Nevada or in the solar goldfields of the Saharan desert?

Whatever happens, one thing is clear: we will need ever-increasing amounts of data storage and computing power.

The cloud may not have won every battle for outsourcing the data center—a significant number of enterprises prefer to build their own—but it’s winning the battle for convergence. Through hybrid cloud innovations, the line between the corporate firewall and the massive scalability of tier 1 hosting providers is blurring.

Geography is becoming virtual, and data, regardless of where it lives, must scale massively and across multiple platforms. NoSQL, MySQL and Oracle queries emerge within this ecosystem as allies, not sworn enemies.

When Facebook recently announced that it would open source Presto, the data engine that fuels storage and retrieval of over 300 petabytes of data for its one billion users, it signaled that open source will continue to play a vital role in the future of the data center.

Wherever they go, and whatever they look like, the data centers of the future will be super efficient and push the boundaries of the physical, but they’ll also be driven by open source innovation.


Source: The Next Web

 

Flash Virtualization

Virtualization has changed the way modern datacenters operate, but I/O bottlenecks still hamper storage systems and application performance. Flash hypervisors could be the answer.

In today’s enterprise, IT managers need a way to efficiently scale storage performance using virtualization, much in the same way they scale server compute and memory. This has given rise to a new technology, called the flash hypervisor, which is paving the way to true software-defined datacenters. By aggregating available flash storage into clusters that accelerate the performance of reads and writes, flash hypervisors are changing the way that IT owns and operates datacenters.

Overcoming storage bottlenecks
I/O bottlenecks in primary storage can add significant latency to virtual applications, resulting in slow or unusable applications. This frustrates end users and creates numerous problems for IT, including unpredictable costs.

To date, the only option when faced with this challenge has been to throw storage hardware at the problem. For example, storage administrators can improve the capabilities of a storage area network (SAN) by upgrading interconnect speeds or deploying faster disks and processors. Unfortunately, these are all very expensive and disruptive solutions, and they don’t even guarantee an improvement in application performance.

Many companies are keen for a change. They want a solution whereby storage performance is decoupled from storage capacity, eliminating the need for unnecessary storage hardware upgrades. This has created an enormous market demand for server-side flash, which in turn has created a need for flash hypervisor software.

Why flash virtualization?
A flash hypervisor virtualizes all server-side flash into a clustered acceleration tier that enables IT to scale out storage performance quickly, easily, and cost-effectively, independently of storage capacity. Just like traditional hypervisors abstract physical CPU and RAM into a logical pool of resources, a flash hypervisor does the same for all server flash devices across a datacenter.

More specifically, the flash hypervisor provides a resource management scheme that multiplexes multiple VMs to a set of flash devices according to user-specified policies. The result is dramatically faster and truly scale-out read and write performance for all virtual machines, without the need to change existing storage infrastructure.
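To make the idea concrete, here is a minimal sketch in Python. It is a conceptual model only, not PernixData’s or any other vendor’s actual API: host-local flash devices join a single pooled acceleration tier, and each VM is placed onto that pool according to a user-specified policy.

```python
# Conceptual model only: not any vendor's actual API.
# Host-local flash devices are pooled into one cluster-wide acceleration tier,
# and each VM is multiplexed onto that pool according to a per-VM policy.

from dataclasses import dataclass

@dataclass
class FlashDevice:
    host: str
    capacity_gb: int
    used_gb: int = 0

@dataclass
class Policy:
    mode: str        # "write-through" (reads only) or "write-back" (reads and writes)
    cache_gb: int    # how much of the pooled flash this VM may use

class FlashHypervisor:
    def __init__(self):
        self.devices = []       # every host's flash joins the same logical pool
        self.assignments = {}   # VM name -> device its cache lives on

    def add_device(self, device):
        """Any host's PCIe flash card or SSD joins the acceleration tier."""
        self.devices.append(device)

    def accelerate(self, vm, policy):
        """Place the VM's cache on the least-used device with enough free space."""
        candidates = [d for d in self.devices
                      if d.capacity_gb - d.used_gb >= policy.cache_gb]
        if not candidates:
            raise RuntimeError("no flash capacity left; add another device to scale out")
        device = min(candidates, key=lambda d: d.used_gb)
        device.used_gb += policy.cache_gb
        self.assignments[vm] = device
        print(f"{vm}: {policy.mode} cache of {policy.cache_gb} GB on {device.host}")

fh = FlashHypervisor()
fh.add_device(FlashDevice("esx-01", capacity_gb=400))
fh.add_device(FlashDevice("esx-02", capacity_gb=800))
fh.accelerate("erp-db", Policy(mode="write-back", cache_gb=200))
fh.accelerate("mail", Policy(mode="write-through", cache_gb=100))
```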

A flash hypervisor virtualizes server-side flash into a clustered acceleration tier that delivers scale-out storage performance independent of storage capacity.

 

Flash hypervisors fundamentally change datacenter design. Gone are the days when storage was designed with performance and capacity in one tier. For the first time ever, storage performance can cost effectively scale out according to demand.

For example, a traditional midrange SAN costs about $100,000, and delivers around 50,000 I/O operations per second (IOPS). To double this performance, one must buy a new SAN, which doubles the total price to $200,000. In contrast, a flash hypervisor can deliver around 100,000 IOPS on a single flash device, which is twice the performance at less than one tenth the cost of the SAN (less than $10,000).


To double the amount of IOPS, one must simply add another inexpensive flash device to the flash hypervisor cluster. The result is substantially higher storage performance at a fraction of the cost of a SAN alone.
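As a quick back-of-the-envelope check using the round figures above (hypothetical list prices, not quotes), the cost-per-IOPS gap works out like this:

```python
# Back-of-the-envelope arithmetic with the round figures quoted above.
san_cost, san_iops = 100_000, 50_000      # traditional midrange SAN ($)
flash_cost, flash_iops = 10_000, 100_000  # one server-side flash device, priced at the upper bound

print(f"SAN:   ${san_cost / san_iops:.2f} per IOPS")      # $2.00 per IOPS
print(f"Flash: ${flash_cost / flash_iops:.2f} per IOPS")  # $0.10 per IOPS

# Scaling to a 200,000 IOPS target:
target = 200_000
print(f"SANs needed:          {target // san_iops} -> ${(target // san_iops) * san_cost:,}")
print(f"Flash devices needed: {target // flash_iops} -> ${(target // flash_iops) * flash_cost:,}")
```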

What makes a flash hypervisor different from traditional server-side flash caching solutions? Below are the key criteria that make this technology unique, and that IT departments should evaluate in the context of their own environments:

  • Seamlessly works with all VMs, hosts, and storage
  • Supports heterogeneous flash devices (PCIe and SSD)
  • Cluster technology transparently supports all virtual machine operations, such as vMotion, DRS (Distributed Resource Scheduler) and HA (High Availability). This ensures virtual machines can move around freely between hosts without impacting application performance.
  • Supports read and write acceleration (with replication between flash devices for fault tolerance). This ensures that all read- and write-intensive applications will benefit; see the sketch after this list.
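On that last point, here is a minimal, purely illustrative sketch of why write acceleration pairs with replication between flash devices: a write is acknowledged once it sits on two hosts’ flash, so losing one host does not lose data that has not yet reached the backing array. The class below is a toy model, not any product’s implementation.

```python
# Toy model of write-back caching with replication between flash devices.
class WriteBackCache:
    def __init__(self, local_host, replica_host):
        self.local_host, self.replica_host = local_host, replica_host
        self.local_flash = {}    # blocks on this host's flash
        self.replica_flash = {}  # copies on a peer host's flash
        self.dirty = []          # blocks not yet flushed to the backing array

    def write(self, block, data):
        self.local_flash[block] = data    # land on local flash (fast)
        self.replica_flash[block] = data  # mirror to a peer for fault tolerance
        self.dirty.append(block)
        return "ack"                      # acknowledged at flash speed, not array speed

    def destage(self, array):
        """Later, in the background, flush dirty blocks down to primary storage."""
        while self.dirty:
            block = self.dirty.pop(0)
            array[block] = self.local_flash[block]

array = {}
cache = WriteBackCache("esx-01", "esx-02")
cache.write("lba-42", b"invoice data")
cache.destage(array)
print(array)  # {'lba-42': b'invoice data'}
```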

Storage has long been a war-of-the-boxes, with marginal innovation. Flash hypervisor technology signifies a giant leap from the storage status quo. It brings a scale-out microsecond-level storage acceleration tier to every workload, in every virtualized datacenter. Its enterprise-class features and unparalleled benefits make it a strategic infrastructure investment that will fundamentally change datacenter storage design.

Jeff Aaron is vice president of marketing at PernixData, and has almost two decades of experience working in high-tech software, networking, and telecommunications companies.

Source: InformationWeek
