For example, have a look at Sydney Airport's departure timings for today (both domestic and international): you will find more of VA's flights were delayed, by 2 to 2.5 hours each on average, while Jetstar, who were also down, averaged about an hour's delay.
Even their own website says bugger all except a couple of lines.
Travel Alerts | Virgin Australia
The comments on Virgin's Facebook status regarding this have made my day.
Update: Apparently a Power Failure at a data centre in Sydney is the cause of everyone's pain this morning.
Why a single power failure caused so much havoc, I don't know....
My understanding this morning is that while the UPS was working fine, the air conditioning failed which led to some shutdowns.
You'd have thought they'd have a second data centre somewhere else that could be switched to...
All systems have troubles from time to time. That's why they have manual processes in place.
Yes - considering the Sept 2010 problems may have cost VA around $20M (which may be a bit overblown, IMO), then even if today's disruptions only cost $1M in extra wages, lost revenue, planes and aircrews being out of position, back-office manual paperwork, cancellations, refunded exit-row fees, aircraft stranded once the SYD curfew kicks in tonight, etc. - and considering that VA knows Navitaire can fall over at the drop of a hat - then I guess they can justify spending at least $1M training everyone to use Sabre? And training everyone on manual backups if Sabre ever falls over?
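Just to put rough numbers on that argument - this is purely back-of-the-envelope, using the figures quoted above as assumptions, not anything Virgin has published:

# Rough break-even sketch using the figures quoted in this thread (all assumed/illustrative).
sept_2010_cost = 20_000_000   # reported cost of the Sept 2010 outage (possibly overblown)
todays_cost = 1_000_000       # conservative guess for today's disruption
training_cost = 1_000_000     # hypothetical spend on Sabre + manual-backup training

# If the training avoids even one outage of today's size, it pays for itself;
# against a Sept-2010-scale event it is a rounding error.
print(f"Outages avoided to break even: {training_cost / todays_cost:.1f}")
print(f"Training cost as a share of the 2010 outage: {training_cost / sept_2010_cost:.1%}")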
What about the people that "will never fly Virgin again" or will see VA as an LCC because of this? Brand damage is a bigger issue than the immediate costs.
Given that they are so dependent on their systems to operate, I am gobsmacked that they don't have Disaster Recovery functionality that can switch on a "replacement" system much more quickly.
A DR plan normally takes considerable time to enact. The term used for rapid fail-over to an alternate location/system is generally "Business Continuity". The technical requirements for a BC plan are generally a lot more complex than for a DR plan. In many cases, a DR plan may have an RTO (Recovery Time Objective) in excess of 24 hours and an RPO (Recovery Point Objective) of 8-12 hours. So a traditional DR plan may well be in place, but it is not going to be activated unless there is a declared disaster event, which implies a return to normal operations is at least 24 hours away.
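For anyone who doesn't live in DR-speak, a minimal sketch of what those two numbers actually mean in practice - the 24-hour RTO and 8-hour RPO are just the illustrative figures from the post above, and the timestamps are made up:

from datetime import datetime, timedelta

rto = timedelta(hours=24)   # Recovery Time Objective: how long until the system must be back
rpo = timedelta(hours=8)    # Recovery Point Objective: how much recent data you accept losing

failure_time = datetime(2011, 1, 1, 6, 0)             # hypothetical moment the data centre drops out
last_good_backup = failure_time - timedelta(hours=5)  # most recent backup/replica before the failure

service_restored_by = failure_time + rto
data_loss_window = failure_time - last_good_backup

print(f"Must be running again by: {service_restored_by}  (RTO = {rto})")
print(f"Data at risk: everything from the last {data_loss_window}, within the {rpo} RPO")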
We have RPOs as low as four hours. We also have systems that are replicated in real time, so we have actually switched servers (simulating a failure of one) and the business didn't notice.
Yes, I have designed systems with even lower RPO (1 hour for some systems), but RTO still generally remains much higher. I think the lowest RTO I have seen for a true off-site DR solution is 12 hours, and that was just for the core application, with other apps being 24 to 48 hours in a prioritised list.
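That "switched servers and the business didn't notice" trick is basically synchronous replication with automatic failover. A toy sketch of the idea is below; the class names and the SYD-DC / MEL-DC labels are mine and purely illustrative, not how any airline's system is actually built:

class Server:
    def __init__(self, name):
        self.name = name
        self.data = {}
        self.alive = True

    def write(self, key, value):
        self.data[key] = value

class ReplicatedPair:
    """Toy model: every write goes to both nodes, reads go to whichever node is alive."""
    def __init__(self, primary, standby):
        self.primary, self.standby = primary, standby

    def write(self, key, value):
        for node in (self.primary, self.standby):
            if node.alive:
                node.write(key, value)   # synchronous replication: effectively zero data loss (RPO ~ 0)

    def read(self, key):
        node = self.primary if self.primary.alive else self.standby  # instant failover (RTO ~ 0)
        return node.data.get(key)

pair = ReplicatedPair(Server("SYD-DC"), Server("MEL-DC"))
pair.write("booking:VA123", "confirmed")
pair.primary.alive = False               # simulate the primary data centre going dark
print(pair.read("booking:VA123"))        # the standby answers; the "business" doesn't notice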
I used DR as a generic term, as the failure of a single system would often not be a DR or BC situation. It is also something that I find businesses do not plan for adequately - they invest many $ on BCP or DR to cover the entire business or to get up and running when a building burns down, but don't have much of a plan for when Joe spills a cup of coffee over the server hosting the core applications.
Well I certainly hope that Joe is not taking his coffee into the data centre computer room environment ...