Report - IT 'glitch' grounds Qantas (resolved)

In my experience over the past 3 or so decades, the IT industry tends to be rather cyclic.

Say you start with all your resources in house. Awesome: local knowledge, specialised knowledge, great. Then management sees staff costs skyrocket, and they see flashy pitches promising to cut costs by outsourcing functions, at first to other firms and then potentially to offshore resources at a fraction of the cost. The bean counters have accountingasms and cry yes for the bottom line. Yes! Yes! Yes! I'll have some of what they're having!

Wait a few years... 3-5...

The outsourcing causes unexpected issues. Maybe it's comms; maybe it's the interaction / integration between what is left of local management and the outsourced assets (programmers, operations, data centres, whatever); maybe it's getting things done in a timely manner through the bureaucracy. In the job I mentioned in a previous post there was a crazy system where I, as a contract resource working for a middle-man consultancy (one of the big firms that shall remain nameless) on behalf of the big telco, had to route any request for simple operational work through layers of sign-off: submit the request to the telco manager for approval, then it went to the operations provider (that aforementioned three-letter company) in Sydney for another approval, and finally to the guys on the floor to actually do whatever was required. Even though I could call them and we both knew getting X done was a two-minute job, it could take hours to get all the approvals and paperwork through the system. Maddening at times! Or perhaps the quality of the work is not up to par, or the outsourcers are not meeting their obligations, etc... so...

Management decides, "I know! Let's bring those functions back in house...", goes on a hiring spree, finds resources, etc...

.....

Rinse and repeat.


Now, in the age of cloud platforms and all the rest this is probably a bit different than it was, but I have noticed that the general principles still hold. The trade-off: local knowledge and skills (and by local I mean in house, more than a comment on onshore vs offshore) are more costly, but probably better in the long run. However, it is also tied to the way higher management roles change in cycles too, so a new CEO or CIO might come in and want to "make their mark" by changing everything. I'm sure everyone is well aware that the first thing that usually happens is a reorganisation of some sort (I have lost count of the number I have been through over the years), and then a new person will come in, blame the last lot for the problems (real or otherwise) with various things, and reverse it... hence cycles.


Anyway, how does this relate to QF? I imagine there's a bit of this. Probably also, with pandemic cost cutting, a lot of knowledge was potentially let go (as in all areas) and that has an effect. We like to joke about the Work Experience Kids, but there could be some truth to it in the sense that whoever is doing the back-end work now has lost a lot of the IP. That takes time to regain and learn, no matter the industry or who is tasked to do it.

And QF, as noted earlier, is beholden to many third-party platforms and providers... CRM (which I think is Salesforce?), Amadeus (of course), and all the rest. All of these things form very complex systems... and integration with legacy systems is hard (and usually costly).

Side story: I know of a certain organisation using a very old finance system (we're talking >20 years old). The cost to upgrade or migrate was too much, so it was consistently put off. In order to at least get some better functionality going, they contracted an outsourced company in a certain foreign locale to integrate the old system with a modern provider for invoicing purposes. The project, originally scoped at around six months or so I believe, had run to 18 months with constant delays due to the third party's inability to do the work, issues with test cases, and potential misunderstanding of requirements, and it ended in a situation where most of the additions worked, but not to the satisfaction or requirements of the organisation. The project was eventually canned and all of the (very costly) work undone and left unused. Last I heard they were looking to make the (even more costly) choice to migrate to a modern platform (which is what should have been done a decade earlier, but we digress).

Just an example of how apparently lower-cost options can go belly up (and be way over time estimates in the process).
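
For a flavour of what that sort of integration glue involves, here's a rough sketch in Python. Everything in it is made up - the file layout, field names, endpoint and token are placeholders rather than anything the organisation actually used - but it shows where these projects tend to bog down: the field mapping between the old system's conventions and the new provider's schema, and the error handling around it.

# Hypothetical sketch only: pull invoice rows out of a legacy system's nightly
# CSV export and push them to a modern invoicing provider's REST API.
# The file layout, field names, endpoint and token are all invented.

import csv
import json
import urllib.request

LEGACY_EXPORT = "legacy_invoices.csv"                            # assumed nightly dump
PROVIDER_URL = "https://invoicing.example.com/api/v1/invoices"   # placeholder endpoint
API_TOKEN = "changeme"                                           # placeholder credential

def map_legacy_row(row: dict) -> dict:
    """Translate the old system's column names and conventions into the provider's schema."""
    return {
        "external_id": row["INV_NO"].strip(),
        "customer": row["CUST_CODE"].strip(),
        "amount": int(row["AMT_CENTS"]) / 100,   # legacy system stores cents as an integer
        "currency": "AUD",
        "issued_on": row["INV_DATE"],            # assumes the export date is already ISO format
    }

def push_invoice(payload: dict) -> int:
    """POST one invoice to the (hypothetical) provider and return the HTTP status."""
    req = urllib.request.Request(
        PROVIDER_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {API_TOKEN}"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

def main() -> None:
    failures = []
    with open(LEGACY_EXPORT, newline="") as fh:
        for row in csv.DictReader(fh):
            try:
                status = push_invoice(map_legacy_row(row))
                if status >= 300:
                    failures.append((row.get("INV_NO"), f"HTTP {status}"))
            except (KeyError, ValueError, OSError) as exc:
                # missing/garbled fields or network errors: record and keep going
                failures.append((row.get("INV_NO"), str(exc)))
    print(f"{len(failures)} rows failed:", failures[:5])

if __name__ == "__main__":
    main()

Multiply that mapping function by a few hundred fields and a dozen edge cases lurking in the legacy data, and you can see how six months turns into eighteen.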
 
No mention of what the actual failing was, but it halted all of its aircraft movements (in Australia, I assume).
I'd guess it's possibly a dispatch system of some sort in that case. While crews can manually do things like weight and balance and fuel calculations (I think), all of these things do go into dispatch systems at the company level... which co-ordinate things like flight plans and all that fun stuff.
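
To give a sense of the kind of calculation that can be done by hand but normally flows through those company-level systems, here's a toy weight-and-balance check in Python. The figures and limits are entirely made up; a real dispatch system works from aircraft-specific data and far more inputs (fuel burn, trim, MEL items and so on).

# Toy weight-and-balance check. All figures and limits below are invented
# purely for illustration.

from dataclasses import dataclass

@dataclass
class LoadItem:
    name: str
    weight_kg: float   # weight of this item
    arm_m: float       # distance of its centre of gravity from the reference datum

def weight_and_balance(items, max_takeoff_weight_kg, cg_limits_m):
    """Sum weights and moments, then check the totals against the (made-up) limits."""
    total_weight = sum(i.weight_kg for i in items)
    total_moment = sum(i.weight_kg * i.arm_m for i in items)
    cg = total_moment / total_weight            # centre of gravity position
    fwd_limit, aft_limit = cg_limits_m
    ok = total_weight <= max_takeoff_weight_kg and fwd_limit <= cg <= aft_limit
    return total_weight, cg, ok

if __name__ == "__main__":
    load = [
        LoadItem("empty aircraft", 42_000, 16.2),
        LoadItem("passengers + bags", 14_500, 17.0),
        LoadItem("fuel", 12_000, 16.8),
        LoadItem("cargo", 3_000, 19.5),
    ]
    weight, cg, ok = weight_and_balance(load, max_takeoff_weight_kg=78_000,
                                        cg_limits_m=(15.8, 18.0))
    print(f"TOW {weight:,.0f} kg, CG {cg:.2f} m, {'within' if ok else 'OUTSIDE'} limits")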

The strange thing is that the reports I heard all talked about huge lines at security (at least in MEL), which would possibly be something else. I can't see local security being affected by any QF operational issue... even if check-in was down then, in theory, they wouldn't even be getting to the security line, I would think (and there was no report that the issues were with res or check-in).
 

Perhaps someone ordered that no more QF pax be admitted airside, lest the downtime become a very long time and the place get over-crowded. Plenty of pax would have checked in on-line...
 

Your first few paragraphs on the outsourcing cycle outline exactly what happened with the Customs IT systems in the late 1990s/early 2000s. The whole system was outsourced to a single multinational contractor. Most of the staff in those areas transferred over, often at higher wages. Actions that previously could be done immediately became bogged down in the contractor's internal processes. And most equipment became costlier to obtain: we had to 'buy' laptops, printers etc. through the contractor rather than just walking to the corner store, which usually could supply them cheaper and faster. The contractor claimed the extra costs were justified by the support they provided for the goods, but the normal product warranties were just as extensive. In the end, the outsourcing to a single supplier increased costs and reduced efficiency. After the 5-year (I think) contract period, control of the IT systems was taken back in house. Multiple outside contractors still did much of the work, but oversight was returned in-house.

 
No mention of what the actual failing was, but it halted all of its aircraft movements (in Australia, I assume).
...on an IT issue that caused delays of around one hour for some of our flights on Sunday afternoon. The IT outage impacted the communications system between the aircraft and maintenance team that enables electronic sign-off of engineering tasks prior to departure. Impacted teams reverted to a manual process until the outage was resolved, with around 15 flights directly impacted and some minor flow-on delays. Thank you to our teams who worked quickly to resolve this issue and for those on the frontline who communicated with our customers about the delay.

Perhaps someone ordered that no more QF pax be admitted airside, lest the downtime become a very long time and the place get over-crowded. Plenty of pax would have checked in on-line...
No, that was just purely a security staffing issue. Not related whatsoever.
 