Should Qantas sack their website team?

Status
Not open for further replies.
The general public will, and that's the problem when you release software to a large audience.

Which is why I suggested releasing it to a professional testing team first.
The problem is a lot of IT professionals are like economists. They don't write software which works for imperfect end-users/consumers.
Instead they expect imperfect end-users to perfect themselves.
 
Can anyone recall that informative post from an IT professional 2 years back? He outlined the difficulty in translating front-end concepts to functional websites. After reading that post I whined less about the usability of websites. If you recall the post, please repost the link.

It was just another "the QF IT work-experience kids are at it again" whinge.
 
There is a glitch on the Qantas website when booking SYD-CGK. If you search in economy class, business class flights also come up: for SYD-CGK on 20 April and CGK-SYD on 25 April, the combined J class fare shows up as $2,106 return. However, if you search the same flights in business class directly, the fare comes up as $1,877. I called Qantas and they said there was a fare reduction, so they won't refund me.
 
I called them again and they said they would look into it; it must be an interface glitch between their booking system and the website. On other travel booking sites such as Expedia, the same fare is selling at about $1,700-something. I know most people would not book a J class fare by selecting Y class from the drop-down box and then clicking on J class there, but it shows a higher fare. As the J class fare is not much more than a fully flexible fare, you might just want to see what it is.
 
......Programming is one of those things, everyone assumes it's easy .....

harvyk, I certainly DO NOT assume programming is easy. But IMHO there is just no conceivable excuse for designing a search system that failed the way the QF system did in searches for J product. Absolutely no other way to describe what they achieved than as a complete "fail".

I would fully expect that a system so large as the QF site would always have bugs and stuff - that is normal. But that J search stuff-up demonstrated failures of both the initial programming, and of the subsequent testing / reviewing when it went live.

QF has AFF at its disposal - a very good pool of reviewers and testers. It is a group of all sorts of people who not only do a lot of flying (and hence use online systems a lot) but also take the time to come online here and sensibly debate them.

If members here cannot get the system to perform the way a consumer needs, what hope do "mums and dads" have??
 
QF is a big company that is not an IT specialist, so IT will be considered a cost centre, and getting anything past the bean counters is likely to be a struggle.

QF have a website and IT systems because they know they drive business, but like all such cost centres, if QF thought they could do just as well without IT systems you can bet your bottom dollar those systems would be scrapped faster than you could blink.

Using the example of "safety equipment": QF do look after the things which are their primary business (such as keeping their planes flying), in much the same way a company that sells safety equipment would make sure the stuff it sells is A1.

In my earlier post I included management in my sights, don't worry. But unless the keyboards are snatched from under the programmers' fingers, there is no excuse for launching non-functional links or stuff that fundamentally doesn't work.
 

There is a glitch on the Qantas website when booking SYD-CGK. If you search in economy class, business class flights also come up: for SYD-CGK on 20 April and CGK-SYD on 25 April, the combined J class fare shows up as $2,106 return. However, if you search the same flights in business class directly, the fare comes up as $1,877. I called Qantas and they said there was a fare reduction, so they won't refund me.

Isn't that where this comes into play?

Qantas Best Price Guarantee
 
In my earlier post I included management in my sights, don't worry. But unless the keyboards are snatched from under the programmers' fingers, there is no excuse for launching non-functional links or stuff that fundamentally doesn't work.

The keyboards don't need to be snatched out from under programmers' fingers for bad software to be released. It's entirely possible that code is checked in to the repository ready for release and a bug is then found in later testing. Assuming they are doing proper task management, a bug will be raised in their bug tracker, but that still relies on the bug being assigned to a programmer to look at. Depending on the severity of the bug, it's entirely possible that no developer looks at it before the release manager pushes the code in the repository out to production.


You also need to remember that in a large, complex system it's entirely possible for a change in one area to unexpectedly affect a completely different area. Take the 404 for example: it's possible that a change to a filename will cause a 404 error. Whilst modern tools have what is called refactoring, where both a thing and everything referring to it are renamed together, it's not perfect.
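To illustrate that refactoring point, here is a minimal Python sketch (the routes and links are made up for this example) of how a rename missed by refactoring leaves a link pointing at a page that no longer exists, and how a simple automated consistency check could catch it before release:

```python
# Hypothetical illustration: after a page rename, any link still
# pointing at the old name produces a 404. Checking every embedded
# link against the route table catches this before release.

routes = {"/booking", "/flight-status", "/frequent-flyer"}  # pages that exist

# Links embedded in pages. "/ffp" was renamed to "/frequent-flyer",
# but one page was missed by the refactoring tool.
links_in_pages = ["/booking", "/ffp", "/flight-status"]

def find_broken_links(links, known_routes):
    """Return every link that no longer matches a known route (a 404)."""
    return [link for link in links if link not in known_routes]

print(find_broken_links(links_in_pages, routes))  # -> ['/ffp']
```

A check like this is cheap to run on every build, which is exactly the sort of thing that stops a missed rename from reaching production.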


There is a condition nicknamed "fear of release". Basically, if you wait until software is perfect before releasing, you will literally end up with nothing. I've seen it time and time again: software developers so worried about potential bugs in their code that they produce exactly zero code.


To give you a bit of an idea of bug severity, and why software might be released with known bugs: at work we use the following levels, and I suspect QF would have a similar system in place.


- Show-Stopper - These must get fixed immediately and a patch released as soon as possible. These sorts of bugs will cut weekends short and result in very late nights. Typically they are things which stop the system from operating or present a massive security risk. To put this into a QF context, I would consider show-stoppers to include a bug which allowed access to a FF profile without a password, the booking system stating a booking has been made without actually recording it, or offering SYD-LAX F class for $10.


- Critical - These are bugs which must be fixed in the next release (whenever that is). These bugs are fixed during normal business hours. An example might be an intermittent problem which prevents seat selection, but where an error message appears if the seat selection was unsuccessful.


- High Priority - These are bugs which need to be fixed, but it may take several releases to do it (to allow time to fix Criticals or Show-Stoppers). An example might be a 404 error if certain links are clicked on where those links are unlikely to be clicked often. Yes it looks messy if you happen to click on one of those links, but it's not going to stop you from using the system.


We also have normal and low priorities, however for bugs they would rarely be used and almost certainly never for bugs which can be seen by the end users.

Bugs fit into the priority list alongside features and other tasks (thus the reason we could file a bug at a normal or low priority). Bugs don't automatically get preference over features and tasks; it depends on the importance of the feature vs the severity of the bug.
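As a rough sketch of how severity levels like these might be encoded, the following Python is purely illustrative - the level names and the release rule are assumptions for this example, not QF's or any real team's actual system:

```python
from enum import IntEnum

# Hypothetical severity ladder mirroring the levels described above.
class Severity(IntEnum):
    LOW = 0           # rarely used for user-visible bugs
    NORMAL = 1        # rarely used for user-visible bugs
    HIGH = 2          # fix over the next several releases
    CRITICAL = 3      # must be fixed in the next release
    SHOW_STOPPER = 4  # patch immediately, weekends be damned

def can_release(open_bugs):
    """A release can ship only if no open bug is a show-stopper."""
    return all(bug < Severity.SHOW_STOPPER for bug in open_bugs)

print(can_release([Severity.HIGH, Severity.CRITICAL]))  # True
print(can_release([Severity.SHOW_STOPPER]))             # False
```

The point of the sketch is simply that "known bugs" and "ready to release" are not contradictory: the gate only closes on the top level.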

If you have more questions about my chosen profession please feel free to ask them, more than happy to answer. (BTW, being serious, not sarcastic)
 
Good summary, Harvyk

I use the QF site regularly and have very few issues. I've even adapted to the new design.

I can't even use the VA site without opening an incognito window or clearing my cookies - this must be costing them so much money, as it is impacting many other people (see the thread in the VA forum). Even once in the incognito window, basic functionality is broken or inconsistent.

As I said before, the QF site is a breath of fresh air in comparison. Can it be improved? Yes! But compared to their domestic competition, they are streets ahead.
 
<snip for space>
If you have more questions about my chosen profession please feel free to ask them, more than happy to answer. (BTW, being serious, not sarcastic)

OK, I'm learning about the processes, thanks :) . Lots of discussion about bug tracking, priority of bug fixes and I sort of get all that. Meeting rooms with coughpy melamine furniture, whiteboards and discussions that us mere mortals wouldn't understand.

But what about the bug identifier called the ol' Mark 1 Mug Punter? Give it to 6 people for a couple of hours and I dare say most of the bugs that affect the users will come out. Again, I appreciate that this is a management-type process and not a software-programmer-type process, but my points are about the QF website, not only the programmers.

An urgent fix obviously needs to be done ASAP, but the other fixes you talk about seem to be of a nature where 2 days to fix, verify and re-test would not seem to be funds mis-spent when the value of a properly functioning website is considered. This is what I think a number of us just don't get. It's a complex thing, so unintended consequences will happen. But for goodness sake, allow for that and don't impede the punters from making bookings and helping to pay the directors' emoluments.
 
harvyk, I certainly DO NOT assume programming is easy. But IMHO there is just no conceivable excuse for designing a search system that failed the way the QF system did in searches for J product. Absolutely no other way to describe what they achieved than as a complete "fail".

I would fully expect that a system so large as the QF site would always have bugs and stuff - that is normal. But that J search stuff-up demonstrated failures of both the initial programming, and of the subsequent testing / reviewing when it went live.

QF has AFF at its disposal - a very good pool of reviewers and testers. It is a group of all sorts of people who not only do a lot of flying (and hence use online systems a lot) but also take the time to come online here and sensibly debate them.

If members here cannot get the system to perform the way a consumer needs, what hope do "mums and dads" have??

Part of the problem, again, can be that the initial requirements were incomplete or incorrect. That's what gets tested against. A big part of the problem in the software industry is that the customer (internal or external) doesn't actually know what they want.
 
The customer wants good data.....
And you're right, they have no idea about what good data means..

harvyk outlined a great summary of the waterfall system of older technology builds, upgrades, fix work and rollbacks - slow to market because it all needs to be 100% perfect before release. There was also the comment about producing no code because it's not perfect. That's risk-averse behaviour...

The new scaled agile framework philosophy would mean an MVP (Minimum Viable Product) going onto the market for a small set of your customers to test. It's a good risk assessment where you open-source the testing and get it done "free" by a small group of the public instead of highly paid professionals.

It means time to market is much, much shorter, and that new features get tested after release, not before. It changes the assumption from "all code will have errors in it" to "most code is written properly, so we don't need to endlessly test it". And where it is wonky, open-source testing will pick it up quickly, and we can fix it quickly because we aren't endlessly testing good code that already works, wasting time and effort on things that work perfectly well. We shift the use of time from risk-averse activity to useful outcomes.

The earlier question - about the wrong J fare price being offered on the Y fare page as a comparison, inducing the higher-priced purchase when the J fare page had the fare $500 cheaper - is an instance of "feeling duped". Should the customer get a refund of the difference? Well, people do expect internal consistency, even where it's buyer beware and most customers wouldn't go back and recheck pricing.
Now, does this mean that if I do the trip in reverse, eg Bangkok-Sydney instead of Sydney-Bangkok, I ought to pay the same price? That's not how markets work... Price pointing means same seat, different pricing, and can often be quite distorting - eg 20% of the reverse journey price. Just think of Grand Final flights Perth-Melbourne and the complaints about $1,200 one-way tickets when cheap Red Deal fares were like $199 or $249. Our high wages mean we can and do pay more.

As a measure of goodwill from QF YES they ought do so.....
 
I was keeping my answer a little methodology agnostic as bug support post release is relatively the same (with some minor differences in how exactly fixes get released). Agile more affects how features are developed in the first place rather than how bugs are repaired post release.


In the waterfall method you would have a perfect design document which is fully developed prior to a single line of code being written. It outlines every last thing to do with the software ( / website) and its features, and nothing is released until the entire document is implemented in accordance with the original spec.


In Agile (and it's important to note there are many different flavours of Agile development, so this is in general only), there is no formal document outlining what is required. Instead there tends to be a high-level document outlining the basic outcomes desired. No features are formally written down; instead you have constant meetings with customers to work out how a feature is to be developed. One of the key things with these meetings is that they are frequent (eg weekly or fortnightly) and only cover a couple of weeks' worth of work at a time, with the intention of releasing the software to the customer at the end of that fortnight.

The fastest release cycle I've ever heard of was daily: a company I deal with a fair bit has a new release of one of its products every day, typically all minor tweaks. That's pretty unusual; the fastest I've worked under was a weekly release cycle, with fortnightly the most common I've seen.

In regards to software testing, there is a thing called test driven development. In that case the developer comes up with the test which proves their software works, and then builds their software. In reality I've never seen it work as expected, and often additional tests are developed as bugs are discovered.
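A minimal sketch of the test-first idea, in Python with a made-up `fare_total()` function and made-up figures: in TDD the checks below would be written first (and fail) before the implementation exists to make them pass.

```python
# Hypothetical TDD illustration. In test-driven development,
# test_fare_total() is written before fare_total() exists; the
# implementation is then written to make the test pass.

def fare_total(base, taxes):
    """Return the all-inclusive fare; negative inputs are rejected."""
    if base < 0 or taxes < 0:
        raise ValueError("fares cannot be negative")
    return base + taxes

def test_fare_total():
    # These checks define the expected behaviour up front.
    assert fare_total(1700, 177) == 1877
    try:
        fare_total(-1, 0)
    except ValueError:
        pass  # rejection of bad input is the expected behaviour
    else:
        raise AssertionError("negative fares should be rejected")

test_fare_total()
print("all tests pass")
```

As the post says, in practice the initial tests rarely cover everything; new ones get added as bugs are discovered, which is still better than having no regression net at all.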


In terms of releasing software as "open source" and allowing the community to do the testing, I personally have a few problems with that. First of all, I like to get paid for my work (at the conference I attended last year in SEA, it was extremely refreshing to hear software developers talking about working not for the love of it but for the money in it). Second, unless you are working on a really sexy project (eg a Linux release, OpenSSL, jQuery) you are not going to get people testing your software. Plus you then have the problem of how to monetise that work.


In terms of allowing the customer to do the testing, well, there you are releasing sub-standard software to the customer. Do that too many times and the customer just gets annoyed. Furthermore, as I've mentioned earlier, non-testers tend to use the software as they would, but software can easily be used one of 1,000 different ways, and there is no guarantee that John Smith will use it in any way remotely related to how 99% of others would. Plus the customer will sometimes come back and simply state they are too busy to test. Furthermore, they may or may not enter deliberately bad data (eg what happens if you enter a date of 30th February 2016?) - these are the sorts of things which a proper testing team, or at least a test plan, will cover. Of course, the person running the test plan should not be the same developer who developed the software.
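As a small illustration of that "deliberately bad data" check, here is a hypothetical Python validator (the function name is made up for this example) that a test plan would exercise with 30 February to confirm the system rejects it rather than crashing or silently accepting it:

```python
from datetime import date

# Hypothetical input validator: return a real date, or None when the
# day/month combination does not exist. datetime.date raises
# ValueError for impossible dates, which we turn into a clean rejection.
def parse_travel_date(year, month, day):
    """Return a date, or None if the combination is not a real date."""
    try:
        return date(year, month, day)
    except ValueError:
        return None

print(parse_travel_date(2016, 2, 30))  # None: 30 Feb does not exist
print(parse_travel_date(2016, 2, 29))  # 2016-02-29 (2016 is a leap year)
```

This is exactly the kind of edge case an end user testing "as they would" never tries, but a test plan lists explicitly.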


Finally, there is always a financial cost to all the above. I've lost count of the number of times I've been told to simply ignore a feature or a bug when I've given the customer an estimate of building the feature or fixing the bug. You get this problem more with Agile, as there tends to be a budget of a certain size put aside, but that size doesn't typically bear any relation to the actual cost of the project. In waterfall you tend to have a budget allocated based on the original design document, which is relatively close to what it will cost to achieve what is set out in that document, with a little bit left over to cover the unexpected.
 
...In terms of allowing the customer to do the testing, well there you are releasing sub-standard software to the customer. Do that too many times and the customer just gets annoyed.....

Yep, sounds like QF :)

Edit: The only reason I bag Qantas on some problems with the site is that I love Qantas and want them to have the best chance competing in this difficult aviation world !!
 
Yep, sounds like QF :)

Edit: The only reason I bag Qantas on some problems with the site is that I love Qantas and want them to have the best chance competing in this difficult aviation world !!

OTOH QF needs to fly from your airport if they are going to compete...
Now where does QF fly internationally from PER and ADL?

Happy wandering

Fred
 
<snip>
In terms of allowing the customer to do the testing, well, there you are releasing sub-standard software to the customer. Do that too many times and the customer just gets annoyed. Furthermore, as I've mentioned earlier, non-testers tend to use the software as they would, but software can easily be used one of 1,000 different ways, and there is no guarantee that John Smith will use it in any way remotely related to how 99% of others would. Plus the customer will sometimes come back and simply state they are too busy to test. Furthermore, they may or may not enter deliberately bad data (eg what happens if you enter a date of 30th February 2016?) - these are the sorts of things which a proper testing team, or at least a test plan, will cover. Of course, the person running the test plan should not be the same developer who developed the software.
<snip>.

Personally, I was suggesting 'offline' testing by a panel of punters pre-release; probably paid, but not 'regulars' or insiders. Testing live on the customers is what QF does, as juddles suggests.

Saying punter A uses it differently to B or everyone else is beside the point. They will find things that the developers and professionals missed (as we do seemingly every time QF tinkers with their site). I don't imagine the testing would be random - give them specific types of tasks, on different browsers etc, and throw in a few 'free rangers'.

Fact is, whatever Qantas has been doing to date, their web site in the hands of the public usually has easily found faults on release, and has some very annoying aspects on-going. Having the punters test out when changes are made wouldn't be the worst thing they could do.
 
