Tony Kern's book Flight Discipline [1] discusses the notion of discipline in relation to flight safety, drawing on his experience of flying and training, and on examples - almost invariably by way of a crash.
It is a testament to the aviation industry how safe flying has become through the application of safety procedures and techniques, and it underlines the very real need for pilots to be extremely disciplined in their work. The same concepts are now being applied in industrial and medical environments, as described earlier in this blog.
This week, however, has seen an accident at San Francisco airport involving Asiana Flight 214. The causes (note: plural!) are being investigated by the NTSB and I'm in no position to apportion blame, but I do think this will become quite a seminal accident in terms of what is found.
At the time of writing, the aircraft is believed to have been in a thrust-managed mode that should have maintained its speed; the crew somehow allowed the speed to decay and mismanaged the final stage of the landing.
There is a further set of possible contributing factors: CRM, crew fatigue, experience, the ILS being out of operation, the visual approach, and so on. This is of course compounded by the fact that hundreds of other aircraft have landed successfully at SFO in recent weeks under similar conditions.
Whether this is the fault of the pilots, a fault of the aircraft's automatics, or a combination of these and other factors is yet to be ascertained. One thing that seems certain is that the pilots failed to monitor a critical aspect of the aircraft's situation. In Kern's terms, the pilots' discipline in maintaining proper checks failed. This will be an interesting case to follow, possibly a seminal one with respect to discipline.
Turning to software engineering, and specifically to developing with security in mind, I have been discussing SQL injection attacks with a colleague. These attacks are remarkably easy to detect and defend against - though the vulnerable code is also remarkably easy to write. Given this, we still seem to be finding vulnerable code remarkably often.
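To make the contrast concrete, here is a minimal sketch in Python using the standard sqlite3 module (the users table and its data are purely illustrative). The first lookup builds its query by string concatenation and is injectable; the second uses a parameterised query, which keeps the input as data rather than as SQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_vulnerable(name):
    # Vulnerable: the input is concatenated into the SQL text,
    # so an attacker controls the structure of the query itself.
    query = "SELECT role FROM users WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # Defended: the placeholder keeps the input as data, never as SQL.
    return conn.execute("SELECT role FROM users WHERE name = ?", (name,)).fetchall()

payload = "x' OR '1'='1"
print(find_user_vulnerable(payload))  # [('admin',)] - the classic payload matches every row
print(find_user_safe(payload))        # [] - the same payload is just a non-matching name
```

The two functions differ by only a handful of characters, which is exactly why such code is so easy to write badly and, equally, so easy to scan for: any query assembled by string concatenation or formatting is an immediate suspect.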
So this leads to the question: why are we still writing code vulnerable to SQL injection if it is so easy to detect?
Kern puts forward four reasons for a breakdown in discipline, which we can transpose into our context:
1. Programmers don't know the procedures for developing and testing code that is susceptible to SQL injection (a test sketch follows this list).
2. Programmers don't know the techniques to avoid writing code vulnerable to SQL injection.
3. Programmers forget points 1 and 2.
4. Programmers willfully decide to ignore points 1 and 2.
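As a sketch of what the testing procedure in point 1 might look like, here is a minimal probe (reusing the find_user_safe function from the earlier sketch): it feeds classic injection payloads into the lookup and asserts that each is treated as a literal, non-matching name:

```python
import unittest

class SqlInjectionProbe(unittest.TestCase):
    # Classic injection payloads; a correctly parameterised lookup
    # must treat each one as an ordinary (non-matching) name.
    PAYLOADS = [
        "x' OR '1'='1",
        "'; DROP TABLE users; --",
    ]

    def test_payloads_match_nothing(self):
        for payload in self.PAYLOADS:
            self.assertEqual(find_user_safe(payload), [])

if __name__ == "__main__":
    unittest.main()
```

Run against the vulnerable version of the lookup, the first payload returns every row and the test fails - which is the point: the check is cheap, mechanical, and exactly the sort of routine discipline Kern is describing.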
Most programmers do know about the errors and pitfalls of particular implementation technologies, so the failure in discipline does not necessarily lie mainly there. Even where it does, it is simply corrected by training.
Regrettably, one of the excuses is always that the programmers are being "agile": up against deadlines, facing constantly and inconsistently changing requirements, working from poor or non-existent specifications, with little emphasis on design and thought, and with poor organisational communication. Errors can be fixed in a later sprint (if they are remembered, and if the fix is actually implemented). The result is that vulnerabilities such as SQL injection end up forgotten by the programmers.
Willful ignorance of vulnerabilities follows from the previous point. Since vulnerabilities are, luckily, not often exploited at run-time, and any detection methods in place will probably catch these errors, programmers can often get away with vulnerable code. Furthermore, responsibility for the code and its maintenance often does not fall to the original programmers or software engineering team.
Could it be that in software engineering we still have the hero mentality? Ironically, it was many of the early 'hero' pilots who actively developed the procedures and techniques for safe flying.
Unlike in software engineering, the economics of bad flying proved a powerful incentive for actively pursuing good flying techniques and, ultimately, very sophisticated safety programmes. Maybe the only way of solving the discipline problem in software engineering is to radically alter the economic argument.
Flying badly tends to end in deaths, often including the pilot's own (and it is probably even worse if he or she survives); the direct and collateral costs here are easily calculable. The economics of security errors in programming are neither well understood nor well calculated.
If a SQL injection attack releases one million records of personally identifiable information, why should the programmer care, especially when there are few or no consequences for the programmer, the software engineering team, the managers, or the business overall?
I don't suggest that we start killing programmers for their errors, but maybe we should alter the way we reward teams for getting software into production: the fewer changes and bug fixes over the first year of a product's life, the bigger the bonus, thereby inverting the bonus for bug fixing. Should we be more like hardware engineers and raise the cost of compiling code, so that a software engineering team gets only one chance per day to compile, with penalties for compilation errors?
To finish, here is Tony Kern talking about discipline and what makes a great pilot - now apply this to software engineering, privacy, security and so on:
References:
[1] Tony Kern (1998). Flight Discipline. McGraw-Hill. ISBN 0-07-034371-3.