Saturday, 27 July 2013

2001 again...

James Maynard Gelinas' blog post on "Underground Research Initiative", entitled 2001: A Space Odyssey - Discerning Themes through Score and Imagery, is a 22,000-word essay about the film 2001, and it is certainly as worth reading as Kubrick's masterpiece is worth watching.

In the 45 years since its release, while special effects have improved, the sheer grandeur of the film from many angles, especially that of science, has not diminished.

I can't say much more than what is already written in Gelinas' article, except perhaps that we got Facebook and Twitter instead, which, given the effect of the Monolith in the later books of Clarke's series, might have been a good thing (though it seems that while 3001 told us how to destroy a Monolith, another movie used the idea - and a Macintosh - so aliens (and humans too), beware of iPods).

OK, if I do add something: in 2001 you're probably going to see the best portrayal of a computer in any film (including the ones using Macintoshes).

Maybe I should lobby Finnkino to show it on their number one screen in Helsinki....

Monday, 22 July 2013

Children of the Magenta

Children of the Magenta is a term used to describe pilots who are overly reliant on automation rather than trusting their own skills in flying an aircraft. This video from an American Airlines training session, where the term "children of the magenta" was coined - after the magenta colour of the flight director bug, or icon - is essential viewing, not just for pilots but for anyone involved in driving, managing or building systems of any kind:

In particular I want to present the bullet points from one slide and then address another aspect regarding levels of automation:

Automation Dependency
  • The pilot flying should remain as 'one' with the aircraft in any low altitude maneuvering environment with the autopilot engaged
  • To maintain situational awareness of both aircraft performance and flight path.
  • The autopilot and autothrottles have limitations which affect performance
  • Crews are returning to the autopilot in an attempt to resolve a deteriorating situation
  • Autopilot and Autothrottles, however good, cannot recover the aircraft from a critical flight attitude

If we consider what we do in software engineering, especially in areas with strong privacy and security aspects such as any form of web and database development, do we end up relying upon our skills and technique, or do we blindly follow process and procedure? Do we follow waterfall or agile processes to their logical conclusion and attempt to beat that deadline and deliver something, knowing that the quality of the delivered product is deteriorating? Do we allow our processes to become our overly trusted autopilots?

Finally, three levels of automation were presented: low, medium and high. The lowest corresponds to hand flying, the medium to using the autopilot to guide the plane, and the highest to letting the flight management computer run everything.

I want to discuss these in a later post, but for now I'll leave it as an exercise to map these levels to how we act in a given software engineering project.

Probably the biggest takeaway from this lesson in aviation when applied to any aspect of software engineering is:
If you are losing control of any aspect of a project then the process won't correct it; only you as a software engineer will - if you take control.
And that applies to any aspect of the system: functionality, performance, security, privacy etc etc etc...


Sunday, 21 July 2013

Big Metadata

One of the issues I see with auditing systems for privacy compliance is actually understanding what data they are holding. Often the project teams themselves don't understand their databases and log files sufficiently. Worse, misinterpretation of the NoSQL and Big Data approaches has left us in a situation where schemata can be forgotten, or at least defined only implicitly at run-time. The dogma is that relational databases have failed, and that all this "non-agile", predefined, waterfall, up-front schema stuff is a major part of why.
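
As a toy illustration (plain Python, invented field names), here is how an "implicit schema" drifts when nothing enforces it, and what that does to every consumer of the data:

```python
# A "schema-free" collection: just a list of dicts, nothing enforces structure.
users = [
    {"name": "Alice", "email": "alice@example.com", "dob": "1980-01-01"},
    {"name": "Bob",   "mail": "bob@example.com"},           # field quietly renamed
    {"name": "Carol", "email": None, "dob": 19751231},      # type quietly changed
]

# Every consumer must now rediscover the schema at run-time and encode
# each historical variant of it in code.
def emails(records):
    for r in records:
        address = r.get("email") or r.get("mail")
        if address:
            yield address

print(list(emails(users)))   # ['alice@example.com', 'bob@example.com']
```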

Losing all this information about types and semantics is a huge problem, because we can no longer be fully sure of the consistency and integrity of the data, nor of the relationships of that data to other objects and structures.

We also lose the opportunity to add further information in the form of aspects to the data, for example security classifications, broad usage classifications, and so on. This leads to embedding much of the knowledge about the data statically into the algorithms that operate over that data, which in turn hides the meaning of the data away from the data itself.
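
A minimal sketch of the difference (the field names and classifications are invented): first the knowledge buried in the algorithm, then the same knowledge carried as metadata that the algorithm consults. Note how the first style silently misses that "name" is personal data too:

```python
# Style 1: the meaning of the data lives only inside the algorithm.
def export_for_analytics(record):
    # Whoever wrote this "knew" that email and dob are personal data;
    # that knowledge is invisible to anyone looking only at the data.
    return {k: v for k, v in record.items() if k not in ("email", "dob")}

# Style 2: the meaning travels with the data as an aspect.
FIELD_ASPECTS = {
    "name":  {"classification": "personal"},
    "email": {"classification": "personal", "usage": "contact"},
    "dob":   {"classification": "personal", "usage": "age verification"},
    "plan":  {"classification": "internal"},
}

def export_with_aspects(record, allowed=("internal",)):
    # A generic algorithm: it consults the metadata instead of hard-coding field names.
    return {k: v for k, v in record.items()
            if FIELD_ASPECTS.get(k, {}).get("classification") in allowed}

record = {"name": "Alice", "email": "a@example.com", "dob": "1980-01-01", "plan": "pro"}
print(export_for_analytics(record))   # {'name': 'Alice', 'plan': 'pro'}  <- leaked 'name'!
print(export_with_aspects(record))    # {'plan': 'pro'}
```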

I think the article entitled "Big Data success needs Big Metadata" by Simon James Gratton of Capgemini sums it up quite well: forgetting about the meaning of data seriously compromises our ability to understand, use and integrate that data in the first place!

To achieve this, good old-fashioned data classification and cataloguing is required. Ironically, this is exactly the stuff that database developers used to do before the trend of making everything schema-free.

Together with suitably defined aspects and ontologies that describe information (meta-information?), in much the same way as the OMG's MOF does but with additional structure and semantics, we already have much, if not all, of the required infrastructure.
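
As a rough sketch of what such infrastructure might look like in miniature (names and classifications invented, and far simpler than anything MOF-like): a catalogue that records which system holds what, queryable independently of the stores themselves, so the auditor's question "where do we hold personal data?" has an answer.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetEntry:
    system: str                                  # service or application that owns the store
    name: str                                    # dataset / table / collection name
    fields: dict = field(default_factory=dict)   # field name -> classification

class DataCatalogue:
    def __init__(self):
        self.entries = []

    def register(self, entry: DatasetEntry):
        """Called by a project at design/deploy time to record what it stores."""
        self.entries.append(entry)

    def holding_personal_data(self):
        """Answer the auditor's question: where do we hold personal data?"""
        return [(e.system, e.name, f)
                for e in self.entries
                for f, cls in e.fields.items() if cls == "personal"]

catalogue = DataCatalogue()
catalogue.register(DatasetEntry("crm-service", "customers",
                                {"email": "personal", "plan": "internal"}))
catalogue.register(DatasetEntry("web-frontend", "access_logs",
                                {"ip_address": "personal", "url": "internal"}))
print(catalogue.holding_personal_data())
# [('crm-service', 'customers', 'email'), ('web-frontend', 'access_logs', 'ip_address')]
```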

Then the process side of things needs to ensure that the development of systems, services, analytics and applications integrates with this, so that whatever data they store (locally or in the cloud) gets recorded (and updated!). That's probably the hardest part!


Friday, 19 July 2013

Systems Safety - Defining Moments

As I've been concentrating on "safety improvements", or at least on techniques for the improvement of systems, I've tended to focus on four areas:
  • Aviation
  • Industrial
  • Medical (specifically surgical)
  • Software Engineering (specifically information privacy)
The parallels between these areas, in what each defines as "safety" and in the techniques each uses, should be obvious. However, the question remains: what actually triggered each of these respective areas to take a more inherent-safety approach, and what will trigger a similar approach with regard to information safety?

 
Above diagram key: Y-axis, relative degree of safety embedded into the discipline; X-axis, years since the seminal incident.

Aviation safety's seminal moment was the 1935 crash of the Boeing Model 299 during a demonstration flight. Instead of blaming the pilots, effort was made to understand the causes of the accident and to develop techniques, most famously the pre-flight checklist, to help prevent similar accidents in the future.

For industrial safety the seminal moment was the 1974 Flixborough Disaster in the UK. This resulted in work on the design of industrial plants and the development of the notion of "inherent safety".

Surgical safety has quite a long tradition, especially through the development of anaesthetic safety from the 1960s onwards and the introduction of a proper systems approach. However, anaesthetists do not seem to feature as prominently as surgeons and doctors, so the fame would probably go to Peter Pronovost et al. for the central line checklist. This was probably one of the major contributors to the WHO Surgical Safety Checklist, discussed in detail in Atul Gawande's book The Checklist Manifesto, which brings together much of the above.

If you're still in doubt, maybe Atul Gawande's article in the New Yorker magazine entitled The Checklist: If something so simple can transform intensive care, what else can it do? (Dec 10, 2007) might help.

Getting back to the crux of this article: what is the incident that will cause a wholesale change in attitudes and techniques in software engineering, one that instils such a sense of discipline that we can eradicate errors to the degree that we could compare ourselves favourably with those other disciplines?

The increasingly frequent hacking and information leaks? The NSA wiretapping and mass surveillance? Facebook and Google's privacy policies? None of these has had any lasting effect on the very core of software engineering, if any effect at all. Which means either that we place a very low value on the safety of our information, or that the economics of software are so badly formulated in society that the triggering catastrophe would have to be huge enough to cause societal change.

Interestingly, in software engineering and computer science we're certainly not short of techniques for improving the quality and reliability of the systems we're developing: formal methods (e.g. Alloy, B, Z, VDM), proof, simulation, testing, and modelling in general. What we probably lack is the simplicity of a checklist to guide us through the morass of problems we encounter. In this last respect, I think we're more like surgeons than modern-day aviators; or maybe some of us are like the investigators of the 1935 Boeing crash and the other aviation heroes still learning their trade?
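
As a trivial, purely illustrative sketch (the items are mine, not from any standard), even a few lines of checklist with explicit stop points go a long way:

```python
# A deliberately simple pre-release checklist - the items are illustrative;
# the point is the pause points, not the tooling.
RELEASE_CHECKLIST = [
    "All database access goes through parameterised queries or an ORM",
    "Personal data fields are identified and classified",
    "Logs have been checked for accidental capture of personal data",
    "Error paths have been exercised, not just the happy path",
    "Someone other than the author has reviewed the security-sensitive code",
]

def run_checklist(items):
    """Walk the checklist interactively; refuse to 'take off' on any unanswered item."""
    for item in items:
        answer = input(f"{item}? [y/n] ").strip().lower()
        if answer != "y":
            print("Stop: resolve this item before release.")
            return False
    print("Checklist complete.")
    return True

if __name__ == "__main__":
    run_checklist(RELEASE_CHECKLIST)
```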




Thursday, 18 July 2013

Asiana Flight 214 and Lessons for Software Engineering


Tony Kern's book Flight Discipline [1] discusses the notion of discipline in relation to flight safety, drawing on his experience of flying and training, and on examples - almost invariably examples that end in a crash.

It is a testament to the aviation industry overall how safe flying has become, through the application of safety procedures and techniques and through the very real need for pilots to be extremely disciplined in their work. The same concepts are being applied in industrial and medical environments too, as described earlier in this blog.

This week, however, has seen an accident at San Francisco airport involving Asiana Flight 214. The causes (note: plural!) are being investigated by the NTSB and I'm not in any position to apportion blame, but I do think this will become quite a seminal accident in terms of what is found.

At the time of writing it is known that the aircraft was believed to be in a thrust-managed mode which should have maintained its speed, and that the crew somehow allowed the speed to decay and mismanaged the final stage of the landing. There is another set of possible contributing factors: CRM, crew fatigue, experience, the ILS being out of operation, a visual approach, and so on. This is compounded by the fact that hundreds of other aircraft have landed successfully at SFO in recent weeks under similar conditions.

Whether this is the fault of the pilots, a fault of the automatics on the aircraft, or a combination of those and a host of other factors is still to be ascertained. One thing that does seem sure is that the pilots failed to monitor a critical aspect of the aircraft's situation. In Kern's terms, the pilots' discipline in maintaining proper checks failed. This will be an interesting case to follow, possibly a seminal case with respect to discipline.

In terms of software engineering, or specifically some aspects of developing with security in mind, I've been discussing SQL injection attacks with a colleague: they are remarkably easy to detect and defend against, though also remarkably easy to write. Despite this, we still seem to find vulnerable code remarkably often.
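
A minimal sketch in Python with sqlite3 (table and data invented for illustration) showing just how small the difference between the vulnerable and the defended version is:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

user_input = "alice' OR '1'='1"   # a classic injection payload

# Vulnerable: user input is pasted straight into the SQL text.
query = "SELECT email FROM users WHERE name = '%s'" % user_input
print(conn.execute(query).fetchall())        # returns every row - the WHERE clause is defeated

# Defended: a parameterised query treats the input as a value, never as SQL.
print(conn.execute(
    "SELECT email FROM users WHERE name = ?", (user_input,)
).fetchall())                                # returns [] - nobody is literally named that
```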

So this leads to the question: why are we still writing code vulnerable to SQL injection if it is so easy to detect?

Kern puts forward four reasons for a breakdown in discipline, which we can put into our context:

1. Programmers don't know the procedures for developing and testing code that may be susceptible to SQL injection.
2. Programmers don't know the techniques to avoid writing code vulnerable to SQL injection.
3. Programmers forget either of points 1 and 2.
4. Programmers willfully decide to ignore points 1 and 2.

Most programmers do know about the errors and pitfalls of particular implementation technologies, so the failure in discipline does not necessarily lie there. Even where it does, it is corrected simply by training.

Regrettably, one of the excuses is always that the programmers are being "agile": up against deadlines, facing constantly and inconsistently changing requirements, no or poor specifications, a lack of emphasis on design and thought, and poor organisational communication. Errors can be fixed in a later sprint (if they are remembered and if the fix is actually implemented). This ends up with vulnerabilities such as SQL injection simply being forgotten by the programmers.

Willful ignorance of vulnerabilities follows from the previous point. Since vulnerabilities are, luckily, not often exploited at run-time, and detection methods, if in place, will probably catch these errors, programmers can often get away with vulnerable code. Furthermore, responsibility for the code and its maintenance often does not fall to the original programmers or software engineering team.

Could it be that in software engineering we still have the hero mentality? Ironically it was many of the early 'hero' pilots who actively developed the procedures and techniques for safe flying.

Unlike in software engineering, the economics of bad flying proved to be a powerful incentive for actively pursuing good flying techniques and, ultimately, very sophisticated safety programmes. Maybe the only way of solving the discipline problem in software engineering is to radically alter the economic argument.

The economics of flying badly tend to end in deaths, often including the pilot's own (it is probably even worse if he or she survives); the direct and collateral costs here are easily calculable. The economics of security errors in programming are neither well understood nor routinely calculated.

If a SQL injection releases one million records of personally identifiable information, why should the programmer care, especially when there are few or no consequences for the programmer, the software engineering team, the managers or the business overall?

I don't suggest that we start killing programmers for errors, but maybe we could alter the way we reward teams for getting software into production. Fewer changes and bug fixes over the first year of a product's life means a bigger bonus, inverting the usual bonus for bug fixing? Should we be more like hardware engineers and increase the cost of compiling code, so that a software engineering team gets only one chance per day to compile, with penalties for compilation errors?

To finish, here is Tony Kern talking about discipline and what makes a great pilot - now apply this to software engineering, privacy, security etc etc:




References:

[1] Tony Kern (1998) Flight Discipline. McGraw-Hill. ISBN 0-07-034371-3.


Tuesday, 9 July 2013

Facebook and Twitter....instead....

It was one of those funnily serendipitous things that connect everything together... first of all it started with me "playing" with X-Plane 10... a damned good (understatement!) simulator and, I suppose, the next best thing to actually owning your own aircraft or holding a pilot's licence... anyway, flying around with some VOR-to-VOR navigation I ended up in Shannon - and yes, I did land the plane.

Then I started wondering what flies out of Shannon now - the Shannon Stop-Over has been gone a very long time - and so I ended up looking at the Shannon departures board. Noticing the BA001 and BA003 flights there - London City to JFK via Shannon on the outbound leg, on an Airbus A318 - I wondered what that service was like.

I read a few passenger reports and noticed a comment about it being like the Pan Am Lunar Shuttle inside - certainly the photos of the all-business-class seating look like it.

Which brings me to a clip of the said Lunar Shuttle from Kubrick's 2001:


Which then got me thinking:

  1. The Blue Danube, or more correctly An der schönen blauen Donau, is a beautiful piece of music perfectly suited to a space docking
  2. Kubrick was a genius
  3. A.C. Clarke's 2001 is also a piece of genius
  4. We had a vision in the 1960s of a future where space travel and space stations might be commonplace
  5. We got Twitter and Facebook instead....
 ....which allows me to write about what we could have had instead of having it....