Via the n-Category Cafe blog I noticed a real gem in the workshop on The Foundations of Applied Mathematics. Simplifying systems and trying to understand the foundations of things (information systems, software engineering, privacy) is of great interest to me.

For example, in my current area of information system privacy we have a huge problem in even contemplating what the foundation of the discipline looks like, let alone trying to formalise it (and it probably has a lot to do with entropy, semantics, measurement theory and information flow if you ask me - but before we get there we've a lot of other work to do).

The particular piece of work that interested me was by John Baez, who has a history of work trying to unify various areas of mathematics, ostensibly through category theory. His presentation, The Foundations of Applied Mathematics (slides), applies these ideas to applied mathematics. This was also followed up by an interesting discussion on Google+.

The whole presentation and body of work reminds me of a few things:

- layered architecture
- esoteric models of understanding (I need a better way of explaining this!)
- visualisation (= diagrams)

Baez presents a model of four layers (an oversimplification, as he acknowledges in his presentation) linking science and engineering to applied mathematics, applied mathematics to pure mathematics, and finally grounding all of this in the foundations of mathematics. What immediately comes to mind is the kind of layered architecture we see in software engineering, with each layer building upon the primitives provided by the layer immediately below it. This is then followed by the question:

**"Can developments in applied mathematics force changes in the foundations of mathematics?"**

I'd have to answer 'yes' to this, if only to assert the position that if the lower layers have not been correctly formulated or established, and cannot support the needs of the upper layers, then the above is inevitable.

The reasoning for this, at least in my chosen area of software engineering, is that layers [of software] are often constructed without reference to any other layer, that the operations and structures in those layers are not "atomic" or fundamental with respect to that layer, and that the separation of concerns and layering of the architecture is often a matter of personal choice. Layers are often polluted with convenience functions and particular architectural choices; see Anti Patterns.
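To make the pollution point concrete, here is a minimal sketch (all names hypothetical) of the kind of layering violation I mean: each layer should build only on the primitives of the layer directly below it, but a "convenience" shortcut lets the top layer reach straight down to the bottom.

```python
# Layer 0: foundations - a primitive key/value store.
_store = {}

def put(key, value):
    _store[key] = value

def get(key):
    return _store.get(key)

# Layer 1: domain model, built only on layer-0 primitives.
def save_user(user_id, name):
    put(f"user:{user_id}", {"name": name})

def load_user(user_id):
    return get(f"user:{user_id}")

# Layer 2: application - should speak only to layer 1...
def greet(user_id):
    user = load_user(user_id)
    return f"Hello, {user['name']}"

# ...but a convenience function that reaches directly into layer 0
# pollutes the architecture: layer 2 now depends on layer 0's key
# scheme, so a change in layer 0 breaks layer 2 without touching layer 1.
def greet_fast(user_id):
    return "Hello, " + get(f"user:{user_id}")["name"]  # layer violation

save_user(1, "Ada")
print(greet(1))       # the clean path down through the layers
print(greet_fast(1))  # works today, fragile tomorrow
```

Both calls produce the same greeting, which is exactly why such shortcuts accumulate: nothing visibly breaks until the lower layer changes.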

Another way of looking at this is via a famous xkcd cartoon on the relationship between Lisp, the Deity and how it was really built:

*Maybe* it just is that the foundations aren't bad at all, just that we got the layering wrong. Maybe mathematics isn't layered at all...object oriented, anyone?

Further into the Google+ discussion started by Baez, a posting by Carsten Führmann quoted von Neumann:
*"As a mathematical discipline travels far from its empirical
source, or still more, if it is a second and third generation only
indirectly inspired from ideas coming from 'reality', it is beset with
very grave dangers...."*

The quotation continues, but the gist is that the further a discipline moves away from its foundations, the more esoteric it becomes. I think in software engineering we see this quite clearly with things like use case modelling and agile methods. Not that there's much wrong with either, but their current application is often far from the ideals proposed by their creators. These then get used to develop the architectural layers seen earlier.

In information system privacy we do not have a clear, coherent understanding of the foundations of the discipline, and yet we are developing concepts, laws and ideas at very high levels of abstraction without understanding how these are to be architected into the technologies we are building. Contemplate the semantic gap between the European Privacy Directive and the actual C++ (or whatever) code that implements it. I'm not saying that laws, directives etc. are wrong, just that even simple ideas such as Do Not Track will expose huge holes in our understanding of the discipline of privacy (and of a lot of related subjects too). Actually, I think that DNT won't work in its current form because too few people want to address the formalisation of privacy; most remain at some esoteric level far removed from what DNT is trying to address, and that might have other implications and von Neumannesque danger signals.

I won't discuss visualisation here - that's for another time - except to reference the work at the University of Brighton on diagrammatic notation, and to point to another part of Baez's presentation on the use of diagrams across disciplines (and layers of science) to express concepts to different audiences. Presenting Euler diagrams to lawyers, marketing people etc. is an extremely interesting experience, especially when - as "non-mathematicians" - they start interacting using those notations and, at some level, we've "accidentally" unified the languages of mathematicians and lawyers....