Gov 2.0: an afternoon at Westminster

Oxford Internet Institute (OII) and the Parliamentary Office of Science and Technology (POST) co-hosted an event at Westminster last week: Gov 2.0, or Truly Transformative Government. It was an interesting afternoon, offering some insights into thinking by both government and the IT sector about how to develop the delivery of services in the age of social computing. Though I was left wondering whether what I’d attended was a technology or policy briefing, a think tank, or just a talk session …

I’ve been waiting for the presentation material to appear on the OII website, but it’s not there yet. However, David Evans of the British Computer Society, who I met there, has created a write-up on the BCS Unqualified Remarks blog with useful links. I’m not going to double up on his write-up, but just add my take-aways.

There was stuff about why public service projects fail. The additional dimension in the political sphere is that ministers’ primary task is to get re-elected, and this both shortens their time horizon and reduces their willingness to commit; but to achieve that aim, the table is cluttered with eye-catching but unrealistic promises – ID cards, anyone? William Heath of Kable referred to “assertive non-listening and spin”. In the Q&A I quoted Richard Feynman’s aphorism from his report on the Challenger space shuttle disaster:

For a successful technology, reality must take precedence over political necessity: for nature cannot be fooled.

Evolutionary small changes work well; the Land Registry achieved significant success without any political involvement whatsoever, by staying under the radar.

Jerry Fishenden of Microsoft spoke of the need to shift to person-centric, not provider-centric, services. I remember that when I triggered SmithKline Beecham to go on the web in the mid 1990s we correctly decided to structure the website around what people might want to use it for, not around the structure of the company – not a universal perspective in those days, but any management structure shifts and changes anyway. But there’s a new aspect: Jerry pointed out that social tagging, and exposed APIs for mashups, can facilitate this user-centric structure without much need for detailed analysis or design by providers. Users will just do it. But the underlying structures must be right. Again, there’s a parallel. The long-established Relational Third Normal Form, if strictly adopted, means that a database can easily be adapted for a purpose different to the one it was designed for. Get the design right, and new uses are easy.

And Martyn Thomas, who knows a thing or two in this area since he ran Praxis for many years, emphasised the role of the systems architect “starting with validation of the business requirements”. Compare Jeanne Ross of MIT with her thesis that architecture is business strategy.

So there was some interesting stuff. But I wasn’t convinced that it would have, or was designed to have, any measurable impact.

Links and references:

Her Britannic Majesty, Queen Elizabeth 2.0 BCS Unqualified Remarks 22 Jan 2008

Gov 2.0, or Truly Transformative Government event details for Tues 22nd Jan; hopefully the presentation materials will show up here!

Oxford Internet Institute

Parliamentary Office of Science & Technology (POST)

Enterprise Architecture As Strategy: Creating a Foundation for Business Execution, Jeanne Ross, Peter Weill & David Robertson, MIT Sloan School of Management, Harvard Business School Press, 2006

Links are provided in good faith, but InformationSpan does not accept responsibility for the content of any linked third party site

Oracle/BEA: quicker than expected

So Oracle raised its bid for BEA, and BEA has agreed a takeover. Compared with the battle for PeopleSoft, this bid has gone through like greased lightning!

In the corporate space, a year of confusion seems to be in prospect. Larry Ellison says the acquisition “will significantly enhance and extend Oracle’s Fusion middleware software suite”; perhaps it will provide the missing pieces which will actually make Fusion happen? Forrester’s analysts, at the time of the opening bid, noted that the acquisition would move Oracle up the market to overtake Microsoft and stand second only to IBM in middleware.

There are other consequences. BEA’s acquisition of Plumtree means that Oracle will have two competing products in the portal space. Will Oracle continue to develop Plumtree (AquaLogic)? It will want to move users to the Oracle platform using, as the Forrester report notes, “a combination of carrots and sticks”. But Oracle has bought a lot of software as well as a customer base, and there are sure to be pieces which are preferred over the pre-existing components. The bumpy ride may embrace some Oracle customers too.

The best advice? Talk to your analyst service and get their insights.


Oracle gets BEA … (16 Jan 2008)

Oracle and BEA (Oracle website section with links to press release and other material including, for a short time, a replay of the investor webcast)

An Oracle-BEA Combo: How It Will Affect You (Forrester Research, 16 Oct 2007)

Links are provided in good faith, but InformationSpan does not take responsibility for the content of linked third party sites, which may use popups, cookies or other advertising and tracking. Most news sites archive material after a short time; use the site’s search engine to track down out-of-date articles.

Distributed architectures: reverse assumptions, still relevant

On a call today, discussion turned to the openness – or not! – of today’s infrastructure architectures for “Living on the Web”, as Doug Neal of CSC’s Leading Edge Forum calls it. The comment was made that much of even today’s Microsoft-based infrastructure still embodies assumptions from the days of closed-network architectures.

What did we mean? Living on the Web – for major corporates such as BP and Aviva, who’ve been doing it for some years now – means accepting that the corporate perimeter is not the safe haven often assumed. Nor, in today’s business world, is it practical. Companies don’t do everything within the firewall these days. They partner, they outsource, they collaborate. The firewall, and the quasi-security mindset that goes with it, don’t facilitate this way of doing business. They get in the way. There are good reasons why the mindset is there, but they just make it harder to challenge!

BP is well recognised now for adopting the open Internet as its base infrastructure and for treating its user community as IT adults who can often make their own decisions – with support where home-based experience doesn’t translate to corporate needs. Resources that need to be protected are certainly protected, but much closer to source. Access through the firewall isn’t any longer an open sesame to vast swathes of corporate resources. As someone on the call commented, Port 80 is the hacker’s internet these days. And this approach has been audited to show it is more secure, not less.

But – to return to the point – many infrastructure professionals still work from implicit, probably unrecognised and certainly unquestioned assumptions that it’s best to do things the same way as they have been done since the local area network was invented. And I called to mind some work done by a UK project called ANSA (Advanced Networked Systems Architectures) between 1985 and 1998. ANSA was one of the first projects to seriously examine what happens when systems are no longer located on a single computer, but are cooperative systems constructed from components running potentially anywhere worldwide. Internet-based collaborative systems are that, to the nth degree!

ANSA offered a set of assumptions which systems engineers made then, which have to be turned on their head in a distributed environment. We still need to be reminded of some of them:

  • local → remote: more failure modes are possible for remote interactions than for local ones
  • direct → indirect binding: configuration becomes a dynamic process, requiring support for linkage at execution time
  • sequential → concurrent execution: true concurrency requires mechanisms to provide sequentiality
  • synchronous → asynchronous interaction: communication delays require support for asynchronous interactions and pipelining
  • homogeneous → heterogeneous environment: requires common data representation for interactions between remote systems
  • single instance → replicated group: replication can provide availability and/or dependability
  • fixed location → migration: locations of remote interfaces may not be permanent
  • unified name space → federated name spaces: need for naming constructs which mirror administrative boundaries across different remote systems
  • shared memory → disjoint memory: shared memory mechanisms cannot operate successfully on a large scale and where remote operations are involved

We’ve got used to working with some of these. But, in the commercial world, many enterprises still want to retain control. In terms of these reverse assumptions, it means not coming to terms with the ones related to federation – name spaces in particular, which includes authentication through federated directories; and heterogeneity, so that not everything in the system accords with the decisions that “we” make about our own architecture. There’s still a way to go!
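The first of these reversals – local → remote – is easy to sketch. Nothing below comes from ANSA itself; the function names and the failure rate are invented purely to illustrate why a remote interaction has failure modes a local one cannot have:

```python
import random

class RemoteError(Exception):
    """Failure modes that simply do not exist for a local call."""

class Timeout(RemoteError):
    pass

class ConnectionLost(RemoteError):
    pass

def local_add(a, b):
    # A local call either returns or raises from its own logic.
    return a + b

def flaky_network_call(a, b):
    # Stand-in for a real RPC: the network fails some of the time.
    if random.random() < 0.3:
        raise random.choice([Timeout, ConnectionLost])("network fault")
    return a + b

def remote_add(a, b, attempts=3):
    # The caller must now handle partial failure – timeouts, lost
    # connections – on top of the function's own errors, typically
    # by retrying and eventually giving up.
    for attempt in range(1, attempts + 1):
        try:
            return flaky_network_call(a, b)
        except RemoteError:
            if attempt == attempts:
                raise
```

The retry loop is the visible cost of the reversed assumption: identical arithmetic, but the calling code can no longer pretend the interaction is reliable.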


ANSA: An Engineer’s Introduction to the Architecture, ANSA, 1989; part of the Official Record of the ANSA Project

Web 2.0: The New Frontier for Employee Responsibility and Innovation CSC Leading Edge Forum, 2007

You can see how far the theme goes back in BP by reviewing this presentation by John Leggate: Exploiting digital technology … (CERA Conference, Houston, 13 Feb 2001)

Or just Google for “Living on the Web”!

Ribbit, Ribbit – a new take on unified telephony

MIT’s Technology Review reports on a new platform called Ribbit which can simplify your life if you have several different phone channels. In my case: an office landline; a mobile; a Skype account; a Google account which could do calls if I wanted; until recently a company-networked IP softphone that lived on my laptop (very useful when travelling); and (out of hours) the home phone.

The usual unified communications approach to this problem is to offer presence awareness, allowing the user to specify which one device should be used to receive calls at any particular time and, in the more sophisticated variants, responding to knowledge of where the user is. For example, if the network detects that I’m using my computer from my office then it might switch calls to the IP phone on my desk.

Ribbit has a different approach. A call routed through Ribbit will ring all the devices it knows about. And as soon as the call is picked up on any one of them, the others all go quiet. It doesn’t of itself have some of the additional features that unified solutions from the likes of Cisco or Siemens will offer, such as interchangeability between voice- and text-based channels (transcribe voicemails into text and email them to me, or forward my emails as voice messages). But Ribbit is also a near-universal application platform, with integration to technologies such as Adobe’s AIR, and developers can build applications which enable these things to be done. There’s already a transcription application, for example.
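Ribbit’s internals aren’t public, so this is only a sketch of the pattern as described – ring everything at once, first answer wins, the rest go quiet – using Python’s asyncio, with made-up device names and pickup times:

```python
import asyncio

async def ring(device, pickup_after):
    # Simulate one device ringing; returns if and when it is answered.
    await asyncio.sleep(pickup_after)
    return device

async def place_call(devices):
    # Ring every known device at once; the first to answer wins
    # and the remaining tasks are cancelled, so those devices go quiet.
    tasks = [asyncio.create_task(ring(name, t)) for name, t in devices]
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    for task in pending:
        task.cancel()
    return done.pop().result()

answered = asyncio.run(place_call([
    ("office landline", 0.2),   # picked up after 0.2s
    ("mobile", 0.01),           # picked up after 0.01s – wins
    ("skype", 0.1),
]))
print(answered)  # mobile
```

The fan-out-and-cancel shape is the whole trick; everything else Ribbit offers sits on top of it.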

Instinctively this feels like it would be a whole lot simpler to understand, manage and use than existing developments in unified communications. And – this one will be available for individual use, not tied to a corporate messaging server, which is where Cisco and Siemens are aiming. It just might take off. Watch this space!

Transforming Communication MIT Technology Review, 4 Jan 2008
Crick Waters, VP, Product Management and Strategic Business Development, Ribbit (LinkedIn profile)
Siemens Open Communications
Cisco Voice and Unified Communications

Links are provided in good faith, but InformationSpan does not take any responsibility for the content of linked sites. Most news sites archive information after a time; you may need to use the site’s search engine to find out-of-date content.

Reality Mining, or what your cellphone knows about you

Some years ago MIT’s Professor Sandy Pentland was working on biosensors. A marathon runner, he had sensors built into his footwear which monitored his physical condition from the moment he put them on. In a short MIT video (which I can’t find on the Web any more) he came up with a classic one-liner: “Your shoes may know more about you than your doctor does”.

The idea behind this research is automatically gathering and aggregating data from an individual’s activities to create useful knowledge. Now, Pentland and co-workers are about to publish a paper (Eagle, Pentland & Lazer, in submission) describing research based on detailed data gathered from specially adapted mobile phones issued to a hundred MIT students and staff. MIT’s Technology Review explains that the researchers collected data not just on who called whom, but also on close encounters – using Bluetooth, the phones detected when they were physically near to another phone in the trial. Out of this, for example, it’s possible to determine who are people’s close work colleagues, who their social contacts, and so on.
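The aggregation step is straightforward to sketch. The data below is entirely invented – the paper’s actual dataset and methods are far richer – but it shows how repeated Bluetooth co-presence turns into a weighted contact graph:

```python
from collections import Counter
from itertools import combinations

# Hypothetical proximity log: each entry is the set of phones that
# detected one another over Bluetooth during one scan interval.
scans = [
    {"alice", "bob"},
    {"alice", "bob", "carol"},
    {"bob", "carol"},
    {"alice", "bob"},
]

def co_presence(scans):
    # Aggregate pairwise sightings into a weighted contact graph:
    # the more often two phones meet, the stronger the tie.
    counts = Counter()
    for scan in scans:
        for pair in combinations(sorted(scan), 2):
            counts[pair] += 1
    return counts

graph = co_presence(scans)
print(graph.most_common(1))  # [(('alice', 'bob'), 3)]
```

Ranking the pair counts is exactly the kind of inference that separates close work colleagues from occasional corridor encounters.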

Prof. Pentland says this work falls under the umbrella of an emerging field: “reality mining”. Social networks such as Facebook are part of it too, but your network on Facebook or LinkedIn is something you develop with a lot of manual intervention. Reality mining could allow this to be taken care of automatically.

And there’s more that the next generation of cellphones can detect. As Pentland puts it:

The iPhone … has an accelerometer that could tell if you are sitting and walking. You don’t have to explicitly type stuff in; it’s just measured. And all phones [could] be used to analyze your tone of voice, how long you talk, how often you interrupt people. These patterns can tell you what roles people play in groups: you can figure out who the leader is and who the followers are. It’s folk psychology, and some of the stuff people may already know, but we haven’t been able to measure it, at such a large scale, before these phones.

Does this scare you? Sandy Pentland is alive to the privacy and surveillance issues, and acknowledges their significance; but he also has counter-examples of possible applications for considerable social benefit. For example: if one morning no-one in your network is coming to work, might this give early warning of a major epidemic? Like all technologies, reality mining could be used in either way. It’s up to professionals to highlight the issues as well as to develop the possibilities.


What your phone knows about you (MIT Technology Review, 20 Dec 2007)
Reality Mining at MIT Media Lab: click “Publications” for published papers

Edinburgh takes another step in supercomputing

The Guardian technology pages today carried the story of HECToR, £113M-worth of supercomputer installed at the University of Edinburgh.

UK universities have a strong record in high performance computing. The ATLAS computer, at Chilton near Didcot, was the country’s fastest machine in the late 1960s and was installed at the Rutherford Laboratory as a resource for the research community – as a researcher in Oxford I was one of its users. A Cray supercomputer was installed in the University of London Computer Centre in the 1970s as a resource for the whole of the south east. And at the same time, Queen Mary College (as it was then), where I was working, hosted ICL’s pioneering Distributed Array Processor (DAP) which – if you picked the appropriate problem – could out-perform the Cray by a couple of orders of magnitude. The Edinburgh Parallel Computing Centre (EPCC) stands in this tradition and was established in 1990.

Of course the power of these early supercomputers is not so far different from today’s desktop machines or multi-processor systems. But that’s exactly the point. Sure, these machines enabled the research community to compute problems that had, up to then, been intractable. But they also pioneered technologies which became mainstream in “ordinary” machines: pipelines, single-instruction multiple-data multiprocessing, resource coordination, programming for advanced architectures. One of the most important areas explored with the DAP was how the techniques and algorithms used for serial computers had to be completely rethought for parallel machines if their power was to be fully exploited. Think Grid Computing. It wasn’t just a case of optimising existing code!
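A toy example of that rethinking: summing n numbers. The serial version is a chain of n−1 dependent additions; restructured as a balanced tree, the dependent chain shrinks to about log₂ n levels, each of which a parallel machine can execute at once. (This is just an illustration of the principle, not DAP code.)

```python
def serial_sum(xs):
    # n-1 additions, each waiting on the previous one: no parallelism.
    total = 0
    for x in xs:
        total += x
    return total

def tree_sum(xs):
    # Same answer, but the two halves combine independently: with
    # enough processors the dependent steps shrink from n-1 to
    # roughly log2(n) levels of the recursion tree.
    if len(xs) == 1:
        return xs[0]
    mid = len(xs) // 2
    return tree_sum(xs[:mid]) + tree_sum(xs[mid:])

data = list(range(1, 9))
print(serial_sum(data), tree_sum(data))  # 36 36
```

The answer doesn’t change; the shape of the computation does – which was precisely the lesson.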

Goodness knows why The Guardian has finally caught up; the contract for the High-End Computing Terascale Resource (HECToR’s official name) was signed nearly a year ago. ITasITis intends to be more on the ball as we bed in! The computer will be used for problems which in many cases are the 21st century equivalents of those tackled by Atlas, the Cray and the DAP: weather forecasting and its big brother, climate impact; simulations in these and other areas such as aircraft design and financial markets; high-energy particle physics; drug design; and so on. And no doubt, as computer scientists learn the tricks of exploiting its capacity, these will become the basis of future generations of “ordinary” machines too. They may be for academic use; but they are relevant to all of us.


Inside the UK’s fastest machine (The Guardian Technology, 2 Jan 2008)
University Signs Contract For Supercomputer (University of Edinburgh press release, 22 Feb 2007)
Edinburgh Parallel Computing Centre (EPCC)
ICL Distributed Array Processor (Wikipedia entry, edited by me in the course of creating this note!)

Links are provided in good faith, but we don’t take responsibility for the content or actions of third party sites which may use pop-up advertising or set cookies. Most news sites archive material after quite a short time; use the site’s search engine to find archive material.