Cloud and legalities

Just come off a BrightTalk webinar, part of a day on issues around Cloud. Miranda Mowbray of HP Labs, Bristol, gave a comprehensive round-up of legal issues which might arise through organisations’ use of externally-sourced Cloud services (everything from basic infrastructure like EC2 up to full-featured applications). Sherlock Holmes, apparently, would recognise the issues from Baskerville Moor.

It reminded me of attending a conference in the early days of the Web, where I heard the first attempt to figure out what the regulatory issues might be for pharmaceutical companies, with our heavy and very country-specific regulatory requirements to work through. As so often, we’ve been here before: legislation necessarily lags behind technology, and case law has not yet started to accumulate.

The recording is available on the Web and there’s a published paper too, so I won’t attempt to précis it here. Suffice it to say that the issues range from “Where’s my data held?” (and that includes my account and usage data as well as the data I’m handling in the cloud) to “What happens if …” questions (the service goes down, the provider goes out of business, and much more). In particular, beware: most terms and conditions mean that it’s the user, not the provider, who normally carries the responsibility for continuity of service and for backup.

A great deal was packed into thirty-five minutes and although Miranda Mowbray is not a lawyer (so the advice, of course, is “If in doubt, consult one!”) she clearly has a good grasp of the issues that may well arise.

Something to reference in the end-user guide on “Signing up for web-based IT” which I’m working on. Watch out too for a posting here in a day or two about analysis of “Distribution characteristics” for business and business applications, a piece of work I did many years ago which is highly relevant to today’s developing cloud environments.

• Cloud Computing and the Law, BrightTalk webcast, 30 Sep 2009
• Miranda Mowbray’s home page at HP Labs
• The Fog over the Grimpen Mire: Cloud Computing and the Law. Mowbray, M., SCRIPTed Journal of Law, Technology and Society vol 6 issue 1 (April 2009) pp.132-146 (the link is directly to a PDF of the document)

Three to watch

Outsell offer a list of “30 to watch” in the information industry, and none of them are insight service firms serving enterprise IT.

Here are my suggestions: just three to watch.

Altimeter Group: Charlene Li’s firm, just on a year old, has acquired three new partners. One of them, R “Ray” Wang, also from Forrester, is one of the foremost thought leaders in ERP; he was recently named Analyst of the Year by the Institute of Industry Analyst Relations. This will clearly take Altimeter from being a one-person enterprise focused on “Social and emerging technology” towards a more broadly based insight firm. Very definitely, watch this space.

Ovum Knowledge Centre: Datamonitor have completed their transition to unite the technology insight services of Ovum, Butler Group and Datamonitor Technology under the Ovum brand. Interestingly, the prime URL now reflects “Ovum Knowledge Centre” rather than just “Ovum”. Butler have been trying to break into the US market without success for some time; it will be worth watching to see whether the reorganisation finally creates a competitive global player.

Corporate Integrity: Mike Rasmussen left Forrester only a couple of years ago and his Global Risk & Compliance insights are in demand. The events of the past twelve months highlighted the disastrous consequences of a failure to understand and manage risk. For a one-man-band, Mike’s profile and reach are exceptional.

• Altimeter Group
• Press release Altimeter Welcomes New Partners, 27 Aug 2009
• Ray Wang named IIAR Analyst of the Year 2009, Institute of Industry Analyst Relations, 25 Aug 2009
• Ovum Knowledge Centre
• Press release Datamonitor Group to integrate its three technology businesses, Datamonitor, 14 Aug 2009
• Corporate Integrity
• 30 to watch for 2009, Outsell, undated

Scale out, not up: the Cloud mindset is different

Just come off a call with a group which meets regularly by phone to think about the issues of moving corporate IT services to the cloud.

The debate is moving on. Originally, it was triggered by the emergence of Amazon’s EC2 and S3, and similar services, which give individuals easy, by-the-drink access to high-powered, flexible compute and storage.

Then the key questions were about how to enable enterprises to move services to “the cloud”: what do you move, and how, and what are the risks that have to be understood and managed?

Now, there’s an understanding that a hybrid model will have a lot to recommend it. Cloud services offer flexibility, and that’s more important than cost saving. The question is no longer “In-house or cloud?” but “How do we integrate cloud with in-house, for flexibility and overflow?” You set up multiply-hosted services so that when in-house capacity runs out, requests are routed seamlessly to a cloud resource.
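As a minimal sketch of what that overflow routing might look like (the names here – Pool, dispatch – are my own illustration, not any vendor’s API):

```python
class Pool:
    """A compute pool with a fixed number of available slots."""
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity
        self.in_use = 0

    def has_capacity(self):
        return self.in_use < self.capacity

    def run(self, job):
        self.in_use += 1
        print(f"{job} -> {self.name}")

def dispatch(job, in_house, cloud):
    """Route to in-house capacity first; overflow seamlessly to the cloud."""
    target = in_house if in_house.has_capacity() else cloud
    target.run(job)

in_house = Pool("in-house", capacity=2)
cloud = Pool("cloud", capacity=1000)   # effectively elastic
for job in ["job-1", "job-2", "job-3"]:
    dispatch(job, in_house, cloud)     # job-3 overflows to the cloud
```

The point is the mindset: the dispatcher treats cloud capacity as just another pool, so the application never needs to know where it ran.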

Someone on the call characterised this as “Scaling out, not up”. And it requires a different mind-set when applications are created. Something which recalled to my mind the “reverse assumptions” for heterogeneous wide area distributed systems, created by the UK/European ANSA project something like 20 years ago. I said I’d re-publish them. Here they are.

When building a distributed system, a number of assumptions which are commonly made when engineering systems for single hosts not only become invalid, but have to be reversed. The most important of these are:
• local >> remote: more failure modes are possible for remote interactions than for local ones
• direct >> indirect binding: configuration becomes a dynamic process, requiring support for linkage at execution time
• sequential >> concurrent execution: true concurrency requires mechanisms to provide sequentiality
• synchronous >> asynchronous interaction: communication delays require support for asynchronous interactions and pipelining
• homogeneous >> heterogeneous environment: requires common data representation for interactions between remote systems
• single instance >> replicated group: replication can provide availability and/or dependability
• fixed location >> migration: locations of remote interfaces may not be permanent
• unified name space >> federated name spaces: need for naming constructs which mirror administrative boundaries across different remote systems
• shared memory >> disjoint memory: shared memory mechanisms cannot operate successfully on a large scale and where remote operations are involved.
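To make the first of those reversals concrete, here’s a minimal sketch (entirely my own illustration, not ANSA code): a remote call has failure modes – timeouts, lost replies, dead hosts – that a local, in-process call simply doesn’t, so the caller has to plan for failure and retry explicitly.

```python
import random
import time

class RemoteError(Exception):
    """Failure modes a local call never exhibits: timeout, lost reply, dead host."""

def remote_call(request):
    # Stand-in for a network call to a hypothetical service;
    # a real one can fail or hang in ways a local call can't.
    if random.random() < 0.3:
        raise RemoteError("timeout or lost reply")
    return f"reply to {request}"

def call_with_retries(request, attempts=3, backoff=0.5):
    """Remote callers must expect failure; local callers rarely need any of this."""
    for attempt in range(1, attempts + 1):
        try:
            return remote_call(request)
        except RemoteError:
            if attempt == attempts:
                raise                      # give up: the failure becomes the caller's problem
            time.sleep(backoff * attempt)  # back off before retrying

print(call_with_retries("get-status"))
```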

There was, and is, another one as well. When you’re creating an application – any application! – don’t assume it will always stay within the localised architecture you created it in. The gotcha assumption these days is that because the app is being created with links to a private cloud, not the public one, it’ll stay that way. Deal with it – interfaces, databases, security, the whole nine yards – as if, one day, parts of it will sit on public infrastructure.

• ANSA project, 1984-1998, document repository (now free access)
• ANSA: An Engineer’s Introduction to the Architecture, ANSA project, Nov 1989 (see section 2.1 p 3 for the reversed assumptions)
• Distributed architectures: reverse assumptions, still relevant, my previous post (ITasITis, Jan 2008)

The Long Tail of Innovation

I’ve had a note for months to catch up on reports which quote Pfizer along the lines of “Watch out for the Innovation Killers”.

It’s not new news; see some of the Links at the end or try a Google search. But when I finally got to it properly, I discovered two things. First, it’s even less new news than I thought. The “Innovation Killers” idea goes back several years, and has been covered by Christensen et al (who else?) in Harvard Business Review.

Second, I was alerted to this by a note from Doug Neal of Leading Edge Forum (LEF). And his trigger was (I believe) a presentation from a LEF session, which is very well worth working through. It’s available, open access.

Yes, it does contain the “Innovation Killers” slide. But its primary thesis is much more than this. And it’s compelling for anyone trying to develop successful innovation strategies that depend on more than just serendipity.

Rob Spencer’s main theme, in Falls Church back in April, was that innovation is a “Long Tail” phenomenon. There may be a handful of people who can come up with a lot of worthwhile ideas. But if you can open up to the vast array of individuals in a large organisation (such as a global pharma company), then the potential number of ideas increases enormously even if only (say) 20% of those people contribute just one or two ideas each: in a 10,000-person company, that’s 2,000 contributors and two to four thousand extra ideas. If you don’t, you miss out. Big time.

When I started writing this piece, I mistyped the title as “The Log Tail …”; and I almost left it that way, because Rob shows that the metrics of this kind of innovation activity don’t become predictable until you learn to use a log-normal plot instead of our usual mean-and-spread bell curves (the kind Six Sigma is based on). A metric like “Average number of ideas per individual” is meaningless. In a telling phrase, Rob categorises it as “not even wrong”. That is, it’s the basis of the analysis that’s way off target, not just the maths.
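To see why in miniature, here’s a rough simulation (the distribution parameters are invented purely for illustration): under a log-normal spread of ideas per contributor, the mean lands well above what the typical contributor actually produces.

```python
import random

random.seed(42)
# Ideas per contributor drawn from a log-normal distribution:
# most people contribute one or two ideas, a few contribute dozens.
ideas = [random.lognormvariate(mu=0.5, sigma=1.2) for _ in range(10_000)]

mean = sum(ideas) / len(ideas)
median = sorted(ideas)[len(ideas) // 2]
print(f"mean   = {mean:.1f}")    # dragged upwards by the long tail
print(f"median = {median:.1f}")  # the 'typical' contributor
# The mean sits well above what almost everyone contributes, which is
# why 'average number of ideas per individual' is 'not even wrong'.
```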

This Long Tail version of innovation can’t be done without the means of reaching out to the wider community. That doesn’t just mean a wiki; the presentation includes a string of practical and structured techniques, far too many to summarise here.

So, innovators and, even more, innovation facilitators: get hold of this presentation, and some of the materials surrounding it. Read, review, and learn!

• Horses, Carts and Long Tails, Rob Spencer (Pfizer) at LEF Forum, April 2009, PDF
• Making innovation count in uncertain times, In Vivo, 25.1, 1-8 (Jan 2007), PDF
• Innovation Killers: How Financial Tools Destroy Your Capacity to Do New Things, Christensen, C., Kaufman, S.P. & Shih, W., Harvard Business Review, Jan 2008, pp 98-105. Web link is to summary only; library access required for full text
• or search Google for “innovation killers”

Back to the decentralised future?

Andy Oram at O’Reilly Radar has published a lengthy and thoughtful article discussing the social web’s increasing reliance on centralised services, why this causes problems, and whether/how a return to what I might call “managed decentralisation” might help.

There’s a thoughtful discussion of the problems, including the imposition of multiple, multiply-centralised flat namespaces. Trust and authentication are issues to be addressed, because in the absence of a central authority you have no central management of abuse; but some variety of trust network might cope with that (compare the way LinkedIn works to authenticate your request to connect with someone). And there’s some thought about protocols with a less centralised basis that could contribute to the solution (rssCloud and Jabber’s now-standardised XMPP).
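As a sketch of the trust-network idea (my own illustration of friend-of-friend vouching, not how LinkedIn is actually implemented): a connection request can be accepted when requester and target already share a trusted contact, with no central authority consulted.

```python
# A trust network as a simple undirected graph: person -> set of contacts.
network = {
    "alice": {"bob", "carol"},
    "bob":   {"alice", "dave"},
    "carol": {"alice"},
    "dave":  {"bob"},
}

def shared_contacts(network, requester, target):
    """People both parties already trust, who can vouch for the request."""
    return network.get(requester, set()) & network.get(target, set())

def can_connect(network, requester, target):
    # Accept a request only if at least one mutual contact vouches for it;
    # abuse management is pushed out to the edges of the network.
    return bool(shared_contacts(network, requester, target))

print(can_connect(network, "carol", "bob"))   # True: alice vouches
print(can_connect(network, "carol", "dave"))  # False: no mutual contact
```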

Well worth a read.

• RSS never blocks you or goes down: why social networks need to be decentralized, O’Reilly Radar, Sep 2009
• rssCloud (RSS 2.0)
• XMPP, XMPP Standards Foundation

Hype Cycle 2.0 …

I absolutely love this version of the Hype Cycle which Euan Semple found from Geek & Poke:

Hype Cycle 2.0 (cartoon by Geek & Poke)

The thing is, it’s a fair representation of the real thing. Rather like the version of the Laws of Thermodynamics that serves as a restatement of Murphy’s Law, which goes:
1 – you can’t win
2 – you can only break even at absolute zero
3 – you can never reach absolute zero

Doesn’t apply in enterprise IT, though, where absolute zero seems quite common!

• Gartner Hype Cycle Version 2.0, The Obvious, 26 Aug 2009

“Cosmopolitans”+”Locals” = “global team” ** updated

A quick alert for readers, especially those of us who espouse the various forms of collaborative working and online-mediated teams to operate globally.

Every so often, research revisits “What makes a global team work?”, and Wharton Business School have done just that. A lot of their conclusions relate to established cultural assumptions, and issues like rotating meeting times so it’s not always the same people who have to be on the phone at 3 a.m. Most of us say “Sure, we know that”; but it’s useful to have them restated, and this article does so in quite a concise summary at its close.

But there is one concept that might be less familiar. Global enterprises value people with experience of working in different regions, different languages and different cultures; and rightly so, even if the company’s lingua franca is English (ok, I used a Latin phrase deliberately there!). Wharton call these people Cosmopolitans.

But the article re-emphasises that a successful team also values the local. Of course, nothing guarantees success and operating across boundaries can be divisive as well as beneficial. But Wharton believe that when cosmopolitans sit down alongside people with deep and long experience of their local market, and each values the other’s contribution, the synergy at least enhances the likelihood of successful outcomes.

You don’t have to agree; but I’d recommend reading the article for a reminder of some fundamental truths.

[Note added 11th Sept: there’s a related comment from the British Computer Society today. It’s a case study from a global engineering group for whom sustainability is part of the business, so reducing their carbon footprint – travelling less, in effect – is crucial. Significant that this should appear as we reach the anniversary of “9/11”. I’ll just quote their conclusion: “The old adage of ‘think global, act local’ is … replaced by ‘create global, deliver local’. And collaboration is the key to cracking this new world order.”]

(Two posts in one day … must be catching up!)

• ‘Locals,’ ‘Cosmopolitans’ and Other Keys to Creating Successful Global Teams, Knowledge@Wharton, 2 Sept 2009
• Globalisation, innovation and collaboration, BCS, Sept 2009