In preparation for a presentation yesterday, I was thinking about what precisely we mean when we say something (a technology, an idea) is disruptive; and what happens to disruptive technologies a little downstream. Wikipedia can tell you all about Christensen; none of that here.
Let me illustrate instead by example. In the mid 1990s, the Internet and its killer app, email, crept out of academia (and the military). We had AOL and Compuserve, and a few others.
Corporations didn’t adopt email. But a lot of individuals did, and began to short-cut the conventional channels of (snail) mail and telephone. They imported their personal accounts into the workplace; procured and paid for a modem; and did their own thing. For business benefit. I was one of them. And no-one knew how much hardware was bought, or subscriptions covered, on corporate expenses to facilitate this new way of doing business.
But one thing a lot of managers did get was scared. Scared of viruses. Scared of damage to internal networks and resources. Scared of leakage of intellectual property. Scared of reputational risk. When I wrote a case for corporate websites, internet access and email, I had to tackle those issues. One thing I didn’t need to address, though, was probably the most important disruption that email brought. People started to communicate across their own personal networks. It compromised the old command-and-control lines of management.
Well. At the time, I had just ceased being a Part Time Tutor for the Open University. Note the job title; it’s important. In those days, everything was paper based, including marking. You were sent a mark script, which you followed – quite rightly, because with large numbers of assignments being marked consistency was vital. But occasionally I’d disagree, academically, with something in the script and I’d write in. Invariably I’d get a courteous response; but in essence it would just say “Don’t argue; the real academics at the centre know best”.
Then I was recruited onto a course that provided students and tutors with online conferencing (and a host of other things), at the beginning of the revolution in home-based online learning. We had private conferences where the central academics and the (still “Part Time”) tutors could and did discuss course issues. That became the forum for comments about the mark scheme, or anything else. All of a sudden, a comment might attract support from a small handful of other tutors. And the central team (who had been farsighted enough to set up this one single conference for all the staff) would react positively, often accept the comment, and offer a work-round – or a reasoned argument why in practice it couldn’t be changed for now.
It was a total change. It drew us, as part-time staff, much more fully into the academic environment; we were contributing not just to the teaching but to the development of the materials and of future courses. And when I came back to the OU a couple of years ago, I found I was now an Associate Lecturer on a salary, not a Part Timer paid piece-rate. The OU has always been a good employer, developing its staff and their skills. But now we are accepted as academics on equal terms with the full-time staff, and share much more fully. The change of name is more than just symbolic.
That was a disruptive change, to be sure. Because the OU is an open organisation, it accepted the disruption and achieved great benefit from it. Twenty years downstream, that disruptive change has shaped the new normality, for the better.
In the IT space, especially the corporate IT space, there’s a perception that the existing delivery model is being disrupted. And it is: I’ll explore that another time. But what happens to disruptive innovations (Christensen’s phrase) as they evolve?
In short: the change may be permanent, creating a new normality as it did at the OU. Or think what Ryanair did to travel and the south of France; aeroplanes weren’t new technology.
Or the disruptive new idea may become niche: important in its space but not in the mainstream (some of these, though, may be biding their time, like electric cars). Or it may fail to fulfil its hyped potential and simply disappear. For your own thoughts: examples, please, of the latter two …
It’s fairly easy to see that social media – Twitter, Facebook and the rest – are still disruptive, by one simple test: IT Security are still scared of them! And like email, the risks are real. But the benefits will drive us to manage them, not least because today’s business is disaggregated and networking is the way we work these days. What’s not to adopt from a set of tools that match the way we do business?
But Cloud is a different matter. In the beginning the model was highly disruptive: business executives could do things, pay with their corporate credit card, and escape the long-drawn-out procurement, installation and verification cycles. That Cloud is still there, finding its niche in testing, proof-of-concept work, rescuing overwhelmed servers and so on. But “as a Service” offerings, particularly messaging, have been tamed. Buyers are discussing service level agreements, compensation clauses, long-term contracts and business risk. It’s outsourcing, with a few twists. It’s gone mainstream:
“Forrester believes that cloud computing is a sustainable, long-term IT paradigm, and the successor to previous mainframe, client/server, and network computing eras.” The Evolution Of Cloud Computing Markets, Forrester Research, Jul 2010
Cloud is no longer disruptive. It’s been tamed.
Do you agree?
Links: none, but I might add one or two. Watch this space!