
Forrester get TechRadar on the road [updated] 25 Apr 2008

Posted by Tony Law in Insight services.

One of Forrester’s acknowledged problems has always been how to compete with Gartner in the assessment of technology adoption. Everyone knows the Hype Cycle, though there’s varying understanding of how to use it. Technology adoption lifecycles go back over 50 years; they describe the adoption or acceptance of a new product or innovation.

Forrester haven’t had anything in this area. But now they do.

I’ve been aware of the development of Forrester’s TechRadar for some time; it was pre-launched last summer. This week’s IT First Look mailing highlights its emergence with three reports: mobile devices and management solutions; infrastructure virtualization; and extended supply chain applications.

Forrester prefer not to have TechRadar seen as their answer to the Hype Cycle, but its intention is to enhance their coverage in the technology watch area. So what are the differences and similarities?

First and foremost, remember that the Hype Cycle describes industry and media perception of a technology, not its intrinsic capabilities or its actual value. Media presence holds a mirror to uptake and so helps judgements on pricing, on capability, and on supportability. But, for example, the trough of the Hype Cycle might be the right time to invest if you’re convinced of the value of a technology. Prices will likely be low, and resources relatively cheap. Conversely, if a new technology offers a significant potential advantage then invest in the Trigger stage. But be clear that prices will be on the up, knowledge scarce, and the technology itself possibly short lived. Not many people read the Hype Cycle this way!

Using the Wikipedia definition, Forrester’s Radar is not an adoption lifecycle in the same sense. Its horizontal axis is similar to the Hype Cycle’s, but even here there’s a difference: it runs right through the life cycle to final decline. The vertical axis, though, is completely different, and the curve shape therefore is different too. Forrester are charting their judgement of the potential for business value, and they split the chart according to likely maximum value. So, for example, in the supply chain TechRadar they see MRP as in decline; supply chain intelligence as reaching its maximum, high value; and advanced planning and scheduling as at its peak but of less overall value. Supply chain event management, they show, is still on the way up but won’t achieve great value even at its peak. And, as with the Hype Cycle, a time dimension is overlaid.
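To make the shape of the data concrete, here’s a minimal sketch of how the supply chain examples above might be represented as records. The phase labels and value bands are my own assumptions for illustration, not Forrester’s actual data model:

```python
# Records mimicking the supply-chain TechRadar examples above;
# "phase" and "peak_value" are invented labels, not Forrester's taxonomy.
TECHRADAR = [
    {"tech": "MRP",                              "phase": "decline", "peak_value": "moderate"},
    {"tech": "supply chain intelligence",        "phase": "peak",    "peak_value": "high"},
    {"tech": "advanced planning and scheduling", "phase": "peak",    "peak_value": "moderate"},
    {"tech": "supply chain event management",    "phase": "growth",  "peak_value": "low"},
]

def still_rising(entries):
    """Technologies the chart would show as not yet at their maximum value."""
    return [e["tech"] for e in entries if e["phase"] == "growth"]

print(still_rising(TECHRADAR))
```

The point of the vertical split is visible even in this toy form: two technologies can be at the same lifecycle phase while promising quite different maximum value.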

When I saw the first versions of TechRadar, I commented that it looked overly complex. I’m pleased to say that the final version appears a lot clearer. Users will need to invest time to figure out how to read these charts, and to get familiar with the standard table format for data in the reports. But the methodology’s good, and it will be worth it.

Oh, and (I added this to repair an omission) in typical Forrester fashion, you can download the data in a spreadsheet.

Meanwhile, there’s a lot of catching up to do. Gartner’s 2007 Hype Cycle summary “features more than 1,500 technologies and trends in 70 technology, topic and industry areas”! Maybe there should be a TechRadar for adoption cycles.

Links:

Introducing Forrester’s TechRadar Research Forrester Research, updated 15 Jan 2008
Forrester TechRadar: The Extended Supply Chain Application Ecosystem, Q2 2008 Forrester Research, 15 Apr 2008
Gartner’s Hype Cycle Special Report for 2007 Gartner, 1 Aug 2007
Understanding Gartner’s Hype Cycles, 2007 Gartner, 5 Jul 2007
Technology adoption lifecycle Wikipedia

Forrester and Gartner reports will not be visible in full without a membership subscription. TechRadar is a Forrester trademark.

Computing: the science-engineering continuum 18 Apr 2008

Posted by Tony Law in Uncategorized.

Professor Sir Tony Hoare is one of academic IT’s great names: one of the first UK Professors of computer science, and now associated with Microsoft’s Roger Needham Research Laboratory in Cambridge. What may not be so well realised is that his early days in IT included the decidedly practical delivery of the first commercial compiler for the Algol-60 language – the first language I myself learned as a postgraduate at Oxford less than ten years later. But, in the early 60s, everything was new and there was as much research involved in such a task as practical engineering.

The BCS’s London Group meeting yesterday evening was probably better attended than any I’ve been to, showing how widely recognised and respected Prof. Hoare is. He shared his thoughts about the two ends of the spectrum, and the collusion (no, not collision) between them. He stimulated a wide-ranging discussion about the differentiating characteristics of “pure” exploratory science, and practical engineering. I’ll pick just a few, and reflect the discussion.

A scientific investigation focusses on the long-term development of knowledge, or of an effective framework to understand and rationalise observations. It is just as interested in the details that don’t fit. In fact, probably more so, since these indicate the imperfections that will lead, in due course, to the flash of inspiration that creates a newer, better and more generalisable theory. We might say that a scientific theory is developed to stand for the foreseeable future, though “foreseeable” might be short-lived or very long term.

An engineer, on the other hand, is concerned to develop a serviceable, dependable product whether it be a bridge, a vacuum cleaner or a software module. The 80-20 rule applies; an engineer will either over-compensate for the unknown, to assure safety, or find a way to work round it. Innovation is a source of risk that has to be assessed and managed, and an engineer for preference will work on the basis of what has been successfully understood and used before. An engineering artefact is developed to meet a particular need and to stand for a known period of time. In many cases this is short term, until the next scheduled generation of the product is developed, although the designed-for period may be long and it may be considerably exceeded (think mediaeval buildings, for example).

Tony Hoare is interested in the science of correctness of programs. He argued that this endeavour is truly a science, based on the list of characteristics he (and the audience) had adduced. However, its target is the development of engineering dependability in the software that will be proved using the tools. He sees the link between the two as the necessary development of domain models which can themselves be scientifically proved, and can then be used as the basis for dependable production-scale engineering. He cited the parallel of the aircraft designer, who tests aerodynamics using a model in a wind tunnel. Such models must be of significant enough scale that the tests are realistic, but will not be production scale. Finding such domain models is itself a challenge; in some cases they may have to be developed.
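For readers unfamiliar with the field, the basic object of this science of correctness is the Hoare triple. A standard textbook example (my illustration, not from the talk itself):

```latex
% A Hoare triple \{P\}\,C\,\{Q\} asserts: if precondition P holds and
% command C terminates, then postcondition Q holds afterwards.
\{\, x = n \,\}\quad x := x + 1 \quad \{\, x = n + 1 \,\}
% It follows from the assignment axiom \{Q[E/x]\}\; x := E \;\{Q\},
% substituting x+1 for x in the postcondition.
```

Proving such assertions mechanically, at production scale, is exactly the engineering dependability Hoare describes as the target of the science.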

Over refreshments, I talked briefly with Prof. Hoare about the challenge of developing tools to prove correctness. Their own correctness must of course be of a higher order than the correctness they are trying to prove, otherwise the error might be in the tool, not the artefact being tested! There’s no primary standard. If I understood correctly, there’s a bootstrap process which can be used to successively prove the correctness of elements of the tool. The necessity for proof of that process is what shows that this is science, not engineering!

Links:

Tony Hoare home page on Microsoft Research
Elliott Algol-60 (History of Programming Languages, Murdoch University, Australia)
BCS London Central Branch past events; look here for the download of Prof. Hoare’s slides, not yet available

One identity, multiple networks … 9 Apr 2008

Posted by Tony Law in Consumerization, Tech Watch, Technorati.

Brighton BarCamp (see my post here) raised a question that’s been insistent in my mind for some time. Social computing sites (including virtual worlds) multiply like rabbits. How do you manage that? As I put it at the time:

The second day of BarCamp was illuminated by several conversations about the future of Social Networks (is there one? will multiplicity kill them off? is Facebook past it or you ain’t seen nothin’ yet? what about identity sharing with XFN and similar frameworks?)

Being a bit long in the tooth, I’ve been here before. A dozen or so years ago, I wrote the initial business case that took the company I worked for onto the Web. It was possible, then, to have a pretty good go at listing all the sites that were relevant to a pharmaceutical company and my IT colleagues. But of course that didn’t last long.

Another lesson from the past is that human beings are a gregarious species: we communicate. Almost any network technology goes person-to-person. The telephone, it was thought, would be used to broadcast church services and concerts. We know what happened. On France’s Minitel, perhaps the only really successful teletext service, the greatest successes were the interpersonal applications – not the databases. The Web’s going the same way: still lots of good information out there to browse, but Web 2.0 is all about the person-to-person Web: blogs, wikis, virtual worlds, social platforms and no doubt other things still to come.

The multiplicity of websites gave rise to the search engine. But when it’s multiple platforms that all carry part of your life, that’s not going to help.

Solutions are beginning to emerge: integration technologies to pull your life streams into one place. It’s not just what you put out there: it’s keeping in touch with your friends on different platforms. Here’s some of what’s going on; maybe the techies can comment with others.

Jabber links multiple instant messaging platforms. Corporate closed IM services are beginning also to open up to the outside world. It’s a great help to doing business.

Any RSS reader, pulling together feeds from any number of places but, in this context, perhaps particularly from blogs you want to keep up with. I like Google Reader.
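The aggregation an RSS reader does is simple enough to sketch. This is a minimal, self-contained illustration using only the Python standard library; the two feeds are inlined sample strings (in practice you would fetch them over HTTP), and all names are my own:

```python
import xml.etree.ElementTree as ET
from email.utils import parsedate_to_datetime

# Two hypothetical RSS 2.0 feeds, inlined so the sketch is self-contained.
FEEDS = [
    """<rss version="2.0"><channel><title>Blog A</title>
         <item><title>Post A1</title><pubDate>Wed, 09 Apr 2008 10:00:00 GMT</pubDate></item>
       </channel></rss>""",
    """<rss version="2.0"><channel><title>Blog B</title>
         <item><title>Post B1</title><pubDate>Tue, 08 Apr 2008 09:00:00 GMT</pubDate></item>
       </channel></rss>""",
]

def merged_items(feeds):
    """Pull items from several RSS 2.0 feeds into one list, newest first."""
    items = []
    for xml_text in feeds:
        channel = ET.fromstring(xml_text).find("channel")
        source = channel.findtext("title")
        for item in channel.findall("item"):
            items.append({
                "source": source,
                "title": item.findtext("title"),
                "date": parsedate_to_datetime(item.findtext("pubDate")),
            })
    return sorted(items, key=lambda i: i["date"], reverse=True)

for entry in merged_items(FEEDS):
    print(entry["source"], "-", entry["title"])
```

One merged, date-ordered stream from many sources: that, in essence, is what Google Reader and its kin do for you.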

OpenSocial is intended to link social networking platforms: led by Google (which includes YouTube, remember) with, among others, Friendster, LinkedIn, MySpace, Plaxo, Salesforce.com, Six Apart and the database giant Oracle.

XFN (XHTML Friends Network) is Semantic Web technology to assert links between different web sites or services which are “you”, like this blog, my Pocket Website, and my base InformationSpan website. See the blogroll for links!
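XFN works by adding `rel` values to ordinary hyperlinks; `rel="me"` is the assertion that the linked site is also “you”. Here’s a minimal sketch of extracting those assertions with the Python standard library; the markup and URLs are invented placeholders, not my real blogroll:

```python
from html.parser import HTMLParser

class XFNMeLinks(HTMLParser):
    """Collect hyperlinks carrying the XFN rel="me" assertion."""
    def __init__(self):
        super().__init__()
        self.me_links = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        a = dict(attrs)
        # rel can hold several space-separated values, e.g. "friend met"
        if "me" in a.get("rel", "").split():
            self.me_links.append(a.get("href"))

# Hypothetical blogroll markup for illustration.
sample = """
<ul>
  <li><a rel="me" href="http://example.com/blog">my blog</a></li>
  <li><a rel="me" href="http://example.com/pocket">Pocket Website</a></li>
  <li><a rel="friend met" href="http://example.org/">a friend</a></li>
</ul>
"""

parser = XFNMeLinks()
parser.feed(sample)
print(parser.me_links)
```

A crawler that follows only `rel="me"` links can stitch together one identity across many sites, which is exactly the unification this post is about.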

Now there’s Twhirl, downloadable software that plugs directly into Twitter. According to the report in MIT’s Technology Review, Twhirl lets you post to three services at once: Twitter, plus the similar services Pownce and Jaiku.

And Technology Review also reports on the MOGBox, which will let you design a high-resolution 3-D character and transport it as an avatar to multiple virtual worlds. It wouldn’t link the worlds themselves, but at least you can look the same everywhere without having to recreate your character each time.

Come to that, SMTP was the unifying technology for email. In the early days there were two addressing conventions on the Internet and proprietary closed systems like AOL as well …

So you have to bet on the unifying power of the human spirit to pull together these threads.

Links:
Consolidating Your Web Banter Technology Review, 9 Apr 2008
One Avatar, Many Worlds Technology Review, 8 Apr 2008
OpenSocial (Google)
XFN
Jabber.org the Jabber project
Twhirl “a desktop Twitter client”
MOGBox announced (Mogware blog, 19 Feb 2008)
Minitel (Wikipedia)

Sizing up the insight services marketplace 8 Apr 2008

Posted by Tony Law in Uncategorized.

I’m about halfway through a thorough trawl of the list of firms who are, or may be, IT insight services providers. That’s analyst/research/advisory firms, starting with Forrester and Gartner and going right down to the niche providers and boutiques. Watch the main InformationSpan website when I get it licked into shape (later this month if I’m lucky) but some interesting findings already!

First, you wouldn’t believe how many firms state they are the market leader in market research for semiconductor vendors, or various aspects of telecoms … A lot of research actually is market research, aimed at vendors and service providers rather than enterprise IT. Second, there’s a fair slew of out-of-date data in the registries held by the analysts’ analysts. Even Tekrati, which is the leader (in my view) and has a news stream I’m coming to rely on, has some dead links. Firms move on or get acquired, and one-man (or one-woman) bands close through retirement.

But third, there’s a gold mine of potential coverage of the niche areas, like legal document management or LIMS (laboratory information management systems) which I struggled to find when I was doing this for enterprise IT. I haven’t had time to sort the wheat from the chaff yet. But watch this space!

Links:

These are the sources of most of the links I’m reviewing; the database is being added to on my own account as I explore. AllTheAnalysts is in my blogroll; here are the others:
Tekrati lists nearly 500 sources and very nearly all of them are accurate!
Techra has about 450 but the quality is more variable
Outsell covers IT analysts as part of much wider coverage of the entire information industry; I’ve found mention of about a hundred IT sources

Analyst industry ethics 1 Apr 2008

Posted by Tony Law in Insight services.

I’ve just come across a paper by Joe Clabby (Clabby Analytics) talking about advocacy and objectivity in the analyst business (what InformationSpan calls Insight Services). It’s worth reading to get you thinking about how you use insight services, and what they’re doing.

Joe’s a researcher (as I am) and he espouses a solid research-based methodology. He expects insight services analysts to base their positions on actual research, not just “feel”. He’s comfortable that analysts take a position on a market place: the best of these positions are based on hands-on research, like Forrester’s Wave and similar tools, and I agree with him. After nearly 15 years as a service user, I know that enterprises want help and actionable support in making actual decisions, not just the raw data and an invitation to “make up your own mind”.

However, I think he’s over-optimistic in expecting the technology press to be accurate and objective compared with an analyst. A reporter is going to be on one assignment one week, and another the next. They may have an area of specialisation, but a good analyst from one of the larger firms, or a niche specialist, will outdo them. How often have you read a trade press report about something you actually know about, and agreed with everything it says? Not often, I assume! Which is by no means to question reporters’ professionalism, only to say that with deadlines and limited resources they will mostly get only part of the truth. Sure, I use the news sources; but I treat them with caution. Especially today, which is April 1st, but that’s by the by!

I picked the Forrester Wave as a prime example because, unlike some competitors, Forrester gives access to the raw data so that a client can re-balance the scores to meet their own specific environment. And that is another thing which I expect from a good analyst: the ability to take their in-depth research-based knowledge and apply it to my particular concerns: the company’s business aims and culture, the “how we do IT”, the CIO’s top six issues, and so on. In other words, reading the research report isn’t the end, it’s the beginning of the conversation.
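Re-balancing the scores is just a re-weighted average over the criteria. A toy sketch in that spirit; the vendors, criteria and numbers here are entirely invented, not Forrester data:

```python
# Invented raw criterion scores for two hypothetical vendors (0-5 scale).
RAW_SCORES = {
    "Vendor A": {"strategy": 4.0, "current offering": 3.0, "market presence": 5.0},
    "Vendor B": {"strategy": 3.0, "current offering": 5.0, "market presence": 2.0},
}

def weighted_score(scores, weights):
    """Weighted average of criterion scores; weights need not sum to 1."""
    total = sum(weights.values())
    return sum(scores[c] * w for c, w in weights.items()) / total

# Equal weights vs. a client who cares mostly about the current offering.
default = {"strategy": 1, "current offering": 1, "market presence": 1}
my_weights = {"strategy": 1, "current offering": 3, "market presence": 0.5}

for vendor, scores in RAW_SCORES.items():
    print(vendor,
          round(weighted_score(scores, default), 2),
          round(weighted_score(scores, my_weights), 2))
```

Notice that the re-weighting can flip the ranking: a vendor that leads on the default weights can trail once your own priorities are applied, which is precisely why access to the raw data matters.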

Joe – thanks for starting this debate. Let’s keep it going!

Links:

IT Analyst Ethics: Advocacy vs. Objectivity Clabby Analytics, Jan 2008
Waves and Vendor Comparisons from Forrester Research

Links are provided in good faith, but InformationSpan does not take responsibility for the content of linked third party sites. Forrester Wave is a trade mark.
