Benefits realisation: analyst insight

I’m facilitating an event tomorrow on “Optimising the benefits life cycle”. So as always I undertook my own prior research to see what the mainstream analysts have to offer.

Forrester was a disappointment. “Benefits Realization” (with a z) turns up quite a lot, but the research is primarily labelled “Lead to Revenue Management” – that is, it’s about sales. There is some material on the wider topic, but it dates back several years or more. Still, it’s always worth remembering Forrester’s elevator project pitch from Chuck Gliedman: “We are doing A to make B better, as measured by C, which is worth X dollars (pounds, euros …) to the organisation.”
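That formula is concrete enough to write down as a structured record. Here’s a minimal sketch in Python; the field names and the worked example are my own illustration, not Forrester’s or Gliedman’s:

```python
from dataclasses import dataclass

@dataclass
class BenefitsStatement:
    """Gliedman's elevator pitch as a record; field names are illustrative."""
    action: str        # A: what we are doing
    target: str        # B: what it makes better
    measure: str       # C: how the improvement is measured
    value: float       # X: what it is worth to the organisation
    currency: str = "GBP"

    def pitch(self) -> str:
        return (f"We are doing {self.action} to make {self.target} better, "
                f"as measured by {self.measure}, which is worth "
                f"{self.value:,.0f} {self.currency} to the organisation.")

# Invented example, purely for illustration.
print(BenefitsStatement(
    action="automating invoice matching",
    target="accounts payable",
    measure="invoices processed per FTE per day",
    value=250_000,
).pitch())
```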

There is a lot of material from both academic researchers and organisations like PMI (Project Management Institute). But in the IT insight market, there seems to be remarkably little (do correct me …) except that the Corporate IT Forum, where I’ll be tomorrow, has returned to the issue regularly. Tomorrow’s event is the latest in the series. The Forum members clearly see this as important.

But so far as external material is concerned, this post turns into a plug for a recent Gartner webinar by Richard Hunter, who (a fair number of years ago) added considerable value to an internal IT presentation I delivered on emerging technologies for our enterprise. I’m not going to review the whole presentation because it’s on open access from Gartner’s On Demand webinars. But to someone who experienced the measurement-oriented focus of a Six Sigma-driven IT team, it’s no real surprise that Richard’s key theme is to identify and express the benefits before you start: in business terms, not technology-oriented language, and with an expectation that you will know how to measure and harvest the benefits. It’s not about on-time-on-budget; it’s about the business outcome: shortening a process cycle from days to hours, reducing the provision for returns, and so on.

If this is your topic, spend an hour reviewing Richard’s presentation (complete with family dog in the background). It will be time well spent.

Links:
• Getting to Benefits Realization: What to Do and When to Do It, Richard Hunter, Gartner, 7 Aug 2014 (go to Gartner Webinars and search for Benefits Realization)
• Corporate IT Forum: Optimising the Benefits Lifecycle (workshop, 16 Sep 2014)

Dark Web: good, bad, or amoral?

Last night I watched BBC’s Horizon programme reviewing the history and impact of what’s become known as the Dark Web. Here, it seems, is the scenario.

In the beginning was the Internet. In the early days of the Web I wrote a strategic report for my company which triggered the adoption of web technology and internet email. One of the things I pointed out was that, in the precursors such as newsgroups, no-one was anonymous. Traffic carried identifiers or, at least, IP addresses. People knew who you were, and your company’s reputation hinged on your behaviour online. As the Internet of Things expands, the amount of information about individuals that can be analysed out of internet traffic expands exponentially with it.

Governments, particularly the US, recognised the potential for compromising security, and the response was Tor (The Onion Router), which passes traffic through a number of nodes to disguise its origin. The project moved to Open Source and has become widely used in response to the growing levels of surveillance of internet traffic, revealed most notably of course by Edward Snowden. WikiLeaks uses Tor to facilitate anonymous contributions: it wasn’t tracking which identified Snowden, or Manning. It has been used extensively in recent events in the Middle East.
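The onion idea itself is simple to illustrate. Below is a toy Python sketch of layered wrapping and peeling; the “cipher” is a throwaway XOR keystream, nothing like the real cryptography Tor uses, and the relay names and keys are invented:

```python
import hashlib
from itertools import count

def keystream(key: bytes, length: int) -> bytes:
    """Toy keystream (SHA-256 in counter mode). NOT real cryptography."""
    out = b""
    for i in count():
        if len(out) >= length:
            return out[:length]
        out += hashlib.sha256(key + i.to_bytes(4, "big")).digest()

def xor_layer(data: bytes, key: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# The sender knows a key for each relay in the circuit; each relay knows
# only its own key (real Tor also hides the next hop inside each layer).
relays = [("entry", b"key-1"), ("middle", b"key-2"), ("exit", b"key-3")]

# Wrap: innermost layer for the exit relay, outermost for the entry relay.
onion = b"GET https://example.com/"
for _, key in reversed(relays):
    onion = xor_layer(onion, key)

# Peel: each relay removes exactly one layer as the message passes through;
# only after the exit relay's layer comes off is the request readable.
for name, key in relays:
    onion = xor_layer(onion, key)
    print(f"after {name}: {onion!r}")
```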

So at this point, governments are trying to put the genie back in the bottle: they invented Tor, but they don’t like it being used to hide information from them. Moreover, it is being used for criminal transactions on a substantial scale; and at this point Bitcoin becomes part of the picture, because (unlike conventionally banked money) it too is not inherently traceable.

There’s no firm conclusion drawn in the programme, and surely that’s right. Technology of this kind isn’t inherently good or bad: it is, in the strict sense of the word, amoral. But the uses people make of it, as with almost any technology, are not. And the programme raises strong issues about the balance of privacy and security, both in their widest senses. The sources used are strong and reputable: the Oxford Internet Institute; Julia Angwin, an established technology researcher and writer; key individuals in the development of these technologies; Julian Assange of WikiLeaks; and, not least, Tim Berners-Lee, who admits to having been perhaps naive in his early assessment of these issues.

While it’s still on iPlayer, it’s worth a watch.

Links:
• Inside the Dark Web, BBC Horizon, 3 Sep 2014 (available on iPlayer in the UK until 15 Sep)
• Tor Project online, and Wikipedia article
• Oxford Internet Institute
• Julia Angwin

Growth, Innovation and Leadership: Frost & Sullivan

I’m on a Frost & Sullivan webinar: Growth, Innovation and Leadership (GIL: a major Frost theme). It’s a half-hour panel discussing successful types of innovation and examples of future innovative technologies, with Roberta Gamble, Partner, Energy & Environmental Markets, and Jeff Cotrupe, Director, Stratecast. David Frigstad, Frost’s Chairman, is leading. The event recording will be available in due course.

Frigstad asserts that most industries are undergoing a cycle of disrupt, collapse, transform (or die: Disrupt or Die is an old theme of mine). We start with a concept called the Serendipity Innovation Engine. It’s based on tracking nine technology clusters; major trends; industry sectors; and the “application labs” undertaking development (which includes real labs and also standards bodies and others). And all of this is in the context of seven global challenges: education, security, environment, economic development, healthcare, infrastructure, and human rights.

Handover to Gamble. This is a thread on industry convergence in energy and environment, seen as a single sector. Urbanisation, and the growth of emerging economies, are major influences on demand growth here.

We do move to an IT element: innovation in smart homes and smart cities, with integration between sensor/actuator technology and social/cloud media. Emphasising this, Google has just bought a smart home company (Nest Labs). City CIOs and City Managers are mentioned as key people – a very US-centric view when most urbanisation is not occurring in the developed world … we do return to implications for developing economies, where the message is that foundations for Smart (which includes effective, clean energy use) should be laid now, while there is a relatively uncluttered base to start from.

Frigstad poses a question based on the idea that Big Data is one of the most disruptive trends in this market. Gamble suggests parking as an example. Apps to find a parking spot, based on data from road sensors or connected parking meters, are being piloted in San Francisco, though not only there. Similar developments in the UK were mentioned at a Corporate IT Forum event I supported earlier this year.

It’s a segue into the next section: an introduction for Cotrupe, whose field is Big Data and Analytics. Examples of disruption here include the Google car: who would have thought Google would become an automotive manufacturer? Is your competitor someone you wouldn’t expect? An old question, of course. The UK’s canal companies competed with each other and perhaps with the turnpike roads; they mainly didn’t foresee the railways.

Cotrupe’s main question is: what is Big Data really? He posits it as an element of data management, together with Analytics and BI. I’d want to think about that equation; it’s not intuitively the right way round. But high-volume, rapidly moving data does have to be managed effectively for its benefit to be realised – delivering the data users need, when they need it, but not so much of it that they’re overwhelmed. And this means near real-time. It’s IT plus Data Science.
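As a concrete illustration of “the data users need, when they need it”, here’s a minimal Python sketch of windowed summarisation over a stream. The sensor readings are invented and the one-minute window is an arbitrary choice:

```python
from statistics import mean

def windowed_summary(stream, window_seconds=60):
    """Collapse a time-ordered stream of (timestamp, value) pairs into one
    digest per window, instead of forwarding every raw event downstream."""
    window, window_start = [], None
    for ts, value in stream:
        if window_start is None:
            window_start = ts
        if ts - window_start >= window_seconds:
            yield window_start, len(window), mean(window)
            window, window_start = [], ts
        window.append(value)
    if window:                      # flush the final, partial window
        yield window_start, len(window), mean(window)

# Invented usage: 5 minutes of once-a-second sensor readings
# collapse to five per-minute digests.
readings = ((t, (t * 7) % 23) for t in range(300))
for start, n, avg in windowed_summary(readings):
    print(f"window starting {start:>3}s: {n} events, mean {avg:.1f}")
```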

Frost suggest they are more conservative than some because they see growth of the Big Data market held back by the sheer cost of large-scale facilities.

We’re at the promised half hour for the primary conversations, but still going strong, basically talking with Cotrupe about various industry sectors where Big Data has potential: to support, for example, a move from branch-based banking to personal service in an online environment. There’s some discussion of Big Data in government: how will this affect the style of government over, perhaps, the next 20 years? Cotrupe mentions a transformation in the speed of US immigration processing in recent years, where data is pre-fetched and the process takes minutes instead of hours. He’s advocating opening up and sharing information: in other industries too, for example not being frozen by HIPAA requirements in (US) healthcare or, perhaps, EU data protection requirements. I have personal experience of obstructive customer service people trying to hide behind those, in fact parading their lack of actual knowledge.

Cotrupe talks about privacy, not least in the wake of Snowden and what’s been learned about sharing between the NSA and the UK agencies. Cotrupe would like to see this ease of sharing brought to bear in other areas, but asks: how do we manage privacy here? There are companies which are leading the way in collecting data in consumer-sensitive ways, and this needs to become standard practice. In any case, not collecting data you don’t need will reduce your data centre (should that be Data Center?) footprint.

As we come to a close, with a commercial for the September event in Silicon Valley, I have to say I’m not convinced this webinar was wholly coherent.

If you call something a Serendipity Innovation Engine I want to know how it relates to serendipity: that is, the chance identification of novel discoveries.

If you present a layered model, I expect the layers to relate (probably hierarchically) to one another. It would be more valuable to talk about the four elements of this model separately and be clearer about what each represents. For example, “Health and Wellness” occurs as a Technology Cluster (why?). It’s also a Mega Trend in a layer where Social Trends also sits; surely people’s concern about Health and Wellness is a social trend? Each layer seems to mix social, technical and other concerns.

I learned a more useful framework when teaching the OU’s Personal Development course. This one really is layered. The two internal layers (this is for personal development) are your immediate environment, and other elements of your working organisation. Then Zone 3 (near external) encompasses competitors, customers/clients, suppliers and local influences. Zone 4 (far external) includes national and international influences: social, technological, economic, environmental and political (STEEP). On this framework you can chart all the changes discussed in today’s webinar and, I think, more easily draw conclusions!

Links:
• Frost & Sullivan Growth Innovation & Leadership
• Google buys Nest Labs for $3.2bn …, The Guardian, 13 Jan 2014
• STEEP framework: Sheila Tyler, The Manager’s Good Study Guide (third edition, 2007), The Open University, pages 198–202

What to make of Heartbleed?

I watched the BBC News report last night about the security hole in OpenSSL, with its conclusion that everyone should change all their passwords, now … and the old chestnut that you should keep separate passwords for every service you use, never write them down, and so on. Thankfully, by this morning common sense was beginning to prevail. The Guardian passes on advice to check whether services have been patched first, and offers a link to a tool that will check a site for you.

First, as they say, other Secure Sockets Layer implementations are available. While a lot of secure web connections do rely on OpenSSL, it’s not by any means universal.

Second, as always, dig behind the news, as TechCrunch did. This is the first vulnerability to have its own website and “cool logo”; it was launched by Codenomicon in Finland, which started by creating notes for its own internal use and then took what it calls a “Bugs 2.0” approach to put the information out there. I remember doing something similar way back in Year 2000 days. Incidentally, the OpenSSL report (very brief) credits Google Security for discovering the bug. It also identifies the versions which are vulnerable. (There’s a note there that says that if users can’t upgrade to the fixed version, they can recompile OpenSSL with -DOPENSSL_NO_HEARTBEATS, which, I’m guessing, gives a clue as to the naming of the bug.)

If you want real information, then, go to Heartbleed.com. The Codenomicon Q&A is posted there. In brief: this is not a problem with the specification of SSL/TLS; it’s an implementation bug in OpenSSL. It has been around a long time, but there’s no evidence of significant exploitation. A fix is already available, but needs to be rolled out.
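The bug class itself is easy to picture. Here’s a toy Python model of it; the real code is C inside OpenSSL, and the “memory contents” below are invented, but the shape of the flaw (and of the fix) is the same: the reply trusts the length claimed in the request rather than the payload actually sent.

```python
# Toy model of the Heartbleed bug class. Illustrative Python only:
# the real bug was in OpenSSL's C implementation of TLS heartbeats.
ADJACENT_MEMORY = b"...session=9f2c;user=alice;password=hunter2..."

def heartbeat_vulnerable(payload: bytes, claimed_len: int) -> bytes:
    # BUG: echoes claimed_len bytes from where the payload sits in memory,
    # even when the payload is shorter - leaking whatever lies next to it.
    buffer = payload + ADJACENT_MEMORY
    return buffer[:claimed_len]

def heartbeat_fixed(payload: bytes, claimed_len: int) -> bytes:
    # FIX: a bounds check; discard requests whose claimed length
    # exceeds the payload actually received.
    if claimed_len > len(payload):
        return b""
    return payload[:claimed_len]

print(heartbeat_vulnerable(b"hi", 40))  # leaks 38 bytes of "memory"
print(heartbeat_fixed(b"hi", 40))       # b''
```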

What was clear, too, is that the BBC reporter (and some others) don’t understand the Open Source process. The Guardian asserts that “anyone can update” the code, leading readers to suppose that someone could maliciously insert a vulnerability. Conspiracy theories suggest that this might even be part of the NSA’s attack on internet security. But of course that ain’t the case. Yes, anyone can join an Open Source project: but code updates don’t automatically get put out there. Bugs can get through, just as they can in commercial software: but testing and versioning is a pretty rigorous process.

Also, this is a server-side problem, not an end-user issue. So yes, change your passwords on key services that handle your critical resources if you’re worried; but it might be worth first checking whether they’re likely to be using OpenSSL. Your bank probably isn’t. There’s a useful list of possibly vulnerable services on Mashable (Facebook: change it; LinkedIn: no need; and so on).

And what do you do about passwords? We use so many online services and accounts that unless you have a systematic approach to passwords you’ll never cope. Personally, I have a standard, hopefully unguessable password I use for all low-criticality services; another, much stronger, for a small handful of critical and really personal ones; and a system which makes it fairly easy to recover passwords for a range of intermediate sites (rely on their Reset Password facility and keep a record of when it was last used). But also, for online purchases, I use a separate credit card with a deliberately low credit limit. Don’t just rely on technology!

Links:
• Heartbleed, The First Security Bug With A Cool Logo, TechCrunch, 9 Apr 2014
• Heartbleed bug, website from Codenomicon (Finland) – use this site for onward references to official vulnerability reports and other sources
• OpenSSL project
• The Heartbleed Hit List, Mashable, 9 Apr 2014
• Heartbleed: don’t rush to update passwords, security experts warn, Alex Hern, The Guardian, 9 Apr 2014
• Heartbleed bug: Public urged to reset all passwords, Rory Cellan-Jones (main report), BBC, 9 Apr 2014
• Test (your) server for Heartbleed, service from Filippo Valsorda as referenced in The Guardian. I’m unclear why this service is registered in the British Indian Ocean Territory (.io domain) since Filippo’s bio says he is currently attending “hacker school in NYC”. On your own head be it.

Facebook at 10, Microsoft at 40

OK, a slight stretch for a snappy headline, but these have been two lead stories in the last few days.

Others will comment with more depth and more knowledge than I can on either Facebook’s tenth anniversary or the appointment of Satya Nadella to succeed Steve Ballmer (and, of course, Bill Gates) at the head of Microsoft. But I was remembering a META Group event in London, quite a while ago now, when the Web was just arriving and disintermediation was a new word. The speaker took a look at the banking industry, with new online start-ups starting to eat the lunch of the established financial institutions.

The point was this. The new entrants invested, typically, in just two things: infrastructure, and software development. Existing players had institutional weight; they had enterprises to keep in existence, with all the corporate overheads that accumulate over time: shareholders, stockmarket expectations, dividends. They needed to cut costs to compete with the new lean players. And (doesn’t it still happen?) they would target the IT budget. So the area of investment which differentiated their new competitors was precisely where they were dis-investing.

Microsoft is fast approaching 40. It’s a solid, established player with corporate overheads, strategies, shareholders. Is it still as lean and sharp as the company which turned on a sixpence (a dime, if you’re American; a 5p piece for the youngsters) when it “got” the Internet and realised that MSN and AOL were not going to be where most of the traffic went? Enter Internet Explorer, competing with Netscape; and the rest is history.

Well … we can look at areas in the recent past where that hasn’t been repeated. Smartphones? A lot of Windows phones have been sold, but Android and iPhone are the big players, and an Office 365 subscription gives access to Office mobile software on those platforms as well as Windows. But on the other hand: Office 365 is a good model, for both consumers and Microsoft, because it converts intermittent capital costs for what is still essential software into predictable operational costs. And while capital versus operational is the language of the enterprise, where Microsoft’s heart arguably is these days, the concept works for individual licences too. There are undoubtedly challenges, but a CEO with an Indian background may have the right insight and vision to work round all that unavoidable corporate baggage.

What about Facebook? Facebook has got to the stage where it is acquiring the corporate baggage (shareholders and so on). It’s had to face up to public perception, particularly over issues like personal online security. Both companies now find themselves covered in the main news sections and financial pages, like any other corporation, rather than only in geek-tech reporting. They’ve gone mainstream.

So Facebook has new competitors in the social media space, sharper and newly innovative where Facebook is unavoidably solidifying. Microsoft is in a stable, continuing enterprise market which it understands; it appears not to understand the consumer market so well. Facebook is in precisely that consumer market, although a lot of enterprises use it to communicate with their own consumers. It’s a fashion market. What’s coming next, and how can Mark Zuckerberg stay ahead of the game?

No links here; just a personal opinion, and you can find lots of links with some easy searching!

Business Process Improvement

Working for GlaxoSmithKline IT, after the 2000 merger, developed my familiarity with business process improvement (small letters) and with Six Sigma methods and metrics. I would never call myself an expert. Routine training was to Green Belt level, without taking the qualifying exam, and I don’t have the instincts which make a leading practitioner able to pick the right tools to adopt for any specific need.

But it taught me a lot, which can be applied well beyond IT. First: as a previous CEO used to say, “If you don’t keep score, you’re only practising”. So, to drive and verify an improvement, you need metrics. But pick the right ones, the ones which will show you where you are. Establish your baseline before you start doing anything. Use the metrics to demonstrate the change (you hope!). And when the improved process has reached the status of business-as-usual, you can probably drop the measure. It’s no longer needed.

Second: a saying that was drummed into us: “Don’t tinker!” Don’t make changes on the basis of “I think …” without the analysis. Don’t over-react to one-off incidents: processes have variability, and some outliers will happen naturally.
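The Six Sigma tool that makes “don’t tinker” concrete is the control chart. Here’s a minimal Python sketch: baseline data sets the centre line and ±3σ limits, and only points outside them deserve investigation. The figures are invented for illustration:

```python
from statistics import mean, stdev

def control_limits(baseline):
    """Shewhart-style centre line and +/-3 sigma limits from baseline data
    (a deliberately minimal sketch of the idea, not a full control chart)."""
    centre, sigma = mean(baseline), stdev(baseline)
    return centre - 3 * sigma, centre, centre + 3 * sigma

# Invented baseline: daily average incident-resolution times, in hours.
baseline = [24, 26, 23, 27, 25, 24, 28, 25, 26, 24]
lo, centre, hi = control_limits(baseline)

# New observations: only the genuine outlier warrants a reaction.
for day, hours in enumerate([25, 27, 41, 24], start=1):
    if lo <= hours <= hi:
        print(f"day {day}: {hours}h - common-cause variation, don't tinker")
    else:
        print(f"day {day}: {hours}h - outside [{lo:.1f}, {hi:.1f}], investigate")
```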

And third: develop and demonstrate your own (internal IT) understanding and improvements before you try to work with the rest of the business. IT has, perhaps, a unique overview of what goes on across the company, and is almost always a participant in any business improvement project. So there’s good leverage there: but you have to gain credibility first. It takes a lot to get to the point where, when a business leader asks for an IT development, you can say “Why? What improvement are you driving? Who will own it? How will you measure it?”

Well: tomorrow I’m facilitating a Corporate IT Forum event on Business Process Improvement (BPI). I’m expecting the twin threads of, first, identifying and improving IT’s own processes; and, second, putting that experience and expertise at the service of the business as a whole. Where are the sources of information and analysis?

Gartner have a Leaders Key Initiative on BPI. The overview, as recent as July this year, has a natty graphic showing the BPI practitioner as a juggler (operations, transformation, skills, technology and innovation) under pressure from both business and technology forces. They offer a number of tools for maturity assessment “across IT disciplines” (what about the rest of the business?); key metrics (that’s IT spending and staffing, not how to measure a process); and best practices across several competencies. Towards the end, though, it seems to lapse back into business process management (BPM), not BPI.

There isn’t a lot in the Gartner blogs, but a useful post from Samantha Searle earlier this year challenges us to avoid the word “Process” (unless your business-side colleagues are process engineers or in manufacturing). That rather gels with the observation that Gartner probably, under the covers, maintain an IT-oriented focus: Process is certainly very present in the Key Initiative!

Similarly, I don’t find a great deal in Forrester specifically around BPI. But there’s a stronger focus on the interplay of IT expertise and whole-business improvement. A recent report, for example, discusses the shift from “a tactical process improvement charter” to a more strategic role across the enterprise. This requires a plan “for optimizing the BPM practice to deliver on new strategic drivers and business objectives”. That sounds more like it.

Interestingly, a search turned up a link to Cambridge University which I expected to point to the business school or to computer science. In fact it goes to their internal management services division, with a one-page (one-slide, really) graphic and definition of BPI. Take a look. But the Judge Institute of Management Studies does indeed have a Centre for Process Excellence and Innovation, also worth reviewing.

There’s a lot of material you can find by searching. Too much to survey. Assess with care!

Links:
• Business Process Improvement Leaders Key Initiative Overview, Gartner, 25 Jul 2013 (search Gartner for ID:G00251230)
• 10 New Year Resolutions for BPM Practitioners #2: Don’t Mention the “P-word …, Samantha Searle, Gartner blogs, 8 Feb 2013
• Optimize Your Business Process Excellence Program To Meet Shifting Priorities, Clay Richardson, Forrester report, 6 Jun 2013
• Business Process Improvement, University of Cambridge, Management and Information Services Division (undated)
• Centre for Process Excellence and Innovation, Judge Institute, University of Cambridge

Working with others (2)

On Thursday (4th July) I’m facilitating a Corporate IT Forum event called Collaborating with Third Parties (the working title, reflected in its URL, was “Beyond the Firewall”). As it happens this is something I have ideas about. I’ll need to work quite hard not to impose them on the group, since it’s the group’s shared learning that’s important.

Quite a long time ago now, a group of us in BP’s long-disbanded IT Research Unit worked with Imperial College, AEA Harwell (as it was), ICL (remember the British computer company?) and, in due course, many others, looking at management architectures for widely distributed systems. That is to say, systems where components developed and hosted by different organisations came together to form composite systems which did useful work. In the late 1980s this was not a well understood way of building applications.

In today’s Internet-enabled world, third-party components are everyday reality. Any vendor who accepts credit card transactions over the Internet, for example, may create their own payment system: but they may equally well wedge in a widget from someone else who understands, and has resolved, the issues around payment protection and the compliance and standards embodied in PCI. Whoever processes their payments is almost guaranteed then to invoke either Mastercard’s or Visa’s online verification service. The payment, then, passes through at least two and probably three different systems before the vendor collects their money. No one organisation has responsibility for the overall system. And it doesn’t matter if you’re an organisation the size of Amazon, eBay or Tesco: when you need a card transaction verified, you don’t have a serious say in how this is done. You interface to Verified by Visa, and you do it their way or not at all.
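To make the composition explicit, here’s a toy Python sketch of that flow. The three functions stand in for three organisations’ systems, and every name in it is invented (real integrations go through PCI-compliant gateways and the card networks’ own services):

```python
# Toy model of a composite payment flow spanning three organisations.
# All names are invented; this is a sketch of the structure, not of any
# real payment API.

def card_network_verify(card_token: str, amount: float) -> bool:
    """Owned by the card network: the vendor has no say in how this works."""
    return card_token.startswith("tok_")   # stand-in for real verification

def payment_widget_charge(card_token: str, amount: float) -> dict:
    """Owned by the payment provider, wedged into the vendor's checkout."""
    if not card_network_verify(card_token, amount):
        return {"status": "declined"}
    return {"status": "captured", "amount": amount}

def vendor_checkout(basket_total: float, card_token: str) -> str:
    """The only part the vendor controls - and it must fit the widget's API."""
    result = payment_widget_charge(card_token, basket_total)
    return "order confirmed" if result["status"] == "captured" else "payment failed"

print(vendor_checkout(42.50, "tok_8f3a"))   # -> order confirmed
```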

None the less if you’re Amazon or, in the USA, WalMart, you do have a lot of clout. And if you want to do online supply chain stuff with WalMart, again, however big you are as a multinational global supplier, you do it their way.

These kinds of interaction are not even-handed. One party dominates. I wouldn’t, myself, call them collaborative.

Here’s the other model. In the oil industry (back to BP again) joint ventures are commonplace. You set up a joint operating company, quite likely with its own capital, operating and management structures: but you want to share expertise, experience and decisions even-handedly, so the JV needs to draw on both companies’ information. This doesn’t happen if one of the companies puts its arm round its geology information, for example, and refuses to let the other see it.

More subtly, it doesn’t happen if one company insists that data from the JV is stored “in my data centre, on my servers, with access controlled by my LDAP directory”. It may well be stored in your data centre on your servers, because that’s the best place. But your partners must, at the least, be able to get access as easily as your own people. They must also be able to decide who from their side is allowed access: and preferably to set it up without referring to you.

It’s similar to what Euan Semple says about conversations. He quotes David Weinberger to the effect that “Conversations only happen between equals”, and elaborates: “If two people are not prepared to see each other as equal, at least for the duration of their interaction with each other, then what they are having is not a conversation”.

It’s the same for a collaborative relationship. If you want to decide whether a relationship is truly collaborative, I think the question is whether control is symmetrical: if you were in their place, and they in yours, would you be able to work in the model you’ve set up?
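That test can even be stated mechanically. Here’s a minimal Python sketch over an invented permissions model: swap the two partners, and ask whether the model is unchanged.

```python
# Minimal sketch of the symmetry test over an invented permissions model:
# which partner administers each side's access to the shared JV data.

def is_symmetric(model: dict, a: str, b: str) -> bool:
    """True if swapping partners a and b leaves the control model unchanged."""
    swap = lambda x: b if x == a else a if x == b else x
    swapped = {swap(org): {key: swap(val) if isinstance(val, str) else val
                           for key, val in rights.items()}
               for org, rights in model.items()}
    return swapped == model

one_sided = {  # one partner controls both user lists
    "CompanyA": {"read_jv_data": True, "admin_of_own_users": "CompanyA"},
    "CompanyB": {"read_jv_data": True, "admin_of_own_users": "CompanyA"},
}
even_handed = {  # each partner controls its own user list
    "CompanyA": {"read_jv_data": True, "admin_of_own_users": "CompanyA"},
    "CompanyB": {"read_jv_data": True, "admin_of_own_users": "CompanyB"},
}
print(is_symmetric(one_sided, "CompanyA", "CompanyB"))    # False
print(is_symmetric(even_handed, "CompanyA", "CompanyB"))  # True
```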

If I’m wrong about this, I’ll find out on Thursday. What do you think?

Links:
• Collaborating with Third Parties, Corporate IT Forum workshop, 4 Jul 2013
• Euan Semple (2012), Organisations don’t Tweet, people do, John Wiley, Chichester. Page 110 ff.
• PCI (Payment Card Industry) Security Standards: the PCI Security Standards Council
• Working with others (1): feeling pleased with myself (ITasITis, 1 Jul) was about something quite different!