Technology management: Ovum’s perspective at BCS

Being much less active these days, and more remote from the capital, it’s rare that I get to London for professional events. But I was in town this week for a BCS Elite/North London Branch event. It was wet and dismal when I arrived, just to remind me what London can be like …

The event was a presentation (Climbing Technology Mountains – A Practical Guide) by a couple of senior Ovum analysts giving their perspective on technology management in today’s business environment. Tim Jennings, who presented first, used to be Research Director at Butler Group before its acquisition by Datamonitor in 2005 and eventual transition to become Ovum’s analyst research group. Ovum now advertises itself as part of the Business Intelligence Division of Informa PLC, following a merger last year; there have been several evolutions since the Datamonitor acquisition!

As a former Butler client, I was pleased to re-encounter Tim; he is now Chief Research Officer at Ovum. His presentation focussed on the strategic approach to today’s technology challenges (“Making headway with digital innovation & transformation”) with a mountaineering preamble and theme. The main headlines included: transforming IT capability; modernizing legacy systems; building the modern workplace; managing security, identity and privacy; adopting cloud services (at least in a hybrid model); connecting the physical world (the “Internet of Things”); exploiting business information; and enhancing the customer experience.

Most of the ideas were (dare I say) long familiar. But there are newer concepts coming through: the recognition of DevOps as well as agile to promote fast-moving adaptation; and a view on how far the Internet of Things is now reaching – not just into transport, logistics and retail but, for example, in healthcare and beyond. Big Data too is reaching a stage of maturity where it’s no longer about development and implementation but about exploitation in the hands of the end user through advanced desktop tools. But well-discussed challenges are still around: the role of what’s become known as “Shadow IT”, for example; the right architecture for hybrid cloud; the challenges of SaaS, as it empowers users but, by that very fact, works against attempts to simplify the application portfolio; giving workers the appropriate level of control of their own technology (whether through BYOD or other means); and recognising that IT transformation at scale is problematic. Though when an audience member reflected on the corporate challenge inherent in upgrading hundreds (or thousands) of desktop Windows machines, the response was “Yes, but Microsoft has just upgraded 40 million over the Internet”. The problem is corporate IT’s tendency to customise and lock down; maybe this finally has to go, so that auto-update can be allowed to just work. Vanilla is cheaper!

Perhaps the most interesting and newest concept in the discussion was that of the role of identity and identity management. Identifying the individual (and perhaps not just the individual user/customer/vendor/regulator person in all their roles, but also the individual device on the edge of the network) is both a key challenge and a significant enabler if it can be handled right. This topic was subsumed into security and attack/response strategies but it shouldn’t be: it’s perhaps one of the most crucial. This apart, by and large the impression was that the issues which were live when I was myself working directly in enterprise IT (which is now several years ago) are still the principal themes of analyst thinking. Despite the urgency we used to attach to issues such as BYOD, the “open enterprise”, SaaS or cloud services it seems life has not moved on that fast if Ovum are accurately reflecting their clients’ issues.

Richard Edwards, a Principal Research Analyst, followed up with a focus on knowledge workers and how to “re-platform” them. There was some interesting discussion on what makes a knowledge worker; one of the key characteristics is a desire for autonomy (“knowledge workers often gear their workspace towards better individual business outcomes, albeit not necessarily with the blessing of management or line-of-business”). If the provided tools don’t get the job done, we knowledge workers have always found work-arounds, using our own technology if need be. There’s a trade-off, therefore, between providing us with the flexibility to work the way we choose and managing the real issues of security, regulatory environments and backup. In the end, though, for any enterprise it is global disruption rather than corporate strategy which shapes the way we work (“If change is happening faster on the outside than on the inside, then the end is nigh”).

Richard commented that the tools (and methods, I guess) used by knowledge workers shape the products and processes of their enterprise. This may be a surprise. For me, these elements of the discussion were the most rewarding part of the evening.

Also it was good to see Ovum Research in action. Ovum’s research output remains entirely hidden behind its paywall, a stricter stance than even Gartner takes these days, so opportunities are few; but you can download their research agenda from their home page.

• Climbing Technology Mountains, BCS event, 16 Sep 2015 (content may be added later)
• Ovum research for buyers (enterprise CIOs)

Corporate Executive Board – now just “CEB”

Corporate Executive Board is a family of executive-level and senior management insight services extending across corporate IT and into major corporate functions (sales, finance, legal and so on). In the last several years (since 2012) it has presented itself as “CEB” rather than “Corporate Executive Board”. The parent company has now renamed itself accordingly, following the trend in recent years to name companies just by initials rather than by the meaningful name they were derived from. CEB doesn’t now seem to refer to its executive councils as Executive Boards; they have become Leadership Councils, and there is perhaps more focus on IT teams other than at executive level.

Oddly enough, though, they haven’t made the change online. Although the press release cites the corporate website as “”, this address still redirects to the old “” rather than vice versa.

The shape of the CEB offering is somewhat different from that of which I used to be a client. Looking at the IT portfolio, there are Leadership Councils which encompass CIO, Applications, Enterprise Architecture, Information Risk, Infrastructure, PMO and Midsized Companies. There is a Learning and Development section: IT Leadership; Business Analysis; Project Management; Risk Management; and Service Management. And there is an IT Roadmap Builder tool, along with IT Talent Assessment support.

There has been some shift, in recent years, from the Executive Board model of research sharing, in which information is solicited from members, organised, and published back to the membership. A CEB strategy paper would previously comprise tools and insights attributed to the members who contributed them; this was a key part of its value. On a sample of one, the IT Roadmap white paper, this no longer appears to be the case. The delivery model is no longer differentiated from that of the major players (Gartner et al) and has reverted to being analyst-delivered.

There is, however, now more open-access content than previously. It was never the case before that, as a non-subscriber, I could download a whitepaper as I have just done. There is also now a collection of blogs, and an analysis of these may follow in a future post.

Corporate Executive Board has been evolving. Keep an eye on its space.

• The Corporate Executive Board Company Now “CEB Inc.”, CEB press release, 27 May 2015
• Corporate Executive Board (linked as CEB Global; watch whether this link continues to reset back to
• CEB Blogs

On news, social media and responsibility

The Guardian this morning is published under a new editor. Katharine Viner takes over from Alan Rusbridger, and she takes charge of an institution which is very different from the one Rusbridger inherited from Peter Preston in 1995.

Rusbridger yesterday published a farewell to his readers: now no longer just readers, but also both members and contributors to the conversations which The Guardian facilitates. In the internet age, some papers instituted paywalls: Rusbridger cites Murdoch’s Times, which claims around 280,000 daily readers. The Guardian took the opposite stance, opening up its content to an international readership. It is now the second most widely read online English-language news “paper” worldwide: around seven million people read it online. For myself, I still subscribe to the paper edition: but the smartphone app has taken over from the website as my preferred means of access when, as recently, I am overseas. Even the BBC is not so accessible from abroad.

But the point of this post is to encourage you to read Rusbridger’s farewell in its entirety (and it’s quite long). It contains thoughtful, stimulating analysis of issues such as the place of the social web in interactive journalism – bringing forth a new role, combining journalism with the skill of forum moderation. There’s the continuing role of ethical reporting in holding people to account (including, as seen recently, its own industry peers). Illustrating the trend to online, there’s a comment that the new presses, bought when the paper changed format, were “likely to be the last we ever bought”.

He recalls The Guardian‘s first website, which “didn’t fall into the trap of simply replicating online what we did in print”; in my own career I led my company’s strategy towards the Internet and the emergent World-Wide Web, and I recognise these issues. In due course the paper has developed its interactive model, opening up for response and comment from its online readership as an important part of continuous publishing.

Wikileaks, the phone hacking scandals, Edward Snowden and more; recognition, through the Pulitzer Prize; and successes such as the curtailment of News Corporation’s monopolistic ambitions and, more recently, the shutdown of the US “phone dragnet that had secretly violated the privacy of millions of Americans every day since October 2001”. Interesting sideline: the link to this in Rusbridger’s article is null, and I couldn’t find a recent news article; but, in the interactive Comment is Free section, there’s a discussion from the American Civil Liberties Union dating from April 2014.

I’ve scratched the surface. For those of us looking at the ethics as well as the potential of information creation and sharing – and we are all publishers now – Rusbridger’s farewell should be required reading.

• ‘Farewell, readers’: Alan Rusbridger on leaving the Guardian after two decades at the helm, The Guardian, 29 May 2015
• Obama is cancelling the NSA dragnet. So why did all three branches sign off? Jameel Jaffer, American Civil Liberties Union, in Comment is Free, The Guardian, 25 March 2014
• other references in the articles

Nepal: an IT response

As well as the straightforward humanitarian agencies involved in relief following the now twin earthquakes in Nepal, this morning’s inbox alerted me to another important effort.

I’ve used Mapbox, in tandem with Google Maps, to provide the venues map for the Brighton Early Music Festival. Google Maps got a lot more complex at the last upgrade, and the development interface even for a simple published map is not so easy or friendly. Mapbox can import output from a Google map (which was my starter) and creates, to my mind, a simpler and clearer map with a more useful marker capability: the flags on the map can be numbered or lettered at will (where Google’s can only be in a simple sequence), to link to a list published alongside. With this map linked to a stand-alone Google map which provides the usual directions, search nearby and so on, I think our concert-goers have the best of both worlds.
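As a sketch of the lettered-marker capability: Mapbox accepts GeoJSON whose marker-symbol property (under the simplestyle spec, as I understand it) can be a single letter or digit, so a venue list maps naturally onto lettered flags. The Python below builds such a feature collection; the venue names and coordinates are hypothetical, not the real BREMF data.

```python
import json
import string

def venues_to_geojson(venues):
    """Build a GeoJSON FeatureCollection with lettered markers (a, b, c, ...).

    Each venue is a (name, longitude, latitude) tuple; the letter in
    'marker-symbol' matches the venue's position in a list published alongside.
    """
    features = []
    for letter, (name, lon, lat) in zip(string.ascii_lowercase, venues):
        features.append({
            "type": "Feature",
            "geometry": {"type": "Point", "coordinates": [lon, lat]},
            "properties": {"title": name, "marker-symbol": letter},
        })
    return {"type": "FeatureCollection", "features": features}

# Hypothetical venues for illustration only
venues = [
    ("St Example's Church", -0.1372, 50.8225),
    ("Concert Hall", -0.1410, 50.8219),
]
print(json.dumps(venues_to_geojson(venues), indent=2))
```

The resulting JSON can be pasted into, or uploaded to, a map editor as a single layer, which keeps the flag letters and the printed venue list in step.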

Mapbox is built on OpenStreetMap, an open, community-maintained mapping project. Today’s email flagged up its role in providing fast-response mapping for disasters such as Nepal. The email tells me:

Within just hours of the earthquake in Nepal the Humanitarian OpenStreetMap Team (HOT) rallied the OpenStreetMap community. Over 2,000 mappers quadrupled road mileage and added 30% more buildings. We designed print maps to aid post-earthquake relief efforts, chronicled satellite imagery collection over the area, and used Turf.js to identify the hardest-hit buildings and roads.
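The Turf.js step in that description is essentially a spatial query: given damage-zone polygons and building points, find the points inside. A minimal ray-casting point-in-polygon sketch in plain Python, with invented coordinates rather than real survey data:

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: count edge crossings of a ray heading right from (x, y)."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the ray's latitude
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

# Hypothetical damage zone and building locations, as (lon, lat)
damage_zone = [(85.25, 27.65), (85.40, 27.65), (85.40, 27.75), (85.25, 27.75)]
buildings = {"clinic": (85.30, 27.70), "school": (85.50, 27.70)}
hardest_hit = [name for name, (x, y) in buildings.items()
               if point_in_polygon(x, y, damage_zone)]
print(hardest_hit)  # only the clinic lies inside the zone
```

Turf.js does this (and much more) over whole GeoJSON layers; the point is that the underlying geometry is simple enough for thousands of volunteers' edits to be checked quickly.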

This is the strength of Open Source as a community effort. It can mobilise people for this kind of task on a scale that a commercial organisation cannot. You don’t have to be in Nepal; the work is to digitise satellite imagery, and the Nepal project wiki can get anyone established in the team.

Oh, and of course the resources (particularly servers and software) come under strain. So if you are not minded to donate to the Disasters Emergency Committee or one of its agencies, perhaps you can contribute time or a donation to support OSM’s Humanitarian OSM Team in this work.

• 2015 Nepal Earthquake page from the Open Street Map wiki
• BREMF venues (Mapbox embedded map, with link to Google) for Brighton Early Music Festival
• Mapbox and OpenStreetMap
• Why I hate the new Google Maps, ITasITis, 17 Apr 2014

Location services move indoors: Apple’s iBeacon

An incidental headline in Outsell’s information market monitoring email brought my attention to Apple’s new iBeacon technology, announced last year.

We’ve long been used to the idea that the smart devices we carry around with us might/can detect nearby things of interest: for example, alerting us to an offer from a store nearby. Location services, based on GPS, on your current WiFi connection, or on triangulation from your mobile signal, do this. So can active RFID.

But indoor location is difficult. Current technology is an updated version of the old nautical dead reckoning. It notes where you are when you lose your accurate GPS/cellular/WiFi positioning, and uses motion sensors to track.
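That dead-reckoning idea is simple vector arithmetic: start from the last good fix and accumulate heading-and-distance steps. A toy Python sketch, assuming idealised sensor readings (real accelerometer and compass data are far noisier):

```python
import math

def dead_reckon(start, steps):
    """Update a position from a sequence of (heading_degrees, distance_m) moves.

    Uses the nautical convention: 0 degrees = north (+y), 90 = east (+x).
    """
    x, y = start
    for heading, distance in steps:
        rad = math.radians(heading)
        x += distance * math.sin(rad)
        y += distance * math.cos(rad)
    return x, y

# Last GPS fix at the door, then three legs tracked by motion sensors
position = dead_reckon((0.0, 0.0), [(90, 5.0), (0, 3.0), (90, 2.0)])
print(position)  # roughly 7 m east, 3 m north of the door
```

The weakness is obvious from the code: every step's error accumulates, which is why pure dead reckoning degrades the longer you are away from a real fix.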

iBeacon is different. It’s a nearer-proximity application and is based on Bluetooth detection of your smartphone. Apple says: Instead of using latitude and longitude to define the location, iBeacon uses a Bluetooth low energy signal, which iOS devices detect. So you need Bluetooth turned on as well as having an appropriate app loaded. This leaves you a modicum of control, I guess.
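Under the hood, beacon ranging is signal-strength arithmetic: the beacon advertises its expected signal strength at one metre (its “measured power”), and the receiver estimates distance with a log-distance path-loss model. A hedged Python sketch of that model and of iBeacon-style proximity zones; the constants are typical illustrative values, not Apple’s actual algorithm:

```python
def estimate_distance(rssi, measured_power=-59, path_loss_exponent=2.0):
    """Estimate distance in metres from a Bluetooth RSSI reading (in dBm).

    measured_power: RSSI expected at 1 m (a typical beacon calibration value).
    path_loss_exponent: ~2 in free space; higher indoors with obstructions.
    """
    return 10 ** ((measured_power - rssi) / (10 * path_loss_exponent))

def proximity_zone(distance):
    """Bucket a distance estimate into iBeacon-style zones."""
    if distance < 0.5:
        return "immediate"
    if distance < 4.0:
        return "near"
    return "far"

# At RSSI equal to the measured power, the device is about 1 m away
print(estimate_distance(-59))                   # 1.0
print(proximity_zone(estimate_distance(-79)))   # weaker signal: "far"
```

In practice the zone buckets matter more than the raw metres, since indoor RSSI fluctuates too much for precise ranging.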

What alerted me was Outsell’s note that London-based online community specialist Verve has added Apple’s iBeacon technology to its Community Panel app, allowing it to track individual members as they travel into and around stores fitted with the iBeacon device. The report, from “MrWeb”, is firmly in the market research space. This is very much a retailer’s app; it tracks the device in detail through a store, identifying where the user spends time – and how long they stay there – and possibly triggering instant marketing surveys on that basis.

Verve is a newish (2008) company. They describe themselves as “The community panel for research”. Their business is the creation of community panels, acting as consultants to companies needing consumer-focussed research. There’s no indication, therefore, of what incentives are offered to users to join panels; but one might assume instant offers would be the least of it. There is some client information in their “About Us” section (but one client is T-Mobile, which hasn’t existed independently since around the time Verve were formed, so one wonders …).

Apple’s developer website suggests a range of applications:

From welcoming people as they arrive at a sporting event to providing information about a nearby museum exhibit, iBeacon opens a new world of possibilities for location awareness, and countless opportunities for interactivity between iOS devices and iBeacon hardware

A link will take you through to a video from the 2014 Worldwide Developers Conference. This is awkward to get at: unless you’re using Safari on a recent MacOS you will need to download the file to play it. But it’s worth it; it takes you on a journey from existing RF triangulation, adding motion sensors when indoors and out of effective range, to the new beacon-based technology. And on the way it suggests more user-oriented applications, such as finding your way round Heathrow Airport, or through an unfamiliar hospital on a family visit. Watch about the first 15 minutes, before it routes to coding stuff for developers.

Technically interesting: a new twist on location services. Practically useful: but watch out (as always) for what it may do to your privacy. As they say: enjoy!

• iOS: understanding iBeacon, Apple
• iBeacon for Developers, Apple Developer website
• Verve Adds iBeacon Tech to Panel App, MrWeb Daily Research News Online, 5 Mar 2015
• Verve: community panel research
• Taking Core Location Indoors, Nav Patel, Apple WWDC, June 2014. Page down to find the expanded link

Turing Lecture 2015: The Internet Paradox (links updated)

Following a move, I’m no longer close enough to London to easily attend the BCS and IET’s prestige Turing lecture in person. So this year, for the first time, I will be attending online.

Robert Pepper is VP Global Technology Policy at Cisco. His topic: The Internet Paradox: How bottom-up beat(s) command and control. The publicity promises “a lively discussion on how the dynamics of technology policy and largely obscure decisions significantly shaped the Internet as the bottom-up driver of innovation we know today … Dr. Pepper will cover the next market transition to the Internet of Everything and the interplay between policy and technology and highlighting early indicators of what the future may hold for the Internet.”

I’m expecting a good objective discussion. As I learned many years ago, listening to Peter Cochrane when he was head of BT’s research centre, those who provide technical infrastructure don’t have a reason to hype up the different services which will run on it. Quite the opposite: they need to assess investment to satisfy demand, but not exceed it. Let’s see what we see. I’ll update this blog as we go, and probably abbreviate it tomorrow.

Starting on time: Liz Bacon, BCS President, is on stage. An unexpected extra: Daniel Turing, Alan Turing’s nephew, is introducing the Turing Trust with a mention of The Imitation Game, the Turing film, and of the BCS’s role in rebuilding Turing’s codebreaking machine (“the bombe”). The Trust recycles used computers to less well-off countries. In our move last year, I passed quite a lot of old equipment to Recycle-IT, who ethically re-use or dispose of un-reusable kit.

Now the main speaker (bio online). He describes himself as a “recovering regulator”; regulation is the intersection of policy and technology. Big iron to nano-compute, and we haven’t even seen the Apple Watch yet! This (and the cost/power changes) drives decentralisation of computing. Alongside, 1969: 4 “internet” locations (packet switched) on the west coast. By 1973, extended outside continental USA (London, Hawaii). 1993: global.

1994-5 the US Government outsourced (privatised) the network. NSF had been created. Restrictions were dropped to permit commercial use; and other governance was created. In the diagram, the biggest nodes (most traffic) are Google and Facebook; but China is coming up fast!

An alternative view: in stages. 1: connectivity (email, search); 2: networked economy; 3: immersive. 99% of the world, though, is still unconnected. 1,000 devices with IP addresses in 1984; forecast 20bn by 2020, or 50bn if you include non-IP such as RFID chips. The Internet of Everything will encompass people, processes, data and things. Such as, by 2018, four IP modules on each of 256 million connected cars. Such as sensor clothing for athletes. I have a 1986 news clip from MIT Media Lab about the prototypes for exactly this. The quote was: “Your shoes may know more about you than your doctor does”.

Things create data which, through process, can positively affect people. But only 0.5% of data is being analysed for insights! There’s an example from nutrition. Take a photo of a product in the supermarket, and see if it’s appropriate (for example, no alcohol with your prescription). Or the “Proteus pill” to help with older people’s medication, which the FDA has already approved. Or the Uber cab app.

So that’s the technology. Now, on to policy and governance.

Internet governance developed bottom-up and is not centralised; it’s a multi-stakeholder global ecosystem of private companies, governments (lots of them!), intergovernmental bodies, providers, researchers, academics and others. There’s a diagram of those actually involved, which will be quite useful when I can retrieve it readably. The first RFC came from ARPAnet in 1969. The first IETF met in 1986. ITU’s World Conference in 2012 saw proposals from some member states to regulate the Internet, and these were rejected. In 2014 the (US Dept of Commerce) proposal is to transition IANA to become a multi-stakeholder global body, so that the US finally cedes control of the network it inaugurated.

Now: as many of us know, the international standards process we currently have is done by consensus and can take years. Contrariwise, the IETF works by “rough consensus and running code” (everlasting beta). Much faster. Based on RFCs that come in, and with a combination of online and face-to-face meetings. There are NO VOTES (Quakerism works in a similar way); “rough consensus” in IETF is assessed by hum!

Robert shows a slide of a “Technology Hourglass” (citing Steve Deering, 2001; Deering is also a Cisco person. I can’t find the actual reference). IP, at the centre, is in essence the controlling/enabling standard. Above (applications) and below (infrastructure) there can be innovation and differentiation. (My comment: in the same way, both 19th century rolling stock and modern trains can run on today’s network.) The suggestion: it’s a martini glass because at the top there’s a party going on!

There’s no need to ask permission to innovate! This is the Common Law approach: you can do anything that’s not prohibited. The UK has almost 1.5 million people working in this area. They are here because of Common Law: European countries have the reverse (you need permission). The information economy now dominates the previous waves of service, industry and agriculture.

Internet is a General Purpose Technology, like printing and transport and the telephone. Other things are built on it. Increasing broadband provision links to growth: this is not correlational, it is causal. Digital-technology innovation drives GDP growth in mature economies (McKinsey); the impact is on traditional sectors enabled by the digital.

Third: the paradox. There’s decentralisation of compute, to individuals, to nanodevices, and to stakeholders. But right now, governments want to reverse this approach and take control; to re-create silos, have forced localisation of standards, content and devices. This is already the case with some classes of data in some countries.

The issues: (1) extending connectivity to those who are not connected; (2) safety, security and privacy – where there clearly is a role for government, but be clear that these are not just internet issues. Others appear on a slide about the Internet of Everything. Some governments are well-intentioned but not well informed; others, more dangerously, are the reverse. And old-tech assumptions (how you charge for phone service, for example) don’t match the new realities; the product is connectivity (not voice).

Swedish study: if you can’t transfer data, you can’t trade (nor have global companies). Localisation of data will impact severely on the global economy. Note: Economist Intelligence Unit looked at some proposals; 90% of the authoritarian regimes voted for new internet regulations on a multilateral basis, 90% of democracies against. Enough! We are at a crossroads where the Net could take either direction, and they are not equal.

Final quote, from Niels Bohr: “How wonderful that we have met with a paradox. Now we have some hope of making progress!”

I’m not going to try and capture Q&A. Heading over to Twitter. Watch the webcast; I’ll post the URL in an amendment when it’s up on the IET website.

Has it been an objective discussion? In one sense yes. But in another, Robert Pepper clearly has a passionate belief in the model of governance which he is promoting. What’s been shared is experience, insight and vision. Well worth a review.

• BCS/IET Turing Lecture 2015: online report (BCS); or view the webcast replay from The IET
• Proteus Digital Health, including a video on their ingestible sensor
• Watching the Waist of the Protocol Hourglass, Steve Deering, seminar 18 Jan 1998 at Carnegie Mellon University (abstract only)
• Turing Trust
• Recycle-IT (don’t be confused; other organisations with similar names exist on the web)

LinkedIn in the news (and its hidden resources)

Two media notes from LinkedIn this week: an enterprise which I always take an interest in because, as well as being a user, I visited them in Silicon Valley some years ago.

Through Outsell, which is a media analyst and (among other things) monitors analyst firms, I was connected to an article on VB which covers a LinkedIn tool called Gobblin. It’s been developed to gobble up, and improve LinkedIn’s use of, the wide range of sources which it uses. With many different inputs to reconcile (a task I’ve done bits of, on a much smaller scale, in the past), the development is clearly driven by necessity.

VB calls it “data ingestion software”. The interesting thing is that LinkedIn doesn’t treat these kinds of developments as proprietary. So the announcement explains that the software will be released, available to all kinds of other enterprises with similar needs, under an open-source licence.
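LinkedIn’s engineering write-up describes Gobblin as a chain of pluggable stages: extract from a source, convert to a common form, quality-check, then write. A toy Python sketch of that pipeline shape, with invented stage functions rather than Gobblin’s real API:

```python
def run_pipeline(records, extract, convert, passes_quality, write):
    """Push raw records through ingestion stages, dropping ones that fail QA."""
    written = 0
    for raw in records:
        record = convert(extract(raw))
        if passes_quality(record):
            write(record)
            written += 1
    return written

# Toy stages: parse CSV-ish lines into profile records; reject empty names
sink = []
count = run_pipeline(
    records=["alice,engineer", "bob,analyst", ",unknown"],
    extract=lambda line: line.split(","),
    convert=lambda fields: {"name": fields[0], "role": fields[1]},
    passes_quality=lambda rec: bool(rec["name"]),
    write=sink.append,
)
print(count, sink)  # 2 records written; the nameless one was rejected
```

The value of the design is that each stage is swappable per source, which is exactly what a site reconciling many different inputs needs.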

Almost the same day, Outsell also flagged a report that LinkedIn is expanding its reach to embrace younger members (high-school students, in US terms) and will provide a specific capability for higher education institutions to promote themselves. This will, of course, increase the data ingestion requirement.

Interestingly, I had to use Google to find LinkedIn’s press release archive; there’s no link to corporate information on the regular user page so far as I can see. And there are no press releases showing at the moment related to either of these news items. However, via Twitter, I found a discussion of Gobblin from analyst GigaOM with, in turn, a link to another “hidden” section of the LinkedIn website: LinkedIn Engineering. That’s the primary source and it has diagrams and a useful discussion of the analysis and absorption of unstructured “big data”. Interesting to me, because I cut my database teeth on text databases when I moved from University computing to enterprise IT.

When I visited LinkedIn, on a Leading Edge Forum study tour, they were still a start-up and it wasn’t clear whether they had a viable business model or met a real need. It was their presentation then which decided me to sign up. Well, a good ten years on the company is still not in profit although revenue, in the last quarterly results, had increased by almost half year-on-year. The business model is still standing, at least.

• LinkedIn
• LinkedIn details Gobblin …, VB News, 25 Nov 2014
• LinkedIn expands for high school students, universities, Monterey Herald Business, 19 Nov 2014
• LinkedIn explains its complex Gobblin big data framework, GigaOM, 26 Nov 2014
• Gobblin’ Big Data With Ease, Lin Qiao (Engineering Manager), LinkedIn Engineering, 25 Nov 2014
• LinkedIn Announces Third Quarter 2014 Results, LinkedIn press release, 20 Oct 2014
• Look for LinkedIn information here: Press Center; and Engineering