Turing Lecture 2015: The Internet Paradox

Following a move, I’m no longer close enough to London to easily attend the BCS and IET’s prestige Turing lecture in person. So this year, for the first time, I will be attending online.

Robert Pepper is VP Global Technology Policy at Cisco. His topic: The Internet Paradox: How bottom-up beat(s) command and control. The publicity promises “a lively discussion on how the dynamics of technology policy and largely obscure decisions significantly shaped the Internet as the bottom-up driver of innovation we know today … Dr. Pepper will cover the next market transition to the Internet of Everything and the interplay between policy and technology, highlighting early indicators of what the future may hold for the Internet.”

I’m expecting a good objective discussion. As I learned many years ago, listening to Peter Cochrane when he was head of BT’s research centre, those who provide technical infrastructure don’t have a reason to hype up the different services which will run on it. Quite the opposite: they need to assess investment to satisfy demand, but not exceed it. Let’s see what we see. I’ll update this blog as we go, and probably abbreviate it tomorrow.

Starting on time: Liz Bacon, BCS President, is on stage. An unexpected extra: Daniel Turing, Alan Turing’s nephew, is introducing the Turing Trust with a mention of The Imitation Game, the Turing film, and of the BCS’s role in rebuilding Turing’s codebreaking machine (“the Bombe”). The Trust recycles used computers to less well-off countries. In our move last year, I passed quite a lot of old equipment to Recycle-IT, who ethically re-use kit or responsibly dispose of what can’t be re-used.

Now the main speaker (bio online). He describes himself as a “recovering regulator”; regulation is the intersection of policy and technology. Big iron to nano-compute – and we haven’t even seen the Apple Watch yet! This (and the changes in cost and power) drives decentralisation of computing. Alongside that: in 1969 there were four “internet” locations (packet-switched), all on the US west coast. By 1973 it had extended outside the continental USA (London, Hawaii). By 1993: global.

1994–95: the US Government outsourced (privatised) the network; NSFNET had been created in the meantime. Restrictions were dropped to permit commercial use, and other governance was created. In the diagram, the biggest nodes (most traffic) are Google and Facebook; but China is coming up fast!

An alternative view, in stages. Stage 1: connectivity (email, search); stage 2: the networked economy; stage 3: immersive experiences. 99% of things in the world, though, are still unconnected. There were 1,000 devices with IP addresses in 1984; the forecast is 20 billion by 2020, or 50 billion if you include non-IP devices such as RFID chips. The Internet of Everything will encompass people, processes, data and things. Such as, by 2018, four IP modules on each of 256 million connected cars. Or sensor clothing for athletes: I have a 1986 news clip from the MIT Media Lab about prototypes for exactly this. The quote was: “Your shoes may know more about you than your doctor does”.

Things create data which, through process, can positively affect people. But only 0.5% of data is being analysed for insights! There’s an example from nutrition. Take a photo of a product in the supermarket, and see if it’s appropriate (for example, no alcohol with your prescription). Or the “Proteus pill” to help with older people’s medication, which the FDA has already approved. Or the Uber cab app.

So that’s the technology. Now, on to policy and governance.

Internet governance developed bottom-up and is not centralised; it’s a multi-stakeholder global ecosystem of private companies, governments (lots of them!), intergovernmental bodies, providers, researchers, academics and others. There’s a diagram of those actually involved, which will be quite useful when I can retrieve it readably. The first RFC came from ARPAnet in 1969. The first IETF met in 1986. ITU’s World Conference in 2012 saw proposals from some member states to regulate the Internet, and these were rejected. In 2014 the US Department of Commerce proposed transitioning IANA to a multi-stakeholder global body, so that the US finally cedes control of the network it inaugurated.

Now: as many of us know, the international standards process we currently have is done by consensus and can take years. Contrariwise, the IETF works by “rough consensus and running code” (everlasting beta). Much faster. It is based on RFCs that come in, with a combination of online and face-to-face meetings. There are NO VOTES (Quakerism works in a similar way); “rough consensus” in the IETF is assessed by hum!

Robert shows a slide of a “Technology Hourglass” (citing Steve Deering, 2001; Deering is also a Cisco person, but I can’t find the actual reference). IP, at the centre, is in essence the controlling/enabling standard. Above it (applications) and below it (infrastructure) there can be innovation and differentiation. (My comment: in the same way, both 19th-century rolling stock and modern trains can run on today’s rail network.) The suggestion: it’s really a martini glass, because at the top there’s a party going on!

There’s no need to ask permission to innovate! This is the Common Law approach: you can do anything that’s not prohibited. The UK has almost 1.5 million people working in this area, and they are here because of Common Law; continental European countries have the reverse (you need permission). The information economy now dominates the previous waves of service, industry and agriculture.

The Internet is a General Purpose Technology, like printing, transport and the telephone: other things are built on it. Increased broadband provision is linked to growth; this is not correlational, it is causal. Digital-technology innovation drives GDP growth in mature economies (McKinsey); the impact is on traditional sectors enabled by digital technology.

Third: the paradox. There’s decentralisation of computing, to individuals, to nanodevices, and to stakeholders. But right now, some governments want to reverse this approach and take control: to re-create silos and to force localisation of standards, content and devices. This is already the case with some classes of data in some countries.

The issues: (1) extending connectivity to those who are not connected; (2) safety, security and privacy – where there clearly is a role for government, though be clear that these are not just internet issues. Others appear on a slide about the Internet of Everything. Some governments are well-intentioned but not well informed; others, more dangerously, are the reverse. And old-tech assumptions (how you charge for phone service, for example) don’t match the new realities; the product is connectivity, not voice.

A Swedish study: if you can’t transfer data, you can’t trade (nor have global companies). Localisation of data will impact severely on the global economy. Note: the Economist Intelligence Unit looked at some proposals; 90% of authoritarian regimes voted for new internet regulations on a multilateral basis, and 90% of democracies voted against. Enough! We are at a crossroads where the Net could take either direction, and they are not equal.

Final quote, from Niels Bohr: “How wonderful that we have met with a paradox. Now we have some hope of making progress!”

I’m not going to try and capture Q&A. Heading over to Twitter. Watch the webcast; I’ll post the URL in an amendment when it’s up on the IET website.

Has it been an objective discussion? In one sense yes. But in another, Robert Pepper clearly has a passionate belief in the model of governance which he is promoting. What’s been shared is experience, insight and vision. Well worth a review.

Links:
• BCS/IET Turing Lecture 2015 (replay link to be added when available)
• Proteus Digital Health, including a video on their ingestible sensor
• Watching the Waist of the Protocol Hourglass, Steve Deering, seminar 18 Jan 1998 at Carnegie Mellon University (abstract only)
• Turing Trust
• Recycle-IT (don’t be confused; other organisations with similar names exist on the web)

Embedding a blog in a website

I’ve just posted an update to the website for the Lewes Passion Play. Editing HTML for a news panel has served the purpose for several years, but with a performance in less than three months there will be more going on. So I wanted to embed a blog feed, as this will enable more people to update the news directly – preferably a Google Blogger blog, as this is easier for people to access, and at least one potential contributor already uses Google.

Quite a palaver, and one or two dead ends. For a start I didn’t want to go down the <iframe> route and embed the entire blog page, because that would confusingly duplicate menu items and links which are required for the free-standing blog.

So I started with WordPress, since that’s where this blog is. And WordPress does have a built-in embed code generator. But here I learned the difference between a timeline and a full embed: WordPress delivers just the first part of each post, and readers have to click through to get the whole thing. So, sorry, not what I wanted.

Online reports suggested using Tumblr. It looked promising, but in the end the interface didn’t look easily usable for non-specialists (my potential contributors). And I got into a bind, because I lost the password and (unlike most similar sites) Tumblr won’t just send a reset link to the email they have on record for the account.

But I discovered a great service called feed2js. It takes an RSS feed, which Blogger delivers easily, and creates a JavaScript embed which delivers the complete articles. Better still, the generated markup carries CSS classes so you can style it (hint: if this is only going on one page of your site, create a separate style sheet and link it just to that page). Yes, there are one or two niggles, but it works and I’m pleased with the result! A sketch of the pattern follows.
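Here’s the shape of the embed, as a minimal sketch. The Blogger address is a placeholder and the query parameters are illustrative of feed2js’s builder form; generate the real snippet with the builder on feed2js.org rather than copying this.

    <!-- Page-specific stylesheet (the hint above): link it only from
         the news page, not from the site-wide template. -->
    <link rel="stylesheet" href="news-feed.css">

    <!-- The feed2js builder turns your feed URL into a script tag.
         Blogger publishes an RSS feed at .../feeds/posts/default?alt=rss.
         Parameter names (src, num, desc) are assumptions for illustration. -->
    <script type="text/javascript"
      src="https://feed2js.org/feed2js.php?src=https%3A%2F%2Fexample.blogspot.com%2Ffeeds%2Fposts%2Fdefault%3Falt%3Drss&num=5&desc=1">
    </script>

    <!-- Fallback for readers without JavaScript. -->
    <noscript>
      <a href="https://example.blogspot.com/">Read the news blog directly</a>
    </noscript>

The generated markup wraps the feed in CSS classes (rss-box, rss-item and friends, if I recall their style guide correctly), and those classes are what the page-specific stylesheet targets.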

Links:
• Lewes Passion Play, and see the native Blog
• feed2js.org
• Tumblr
• Blogger feed URLs, Blogger help

How complex can it be to open a new savings account?

We’ve recently gone through the exercise of opening savings accounts, looking for online instant-access accounts with something more than a derisory rate of interest. The exercise has been instructive and at times extraordinarily frustrating. Terms and conditions varied from a couple of pages to around forty. It’s worth sharing a few observations which relate, it seems to me, to pseudo-security and to not thinking from the customer’s perspective.

There was one genuine complication: we have recently moved house. Online identity confirmation uses electoral registers, so we don’t show up, and most providers therefore asked for some form of additional confirmation. I don’t have a problem with that, but some make it easy and some don’t!

I’ll name one provider: Virgin Money. Their online process ran like clockwork, their checks were easily completed, and we were up and running in better than even time. The documentation was brief and a model of clarity. And, since they provide the account with an “ordinary” sort code and account number, the initial deposit could be made easily by the third party who was holding our funds.

It’s a pity the others couldn’t take a leaf out of Virgin’s book.

Most of them asked for paper documentation, which is fair enough: typically a certified copy of a passport and a driving licence would do. Certification, like a passport photo, could be done by pretty much any professional: but our first attempt, asking our own bank to do it, met with a refusal. They will only do it for their own products – not even as a service to their own customers. The Post Office will do it, for a fee, which is a good solution if you’re new to an area and haven’t yet acquired a wide circle of professional friends. One provider, linked to a major supermarket (one which is somewhat in the news at the moment), wouldn’t even tell us what documents they would ask for until the account had been opened and the initial deposit made. Some were quite quick to send postal correspondence, others much slower. Access codes of course also arrived in the post: fair enough, I count that as good practice.

Then there’s the “linked account” issue. Many savings providers, especially the ones that aren’t clearing banks, require that you nominate a “linked” bank account which must already exist in your name. Some insist that you sign a direct debit in their favour from this account, so you’re not transferring money to them; they’re claiming it off you and you’re subject to their processes. I guess this may avoid the limit which most banks quite properly put on online transfers.

And the rules vary. Some will only accept deposits from this linked account. Some will only pay out to it. Some will only pay interest into it, and some will only add interest to the deposit. All these arcane rules get in the way of what you actually want to do, which is to deposit a sum of money and earn interest.

Third, one account had persistent problems with the login sequence using Internet Explorer on Windows 8 – hardly an uncommon platform; Firefox on Mac was fine! With another provider, we persistently failed to get to the starting gate on the online system at all, even after three separate interactions with their tech helpdesk. Guess what: they didn’t get the business.

So don’t ever believe a provider which says its deposit account takes only half an hour to set up. For a start, do make sure you read the T&Cs, and that you can live with how you will be able to deposit money and get it back (including on account closure). Expect to spend up to an hour reading the T&Cs, and another hour working through the setup process. Expect the security checks, other confirmations and postal correspondence to take at least a week and possibly two.

But here’s the key question. If Virgin can make it quick, easy and efficient – and yet, presumably, secure and compliant – why does any other organisation have to make it so complex and frustrating? IT people: don’t let your organisation swamp your interface work with unnecessary complexity!

Links (just one this week):
• Virgin Money: Instant Access e-Saver. See how simple it is!

LinkedIn in the news (and its hidden resources)

Two media notes about LinkedIn this week – an enterprise I always take an interest in because, as well as being a user, I visited them in Silicon Valley some years ago.

Through Outsell, which is a media analyst and (among other things) monitors analyst firms, I was pointed to an article on VB which covers a LinkedIn tool called Gobblin. It’s been developed to gobble up, and improve LinkedIn’s use of, the wide range of data sources the company draws on. With many different inputs to reconcile (a task I’ve done bits of, on a much smaller scale, in the past), the development is clearly driven by necessity.

VB calls it “data ingestion software”. The interesting thing is that LinkedIn doesn’t treat these kinds of developments as proprietary. So the announcement explains that the software will be released, available to all kinds of other enterprises with similar needs, under an open-source licence.

Almost the same day, Outsell also flagged a report that LinkedIn is expanding its reach to embrace younger members (high-school students, in US terms) and will provide a specific capability for higher education institutions to promote themselves. This will, of course, increase the data ingestion requirement.

Interestingly, I had to use Google to find LinkedIn’s press release archive; there’s no link to corporate information on the regular user page so far as I can see. And there are no press releases showing at the moment related to either of these news items. However, via Twitter, I found a discussion of Gobblin from analyst GigaOM with, in turn, a link to another “hidden” section of the LinkedIn website: LinkedIn Engineering. That’s the primary source and it has diagrams and a useful discussion of the analysis and absorption of unstructured “big data”. Interesting to me, because I cut my database teeth on text databases when I moved from University computing to enterprise IT.

When I visited LinkedIn, on a Leading Edge Forum study tour, they were still a start-up and it wasn’t clear whether they had a viable business model or met a real need. It was their presentation then which decided me to sign up. Well, a good ten years on the company is still not in profit although revenue, in the last quarterly results, had increased by almost half year-on-year. The business model is still standing, at least.

Links:
• LinkedIn
• LinkedIn details Gobblin …, VB News, 25 Nov 2014
• LinkedIn expands for high school students, universities, Monterey Herald Business, 19 Nov 2014
• LinkedIn explains its complex Gobblin big data framework, GigaOM, 26 Nov 2014
• Gobblin’ Big Data With Ease, Lin Qiao (Engineering Manager), LinkedIn Engineering, 25 Nov 2014
• LinkedIn Announces Third Quarter 2014 Results, LinkedIn press release, 20 Oct 2014
• Look for LinkedIn information here: Press Center; and Engineering

Master Data Management: sources and insights

Tomorrow I will be facilitating my last Corporate IT Forum event. After five years or so I’m standing down from the team, having valued the Forum first as a member and then, since my first retirement, as part of the team. Tomorrow’s event is a webinar, presenting a member’s case study on their journey with Master Data Management (MDM).

There was a phase of my career when I was directly concerned with setting up what we’d now call master data for a global oil company. We were concerned to define the entities of interest to the enterprise, so that when systems (databases and the associated applications) were set up to hold live data and answer day-to-day or strategic questions, we could avoid the confusions that so easily arise. Everyone thinks they know what a particular entity is. It ain’t necessarily that simple.

A couple of examples.

When we began the journey, we thought we’d start with a simple entity: Country. There are fewer than a couple of hundred countries in the world. We needed to know which country owned, licenced and taxed exploration and production. And everyone knows what a country is, don’t they?

Well, no. Just from our own still-almost-united islands: a simple question. Is Scotland (topically) a country? Is the Isle of Man? Is Jersey? In all those cases, there are some areas (e.g. foreign policy) where the effective answer is no; they are part of the single entity the United Kingdom. But in others (e.g. tax, legal systems, legislature) they are quite separate. And of course the list of countries is not immutable.

So: no single definitive list of countries. No standard list of representative codes either: again, do we use GB or UK? Do we use international vehicle country codes, or Internet domain codes, or …? What codes would be used in data coming in from outside? And finally: could we find an agreed person or function within the Company who would take responsibility for managing and maintaining this dataset, and whose decisions would be accepted by everyone with an interest and their own opinions?
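To make that concrete, here is a minimal sketch of the kind of cross-scheme lookup a master data set has to support. Everything in it is illustrative: the master key, the scheme names and the function are hypothetical choices of mine, not anything from the project.

    // Hypothetical master record: one entry per country, with the aliases
    // used by each external coding scheme we might receive data in.
    const countryMaster = [
      {
        masterId: "GBR",            // our agreed internal master key
        name: "United Kingdom",
        aliases: {
          iso3166alpha2: "GB",      // ISO 3166-1 alpha-2 code
          internetDomain: "uk",     // Internet country domain
          vehicleCode: "GB",        // international vehicle code (at the time)
        },
      },
      // ... a couple of hundred more, each needing an agreed owner
    ];

    // Normalise an incoming code from a named scheme to the master key.
    function toMasterId(scheme, code) {
      const hit = countryMaster.find(function (c) {
        const alias = c.aliases[scheme];
        return alias !== undefined && alias.toLowerCase() === code.toLowerCase();
      });
      return hit ? hit.masterId : null;   // null: refer to the data owner!
    }

    console.log(toMasterId("internetDomain", "UK")); // "GBR"
    console.log(toMasterId("iso3166alpha2", "UK"));  // null – not a valid ISO code

The code is trivial; the hard part, as I say above, is agreeing the list, the aliases and the owner.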

And talking of data coming in from outside: I carried out a reconciliation exercise between two external sources of data on exploration activities in the UK North Sea. You’d think that would be quite well defined: the geological provinces, the licence blocks, the estimates of reserves and so on. Record keeping in the UK would surely be up to the job.

But no: the two sources didn’t even agree on the names and definitions of the reservoirs. Bringing the data from these sources together was going to be a non-trivial task requiring geological and commercial expertise.

Then again, we went through a merger and discovered that two companies could allocate responsibility for entities (and for the data which represented them) quite differently within their organisations.

So: this is a well-developed topic in information systems. Go back to a Forrester blog from 2012: analyst Michele Goetz maintains forcefully that MDM is not about providing (in some IT-magic way) a Single Source of Truth. There ain’t no such animal. MDM is a fundamental tool for reconciling different data sources, so that the business can answer useful questions without being confused by different people who think they are talking about the same thing but aren’t, really.

It may be a two-year-old post, but it’s still relevant, and Michele Goetz is still one of Forrester’s lead analysts in this area. Forrester’s first-ever Wave for MDM solutions came out in February this year; it’s downloadable from some of the leading vendors (such as SAP or Informatica). There’s also a recent Wave on Product Information Management, tagged “MDM on business terms”, which might be worth a look too. Browse for some of the other material.

Gartner have a toolkit of resources. Their famed Magic Quadrant exists in multiple versions, e.g. for Product Information and for Customer Data. I’m unsure how the principles of MDM vary between domains, so (without studying the reports) I’m not clear why they are separated. You might do better with the MDM overview, which also dates from 2012. You will find RFP templates, a risk framework, and market guides. Bill O’Kane and Marcus Collins are key names. For Gartner subscribers, a good browse and an analyst call will be worthwhile.

Browse more widely too. Just one caution: MDM these days also means Mobile Device Management. Don’t get confused!

Links:
• Master Data Management Does Not Equal The Single Source Of Truth, Michele Goetz, Forrester blog, 26 Oct 2012
• The Forrester Wave™: Master Data Management Solutions, Q1 2014, 3 Feb 2014 (download from Informatica; link at foot of page)
• PIM: MDM on Business Terms, Michele Goetz, Forrester, 6 Jun 2014
• Master Data Management, Marcus Collins, Gartner, 9 Jul 2012

Benefits realisation: analyst insight

I’m facilitating an event tomorrow on “Optimising the benefits life cycle”. So as always I undertook my own prior research to see what the mainstream analysts have to offer.

Forrester was a disappointment. “Benefits Realization” (with a z) turns up quite a lot, but the research is primarily labelled “Lead to Revenue Management” – that is, it’s about sales. There is some material on the wider topic, but it dates back several years or more. Though it’s always relevant to remember Forrester’s elevator project pitch from Chuck Gliedman: we are doing A to make B better, as measured by C, which is worth X dollars (pounds, euros …) to the organisation.

There is a lot of material from both academic researchers and organisations like PMI (Project Management Institute). But in the IT insight market, there seems to be remarkably little (do correct me …) except that the Corporate IT Forum, where I’ll be tomorrow, has returned to the issue regularly. Tomorrow’s event is the latest in the series. The Forum members clearly see this as important.

But so far as external material is concerned, this post turns into a plug for a recent Gartner webinar by Richard Hunter, who (a fair number of years ago) added considerable value to an internal IT presentation I delivered on emerging technologies for our enterprise. I’m not going to review the whole presentation, because it’s on open access among Gartner’s on-demand webinars. But to someone who experienced the measurement-oriented focus of a Six Sigma-driven IT team, it’s no real surprise that Richard’s key theme is to identify and express the benefits before you start: in business terms, not technology-oriented language, and with an expectation that you will know how to measure and harvest the benefits. It’s not about on-time-on-budget; it’s about the business outcome. Shortening a process cycle from days to hours; reducing the provision for returns; and so on.

If this is your topic, spend an hour reviewing Richard’s presentation (complete with family dog in the background). It will be time well spent.

Links:
• Getting to Benefits Realization: What to Do and When to Do It, Richard Hunter, Gartner, 7 Aug 2014 (go to Gartner Webinars and search for Benefits Realization)
• Corporate IT Forum: Optimising the Benefits Lifecycle (workshop, 16 Sep 2014)

Analyst Directory update

It’s a long time since the InformationSpan analyst blogs index was updated – not since February. To be fair, I had a look in May, but there were too few changes to be significant. However, there’s now enough to report, and the index has been thoroughly reviewed and updated.

First, Gartner: a handful of new analysts have appeared. The main comments, though, relate to past acquisitions.

I’ve finally removed almost all references to AMR, but in true Gartner fashion there are some inconsistencies. If you look at Gartner’s Research marketing page, there is of course Gartner for Supply Chain Professionals, created out of the former AMR service. All traces of AMR seem to have disappeared – until you also find the Gartner for Enterprise Supply Chain Leaders service. The flyer for this service is headed “AMR Enterprise Supply Chain Leaders” and is replete with references to AMR services. It’s dated 2010, just after the acquisition, but it’s still on the system. I did not find any other reference to a service called Gartner for Enterprise Supply Chain Leaders.

Burton Group services have also been fully absorbed; most of the Burton analysts have left, the IT1 tag seems to have disappeared, and one of the remaining legacy blogs has become inaccessible. However, six Burton blogs can still be found, and I’ve discovered there are also TypePad profiles linked to them. There’s also still one accessible (but moribund) Gartner IT1 blog, and a fair sprinkling (as always) of blogs left over from other analysts who have left.

There have been more changes to the Forrester page. First, perhaps most significantly: Forrester seem to have shed their Business Technology tag. It was a good one, but didn’t catch on; and I suppose George Colony has decided to go with the market. These services are now referred to as Technology Management.

There have, too, been some changes within Forrester’s categories. Business Process and Content & Collaboration seem to have become moribund (no new content for over two years), and a number of still-extant blog names redirect somewhere else (and have done for some time). Interestingly, within the Marketing & Product Strategy group, the Consumer Product Strategy blog, dormant since 2008, has recently acquired a new posting. Forrester seem better than Gartner at tidying up when analysts leave, but there are three or four still-extant blogs from departed analysts.

I reviewed the Others page too. I haven’t added any new analyst sources (suggestions?), but Erica and Sam Driver’s ThinkBalm content has now been lost. Charlene Li’s Altimeter Group now has a fully integrated blog section within its main website (not new, but I hadn’t noted it before) as well as personal blogs maintained by Charlene herself and some colleagues. I have, though, included Euan Semple’s The Obvious, which offers so many of us great insights and ideas. If George Colony hadn’t already grabbed Counterintuitive as his blog title, it would be a good alternative for Euan!

No Links here, but click the link at the head or right hand side of this blog to go to the InformationSpan Analyst Blogs Index.