On news, social media and responsibility

The Guardian this morning is published under a new editor. Katharine Viner takes over from Alan Rusbridger, and she takes charge of an institution which is very different from the one Rusbridger inherited from Peter Preston in 1995.

Rusbridger yesterday published a farewell to his readers: now no longer just readers, but also both members of, and contributors to, the conversations which The Guardian facilitates. In the internet age, some papers instituted paywalls: Rusbridger cites Murdoch’s Times, which claims around 280,000 daily readers. The Guardian took the opposite stance, opening up its content to an international readership. It is now the second most widely read online English-language news “paper” worldwide: around seven million people read it online. For myself, I still subscribe to the paper edition: but the smartphone app has taken over from the website as my preferred means of access when I am overseas, as I was recently. Even the BBC is not so accessible from abroad.

But the point of this post is to encourage you to read Rusbridger’s farewell in its entirety (and it’s quite long). It contains thoughtful, stimulating analysis of issues such as the place of the social web in interactive journalism – bringing forth a new role, combining journalism with the skill of forum moderation. There’s the continuing role of ethical reporting in holding people to account (including, as seen recently, its own industry peers). Illustrating the trend to online, there’s a comment that the new presses, bought when the paper changed format, were “likely to be the last we ever bought”.

He recalls The Guardian’s first website, which “didn’t fall into the trap of simply replicating online what we did in print”; in my own career I led my company’s strategy towards the Internet and the emergent World-Wide Web, and I recognise these issues. Over time the paper has developed its interactive model, opening up for response and comment from its online readership as an important part of continuous publishing.

WikiLeaks, the phone-hacking scandals, Edward Snowden and more; recognition through the Pulitzer Prize; and successes such as the curtailment of News Corporation’s monopolistic ambitions and, more recently, the shutdown of the US “phone dragnet that had secretly violated the privacy of millions of Americans every day since October 2001”. An interesting sideline: the link to this in Rusbridger’s article is dead, and I couldn’t find a recent news article; but, in the interactive Comment is Free section, there’s a discussion from the American Civil Liberties Union dating from April 2014.

I’ve scratched the surface. For those of us looking at the ethics as well as the potential of information creation and sharing – and we are all publishers now – Rusbridger’s farewell should be required reading.

Links:
• ‘Farewell, readers’: Alan Rusbridger on leaving the Guardian after two decades at the helm, The Guardian, 29 May 2015
• Obama is cancelling the NSA dragnet. So why did all three branches sign off? Jameel Jaffer, American Civil Liberties Union, in Comment is Free, The Guardian, 25 March 2014
• other references in the articles

Nepal: an IT response

As well as the straightforward humanitarian agencies involved in relief following what are now twin earthquakes in Nepal, this morning’s inbox alerted me to another important effort.

I’ve used Mapbox, in tandem with Google Maps, to provide the venues map for the Brighton Early Music Festival. Google Maps got a lot more complex at the last upgrade, and the development interface, even for a simple published map, is not so easy or friendly. Mapbox can import output from a Google map (which is where I started) and creates, to my mind, a simpler and clearer map with a more useful marker capability: the flags on the map can be numbered or lettered at will (where Google’s can only be in a simple sequence), to link to a list published alongside. With this map linked to a stand-alone Google map which provides the usual directions, search nearby and so on, I think our concert-goers have the best of both worlds.
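For anyone curious how the numbered flags are done, here is a minimal sketch using today’s Mapbox GL JS. It is an illustration only, not the festival’s actual code: the token, coordinates and venue list are placeholders.

```typescript
// Minimal sketch: numbered venue markers with Mapbox GL JS.
// Token, coordinates and venues below are placeholders, not BREMF's data.
import mapboxgl from 'mapbox-gl';

mapboxgl.accessToken = 'YOUR_MAPBOX_TOKEN'; // placeholder

const venues = [
  { label: '1', name: 'Venue One', lngLat: [-0.14, 50.82] as [number, number] },
  { label: '2', name: 'Venue Two', lngLat: [-0.13, 50.83] as [number, number] },
];

const map = new mapboxgl.Map({
  container: 'map',                          // id of the page element holding the map
  style: 'mapbox://styles/mapbox/streets-v12',
  center: [-0.137, 50.822],                  // roughly central Brighton
  zoom: 13,
});

// Each marker is a custom DOM element carrying its own label, which is
// what lets the flags be numbered (or lettered) at will.
for (const venue of venues) {
  const el = document.createElement('div');
  el.className = 'venue-marker';
  el.textContent = venue.label;
  el.title = venue.name;
  new mapboxgl.Marker({ element: el }).setLngLat(venue.lngLat).addTo(map);
}
```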

Mapbox is built on OpenStreetMap, an open-source, community-maintained mapping project. Today’s email flagged up the community’s role in providing fast-response mapping for disasters such as Nepal. The email tells me:

Within just hours of the earthquake in Nepal the Humanitarian OpenStreetMap Team (HOT) rallied the OpenStreetMap community. Over 2,000 mappers quadrupled road mileage and added 30% more buildings. We designed print maps to aid post-earthquake relief efforts, chronicled satellite imagery collection over the area, and used Turf.js to identify the hardest-hit buildings and roads.

This is the strength of Open Source as a community effort. It can mobilise people for this kind of task on a scale that a commercial organisation cannot. You don’t have to be in Nepal; the work is to digitise satellite imagery, and the Nepal project wiki can get anyone established in the team.
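The Turf.js mention deserves a note. Something like the sketch below (my own TypeScript illustration, with made-up coordinates and features, not HOT’s actual code) is how one might pick out the buildings nearest the epicentre:

```typescript
// Illustrative only: flag buildings close to the epicentre with Turf.js.
// The coordinates and the `buildings` collection are invented for the sketch.
import * as turf from '@turf/turf';

const epicentre = turf.point([84.731, 28.23]); // near Gorkha, Nepal

// In reality this would be a large FeatureCollection digitised from imagery.
const buildings = turf.featureCollection([
  turf.point([84.74, 28.24], { id: 'b1' }),
  turf.point([85.32, 27.71], { id: 'b2' }), // Kathmandu, much further away
]);

// Keep the buildings within 25 km of the epicentre as candidate "hardest-hit".
const hardestHit = buildings.features.filter(
  (b) => turf.distance(epicentre, b, { units: 'kilometers' }) <= 25
);

console.log(hardestHit.map((b) => b.properties?.id)); // ['b1']
```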

Oh, and of course the resources (particularly servers and software) come under strain. So if you are not minded to donate to the Disasters Emergency Committee or one of its agencies, perhaps you can contribute time or a donation to support OSM’s Humanitarian OSM Team in this work.

Links:
• 2015 Nepal Earthquake page from the OpenStreetMap wiki
• BREMF venues (Mapbox embedded map, with link to Google) for Brighton Early Music Festival
• Mapbox and OpenStreetMap
• Why I hate the new Google Maps, ITasITis, 17 Apr 2014

Location services move indoors: Apple’s iBeacon

An incidental headline in Outsell’s information market monitoring email brought my attention to Apple’s new iBeacon technology, announced last year.

We’ve long been used to the idea that the smart devices we carry around with us can detect nearby things of interest: for example, alerting us to an offer from a store nearby. Location services, based on GPS, on your current WiFi connection, or on triangulation from your mobile signal, do this. So can active RFID.

But indoor location is difficult. Current technology is an updated version of the old nautical dead reckoning: it notes where you are when you lose accurate GPS/cellular/WiFi positioning, and uses motion sensors to track your movement from that point.
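As a toy illustration of the dead-reckoning idea (my own sketch, with an invented stride length and headings; not Apple’s implementation), you start from the last good fix and advance the position a step at a time:

```typescript
// Toy pedestrian dead reckoning: advance from the last good fix using
// step events and a compass heading. Stride, headings and the starting
// fix are invented for the illustration.
type Position = { lat: number; lon: number };

const METRES_PER_DEG_LAT = 111_320; // rough, good enough for a sketch

function step(pos: Position, headingDeg: number, strideM = 0.75): Position {
  const rad = (headingDeg * Math.PI) / 180;
  const dNorth = strideM * Math.cos(rad);
  const dEast = strideM * Math.sin(rad);
  return {
    lat: pos.lat + dNorth / METRES_PER_DEG_LAT,
    lon: pos.lon + dEast / (METRES_PER_DEG_LAT * Math.cos((pos.lat * Math.PI) / 180)),
  };
}

// Last GPS fix before entering the building, then ten steps heading east.
let pos: Position = { lat: 51.47, lon: -0.4543 };
for (let i = 0; i < 10; i++) pos = step(pos, 90);
console.log(pos); // drifts east of the entrance; the error grows with every step
```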

iBeacon is different. It’s a nearer-proximity application based on Bluetooth detection of your smartphone. Apple says: “Instead of using latitude and longitude to define the location, iBeacon uses a Bluetooth low energy signal, which iOS devices detect.” So you need Bluetooth turned on as well as having an appropriate app loaded. This leaves you a modicum of control, I guess.
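Conceptually – and this is my own sketch, not Apple’s CoreLocation API – a beacon advertises an identifier triple, and the receiving device buckets signal strength into proximity zones (the zone names match Apple’s; the thresholds here are invented):

```typescript
// Conceptual sketch of beacon proximity - not Apple's CoreLocation API.
// A beacon advertises (uuid, major, minor); the receiver estimates proximity
// from received signal strength (RSSI). Thresholds are invented.
type Proximity = 'immediate' | 'near' | 'far' | 'unknown';

interface BeaconAdvert {
  uuid: string;  // identifies the deployment, e.g. one retail chain
  major: number; // e.g. which store
  minor: number; // e.g. which aisle or display
  rssi: number;  // signal strength in dBm; weaker means further away
}

function proximity(b: BeaconAdvert): Proximity {
  if (b.rssi > -50) return 'immediate'; // roughly within arm's reach
  if (b.rssi > -70) return 'near';      // same room or aisle
  if (b.rssi > -90) return 'far';       // detectable, but some way off
  return 'unknown';
}

const advert: BeaconAdvert = { uuid: 'store-chain-uuid', major: 12, minor: 3, rssi: -64 };
console.log(proximity(advert)); // 'near' - time to show that offer?
```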

What alerted me was Outsell’s note that London-based online community specialist Verve has added Apple’s iBeacon technology to its Community Panel app, allowing it to track individual members as they travel into and around stores fitted with iBeacon devices. The report, from “MrWeb”, is firmly in the market research space. This is very much a retailer’s app: it tracks the device in detail through a store, identifying where the user spends time – and how long they stay there – and possibly triggering instant marketing surveys on that basis.

Verve is a newish (2008) company. They describe themselves as “The community panel for research”. Their business is the creation of community panels, acting as consultants to companies needing consumer-focussed research. There’s no indication, though, of what incentives are offered to users to join panels; but one might assume instant offers would be the least of it. There is some client information in their “About Us” section (but one client is T-Mobile, which hasn’t existed independently since around the time Verve was formed, so one wonders …).

Apple’s developer website suggests a range of applications:

From welcoming people as they arrive at a sporting event to providing information about a nearby museum exhibit, iBeacon opens a new world of possibilities for location awareness, and countless opportunities for interactivity between iOS devices and iBeacon hardware.

A link will take you through to a video from the 2014 Worldwide Developers Conference (WWDC). This is awkward to get at: unless you’re using Safari on a recent Mac OS you will need to download the file to play it. But it’s worth it: it takes you on a journey from existing RF triangulation, adding motion sensors when indoors and out of effective range, to the new beacon-based technology. And on the way it suggests more user-oriented applications, such as finding your way round Heathrow Airport, or through an unfamiliar hospital on a family visit. Watch about the first 15 minutes, before it routes to coding stuff for developers.

Technically, interesting: a new twist on location services. Practically, useful: but watch out (as always) for what it may do to your privacy. As they say: enjoy!

Links:
• iOS: understanding iBeacon, Apple
• iBeacon for Developers, Apple Developer website
• Verve Adds iBeacon Tech to Panel App, MrWeb Daily Research News Online, 5 Mar 2015
• Verve: community panel research
• Taking Core Location Indoors, Nav Patel, Apple WWDC, June 2014. Page down to find the expanded link

Turing Lecture 2015: The Internet Paradox (links updated)

Following a move, I’m no longer close enough to London to attend the BCS and IET’s prestigious Turing Lecture in person. So this year, for the first time, I will be attending online.

Robert Pepper is VP Global Technology Policy at Cisco. His topic: The Internet Paradox: How bottom-up beat(s) command and control. The publicity promises “a lively discussion on how the dynamics of technology policy and largely obscure decisions significantly shaped the Internet as the bottom-up driver of innovation we know today … Dr. Pepper will cover the next market transition to the Internet of Everything and the interplay between policy and technology and highlighting early indicators of what the future may hold for the Internet”.

I’m expecting a good objective discussion. As I learned many years ago, listening to Peter Cochrane when he was head of BT’s research centre, those who provide technical infrastructure don’t have a reason to hype up the different services which will run on it. Quite the opposite: they need to assess investment to satisfy demand, but not exceed it. Let’s see what we see. I’ll update this blog as we go, and probably abbreviate it tomorrow.

Starting on time: Liz Bacon, BCS President, is on stage. An unexpected extra: Daniel Turing, Alan Turing’s nephew, is introducing the Turing Trust, with a mention of The Imitation Game, the Turing film, and of the BCS’s role in rebuilding Turing’s codebreaking machine (the “bombe”). The Trust recycles used computers for less well-off countries. In our move last year, I passed quite a lot of old equipment to Recycle-IT, who ethically re-use or dispose of un-reusable kit.

Now the main speaker (bio online). He describes himself as a “recovering regulator”; regulation is the intersection of policy and technology. From big iron to nano-compute, and we haven’t even seen the Apple Watch yet! This (and the cost/power changes) drives decentralisation of computing. Alongside: in 1969, four “internet” locations (packet-switched) on the US west coast. By 1973, extended outside the continental USA (London, Hawaii). 1993: global.

In 1994–5 the US Government outsourced (privatised) the network. NSFNET had been created; restrictions were dropped to permit commercial use; and other governance was created. In the diagram, the biggest nodes (most traffic) are Google and Facebook; but China is coming up fast!

An alternative view, in stages: 1, connectivity (email, search); 2, networked economy; 3, immersive. 99% of things, though, are still unconnected. 1,000 devices with IP addresses in 1984; a forecast 20 billion by 2020, or 50 billion if you include non-IP devices such as RFID chips. The Internet of Everything will encompass people, processes, data and things. Such as, by 2018, four IP modules on each of 256 million connected cars; or sensor clothing for athletes. I have a 1986 news clip from MIT Media Lab about the prototypes for exactly this. The quote was: “Your shoes may know more about you than your doctor does“.

Things create data which, through process, can positively affect people. But only 0.5% of data is being analysed for insights! There’s an example from nutrition. Take a photo of a product in the supermarket, and see if it’s appropriate (for example, no alcohol with your prescription). Or the “Proteus pill” to help with older people’s medication, which the FDA has already approved. Or the Uber cab app.

So that’s the technology. Now, on to policy and governance.

Internet governance developed bottom-up and is not centralised; it’s a multi-stakeholder global ecosystem of private companies, governments (lots of them!), intergovernmental bodies, providers, researchers, academics and others. There’s a diagram of those actually involved, which will be quite useful when I can retrieve it readably. The first RFC came from ARPAnet in 1969. The first IETF met in 1986. ITU’s World Conference in 2012 saw proposals from some member states to regulate the Internet, and these were rejected. In 2014 the (US Dept of Commerce) proposal is to transition IANA to a multi-stakeholder global body, so that the US finally cedes control of the network it inaugurated.

Now: as many of us know, the international standards process we currently have works by consensus and can take years. Contrariwise, the IETF works by “rough consensus and running code” (everlasting beta). Much faster. Based on RFCs that come in, and with a combination of online and face-to-face meetings. There are NO VOTES (Quakerism works in a similar way); “rough consensus” in the IETF is assessed by hum!

Robert shows a slide of a “Technology Hourglass” (citing Steve Deering, 2001 – Deering is also a Cisco person; I can’t find the actual reference). IP, at the centre, is in essence the controlling/enabling standard. Above (applications) and below (infrastructure) there can be innovation and differentiation. (My comment: in the same way, both 19th-century rolling stock and modern trains can run on today’s rail network.) The suggestion: it’s a martini glass, because at the top there’s a party going on!

There’s no need to ask permission to innovate! This is the Common Law approach: you can do anything that’s not prohibited. The UK has almost 1.5 million people working in this area. They are here because of Common Law: European countries have the reverse (you need permission). The information economy now dominates the previous waves of service, industry and agriculture.

Internet is a General Purpose Technology, like printing and transport and the telephone. Other things are built on it. Increasing broadband provision links to growth: this is not correlational, it is causal. Digital-technology innovation drives GDP growth in mature economies (McKinsey); the impact is on traditional sectors enabled by the digital.

Third: the paradox. There’s decentralisation of compute: to individuals, to nanodevices, and to stakeholders. But right now governments want to reverse this approach and take control: to re-create silos, and to force localisation of standards, content and devices. This is already the case with some classes of data in some countries.

The issues: (1) extending connectivity to those who are not connected; (2) safety, security and privacy – where there clearly is a role for government, but be clear that these are not just internet issues. Others are on a slide about the Internet of Everything. Some governments are well-intentioned but not well informed; others, more dangerously, are the reverse. And old-tech assumptions (how you charge for phone service, for example) don’t match the new realities; the product is connectivity (not voice).

A Swedish study: if you can’t transfer data, you can’t trade (nor have global companies). Localisation of data will impact severely on the global economy. Note: the Economist Intelligence Unit looked at some proposals; 90% of the authoritarian regimes voted for new internet regulations on a multilateral basis, 90% of democracies against. Enough! We are at a crossroads where the Net could take either direction, and they are not equal.

Final quote: Niels Bohr. “How wonderful that we have met with a paradox. Now we have some hope of making progress!”

I’m not going to try and capture Q&A. Heading over to Twitter. Watch the webcast; I’ll post the URL in an amendment when it’s up on the IET website.

Has it been an objective discussion? In one sense yes. But in another, Robert Pepper clearly has a passionate belief in the model of governance which he is promoting. What’s been shared is experience, insight and vision. Well worth a review.

Links:
• BCS/IET Turing Lecture 2015: online report (BCS); or view the webcast replay from The IET
• Proteus Digital Health, including a video on their ingestible sensor
• Watching the Waist of the Protocol Hourglass, Steve Deering, seminar 18 Jan 1998 at Carnegie Mellon University (abstract only)
• Turing Trust
• Recycle-IT (don’t be confused; other organisations with similar names exist on the web)

LinkedIn in the news (and its hidden resources)

Two media notes from LinkedIn this week: an enterprise which I always take an interest in because, as well as being a user, I visited them in Silicon Valley some years ago.

Through Outsell, which is a media analyst and (among other things) monitors analyst firms, I was connected to an article on VB which covers a LinkedIn tool called Gobblin. It has been developed to gobble up, and improve LinkedIn’s use of, the wide range of data sources the company draws on. With many different inputs to reconcile (a task I’ve done bits of, on a much smaller scale, in the past), the development is clearly driven by necessity.

VB calls it “data ingestion software”. The interesting thing is that LinkedIn doesn’t treat these kinds of developments as proprietary. So the announcement explains that the software will be released, available to all kinds of other enterprises with similar needs, under an open-source licence.

Almost the same day, Outsell also flagged a report that LinkedIn is expanding its reach to embrace younger members (high-school students, in US terms) and will provide a specific capability for higher education institutions to promote themselves. This will, of course, increase the data ingestion requirement.

Interestingly, I had to use Google to find LinkedIn’s press release archive; there’s no link to corporate information on the regular user page so far as I can see. And there are no press releases showing at the moment related to either of these news items. However, via Twitter, I found a discussion of Gobblin from analyst GigaOM with, in turn, a link to another “hidden” section of the LinkedIn website: LinkedIn Engineering. That’s the primary source and it has diagrams and a useful discussion of the analysis and absorption of unstructured “big data”. Interesting to me, because I cut my database teeth on text databases when I moved from University computing to enterprise IT.

When I visited LinkedIn, on a Leading Edge Forum study tour, they were still a start-up and it wasn’t clear whether they had a viable business model or met a real need. It was their presentation then which persuaded me to sign up. Well, a good ten years on, the company is still not in profit, although revenue, in the last quarterly results, had increased by almost half year-on-year. The business model is still standing, at least.

Links:
• LinkedIn
• LinkedIn details Gobblin …, VB News, 25 Nov 2014
• LinkedIn expands for high school students, universities, Monterey Herald Business, 19 Nov 2014
• LinkedIn explains its complex Gobblin big data framework, GigaOM, 26 Nov 2014
• Gobblin’ Big Data With Ease, Lin Qiao (Engineering Manager), LinkedIn Engineering, 25 Nov 2014
• LinkedIn Announces Third Quarter 2014 Results, LinkedIn press release, 20 Oct 2014
• Look for LinkedIn information here: Press Center; and Engineering

Master Data Management: sources and insights

Tomorrow I will be facilitating my last Corporate IT Forum event. After five years or so I’m standing down from the team, having valued the Forum first as a member and then, since my first retirement, as part of the team. Tomorrow’s event is a webinar, presenting a member’s case study on their journey with Master Data Management (MDM).

There was a phase of my career when I was directly concerned with setting up what we’d now call master data for a global oil company. We were concerned to define the entities of interest to the enterprise, so that when systems (databases and the associated applications) were set up to hold live data and answer day-to-day or strategic questions, we could avoid the confusions that so easily arise. Everyone thinks they know what a particular entity is. It ain’t necessarily that simple.

A couple of examples.

When we began the journey, we thought we’d start with a simple entity: Country. There are fewer than a couple of hundred countries in the world. We needed to know which country owned, licensed and taxed exploration and production. And everyone knows what a country is, don’t they?

Well, no. Just from our own still-almost-united islands: a simple question. Is Scotland (topically) a country? Is the Isle of Man? Is Jersey? In all those cases, there are some areas (e.g. foreign policy) where the effective answer is no; they are part of the single entity the United Kingdom. But in others (e.g. tax, legal systems, legislature) they are quite separate. And of course the list of countries is not immutable.

So: no single definitive list of countries. No standard list of representative codes either: again, do we use GB or UK? Do we use international vehicle country codes, or Internet domain codes, or …? What codes would be used in data coming in from outside? And finally: could we find an agreed person or function within the Company who would take responsibility for managing and maintaining this dataset, and whose decisions would be accepted by everyone with an interest and their own opinions?
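As a tiny illustration of the alias table all this implies (a TypeScript sketch; I’ve borrowed ISO 3166 codes for the canonical values, but the real master list was whatever the Company’s data owner decreed), every inbound code gets translated to the master code before it touches the database:

```typescript
// Sketch: normalise inbound country codes to one canonical master code.
// Canonical codes here borrow ISO 3166 alpha-3 purely for illustration.
const COUNTRY_ALIASES: Record<string, string> = {
  GB: 'GBR', UK: 'GBR', 'GB-SCT': 'GBR', // ISO code, common usage, subdivision
  IM: 'IMN',                             // Isle of Man: separate for tax purposes
  JE: 'JEY',                             // Jersey: likewise
};

function canonicalCountry(code: string): string {
  const canon = COUNTRY_ALIASES[code.trim().toUpperCase()];
  // Unrecognised codes go to the agreed data steward, not into the database.
  if (!canon) throw new Error(`Unrecognised country code: ${code} - refer to data steward`);
  return canon;
}

console.log(canonicalCountry('uk')); // 'GBR'
```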

And talking of data coming in from outside: I carried out a reconciliation exercise between two external sources of data on exploration activities in the UK North Sea. You’d think that would be quite well defined: the geological provinces, the licence blocks, the estimates of reserves and so on. Record-keeping in the UK would surely be up to the game.

But no: the two sources didn’t even agree on the names and definitions of the reservoirs. Bringing the data from these sources together was going to be a non-trivial task requiring geological and commercial expertise.
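For flavour, a first mechanical pass at such a reconciliation might look like the sketch below (the reservoir names are invented). Everything the code can’t match is exactly the residue that needed the geologists:

```typescript
// First-pass reconciliation of two source lists by normalised name.
// Reservoir names are invented; real matching needed expert review.
const normalise = (name: string) =>
  name.toLowerCase().replace(/[^a-z0-9]/g, ''); // drop spaces, hyphens, case

const sourceA = ['Brent North', 'Forties-Alpha', 'Piper'];
const sourceB = ['BRENT NORTH', 'Forties Alpha', 'Tartan'];

const indexB = new Map(sourceB.map((n) => [normalise(n), n]));

const matched: Array<[string, string]> = [];
const unmatched: string[] = [];
for (const a of sourceA) {
  const hit = indexB.get(normalise(a));
  if (hit) matched.push([a, hit]);
  else unmatched.push(a); // residue for the geologists
}

console.log(matched);   // [['Brent North', 'BRENT NORTH'], ['Forties-Alpha', 'Forties Alpha']]
console.log(unmatched); // ['Piper']
```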

Then again, we went through a merger and discovered that two companies could allocate responsibility for entities (and for the data which represented them) quite differently within their organisations.

So: this is a well-developed topic in information systems. Go back to a Forrester blog from 2012: analyst Michele Goetz maintains forcefully that MDM is not about providing (in some IT-magic way) a Single Source of Truth. There ain’t no such animal. MDM is a fundamental tool for reconciling different data sources, so that the business can answer useful questions without being confused by different people who think they are talking about the same thing but aren’t, really.

It may be a two-year-old post, but it’s still relevant, and Michele Goetz is still one of Forrester’s lead analysts in this area. Forrester’s first-ever Wave for MDM solutions came out in February this year; it’s downloadable from some of the leading vendors (such as SAP or Informatica). There’s also a recent Wave on Product Information Management, tagged “MDM in business terms”, which might be worth a look too. Browse for some of the other stuff.

Gartner have a toolkit of resources. Their famed Magic Quadrant exists in multiple versions, e.g. for Product Information and for Customer Data. I’m unsure how the principles of MDM vary between domains, so (without studying the reports) I’m not clear why the separation. You might do better with the MDM overview, which also dates from 2012. You will find RFP templates, a risk framework, and market guides. Bill O’Kane and Marcus Collins are key names. For Gartner subscribers, a good browse and an analyst call will be worthwhile.

Browse more widely too. Just one caution: MDM these days also means Mobile Device Management. Don’t get confused!

Links:
• Master Data Management Does Not Equal The Single Source Of Truth, Michele Goetz, Forrester blog, 26 Oct 2012
• The Forrester Wave™: Master Data Management Solutions, Q1 2014, 3 Feb 2014 (download from Informatica; link at foot of page)
• PIM: MDM on Business Terms, Michele Goetz, 6 Jun 2014
• Master Data Management, Marcus Collins, Gartner, 9 Jul 2012

Benefits realisation: analyst insight

I’m facilitating an event tomorrow on “Optimising the benefits life cycle”. So as always I undertook my own prior research to see what the mainstream analysts have to offer.

Forrester was a disappointment. “Benefits Realization” (with a z) turns up quite a lot, but the research is primarily labelled “Lead to Revenue Management” – that is, it’s about sales. There is some material on the wider topic, but it dates back several years or more. Though it’s always relevant to remember Forrester’s elevator project pitch from Chuck Gliedman: “We are doing A to make B better, as measured by C, which is worth X dollars (pounds, euros …) to the organisation.”

There is a lot of material from both academic researchers and organisations like PMI (Project Management Institute). But in the IT insight market, there seems to be remarkably little (do correct me …) except that the Corporate IT Forum, where I’ll be tomorrow, has returned to the issue regularly. Tomorrow’s event is the latest in the series. The Forum members clearly see this as important.

But so far as external material is concerned, this post turns into a plug for a recent Gartner webinar by Richard Hunter, who (a fair number of years ago) added considerable value to an internal IT presentation I delivered on emerging technologies for our enterprise. I’m not going to review the whole presentation, because it’s on open access from Gartner’s On Demand webinars. But to someone who experienced the measurement-oriented focus of a Six Sigma-driven IT team, it’s no real surprise that Richard’s key theme is to identify and express the benefits before you start: in business terms, not technology-oriented language, and with an expectation that you will know how to measure and harvest the benefits. It’s not about on-time-on-budget; it’s about the business outcome. Shortening a process cycle from days to hours; reducing the provision for returns; and so on.

If this is your topic, spend an hour reviewing Richard’s presentation (complete with family dog in the background). It will be time well spent.

Links:
• Getting to Benefits Realization: What to Do and When to Do It, Richard Hunter, Gartner, 7 Aug 2014 (go to Gartner Webinars and search for Benefits Realization)
• Corporate IT Forum: Optimising the Benefits Lifecycle (workshop, 16 Sep 2014)