Frost & Sullivan on Brexit

Continuing to track what the analysts are saying about Brexit: Frost & Sullivan (F&S) have just announced A Post Brexit View of the UK High Tech Sector, published by their Digital Transformation practice.

They point out that the technology sector accounts for around 10% of the UK’s GDP but see four key challenges. First: a brake on migration is not certain, since free movement of EU labour goes with access to the single market. But if it materialises, then F&S see UK-based technology firms struggling to find the right skills to drive businesses forward; and EU citizens currently working here may migrate out, to other parts of Europe.

Second: many firms, particularly US-based ones in our sector, have come to the UK at least in part because it provided a good gateway into the enormous European market. If this is no longer the case, will they migrate to other locations (e.g. Frankfurt, Paris, …)? This will be more of a problem if single-market access is sacrificed in favour of immigration control.

Third, as I’ve already commented: leaving the EU does not necessarily mean less red tape. F&S highlight data protection, as I did previously. “Pan-European contracts will need to be renegotiated, and IP/trademarks may require separate treatment for the UK and the EU”. We will be divorced from the creation of the regulatory environment, and F&S suggest firms may find it harder “to navigate legislation and ensure they abide with the varying rules in different countries”; and the UK will without question have less clout than it does as part of the EU.

Fourth, F&S point out that the European Investment Fund is the largest investor in UK venture capital firms. Will this funding stream remain in place? And if not, can it be replaced out of the fabled £350m per week, along with everything else?

Links:
• A Post Brexit View of the UK High Tech Sector, Frost & Sullivan, undated but publicised 14 Jul 2016; the link is to a download form (if this doesn’t work, search for the title)
• What about the EU? ITasITis, 24 Jun 2016
• An aggregation of post-referendum comments, ITasITis, 2 Jul 2016

Nepal: an IT response

As well as the straightforward humanitarian agencies involved in relief following what are now twin earthquakes in Nepal, this morning’s inbox alerted me to another important effort.

I’ve used Mapbox, in tandem with Google Maps, to provide the venues map for the Brighton Early Music Festival. Google Maps got a lot more complex at the last upgrade, and the development interface, even for a simple published map, is not so easy or friendly. Mapbox can import the output from a Google map (which was my starting point) and produces, to my mind, a simpler and clearer map with a more useful marker capability: the flags on the map can be numbered or lettered at will (where Google’s can only follow a simple sequence), to link to a list published alongside. With this map linked to a stand-alone Google map, which provides the usual directions, search nearby and so on, I think our concert-goers have the best of both worlds.
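The BREMF venues map itself was put together in Mapbox’s own editor rather than hand-coded, but for anyone curious what lettered markers look like in code, here is a minimal sketch using the Mapbox GL JS library. The access token, coordinates, venue letters and CSS class are placeholders, not the real BREMF setup.

```typescript
// Minimal sketch: a Mapbox GL JS map with lettered venue markers.
// Token, coordinates and venues below are placeholders for illustration.
import mapboxgl from "mapbox-gl";

mapboxgl.accessToken = "YOUR_MAPBOX_TOKEN"; // placeholder

const map = new mapboxgl.Map({
  container: "map",                           // id of the page's map <div>
  style: "mapbox://styles/mapbox/streets-v12",
  center: [-0.137, 50.822],                   // roughly central Brighton
  zoom: 13,
});

// Venues keyed by the letter printed in the programme list alongside the map.
const venues: Array<{ label: string; lngLat: [number, number] }> = [
  { label: "A", lngLat: [-0.141, 50.824] },
  { label: "B", lngLat: [-0.131, 50.820] },
];

for (const venue of venues) {
  const el = document.createElement("div");
  el.className = "venue-marker";   // style the letter badge in CSS
  el.textContent = venue.label;    // the letter itself, not a plain pin
  new mapboxgl.Marker({ element: el }).setLngLat(venue.lngLat).addTo(map);
}
```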

Mapbox builds on OpenStreetMap, which is an open, community-driven mapping project. Today’s email flagged up OpenStreetMap’s role in providing fast-response mapping for disasters such as Nepal. The email tells me:

Within just hours of the earthquake in Nepal the Humanitarian OpenStreetMap Team (HOT) rallied the OpenStreetMap community. Over 2,000 mappers quadrupled road mileage and added 30% more buildings. We designed print maps to aid post-earthquake relief efforts, chronicled satellite imagery collection over the area, and used Turf.js to identify the hardest-hit buildings and roads.
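I don’t know exactly how HOT applied Turf.js, but as an illustration of the kind of analysis it enables, here is a small sketch that flags building points within a set distance of a reported epicentre. The coordinates, the 50 km threshold and the building data are invented, purely to show the shape of the calculation.

```typescript
// Illustrative only: flag building points close to a quake epicentre.
// The data and the distance threshold are invented for this sketch.
import * as turf from "@turf/turf";

const epicentre = turf.point([84.73, 28.23]); // lng, lat (approximate)

// Stand-in for building centroids traced into OpenStreetMap by mappers.
const buildings = turf.featureCollection([
  turf.point([85.32, 27.72], { name: "building-1" }),
  turf.point([84.85, 28.20], { name: "building-2" }),
]);

// Buildings within 50 km of the epicentre get flagged for priority review.
const hardestHit = buildings.features.filter(
  (b) => turf.distance(epicentre, b, { units: "kilometers" }) < 50
);

console.log(hardestHit.map((b) => b.properties?.name));
```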

This is the strength of Open Source as a community effort. It can mobilise people for this kind of task on a scale that a commercial organisation cannot. You don’t have to be in Nepal; the work is to digitise satellite imagery, and the Nepal project wiki can get anyone established in the team.

Oh, and of course the resources (particularly servers and software) come under strain. So if you are not minded to donate to the Disasters Emergency Committee or one of its agencies, perhaps you can contribute time, or a donation, to support the Humanitarian OpenStreetMap Team (HOT) in this work.

Links:
• 2015 Nepal Earthquake page from the OpenStreetMap wiki
• BREMF venues (Mapbox embedded map, with link to Google) for Brighton Early Music Festival
• Mapbox and OpenStreetMap
• Why I hate the new Google Maps, ITasITis, 17 Apr 2014

Insight sector not immune: Gigaom closes

Several commentators have picked up the report that Gigaom and Gigaom Research have become insolvent and closed down.

I haven’t myself been a Gigaom user, even at the free subscription level, so I won’t attempt an analysis of what went wrong. But Outsell re-linked the report from USA Today which, although it’s not from the tech press, is a fair summary in a few paragraphs of the history of the company.

There are, it seems, no plans to file for bankruptcy protection or to re-launch. Gigaom’s tech content is still accessible on the website, but it’s not impossible that it will be removed at quite short notice. Clients especially: review, and download!

Links:
• About Gigaom, Gigaom website, 9 Mar 2015
• Tech site Gigaom closes as creditors take over assets, USA Today, 9 Mar 2015

LinkedIn in the news (and its hidden resources)

Two media notes about LinkedIn this week: a company I always take an interest in because, as well as being a user, I visited them in Silicon Valley some years ago.

Through Outsell, which is a media analyst and (among other things) monitors analyst firms, I was pointed to an article on VB which covers a LinkedIn tool called Gobblin. It’s been developed to gobble up, and improve LinkedIn’s use of, the wide range of data sources the company draws on. With many different inputs to reconcile (a task I’ve done bits of, on a much smaller scale, in the past), the development is clearly driven by necessity.

VB calls it “data ingestion software”. The interesting thing is that LinkedIn doesn’t treat these kinds of developments as proprietary. So the announcement explains that the software will be released, available to all kinds of other enterprises with similar needs, under an open-source licence.
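The LinkedIn Engineering post linked below describes the real architecture. Purely as an illustration of the extract / convert / quality-check / publish pattern that frameworks like Gobblin automate at scale, here is a toy sketch; the interfaces and names are mine, not Gobblin’s actual API.

```typescript
// Toy sketch of the extract / convert / quality-check / publish pattern.
// Interfaces and names are invented for illustration; they are not Gobblin's API.
interface SourceRecord { raw: string }
interface CanonicalRecord { id: string; payload: unknown }

interface Extractor { extract(): Promise<SourceRecord[]> }
interface Converter { convert(record: SourceRecord): CanonicalRecord }
interface Writer { write(records: CanonicalRecord[]): Promise<void> }

async function runIngestion(source: Extractor, converter: Converter, sink: Writer) {
  const raw = await source.extract();                       // pull from one feed
  const converted = raw.map((r) => converter.convert(r));   // map to a common schema
  const passed = converted.filter((r) => r.id.length > 0);  // simple quality gate
  await sink.write(passed);                                 // publish downstream
}

// Tiny in-memory run, just to show the flow end to end.
const demoSource: Extractor = { extract: async () => [{ raw: "id=42" }] };
const demoConverter: Converter = {
  convert: (r) => ({ id: r.raw.split("=")[1] ?? "", payload: r.raw }),
};
const demoSink: Writer = {
  write: async (records) => { console.log("wrote", records.length, "records"); },
};

runIngestion(demoSource, demoConverter, demoSink);
```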

Almost the same day, Outsell also flagged a report that LinkedIn is expanding its reach to embrace younger members (high-school students, in US terms) and will provide a specific capability for higher education institutions to promote themselves. This will, of course, increase the data ingestion requirement.

Interestingly, I had to use Google to find LinkedIn’s press release archive; there’s no link to corporate information on the regular user page so far as I can see. And there are no press releases showing at the moment related to either of these news items. However, via Twitter, I found a discussion of Gobblin from analyst GigaOM with, in turn, a link to another “hidden” section of the LinkedIn website: LinkedIn Engineering. That’s the primary source and it has diagrams and a useful discussion of the analysis and absorption of unstructured “big data”. Interesting to me, because I cut my database teeth on text databases when I moved from University computing to enterprise IT.

When I visited LinkedIn, on a Leading Edge Forum study tour, they were still a start-up and it wasn’t clear whether they had a viable business model or met a real need. It was their presentation then which persuaded me to sign up. Well, a good ten years on, the company is still not in profit although revenue, in the last quarterly results, had increased by almost half year-on-year. The business model is still standing, at least.

Links:
• LinkedIn
• LinkedIn details Gobblin …, VB News, 25 Nov 2014
• LinkedIn expands for high school students, universities, Monterey Herald Business, 19 Nov 2014
• LinkedIn explains its complex Gobblin big data framework, GigaOM, 26 Nov 2014
• Gobblin’ Big Data With Ease, Lin Qiao (Engineering Manager), LinkedIn Engineering, 25 Nov 2014
• LinkedIn Announces Third Quarter 2014 Results, LinkedIn press release, 20 Oct 2014
• Look for LinkedIn information here: Press Center; and Engineering

SAPphire and Supernova: two reasons for a visit to Constellation

R “Ray” Wang’s Constellation group is worth watching anyway. But just now there are a couple of good reasons.

First, if you’re a SAP user, they have coverage of the recent SAPphire conference. Remember that Ray’s primary expertise, from his days at Forrester, is in ERP. Just go to Constellation and search for “Sapphire 2014” for pre- and post-event analysis. There are of course also replays and other notes on the SAP website, if you want to go back to the originals.

Secondly, they are launching the call for this year’s Supernova innovation awards. Again, worth watching if your focus includes the what, how and who of innovation in business. As I’ve commented before, I’m not clear on the relationship between this Supernova event and the one formerly hosted by Kevin Werbach of the Wharton School (University of Pennsylvania), but Werbach’s Supernova hasn’t happened since 2010 and was described by him in 2012 as “on hold”.

Note, by the way, that their URL has changed from constellationrg.com to just constellationr.com.

Links:
• Constellation: search for Sapphire 2014
• Call for Applications: SuperNova Awards for leaders in disruptive technology, Courtney Sato, Constellation, 17 Jun 2014
• SAPPHIRE NOW 2014 (SAP Events)

Why I hate the new Google Maps

I finally allowed myself to be pushed into using the new Google Maps instead of the old familiar one.

Here are all the things that I cannot do as easily as I could before.

1 – have it open by default with my own location rather than the blanket map of the USA

2 – immediately find my own list of custom maps. It’s an extra click, and I have to know that it appears as a drop-down from the search bar. Custom maps have become a lot more complicated to create and manage, too, with “layers” and so on. And there’s a different set of marker icons, styled differently from the old ones. So modifying an existing map, such as the one I maintain for Brighton Early Music Festival, won’t be straightforward if I want to maintain consistent styling.

3 – sharing has changed. It used to be simple: create a map, and embed the HTML provided. Now, for example, the Brighton Early Music Festival map doesn’t properly display the venue markers. Never had a problem before. Still working on this one!

4 – “search nearby” was a simple click from the pin marker on the old version. These pin markers have got “smart”, which means that if I search for Victoria Coach Station, when I click or hover on the pin what I get is a list of all the coach services which leave from there. If I right-click, I get three options: Directions to here; Directions from here; and What’s here, which doesn’t seem to do anything. If I search for Ebury Street (essentially the same location) I get a pin with no smart hover at all. Either way, the marker no longer pops up the old options: nearby information, Directions, Save and Search nearby.

5 – no accessible help without going out to separate web pages; and even then the instructions don’t make sense. For example, Google says that “Search nearby” is on a drop-down you find by clicking the search box. No, it isn’t. Not in Firefox. It does, though, appear to work in Chrome. I don’t like being pushed to a different browser.

6 – having found Search nearby, I get given (of course) a set of strange, supposedly related, links. Well, I suppose this is what Google does. But for me, it gets in the way.

7 – extra panels and drop-downs obscure parts of the map I’m trying to look at

Now all this, and more, is partly the natural response to changing a familiar application. Let’s assume that overall the product is fuller-featured and more flexible than the old version, and its links to the rest of Google’s information are more capable. But software vendors in general are not always good at user-oriented upgrades. Keep the backward compatibility unless there’s a really, really good reason not to. Icon redesigns, and added complexity in the user interface, are not good reasons.

I’m exploring alternatives. Apple’s new map application doesn’t have anywhere near the same level of functionality, and older offerings such as Streetmap haven’t really moved on either. But for (UK) route planning, for example, I’m now using either the AA or the RAC route planner – which still have the simple, straightforward A-to-B interface.

Links:
• Google Maps (new version)
• How to search “nearby” in new Google Maps? Google Forum, 11 Jun 2013
• Google Removes “Search Nearby” Function From Updated Google Maps, contributor to Slashdot, 16 Jan 2014
• Route planners from the AA and RAC
• Streetmap (UK)

What to make of Heartbleed?

I watched the BBC News report last night about the security hole in OpenSSL. With its conclusion that everyone should change all their passwords, now … and the old chestnut that you should keep separate passwords for every service you use, never write them down, and so on. Thankfully, by this morning common sense is beginning to prevail. The Guardian passes on advice to check whether services have been patched first, and offers a link to a tool that will check a site for you.

First, as they say, other Secure Sockets Layer implementations are available. While a lot of secure web connections do rely on OpenSSL, it’s not by any means universal.

Second, as always, dig behind the news. As TechCrunch did. This is the first vulnerability to have its own website and “cool logo”; it was launched by Codenomicon in Finland, which started by creating notes for its own internal use and then took what it calls a “Bugs 2.0” approach to put the information out there. I remember doing something similar way back in Year 2000 days. Incidentally, the OpenSSL advisory (very brief) credits Google Security for discovering the bug. It also identifies the versions which are vulnerable. (There’s a note there that says that if users can’t upgrade to the fixed version, they can recompile OpenSSL with -DOPENSSL_NO_HEARTBEATS which, I’m guessing, gives a clue as to the naming of the bug.)

If you want real information, then, go to Heartbleed.com. The Codenomicon Q&A is posted there. In brief: this is not a problem with the specification of SSL/TLS; it’s an implementation bug in OpenSSL. It has been around a long time, but there’s no evidence of significant exploitation. A fix is already available, but needs to be rolled out.

What was clear, too, is that the BBC reporter (and some others) don’t understand the Open Source process. The Guardian asserts that “anyone can update” the code, and leads readers to suppose that someone could maliciously insert a vulnerability. Conspiracy theories suggest that this might even be part of the NSA’s attack on internet security. But of course that ain’t the case. Yes, anyone can join an Open Source project: but code updates don’t automatically get put out there. Bugs can get through, just as they can in commercial software: but testing and versioning are pretty rigorous processes.

Also, this is a server-side problem, not an end-user issue. So yes, change your passwords on key services that handle your critical resources, if you’re worried, but it might be worth first checking whether they’re likely to be using OpenSSL. Your bank probably isn’t. There’s a useful list of possibly vulnerable services on Mashable (Facebook: change it; LinkedIn: no need; and so on).

And what do you do about passwords? We use so many online services and accounts that unless you have a systematic approach to passwords you’ll never cope. Personally, I have a standard, hopefully unguessable password I use for all low-criticality services; another, much stronger, for a small handful of critical and really personal ones; and a system which makes it fairly easy to recover passwords for a range of intermediate sites (rely on their Reset Password facility and keep a record of when it was last used). But also, for online purchases, I use a separate credit card with a deliberately low credit limit. Don’t just rely on technology!

Links:
• Heartbleed, The First Security Bug With A Cool Logo, TechCrunch, 9 Apr 2014
• Heartbleed bug, website from Codenomicon (Finland) – use this site for onward references to official vulnerability reports and other sources
• OpenSSL project
• The Heartbleed Hit List, Mashable, 9 Apr 2014
• Heartbleed: don’t rush to update passwords, security experts warn, Alex Hern, The Guardian, 9 Apr 2014
• Heartbleed bug: Public urged to reset all passwords, Rory Cellan-Jones (main report), BBC, 9 Apr 2014
• Test (your) server for Heartbleed, service from Filippo Valsorda as referenced in The Guardian. I’m unclear why this service is registered in the British Indian Ocean Territory (.io domain) since Filippo’s bio says he is currently attending “hacker school in NYC”. On your own head be it.

Horses for Sources: what’s with outsourcing

I’m on a webinar by HfS Research: my first direct encounter with Phil Fersht’s organisation. It’s a where-are-we-going session called “Outlook for the Extended Enterprise”. This post will update live, as we go.

Primarily we’re discussing “extended” in the sense of multiple outsourced operations, not of industry alliances and cooperative business. HfS’s own research, done in conjunction with KPMG, seems to paint quite a poor picture of outsourcing value beyond running standard operations. “Talent, technology and analytics value”, Phil asserts, are frequently absent. Once the initial savings are off the books, value doesn’t develop in, for example, exploiting “big data”.

Business-enablement of IT is a gap. I’m beginning to feel like this conversation might have happened equally any time in the last ten, perhaps twenty, years. What’s interesting is a breakdown of “BPO maturity” into quartiles; there seems to be a gap which companies are about to cross to get into the top quartile.

What are the problems? Fear of change; lack of vision; silo operations. The espoused change is to a centre-led organisation; the pros and cons of this haven’t been discussed though. The point’s already been made that perhaps not all enterprises can achieve effective globally-managed business services (which means IT, HR and so on). Maybe that should be “… nor should they”?

Microphone being passed to Ed Caso of Wells Fargo Securities. He’s a senior analyst and has just switched the screen to presenter split-screen. Finally got into proper presentation mode. He’s offering a survey, I think, of the key providers in the outsource market. It’s the sort of analysis which Gartner and the others started out in … Some comments about the financial situation in India and its impact; changes in some providers. And a note that a lot of early 10-year contracts are coming up for review and re-tender. There are visa and immigration issues in several major economies, which might drive more work offshore as it becomes harder to identify skilled staff entitled to work in the home country.

Enterprise-wide sourcing is linked to wider awareness of options, a portfolio approach (provider, location and skills) rather than single-source, hybrid cloud usage, and worries about data security post-Snowden (see my previous post on this). And the providers are further challenged by SMAC (Social, Mobile, Analytics, Cloud): opportunities for the providers, but long term contracts don’t fit the speed of technology development. There’s still a tendency to be more comfortable with deliverables-based contracting rather than value-based.

Another change of speaker: Mike Friend of HfS. Where Caso was US-focussed, Friend is looking at Europe in the context of some fiscal optimism. There’s a prediction for IT outsourcing to grow at around 3.5% through the next four years, and BPO at 6.1%, led by the UK market and particularly public-sector spending. He’s mentioning a lot of individual companies.

So where do we go? Charles Sutherland of HfS takes over on process automation – that is, avoiding direct people costs – invoking more capable and “friendly” tools. This is still in the context of sourcing: looking for providers who can offer this as a way forward. It’s a potential differentiator in the market. Sutherland is encouraging buyers to look beyond simple cost. He’s suggesting what the signs might be that this is moving in the market, through 2014.

And the final speaker: Ned May of HfS on “the impact of digital”: the SMAC stack again, emphasising the need to embrace all four elements. The speaker does accept that “digital is not new” but I thought it had been around at least since the inauguration of the Web in the mid 1990s. The examples seem to be describing how what goes round comes around, perhaps with a new view of its capabilities. Experimentation will change to planned projects, but skunkworks projects will be of value. This isn’t just a technology change, it’s a mindset change. Some people have been saying this for a long time!

And finally: workforce issues, Christa Degna Manning. Who doesn’t seem to be accessible … emphasising the importance of a back channel for management issues on web calls! The issue is HR outsourcing as, like other areas, this moves to second/third generation outsourcing. Perhaps no longer primarily to support the HR practitioner, but to support and develop the employee.

The key question is whether this is still same-old outsourcing, or whether the trends discussed earlier apply here too. That is, to look for what the webinar regards as higher-maturity outsourcing: the role of talent, for example, and long-term benefits; managing contractors and non-employees; connection through collaboration technologies and perhaps to the world of crowd-sourcing and micro-work contracting (think Amazon Mechanical Turk). I’m reminded of John Adair’s long-established Venn diagram depicting management as the intersection of Task, Team and Individual.

Webcast preview link: http://www.horsesforsources.com/the-hfs-2014-outlook_012814. I’ll add a replay link when I have it.

We’re over time, but a couple of quick questions to wrap up. The question of handling IP (I presume this means the IP that the outsource process generates): providers like to be able to re-use, perhaps by back-licensing, processes developed within a contract. A bit more elaboration about “digital”: I clearly need to figure out what HfS mean when they say “digital”, but I think it means digitally-captured business information from, perhaps, unconventional, distributed, and big-data sources. And a question about how this works in a shared-services model (which is not the same as global business services, even within the one enterprise).

Time to drop off the call. I’ll add some reflections, and tidy this up, tomorrow.

Facebook at 10, Microsoft at 40

OK, a slight stretch for a snappy headline but these have been two lead stories in the last few days.

Others will comment with more depth and more knowledge than I can on either Facebook’s tenth anniversary or the appointment of Satya Nadella to succeed Steve Ballmer (and, of course, Bill Gates) at the head of Microsoft. But I was remembering, quite a while ago now, a META Group event in London when the Web was just arriving and disintermediation was a new word. The speaker took a look at the banking industry, with new on-line start-ups starting to eat the lunch of the established financial institutions.

The point was this. The new entrants invested, typically, in just two things: infrastructure and software development. Existing players had institutional weight; they had enterprises to keep in existence, with all the corporate overheads that accumulate over time, with shareholders and stock-market expectations and dividends. They needed to cut costs to compete with the new, lean players. And (doesn’t it still happen?) they would target the IT budget. So the area of investment which differentiated their new competitors was precisely where they were dis-investing.

Microsoft is fast approaching 40. It’s a solid, established player with corporate overheads, strategies, shareholders. Is it still as lean and sharp as the company which turned on a sixpence (a dime, if you’re American; a 5p piece for the youngsters) when it “got” the Internet and realised that MSN and AOL were not going to be where most of the traffic went? Enter Internet Explorer, competing with Netscape; and the rest is history.

Well … we can look at areas in the recent past where that hasn’t been repeated. Smartphones? A lot of Windows phones have been sold, but Android and iPhone are the big players, and an Office 365 subscription gives access to Office mobile software on those platforms as well as Windows. But on the other hand: Office 365 is a good model, for both consumers and Microsoft, because it converts intermittent capital costs for what is still essential software into predictable operational costs. And while capital versus operational is the language of the enterprise, where Microsoft’s heart arguably is these days, the concept works for individual licences. There are undoubtedly challenges, but a CEO with an Indian background may have the right insight and vision to work round all that unavoidable corporate baggage.

What about Facebook? Facebook has got to the stage where it is acquiring the corporate baggage (shareholders and so on). It’s had to face up to public perception, particularly over issues like personal online security. Both companies now find themselves covered in the main news sections and financial pages, like any other corporation, rather than only in geek-tech reporting. They’ve gone mainstream.

So Facebook has new competitors in the social media space, sharper and newly innovative where Facebook is unavoidably solidifying. Microsoft is in a stable, continuing enterprise market which it understands; it appears not to understand the consumer market so well. Facebook is in precisely that consumer market, although a lot of enterprises use it to communicate with their own consumers. It’s a fashion market. What’s coming next? and how can Mark Zuckerberg stay ahead of the game?

No links here; just a personal opinion, and you can find lots of links with some easy searching!

Insight providers and market evaluation

This is a slightly extended version of a response in LinkedIn to Michael Rasmussen, who has published some thoughts (“a rant”) about Gartner’s Magic Quadrant.

The MQ is a highly influential and long-established analyst tool. As an insight-services user in enterprise IT, I made use of MQs regularly and would also review similar tools, such as Forrester’s Wave, when a purchasing decision was being made. Like anything else, it’s essential to know just what a tool like this is, how it’s created and what it does and does not convey. The same is true of Gartner’s Hype Cycle, as I’ve commented elsewhere.

Michael highlights several concerns about Gartner’s recently updated MQ in his own area of considerable expertise, that is, governance, risk and compliance (GRC). Do read his original, which I won’t attempt to summarise; see the link below. Here’s my response.


Michael, having read the whole post on your blog, a couple of comments from a user’s perspective. First: I wholly agree that the value of Forrester’s Wave is in the open availability both of the evaluation criteria and of the base data; it would be fantastic to see the same from Gartner. This isn’t just an issue of general openness: since a user can adjust the weightings on the Forrester evaluations, it becomes a much more practical tool.

Second, I remember the moment of revelation when I realised there is a whole industry out there called Analyst Relations, that is, people employed by (big) vendors to influence the analysts. Users often don’t realise that’s how the insight market works.

Third, new approaches do emerge. I’d be interested in your take on Phil Fersht’s Blueprint methodology at Horses for Sources (HfS).

My own analysis of the insight market itself classifies providers in various dimensions. One of these looks at reach, both geographic and content: from global generalists (Gartner for example) through to niche (often start-ups – you yourself have progressed from niche to global specialist since you left Forrester). Perhaps tools like the Wave or MQ should have similar dimensions so that the innovative new providers can be properly assessed.


To add a couple more points. As a technology innovation researcher, I was always well aware that small start-ups often offered innovative options which larger vendors didn’t have or hadn’t got round to. But you took the risk of the start-up falling apart, failing to deliver, or just failing. Experimental technologies always carry risk, and the options are tactical (innovation for shorter-term business benefit) not strategic. Gartner, I’m sure, would assert that innovation is handled by their Vision dimension in the MQ but, as Mike points out, there are thresholds and other elements which mean that such vendors don’t make it into MQs at all. HfS makes innovation explicit.

Second, in business-critical areas which are highly specific to your own business, it’s unlikely that an insight provider will know as much as you do. Don’t automatically assume that an MQ or any other tool will deliver the right answer. Use the tools, most certainly, but be prepared to reason your way to, argue for, and adopt a solution which is at odds with what the tools say. You must of course be able to justify this, but the general answer may not be right for you.
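As an aside on the weightings point in my response above, here is a trivial sketch of why user-adjustable weights matter: change the weights to reflect your own priorities and the ranking can flip. The vendors, criteria, raw scores and weights are all invented.

```typescript
// Trivial weighted-scoring sketch; vendors, criteria, scores and weights are invented.
type Scores = Record<string, number>; // criterion -> score out of 5

const vendors: Record<string, Scores> = {
  VendorA: { functionality: 4, support: 3, roadmap: 5 },
  VendorB: { functionality: 5, support: 4, roadmap: 2 },
};

// Adjust these to reflect your own priorities; the ranking below can change.
const weights: Scores = { functionality: 0.5, support: 0.3, roadmap: 0.2 };

function weightedScore(scores: Scores): number {
  return Object.entries(weights).reduce(
    (total, [criterion, weight]) => total + weight * (scores[criterion] ?? 0),
    0
  );
}

for (const [name, scores] of Object.entries(vendors)) {
  console.log(name, weightedScore(scores).toFixed(2));
}
```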

Links:
• Gartner GRC Magic Quadrant Rant, Part 3, Mike Rasmussen, GRC Pundit, 23 Oct 2013
• The HfS Blueprint Methodology Explained, Jamie Snowden and others, HfS Research, Oct 2013
• GRC 20/20 research (Mike Rasmussen)