How to build an AI-driven business

I’m a big fan of the Harvard Business Review, so I wasn’t entirely sure whether to take this article seriously: “Please don’t hire a chief artificial intelligence officer”. The gist of it is:

AI strategy is for firms where the product itself is AI (Google, Microsoft, etc.). Non-tech firms should just make sure the techies are talking to business managers, and AI implementation will take care of itself.

I wasn’t entirely convinced by this, so I thought about a typical ‘AI issue’ I see every day:

What would happen in your firm under the following scenario: a sales team from XYZ ‘Cognitive’ corp comes to sell you their latest system, ‘Cleverbot’. Your tech guys and business unit manager all agree it will cut operational costs, so you should buy it. You’re close to signing a contract when the XYZ salesperson announces that for ‘Cleverbot’ to work, they need to train their cloud-based ‘brain’ with your customer data.


Your data will be anonymised, so what’s the harm? After all, this is an opportunity to ‘monetise’ your data assets. The small print mentions that this cloud-based ‘brain’ is used by other firms, and XYZ owns all the IP of the trained model.
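To see why ‘anonymised’ deserves scrutiny, here is a minimal sketch (in Python, with hypothetical field names) of what vendors often mean by it in practice: direct identifiers are hashed, but behavioural fields that can still re-identify a customer are passed straight through.

```python
import hashlib

SALT = "replace-with-a-secret-salt"  # hypothetical; a real deployment needs proper key management

def pseudonymise(record: dict) -> dict:
    """Hash the direct identifier; leave behavioural fields untouched.

    Note: this is pseudonymisation, not anonymisation. Combinations of the
    remaining fields (postcode area, purchase history, timestamps) can often
    re-identify a customer, which is exactly the risk to assess before
    feeding data into a vendor's shared 'brain'.
    """
    hashed_id = hashlib.sha256((SALT + record["customer_id"]).encode()).hexdigest()
    return {
        "customer_ref": hashed_id,
        "postcode_area": record["postcode"][:3],  # coarsened, but still a quasi-identifier
        "purchases": record["purchases"],         # raw behavioural data goes to the vendor as-is
    }
```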

To make an informed decision you need to get a handle on the following (a rough scoring sketch follows the list):

  • The value of the data you are sharing with XYZ corp.
  • Competitive gain or loss from sharing the data with XYZ corp. Who else is using XYZ’s ‘brain’? Who benefits from your data? Who might benefit from this data in the future?
  • Opportunity cost with respect to the alternatives: is XYZ’s product a square peg you’re about to hammer into a round hole?
  • Regulatory issues around data privacy and governance (GDPR, the right to be forgotten). Where is the ‘brain’ geographically located? What are the legal implications?
  • Product life cycle: costs and staffing. These will always be higher than the vendor suggests.
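As an illustration only (the criteria, weights and scores below are hypothetical assumptions, not a recommended methodology), even a crude weighted scoring sketch forces these trade-offs to be made explicit and owned by someone:

```python
# Hypothetical weighted scoring of the 'Cleverbot' decision.
# Scores run from -5 (strongly against signing) to +5 (strongly for);
# weights and scores here are illustrative, not real data.
criteria = {
    # criterion: (weight, score)
    "value_of_data_shared":      (0.30, -3),  # customer data trains a shared model owned by XYZ
    "competitive_gain_or_loss":  (0.25, -2),  # competitors may benefit from the same 'brain'
    "opportunity_cost":          (0.15, -1),  # square peg / round hole risk vs alternatives
    "regulatory_exposure":       (0.20, -4),  # GDPR, right to be forgotten, data residency
    "lifecycle_cost_vs_savings": (0.10,  2),  # promised operational savings, net of support costs
}

total = sum(weight * score for weight, score in criteria.values())
print(f"Weighted score: {total:+.2f}  (negative => don't sign yet)")
```

The numbers matter far less than the exercise: someone has to put their name against each weight, which is precisely the responsibility question raised below.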

All of these issues have strategic ramifications, some of which have hugely asymmetric pay-offs. Who takes responsibility? This is a difficult question to answer, because it’s a problem most firms haven’t encountered before.

Widespread adoption of AI marks the stage in the corporate technology game where business models become inseparable from their use of technology. AI is driving corporate technology to evolve from information technology into operating technology that defines or enables the core business proposition. It re-weights the balance of power within organisations and radically alters their competitive landscape. This is already recognised in many institutions by the special treatment ‘Digital’ teams receive (dress how they want, cool offices, etc.). Why? Because Digital is operating technology: it’s a profit driver, not a cost.

Embedding AI into your firm is too important to be left to chance, let alone driven by people trying to sell you software licences. So what’s the answer?

The first thing to note is that the AI industry is in its infancy. There are some amazing products and services out there, but most of this software doesn’t integrate well, can be difficult to train, and many use-cases are dreamt up by PhDs who have little understanding of the realities of business. There is a lot of what I would call ‘naive tech-solutionism’: these are point solutions to simple problems that look nice to venture capitalists but add questionable value to the end user. This means a lot will change over the next 5 years, so the key is having a flexible approach:

1. Be organisationally agile. This means an agile approach to experimenting with the possibilities of AI. It’s a cliché, but it’s best summed up by ‘fail fast’. The world is changing rapidly and risks have to be taken. Some will necessarily turn sour, so avoid blame and recriminations, and just move on to the next thing. Good risk management and clear decision making are the key here. More and more companies are adopting a two-speed approach to technology, separating operating technology from the more traditional information technology.

So how do you design and deliver new AI processes? The key is to embed data scientists in your business:

2. Each business unit needs an embedded data science capability. I use this term loosely to include people with Machine Learning expertise and sometimes broader AI capabilities like Natural Language Processing. The key is that each business has team members who understand the theory and practice of data-driven models and applications. These are the guys who will spot opportunities, architect new operating models, in some cases develop technology, and in other cases get the right tech vendors involved. Just as importantly, they will call bullsh*t on non-technical dreamers and unscrupulous tech vendors, saving a lot of time and money.

Building a data science capability is expensive and difficult. There is currently huge demand for business-savvy data scientist types, so they are hard to hire and retain. The most effective approach at the moment is to train current employees. Introduce basic data science training for graduate hires. Over time, data science will become a core business skill, so start today:

3. Build a data scientific culture in your firm. Competitive advantage comes from getting the right balance of business expertise and technical experience. In my experience, the most powerful results come from training AIs on messy, complex problems, but this involves managing the interaction between business subject matter experts and your technology effectively. This is a long way from the dream many vendors sell of dumping all your data into a ‘cognitive’ system and letting the software do all the work.

So how do you build a data scientific culture?

4. Large firms need AI ‘centres of excellence’. A widespread phenomenon in large quantitative firms (such as investment banks) is the development of pockets of high-quality expertise, each doing its own thing with AI and machine learning. Whilst this is very much in the spirit of business-embedded AI, it risks duplication of work and excessive software costs, and it often fails to realise the value of your in-house experts for the benefit of the wider firm. There needs to be some way of connecting and rationalising the activities of your AI leaders. This doesn’t mean putting them all together in a separate office, but there needs to be an organisational structure to align these people.

With a ‘core team’ of AI and Machine learning experts, you can start to develop policies, governance and escalation procedures:

5. Get some proper governance in place before you ‘release the demon’. Some well-known pundits have referred to the invention of AI as humanity ‘releasing the demon’. Whilst the AGI this refers to is a long way off, you could still do your organisation a lot of harm with an ad hoc approach to AI. This list is not exhaustive, but you need policies that cover at least the following (a minimal model-register sketch follows the list):

  1. Privacy and ethics when using customer data for model / agent training
  2. Rationalisation of AI technology estate
  3. Machine learning model validation policies and lifecycle management
  4. Unleashing autonomous agents ‘into the wild’
  5. AI specific cybersecurity issues
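As a concrete starting point for policy 3, one lightweight option is a model register: every model in production gets an entry like the sketch below (the field names and the example entry are illustrative assumptions, not a standard).

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class ModelRecord:
    """Minimal model-register entry supporting validation and lifecycle policies."""
    name: str
    owner: str                        # accountable business owner, not just the data scientist
    training_data_sources: List[str]  # needed for privacy and right-to-be-forgotten requests
    validation_status: str            # e.g. "pending", "approved", "retired"
    last_validated: Optional[date] = None
    known_limitations: List[str] = field(default_factory=list)

# Hypothetical entry; all names and dates are illustrative.
registry = [
    ModelRecord(
        name="churn-propensity-v2",
        owner="Retail Operations",
        training_data_sources=["crm_customers", "web_clickstream"],
        validation_status="approved",
        last_validated=date(2018, 3, 1),
        known_limitations=["not validated on corporate accounts"],
    )
]
```

Even this minimal record makes the privacy and validation policies auditable: you can see which data fed which model, who owns it, and when it was last checked.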

Finally, there needs to be strategy-level thinking in the C-suite about the use of AI in the business. This could be the remit of a ‘Chief AI officer’, or a committee composed of current C-suite executives. The key thing is:

6. It is imperative that you have sufficient technical expertise at a senior enough level to properly manage your AI policies and assets. Whilst you don’t need to be a machine learning expert to grapple with many of the strategic issues AI raises, there are a number of areas where decision making without the requisite expertise will be flawed. For non-tech organisations with a traditional technology estate, this means that hiring a suitably qualified ‘Chief AI officer’ is a no-brainer from a risk/reward perspective. The key thing is setting the remit for the role appropriately. Start by focussing on the issues enumerated above, and broaden into business strategy over time if necessary.

Whether through a ‘Chief AI officer’ or a C-suite AI committee, firms urgently need to address AI transformation at a business strategy level. Understanding the strategic value (or lack thereof) of their data when used as the basis for training AIs, now or in the future, could be pivotal to future success.

There also needs to be a sober assessment of the risks of ‘tech solutionism’. Going ‘all in’ on digitisation could inadvertently commoditise your business by lowering barriers to entry for large tech firms trying to ‘disrupt’ you. There are strategic risks to both ignoring AI and fully embracing it.

The bottom line is that these are a new set of strategic issues, some with existential implications. They need to be actively explored, monitored and managed by appropriately qualified individuals. Just letting AI organically ‘happen’ in your firm is not an option.

