Data and AI – a match made in heaven, or a cause for concern?
By Gavin Meggs, Director of Data, Insight and Analytics at O2.
In the previous blog in this series on Big Data we discussed the importance of a good data strategy and the benefits it can bring to an organisation, and we identified four key areas of focus. One development that has driven rapid advances in how we draw insight from data is Artificial Intelligence (AI). However, given Elon Musk’s concerns, and the late Stephen Hawking’s warning that AI could doom the human race to extinction, how should you really be looking at AI to help your organisation?
The concept of AI is certainly not new: the first machine learning algorithms date back to the 1960s. The machines that ran them were specialised, filled vast spaces, and were used for code cracking as well as scientific experiments. More recently AI has reached the mainstream, and now plays an integral role in Big Data project design. So what has changed?
Three converging trends have made it possible:
- The rapid growth of data and the speed with which it is being generated, collected and stored.
- The vast increase in computer processing power, enabling us to exploit larger volumes of data.
- The development of machine learning technology capable of analysing complex datasets.
We focused on the first two trends in previous blog posts. Turning to the third, as Tom Pringle, Head of Application Research at Ovum, put it in a presentation to O2: “Data is the fuel which powers 90%+ of realistic artificial intelligence use cases”. Machine learning has transformed the way we analyse data. It is based on the principle that systems running complex algorithms can learn from data by understanding context and association, even when that association is not obvious at the outset. This allows systems to perform rapid, deeper analysis of more complex datasets, identifying patterns and making faster decisions without the need for human intervention. More importantly, it thrives on more data, refining its output as it detects and understands trends and outcomes.
Examples of machine learning and complex data analysis are all around us. Your personalised Amazon homepage was created using AI. The same applies to your Netflix recommendations. Banks use it to identify unusual transactions in their efforts to prevent fraud. Social media news feeds are peppered with advertising and sponsored content based on AI models that learn about your likes and preferences.
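To make the fraud-detection idea concrete, here is a minimal, purely illustrative sketch: it flags a transaction as unusual when its amount sits far outside a customer’s historical spending pattern. The data and threshold are invented for illustration; real banking systems use far richer models and many more signals.

```python
# Minimal anomaly-detection sketch: flag transactions whose amount
# deviates sharply from a customer's historical spending pattern.
# All figures are illustrative, not drawn from any real system.

from statistics import mean, stdev

def flag_unusual(history, new_amount, threshold=3.0):
    """Return True if new_amount lies more than `threshold` standard
    deviations from the mean of the customer's past transactions."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_amount != mu
    return abs(new_amount - mu) / sigma > threshold

past = [12.50, 30.00, 18.75, 25.40, 22.10, 15.99, 28.30]
print(flag_unusual(past, 24.00))   # False: typical spend
print(flag_unusual(past, 950.00))  # True: far outside the usual range
```

The “learning” here is trivial (a mean and a spread), but it captures the principle above: the more transaction history the system sees, the better calibrated its sense of “normal” becomes.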
These examples are just the beginning, and no one would claim the technology is perfect yet. Machine learning models are good, but we are not at the stage where they can be left to learn entirely without “constructive” human intervention. Consider Microsoft’s teenage AI chatbot, Tay. Launched as an experiment in ‘conversational understanding’ in March 2016, Tay was a Twitter bot designed to engage people through ‘casual and playful conversation’. The idea, at least, was simple: the more users chatted to Tay, the smarter she would get. Perhaps predictably, Tay became a target for trolls and online troublemakers, who within hours persuaded her to tweet hateful and abusive comments. Microsoft took Tay offline within 24 hours of launch.
However, even in the time since Microsoft’s experiment in March 2016 the technology has come a long way. With the volume of data continuing to grow at an exponential rate, and all of it needing to be understood in context, it makes sense for the convergence of Big Data with AI to continue at speed. The resulting models can analyse bigger, more complex data and deliver faster, more accurate results at vast scale.
Deep Learning (DL) has also dramatically improved the capability of AI systems in recent years. DL algorithms create artificial ‘neural networks’, which can learn and make intelligent decisions on their own, based on the outcomes they have witnessed from previous decisions, much as a child learns that an oven is hot by touching it. It’s an approach to AI that is being used to develop autonomous, self-learning systems, which are being employed in a number of industries.
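The learning-from-outcomes idea can be shown with a toy single ‘neuron’, the building block from which neural networks are assembled. This is a deliberately simplified sketch, nothing like the scale of a production DL system: after each decision the neuron receives feedback and nudges its weights, and over repeated passes it learns a simple rule (here, logical AND) from labelled examples.

```python
# Toy single-neuron ("perceptron") sketch: the neuron adjusts its
# weights after each decision based on the outcome it observes,
# loosely mirroring the trial-and-error learning described above.

def train(samples, epochs=20, lr=0.1):
    """samples: list of ((x1, x2), label) pairs with label 0 or 1."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            predicted = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            error = label - predicted       # feedback from the outcome
            w1 += lr * error * x1           # nudge weights toward the
            w2 += lr * error * x2           # correct answer
            b += lr * error
    return w1, w2, b

def predict(weights, x1, x2):
    w1, w2, b = weights
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

# Learn a simple logical AND from labelled examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
model = train(data)
print([predict(model, x1, x2) for (x1, x2), _ in data])  # [0, 0, 0, 1]
```

Deep learning stacks many layers of such units and trains them with more sophisticated methods, but the core loop is the same: predict, observe the outcome, adjust.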
Google DeepMind, for example, applied the technology to optimise the cooling system of Google’s data centres. A range of sensor data was collected, including temperatures, the number and location of open windows, the routing of network data traffic and the workloads of individual machines. DeepMind’s AI system trained itself to control these variables in order to reduce cooling energy consumption, and achieved a 40% reduction, resulting in a 15% saving in the overall cost of running the data centre.
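It is worth noting how the two reported figures relate. A 40% cut in cooling energy can only produce a 15% cut in total running cost if cooling accounts for a particular share of that cost; the share below is implied by the arithmetic, not a figure DeepMind published.

```python
# Sanity-check the reported figures: a 40% cut in cooling energy
# translating into a 15% cut in total running cost implies cooling
# made up a certain share of that cost. The share is inferred here,
# not taken from any published breakdown.

cooling_reduction = 0.40   # reported cut in cooling energy
overall_saving = 0.15      # reported cut in total running cost

cooling_share = overall_saving / cooling_reduction
print(f"Implied cooling share of total cost: {cooling_share:.1%}")  # 37.5%
```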
The more immediate impact of AI relates to how well it can support traditional functions such as customer enquiries and support. At O2 we recognise the importance of AI, and the possibilities it brings for tomorrow’s digital consumer. In March 2018 we launched O2 Ask, a digital assistant powered by artificial and cognitive intelligence that answers many of the questions customers have about their accounts. Customers can use O2 Ask to check their mobile and data usage, view bills, make payments, change tariffs, buy more data and add or remove Bolt-Ons. Find out more about O2 Ask here.
The lesson seems clear: if we leave AI to learn exclusively from the general public, as happened with Microsoft’s Tay, the biggest threat to AI may be ourselves. For now, we ought to explore how secure systems such as O2 Gateway can provide safe fixed, mobile and wi-fi networks for accessing and collecting data, and how our IoT solutions can use AI to deliver smart, accurate and rapid ways to make clear business decisions.
Contact us now to discuss how we can ensure you have the right scalable networks for today and tomorrow, and to hear the lessons we’re learning from using AI to deliver better experiences for our staff and for the businesses and people we support. Get in touch