Pygmalion is a figure from Greek myth: a sculptor who brought one of his sculptures to life. The myth has become reality: modern-day Pygmalions live in the realm of data science, where they are deploying AI to bring automation and autonomy to many facets of our lives.
Whilst there is no shortage of fanciful headlines and hyperbole about the latest algorithm, the reality is that to deploy a machine learning model in an operational environment it needs to be trained well on relevant data, and, if the environment changes, to keep being trained so that it adapts.
In the world of customer interactions and customer experience, many machine learning techniques are being applied, for example to automate customer service, contact centers and processes, and to garner insight from the ever-growing ocean of voice-of-customer data such as surveys, complaints, reviews, call logs and social media.

Building models that predict accurately is hard enough, but the signals are also changing all the time in both nature and mix: new products get launched, new ways of talking about the same things appear, new channels require new data structures (e.g. business chat and chatbots) and, perhaps more significantly, customer expectations are changing all the time, sometimes driven by experiences outside the industry in question. For example, it is no exaggeration to say that the simplification of devices by the likes of Apple and the ease of shopping on Amazon have led to a change in expectation, and indeed an expectation of change. In a recent study by Accenture, only 7% of brands exceeded customer expectations and 25% did not meet them.
This leaves machine learning experts in a quandary: How can businesses develop machine learning models that automate processes and contact centers reliably, not just today but on an ongoing basis? How can they keep drawing rich insight from models when the data are changing around them?
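One way to picture the continual training mentioned above is a monitor-and-retrain loop: check the model against the newest labelled data and refit it when accuracy drifts. The minimal Python sketch below is illustrative only and is not taken from the white paper; the toy data, the scikit-learn pipeline and the accuracy floor are all assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

ACCURACY_FLOOR = 0.8  # assumed threshold below which retraining is triggered

# Toy historical voice-of-customer data (labels are illustrative intents).
history_texts = ["my bill is wrong", "cancel my contract", "love the new app",
                 "refund not received", "great support call", "upgrade my plan"]
history_labels = ["billing", "churn", "praise", "billing", "praise", "sales"]

# Simple intent classifier trained on the historical data.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(history_texts, history_labels)


def refresh_if_drifted(batch_texts, batch_labels):
    """Refit on old + new data when accuracy on the newest labelled batch drops."""
    if model.score(batch_texts, batch_labels) < ACCURACY_FLOOR:
        history_texts.extend(batch_texts)
        history_labels.extend(batch_labels)
        model.fit(history_texts, history_labels)


# New channel, new phrasing: the model is checked and, if needed, re-fitted.
refresh_if_drifted(["chatbot ate my top-up", "app says payment failed"],
                   ["billing", "billing"])
print(model.predict(["payment failed again"]))
```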
Download the full White Paper for free here