I’m watching the television right now and, after the advert break, the programme titles return and a large letter P looms in the top right-hand corner of the screen. Now I know that this stands for ‘promotion’ and that products featured within the programme have paid to be there. So now I’m on product placement standby. The next time the screen cuts to someone doing the washing up, the camera lingers just that little bit longer than normal on a product, with the brand in full view, glistening in the sunlight, bedazzled with bubbly froth. I think to myself, ‘product placement alert!’, pleased that I have not been duped and have detected the subtle advert. There then often follows a small discussion with myself as to whether I am now a) more likely to buy that brand, or b) less likely to buy that brand, which is closely followed by c) “those pesky advertisers, it’s worked, I’ve spent the last three minutes thinking about washing up liquid”.
So what’s next in the world of subtle marketing? Well, seemingly many are turning to chatbots. The bots have landed, so I’m officially on botwatch. I like to think of myself as a pretty tech-savvy service experience explorer (!), so when I’ve got a problem with a product or service, I click on the company’s website to investigate, and if they offer the opportunity to ‘chat online to one of our advisors’ I’ll often give it a go, ready to boldly step where (I imagine) not so many other users have gone before (I have no data to substantiate this claim; I’m sure lots of users give the chats a go).
The experience is often painful. Again, the internal monologue starts: “Bet they’re in a service centre on the other side of the world, that’s why they’re not understanding what I’m saying”. “They’re probably up against some serious chat targets and are managing eight chat conversations at one time”. Some general: “Argh! Why is this taking so long?” and/or “it’s not that difficult to understand, I expressed myself very clearly”. Then, finally: “I wonder if I’m talking to a bot”.
This is a screenshot of a relatively recent online chat with Apple, taken whilst I was keen to avoid the quest that is securing a Genius Bar appointment.
Hmm. “Laughing! I am certainly human” was not the most convincing reply to my question, Apple.
By the way, nice try including the word ‘totally’ in your next response in an attempt to throw me off the scent.
I have no idea whether I was talking to a bot, and I’d hope that when chatbots are fully integrated into society, they won’t ‘lie’ and pretend to be human, but this particular experience certainly felt like it. I suppose there is some good practice in repeating and confirming that they had correctly understood the nature of my problem, but it felt too much, too perfect, too machine-like. I was trying to talk to an expert, and it didn’t feel like it. It felt forced. Suddenly I needed to know whether I was talking to a machine or to a human being. Why did this matter to me? Authenticity, I suppose. As a consumer, as a human, I want to feel in control and fully cognizant of what is happening to me, particularly when I can’t see the person to whom I am talking. The minute I start to doubt what’s happening, my trust in the company, in the brand, and in its ability to fix my problem begins to erode.
This isn’t the only ‘online chat’ I’ve had where I’ve really wondered whether it was an automated chatbot or a human being. Why is that? Well, I’ve seen chatbots being demonstrated at service conferences and they can be VERY convincing. I am fully aware of their abilities and potential. I can also imagine the lure of an automated, humanless ability to respond quickly to customers, providing the businesses with the low-cost information that they desperately crave.
I wrote a blog last year about “The Intangible Balance Scorecard”, and both of the models I referred to, “The Elements of Value” model (Almquist, Senior and Bloch, 2016) and David Rock’s “SCARF” model, explore the difficulties in quantifying and harnessing the HUMAN elements of value, such as a “sense of belonging” and “fairness”. These are all essential parts of our service experience, and all elements which chatbots will find very difficult to replicate.
But to be honest, should they have to? If you look at the lower levels of the “Elements of Value” pyramid, a good chatbot can fulfil many of these functional needs very well. They should “inform”, “reduce effort”, “reduce hassles” and “save time”. The mistake may come when companies seek to recruit a chatbot to fulfil our emotional, life-changing and societal needs (give it time) and also, critically, when they aren’t being authentic about the experience.
So, to summarise, I think there’s a real need for a capital C to loom over the beginning of any chatbot discussion. I don’t mind talking to a machine as long as I know that I’m talking to a machine. Please don’t try to fake it. To possess such knowledge appeals to my Status, Certainty, Autonomy, Relatedness and Fairness. When I have visibility of what’s actually happening, I’ll happily discuss my service requirements with you, Mr or Ms Chatbot, and I’ll thank you for your time and any support that you give.