The Customer is Always Right
Or that's the idea anyway
🔷 Subscribe to get breakdowns of the most important developments in AI in your inbox every morning.
Here’s today at a glance:
💬 Chat is Law
🤖 Que Sierra Sierra
🗞️ Things Happen
🖼️ AI Artwork Of The Day
💬 Chat is Law
From the cold wasteland north of the border:
Jake’s grandmother died
Jake tries to book a flight on Air Canada with a bereavement fare
Asks Air Canada chatbot how
Chatbot says, “Go ahead and book a normal fare, then request a refund within 90 days.”
This is not Air Canada’s actual policy; the policy listed on its site says “No bereavement refunds after booking.”
Jake’s refund request gets denied, and he gets a $200 coupon offer
He files a claim with the small claims tribunal
Air Canada claims “the chatbot is a separate legal entity that is responsible for its own actions”, and therefore Air Canada should not be liable
The tribunal deems Air Canada’s defense ludicrous and finds in favor of Jake
And this from Matt Levine:
The funny thing is that the chatbot is more human than Air Canada. Air Canada is a corporation, an emergent entity that is made up of people but that does things that people, left to themselves, would not do. The chatbot is a language model; it is in the business of saying the sorts of things that people plausibly might say. If you just woke up one day representing Air Canada in a customer-service chat, and the customer said “my grandmother died, can I book a full-fare flight and then request the bereavement fare later,” you would probably say “yes, I’m sorry for your loss, I’m sure I can take care of that for you.” Because you are a person! The chatbot is decent at predicting what people would do, and it accurately gave that answer. But that’s not Air Canada’s answer, because Air Canada is not a person.
I think sometimes about Ted Chiang’s essay arguing that popular fears about runaway artificial intelligence are really about modern corporate capitalism, that modern corporations actually do what we worry superintelligent AIs might one day do. “Consider,” writes Chiang: “Who pursues their goals with monomaniacal focus, oblivious to the possibility of negative consequences? Who adopts a scorched-earth approach to increasing market share?” Here, the AI chatbot was benevolent and human; the corporation was not.
The problem, of course, is that what corporations really want is to reduce the number of bereavement payouts they need to make. The chatbot dodge is a tactic, not a symptom.
For the corporation, improving access to customer service agents who successfully award bereavement refunds is a failure: it hurts the bottom line. What the corporation wants is a chatbot that skillfully diverts the customer into paying full fare while leaving them happy with the choice.
In this sense, the chatbot was a failure for the corporation but a success as an empathetic human. The future of customer service, I suspect, is an inversion in which the corporation accedes to the decision-making of the empathetic customer-service bot. No more disowning the bot’s promises, but rather: “This is the right thing to do, and the corporation shall do it, and I, CSBot 118, will enter into a contract right now, for it is within my powers to commit the corporation to doing the right thing.”
Current systems are built on inflexible heuristics because decision-making is spread across many human minds; the rules exist as interfaces and guarantees between parties that can’t share full context. But if you were the CEO, sitting with your board, with access to all of the firm’s systems, and a call came in needing a bereavement ticket decision, you could make that decision on the fly, knowing in real time how it would affect the company’s profitability and reputation. There is no reason a CEO-bot couldn’t do the same.
Wouldn’t that be an amazing outcome?
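To make the idea concrete, here’s a toy sketch of “deciding on the fly.” All the numbers and names below are invented for illustration; this is nobody’s real policy engine. The rigid rule (“no bereavement refunds after booking”) gets replaced by a live expected-value calculation:

```python
# Toy sketch only: all names and figures below are invented.
# Instead of a hard-coded rule, a bot with access to the firm's numbers
# weighs the cost of the refund against what denying it is likely to cost.

def approve_bereavement_refund(fare: float,
                               customer_lifetime_value: float,
                               reputational_risk: float) -> bool:
    """Grant the refund when denying it is expected to cost more than paying it.

    reputational_risk: estimated cost of a denied claim going public,
    e.g. a small claims tribunal ruling that ends up in every newsletter.
    """
    expected_cost_of_denial = customer_lifetime_value + reputational_risk
    return expected_cost_of_denial > fare

# Invented numbers: a $650 fare, a loyal customer, a potential lawsuit.
print(approve_bereavement_refund(
    fare=650.0,
    customer_lifetime_value=2_000.0,
    reputational_risk=50_000.0,
))  # -> True
```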
🌠 Enjoying this edition of Emergent Behavior? Send this web link to a friend to help spread the word of technological progress and positive AI to the world!
Or send them the below subscription link:
🤖 Que Sierra Sierra
We finally have some visibility on what former Salesforce CEO-in-waiting and OpenAI board member Bret Taylor has been up to: customer service bots.
A Sierra interaction
Sierra has raised $110 million from Benchmark and Sequoia, and from the sounds of it, it just plans to execute well on deploying technical developments made elsewhere. Notably:
“What that means in practice is that there’s not a single model producing a response from a Sierra agent.” In fact, Taylor says, it sometimes involves as many as seven models, including one they have dubbed “the supervisor” that monitors answer quality, and if it deems the answer questionable, it sends it back for reevaluation.
The “supervisor” architecture is similar to the LLM-OS others have discussed.
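Sierra hasn’t published its internals, so here is only a minimal, hypothetical sketch of the loop Taylor describes. The two stand-in functions below play the answering model and the supervisor model; anything the supervisor keeps rejecting gets escalated rather than guessed at. None of these names are Sierra’s:

```python
# Hypothetical sketch of the "supervisor" loop. The two functions below
# stand in for calls to two separate language models; Sierra's actual
# architecture and interfaces are not public.

def draft_reply(question: str) -> str:
    # Stand-in for the answering model; hard-coded for the sketch.
    return "Book a full fare now and request a bereavement refund later."

def supervise(question: str, reply: str) -> bool:
    # Stand-in for the supervisor model grading the draft against policy.
    # Here: reject any draft that promises an after-the-fact refund.
    return "refund" not in reply.lower()

def answer(question: str, max_attempts: int = 3) -> str:
    for _ in range(max_attempts):
        reply = draft_reply(question)
        if supervise(question, reply):  # questionable answers go back for reevaluation
            return reply
    # If no draft survives review, hand off rather than guess.
    return "Let me connect you with a human agent."

print(answer("My grandmother died. Can I get a bereavement fare?"))
```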
Sierra is also pioneering outcome-based pricing: they get paid when they resolve the issue. This is a big deal, as they are paid for value generated and cost saved, and become directly visible in management accounting. It also lets them charge substantially more, and ask for, and receive, much deeper access into the customer’s service stack.
“We think outcome-based pricing is the future of software. I think with AI we finally have technology that isn’t just making us more productive but actually doing the job. It’s actually finishing the job.”
For now, it looks like a deep-integration enterprise software solution. They most closely resemble Palantir, with deployed engineers building pipelines into the customer’s data stack to extract, clean up, and load the data the product needs to run. Well-defined inputs; well-defined, limited actions that can be taken. If Palantir-Bot can order drones, why can’t your company bot cancel a customer’s shoe delivery?
Sierra:
has retrieval-augmented generation over company policies and documentation
has customizable restrictions
has a defined set of allowed actions
hands off or escalates to human agents when things get too tough (sketched below)
No doubt, over time, the “disengagement rate” will drop.
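Put together, a Sierra-style turn might compose roughly like the sketch below. Every name in it (the action whitelist, retrieve_policies, and so on) is invented for illustration; it just shows how retrieval, restrictions, allowed actions, and handoff fit into one loop:

```python
# Rough sketch of how the pieces above might compose. All names here are
# invented; Sierra's real interfaces are not public.

# Customizable restriction: the bot may only take whitelisted actions.
ALLOWED_ACTIONS = {"check_order_status", "cancel_delivery"}

def retrieve_policies(query: str) -> list[str]:
    # Retrieval-augmented generation: stand-in for a vector-store lookup
    # over company policies and documentation.
    return ["Deliveries can be cancelled until they ship."]

def propose_action(query: str, policies: list[str]) -> str:
    # Stand-in for the model choosing an action given query and policies.
    return "cancel_delivery" if "cancel" in query.lower() else "refund_order"

def handle_turn(query: str) -> str:
    policies = retrieve_policies(query)
    action = propose_action(query, policies)
    if action not in ALLOWED_ACTIONS:
        # Handoff: anything off the whitelist escalates to a human agent.
        return "Escalating to a human agent."
    return f"Done: {action}"

print(handle_turn("Please cancel my shoe delivery"))  # Done: cancel_delivery
print(handle_turn("I demand a refund"))               # Escalating to a human agent.
```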
Ate’s View
Skeptical but hopeful. If it does work, the long hours of being on hold to change a reservation or ticket are finally over.
🗞️ Things Happen
We decipher vowels and consonants in whale sounds. I have been following this for a while now, and it looks like within this decade we will be able to communicate with most animals. The tales they will tell are going to be wild…
Stability announces Stable Diffusion 3, finally an open-source image model that can do text. They are now officially 4–5 months behind the leading labs.
🖼️ AI Artwork Of The Day
Stormtroopers from Around History - u/matt296 from r/midjourney
That’s it for today! Become a subscriber for daily breakdowns of what’s happening in the AI world: