Prototyping Booking.com’s Chatbot for Better Conversations

Prashant Khanchandani
Booking.com — UX Design
5 min read · Jun 24, 2019

--

Every day thousands of people visit Booking.com to plan their trips. They find us via our website, apps, and sometimes even chat. The Booking.com chatbot can help them find and book an accommodation for their next trip.

When I learned I was going to work with the Booking.com chatbot team, I was excited to design and prototype flows in this experimental new platform. This is where I hit my first roadblock.

Prototyping tools work best when testing interactions based on taps, clicks or gestures. It is nearly impossible to use these tools for the text-heavy conversational flows of a chatbot.

The rest of this article will outline everything we tried, and the system we finally developed to test our ideas. I have also included a handy list of things for you to consider if you are designing a chatbot or a personal assistant.

Prototypes using suggested responses

The first few prototypes were made using InVision. Due to the limitations of the tool, users couldn’t type whatever they wanted. Instead, they would see possible responses they could pick from.

We quickly learned this method had its flaws. Firstly, even though we showed suggestions, most users would intuitively try typing before picking an option. Secondly, by offering a fixed set of options, we were biasing their responses.

We weren’t happy with this; it was far from the type-whatever-you-want experience we wanted to deliver.

Wizard of Oz tests

You must be thinking “Erm, have you tried pretending to be the bot?” That’s exactly what we did next. We set up a Wizard of Oz test, which meant we pretended to be the computer. This kind of testing helps validate complex ideas without actually building them.

Once we had a rough flowchart of how we wanted the conversation to go, it was time for one of us to pretend to be a bot. We were happy with these tests: users were engaged and had more relevant conversations about their upcoming trip.

A part of the flow chart we used for the test

When we started documenting what we learnt, we realised the flaw with this approach. Even though this experience was close to the one we wanted to deliver, the conversations themselves were too human. A bot that worked like this would be impossible to build within our timeframe.

Writing a script

Even though the Wizard of Oz test was really helpful in testing the overall conversational flow, it needed more structure. To do this, we opened up the word processor and wrote a script. The script was a document that listed all the messages we could foresee users sending and the bot’s response for them.

The script covering all possible user responses

We created a detailed script like this for the entire flow we wanted to test, making sure to cover edge cases and error states, and even adding some conversational fillers (like “Got it!” and “Hold on, this may take a minute”). Once we had all of this ready, we ironed out the kinks in our script by trying it out on some of our colleagues.
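The structure of such a script can be sketched as a simple mapping from anticipated user messages to canned bot replies, with a fallback for anything unscripted. This is a minimal illustration only; the intent names and replies are made up, not the team's actual script:

```python
# A minimal sketch of a conversation script: each anticipated user
# intent maps to a canned bot reply. All names and copy here are
# illustrative, not Booking.com's real script.
SCRIPT = {
    "greeting": "Hi! I can help you find a place to stay. Where are you headed?",
    "destination": "Great choice! When are you planning to travel?",
    "dates": "Got it! Hold on, this may take a minute.",
}

FALLBACK = "Sorry, I didn't catch that. Could you rephrase?"

def bot_reply(intent: str) -> str:
    # Edge cases and errors get a generic fallback instead of silence.
    return SCRIPT.get(intent, FALLBACK)
```

Writing the fallback down explicitly is useful even on paper: it forces you to decide what the bot says when the user goes off-script, which is exactly where Wizard of Oz tests tend to break down.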

When testing with users, we stuck to copy-pasting responses from our document. This process turned out to be ideal for us since we were able to test technically-sound prototypes with users.
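Scrolling a long document to find the right reply can be slow under pressure. A tiny lookup helper (purely hypothetical; we worked directly from the document) shows how a wizard could fetch canned replies by typing a short keyword:

```python
# Hypothetical helper for a Wizard of Oz session: the wizard types a
# short keyword and gets the matching scripted reply to paste into
# the chat. Keys and copy are illustrative only.
RESPONSES = {
    "greet": "Hi! Where would you like to go?",
    "dates": "When are you planning to travel?",
    "wait": "Hold on, this may take a minute.",
}

def lookup(keyword: str) -> str:
    # Case-insensitive prefix match, so typing "gr" is enough for "greet".
    key = keyword.strip().lower()
    matches = [reply for name, reply in RESPONSES.items() if name.startswith(key)]
    return matches[0] if matches else "(no scripted reply; improvise carefully)"
```

The point is speed: the less time the wizard spends hunting for copy, the more the response latency feels like a real bot.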

Things to remember when prototyping your bot

As promised, here’s the list of things I wish I had known before starting this — we’ve done them all so you don’t have to!

Have a script and stick to it

You will be tempted to help your user during the test, even if it means diverging from the script a little. Your job is not to help them complete the task, it’s to test if the script can. You can always improve it once the test is over.

Collaborate on your script

It goes without saying that you need to share your work with your team. This is especially true for the script: collaborating on it helps make sure what you are designing is technically sound and has the right copy and tone of voice.

Test variations of the copy

Your script could have multiple responses for the same query. This is the right time to test different words, tone of voice and even punctuation. If you are lucky (as I was) and have a copywriter on your team, involve them in the process.
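One lightweight way to run such copy tests (a sketch under my own assumptions, not the team's actual process) is to keep several phrasings per scripted moment and rotate which one each participant sees:

```python
# Hypothetical copy variants for the same scripted moment. Rotating
# them across participants lets you compare reactions to different
# wording, tone, and punctuation.
VARIANTS = {
    "searching": [
        "Got it! Searching for places to stay...",
        "Hold on, this may take a minute.",
        "On it! Give me a moment to look.",
    ],
}

def pick_copy(key: str, participant_id: int) -> str:
    # Deterministic per participant, so each person sees consistent copy
    # for the whole session.
    options = VARIANTS[key]
    return options[participant_id % len(options)]
```

Keeping the assignment deterministic per participant avoids mixing tones mid-conversation, which would muddy what you learn about each variant.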

Optimise your setup

When executing a Wizard of Oz test, you are pretending to be a machine, so you need to be prepared to be as fast as one. For all my tests, I had three windows open and visible at all times: one with the chat interface, the second with the script, and the third with the Booking.com search.

My setup (from left to right): the Booking.com search, the script and the chat window

Try it with your colleagues first

Once you have your script and setup ready, spend some time trying them out on your co-workers. This was really helpful for us to find missing scenarios in the script, and helped us practice responding (almost) as fast as the bot.

This process taught me that sometimes, research methods need to be adapted to the product and problem at hand. We had a clear idea of the flows we wanted to test and the questions we needed answered. This helped us iterate our methods until we had something that worked for us.

Thank you Cătălin Bridinel, Chris Cameron and the Chat2Book team.
