
Let's build AI powered contextual chatbots with RASA: part-3

Published May 02, 2020

Hey there!!!!
Welcome to the third part of this "Let's build AI powered contextual chatbots with RASA" series. In this part we will build our understanding of how a chatbot works in Rasa 🎊.

Before we start....
If you haven't gone through the first and second blog posts of this series, then please read them once, because there I have covered the basic terminologies related to RASA and how to set up an environment to work with RASA.
In the first blog post I covered:

  • Rasa terminologies, and an analogy for how a chatbot works.

In the second blog post I covered:

  • Rasa Core, Rasa NLU and how to set up an environment to work with RASA.

A bit about me 😄

I am a professional Python developer working with W3sols (a web and mobile app development company).
These days I am experimenting with AI- and ML-based tech stacks.

So... time is short and there's a lot to do, so let's get started 👍🏻

  • We have already created an environment to work on our chatbot, and we had activated it as well. It was called restoEnv. (If it is no longer active, see the reactivation sketch right after this list.)
  • Now, let's create a Rasa project. (Make sure you have followed the steps in the second blog post.)
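
In case restoEnv got deactivated in the meantime, reactivating it looks roughly like this. This is only a sketch and assumes a plain venv/virtualenv; if you created the environment with conda in the second blog post, use conda activate restoEnv instead.

    $ source restoEnv/bin/activate        (macOS/Linux)
    $ restoEnv\Scripts\activate           (Windows)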

In terminal/cmd prompt

(restoEnv)$ rasa init

The above command will create and initialize a Rasa project, and your command prompt will look like below. Just press Enter and Y whenever prompted.

(Screenshot: terminal output of rasa init)

After a lot of processing work you will be prompted:

Do you want to speak to the trained assistant on the command line? 🤖  (Y/n)

Press Y/y to chat with your very first chatbot 😄

(Screenshot: a sample conversation with the default chatbot)

Above is a picture of a conversation with the very basic chatbot that Rasa provides for you out of the box.

Congratulations 👍🏻 you just created your very first chatbot.

OK!!! Let's exit our bot. Press Ctrl+C on the keyboard to exit your very first chatbot.
I will cover a lot of stuff from now on, so be patient and try to understand 😄.

Our Rasa project structure looks like this:
(Screenshot: the generated Rasa project structure)
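
For reference, the project that rasa init generates typically contains files like the ones below. The exact layout can differ a bit between Rasa 1.x versions, so treat this as an approximation rather than a definitive listing:

    __init__.py
    actions.py          (code for your custom actions)
    config.yml          (NLU pipeline and Core policies)
    credentials.yml     (credentials for messaging channels)
    data/nlu.md         (NLU training examples)
    data/stories.md     (conversation stories)
    domain.yml          (intents, actions and responses)
    endpoints.yml       (action server / tracker store endpoints)
    models/             (trained model archives)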

Perfect!!!
Let's tweak our chatbot's stories.md. It resides in the data directory within the chatbot project.
(Screenshot: contents of stories.md)

If you remember, stories actually define how the flow of a conversation will go.

  • "*" in front defines the intent of a sentence.
  • "-" in front is used to call an action.
  • "##" in front is used to define a label for each story. One can give it any name.
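
Put together, a minimal story in stories.md looks roughly like the sketch below. The story name is just an illustrative label, and the one in your generated project may read differently:

    ## greet story
    * greet
      - utter_greet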

Now, relate the above picture of stories.md with the picture of the conversation with the chatbot.

  • I entered "Hi"--------> that is "greet" intent
    • but wait how the bot knows "Hi" comes under "greet" intent?
    • Answer to this lies in nlu.md. As we know nlu processes the naltural language elements, so our input "Hi" was processed by the nlu and put under a certain intent. Check the picture for nlu.md below.

(Screenshot: contents of nlu.md)
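
For reference, the greet section of nlu.md looks roughly like this; the exact example phrases in your generated project may differ:

    ## intent:greet
    - hey
    - hello
    - hi
    - good morning
    - good evening
    - hey there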

We can see words like hey, hi, hello and so on under ## intent:greet. So, whenever we enter any of these words, the NLU identifies the intent of that input as greet. After this, the chatbot comes back to stories.md and checks what should be done if the intent is "greet". It sees that it has to call the action "utter_greet". And where does this action's definition reside? It resides in domain.yml. Check the picture of domain.yml below.

(Screenshot: contents of domain.yml)
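
The relevant part of domain.yml looks roughly like this; the exact response text in your generated project may differ:

    intents:
      - greet

    responses:
      utter_greet:
      - text: "Hey! How are you?"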

Under responses (the responses to be given by the chatbot) one can see that "utter_greet" exists. Now, whatever the definition of that action is (here it is "Hey! How are you?"), that output will appear on the screen. After that, the next step in the story flow is followed, and this goes on until the last step of the story is reached.
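
One practical note: if you tweak stories.md, nlu.md or domain.yml, the model has to be retrained before the bot picks up your changes. With the Rasa command line that is roughly:

    (restoEnv)$ rasa train
    (restoEnv)$ rasa shell

rasa train retrains the NLU and dialogue models, and rasa shell lets you chat with the retrained bot on the command line (Ctrl+C exits, just like before).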

😅 Confused? Worry not, read through the explanation above twice or thrice and you will understand it better every time, but if you are still confused, write in the comments and I will help 😄.

For now, I will wrap up, as I want all of you to understand the concepts well. For better understanding I will create a completely separate blog post for our restaurant search bot.
But just so you know, we will use the Zomato API for fetching restaurants. You can read its documentation before we start. The next blog post will be a direct dive into code, so I created this additional blog post in between to explain the workings well.
If you liked how I explained things, please like the blog post, share it and comment.
So, for now, HAPPY LEARNING 😄.
