Better report for Rasa chatbot model analysis
Rasa is a very powerful AI framework for building contextual assistants, but you have to know how to set up your NLU data and improve your intent recognition. You can generate a report that shows how your model is performing with the rasa test command. In the results folder, intent_errors.json shows where intents are getting mixed up, but it doesn't show intents that are nearly being confused, where a small change to your NLU examples can cause them to start failing. Every one of your examples influences, in one way or another, the neural net that classifies what the user typed. Imagine a huge spider web connecting all examples with intents: if you "move" one example, all of the intent classifications move, some almost imperceptibly.

I'll show you how to analyze intent classification, figure out where to improve your model, and take you step by step through writing a Python script that helps you see what's happening. So how do
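Before we dive in, here is a minimal sketch of reading that errors report. It assumes you have already run rasa test, that the report landed in the default results/ folder, and that each entry in intent_errors.json carries the user text, the labeled intent, and the predicted intent with its confidence; check your own file to confirm the exact field names.

```python
import json

# Load the errors report that `rasa test` writes to the results folder.
# NOTE: the path and field names below are assumptions based on a typical
# Rasa NLU test run; adjust them to match your own results/ output.
with open("results/intent_errors.json") as f:
    errors = json.load(f)

# Each entry holds the user message, the intent it was labeled with in the
# NLU data, and the intent the model actually predicted (with confidence).
for entry in errors:
    predicted = entry.get("intent_prediction", {})
    print(
        f"'{entry['text']}' was labeled '{entry['intent']}' "
        f"but predicted as '{predicted.get('name')}' "
        f"({predicted.get('confidence', 0):.2f})"
    )
```

This only surfaces the outright misclassifications, which is exactly the limitation described above: it says nothing about the examples that are one small NLU change away from being misclassified. That is the gap the script in this article is meant to fill.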