So I'm going to show you an example with a test bot we built with our friends at Bolton for Rio Mare tuna (the leading tuna brand in Italy and the Adriatic) 🦈🦈

The goal is to build a bot that lets you:

  1. send it a picture (a can of tuna or sardines, for example)
  2. it analyzes the photo via Computer Vision (Microsoft's) and tries to find words
  3. if it finds the word sardine, it sends you sardine recipes; if it finds the word tuna, it sends you tuna recipes

Let's gogogogo πŸš€πŸš€

My Action skill has three resources, like this 😀

I grab the image at the validation step (without actually asking for validation), and then I either find a word or find nothing.

My validation triggers the workflow named "Vision" as soon as I get an answer:

Let's take a closer look at this workflow πŸ”ŽπŸ”Ž

So I have a webhook that calls Microsoft's Computer Vision API with a POST request.

I simply feed it the URL of my image, which is {{tag.image.0.url}}.

Aaaand I don't forget to store the response back in a string 👇👇
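For the curious, here is a minimal sketch of what that webhook POST looks like in Python. The endpoint and key are placeholders for your own Azure Computer Vision resource (I'm assuming the OCR variant of the API here), and the image URL stands in for the `{{tag.image.0.url}}` value the workflow substitutes:

```python
import json
import urllib.request

# Placeholder values -- substitute your own Azure Computer Vision
# resource endpoint and subscription key.
ENDPOINT = "https://<your-region>.api.cognitive.microsoft.com/vision/v3.2/ocr"
API_KEY = "<your-subscription-key>"

def build_vision_request(image_url: str) -> urllib.request.Request:
    """Build the POST request that feeds the image URL to Computer Vision."""
    body = json.dumps({"url": image_url}).encode("utf-8")
    return urllib.request.Request(
        ENDPOINT,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Ocp-Apim-Subscription-Key": API_KEY,
        },
        method="POST",
    )

# In the workflow, {{tag.image.0.url}} is substituted here:
req = build_vision_request("https://example.com/tuna-can.jpg")
print(req.get_method())  # POST
# Sending it would be: urllib.request.urlopen(req) -- needs a real key.
```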

The Computer Vision API returns a complete JSON payload with the words it found.
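To give an idea of that JSON, here is a sketch of flattening it into a word list. The sample payload is illustrative, shaped like the OCR output (regions → lines → words) and trimmed to the fields we need:

```python
import json

# Illustrative response, shaped like the Computer Vision OCR output
# (regions -> lines -> words); not a real API capture.
sample_response = json.dumps({
    "language": "it",
    "regions": [
        {"lines": [
            {"words": [{"text": "Rio"}, {"text": "Mare"}]},
            {"words": [{"text": "Tonno"}, {"text": "all'olio"}]},
        ]}
    ],
})

def extract_words(raw: str) -> list[str]:
    """Flatten the nested regions/lines/words structure into a word list."""
    data = json.loads(raw)
    return [
        w["text"].lower()
        for region in data.get("regions", [])
        for line in region.get("lines", [])
        for w in line.get("words", [])
    ]

print(extract_words(sample_response))
# ['rio', 'mare', 'tonno', "all'olio"]
```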

Filter the answers according to the API response

All I have to do is go back to my answer resource (the Vision skill).

Rionino has 4 types of products (tuna, salmon, sardine, shrimp).

So I set up 4 messages and 4 carousels with the desired content.

Then I just have to add a filter: vision_response contains {{name of the fish}}.

Here's an example with Tonno! 🚀🚀
