3 Skill tests, Online Radio, Time and Dates, and Gas prices

So I followed up on the tips I got from this forum and made 3 skills according to the syntax from rhasspy-hermes-app,
but I am kind of confused…
You can see the skills in action here:

and read the code here


(I didn't check the repo; everything should work though… it is only a small part of all the skills I made this summer.) But you can see what I did: 3 almost identical scripts, in the old Snips way… an action script where the intents are received, and an extra script with the class of the skill.

While splitting up the three skills I realized… that's nice… but how do I start 3 scripts at the same time asynchronously without having to build a service for each script, and how can I stop all scripts at the same time? This is of course more of a Python question, but since we already had some discussion here about skills, it would be great to know what your thoughts have been or are, considering adding a lot of skills to Rhasspy at the same time…

In the repo I added the file skillStarter.py, which works, but only once… and it feels crappy…
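To illustrate the idea, here is a minimal sketch of such a starter that launches each skill script as a subprocess and stops them all together on demand. The skill script names are placeholders, not the actual files in the repo:

```python
# Minimal sketch of a skill starter: launch each skill script as a
# subprocess and stop them all together. Script names are placeholders.
import subprocess
import sys

def launch(commands):
    """Start every command as a subprocess and return the handles."""
    return [subprocess.Popen(cmd) for cmd in commands]

def stop_all(procs):
    """Terminate all processes, then wait until each has exited."""
    for proc in procs:
        proc.terminate()
    for proc in procs:
        proc.wait()

# Example usage (placeholder skill script names):
# procs = launch([[sys.executable, name] for name in
#                 ["radio_skill.py", "time_skill.py", "gas_skill.py"]])
# ... run until Ctrl+C, then:
# stop_all(procs)
```

This avoids one systemd service per skill, at the cost of the starter itself being a single point of failure.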

Nice work!

I have actually been thinking about creating a rhasspy-hermes-app command to generate a systemd script or Dockerfile for an app, and maybe even install it:

Would that be helpful?

Thanks @koan!
I am not sure if it is a good idea to give each skill its own service, since a lot could be shared which now isn't. And what if someone has 100 skills? That would mean 100 services or Docker containers… wow… On the other hand, using 1 big intent handling file isn't maintainable either. I don't know how Snips did it, and we can't check that anymore, but they had a visual environment where everyone could deploy their skills, add them to their project, and then download the trained environment including the skills. The Alice project also has something similar, but in another way: in Alice you can add and delete skills from a visual overview.
I btw also had another problem with the use of "app" in the action scripts. When using for example
app = HermesApp("AnyApp")
I could not pass app to another class and then publish a notification later… but that is presumably me not being a great Python programmer :slight_smile:
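For what it's worth, the usual Python pattern is to keep a reference to the one shared app object and publish through it later. A plain-Python sketch of that pattern, using a stand-in class instead of the real HermesApp (the Notifier class and the notify method are illustrative, not the rhasspy-hermes-app API):

```python
class FakeApp:
    """Stand-in for HermesApp("AnyApp"); just records published messages."""
    def __init__(self, name):
        self.name = name
        self.published = []

    def notify(self, text, site_id="default"):
        self.published.append((site_id, text))

class Notifier:
    """Keeps a reference to the shared app object and publishes through it."""
    def __init__(self, app):
        self._app = app  # do not create a second app instance here

    def send(self, text, site_id="default"):
        self._app.notify(text, site_id)

app = FakeApp("AnyApp")
notifier = Notifier(app)  # pass the one shared instance around
notifier.send("Kettle is ready", site_id="kitchen")
```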

As far as I remember, Snips simply used a virtualenv for each Python skill, and the training data got combined when the training was performed online. However, when they planned to release their Snips Air products, they did plan to use some means of containerisation for the individual skills as well, for security reasons, so that e.g. not all skills could access the microphone stream over MQTT (at least that's what Rand Hindi told me back then).

I never tried to run 100 Docker services, but is this actually a problem? Since Docker uses cgroups and namespaces to isolate processes from each other, there shouldn't be much overhead involved. I guess the biggest thing would be that Python is started 100 times, but that's the same when using a virtualenv.

Project Alice implements skills in a completely different way from Rhasspy. In Rhasspy, skills simply listen to the corresponding MQTT topic (or one of the other ways it is shared) and can therefore be implemented in any language; it is completely up to the users where and how they implement them. In Project Alice, on the other hand, skills have to be written in Python and are imported by the main application. When an intent is detected, Alice goes through all imported skills from A-Z and asks them whether they want to react to the intent. So e.g. when skill A reacts to the intent, skill B will not be able to react anymore.
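To make the difference concrete, that Alice-style dispatch boils down to a first-match loop over alphabetically sorted skills. This is only an illustrative sketch, not Project Alice's actual API:

```python
def dispatch(intent, skills):
    """Ask each skill (A-Z) whether it reacts; the first yes consumes the intent."""
    for name in sorted(skills):
        if skills[name](intent):
            return name  # this skill handled it; later skills never see it
    return None  # no skill reacted

# Two skills that both want the same intent:
skills = {
    "SkillB": lambda intent: intent == "PlayMusic",
    "SkillA": lambda intent: intent == "PlayMusic",
}
dispatch("PlayMusic", skills)  # SkillA wins; SkillB is never asked
```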

Thanks for replying @maxbachmann! I read your name everywhere in all three projects :slight_smile: you are famous!
Okay, if that is the way to go, I'll update the test skills repo and add the ability for others to make separate services of each. And perhaps split up each skill into a separate repo.

@koan if you are adding the automatic service creation to rhasspy-hermes-app, will this also include the ability to automatically add the slots and sentences needed for the created skill? That would be nice, I think.

I agree with @maxbachmann: I wouldn’t worry about the overhead of systemd services or Docker containers.

Of course the approach of Project Alice is equally valid; each approach just comes with its own advantages and disadvantages. I'm leaning more towards the "each app is a separate service/container" approach because of its flexibility, maintainability and security.

This is actually something I have been thinking about too:

But I'm not sure yet what the best approach is here. Feel free to add your thoughts in the GitHub issue. Besides, it's currently not yet possible with the Hermes protocol; but as soon as this functionality gets added to the Rhasspy core, I will certainly implement it in rhasspy-hermes-app.

I also agree with @koan and @maxbachmann. A container for each skill is the simplest way to handle skills created and maintained by a community or independent devs (security, deployment, maintenance, etc.).

One particular pain point with community skill installation will probably be training data collisions ("stop playback"? Which intent? Spotify? Radio? Mopidy?).

This makes me think that some kind of context management could be useful.

Maybe Rhasspy could provide a context manager that lets skills register/unregister values per siteId and lets intents provide some rules to match on these values.
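A minimal sketch of what such a context manager could look like, with a stack of context values per siteId. This is purely hypothetical; none of it is an existing Rhasspy API:

```python
from collections import defaultdict

class ContextManager:
    """Hypothetical per-siteId context registry; not an existing Rhasspy API."""
    def __init__(self):
        self._stacks = defaultdict(list)  # siteId -> stack of context values

    def register(self, site_id, value):
        """Push a context value on top of the siteId's stack."""
        self._stacks[site_id].append(value)

    def unregister(self, site_id, value):
        """Remove a previously registered context value."""
        self._stacks[site_id].remove(value)

    def matches(self, site_id, required):
        """True if the required context value is currently set for this siteId."""
        return required in self._stacks[site_id]

ctx = ContextManager()
ctx.register("kitchen", "spotify_playing=true")
ctx.matches("kitchen", "spotify_playing=true")  # True
```

The NLU could then consult such a registry before deciding between colliding intents.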

I fear the INI format/syntax currently used for the dataset will soon become too limited…

Just thinking… :thinking:

I guess I agree too, even though I think this way there will be a lot of boilerplate code… for example the intent methods I used: as soon as the intents change, one needs to change them in all skills…

@fastjack Since the intent methods run asynchronously, it shouldn't be a problem to use the same intents in different skills… right? The stop intent I used for the internet radio is the same intent I use for my MP3 music player, and that did work…

In your case, you only have one “stop” intent so there is no problem.

If different skills (made by different authors) each require a "stop" intent, each one needs to provide it, probably with a lot of utterance similarities but different names. Which one will be triggered by the NLU then?

Same for various intents like "yes", "no", "cancel", "pick a date", etc. There is already some kind of intent filtering for a continued dialogue session, but not for the initial intent.

Allowing a skill to declare that it has the "focus" for a specific siteId could help the NLU trigger the most adequate intent on that siteId…

Ex:

  • Start playing music from Spotify on siteId kitchen
  • Playback starts
  • Set context "spotify_playing=true" on top of the siteId kitchen context array
  • “Pause” => spotifyPause matches "spotify_playing=true" so it triggers; tuneinPause does not match "tunein_playing=true" so it is discarded
  • Start playing radio from TuneIn on siteId kitchen
  • Playback starts
  • Add context "tunein_playing=true" on top of the siteId kitchen context array
  • “Pause” => tuneinPause matches "tunein_playing=true" first so it triggers; spotifyPause also matches, but with lower priority as tuneinPause already matched, so it is ignored
  • “Resume” => tuneinResume matches "tunein_playing=true" so it triggers; spotifyResume does not match… You get the idea…
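The steps above can be sketched as a priority lookup over the siteId's context stack, where the most recently added context wins. The intent and context names are taken from the example; the function itself is purely illustrative:

```python
def pick_intent(context_stack, candidates):
    """candidates maps intent name -> required context value.
    Walk the stack from newest to oldest; the first match wins."""
    for value in reversed(context_stack):
        for intent, required in candidates.items():
            if required == value:
                return intent
    return None  # no intent matches the current context

candidates = {
    "spotifyPause": "spotify_playing=true",
    "tuneinPause": "tunein_playing=true",
}
kitchen = ["spotify_playing=true", "tunein_playing=true"]  # TuneIn started last
pick_intent(kitchen, candidates)  # -> "tuneinPause" (highest on the stack)
```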

The Alexa SDK uses something similar.

It makes skill development more complicated (because you have to maintain the context properly), but I don't see how to achieve community skill deployment without some kind of intent isolation system (especially for the NLU).

Cheers.
