Rhasspy mobile app

Let me say upfront that I am not an experienced programmer, but I still wanted to help with the development of this beautiful project. I have started developing a prototype mobile app for Rhasspy; for now it interfaces with the REST API, but later I would like to expand it with MQTT support. I made it with Flutter to have multi-platform support, but for now I can only compile for Android given the lack of a physical macOS device. Feel free to comment and recommend the best way to implement a mobile app. Excuse my English.


What I would really love in regards to Android apps are two things:

  • Being able to turn an old smartphone into a satellite
  • Being able to take out my phone wherever I am and submit a voice command, ideally through a quickly accessible widget

I managed to run the app on an old Samsung phone running Android 5.1.1; all functions worked, including MQTT.
Thanks for the idea of the widget, I will try to implement it. In the future I would like to add the possibility to activate it through a wake word.

Cool. I have never created an Android app. Quite interested, but too busy to learn it.

I really love your app. It’s working really well in my setup, and it’s way better than screaming at snowboy or other wake word engines in my living room.
I’m excited to see how the wake word implementation will work.
I’m using Rhasspy with Home Assistant, and having the returned text spoken through the audio jack of my Raspberry is nice.
Is it somehow possible to have that text spoken by the app? It says Speech to text in the feature list. I tried some Rhasspy configurations but I can’t get it to work.
If it’s not possible, this feature would be quite nice, especially when using the whole thing in different rooms.
Keep up the great work :slight_smile:

If you mean the ability to speak text when there is no active session (so when you didn’t click the microphone), you can use hermes/dialogueManager/startSession by sending this payload:


{
  "init": {
    "text": "some text",    // the text to be spoken
    "type": "notification"
  },
  "siteId": "siteId",       // the siteId on which the text will be spoken
  "customData": null,
  "lang": null
}

The text will then be pronounced by the app as long as the device’s screen is on; otherwise Android restricts the app’s network access to save battery, but you can fix this by changing the battery-saving settings for the app.
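As a minimal sketch (the broker address "broker.local" and the siteId "phone" are placeholders for your own setup), publishing such a notification with paho-mqtt from Python could look like this:

# Minimal sketch: speak a notification on the app's siteId via MQTT.
# Requires paho-mqtt; adjust broker and siteId to your installation.
import json
import paho.mqtt.publish as publish

payload = {
    "init": {
        "text": "The washing machine is done",  # text to be spoken
        "type": "notification",                 # no follow-up question expected
    },
    "siteId": "phone",      # must match the siteId configured in the app
    "customData": None,
    "lang": None,
}

publish.single(
    "hermes/dialogueManager/startSession",
    payload=json.dumps(payload),
    hostname="broker.local",  # your MQTT broker
    port=1883,
)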

About the wake word: soon I want to release a new version that will include the ability to use the various wake word engines already available in Rhasspy, thanks to the app’s ability to share audio over UDP.


That was a fast response :slight_smile:
I can set a text in the Home Assistant intent, which then goes through text to speech and is played by the jack output of the Pi.
For example: What’s the temperature in the living room? -> Pi starts talking: It’s 22°C in the living room. But that speaker is located in the living room, so having this audible feedback from the mobile would be great.
So I click the microphone, speak the request, and get an audible response from the mobile.

This feature is implemented. I use the app with Home Assistant: the intent is forwarded through events, then an AppDaemon script sends an endSession or continueSession, after which the text is spoken by the app. If you can share additional information about your configuration, I can try to understand what the problem is.
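As a rough illustration (not the author’s actual script), an AppDaemon app using the MQTT plugin could answer a hypothetical GetTemperature intent by closing the session with some text, which the dialogue manager then speaks on the session’s siteId:

# Hypothetical AppDaemon app (MQTT plugin): reply to a "GetTemperature"
# intent by ending the session with text; the intent name and reply are made up.
import json

import mqttapi as mqtt


class AnswerIntent(mqtt.Mqtt):
    def initialize(self):
        # Listen for the NLU intent that Rhasspy publishes over MQTT.
        self.mqtt_subscribe("hermes/intent/GetTemperature")
        self.listen_event(
            self.on_intent, "MQTT_MESSAGE", topic="hermes/intent/GetTemperature"
        )

    def on_intent(self, event_name, data, kwargs):
        intent = json.loads(data["payload"])
        # endSession makes Rhasspy speak the text on the siteId where the
        # session started (the phone, if you clicked the microphone there).
        self.mqtt_publish(
            "hermes/dialogueManager/endSession",
            payload=json.dumps(
                {
                    "sessionId": intent["sessionId"],
                    "text": "It is 22 degrees in the living room",
                }
            ),
        )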

Hey, I really like your app, so far it’s working on my end :). But I wasn’t able to figure out how the wake word feature works… I have enabled the wake word feature, but when I use the wake word nothing happens. Do you have a guide or something that can help me out here?

For now the wake word consists only of sending UDP packets containing audio data, so the detection is not done locally, but this way you can use all the wake word detectors available in Rhasspy. Just enter in the app the IP (of the machine where Rhasspy runs) and the port to which you want to send the audio, and in Rhasspy set UDP Audio (Input) to ip:port:siteId. If you are on Docker, you have to open an additional UDP port before running the container: “-p port:port/udp”.
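If nothing seems to happen, a quick way to check whether the audio actually reaches the machine is to stop Rhasspy for a moment and run a tiny UDP listener on the configured port (12202 below is just an example value):

# Quick debugging sketch: print the size of incoming UDP packets so you can
# confirm the app's audio is arriving. Port 12202 is a placeholder.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 12202))  # same port as configured in the app / Rhasspy

print("Waiting for audio packets...")
while True:
    data, addr = sock.recvfrom(4096)
    print(f"Got {len(data)} bytes from {addr[0]}")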


Hey @razzo04, congratulations on the wonderful idea and application. Could you please share your Rhasspy settings (profile) or some simple guide? I’m trying to use your app, Rhasspy and Node-RED. The app connects to the Rhasspy API and to MQTT without issues. Speech to text is working, but I have a problem with intent handling and text to speech.

EDIT: When I set the siteId to “default” in your application, I can hear voice output from Rhasspy, but I still get the error message “no one managed the intent…”

I added an explanation of how to configure the app. Be patient with my bad English.
However, the app shows the error “no one managed the intent” if, within 4 seconds after receiving the intent, it doesn’t receive an endSession or continueSession message from Rhasspy.


Thanks for the explanation, I have to check where the problem with endSession or continueSession is. Maybe it’s because I use api/text-to-speech for the responses in Node-RED.

Yes, that is possible. I have never tried the app with Node-RED and api/text-to-speech, since I used Python and MQTT, but in the future I will run further tests to improve compatibility.

Did you try to implement a local wake word?

I haven’t tried yet, but in the future I’ll try.


@MihataBG
I noticed that in the latest versions of Rhasspy the endSession and continueSession messages have been modified, removing the siteId field, so sometimes the app showed the message “no one managed the intent” even if the intent had arrived correctly. Let me know if the problem is solved with the latest version of the app.

@razzo04 With the latest version of the app and version 2.5.6 of Rhasspy I’m still receiving the message, but I think the problem comes from using api/text-to-speech. Maybe I have to check how I can use Python and MQTT in my case. Which topic do you use for text to speech?

I don’t send text-to-speech manually: by sending endSession or continueSession from AppDaemon, Rhasspy takes care of sending the request to pronounce the text. If you want to send the request manually, you can do it through the topic “hermes/tts/say” with a payload like:

{
  "text": "",
  "siteId": ""  // must be the same as in the app settings
}
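For instance, a minimal sketch of publishing that message with paho-mqtt (“broker.local” and the siteId “phone” are placeholders for your broker and the siteId set in the app):

# Minimal sketch: ask the TTS service to speak on the app's siteId.
import json
import paho.mqtt.publish as publish

publish.single(
    "hermes/tts/say",
    payload=json.dumps({
        "text": "Hello from Rhasspy",
        "siteId": "phone",  # must match the siteId set in the app
    }),
    hostname="broker.local",  # your MQTT broker
)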

Hi, I really appreciate your work, but I’m having the same issue as MihataBG. I have to type in the siteId of the Rhasspy base in order to get TTS to work. My assumption is that if I use Node-RED (or the web UI), the message first goes through Rhasspy and because of that has the siteId (in my case) “base” and not the one the app would react to. Does anyone know a way to tell the TTS service which siteId it should use, or am I misunderstanding something?