Wolframalpha and Home Assistant, is it possible?

Hi there! First post here. I love Rhasspy! I have successfully set it up to control Home Assistant, but now I would like to incorporate Wolfram Alpha as a general-knowledge API so that I can ask it general questions. I have two questions about this:

  1. Has anyone successfully connected to the Wolfram Alpha API? Did you have to write your own JSON parser to get the answer?

  2. Is it difficult to send sentences without an obvious intent to a different API for processing?

Thank you for your assistance.


No idea about question one, but I can try to answer the second.

The short answer is yes, it is difficult.

Long answer:
The only way I think it is possible right now is open transcription. I tested it and it understands quite a bit, but at least on my Pi 4 it is slow. I also don’t think it is possible to put unknown values into slots, so you would have to do something like this:
Have one Rhasspy running without open transcription and an intent like “Ask Wolfram Alpha”. Once that intent is recognized, have your script start recording, send the second part of the question to a second Rhasspy installation with open transcription enabled, get it parsed (or at least transcribed), and send the text to Wolfram Alpha. Done this way, you can’t say it all in one sentence; you have to pause until your script starts recording, so I would suggest playing some kind of sound or a question so you know when to speak.
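The second hop could be sketched roughly like this: after your script has recorded a WAV clip, post it to the open-transcription Rhasspy over its HTTP API (`/api/speech-to-text` is the endpoint Rhasspy exposes for this; the host and port below are assumptions for illustration):

```python
import urllib.request

RHASSPY2_URL = "http://rhasspy2.local:12101"  # assumed address of the 2nd install

def stt_request(wav_bytes: bytes) -> urllib.request.Request:
    """Build the POST request for the second Rhasspy's speech-to-text endpoint."""
    return urllib.request.Request(
        f"{RHASSPY2_URL}/api/speech-to-text",
        data=wav_bytes,
        headers={"Content-Type": "audio/wav"},
    )

def transcribe(wav_bytes: bytes) -> str:
    """Send recorded WAV audio to the open-transcription Rhasspy, get text back."""
    with urllib.request.urlopen(stt_request(wav_bytes)) as resp:
        return resp.read().decode("utf-8").strip()
```

The returned text is what you would then forward to Wolfram Alpha.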

This does sound interesting. Is this the API you had in mind? https://products.wolframalpha.com/short-answers-api/documentation/
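One nice thing about that Short Answers API: it returns a plain-text one-liner, so no JSON parsing should be needed at all. A minimal sketch of calling it (the app ID is a placeholder you would get from the Wolfram developer portal):

```python
import urllib.parse
import urllib.request

WOLFRAM_APP_ID = "YOUR-APP-ID"  # placeholder; register for a real app ID

def build_query_url(question: str) -> str:
    """Build the Short Answers request URL for a spoken question."""
    params = urllib.parse.urlencode({"appid": WOLFRAM_APP_ID, "i": question})
    return f"http://api.wolframalpha.com/v1/result?{params}"

def short_answer(question: str) -> str:
    """Ask Wolfram Alpha and return its one-line plain-text answer."""
    with urllib.request.urlopen(build_query_url(question)) as resp:
        return resp.read().decode("utf-8")
```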

The problem will be correctly understanding things like numbers in open transcription mode, but the approach seems feasible.

Thank you for your reply. That does sound overly complicated. I wonder if Home Assistant could somehow forward the transcript instead.

One way this could be done:

  1. Listen for hermes/asr/textCaptured and store the session ID
  2. If you see a hermes/nlu/intentNotRecognized for that session, send the text to the different API
  3. Create your own hermes/nlu/intent message from the response

This could be done from a NodeRED flow or a Python script.
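The three steps above might look something like this as a Python script (the MQTT port, the `ExternalQuery` intent name, and the published topic are assumptions; the actual external-API lookup is left as a placeholder):

```python
import json

sessions = {}  # session ID -> captured ASR text

def make_intent_payload(session_id, text, intent_name, confidence=1.0):
    """Build an intent payload for text we answered ourselves (step 3)."""
    return {
        "sessionId": session_id,
        "input": text,
        "intent": {"intentName": intent_name, "confidenceScore": confidence},
        "slots": [],
    }

def handle(topic, payload, publish):
    """Core of steps 1-3; `publish(topic, json_str)` sends an MQTT message."""
    if topic == "hermes/asr/textCaptured":
        # Step 1: remember the transcription for this session
        sessions[payload["sessionId"]] = payload["text"]
    elif topic == "hermes/nlu/intentNotRecognized":
        # Step 2: NLU gave up on this session, so take over
        text = sessions.pop(payload.get("sessionId"), None)
        if text is not None:
            # Step 3: here you would first query the external API with `text`;
            # "ExternalQuery" is an assumed intent name, adjust to your setup.
            out = make_intent_payload(payload.get("sessionId"), text, "ExternalQuery")
            publish("hermes/intent/ExternalQuery", json.dumps(out))

def main():
    import paho.mqtt.client as mqtt  # third-party: pip install paho-mqtt

    client = mqtt.Client()
    client.on_message = lambda c, u, m: handle(m.topic, json.loads(m.payload), c.publish)
    client.connect("localhost", 12183)  # assumed MQTT port; adjust to yours
    client.subscribe("hermes/asr/textCaptured")
    client.subscribe("hermes/nlu/intentNotRecognized")
    client.loop_forever()

if __name__ == "__main__":
    main()
```

Keeping the message handling in `handle()` separate from the MQTT wiring makes it easy to test the logic without a broker.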

Thanks for the response @synesthesiam !

I started to look into this, but the problem is that my speech-to-text system (currently Pocketsphinx) seems to only transcribe words that are in my intent configuration. If I enable open transcription, I can no longer discern intents for Home Assistant. So if I ask something like “what color is the sky”, the recognized speech is way off, and no intent is assigned. Any ideas on this?


DeepSpeech or Kaldi will work better for open transcription.

There is a nice CLI that works like Wolfram Alpha’s interface, but uses Google for its answers:

I started porting it to Python, because the initial setup didn’t work for me:

Once I get a final version, I’ll try open transcription as well.