
Amazon Echo and ISY


madmartian


Ross, I just posted a new version on my web site. In your ini file, set the JSON Save parameter to true. There are some other new entries in the ini file, but they are set to the default values, so you can add them if you do not have them; either way it will not affect the test.

 

Send me the two files as before and if possible a screen snapshot if it fails.

 

Thanks 

Link to comment

I just put the new, hopefully final version of the AWS_Config program on my web site.  It is version 6.0.0

 

The problem was mine. I was not handling the chunking of the data as indicated by the header clause, so the chunking indices were being left in the data stream. If an index (a hex number as a string, e.g. 8000 for an index of 32768) fell in some part of the JSON where it disrupted the syntax, the parser would fail.

 

The ini file ships with the JSON Save parameter set to false. If you need to send me debugging information, make a run with that parameter set to true and send me the aws_config.log file and the aws_JSON.txt file. The aws_JSON.txt file allows me to reproduce the exact conditions for your specific Harmony hub.

 

If all is working well, you can set logging to None instead of All. That will make the process run much faster, as nothing of any size will be written to the log file, only key messages and error messages.

Link to comment

Finally got a chance to install your latest version.

 

Startup went without a hitch. 

 

All activities are now seen and are working (fairly) well with the Echo. (The activities run fine, but Alexa responds that it could not contact XXXX - go figure - not a problem with your app.) One of the best things is that the Harmony remote does respond as well so that the remote screens change to reflect the status of the activity.

 

The only issue I'm having now is that when I attempt to assign a device "button" to the emulator, I get the errors shown in the screenshots and captured in the logs, which are in the zip file I attached.

 

I have hopes of using Echo to make voice changes such as Vol Up/Down, Mute, Play, Pause, etc. But it doesn't look like it would be possible to send a string of numbers in order to change channels.

 

But I can easily live with just the activities being voice controlled if this final issue takes too much of your time. Thanks again.

AWS_JSON.zip

Link to comment

Ross, I am retired, so I generally have enough time to chase puzzles, which I really enjoy doing. Previously I had no way of duplicating the exact conditions encountered by an AWS_Config user. That is now resolved with the AWS_JSON.txt file.

 

I will look into your issue today or tomorrow.  I will give it a quick try now to at least see that I can duplicate the issue, and let you know if I need anything beyond what you have sent me.

 

I will get back to you shortly.

 

As you know, I do not have a Harmony hub. I run the ISY to control about 95% of the HA things in my home, and I control just about everything by voice using the UDI skill. I handle IR-controlled devices using a Proxy I wrote that runs on an RPi. The Proxy accepts network commands from the ISY as simple TCP strings, each consisting of a command plus parameters. For an IR-controlled device the command is "DOIR" (or just "IR") followed by a set of comma-separated parameters: the Global Cache unit to use, the port on that Global Cache, the name of the timing file, and the name of the specific command in the timing file. The Proxy then issues the correct timing sequence to the specified GC unit and handles all issues. There is a command named MACRO which is similar but allows for the specification of multiple IR commands, which are sent to the GC unit with correct timing and feedback processing. I use the MACRO command to do things like change channels. A separate program builds the correct timing strings for the Global Cache, taking as input several different IR command formats (e.g. .irp, Pronto Hex, . . .)
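As a rough sketch of what such a command string could look like on the wire (the proxy's exact syntax beyond "command plus comma-separated parameters" isn't documented here, and the host, port, GC unit name, timing file, and command name below are all hypothetical):

```python
import socket

def build_ir_command(gc_unit, gc_port, timing_file, ir_command):
    """Assemble a proxy command string: keyword plus comma-separated parameters."""
    return ",".join(["IR", gc_unit, str(gc_port), timing_file, ir_command])

def send_proxy_command(host, port, command):
    """Open a TCP connection to the proxy and send one command string."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(command.encode("ascii"))

# Hypothetical values: the GC unit name, port, timing file, and command
# name are placeholders, not the author's real configuration.
cmd = build_ir_command("GC1", 2, "samsung_tv", "PowerOn")
# send_proxy_command("192.168.1.50", 5000, cmd)  # proxy address is made up
```

The same pattern would cover a MACRO command by joining several command names into the parameter list.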

 

The Proxy can handle other device types that use RS-232 or complex TCP/UDP sequences; in fact, it can be made to do anything I want. In effect, it is my simplified Harmony Hub, but under my full control, so it never goes obsolete when things change.

 

On the subject of voice control, for network commands I just have simple ISY programs whose Then part issues an ISY network resource command, along with an Else part that is sometimes used. The Then part generally issues the correct string for "Turn On" and the Else part, when used, the correct string for "Turn Off". For a device with only a Power toggle command, I issue the same string for both Then (on) and Else (off).

 

In my home, about 85% of the time the response to a voice command occurs within 1 second of when I finish speaking. The other 15% of the time there is a delay of a few seconds, which I believe is due to network or cloud congestion.

Link to comment

The corrected version (6.0.3) has been placed on my web site. I neglected to quote a JSON substring for the Add function, which had changed in the code to accommodate a request from blueman2.

 

Blueman2 had BWS make a change that allows for multiple buttons in a single button URL; he uses this to send channel numbers, for example. I added it to the emulator so that a URL of the form x-y-z-. . . will send the emulator a JSON list structure, which is then interpreted as button x, button y, button z, and so on. AWS_Config does not care what x, y, and z are: they can be numbers or the names of other buttons, as long as the names do not contain the "-" I use as a separator. In effect this allows you to command the Harmony to send a macro.
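A minimal model of that separator convention (the exact JSON the emulator emits isn't shown in the thread, so the list payload below is illustrative):

```python
import json

def buttons_to_fragment(buttons):
    """Join button names/numbers with the "-" separator the emulator expects."""
    return "-".join(str(b) for b in buttons)

def fragment_to_json_list(fragment):
    """Split an "x-y-z" fragment into the JSON list the emulator is sent."""
    return json.dumps(fragment.split("-"))

frag = buttons_to_fragment([2, 0, 2])   # e.g. channel 202, one digit per press
payload = fragment_to_json_list(frag)   # frag is "2-0-2"
```

Button names work the same way as digits here, which is why names containing "-" would break the split.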

 

Let me know how you make out with version 6.0.3

Link to comment

Tremendous work!

 

The individual device buttons now are assigned without error.

 

I am not a coder and am amazed at the system you have put together. I am just able to learn the bare minimum needed to make my system work.

 

A couple of questions to allow me to properly make use of your great tool:

 

I noticed when I assigned my receiver's Volume Up command (obviously just pulled from the Harmony "database") that your app's Device Editor has 2 Press URL boxes: the top (I assume it is ON) listed Volume Up, and the bottom (I assume it is OFF) listed Volume Down. (Your doc file calls this the antonym entry.)

 

Would it be best practice to simply use the Friendly Name "Volume" and tell Alexa to Turn On Volume to increase and Turn OFF Volume to decrease?

 

As for blueman2's capability to essentially create a macro, would that mean I just need to unlock the button's URL and string together multiple button press codes? If so, what is the format I need to follow?

 

If that works as I think, I guess I could create channel macros for my favorite channels and simply use the channel's name to execute them, is that right?

 

At first I didn't think I would need to run multiple emulators, but now that your tool is working great and has more functionality, I may need to figure out how to set up multiple ones with BWS's latest version (unless the old limit of 25 on the Hue is no longer a factor).

 

I'm glad all my issues and questions are keeping your retirement active and aren't a thorn in your side!

 

Cheers mate!

Link to comment

To answer your questions:

 

I noticed when I assigned my receiver's Volume Up command (obviously just pulled from the Harmony "database") that your app's Device Editor has 2 Press URL boxes: the top (I assume it is ON) listed Volume Up, and the bottom (I assume it is OFF) listed Volume Down. (Your doc file calls this the antonym entry.)

Would it be best practice to simply use the Friendly Name "Volume" and tell Alexa to Turn On Volume to increase and Turn OFF Volume to decrease?

 

Look in the ini file under the Harmony Hub section. You will see an entry for antonyms. If you select a device button that contains an antonym set (either member), it populates it as the On URL and sets up the Off URL to be the opposite as stated in the antonym. Using just the friendly name with Alexa should work, but I am not sure it will. It does work with the UDI skill. In my system (UDI skill) I have a friendly name "kitchen lights" and I say "turn on the kitchen lights" or "turn off the kitchen lights".
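A rough model of the lookup as described above (the actual ini syntax for antonyms isn't shown in the thread, so the pair list here is hypothetical):

```python
# Hypothetical antonym pairs; the real entries live in the ini file
# under the Harmony Hub section.
ANTONYMS = [("VolumeUp", "VolumeDown"), ("ChannelUp", "ChannelDown")]

def on_off_for(button):
    """Given either member of an antonym pair, return the buttons used for
    the On URL and the Off URL; None if the button has no antonym."""
    for a, b in ANTONYMS:
        if button == a:
            return (a, b)   # selected member drives On, opposite drives Off
        if button == b:
            return (b, a)
    return None
```

Selecting either member of a pair works, which matches the "either member" behavior described above.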

 

 

As for blueman2's capability to essentially create a macro, would that mean I just need to unlock the button's URL and string together multiple button press codes? If so, what is the format I need to follow?

 

I believe that is correct. Just name the button codes and separate them with a dash. I suggest you discuss this with blueman2, as I know very little about it. I implemented it for him and he went on vacation, so he has not had a chance to test it. He should be back in about a week or so. Just PM him.

 

If that works as I think, I guess I could create channel macros for my favorite channels and simply use the channel's name to execute them, is that right?

That sounds right to me, but I am not sure.

 

At first I didn't think I would need to run multiple emulators, but now that your tool is working great and has more functionality, I may need to figure out how to set up multiple ones with BWS's latest version (unless the old limit of 25 on the Hue is no longer a factor).

 

I do not think the 25 limit is a factor but I do not know for sure as I am not using the emulator at this time.

 

Setting up multiple emulators is trivial, IIRC. You just need the startup in rc.local to look something like:

 

# The following is for multiple emulators on the same machine
echo "Starting the Echo Bridge Emulators" 
nohup java -jar -Dvera.address=192.168.1.229 -Dupnp.config.address=$_IP -Dserver.port=8081 -Dupnp.response.port=50001 -Dupnp.device.db=/home/pi/echobridge/data/device81.db -Ddev.mode=true /home/pi/echobridge/current.jar > /home/pi/echobridge/logs/log81.txt 2>&1 &
 
nohup java -jar -Dvera.address=192.168.1.229 -Dupnp.config.address=$_IP -Dserver.port=8082 -Dupnp.response.port=50002 -Dupnp.device.db=/home/pi/echobridge/data/device82.db  /home/pi/echobridge/current.jar > /home/pi/echobridge/logs/log82.txt 2>&1 &
 
nohup java -jar -Dvera.address=192.168.1.229 -Dupnp.config.address=$_IP -Dserver.port=8083 -Dupnp.response.port=50003 -Dupnp.device.db=/home/pi/echobridge/data/device83.db  /home/pi/echobridge/current.jar > /home/pi/echobridge/logs/log83.txt 2>&1 &
 
nohup java -jar -Dvera.address=192.168.1.229 -Dupnp.config.address=$_IP -Dserver.port=8084 -Dupnp.response.port=50004 -Dupnp.device.db=/home/pi/echobridge/data/device84.db -Dharmony.address=10.0.0.1 -Dharmony.user=<userid> -Dharmony.pwd=<password> /home/pi/echobridge/current.jar > /home/pi/echobridge/logs/log84.txt 2>&1 &
 
I believe only one entry needs to have the Harmony info. <userid> and <password> should be your Harmony user ID and password. I suggest you discuss this with blueman2.
Link to comment

Barry,

 

I am successfully using your new 6.0.3 version with the HA-bridge 2.0.7 version. It now reads all the multi-button press data and allows me to copy from one bridge instance to another. For those who are new to this, the multi-button press for the Harmony allows me to say "Alexa, turn on CNN" and it will change my DirecTV receiver to the CNN channel using the Harmony Hub. It also allows me to say "Alexa, turn on loudness" to increase the volume by 5 button presses, or to decrease it by saying turn it off.

 

Thanks so much!! I am a very happy camper again. I plan to stay on HA-bridge 2.0.7 from now on, perhaps until I finally move completely to the Portal. But for now, my system is tweaked just right, and I plan to leave it alone and just enjoy it!

 

Thanks again, Barry.  You are very generous with your time and skills.  I am even more in your debt!!

 

Blueman2

Link to comment

I assume that the reason you are still using the bridge is the Harmony capability. I have completely switched to the ISY connected home skill and am totally satisfied. Could you possibly use both? The bridge to deal with the Harmony, and the ISY connected home skill to deal with the ISY? I don't see where Amazon would care. The bridge looks like a Hue system, but the ISY connected home skill looks like itself, so I am not sure there would even be an issue. Amazon should handle multiple smart things.

 

 

 

Barry,

 

Since you have moved on to the ISY Portal, I wanted to better understand what benefits you see with the Portal service instead of using a Pi-based emulator. What can you do with the Portal that you cannot do with the emulator? The only thing I see is being able to ask Izzy questions and get responses, such as temperature, etc. But I find the whole "Alexa, ask Izzy to..." phrasing to be overly complex. My wife has gotten very used to simply saying "Alexa, turn on Kitchen Lights" or "Alexa, set temperature to 71 degrees". Adding another person (Izzy) to the conversation just seems awkward. Given that, I do not see any benefit to going with the Portal, especially since I still need to run the Pi for other uses, such as Harmony control to change TV channels and adjust TV volume with more than one button press.

 

Curious on your thoughts.  

Link to comment

Blueman2,

 

First off, I use the UDI skill. Since that is directly mapped, there is no intermediary named skill that needs to be asked or told to do anything. I do have my own skill named Sarah, but I do not use it at this time, since I can do everything I want/need using the UDI skill. The bottom line for me is that the UDI skill and the emulator look exactly the same from a vocalization standpoint, i.e. you say exactly the same things. I use programs referencing network commands to hit other systems such as my Proxy.

 

jratliff,

 

Clever. I assume you are using the UDI skill to handle commands ("Alexa, turn off . . ." as opposed to "Alexa, tell me to turn off . . .").

Link to comment

Barry,

Yes, for turn on/off stuff I use the normal device way.

 

For questions that fit a "tell me..." structure, I write them into the "me" skill.

 

I've been using skills a lot lately to make certain requests more natural. I have skills named for each of my Sonos speakers ("Alexa, tell the living room to play/stop/speak a message"), skills with family names used to send texts ("Alexa, tell Jason some message"), and I found out the other day that you can launch skills by saying their name alone. So I made a skill named "where is Jason" and I can just say "Alexa, where's Jason" and it will get my location and speak it.

 

 

Back to the emulator vs. portal: I've been using the emulator since the beginning with no issues and haven't touched it, but for the past week or so Alexa keeps saying my devices are not responding. Not sure why. I might just go ahead and switch to the portal if I can't quickly find a reason it stopped working. My original Echo also keeps saying it is having trouble connecting to the internet every so often for a few minutes, but if I yell out Alexa to the one in the other room, it's always fine. Not sure if that's related, but it's odd.

Link to comment


jratliff,

 

That is crazy cool!!! How are you creating the skills? Is there a primer I can read somewhere on how to create my own skills? Also, do you not run into issues with skill names conflicting with other Alexa skill names or protected words?

Link to comment


Ah, OK. So the only time you need to use Izzy is when you ask a question? For example, I think with the Portal you can ask for the thermostat temperature. I assume that requires an 'ask Izzy' type of command?

Link to comment

@blueman2

 

I understand you may be using macros assigned to buttons via the configuration app.

 

Can you detail how you have set up and used it?

 

Thanks.

 

 

Do you mean multiple buttons? For example, I like to tell my Echo "turn on CNN" and it will change the TV channel to 202 for DirecTV. To do this, I use the HA-bridge interface from a web browser to set multiple button presses for a 'device' named CNN. It is detailed on the BWSSystems home page and on the GitHub page for HA-bridge.

Link to comment

I would encourage people to play around with skills if you're willing to learn and are comfortable with computers in general. About 3 months ago I didn't know anything besides basic HTML; now I have learned quite a bit about JavaScript and Node.js.

 

I started with a skill someone else made, cut it down to the basics, pointed it at my own things, and then learned new stuff to add to it.

Amazon has several example skills too if you're looking for how to do specific things.

 

You can find other sites with how-tos for making your first skill, but here's what I used. There are many different ways to do it; I'll just pass along what I'm familiar with. You can write skills in Python, Java, or Node.js (JavaScript). The first skill I set up used Node.js, so I stuck with that.

 

This is a Sonos skill that, to actually work, requires running a Sonos Node.js HTTP server on a computer or Raspberry Pi in your network. You don't have to set up the Sonos server if you don't want to or don't have Sonos; the skill will still interact with you, but it would send back an error when trying to actually do something. Mainly this gives you step-by-step instructions on setting up a skill, so just follow the steps (skip the first few relating to the jishi node server if you're not going to actually use it for Sonos) and then go from there with new code or samples. If you do have Sonos, I really like having the server running: you can send it text and it will speak it out over the chosen speaker or all speakers.

https://github.com/rgraciano/echo-sonos

 

This is the Amazon site that talks a lot about the Alexa SDK; lots of good reading information:

https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/defining-the-voice-interface

 

I really like this site for learning JavaScript; I refer back to it often. I'd recommend reading through the whole JavaScript tutorial:

http://www.w3schools.com/js/default.asp

 

You'll want a code editor to make code writing easier to follow. Maybe others can give feedback on different ones; what I've been using is this free desktop editor:

http://atom.io

 

I'm probably missing something, but other than that I just do a lot of googling and reading through other people's JavaScript, then cut, paste, and modify to try to get done what I want to do.

My "tell me" skill just accesses the ISY REST commands for status and looks to see if the device is on or off. I can share my code when you get to that point.

Let me know if you have any questions or get stuck; I'll try to explain it better. I use it to ask Alexa to tell me if the doors are locked, if the garage door is closed, if the sprinkler is on, what vents are open, and the temperature from my Z-Wave thermostat. I send texts from Alexa two different ways: 1) forwarding to Tasker on my phone so the text comes from me, or 2) passing the message and number to IFTTT to send an email to the phone number, so my phone doesn't have to be on to send it.
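As a rough illustration of the on/off status check against the ISY REST interface mentioned above (this is not the author's actual skill code; it assumes an ISY-994-style /rest/nodes reply carrying an ST current-status property, and the XML sample is trimmed and illustrative):

```python
import xml.etree.ElementTree as ET

def status_from_node_xml(xml_text):
    """Pull the ST (current status) value out of a /rest/nodes/<addr> reply.

    By convention ST is 0 when the device is off and a positive on-level
    (up to 255) when it is on."""
    root = ET.fromstring(xml_text)
    prop = root.find(".//property[@id='ST']")
    if prop is None:
        return None
    value = prop.get("value", "")
    return int(value) if value.isdigit() else 0

# Trimmed, illustrative reply -- not a verbatim ISY response.
sample = """<nodeInfo>
  <node><name>Kitchen Lights</name></node>
  <properties><property id="ST" value="255" formatted="On"/></properties>
</nodeInfo>"""

is_on = status_from_node_xml(sample) > 0   # device reports an on-level
```

In a real skill the XML would come from an authenticated GET to the ISY, and the boolean would be turned into a spoken response.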

 

If anyone else has more info or tips maybe they can point you to it too.

 

 

As far as problems with naming conventions and such, I hadn't tried using any trouble words until you asked. I just tried renaming my Tasker skill to "tasker play". I said "Alexa, tasker play" and it opened that skill! So maybe you can use those words too. I have a TV skill I'm going to play around with, copying out parts of it for play and pause, to see if that works. The only issue I ran into so far was having a skill named "Jason": another skill I have prompts you for a name, so when I say "Jason", Alexa says "I can only help you with one skill at a time". It's kind of dumb that when you're in one skill it tries to open a second instead of just taking the speech as input.

Link to comment
  • 4 weeks later...

Hi All,

 

Haven't posted here in a long time. I just wanted to let everyone know that I finally found a solution to Alexa not finding the HA-bridge on my network.

 

I recently set up a Sonos node server and had the same issue: it worked if I plugged all devices involved into the same wireless router (which is how I've been set up since December), but not into my 48-port switch.

 

This link talks about Cisco switches, but I used the same info on my 3Com switch:

http://www.cisco.com/c/en/us/support/docs/switches/catalyst-6500-series-switches/68131-cat-multicast-prob.html

 

I logged into my switch, enabled multicast, and disabled IGMP snooping. I then rebooted the switch.
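For reference, on a Cisco Catalyst running IOS the IGMP-snooping part of that change would look roughly like the following; the exact commands vary by vendor and model, and the 3Com menu-driven equivalent differs:

```
Switch# configure terminal
Switch(config)# no ip igmp snooping
Switch(config)# end
Switch# reload
```

Disabling IGMP snooping makes the switch flood multicast (including the UPnP/SSDP discovery traffic the Echo relies on) to all ports instead of only to ports that have reported group membership.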

 

FINALLY!! I'm discovering with zero issues

Link to comment
  • 1 month later...

Hi, I finally got the AWS configurator working and added the devices (dimmer switches and an I/O Linc) no problem, but the devices did not get discovered by the Echo. I then broke the AWS configurator by patching it to the Harmony version, and now it does not work, but the Echo was able to discover switches only. I'm running a Pi with Jessie Lite, Java 8, the Hue bridge, and the AWS configurator; that's it. Any idea what I am doing wrong? Does the Hue bridge not support dimmers or the I/O Linc, only switches?


Link to comment


 

OK, I manually added the devices and URLs via the built-in Hue bridge configuration page, and all the devices are working. The AWS_Configurator worked the first time I configured and ran it; now, no matter what PC on my network I install it on, I get a run-time '424' error, "object required". The Amazon Hue bridge appears to work fine, but if I get that error on every PC, I would guess that the problem is on the bridge, though I have no idea what's wrong. The logs do not appear to show any errors.

Link to comment

OK, I had another question. It probably doesn't matter that much, since this topic seems... desolate, and no one answered my last questions. Anyway, the Hue bridge is running OK and commands are accepted by the Echo, yada, yada, yada, except: if I have not issued a voice command for a while, the Echo just shows a spinning ring for about 30 seconds before the voice request gets processed. This only happens for requests that use the Hue bridge installed on my RPi. Is there some kind of sleep mode in Raspbian Jessie Lite that could be causing this?

Link to comment

I'd answer if I knew anything about that. I have set up the regular Raspbian on 3 different Pis for people, and it is pretty much instant. The only time I see a long delay is for a skill I made that imports data from a Google spreadsheet: the first time you call it in a while, it loads forever; the second time, it's almost instant.

Link to comment

Archived

This topic is now archived and is closed to further replies.

