
Amazon Echo and ISY


madmartian

Recommended Posts

When you do step 5 and get that link, does it match the link you use to get to your REST server? That is, if you click on that link, can you get all the REST information? If not, then something is wrong with the data you are entering into the form under "Get ISY devices".

 

Additionally, if you want to enter it manually, the device's address is in the REST information when you hit the link. You can then use the "Add a new device manually" section if you want.

 

One other thing to note: because of cross-site scripting protection, you need to open Chrome with web security disabled by closing all of your Chrome windows and running "open -a Google\ Chrome --args --disable-web-security".

Link to comment

New question. Now that I have the Echo and Hue Emulator working great I have run into one issue. I have way more than 27 devices I would like to control. So I am thinking about adding another one or two Echos in other rooms of the house to control devices within reach of those rooms. So my question is, is there a way to run multiple emulators that would be specific to each of the Echos in the house?

Link to comment

New question. Now that I have the Echo and Hue Emulator working great I have run into one issue. I have way more than 27 devices I would like to control. So I am thinking about adding another one or two Echos in other rooms of the house to control devices within reach of those rooms. So my question is, is there a way to run multiple emulators that would be specific to each of the Echos in the house?

 

Maybe. You might have to run them on different computers. It might be possible to mess with the default ports so that one would be on 8080 and the other would be on a different port on the same computer, but I don't know if anyone has tried that. I don't know where the port numbers are stored. It is possible that all three ports need to be different (8080, 1900, 50000), but 8080 for sure. Not sure, though, if you can get each Echo to see a different emulator, even if they are on different computers connected to the same network. I intend to try this once I get a second Echo (next time they have a sale that lasts more than 15 seconds). An upstairs Echo and a downstairs Echo would be ideal for me.

 

Ideally it should be possible to create an emulator that can handle more modules. There are at least two different Hue emulators - perhaps more. It is possible someone has licked this problem already, just not with the emulator most of us are using. Someone may also be working on a WeMo or Wink emulator...

Link to comment

I will repost in message 1 when I have the kinks worked out...

 

WeMo Emulator Instructions
 
Install Python 2.7:
   Select the MSI installer for your version of Windows (there are versions for other OSes, but I only played with Windows)
   Run the installer
Install "Requests":
   Download the Requests library archive and copy the folder inside to a desired location
   Rename the folder to PythonRequests (optional)
   Open a command prompt in that folder and paste: python setup.py install
   If that doesn't work, add your Python27 folder to your path:
   at the command prompt, paste: PATH=%PATH%;C:\Python27 (or wherever your python folder is)
   Now try the install again: python setup.py install
Download fauxmo.py:
   Right-click and SaveAs... Should default to fauxmo.py
   Put the file in your Python27 folder
Edit the file with your modules:
 
BEFORE:


FAUXMOS = [
    ['office lights', rest_api_handler('http://192.168.5.4/ha-api?cmd=on&a=office','http://192.168.5.4/ha-api?cmd=off&a=office')],
    ['kitchen lights', rest_api_handler('http://192.168.5.4/ha-api?cmd=on&a=kitchen','http://192.168.5.4/ha-api?cmd=off&a=kitchen')],
]


AFTER:


FAUXMOS = [
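    # One entry per Echo device: [spoken name, rest_api_handler(on URL, off URL)].
    # Here user:pass is your ISY login, 192.168.X.X is the ISY's address, and
    # "XX X XX 1" is the node address from the REST output (see the note below).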
    ['office', rest_api_handler('http://user:pass@192.168.X.X/rest/nodes/XX X XX 1/cmd/DON','http://user:pass@192.168.X.X/rest/nodes/XX X XX 1/cmd/DOF')],
    ['down hall', rest_api_handler('http://user:pass@192.168.X.X/rest/nodes/XX X XX 1/cmd/DON','http://user:pass@192.168.X.X/rest/nodes/XX X XX 1/cmd/DOF')],
]

   Note: remove leading zeros from the middle byte of the Insteon address. If a module is 1A 06 E3, drop the zero from the 06 (i.e., use 1A 6 E3).
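 
If you have a lot of modules, a small helper placed in fauxmo.py can build these entries and apply the leading-zero rule for you. This is only a sketch: isy_entry, the ISY constant, and the example addresses are made up here; rest_api_handler is the handler already defined earlier in fauxmo.py.
 
ISY = 'http://user:pass@192.168.X.X'

def isy_entry(spoken_name, insteon_address):
    # Strip the leading zero from the middle byte, e.g. '1A 06 E3' -> '1A 6 E3'
    parts = insteon_address.split()
    parts[1] = parts[1].lstrip('0') or '0'
    node = '%s %s %s 1' % tuple(parts)
    base = '%s/rest/nodes/%s/cmd' % (ISY, node)
    return [spoken_name, rest_api_handler(base + '/DON', base + '/DOF')]

FAUXMOS = [
    isy_entry('office', '1A 06 E3'),
    isy_entry('down hall', '2B 0C 4F'),
]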

 

Run the python file:
   CMD
   cd \Python27 (or wherever you put the fauxmo.py file)
   python fauxmo.py -d
 
You should see the following:
 
C:\Python27>python fauxmo.py -d
Listening for UPnP broadcasts
got local address of 192.168.X.X
UPnP broadcast listener: new device registered
FauxMo device 'office' ready on 192.168.X.X:PORT
UPnP broadcast listener: new device registered
FauxMo device 'down hall' ready on 192.168.X.X:PORT
Entering main loop
 
Now ask Alexa to discover devices. This should be successful and should result in responses in the command window:
 
Responding to search for office
Responding to search for down hall
Responding to search for office
Responding to search for down hall
Responding to setup.xml for office
Responding to setup.xml for down hall
Responding to setup.xml for down hall
Responding to setup.xml for office
 
If discovery doesn't work and you see no response at all below "Entering main loop", then try rebooting and running the python script again. Once you have tested everything, add the python command to a batch file and add the batch file to your startup folder.
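 
For the startup step, a batch file along these lines should do it (a sketch; the file name and paths are assumptions, so adjust them to wherever Python and fauxmo.py actually live):
 
@echo off
rem start_fauxmo.bat - launch the WeMo emulator at login
cd /d C:\Python27
C:\Python27\python.exe fauxmo.py
 
Drop a shortcut to it (or the file itself) into your Startup folder.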
 
 
NOTE: fauxmo.py as configured above will re-assign port numbers every time you re-run it (such as after rebooting). However, this should not be a big issue, as the Echo asks the emulator for an update every few minutes. If it is an issue for you, there are methods for hard-coding the addresses at the above link.
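For example (a hedged sketch only: some copies of fauxmo.py accept an optional third element in each FAUXMOS entry that pins the device to a fixed port, falling back to a dynamic port when it is omitted; check the main loop of your copy before relying on this, and the port numbers below are arbitrary):
 
FAUXMOS = [
    ['office', rest_api_handler('http://user:pass@192.168.X.X/rest/nodes/XX X XX 1/cmd/DON', 'http://user:pass@192.168.X.X/rest/nodes/XX X XX 1/cmd/DOF'), 52000],
    ['down hall', rest_api_handler('http://user:pass@192.168.X.X/rest/nodes/XX X XX 1/cmd/DON', 'http://user:pass@192.168.X.X/rest/nodes/XX X XX 1/cmd/DOF'), 52001],
]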
Link to comment

So far the WeMo emulator is working every bit as well as the Hue emulator. So far I prefer the WeMo method because everything is in one file (though you do have to install Python). Moving the WeMo emulator from one computer to another is as easy as moving the fauxmo.py file (after installing Python and the Requests module).

Link to comment

Thanks for the info, madmartian! I've been using the Hue emulator for a few weeks and it's working well, but it has a limit of about 27-28 devices.

 

So today I tried out the WeMo emulator - I have good news and bad news :?

 

The Good: The Python script runs great on my RPi2 and it's much easier to maintain since, as you noted, everything is in one editable file. It's also WAY faster to load than the Hue Emulator JAR file, which takes about 40 seconds to start up after booting the RPi2.

 

The Bad: I can only get Alexa to discover 28 devices ... beyond that she reports "I couldn't find any devices". From viewing the cmd window, it appears that the emulator is too slow responding to each device search and Alexa times out at 20 seconds. This is probably the same issue that prevents the Hue emulator from handling more than 28 devices. (BTW, omit the '-d' parameter when starting fauxmo.py unless you want to view responses ... that slows discovery response time considerably.)

 

It's possible that running the WeMo emulator on a PC will be faster, but in my HA system I prefer to use the RPi since my PC server is already heavily loaded (web server, openhab, and a bunch of other stuff).

 

UPDATE: I've been able to get past the 28-device limit by running BOTH the Hue emulator (with 28 devices) and the WeMo emulator (with another 14 devices) concurrently on the RPi2. Alexa is now reporting 42 devices discovered!

Link to comment

I discovered that I have a 14 device limit with the WeMo emulator on Windows 10 64-bit. Discovery fails if I add a 15th. I ran it without the -d and this did not change. I asked the creator of the emulator if he has any ideas. Will let you know if that improves. I will also try it with the current version of Python (3.4).

Link to comment

I discovered that I have a 14 device limit with the WeMo emulator on Windows 10 64-bit. Discovery fails if I add a 15th. I ran it without the -d and this did not change. I asked the creator of the emulator if he has any ideas. Will let you know if that improves. I will also try it with the current version of Python (3.4).

 

I can only get 9 devices to discover.  So I now have 37 total.  28 Hue and 9 WeMo.  Weird.

Link to comment

I am still waiting on the programmer to respond. I tried Python 3.4 and it fails at multiple points. I made the appropriate syntax changes in the code, which allowed it to run, but then it crashed. So Python 2.7 is a must.

 

One thing - the guy did a good job of explaining in detail what his code is doing:

 

http://www.makermusings.com/2015/07/13/amazon-echo-and-home-automation/

 

Someone could take this info and rewrite it in any language. Since the only language I know is Delphi, I won't be doing that. :)

Link to comment

Back on the Alexa Skill (app) for the Echo front (if anybody still cares), I have my Alexa skill running and controlling the devices in my house. A list of device IDs and spoken names (the node list) and the controller settings are in a database, so it could be configured for anyone to use. The next step is to create a service that supports a simple website for creating a user profile, setting your controller settings, and building a node list from the ISY via REST.

 

What I was planning on doing was have the service build the list from any nodes that had a spoken name attribute set. However, it appears that spoken name is only available on devices, and I am finding that I am much more interested in controlling scenes than individual devices. For example, if I am standing in the bay window at night and want to see what's outside, I don't want to have to say "Alexa, tell Victor to turn on the left backyard floods," "Alexa, tell Victor to turn on the right backyard floods," "Alexa, tell Victor to turn on the basement porch lights," and "Alexa, tell Victor to turn on the deck lights." I would much rather say, "Alexa, tell Victor to turn on the backyard lights," which is a scene that I have set up.
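 
Roughly, the node-list step might look like the sketch below (not the actual service: the host, credentials, and the assumption that the spoken name comes back in each node's /rest/nodes/<address>/notes response should all be checked against the ISY REST documentation):
 
import requests
import xml.etree.ElementTree as ET

ISY = 'http://192.168.X.X'      # placeholder ISY address
AUTH = ('user', 'pass')         # placeholder credentials

def nodes_with_spoken_names():
    # Returns {spoken name: node address} for every node that has a spoken name set.
    spoken = {}
    nodes = ET.fromstring(requests.get(ISY + '/rest/nodes', auth=AUTH).content)
    for node in nodes.iter('node'):
        addr = node.findtext('address')
        notes = requests.get(ISY + '/rest/nodes/%s/notes' % addr, auth=AUTH)
        if notes.status_code != 200:
            continue
        name = ET.fromstring(notes.content).findtext('spoken')
        if name:
            spoken[name] = addr
    return spoken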

 

Michel, I don't mean to sound ungrateful (I know you added spoken name to devices already), but can we get this attribute added to scenes (groups) as well? Even more difficult, can we get spoken name worked into the UI for scenes? Maybe in 5.0 beta? That would be great. Otherwise, my service will be limited to devices only (or scenes for me since I can just manually alter my database).

Link to comment

How are you planning to make it available?  Open sourcing via github or otherwise.  It could be a good start for a broader Echo service with more flexible intents etc. as folks look to add stuff that requires requests other than on/off.

Link to comment

How are you planning to make it available?

Don't really know yet. A database, web service, and web app will cost money to host, not to mention the processing cost for the lambda service implementing the actual Alexa Skill. So it's either charge an app fee to cover costs (after some testing) or just open source it and not worry about it. Frankly it only makes sense to make it database driven if it is going to be a multi-user service. If each individual is doing it for themselves, you can keep the database out of it and save 150ms or so per call. Better yet, you can put your individual device/scene names in the sample utterances to make the speech recognition better. As it is now, I have chosen generic sample utterances and want to eventually add smarter matching of spoken name with names in the node list, something like a confidence match with a threshold.
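 
The matching step I have in mind is along these lines (a sketch only; the node list and threshold here are made up for illustration):
 
from difflib import SequenceMatcher

NODES = {'backyard lights': '12345', 'office': '1A 6 E3 1'}   # spoken name -> address

def best_match(heard, threshold=0.75):
    # Score the heard phrase against every known name and return the best
    # (address, confidence) pair, or None if nothing clears the threshold.
    scored = [(SequenceMatcher(None, heard.lower(), name.lower()).ratio(), name)
              for name in NODES]
    score, name = max(scored)
    return (NODES[name], score) if score >= threshold else None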

Link to comment

Hello kingwr,

 

We are already working on ISYPortal integration with Echo and will host a node.js service for it. If you'd like to help, we will probably have a quicker development time based on what you have already developed. Please PM me if you would like to help and we can also discuss spoken word.

 

With kind regards,

Michel 

Link to comment

I just found this topic today. First some background:

 

My home is mostly controlled by an ISY994. The ISY994 controls 99 Insteon devices and 2 Z-Wave thermostats. There are about 50 scenes and 25 programs. In addition to the ISY994 there is an Autelis pool control system, a One-Wire controller for reading the temperature of every room, a Global Cache GC-100 to handle all IR, and an older HomeSeer system which does the more involved programming and controls the security system until it is upgraded. The user interface to all the systems is a set of 5 wall-mounted Gen 1 iPads and 1 iPhone 4s. The house has a dedicated home theater which is quite large (133" diagonal screen) and is controlled by a PC that gets its commands from either a Pronto Pro (I like the hard buttons) or an iPad. I do not think voice will be possible due to the high ambient noise in a theater.

 

The house has a single gigabit LAN running off a MikroTik router with a maximum of 36 switched ports (two 16-port gigabit switches). The house was wired with Cat 5e and runs fine at gigabit speeds. The router provides firewall services and port forwarding. All the equipment is kept in a dedicated "Server" room with its own HVAC and UPS. The house has a backup generator that comes on line after a 20-second power outage. The "Server" room has 2 always-on (24/7) Windows 7 PCs, 1 Windows 10 PC for test and evaluation, and a 33 TB NAS based on Linux/unRAID and expandable to 44 TB. With regard to reliability, if I don't touch things they stay up. For example, the current uptime for the NAS is 235 days, 21+ hours. A major contributor to the high uptimes is the UPS and the backup generator. If you have never seen a MikroTik router, do some research. It is a fantastic device: it does all common router functions plus DHCP and has a built-in sniffer.

 

Last month I picked up an Amazon Echo and became a registered developer. I spent the last month reading. Now to the meat.

 

I made the decision to use the Alexa skills SDK and base the skill on an endpoint on my house LAN. I do not like cloud services and I try to minimize their use in my home. To do this I had to:

 

Set up port 443 to be forwarded. Install "stunnel" as a Windows service on one of the always-on PCs; stunnel is a freeware "SSL proxy" that wraps any sort of endpoint so the endpoint does not have to be SSL-aware, and AWS requires that a non-AWS cloud endpoint (one on an internet-reachable LAN) use SSL. Write and install an endpoint that I hand coded. Configure stunnel to forward all decrypted received data to the endpoint and return encrypted responses from the endpoint. Eventually I intend, after all is working well, to move the endpoint and the SSL proxy (stunnel) to a Raspberry Pi. The endpoint deals with multiple TCP/UDP sockets, one of which is where the stunnel proxy sends the decrypted text. The decrypted text is HTTP as per the AWS Alexa specifications.
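 
For anyone curious, the stunnel service definition for this kind of setup looks roughly like the following (the certificate path, internal port, and service name here are placeholders, not my actual configuration):
 
; stunnel.conf: terminate SSL on 443 and hand the decrypted HTTP
; to a plain endpoint listening locally (8443 is a placeholder port)
cert = C:\stunnel\alexa.pem

[alexa-endpoint]
accept = 443
connect = 127.0.0.1:8443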

 

I chose (totally independently) the skill name to be Sarah for the same reasons as Randyth. Ergo, utterances are something like "Alexa, tell Sarah to . . .". After I get the interactions between AWS and my endpoint working I will start to do the "backend" of the endpoint. This "backend" will handle all my HA controllers, which are all TCP/IP and UDP/IP based. I have a lot of experience dealing with the ISY994 because of what I did on the iPads. The backend will read the full configuration from the ISY when it starts, just as the iPads do. It will then know the names/addresses of all the devices, rooms, scenes, and programs on the ISY994. It will subscribe to the ISY994 for all updates as the iPads do and monitor the heartbeat from the ISY994. The Autelis pool controller and the HomeSeer system can be queried over TCP for their current status, and they send status change information asynchronously over UDP. The One-Wire controller can be queried at any time for the temperature of any room.

 

At the current time the Alexa skill and my endpoint are talking to each other. The Alexa skill is transmitting intents, the results of Alexa hearing a string for Sarah, and the endpoint is receiving them. All the endpoint does at this time is display and log the header and the body it received. Tomorrow I will add the code to write the responses back to Alexa just to make sure all is correct. Probably a simple response of "Okay Barry".  Then the fun begins.

 

If others are using an Alexa skill as Randyth and I seem to be doing, I would like to share intentInstance files and utterance files. I can be reached at barry@the-gordons.net

Link to comment

I have advanced a little and have my LAN-based endpoint completely dealing with Alexa, handling IntentRequests, LaunchRequests, and SessionEndedRequests. My endpoint is not a web server but rather a VB6 app (I use VB6 since I have a large library of code and it provides debugging I like). The only limitation on a web-based (LAN-based) endpoint is that it must use SSL and handle certificates.

 

I think this approach will work well to integrate and deal with the next group of IoT devices. With the intermediary, which I call my ISY Proxy Server, I can add code to handle most any situation and write it pretty fast. Once I get it debugged I will port it to Node.js and run it and the SSL wrapper on an RPi 2. There is already a version of the SSL wrapper for the RPi. I am using a self-signed certificate, but switching to a proper CA certificate should not be a problem, just a little additional cost. The SSL wrapper comes with everything needed to build certificates (OpenSSL) and either get them authorized or self-signed.

 

I am just starting to write the code that, during initialization of the ISY Proxy, will pull the configuration from the ISY and allow me to make a reasonable database of what exists. This is how I run the code on my iPads. On the iPads it is all written in JavaScript. I am interested in the "spoken name" feature that has been alluded to in this thread. Is it in the current ISY version? Where can I find out about that? This week I will add all the code to deal with the ISY: grabbing the configuration, setting up the tables I want, subscribing to changes in device states, and monitoring heartbeats so I can re-request the ISY state if it has gone offline. The iPads use the REST interface to handle all devices on the ISY (Insteon and Z-Wave).

 

I may need an "Alias" table to map various easily spoken names to what I am using as device names on the ISY. My ISY is organized so rooms are folders and devices are in rooms. 

 

As an interesting aside, my Echo is right now next to the monitor in my office where I do my development work. I played the little video from this thread with the little girl and the father asking Alexa to put on party mode. My Alexa immediately replied that she could not find a group named Party Mode in my profile.

 

Reading the documentation, it seemed quite complicated to get started, but it really wasn't. With the proxy server there is nothing about my world or home stored in the cloud; it is all in the proxy server, which is well secured. I have put a feature request in to AWS to add a property to the session object on an intent request. The property I want is a unique ID such as the Echo's MAC address. I feel it would be handy to know what room the request is coming from. I do that with my iPads, as there is one in each room.

 

When I get it all on the RPi, the RPi and the ISY will be sitting near each other on the wall in my server room, plugged into the same LAN switch. Hopefully I will rarely have to deal with them.

Link to comment

Archived

This topic is now archived and is closed to further replies.

