Everything posted by barrygordon

  1. The new version, 0.4.0, is available now. I haven't tried it yet, but I worked with the author to figure out what was wrong. I will be trying it shortly and posting all the info you will need; I just need to take a short break. Basically the issue was authentication: the ISY994 needs basic authentication, and he was not supplying it. Later on I will be installing 3 copies on 3 different RPis, which should give me more than enough controls. The new version is out there now: http://www.bwssystems.com/files/ha-bridge-0.4.0.jar. Read the https://github.com/bwssytems/ha-bridge page for the new options.
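Since the fix was basic authentication, here is a minimal sketch of what an authenticated ISY994 REST call looks like. This is a Python illustration (ha-bridge itself is Java), and the host, path, and credentials are placeholders, not real values:

```python
import base64
import urllib.request

def basic_auth_header(user, password):
    """Build the HTTP Basic Authorization header value."""
    token = base64.b64encode(("%s:%s" % (user, password)).encode()).decode()
    return "Basic " + token

def isy_rest_get(host, path, user, password, timeout=5):
    """GET from the ISY994 REST interface with basic auth.

    A missing Authorization header was the ha-bridge failure mode
    described above: the ISY994 rejects unauthenticated requests.
    """
    url = "http://%s/rest/%s" % (host, path.lstrip("/"))
    req = urllib.request.Request(url)
    req.add_header("Authorization", basic_auth_header(user, password))
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return resp.status, resp.read()

# Example (hypothetical address and credentials):
# status, body = isy_rest_get("192.168.1.50", "nodes", "admin", "password")
```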
  2. With regard to the new emulator, I have gotten further and will probably have it all cleaned up tomorrow. The issue was that I do a uPnP search and check the notify response to see if it came from a Hue bridge emulator. The new version of the emulator did not identify itself as such, so I disregarded it. The same may be true of the Amazon Echo when it searches for hubs and bridges at power-up.
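The uPnP search described here can be sketched roughly as follows. This is a generic SSDP M-SEARCH in Python, not the emulator's actual Java code; filtering the replies for a Hue-style identifier is the step that changed between emulator versions:

```python
import socket

SSDP_ADDR = ("239.255.255.250", 1900)
MSEARCH = (
    "M-SEARCH * HTTP/1.1\r\n"
    "HOST: 239.255.255.250:1900\r\n"
    'MAN: "ssdp:discover"\r\n'
    "MX: 3\r\n"
    "ST: ssdp:all\r\n"
    "\r\n"
)

def ssdp_search(timeout=3.0):
    """Send one SSDP M-SEARCH and collect the raw unicast replies.

    Returns a list of (ip, response_text) pairs; each reply's headers
    can then be checked for a Hue-bridge-style identifier.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    replies = []
    try:
        sock.sendto(MSEARCH.encode("ascii"), SSDP_ADDR)
        while True:
            data, addr = sock.recvfrom(65507)
            replies.append((addr[0], data.decode(errors="replace")))
    except socket.timeout:
        pass  # no more replies within the window
    except OSError:
        pass  # e.g. no route to the multicast group
    finally:
        sock.close()
    return replies
```

A discoverer would then keep only the replies whose headers look like a Hue bridge (real bridges include an identifier such as a hue-bridgeid header); an emulator that stopped advertising such an identifier would be dropped by that filter.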
  3. Interesting. My situation is a little different, but thanks for the advice. I always do a "forget all" prior to a device discovery. I am running with several RPis, and my configurator can be told which bridge to put a command on. Two of the bridges (1 and 2) are running the older emulator; the third runs the new emulator. If I put a device on RPi 1 or RPi 2 it runs fine. If I put it on RPi 3 it always fails, with Alexa stating "The network device XXXXX is not responding, please check its network connection and power supply". In my experience this reply happens when Alexa does not get a response from the device for the command that was issued, except Alexa should not be talking to the emulator but rather to the end device that is set up in the on and off URLs. Or am I missing something? The other interesting point is that I cannot discover the new bridge. I am using version 0.3.3 and will move to 0.3.5 later today; that may be the crux of the issue, if Alexa cannot discover the new bridge. I use the rc.local file to set up the bridge. In my rc.local file there is a line that pulls the current IP of the bridge and sets it as a bash variable named _IP; it then does a -Dupnp.config.address=$_IP, which has the advantage of allowing the IP address to be reset on a boot. I have tried it with the fixed address, as in -Dupnp.config.address=192.168.1.83, but I have not tried version 0.3.5. Brad indicated he did some cleanup of the upnp logic in 0.3.5. What bothers me is that Alexa should not care about the hub once it is configured with the on and off URLs. Or does it always go to the hub when it sends a command, with the hub then sending the URL on? If that is the case and Alexa can't find the hub, then that explains it all, and hopefully 0.3.5 solves my problem.
  4. Thanks, that appears to work. Now to my next issue. I am not having any problems with the current hub other than its 28-device limit. It seems a new version of the hub has been developed that removes the 28-device restriction. I have that loaded on an RPi and am playing with it. Everything works fine until I tell the Echo to do something such as "Alexa, turn on the desk light". If the data is on the old hub version, Alexa has no problem. If the data is on the new version of the hub, Alexa states that the device is not responding and to check connections . . . Has anybody else tried the new hub version with success? The new hub can be found at www.bwssytems.com. I am going to try and contact them next.
  5. I have run into a minor snag, and perhaps someone with more Linux smarts than I have can help me. I have the Hue emulator running on an RPi; in fact I have 2 of them running, but have only loaded data into one of them. To ensure the emulator is running on a reboot of the RPi, I added the following to the rc.local file that is run every time the system starts up (as instructed in the excellent installation doc referenced in post 1 of this thread): nohup java -jar <full path of the jar> --upnp.config.address=192.168.1.81 > /dev/null 2>&1 & which works. If I do a ps -ef | grep java I get the correct line displayed, which shows the correct IP address. Now I would like to make the startup entry above IP-address independent, so that if, for example, the DHCP address changes, it will adjust at the next reboot. I changed --upnp.config.address=192.168.1.81 (that is the address of the RPi) to --upnp.config.address="$_IP", where $_IP is shown by a prior line in the script (printf "My IP address is %s\n" "$_IP") to be the machine's current IP address. If I then do a ps -ef | grep java I get the same line as before. HOWEVER, the Echo no longer seems to find the emulator and reports zero devices discovered. My proxy app finds all of the emulators and retrieves their data no matter which version of the startup line I use. It is not a serious issue, as I can just use a DHCP reservation and the hard-coded address, but it bothers me that I don't know why it doesn't work. Any thoughts or explanations would be appreciated. My goal is to make an SD image that I can share and that requires no work, or at least a minimal amount of work, to finalize on an RPi. The image appears to be working and includes the tightvncserver, so all the work can be done from a PC and the RPi run headless. Any assistance/advice greatly appreciated.
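The "adjust to the current address at boot" idea can also be sketched in Python, as a hypothetical launcher rather than the actual rc.local line (the jar path is a placeholder). The UDP connect() trick sends no packets; it only asks the kernel which source address an outbound packet would use:

```python
import socket
import subprocess

def current_ip(probe="8.8.8.8"):
    """Return the IP this machine would use for outbound traffic.

    connect() on a UDP socket transmits nothing; it just binds the
    socket to the outgoing interface so getsockname() reveals it.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect((probe, 80))
        return s.getsockname()[0]
    except OSError:
        return "127.0.0.1"  # no route (e.g. offline); fall back to loopback
    finally:
        s.close()

def launch_bridge(jar_path):
    """Start the emulator jar with the freshly discovered address."""
    ip = current_ip()
    return subprocess.Popen(
        ["java", "-jar", jar_path, "--upnp.config.address=%s" % ip],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)

# rc.local equivalent (hypothetical jar path):
# launch_bridge("/home/pi/ha-bridge-0.4.0.jar")
```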
  6. I guess it is status update time for my ISY Proxy server. The proxy server now includes a complete configurator for the Hue bridge. It is similar in design to "Mapper" but is quite different visually. The proxy will automatically do a uPnP search to get the IP address of the ISY and the addresses of all the Hue emulators that are on your network. You can easily select on which emulator you wish to have a device placed. When you pick a device to add, it checks all the emulators it has found and warns you if the device already exists on one of them. It also includes a listen module for an Alexa skill and a listen module for the emulator(s). The listen module for the Alexa skill assumes you have written and registered an Alexa skill against an HTTPS endpoint and are running an HTTPS proxy such as stunnel to handle all the SSL encryption and decryption. stunnel handles all the incoming HTTPS traffic, including the submission of a certificate, and forwards that traffic to the ISY proxy as HTTP commands. What the skill does all depends on what intents and utterances you have set up in the skill you wrote. The listen module for the emulators allows one to set up a URL on a Hue bridge that calls back to the listen module, where code can be added to handle various cases. Eventually I plan to make it such that plugins can be added to the executable, written in VBA or something similar; I am not sure where I am going with that. As a minimum I will be able to handle things like "Alexa, turn on/off the SPA" and "Alexa, turn on the Theater" (the theater needs a WOL as I keep it asleep, but once awake it will respond to HTTP REST commands). I am not sure how Alexa will operate when the Theater is playing, as the ambient noise will be quite high. I have ordered a Samsung SmartThings hub to play around with when it is released next month. Lastly, I now have a microSD image for a Raspberry Pi 2. The image contains the latest version of NOOBS with Raspbian as the installed OS.
It has a TightVNC server so it can be run headless from a PC or Mac running the TightVNC viewer, which is exactly how I deal with it. It contains the Hue emulator set up so that a uPnP search will find it. In theory you just image the file onto a microSD card using Win32DiskImager or something similar, insert the card into the Pi, and let it boot up. If anyone is interested in either of these two items, drop me an email. If there is enough interest I will put it up for download on my web site, but I need to write a little documentation. The proxy will work perfectly fine without the Alexa skill and reverts to just being a configurator for the Hue emulators. The proxy is written in VB6 (I know; I will eventually port it to VB.net and Node.js) and I run it on a Win 7 machine. I have a Win 10 system and will try it on there eventually.
  7. In the ISY Admin Console, go to the summary page for programs. The column labeled ID is the address of the program you are looking for. Doesn't Mapper set the URL correctly with the program's ID?
  8. Blueman is correct on the location of the emulator information. My ISY Proxy server status has gotten a little further. I incorporated the same capabilities as the Mapper program, with some improvements. The proxy server now searches for the Hue emulators; it will find as many as you have and is able to do the same thing Mapper does for each of them. You select the emulator you wish to deal with from a drop-down list the system builds. I will put in the search for the ISY next week, as it should be simple. As of now, with the proxy running, I can say things like "Alexa, turn off the kitchen lights". The command gets executed very quickly and Alexa replies "Okay". With "Alexa, tell Sarah to . . ." different things happen, based on the utterances I have registered. I do not yet have the code in, but intents, launches, and session endings are all handled by the proxy, and show in the proxy log and as cards in the Echo app. I just need to write the code to parse the utterances from Amazon and deal with the ISY. It is a little slower due to the extra step of getting the message from Alexa, as opposed to Alexa dealing through the emulator. Tomorrow I will set up another emulator on a second RPi; that should give me the capability to control between 50 and 60 devices (lights). I can currently control my ceiling fans also, with "Alexa, turn on the office fan medium", as that is a scene in my ISY994. What I am interested in is how to tell Alexa to set a given light at a specified brightness, or to dim or brighten a light in, let's say, 20% steps. I know I can do it with scenes, but that quickly eats up the device count. If anyone knows . . .
  9. Use two RPis to get past 28 devices. Clever! I need to do that. Any idea how many "Hues" the Echo will accept? Does anyone know of somewhere that states all the verbal utterances the Echo will accept for the Hue? Obviously "Turn off" and "Turn on"; it seems to try and do something for "Shut". Any clues on what to say for dim/brightness control, or is that not there yet?
  10. I have hit the 29-device limit with the Hue emulator. I operate fine with 28 devices, but as soon as I put in the 29th and ask the Echo to discover devices, it discovers no devices. So it is back to my ISY proxy. I have been in discussion with Amazon Echo customer service, who are escalating this; however, I do not think it will be resolved quickly.
  11. But the thing you asked for was the Notes entry, not the Node, and it does not exist; ergo the 404 is what should be returned, IMHO.
  12. Yes, that is what happens. In my code I first check the returned header to ensure it is a 200 status. The ISY returns the 404 since it could not find the file it was looking for, the Note object. If the status is not 200, I make up a dummy note that has isLoaded set to false. Here is a snippet of code that should illustrate the point:

     Function getNodeNote(node As String) As ISYNoteType
         ' Return a note object based upon the note contained at the node
         Dim temp As String
         ISY_NodeNote = ""   ' indicate nothing received from the REST request
         SendRestMessage "nodes/" + encodeURI(node) + "/notes", 4
         While ISY_NodeNote = ""   ' the TCP receiver will set ISY_NodeNote to the note that exists
             DoEvents
         Wend
         temp = Mid(ISY_NodeNote, InStr(ISY_NodeNote, vbCrLf + vbCrLf) + 4)   ' temp = the returned body
         If InStr(ISY_NodeNote, "HTTP/1.1 200 OK") Then
             getNodeNote.isLoaded = True
             getNodeNote.Location = ExtractFromXML(temp, "location")
             getNodeNote.Spoken = ExtractFromXML(temp, "spoken")
             getNodeNote.Description = ExtractFromXML(temp, "description")
         Else   ' there is a problem
             getNodeNote.isLoaded = False
         End If
     End Function

     Hope that helps.
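For comparison, here is the same 200-vs-404 logic as a hedged Python sketch. The base URL, the absence of auth handling, and the child tag names of the notes XML are assumptions for illustration, not taken from the ISY documentation:

```python
import urllib.error
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

def parse_note(xml_text):
    """Pull location/spoken/description out of a notes response body.

    The child tag names here are assumptions about the notes XML.
    """
    root = ET.fromstring(xml_text)
    def text(tag):
        el = root.find(tag)
        return el.text if el is not None and el.text else ""
    return {"is_loaded": True,
            "location": text("location"),
            "spoken": text("spoken"),
            "description": text("description")}

def get_node_note(base_url, node):
    """Fetch /rest/nodes/<node>/notes, mapping a 404 to is_loaded=False."""
    url = "%s/rest/nodes/%s/notes" % (base_url.rstrip("/"),
                                      urllib.parse.quote(node))
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return parse_note(resp.read())
    except urllib.error.HTTPError:
        # The ISY answers 404 when no note object exists for the node
        return {"is_loaded": False}
```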
  13. And now for my status. I moved the Hue emulator to a Raspberry Pi model 2. The Pi sits on the wall in my "server/electronics room"; it has no keyboard and no monitor, ergo it is headless. I run a TightVNC server on the Pi and the TightVNC viewer on my development desktop system. TightVNC, like all of the VNC variants, gives me full control over the computer running the server; it is as if the Pi were running with a monitor and keyboard on a long cable. I am going to re-enter my devices today. I thought I had started Samba on the Pi so I could just copy stuff to and from it, but it is easy enough to re-enter. I am not a Linux expert, but I can generally figure things out. I followed the instructions at the site posted in this thread, and if I could accurately type I would have been done in under 30 minutes. Now if I talk directly to Alexa, as in "Alexa, turn off the office lights", the intent goes to the Echo bridge hub emulator on the Pi. If I tell Alexa to tell Sarah to do something, as in "Alexa, tell Sarah to start the SPA", the intent goes to my endpoint, which is under development. This endpoint, which I call the "ISY Proxy server", is an app written in VB6 running on a Windows machine; it listens on port 9099 (my choice) for traffic forwarded from the stunnel app. The stunnel app handles all SSL issues, since AWS requires all local (LAN) endpoints to use HTTPS and not HTTP. The ISY proxy gets intents from Alexa based on what I designed as my intents and utterances. It will be more flexible than the Hue emulator. At the current time the interface between AWS and the ISY proxy is working: I get IntentRequests, LaunchRequests, and SessionEndedRequests. I can respond to these with replies to be spoken by Alexa as I want; I can also issue "Cards" for the Echo app. I have finished all the code to read the configuration (nodes, programs, and scenes) into the proxy. For nodes, I only add the node to the proxy if it has a "Note" associated with it.
This will allow me (hopefully) to keep all Echo-related data on the ISY994. I plan to make the description portion of the node's Note entry carry additional information, such as reply phrases for various conditions. The spoken-word portion of the note will probably be the speech fragment, such as "Office Lights"; if the spoken name is blank, it will use the name stored in the node. The proxy subscribes to the ISY and gets all status feedback. I handle and monitor the heartbeat from the ISY and will resubscribe if the heartbeat is missing for 1 minute. With the status feedback coming in all the time there is no polling required, and I will be able to make actions and Alexa replies state dependent. So much code, so little time.
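The 1-minute heartbeat rule lends itself to a small watchdog. A hedged Python sketch follows (the actual proxy is VB6, and the resubscribe callback here is a hypothetical stand-in for the real ISY subscription call):

```python
import threading
import time

class HeartbeatMonitor:
    """Resubscribe if no heartbeat arrives within the grace period.

    The subscription feed handler calls beat() for every heartbeat
    event; a background thread invokes the resubscribe callback when
    the grace period (60 s above) passes silently.
    """
    def __init__(self, resubscribe, grace=60.0, poll=1.0):
        self._resubscribe = resubscribe
        self._grace = grace
        self._poll = poll
        self._last = time.monotonic()
        self._stop = threading.Event()

    def beat(self):
        """Record that a heartbeat was just seen."""
        self._last = time.monotonic()

    def _run(self):
        while not self._stop.wait(self._poll):
            if time.monotonic() - self._last > self._grace:
                self._resubscribe()
                self._last = time.monotonic()  # avoid hammering the ISY

    def start(self):
        t = threading.Thread(target=self._run, daemon=True)
        t.start()
        return t

    def stop(self):
        self._stop.set()
```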
  14. I thought I would add this tidbit of information from the Echo support staff (for what it's worth): Hello Barry, I hope you're enjoying the Echo! At this time, the Echo does not have a way to configure the amount of time that it discovers devices. You do not have to discover all of your devices during this 20-second window. You can set a few up, then start the discovery process again and set up more. However, making the time to discover longer is a great idea for improvement! Thanks for your suggestions to make the amount of time the Echo spends discovering devices configurable, to use different wake words, and to send a unique ID from the Echo to a Skill that is created so the Skill will know which Echo is originating the request! Those features sound like great ideas! I particularly like the idea about adding IDs to the Skills; I think it would widen the abilities that can be created. I'll share your suggestions with our Amazon Echo development team for consideration. I forgot to add in my previous email that our Echo development team is planning to add more wake words in the future. If you have additional questions about Amazon Echo, visit our help pages at: https://www.amazon.com/echosupport You can also reach us by phone directly and toll-free at 1-877-375-9365 or by e-mail at echo-support@amazon.com. We're available from 3 a.m. to 10 p.m. Pacific time, seven days a week.
  15. Yes, there is a comments field. I do not know if there is any way for a program to retrieve it over the REST interface, as there is for the Notes field of a node. LeeG? Having a free-form field (XML or JSON could be used) associated with scenes and programs allows one to place auxiliary information needed by some external process, like an HTTP endpoint running an Amazon Echo skill.
  16. In what I am doing I do not use the Amazon web services. The only contact with Amazon is between Alexa and the Amazon system; all intents go to my "Proxy server" on my LAN. The only issue I had was that I didn't want to handle SSL, but the stunnel solution took care of that very nicely, eliminating the need for the proxy to understand anything about SSL. stunnel submits certificates; I am using a self-signed one, as that is allowed if test mode is enabled when the skill is registered. I started playing with the Hue emulator and got that up and running in a couple of hours on a Win 7 system. The emulator is currently handling 26 devices, each with two URLs (one for on and one for off). I am concerned whether the "29"-device limit that I heard about is a hard limit, as I have many more devices I would like to add. This week I will move the emulator to an RPi model 2. I use TightVNC with the RPi so I can sit comfortably in my office and deal with the Pi. In this mode the Pi is running headless: no keyboard, no monitor, just power and an Ethernet connection. I have a question that perhaps someone can answer: can multiple Echos talk to a single Hue emulator? I have already put a request in to Amazon to pass some means of identifying the Echo, e.g. serial number or MAC address, with each intent that is passed to a skill.
  17. It would be nice if the same "Notes" capability existed for Scenes and Programs. Scenes shouldn't be difficult to implement, but programs might be a bit tougher.
  18. Thanks Lee, as usual.
  19. Notes field

     There is a field that can be set for devices called "notes". Is this described anywhere? Can it be retrieved by command, e.g. over the REST interface, as part of the configuration, nodes, . . .? It seems it might be a simple place to provide information that can be used by external systems when dealing with the ISY before version 5 becomes fully available (out of alpha/beta). I could use it right now to provide aliases for the device names to use with the Amazon Echo.
  20. I would consider it; however, I know nothing about it. Could you provide some information, or point me to some information on the subject?
  21. I have advanced a little and have my LAN-based endpoint completely dealing with Alexa, handling IntentRequests, LaunchRequests, and SessionEndedRequests. My endpoint is not a web server but rather a VB6 app (I use VB6 since I have a large library of code and it provides debugging I like). The only limitation on a LAN-based web endpoint is that it must use SSL and handle certificates. I think this approach will work well to integrate and deal with the next group of IoT devices. With the intermediary, which I call my ISY Proxy Server, I can add code to handle most any situation and write it pretty fast. Once I get it debugged I will port it to Node.js and run it and the SSL wrapper on an RPi 2; there is already a version of the SSL wrapper for the RPi. I am using a self-signed certificate, but switching to a proper CA certificate should not be a problem, just a little additional cost. The SSL wrapper comes with everything needed to build certificates (OpenSSL) and either get them authorized or self-signed. I am just starting to write the code that, during initialization of the ISY proxy, will pull the configuration from the ISY and let me build a reasonable database of what exists. This is how I run the code on my iPads; on the iPads it is all written in JavaScript. I am interested in the "spoken name" feature that has been alluded to in this thread. Is it in the current ISY version? Where can I find out about that? This week I will add all the code to deal with the ISY: grabbing the configuration, setting up the tables I want, subscribing to changes in device states, and monitoring heartbeats so I can re-request the ISY state if it has gone offline. The iPads deal with the REST interface to handle all devices on the ISY (Insteon and Z-Wave). I may need an "alias" table to map various easily spoken names to what I am using as device names on the ISY. My ISY is organized so rooms are folders and devices are in rooms.
As an interesting aside, my Echo is right now next to the monitor in my office where I do my development work. I played the little video from this thread with the little girl and the father asking Alexa to put on party mode; my Alexa immediately replied that she could not find a group named Party Mode in my profile. Reading the documentation, it seemed quite complicated to get started, but it really wasn't. With the proxy server there is nothing about my world or home stored in the cloud; it is all in the proxy server, which is well secured. I have put a feature request in to AWS to add a property to the session object on an intent request. The property I want is a unique ID, such as the Echo's MAC address; I feel it would be handy to know which room a request is coming from. I do that with my iPads, as there is one in each room. When I get it all on the RPi, the RPi and the ISY will be sitting near each other on the wall in my server room, plugged into the same LAN switch, and hopefully I will rarely have to deal with them.
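The request-type handling described above (IntentRequests, LaunchRequests, SessionEndedRequests) can be sketched as a minimal dispatcher. This is a hedged Python illustration, not the VB6 proxy; the intent name and the reply strings are hypothetical, and the envelope fields follow the Alexa custom-skill JSON format:

```python
import json

def speak(text, end_session=True):
    """Build the minimal Alexa-skill response envelope."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": end_session,
        },
    }

def handle_request(body):
    """Dispatch on the three request types the proxy sees."""
    req = json.loads(body)["request"]
    rtype = req["type"]
    if rtype == "LaunchRequest":
        # Session stays open so the user can follow up
        return speak("Sarah here. What would you like?", end_session=False)
    if rtype == "IntentRequest":
        intent = req["intent"]["name"]
        # Hypothetical intent name; real handling would call the ISY here
        if intent == "TurnOnIntent":
            return speak("Okay Barry")
        return speak("I do not know how to do that yet")
    if rtype == "SessionEndedRequest":
        # No speech is allowed in response to a session end
        return {"version": "1.0", "response": {}}
    raise ValueError("unknown request type: " + rtype)
```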
  22. I just found this topic today. First some background: my home is mostly controlled by an ISY994. The ISY994 controls 99 Insteon devices and 2 Z-Wave thermostats; there are about 50 scenes and 25 programs. In addition to the ISY994 there is an Autelis pool control system, a 1-Wire controller for reading the temperature of every room, a Global Cache GC-100 to handle all IR, and an older HomeSeer system which does involved programming and controls the security system until it is upgraded. The user interface to all the systems is a set of 5 wall-mounted Gen 1 iPads and 1 iPhone 4s. The house has a dedicated home theater which is quite large (133" diagonal screen) and is controlled by a PC that gets its commands from either a Pronto Pro (I like the hard buttons) or an iPad. I do not think voice will be possible there, due to the high ambient noise in a theater. The house has a single gigabit LAN running off a MikroTik router with a maximum of 36 switched ports (2 16-port gigabit switches). The house was wired with Cat 5e and runs fine at gigabit speeds; the router provides firewall services and port forwarding. All the equipment is kept in a dedicated "server" room with its own HVAC and UPS, and the house has a backup generator that comes on line after a 20-second power outage. The "server" room has 2 Windows 7 always-on PCs (24/7), 1 Windows 10 PC for test and evaluation, and a 33 TB NAS based on Linux/unRAID, expandable to 44 TB. With regard to reliability: if I don't touch things, they stay up. For example, the current uptime for the NAS is 235 days, 21+ hours; a major contributor to the high uptimes is the UPS and the backup generator. If you have never seen a MikroTik router, do some research. It is a fantastic device: it does all common router functions plus DHCP, and has a built-in sniffer. Last month I picked up an Amazon Echo and became a registered developer. I spent the last month reading. Now to the meat.
I made the decision to use the Alexa Skills SDK and base the skill on an endpoint on my house LAN. I do not like cloud services and I try to minimize their use in my home. To do this I had to: set up port 443 to be forwarded; install "stunnel" as a Windows service on one of the always-on PCs (stunnel is a freeware "SSL proxy" that wraps any sort of endpoint so the endpoint does not have to be SSL aware; AWS requires that a non-AWS endpoint on an internet-reachable LAN use SSL); write and install an endpoint that I hand coded; and configure stunnel to forward all decrypted received data to the endpoint and return encrypted responses from it. Eventually, after all is working well, I intend to move the endpoint and the SSL proxy (stunnel) to a Raspberry Pi. The endpoint deals with multiple TCP/UDP sockets, one of which is where the stunnel proxy sends the decrypted text. The decrypted text is HTTP protocol as per the AWS Alexa specifications. I chose (totally independently) the skill name Sarah, for the same reasons as Randyth; ergo utterances are something like "Alexa, tell Sarah to . . .". After I get the interactions between AWS and my endpoint working, I will start on the "backend" of the endpoint. This "backend" will handle all my HA controllers, which are all TCP/IP and UDP/IP based. I have a lot of experience dealing with the ISY994 because of what I did on the iPads. The backend will read the full configuration from the ISY when it starts, just as the iPads do; it will then know the names/addresses of all the devices, rooms, scenes, and programs on the ISY994. It will subscribe to the ISY994 for all updates, as the iPads do, and monitor the heartbeat from the ISY994. The Autelis pool controller and the HomeSeer system can be queried over TCP for their current status, and they send status-change information asynchronously over UDP. The 1-Wire controller can be queried at any time for the temperature of any room.
At the current time the Alexa skill and my endpoint are talking to each other. The Alexa skill is transmitting intents (the results of Alexa hearing a string for Sarah) and the endpoint is receiving them. All the endpoint does at this time is display and log the header and the body it received. Tomorrow I will add the code to write the responses back to Alexa, just to make sure all is correct: probably a simple response of "Okay Barry". Then the fun begins. If others are using an Alexa skill as Randyth and I seem to be doing, I would like to share intentInstance files and utterance files. I can be reached at barry@the-gordons.net
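For reference, the stunnel arrangement described above (HTTPS in on 443, plain HTTP out to the endpoint on port 9099) corresponds to a service section roughly like the following sketch. The certificate paths are hypothetical, and the exact option set should be checked against the stunnel manual:

```ini
; stunnel.conf sketch: terminate SSL on 443, forward plaintext
; HTTP to the LAN endpoint on port 9099 (paths are hypothetical)
cert = C:\stunnel\endpoint-cert.pem
key  = C:\stunnel\endpoint-key.pem

[alexa-endpoint]
accept  = 443
connect = 127.0.0.1:9099
```

With this in place the endpoint never sees TLS at all; it reads and writes ordinary HTTP on 9099, which is the design choice described in the post.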
  23. I completely control my ISY from several iPads and an iPhone. I looked at iRule when it first appeared but dropped it as soon as I learned about CommandFusion. The advantage of CommandFusion is a full JavaScript engine for handling feedback and performing extensive logic. My system involves 5 wall-mounted iPads and one iPhone. All capabilities of the ISY are handled using the REST interface for commands and ISY subscription services for feedback. I started my working career as a professional programmer in 1960, so I have been around a while. I am now retired (for over 10 years), so my time is my own. My original HA world started with X10 and HomeSeer in 2000; it is still running, controlling my security system. If/when I change my security system to an ELK-based one, HomeSeer will disappear. Anyone want to buy a lot of X10 stuff? The iPads in each room just run a CommandFusion app I wrote. It is always active: double tap the screen and immediately the app's home screen comes up, kiosk style. The double tap is needed as I run with the iPads' backlights turned off. I control all of the Insteon devices and two Nexia/Trane thermostats, with complete feedback and asynchronous notification to all the iPads of any change to any device's state(s). My web site "the-gordons.net" has a lot more info on the automation of my home and my home theater. I have been doing the HA thing since we built the house in 1998. The ISY has 99 unique devices, 103 scenes, 37 programs, and 19 network resources. I only use the ISY Admin Console when I install a new device or want to add a new program or scene.
  24. HomeKit

     I have been completely controlling my ISY994i (with both the networking and Z-Wave modules installed) from iPads and my iPhone. The iPads are wall mounted (one in each room) and act as the UI to the ISY; same with the iPhone (5s). The "app" runs under the CommandFusion iViewer. On the iPhone, as soon as I touch the app button to make it active, it immediately re-syncs with the ISY and is completely up to date in about 1 second. Control of the Insteon devices is just about instantaneous. I am very interested in voice control via Siri. If Apple opens up a better interface from Siri to apps running under iOS, then perhaps CommandFusion will deal with Siri as a command source. I would like to see some sort of bridge capability between HomeKit and the ISY994: a box sitting on the LAN that can bridge commands from the iOS/Siri devices to the ISY would be very nice, but I am not holding my breath.
  25. Please contact me directly about this. My name is Barry and I own the domain the-gordons.net, so my email should be obvious. I normally do not release the things I develop for the iPad/iPhone, as I take the support issue very seriously. I am waiting to see what CF is going to do about allowing for support of "Modules" before I make a final decision. Do you use CF at this time? Are you fluent in JavaScript?