kck Posted December 3, 2015 (edited)

Barry, I finally got around to setting things up with the BWS bridge and your configuration tool. I had a limited setup running from very early on, using the old emulator and a semi-manual configuration that was too fragile to touch. Nice job with your front end.

One quick question/verification: you do not read the "spoken" property from the ISY with any option, correct? It would be nice to have the spoken names in one place, but it isn't a big deal to enter them via your front end; I just wanted to make sure I wasn't missing something. The only downside of the current setup is that I always like to have a scriptable way to recreate an entire configuration in case I ever need to reproduce everything, and if I am renaming devices/scenes/programs with the Friendly Name field, I can't see any way to capture that so I could recreate things without doing it all manually again. Am I missing something? Again, thanks for a great piece of work. Kevin

P.S. I hate editing things like rc.local needlessly, so a suggestion for folks who use this setup: have your rc.local start the bridge with a name like ha-bridge-latest.jar, and in your bridge directory keep a link from the version you downloaded to that name. That way, when you download a new version of the bridge, you can keep its version name (which is convenient) and just relink it as the "latest" name without touching rc.local.
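A minimal sketch of that relink trick, in case it helps anyone set it up; the directory and the version-numbered jar names below are just examples (not actual file names from this thread), so substitute your own:

    # One-time setup: keep the versioned jar, point a stable name at it
    cd /home/pi/habridge                      # example install directory
    ln -sf ha-bridge-1.3.4.jar ha-bridge-latest.jar

    # rc.local then always launches the stable name and never needs editing:
    #   java -jar /home/pi/habridge/ha-bridge-latest.jar &

    # After downloading a new version, just re-point the link:
    ln -sf ha-bridge-1.4.0.jar ha-bridge-latest.jar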
Michel Kohanim Posted December 3, 2015

Hi kck, I think that's a good idea. With kind regards, Michel
barrygordon Posted December 3, 2015

Kevin, if you want me to show the spoken name, I can do that. I actually wrote the code originally to get the spoken name, and I do have it along with the description. How would you like to see it presented, or how do you plan to use it? As the friendly name (the one Alexa recognizes)?

The HA Bridge database (<filename>.db) is the entire configuration as saved by the HA Bridge; it is a JSON string. You can save it at any time on the Pi. Just remember the HA Bridge only reads the .db file when it starts. I was going to provide a save/restore, but eventually decided not to. If you notice, in what I posted a little earlier in this thread I name the HA Bridge jar "current.jar" so I don't have to change rc.local as I change versions.
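Since the bridge only reads the .db file at startup, a plain copy of that file taken at any time is a complete, scriptable snapshot of the configuration, friendly names included. A rough sketch, assuming the bridge lives in /home/pi/habridge and the database is named habridge.db (both names are examples, not taken from this thread):

    # Snapshot the HA Bridge configuration (a JSON .db file) with a timestamp
    BRIDGE_DIR=/home/pi/habridge          # example path
    DB_FILE=habridge.db                   # example name; use your <filename>.db
    mkdir -p "$BRIDGE_DIR/backups"
    cp "$BRIDGE_DIR/$DB_FILE" "$BRIDGE_DIR/backups/$DB_FILE.$(date +%Y%m%d-%H%M%S)"

    # To restore: copy a snapshot back over the .db file, then restart the
    # bridge so it re-reads the configuration at startup.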
kck Posted December 4, 2015

Missed the comment about current.jar. Same thought; it comes from too many years as a developer. I was thinking that the Spoken field from the ISY would populate the Friendly Name if it was not blank, and otherwise you would use what you do now. It isn't a really big deal; it's just that since Michel and friends have provided the field, it would be nice to maintain all the info there if possible.

By the way, to the larger thread: I just pulled out a few more of my few remaining hairs on an (as always, after the fact) obvious issue in starting the bridge. I was using the IP address variable so that if I were to move the Pi by changing my reserved DHCP addresses, it would just continue to work. I had erratic results with whether the Echo could see my devices. Eventually I looked carefully at the bridge invocation from ps. It turned out that, depending on timing, sometimes rc.local ran before the Pi had finished getting its IP address from the DHCP server, so the address field on the invocation was blank. Of course, if I ran the same script later to debug it, it always worked, since by then the system had been up for a while. So if anyone sees a situation where the bridge is sometimes invisible to the Echo and sometimes not, you might want to add a (relatively long) delay with a sleep in the script before the bridges get started.
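Rather than guessing at a fixed sleep, rc.local can wait until the Pi actually has an address before launching the bridge. This is only a sketch: hostname -I is standard on Raspbian, but the launch line is left commented out because the exact address option differs between bridge versions (the flag shown is a placeholder, not a confirmed option), and the jar path is an example.

    # In /etc/rc.local, before starting the bridge(s):
    # wait up to ~60 seconds for DHCP to hand the Pi an IPv4 address
    for i in $(seq 1 30); do
        IP=$(hostname -I | awk '{print $1}')
        [ -n "$IP" ] && break
        sleep 2
    done

    # Then launch the bridge, passing $IP with whatever address option your
    # bridge version expects (the flag below is a placeholder; check your README):
    #   java -jar /home/pi/habridge/current.jar --upnp.config.address="$IP" &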
barrygordon Posted December 4, 2015

I have my router (MikroTik) reserve that IP address based on MAC address, so there does not seem to be an issue. I can't say enough about the MikroTik router. Best $50 I ever spent.
imagamejunky Posted December 4, 2015

Guys, would any of you mind posting a video (or a link) of how your setup works with the Echo and the ISY? I'm specifically curious about the commands you (and others) are having to use to control Insteon lights without any current native support. This huge thread is pretty confusing. I'm looking for the simplest solution so that the family can easily control lights with Alexa. Thanks in advance. Junky
barrygordon Posted December 4, 2015

kck, I started looking at the 'Spoken Name' property. I am running ISY 4.3.26. The 'Spoken Name' property used to be in the notes associated with a device; it is no longer there. The notes property now has 'isLoad', 'Location', and 'Description'. I could use Location, but that might be confusing as the 'Spoken Name'. Has the 'Spoken Name' been removed by the UDI team, or moved somewhere? The code to add the spoken name as a choice for friendly name is trivial. It will be in a drop-down list if it exists, with an option as to which is the default friendly name: the spoken name or the device name. IIRC, the 'Spoken Name' was originally set up for the ELK system for announcements.
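For anyone who wants to check what the ISY is actually storing, the per-device notes should also be reachable over the ISY's REST interface. This is a hedged sketch from memory of the ISY994 REST docs (verify the /rest/nodes/<address>/notes path and element names against your firmware); the credentials, ISY address, and node address are placeholders:

    # Fetch the notes record (isLoad / location / spoken / description) for one device.
    # Node addresses contain spaces, so they are URL-encoded with %20.
    curl -s -u admin:password \
        "http://192.168.1.10/rest/nodes/11%2022%2033%201/notes"

    # The reply is a small XML document; if Spoken is set it should appear as a
    # <spoken> element that a script could extract with xmllint or grep/sed.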
barrygordon Posted December 4, 2015

imagamejunky, I use the following commands for my 70-some-odd lights/scenes:

'Alexa, turn on the <friendlyName>'
'Alexa, turn off the <friendlyName>'
'Alexa, dim the <friendlyName> to fifty percent'

The friendly name is either a device name or a scene name, e.g. "Kitchen Lights", "Kitchen Area". When an area has several different lighting systems, e.g. the kitchen (main lights, peninsula lights, pantry lights, counter lights), I define an ISY scene and name it appropriately (e.g. Kitchen Area) to handle all the lights in that area. Bear in mind you cannot dynamically change the brightness of a scene.
barrygordon Posted December 4, 2015

It is worthwhile to read the entire thread just to gain context. I joined the thread at page 16, post #319.

The Hue emulator, actually called the HA Bridge, is developed by BWS Systems (bwssystems.com). I do believe most ISY users are using that as their emulator, with the front end I wrote as their configuration tool. The emulator runs on any machine that runs Java 8, and the configuration system is a Windows (7) program. The emulator has to run 24/7, and a lot of people are running it on a Raspberry Pi 2 B with no monitor and no keyboard (headless), using TightVNC as the remote desktop tool. As a piece of information, VNC (latest version) on the iPad will connect to TightVNC on the Pi with no issue.

The emulator is needed at this time since there is no native skill provided by Amazon for the ISY. Amazon does provide a native skill for the Philips Hue bridge, but it is limited in that it was made for lights, not thermostats, security systems, etc. BWS Systems developed a program that follows the API of the Hue bridge and therefore looks to the Amazon cloud like a Hue bridge.

UDI is developing a skill for the Echo/ISY pairing. It will not be a native skill (only Amazon can develop a native skill) but a named skill. Think of Alexa as a supervisor with a bunch of named workers that can be called on to perform tasks that Alexa cannot do by herself (natively). This changes the spoken phrase to the Echo slightly: 'Alexa, tell <workerName> to . . .' or 'Alexa, ask <workerName> what . . .' My understanding is that the worker name for the ISY non-native skill will be "Izzy", so the speech required would be something like "Alexa, tell Izzy to turn off the kitchen lights". The above is just a guess on my part, as I have not been in discussion with Michel at UDI.

A non-native skill, or as I call it a worker, is not difficult to write. It is time consuming, however, as all variants of what a user might say need to be covered, with appropriate verbal feedback if not understood. I am registered with Amazon as a developer and can write my own non-native skill to handle the oddball things in my home that are not controlled by the ISY, e.g. the theater, the pool, etc. I first want to see what UDI comes up with.
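To make "looks like a Hue bridge" concrete: after UPnP discovery, the Echo simply speaks the published Philips Hue REST API to whatever answered, so you can exercise the emulator yourself with curl. A sketch, assuming the bridge is at 192.168.1.20 on port 8080 (both placeholders); /api/<username>/lights is the standard Hue call the emulator is expected to answer, and any username string is generally accepted:

    # List the devices the bridge is exposing as Hue "lights"
    curl -s http://192.168.1.20:8080/api/echo/lights

    # Turn one on the same way the Echo would (light id "1" is just an example)
    curl -s -X PUT -d '{"on":true}' \
        http://192.168.1.20:8080/api/echo/lights/1/state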
imagamejunky Posted December 4, 2015

Barry, I really appreciate your very thorough explanation. I actually understand what's going on a lot better now. It sounds to me like using the emulator is going to be the better option, because we would not have to ask Alexa to ask a worker to do the job. I just wish I wouldn't have to leave a PC or a Raspberry Pi running all the time to accomplish this. Thank you again, I really appreciate it. Junky
barrygordon Posted December 4, 2015

A Raspberry Pi uses a trivial amount of power. You can mount it on a wall or hide it on a shelf. All my Raspberry Pis are wired units for their Ethernet connections: no monitors, no keyboards.
mwester Posted December 4, 2015

Get a Raspberry Pi, a cell phone charger, and a roll of electrical tape -- tape the Pi to the back of the cell phone charger, plug it into the wall, plug an Ethernet cable into the Pi, and think of it as a vastly superior version of the Insteon Not-So-SmartLinc device!
jratliff Posted December 4, 2015

I use a Windows tablet, always on, for my emulator to run on. It uses about 4 watts. I use Chrome Remote Desktop to access it, so I can leave it on its shelf. Also, if I remember correctly, someone said the ISY skill will work both ways: one is 'ask Izzy to <command>', but you will also be able to use 'Alexa, turn on/off' as well. I could be wrong, since I haven't really seen much official information on it.
barrygordon Posted December 4, 2015

It should be possible to run both the emulator and the new UDI skill; we will have to wait to be sure. Hopefully the new skill will handle things like thermostats, FanLincs, and other non-on/off/dim devices in a much cleaner manner. My plan is to use both: native for simple lighting control tasks and Izzy for non-lighting tasks.
jackal Posted December 4, 2015 (edited)

"It should be possible to run both the emulator and the new UDI skill; we will have to wait to be sure. Hopefully the new skill will handle things like thermostats, FanLincs, and other non-on/off/dim devices in a much cleaner manner. My plan is to use both: native for simple lighting control tasks and Izzy for non-lighting tasks."

Hi Barry, I thought the new UDI skill requires the portal subscription, so there is no need to run any native/local application? *confused*
Michel Kohanim Posted December 4, 2015

Hello everyone, we heard back from Amazon. They have an internal technical issue they are trying to resolve. With kind regards, Michel
barrygordon Posted December 4, 2015

jackal, what you say is true, but if you want to eliminate the "... tell Izzy" for simple commands, then you run both. I am assuming that the new skill will require some sort of fee; I have heard a rumor of a $50 two-year subscription fee for the UDI skill, but that is just a rumor. Amazon charges a usage fee, but for a single developer doing testing (aka me) it is free.
jerlands Posted December 4, 2015

"Hi Barry, I thought the new UDI skill requires the portal subscription, so there is no need to run any native/local application? *confused*"

I'm uncertain of the doors this will open up, but I imagine the app will offer a more fluid experience. We'll just have to wait and see. Jon...
jratliff Posted December 4, 2015

I'm hoping we can also query sensors/devices in the ISY as to their status and get a response from the Echo.
rizast Posted December 4, 2015

I have been going through this for days and my Echo does not discover any devices. I've tried multiple versions of the jar file, verified I can access the wired LAN from Wi-Fi, tried different computers, and made sure no ports are in conflict. It's driving me nuts! Any suggestions would be appreciated.
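A couple of quick checks that may help narrow down where discovery is failing; the port and addresses below are placeholders, and the exact URL your bridge build answers can vary by version. The idea is to confirm the bridge is listening at all, then confirm it is reachable from the Wi-Fi segment the Echo sits on, since discovery relies on SSDP multicast (UDP 1900) plus a follow-up HTTP fetch:

    # 1. On the machine running the bridge: is it answering locally?
    curl -s http://localhost:8080/api/echo/lights

    # 2. From a laptop or phone on the same Wi-Fi as the Echo: can that
    #    segment reach the bridge's HTTP port across the wired/wireless boundary?
    curl -s http://192.168.1.20:8080/api/echo/lights

    # 3. If (1) works but (2) does not, look at AP isolation or multicast
    #    filtering on the router; SSDP needs UDP 1900 multicast to cross
    #    between the Echo's Wi-Fi and the bridge's network segment.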
gregf Posted December 4, 2015

"Hello everyone, we heard back from Amazon. They have an internal technical issue they are trying to resolve. With kind regards, Michel"

Hopefully that will be soon. Will be happy dance time. Thanks for the update!
stusviews Posted December 4, 2015

"Hello everyone, we heard back from Amazon. They have an internal technical issue they are trying to resolve. With kind regards, Michel"

Have them post the issue here.
kck Posted December 5, 2015

Barry, I'm confused about the Spoken field. I am also running 4.3.26 and I see it. When I right-click on a device or scene, the popup has a Notes item, and when I click that I get a box with an IsLoad check box and fields for Location, Spoken, and Notes. You don't see that? Kevin
MWareman Posted December 5, 2015

"Have them post the issue here."

Yes! We'll be happy to help. Hopefully, this means they've approved it, but the 'technical issue' is in publishing the approval.
jerlands Posted December 5, 2015

"They have an internal technical issue they are trying to resolve."

"Yes! We'll be happy to help. Hopefully, this means they've approved it, but the 'technical issue' is in publishing the approval."

I'm thinkin' the technical issue extends beyond the UDI app... Jon...