bsnively Posted March 26, 2016
Has anybody created programs to control a Sonos through the Echo using the ISY? I know there has been some work on controlling the ISY via various switches and keypads, so theoretically it should be possible to control Sonos using programs via Alexa... Especially with the new Echo Dot coming out, I'd love to be able to use an Echo in the rooms where I already have Sonos and set up commands through it. I know folks have used Lambda/Raspberry Pi integrations, but I thought this might be an easier way to do some simple integrations (playlists, volume control, stop/start, etc.). Thoughts/comments? Has anybody tried this or got a jump start on it? Thanks, Ben
barrygordon Posted March 26, 2016
Does Sonos obey Bluetooth remote-control commands? This is commonly referred to as AVRCP: the Audio/Video Remote Control Profile is a Bluetooth profile that allows Bluetooth devices to control media playback on remote devices. It is typically used with A2DP devices for next/previous track selection and pause/play functions. The Dot handles AVRCP. I am not sure whether the original Echos do, but they probably will in some future update.
mwester Posted March 26, 2016
@barrygordon - no, Sonos does not do Bluetooth control. It uses a fairly complex SOAP-based API for communications and control.

@bsnively - I think you'll find that what you're proposing is rather easily done, and I suspect others are doing it now. Search the threads for Sonos. In a nutshell, you'll need to create network resources to control your various Sonos devices -- one resource per action. So if you want, for example, on and off plus playlists a, b, c, and d, for each of 3 rooms, that's (2 + 4) * 3 = 18 total resources. It gets painful to keep editing all those, but it works. Then create programs to do the work, and assign the programs as "spoken" in the portal. Remember that "Alexa, tell Izzy to turn Kitchen Sonos on" runs the "then" part of the program, and the "off" command runs the "else" -- so tie the appropriate network resources into the appropriate parts of the programs. Done. Enjoy!

(Now, if you're looking for something a lot easier to set up on the back end -- there are folks working on that, but it ain't easy. I'm stalled on my own project right now (a node server integration of the Sonos) because without a richer set of controls in the ISY screens, you just can't create a node status that makes sense -- I can show you that Sonos 7 is playing source 43, but you'd probably prefer to know that Sonos "Master Suite" is playing "Classical Rock Redux" instead!)

(And if you're looking for something on the front end, where you can tell Alexa to "play Classical Rock Redux in the Master Suite" -- well, that's something a lot of Sonos users would like to see, but IMO are unlikely to ever get. The Amazon folks want music playback to happen from Alexa, not Sonos, with the data streamed from their data-center servers instead of your source of choice... and since they own Alexa and the Echo hardware, they're not likely to give that up!)
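To give a feel for what each of those network resources ends up encoding: Sonos players answer UPnP SOAP requests over HTTP on port 1400, and an action like "play" or "pause" is a single SOAP POST to the player's AVTransport service. The following is a rough Python sketch, not a definitive implementation -- the IP address is a placeholder for one of your zone players, and in the ISY you would paste the equivalent URL, headers, and body into a network resource rather than run a script:

    import requests

    # Placeholder IP for one Sonos zone player -- substitute your own.
    SONOS_IP = "192.168.1.50"

    # Sonos players serve their UPnP control endpoints on port 1400.
    AVTRANSPORT_URL = f"http://{SONOS_IP}:1400/MediaRenderer/AVTransport/Control"

    def avtransport(action, extra_args=""):
        """Send a single AVTransport SOAP action (Play, Pause, Stop, ...)."""
        headers = {
            "Content-Type": 'text/xml; charset="utf-8"',
            "SOAPACTION": f'"urn:schemas-upnp-org:service:AVTransport:1#{action}"',
        }
        body = (
            '<?xml version="1.0" encoding="utf-8"?>'
            '<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" '
            's:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">'
            "<s:Body>"
            f'<u:{action} xmlns:u="urn:schemas-upnp-org:service:AVTransport:1">'
            "<InstanceID>0</InstanceID>"
            f"{extra_args}"
            f"</u:{action}>"
            "</s:Body></s:Envelope>"
        )
        resp = requests.post(AVTRANSPORT_URL, headers=headers, data=body, timeout=5)
        resp.raise_for_status()
        return resp.text

    # "Play" takes a Speed argument; "Pause" and "Stop" need only the InstanceID.
    avtransport("Play", "<Speed>1</Speed>")
    # avtransport("Pause")

Each action/zone combination becomes one such request, which is exactly why the resource count multiplies out the way it does above.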
bsnively Posted March 26, 2016 (edited)
Thanks for the great feedback/thoughts. I wouldn't be surprised if we see an integration eventually -- the Echo seems to be constantly getting new updates and integrations, either directly or through various skills.

So that I don't run into the same sort of issues/problems: what's missing to be able to go from "Sonos 7 is playing source 43" to "Master Suite is playing Classical Rock Redux"? Is it just a set of additional web services that need to be called against the Sonos controller to find out the labels instead of the IDs? Or is it that, since it's a control device rather than a standard skill, you can't have that set of utterances in the skill?

I was planning on modeling it off of something like this: http://forum.universal-devices.com/topic/11716-insteon-and-sonos-with-isy-994i/?hl=sonos

Thanks again! Ben
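On the "labels instead of IDs" question, the friendly names are in principle retrievable straight from the players themselves: each Sonos zone publishes a UPnP device description document on port 1400 that includes its room name and unique identifier. A rough sketch of that lookup follows, under the assumption that the description exposes roomName and UDN elements as Sonos builds typically have; the IP is again a placeholder:

    import requests
    import xml.etree.ElementTree as ET

    SONOS_IP = "192.168.1.50"  # placeholder -- any zone player on the LAN

    # Every Sonos player serves a UPnP device description document.
    desc = requests.get(f"http://{SONOS_IP}:1400/xml/device_description.xml", timeout=5)
    root = ET.fromstring(desc.content)

    ns = {"d": "urn:schemas-upnp-org:device-1-0"}
    device = root.find("d:device", ns)

    # roomName is the human-friendly zone label ("Master Suite"); UDN is the
    # uuid:RINCON_... style identifier that shows up in the control API.
    print(device.findtext("d:roomName", namespaces=ns))
    print(device.findtext("d:UDN", namespaces=ns))

Whether the ISY side can actually carry those strings is a separate problem, which mwester's reply below explains.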
mwester Posted March 27, 2016
Basically, the API between a node server and the ISY is limited to integer data and pre-defined text values such as "0 = off", "1 = on". Thus one cannot provide the user with any of the identifying information about a Sonos zone, because one piece (the name) is an arbitrary text string that cannot be pre-defined in the node server app (I can't possibly know your room names!), and the other (the ID) is a hexadecimal string which cannot be represented as a simple integer. The result is that one would need to create a separate, out-board, web-based "configuration" tool that could provide that user-defined mapping as part of a setup or configuration exercise. And from a practical point of view, if you're going to write that, then it just doesn't make much sense to write an entire node server -- a few network resources, as we can do today, are dead simple by comparison.
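To make that limitation concrete, the out-board mapping described here could be as simple as a user-maintained file pairing small integers (the only thing the ISY can carry) with the room names and Sonos IDs the control API actually needs. This is a hypothetical sketch -- the zone names and RINCON-style IDs below are made up, not anyone's real configuration:

    import json

    # Hypothetical user-maintained config: the ISY/node server only ever sees
    # small integers, so friendly names and hex-style Sonos IDs live outside
    # it and are mapped back to those integers here.
    ZONES_JSON = """
    {
      "1": {"name": "Master Suite", "id": "RINCON_000000000000"},
      "2": {"name": "Kitchen",      "id": "RINCON_111111111111"}
    }
    """

    zones = json.loads(ZONES_JSON)

    def zone_for_node_value(value):
        """Translate the integer the ISY can carry into the name/ID pair the Sonos API needs."""
        return zones[str(value)]

    print(zone_for_node_value(1))  # {'name': 'Master Suite', 'id': 'RINCON_000000000000'}

That mapping file is exactly the extra setup step being pointed at: the node server itself has no way to learn or display those strings, so someone has to maintain the association elsewhere.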
barrygordon Posted March 27, 2016
I have a proxy server, as mwester describes, that I use for many non-ISY endpoints; it is triggered by ISY network resources. I generally use it for things I can see happen, such as my spa coming on or the pool lights, etc. One of the issues with handling a Sonos-like device well is feedback. At the current time the Echo cannot speak notifications sent by a user program, so the only audio response the Echo would make is "Okay" when doing it this way through the connected home. I assume you would like to know what is playing, and that would take a skill written to handle the Sonos.
madcodger Posted March 27, 2016 (edited)
Personally, I could do without knowing what is playing, because I'll hear it soon enough. I continue to wish we could pass the identity of the specific Echo making a request (which of several within a home/account) to the ISY, as that would allow for much more control within the home, including control of specific Sonos speakers. There are 2 or 3 Echo-to-Sonos controllers written and available (a decent one on GitHub), but in a multiple-Echo household they become largely unworkable without the ability to specify WHICH Sonos should play. One could specify it in the phrasing of a program name, I suppose, but that quickly becomes tiresome and prone to error. If Alexa were able to pass along the specific Echo from which a command was made, the existing Sonos controllers could likely be easily modified, and we would have a much sought-after solution. I have written Amazon about this (there are many applications beyond Sonos) and received no reply other than the standard acknowledgement of receipt.
barrygordon Posted March 27, 2016
The ability to identify a specific Echo in the processing code is one of the top 5 requests from skill developers. Another is the ability to handle notifications, which would allow the Echo to speak a message on request, either as a code-initiated item or as the result of some event like a mailbox getting mail. I suspect that someday both of these will be implemented.
madcodger Posted March 27, 2016
We can only hope. Sooooo close to a perfect system. Yet the gap is significant.