
Interface with Amazon Echo?


awzulich


Posted

If it were the way I say Izzy, why would it open the skill when I say "open Izzy"?

 

I don't have problems when I tell Alexa to do other things that have nothing to do with Izzy. Play music, volume up/down, what's the weather/time all work.

 

I do think that the limitations are on Amazon's side and that they will improve in time.

 

 

 


 

I know what you mean. I can tell you that sometimes it does funny things. "Open izzy" is shorter and may be easier for her to match.

 

Another piece of advice that may help: open the Alexa app.

Then say something simple like "Alexa, tell izzy to turn on", with no device. Make sure that Alexa understood that exactly. This should invoke the skill, and the skill will prompt you for the device you want.
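
For anyone curious what that prompt looks like on the skill side, here is a minimal sketch in Python of the kind of response an Alexa custom skill returns to keep the session open and ask for the missing device. The field names follow the standard Alexa Skills Kit response format; the wording and the function itself are hypothetical, not the actual Izzy skill code.

# Hedged sketch: roughly what a custom skill could return when the
# {device} slot is empty, so Alexa keeps listening for the device name.
def build_device_prompt():
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText",
                             "text": "Which device would you like to turn on?"},
            "reprompt": {"outputSpeech": {"type": "PlainText",
                                          "text": "Please say a device name."}},
            "shouldEndSession": False,  # keep the session open for the answer
        },
    }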

 

You may also want to try the voice training if you see from the Echo app that she did not understand you correctly.

 

Benoit.

Posted

Not using this yet as I am pretty happy with the Hue bridge solution, but I suggest, based on the comments here, that you rethink your distribution of utterances for unusual requests.  I would wager that relatively few people run multiple ISYs, and yet you have a lot of phrases aimed at selecting and querying for that case.  Many of the phrases aimed at these activities are similar to phrases aimed at the 98% case of dealing with devices in one house attached to one ISY.  This will inevitably cause conflict/interference with these much more common utterances.  I think you'd be better advised to be very narrow in the phrases that you allow for unusual actions - say, only accept "select ISY" or something like that for the multiple-ISY case.  This should mean that you don't have folks dealing with some of the confusion, like "switch izzy" versus "switch light".  The usability of this sort of spoken interface is affected by all the other phrases that are meaningful, not just the ones aimed at what you are trying to do.  Anyway - just a thought.

Posted

Not using this yet as I am pretty happy with the Hue bridge solution, but I suggest, based on the comments here, that you rethink your distribution of utterances for unusual requests.  I would wager that relatively few people run multiple ISYs, and yet you have a lot of phrases aimed at selecting and querying for that case.  Many of the phrases aimed at these activities are similar to phrases aimed at the 98% case of dealing with devices in one house attached to one ISY.  This will inevitably cause conflict/interference with these much more common utterances.  I think you'd be better advised to be very narrow in the phrases that you allow for unusual actions - say, only accept "select ISY" or something like that for the multiple-ISY case.  This should mean that you don't have folks dealing with some of the confusion, like "switch izzy" versus "switch light".  The usability of this sort of spoken interface is affected by all the other phrases that are meaningful, not just the ones aimed at what you are trying to do.  Anyway - just a thought.

 

On that topic, I plan to remove "switch to", as it is too close to "switch on" and "switch off".

 

That will help.
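
To make the overlap concrete, here is a rough illustration in the old custom-skill utterance-list style; the intent and slot names below are made up for the example, not the skill's real interaction model.

# Hypothetical utterance lines ("IntentName  sample phrase with {slots}").
# The intent and slot names are invented for illustration only.
UTTERANCES = [
    "TurnOnIntent    switch on {device}",
    "TurnOffIntent   switch off {device}",
    "SelectIsyIntent switch to {isy}",   # one misheard word away from the others
]
# Narrowing the multi-ISY case to a distinct phrase such as
# "SelectIsyIntent select ISY {isy}" removes that near-collision.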

 

Benoit.

Posted

 

And this is the list of sample devices. This does not mean that a device outside this list won't work.

...

 

Hmmm....

 

So, finally success -- I got one (and only one) of my devices to work, for the first time ever, this morning.  I managed this by picking a spoken name from the list of sample devices -- and lo and behold, doing so finally got something to work!

 

It may indeed be that a device spoken name outside the provided list DOES work, but apparently there are limits to this.  I am trying to imagine what Echo does with the sample devices -- perhaps it analyzes the list to find similarities of some sort in order to see if the device it THINKS it heard might be correct.  If so, this would explain why spoken names that are completely correctly recognized (e.g. "fungus") do not do anything, just get the prompt "what device?" over and over.  I presume then, that "fungus" -- even though that's what I put in the spoken name field -- is not considered valid based on some sort of parsing and matching on the example device list?

 

Can anyone more familiar with how Alexa/Echo works comment on this presumption?  Because if this is true, then it means that users have quite a chore ahead of them to find spoken names for all programs, scenes, and devices that meet these magical, mystical Alexa criteria!

 

I'll continue playing to try to find out more on my own, but for now I'll just pick names from the sample list to get things working -- the WAF for this skill is bouncing around zero right now, so I need to demonstrate some success soon!

Posted

A new portal version is now live.

 

Changes:

1 - Fix "lock all doors", which was finding more locks than exist

2 - Scenes can now be dimmed/brightened. 

     We cannot set it to a specific level, but we can say "Alexa, tell izzy to brighten <scene>"

3 - Refresh devices now tells you if it found duplicate spoken names.

4 - Login names are no longer case-sensitive (thanks MWareman)

5 - Mobilinc support

 

To use Mobilinc, you first need to select your "preferred ISY" in your portal user profile.

Then in Mobilinc, simply use your portal credentials along with the host name my.isy.io.

 

NOTE: This update does not help with speech recognition yet.

 

Thanks,

 

Benoit.

Posted

I wonder if the skill has accounted for the fact that Insteon thermostats are set in 1/2-degree increments while Z-Wave thermostats are set in whole-degree increments.

Posted

I wonder if the skill has accounted for the fact that Insteon thermostats are set in 1/2-degree increments while Z-Wave thermostats are set in whole-degree increments.

 

The skill does not handle this correctly today; it assumes setpoints are NOT in 1/2-degree increments. A fix is required.
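
For illustration only (this is not the actual fix), the adjustment is roughly the following, under the assumption that Insteon thermostat setpoints are expressed in half-degree units (raw value = degrees x 2) while Z-Wave setpoints are whole degrees:

# Hedged sketch of the unit handling described above; the half-degree
# convention for Insteon is the assumption being discussed, not the
# skill's real code.
def to_raw_setpoint(degrees, is_insteon):
    if is_insteon:
        # Insteon thermostats: raw value counts 0.5-degree steps
        return int(round(degrees * 2))
    # Z-Wave thermostats: raw value is whole degrees
    return int(round(degrees))

def from_raw_setpoint(raw, is_insteon):
    return raw / 2.0 if is_insteon else float(raw)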

 

Benoit.

Posted

In the utterance list you provided, I notice a line stating "shut off" and one stating "shut off {device}".

 

Depending on how Amazon matches speech against the utterance list items:

 

1) shortest match

2) first match

3) longest match

 

That might explain some abnormal behaviors depending on the way the utterance list is organized. 
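
As a toy illustration of why that could matter (purely hypothetical; this is not how Amazon's matcher actually works), a naive "first match" over those two lines behaves very differently from a "longest match":

# Toy matcher: return the first utterance whose fixed words are a prefix
# of what was heard. With "shut off" listed before "shut off {device}",
# the bare phrase wins and the device words are discarded.
UTTERANCES = ["shut off", "shut off {device}"]

def naive_first_match(heard):
    words = heard.lower().split()
    for utt in UTTERANCES:
        fixed = [w for w in utt.split() if not w.startswith("{")]
        if words[:len(fixed)] == fixed:
            return utt
    return None

print(naive_first_match("shut off kitchen light"))  # -> "shut off" (device lost)
# A longest-match strategy would prefer "shut off {device}" instead,
# which is exactly why the matching order could matter here.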

Posted

Alexa hears exactly what I say according to the app. 

Posted

Hmmm....

 

So, finally success -- I got one (and only one) of my devices to work, for the first time ever, this morning.  I managed this by picking a spoken name from the list of sample devices -- and lo and behold, doing so finally got something to work!

 

It may indeed be that a device spoken name outside the provided list DOES work, but apparently there are limits to this.  I am trying to imagine what Echo does with the sample devices -- perhaps it analyzes the list to find similarities of some sort in order to see if the device it THINKS it heard might be correct.  If so, this would explain why spoken names that are completely correctly recognized (e.g. "fungus") do not do anything, just get the prompt "what device?" over and over.  I presume then, that "fungus" -- even though that's what I put in the spoken name field -- is not considered valid based on some sort of parsing and matching on the example device list?

 

Can anyone more familiar with how Alexa/Echo works comment on this presumption?  Because if this is true, then it means that users have quite a chore ahead of them to find spoken names for all programs, scenes, and devices that meet these magical, mystical Alexa criteria!

 

I'll continue playing to try to find out more on my own, but for now I'll just pick names from the sample list to get things working -- the WAF for this skill is bouncing around zero right now, so I need to demonstrate some success soon!

 

Having worked with this for a few months already and having access to insights from the Amazon certification team, I will share my understanding.

 

1 - The first step is that Alexa tries to understand what you say. This is what you see in the Echo app.

2 - Then if it finds that you are referring to a skill, it will try to match that with one of the skill's utterances using a fuzzy search.

3 - Then it will invoke the skill, passing it the intent and the slots. The intent is the first word in the utterance list, and the slots are simply variables.
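
As a concrete (and simplified) picture of step 3, the skill ends up receiving something shaped roughly like this; the intent and slot names are invented for the example, and the real Alexa request carries more fields:

# Rough shape of what the skill receives after steps 1-3 (simplified).
incoming_request = {
    "request": {
        "type": "IntentRequest",
        "intent": {
            "name": "TurnOnIntent",  # chosen from the utterance list
            "slots": {
                "device": {"name": "device", "value": "kitchen light"},
            },
        },
    },
}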

 

If someone has trouble with what the Echo app displays, the first thing to do is follow the voice training. Each person pronounces slightly differently, and Alexa can learn. Just regular use of Alexa also helps. You may speak a device name a few times and she does not understand it, then she starts to pick it up; that's all in the first step.

 

At step 2, it determines where to route the request. If she hears:

open <skill invocation name>

tell <skill invocation name> to

ask <skill invocation name> to
ask <skill invocation name> about
 
Then she figures out it's a skill; she will look at the utterances and attempt to pick the closest one. An exact match is not required.
 
Each utterance can have slots like {device} or {percent}. Alexa will attempt to fill these slots with what she heard. She first looks at the list of samples for that slot type. If she can't find it, she makes a guess by using some sort of weighting. Having a word spoken often will increase the weight. Putting a word in the samples increases the weight.
 
If she thinks something is unlikely to have been spoken on purpose, she will simply drop it.
 
This is what happens with "light" and "lights". It's not anywhere in the samples currently and somehow she thinks she may not have understood well in the first place, so she drops it and the skill receives a device name without "light" at the end.
 
This is different from connected home. Accuracy is better with connected home because Alexa knows about the user's devices. They're in her database, so she can do a fuzzy search against it and find the right device. In the case of a skill, all she has is a list of sample devices. Outside that list, she has to guess, and then in the skill, it has to be an exact match with the device name (or the spoken field if present).
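
That last step (the exact match inside the skill) amounts to roughly the lookup below. This is a hedged sketch, not the portal's real code, and the "spoken"/"name" fields are stand-ins for however the device list is actually stored.

# Hedged sketch of the exact-match lookup described above.
def find_device(heard_name, devices):
    heard = heard_name.strip().lower()
    for dev in devices:
        spoken = (dev.get("spoken") or dev.get("name", "")).strip().lower()
        if spoken == heard:  # exact match required; "light" vs "lights" fails
            return dev
    return None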
 
Benoit.
Posted

Configured Mobilinc as stated. It just hangs with Network Error. Should we be using the default ports 80 and 443?

 

Ron

 

Yes, default port 443. It needs to be HTTPS.

 

Make sure your host name is "my.isy.io", and remove the trailing path.
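
If it still hangs, one quick sanity check (outside of Mobilinc, just to rule out a local network or firewall problem; it verifies reachability only, not credentials) is to confirm that an HTTPS connection to my.isy.io on port 443 can be opened at all:

# Minimal reachability check against the portal host over TLS.
import socket, ssl

ctx = ssl.create_default_context()
with socket.create_connection(("my.isy.io", 443), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname="my.isy.io") as tls:
        print("Connected; TLS protocol:", tls.version())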

 

Benoit.

Posted

I don't know about anyone else, but I would never say "master's bedroom" or "master's room"; I would, however, say "master bedroom" or "master room".

Note taken for next release.

Posted

Benoit,

 

I know it kind of goes against the way the Echo is designed to be used, BUT it sounds like it would be a lot more accurate if it passed off everything after the command to you... For example:

 

tell izzy to turn on the Den lights and the bathroom lights and the bedroom lights.  If Alexa recognized the command as "turn on" and then passed you "Den lights and the bathroom lights and the bedroom lights" with the utterance of turn on, you could simply scan that string for matches to the "spoken words" configured in the ISY, and it would allow multiple devices to be turned on at once...
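
A minimal sketch of the kind of scan Ron is describing (the spoken names below are made up, and this is not how the skill currently works):

# Scan the tail of the request for every configured spoken name, so one
# command could address several devices at once.
SPOKEN_NAMES = ["den lights", "bathroom lights", "bedroom lights"]  # hypothetical

def devices_in_phrase(phrase):
    phrase = phrase.lower()
    return [name for name in SPOKEN_NAMES if name in phrase]

print(devices_in_phrase(
    "the den lights and the bathroom lights and the bedroom lights"))
# -> ['den lights', 'bathroom lights', 'bedroom lights']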

 

Ron


Master Bedroom not Master's...

my .02

Posted

I get a chuckle when I read these posts.

 

Amazon assigned the Echo (a physical device) a name (Alexa) which happens to be, in most countries, associated with the female gender.

In the posts the Echo has been anthropomorphized into a female person and is referred to as such. Makes me think of Margaret Mead.

 

Alexa skills are non-trivial to develop, and I have great respect/empathy for Benoit. He has kept his cool dealing with all the people in this forum.

 

Many of us are using the Hue emulator, as most of our needs are dim/on/off.  I don't change a thermostat setting often, maybe twice a year. Since the Hue is a "connected device", the majority of the problems most people are having with the Alexa skill disappear.

 

I use Turn On; Turn On The; Turn Off; Turn Off The; Shut; Shut Off; Shut Off The; Set <device name> to XX;  Set <device name> to XX percent; Set The <device name> to XX;  Set The <device name> to XX percent; 

 

To handle multiple lights, I define a scene and give it a name ending in "Area". Ergo, for all the lights in my kitchen (ceiling lights, laundry room lights, peninsula lights, counter lights) I can say "Shut The Kitchen Area".

 

For fans I use programs triggered from Alexa by: Turn on the <room> fan Low, Turn on the <room> fan Medium, Turn on the <room> fan High, and Turn off the <room> fan.

 

Using the HA Bridge (the Hue emulator) I can set up any URL and command to be sent (one for On and one for Off for each device name) to any device on my LAN.  That is how I control my theater, which is actually controlled by a PC.
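
Roughly, each HA Bridge entry just maps a spoken name to an "on" URL and an "off" URL; the sketch below uses the ISY's standard REST commands (DON/DOF), but the exact HA Bridge field names vary by version and the host and address are placeholders, so treat this as an illustration only.

# Illustrative mapping only; field names depend on the HA Bridge version,
# and "192.168.1.10" / "<node or scene address>" are placeholders.
kitchen_area = {
    "name": "Kitchen Area",
    # ISY REST commands: DON = turn device/scene on, DOF = turn it off
    "onUrl":  "http://192.168.1.10/rest/nodes/<node or scene address>/cmd/DON",
    "offUrl": "http://192.168.1.10/rest/nodes/<node or scene address>/cmd/DOF",
}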

 

I get about 95% accuracy for speech recognition, as my Echo is located in a large room (about 1600 sq ft with 12-foot ceilings) with hard surfaces, so it has a high reverb factor (plaster walls and marble floors).  I am playing with mounting the Echo on the ceiling.

 

Once the ISY is certified as a connected device, I hope it will at least have the same capability as the Hue.

Posted

Once the ISY is certified as a connected device, I hope it will at least have the same capability as the Hue.

 

It will, and recognition will be just as good.

 

Benoit.

Posted

A general thought (and something I should probably post on Amazon's Alexa developer forum)...

 

Instead of having just one list of generic "one size fits all" utterances (Interaction Model) per app, it would be nice if Amazon allowed Echo apps to have a dynamically customizable Interaction Model per user.  This way, the Interaction Model for any one user could include the actual device names found during the device refresh, thereby greatly improving voice recognition for that particular household.

 

-Randy

Posted

I agree with Randy (though I doubt Amazon will change this in the near future). I have quite a few device names that don't match the master list of utterances, and I'm not really keen on changing the names.

Posted

I'm getting a message that the ISY is not online (no portal). When I select Configuration > Portals, it states that Portal Integration is Online and Registered. Refresh does not help, neither from the ISY menu nor the portal menu.

Posted (edited)

Benoit,

 

Based upon your statement regarding the "Connected Device" capability for the ISY, perhaps the skill should concentrate on handling things that are not simple On/Off/Dim.  This covers things like thermostats (all of the Climate options) and things that one would address with Open/Close (e.g. locks and doors).  I would also like the skill to be able to report the state of any ISY device with:

"Alexa, ask IZZY the state of <device/Scene/Program name>, or if must be: "Alexa, ask Izzy to Tell me the state of . . . "

 

In text representing speech, the comma signifies a short pause and the period a long one.  

 

I also agree that in the "Connected Home" a request should be able to end with "and . . ." or "and then . . .".  I requested this of Amazon a while back, but it is not on their priority list. I also requested that they make Bluetooth two-way so external Bluetooth speakers would work, and that they implement the Bluetooth AVRCP protocol to control other Bluetooth-based A/V systems that act as AVRCP clients. None of these requests should require a hardware change.

Edited by barrygordon
Posted

I'm getting a message that the ISY is not online (no portal). When I select Configuration > Portals, it states that Portal Integration is Online and Registered. Refresh does not help, neither from the ISY menu nor the portal menu.

 

Your ISY, with the UUID ending in :eb, is indeed not online on the portal.

 

Benoit.

Posted

Benoit,

 

Based upon your statement regarding the "Connected Device" capability for the ISY, perhaps the skill should concentrate on handling things that are not simple On/Off/Dim.  This covers things like thermostats (all of the Climate options) and things that one would address with Open/Close (e.g. locks and doors).  I would also like the skill to be able to report the state of any ISY device with:

"Alexa, ask IZZY the state of <device/Scene/Program name>, or if must be: "Alexa, ask Izzy to Tell me the state of . . . "

 

In text representing speech, the comma signifies a short pause and the period a long one.  

 

I also agree that in the "Connected Home" a request should be able to end with "and . . ." or "and then . . .".  I requested this of Amazon a while back, but it is not on their priority list. I also requested that they make Bluetooth two-way so external Bluetooth speakers would work, and that they implement the Bluetooth AVRCP protocol to control other Bluetooth-based A/V systems that act as AVRCP clients. None of these requests should require a hardware change.

 

The main benefit of the skill over the connected home is extra functionality like locks and thermostats, as you pointed out. But I don't think that also handling device turn on, turn off, brighten and dim hurts lock or thermostat functionality.

 

The skill can report the status of a device (no matter what the type is), and the status of a program. Scenes unfortunately don't have a status.

For a device:

Alexa, ask izzy to get the status of {device}

 

For a program:

Alexa, ask izzy to get the status of program {program}

 

There are many variations of those.
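
Behind an utterance like that, the lookup on the ISY side boils down to something like the sketch below: read the node's ST (status) property over the ISY REST interface. This is a simplified illustration, not the portal's code; the host, credentials, and node address are placeholders.

# Hedged sketch: answer "get the status of {device}" by reading the node's
# ST (status) property from the ISY REST interface. Placeholders throughout.
import xml.etree.ElementTree as ET
import requests  # assumes the requests package is available

def device_status(isy_host, user, password, node_address):
    resp = requests.get("http://%s/rest/nodes/%s" % (isy_host, node_address),
                        auth=(user, password), timeout=10)
    resp.raise_for_status()
    root = ET.fromstring(resp.text)
    prop = root.find(".//property[@id='ST']")
    return prop.get("formatted") if prop is not None else "unknown"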

 

> In text representing speech, the comma signifies a short pause and the period a long one.  

Did you notice a place where a change would be appropriate?

 

Thanks,

 

Benoit.
