
Getting Serious - When Comm issues strike


ELA


IndyMike, I have also had the forum software time out if I compose too long, losing everything.

 

OK I have a small system. 9 ApplianceLincs HW 1.3 FW v.28.

System is fine.

Decided to get newer hardware ApplianceLincs.

HW 4.1, FW v.32 at the time of purchase.

Communications went down the toilet.

Put old ones back.

Everything back to OK.

 

So even though the 1.3's have zener diodes that get real warm, and the filter cap also gets warm, I am staying with the 1.3's.

Except for one new HW 4.2, FW v.38, on an inductive load that drove the 1.3's crazy.

 

All the HW 4.1s are in a storage box.


Hi Brian,

 

I remember you commenting on the V4.1 APL's previously. I do have one V4.15 currently installed. I've been running tests where I have been removing all of my I2 capable devices (I don't have many). I had forgotten about the APL - I'll be adding that to the list.

 

Thanks for reminding me of the plug-ins,

 

IM


Hi IndyMike,

I was pretty sure it was one of your posts that talked about receiving extended responses when not expected.

 

I'm not sure I understand your last post.

Your diagrams are difficult to make out. Can you possibly leave the red colored stuff off and emphasize the blue, maybe with a white background?

Are you saying you think that there could be extended responses occurring during the normal "wait period," and implying that they must be RF then (considering the timing required for an extended msg)?

 

I cannot see clearly enough on your captures, but I often see large spikes during the "wait" periods when I have dimmers on.

 

Up until this point I have been focused on messaging. Now that I appear to be able to make this repeatable (hopefully, anyway, since I have found it in two different systems now), I intend to take a closer look at the signal levels at that point, if I can possibly be lucky enough to catch one.

 

I totally sympathize with losing a long post. That has happened to me a few times. I try my best to remember to copy to the clipboard before I press review.

 

Happy hunting ...


Hello ELA,

 

Answers to your questions -

 

Hi IndyMike,

I'm not sure I understand your last post.

Your diagrams are difficult to make out. Can you possibly leave the red colored stuff off and emphasize the blue, maybe with a white background?

 

I was trying to show the relative timing between a "good" standard message transmit (baseline) and one that appears to have "extended message interference" (tests 4 and 6). I will try to expand the time scale and focus on the responses alone. This will take a little time as all of my data is captured and post-processed.

Are you saying you think that there could be extended responses occurring during the normal "wait period," and implying that they must be RF then (considering the timing required for an extended msg)?

 

I cannot see clearly enough on your captures, but I often see large spikes during the "wait" periods when I have dimmers on.

The "spikes" highlighted by the red ovals are Insteon communications that are occurring during the standard message wait period (1/2 60 Hz cycle after the 5 cycle standard message comm). This is not RF - I have no RF devices installed (other than the dual band PLM). These are also not attributable to dimmer impulse spikes - there are multiple spikes at the Insteon frequency where there were none during the PLM transmit sequence.

 

As an aside, I have been running head-to-head communication tests between my "new" dual band 2413S PLM (V1.0) and my old 2412S PLM (rev 2.75). My older 2412S consistently performs better in my system when performing full link table reads (8,000+ reads requiring over an hour to complete). It may be my system configuration, but the 2412S performs nearly flawlessly.

 

I'll try to post the expanded plots tomorrow. I've got a football date tonight with my neighbor (ND vs USC) that we've been doing for the past 30+ years. If I'm late, I buy the "refreshments" for the next 5 years.

 

IM


This is from a post on a different forum back in Feb 2010. It relates to seeing Extended messages. They were being reported on a SHN device, but I was able to recreate the same thing using an ICON Switch. These odd Extended messages have been around a long time.

 

-----------------------------------------------------------------------------

 

I have been able to recreate the symptom on both of my EZIO2X4s using the 0x49 (Read Input Port) command and two different ICON switches using the 0x19 (Ping) command. It looks like the PLM is passing out a garbage message every once in a while.

 

2010-02-20 16:28:20.194 TX 02 62 09 FD 5D 05 49 00

2010-02-20 16:28:20.225 RX SENTINSTEON=0F 44 DC 09 FD 5D 05 49 00 06

2010-02-20 16:28:20.772 RX RECEIVEINSTEONRAW=09 FD 5D 0F 44 DC 22 49 00

2010-02-20 16:28:20.928 RX RECEIVEINSTEONEXT=09 FD 5D 0F 44 DC 12 49 00 66 F8 E5 9E 65 54 A3 6A BF F0 38 E5 9E 65

 

2010-02-20 16:36:32.210 TX 02 62 04 5A F1 05 19 00

2010-02-20 16:36:32.241 RX SENTINSTEON=0F 44 DC 04 5A F1 05 19 00 06

2010-02-20 16:36:32.444 RX RECEIVEINSTEONRAW=04 5A F1 0F 44 DC 21 00 00

2010-02-20 16:36:32.663 RX RECEIVEINSTEONEXT=04 5A F1 0F 44 DC 12 FF C0 B1 F0 3E 58 0C 7F 8F A1 E6 55 4D EE B7 FF

 

2010-02-20 16:41:35.366 TX 02 62 04 5A F1 05 19 00

2010-02-20 16:41:35.397 RX SENTINSTEON=0F 44 DC 04 5A F1 05 19 00 06

2010-02-20 16:41:35.991 RX RECEIVEINSTEONRAW=04 5A F1 0F 44 DC 22 00 00

2010-02-20 16:41:36.241 RX RECEIVEINSTEONEXT=04 5A F1 0F 44 DC 55 FF C0 DD 50 3E 58 0C 7F 8F A1 E6 55 4D EE B7 FF

 

2010-02-20 16:54:16.600 TX 02 62 04 5B 2E 05 19 00

2010-02-20 16:54:16.632 RX SENTINSTEON=0F 44 DC 04 5B 2E 05 19 00 06

2010-02-20 16:54:17.178 RX RECEIVEINSTEONRAW=04 5B 2E 0F 44 DC 22 00 00

2010-02-20 16:54:17.350 RX RECEIVEINSTEONEXT=04 5B 2E 0F 44 DC 12 00 00 BF 11 6B C4 28 64 00 6F 66 55 4D EE B7 FF

 

2010-02-20 16:55:20.757 TX 02 62 0C B0 AC 05 49 00

2010-02-20 16:55:20.788 RX SENTINSTEON=0F 44 DC 0C B0 AC 05 49 00 06

2010-02-20 16:55:21.038 RX RECEIVEINSTEONRAW=0C B0 AC 0F 44 DC 21 49 00

2010-02-20 16:55:21.210 RX RECEIVEINSTEONEXT=0C B0 AC 0F 44 DC 12 88 80 70 E1 6B C4 28 64 00 6F 66 55 4D EE B7 FF

grif091
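The log format above is regular enough that a script can hunt for these rogue responses. Below is a rough Python sketch of the idea. The field layout (from address, to address, flags, cmd1, cmd2, then the 14 data bytes on the EXT lines) is inferred from the excerpt, and the flags-byte decoding (message type in bits 7-5, extended bit 4, hops left in bits 3-2, max hops in bits 1-0) is the commonly published layout - treat both as assumptions rather than gospel.

```python
# Sketch: scan PowerHome-style event log lines for unexpected extended
# responses. Field layout is inferred from the excerpt above; the flags
# byte decoding follows the commonly published Insteon layout.

import re

RX_RE = re.compile(r"RX RECEIVEINSTEON(RAW|EXT)=([0-9A-F ]+)", re.I)

def decode_flags(flags):
    return {
        "type": (flags >> 5) & 0x7,      # e.g. 1 = ACK of direct message
        "extended": bool(flags & 0x10),  # extended-message bit
        "hops_left": (flags >> 2) & 0x3,
        "max_hops": flags & 0x3,
    }

def parse_rx(line):
    m = RX_RE.search(line)
    if not m:
        return None
    kind = m.group(1).upper()
    b = [int(x, 16) for x in m.group(2).split()]
    return {
        "kind": kind,                  # RAW (standard) or EXT (extended)
        "from": b[0:3],
        "to": b[3:6],
        "flags": decode_flags(b[6]),
        "cmd1": b[7],
        "cmd2": b[8],
        "data": b[9:],                 # 14 user-data bytes on EXT lines
    }

def unexpected_extended(log_lines, sent_cmd1):
    """Return EXT responses, or responses whose cmd1 != what was sent."""
    hits = []
    for line in log_lines:
        msg = parse_rx(line)
        if msg and (msg["kind"] == "EXT" or msg["cmd1"] != sent_cmd1):
            hits.append((line.strip(), msg["flags"]))
    return hits

if __name__ == "__main__":
    sample = ("2010-02-20 16:28:20.928 RX RECEIVEINSTEONEXT="
              "09 FD 5D 0F 44 DC 12 49 00 66 F8 E5 9E 65 54 A3 6A BF F0 38 E5 9E 65")
    for line, flags in unexpected_extended([sample], sent_cmd1=0x49):
        print(flags, line)
```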


IndyMike,

I noticed you are clipping your peaks now? Germanium/Schottky diodes?

I guess we think alike? I had just added a 1N4148 back-to-back diode clip to my o-scope monitor to allow me to see the lower level signals.

I like yours better!

 

When you say "there are multiple spikes at the Insteon frequency," that is the detail I would be interested in - e.g., did you zoom in on the time scale to confirm they are at the Insteon frequency?

Another nice feature of my device is that I have an o-scope trigger output that allows me to zoom in on any zero cross of my choosing from the start of a msg.

I hope to make the erroneous msgs repeatable enough to then zoom in and present a waveform.

 

P.S. My football game just ended. No Insteon today :)

 

LeeG,

I appreciate that these extended msgs have been around for a while. It is probably past due to understand why. I do not want to get too distracted by corrupted msgs, but I do want to understand why simulcasting does not appear to be all that it is supposed to be. I think the two may well be related.


I would really like to know where they are coming from. The only thing I proved at the time was that it was not device or device-type specific. I have very few Dual Band devices: two Access Points and a few Dual Band PLMs that come and go as testing requires. The test from 2010 used a 2412 PLM with PowerHome (an easy environment to generate test scenarios), so there was not much mesh activity going on, only between the two Access Points. No wired Dual Band devices then or now.


ELA,

 

Here's an expanded timebase showing the KPL response and Hops alone. The vertical time markers are set to coincide with the gaps in a standard Insteon message. In other words, communication at any of the vertical time markers (.2917, .3417, .3917) is a violation of standard message timing.

 

Additionally, a standard message should have terminated with a gap at the .3917 mark. This response continued past that. I was not able to capture the end of the message (scope ran out of buffer). While it's possible that this is a "rogue" unit responding in invalid time slots, I find it far easier to believe that this is a distant unit responding with an extended message protocol.

 

4_extended_20mv_Zoom.png


Thank you IndyMike for being so kind as to re-present your findings. That is very interesting; of course, we need a bigger buffer to be sure?

Seems like there is never enough buffer. My scope is also somewhat limited and I long for deeper memory.

 

In this case the theory would be? ... data collisions resulting in a mixed message?

 

Thank you LeeG for your interest in knowing what these may mean as well.

The messages in and of themselves do not concern me; it is not knowing why they exist that does. I want to be sure they are not problems in simulcasting resulting in bit shifts/inversions whose CRC just happens to match and are thus passed as valid messages.
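Just to put a rough number on that worry: if the message integrity check really amounts to an 8-bit CRC/checksum (an assumption on my part - the actual check is not documented here), then a randomly garbled message has roughly a 1-in-256 chance of slipping through as "valid". A quick back-of-the-envelope sketch:

```python
# Back-of-the-envelope: how often would randomly garbled messages slip
# past an 8-bit integrity check? Assumes the check behaves like an
# 8-bit CRC/checksum over effectively random corruption (an assumption;
# the real Insteon check is not documented in this thread).

P_FALSE_ACCEPT = 1 / 256          # 2**-8 for an 8-bit check

def expected_false_accepts(corrupted_msgs):
    """Expected number of garbled messages accepted as valid."""
    return corrupted_msgs * P_FALSE_ACCEPT

if __name__ == "__main__":
    # e.g. if a long soak test produced ~1,000 corrupted transmissions,
    # we'd expect a handful to sneak through with a "valid" check.
    print(expected_false_accepts(1000))   # ~3.9
```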

 

 

I hope to do more testing soon but those pesky chores keep getting in the way.

I hope to capture an event (at first at the same zoom level as yours IndyMike so we can compare traces).

 

Of course, as this project seems to go, I am seeing more firmware changes for the test device first. I want to pause the data collection on the first erroneous receive. With this I hope that my scope trace will not be overwritten and I do not have to sit and eyeball the scope for hours. :shock:


Thank you IndyMike for being so kind as to re-present your findings. That is very interesting; of course, we need a bigger buffer to be sure?

Seems like there is never enough buffer. My scope is also somewhat limited and I long for deeper memory.

 

Unfortunately, these are very difficult to catch on the scope. Until Friday I had only seen evidence of Extended responses in the event viewer. Quite frankly, I was extremely lucky to catch two transmissions on the scope.

 

In this case the theory would be? ... data collisions resulting in a mixed message?

 

My current theory is that the local I2 devices (on the same circuit) are repeating faithfully. I'm guessing that a "distant" I2 device is receiving a garbled transmission from the KPL with a correct CRC. It is interpreting the transmission as I2 and responding in kind, thereby colliding with the standard message response.

 

I am trying to verify the above by removing all "other" I2 devices that are not on this circuit. So far so good. I've performed a number of full link scans with no evidence of I2 responses in the event viewer. It is very early in testing this, and it will take many scans to come up with a statistically significant result.
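For anyone wondering what "many scans" needs to mean, here is a small sketch that puts a confidence interval around an observed success rate using the standard Wilson score interval. Nothing Insteon-specific in it, and the trial counts in the example are only illustrative.

```python
# Sketch: Wilson score confidence interval for an observed success rate,
# useful for judging when a link-scan result is statistically meaningful.
# Standard statistics; the example counts below are illustrative only.

import math

def wilson_interval(successes, trials, z=1.96):
    """Return (low, high) 95% interval for the true success probability."""
    if trials == 0:
        return (0.0, 1.0)
    p = successes / trials
    denom = 1 + z * z / trials
    center = (p + z * z / (2 * trials)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / trials
                                   + z * z / (4 * trials * trials))
    return (max(0.0, center - half), min(1.0, center + half))

if __name__ == "__main__":
    # No failures in 500 scans still leaves some room for doubt;
    # no failures in 40,000 pins the rate down much more tightly.
    print(wilson_interval(500, 500))       # ~ (0.992, 1.000)
    print(wilson_interval(40000, 40000))   # ~ (0.9999, 1.000)
```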

 

Of course, as this project seems to go, I am seeing more firmware changes for the test device first. I want to pause the data collection on the first erroneous receive. With this I hope that my scope trace will not be overwritten and I do not have to sit and eyeball the scope for hours. :shock:

 

Now that would be extremely helpful. My scope sampling has been pretty much a crap-shoot. Being able to trigger and stop the scope on a failed transmission would be wonderful...we truly are EE dweebs aren't we?

 

Hope your football turned out better than mine,

 

IM


I seem to have no problem duplicating corrupted messages, either in an isolated test environment or in my living room communications.

 

I wanted to ask if anyone has any direct information on what appear to be "wait periods" between retries when a device is retrying message sends due to a lack of response?

 

I have read in the Insteon Details doc. that it may retry up to 5 times if it does not get a reply. I have not found any specifics on the protocol other than it increases the hop count on each failed attempt up to 3. Both of these are consistent with what I witness.

 

Does anyone have documentation on this apparent "wait period" between retry attempts?

 

Below is a scope trace of an isolated environment that seems to be behaving correctly; there is a 1-msg-slot "wait" between each subsequent retry attempt.

 

Is this consistent with normal expectations?

retries_0_0_2_2.jpg
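Lacking documentation, here is a small sketch that simply encodes the behavior described above - up to 5 retries, the hop count bumped on each failed attempt up to a maximum of 3, and a 1-message-slot wait between attempts - so the expected timeline can be laid alongside a scope trace. It models what the trace and the Insteon Details doc suggest, not a published protocol.

```python
# Sketch: the retry timeline implied by the observations above (up to 5
# retries, hops bumped on each failed attempt up to 3, one message-slot
# "wait" between attempts). This models observed behavior, not a spec.

def retry_schedule(start_hops=0, max_retries=5, msg_slots=1.0, wait_slots=1.0):
    """Yield (attempt, hops_used, start_time_in_slots) for each try."""
    t = 0.0
    hops = start_hops
    for attempt in range(max_retries + 1):       # original send + retries
        yield attempt, hops, t
        t += msg_slots + wait_slots               # message, then the wait
        hops = min(hops + 1, 3)                   # escalate hops, cap at 3

if __name__ == "__main__":
    for attempt, hops, t in retry_schedule():
        print(f"attempt {attempt}: hops={hops}, starts at slot {t:g}")
```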


I had wanted to get a better understanding of the "wait" period I see most often in between retries. Is there a better term to describe that time slot?

 

For now I will refer to it as a "wait period" until I learn more from someone in the know.

 

In the meantime, here is a corrupted msg I captured in the isolated test environment. What I am seeing is that the corrupted msgs occur when someone sends during the ("normally expected"?) wait period. This seems to throw things off.

retries0_0corrupted.jpg

 

 

Below is another where I was also monitoring the transmitter signal of the originator. This allowed me to understand that it was the originator in this case that sent an extended message in the ("normally expected"?) wait slot.

 

Thus far it seems as though devices are getting confused during the retry protocol - not only sending when they should not, but also, in this case, sending the incorrect message type and content.

 

extended_withXmitinfoL.jpg


Here is one more example, this one from device #7 in my living room.

It shows an incorrect extended message and how it causes extra unwarranted network traffic.

 

Isn't it interesting that devices immediately adjacent to this one on each side also exhibit the failures, but at a lesser rate, yet there are devices further away on the same circuit, in both directions, that do not exhibit the issue? This would seem to indicate more than just insufficient signal strength.

 

This is not good. Sure would be nice if we had access to all of "The Details".

 

corruptiontodevice7_scope.jpg


Hello ELA,

 

I'm sorry, but I'm not much help with the "wait" periods between transmissions. The best information I have was presented earlier in this thread (scope trace showing re-try timing)- http://forum.universal-devices.com/viewtopic.php?p=45144#p45144.

 

This is only a sample of 1, and could vary between PLM revisions - not of much use.

 

Your latest post shows exactly what I had envisioned when I first saw communications during the 1/2 cycle gap between packets. A unit that is incorrectly overlaying extended messages on top of standard messages would truly throw things out the window.

 

I ran some testing with all but one of my I2 capable devices removed from my system. I left one I2 SWL on the same circuit as my target KPL (same J-Box). I ran over 40,000 communications to the KPL -

1) 0 extended message responses received (definitive).

2) Communication efficiency only marginally improved. Still seeing some PLM retries.

 

From the above, I'm inferring that I2 overwrites are not "all" of the problems that I'm seeing. What if standard message responders were also simulcasting incorrectly? This would be extremely difficult to catch on the scope since it would be at the bit level.

 

To that end, I've been playing with Hop counts on my system (PLM retries disabled)-

1) 0 Hops - remote devices respond with 94% to 99% reliability at 100 mV p-p input to the PLM. This is somewhat dependent on noise/happenings within the house, but demonstrates that the PLM has good low signal level reception capability.

2) 1 Hop - Near optimal reliability 99% to 100%. Very insensitive to other devices within the house.

3) 2 and 3 hops - Variation in reliability increases (70% to 100%) depending on activity in the home.

 

The above may be due to the way that I've implemented my system (passive coupling - no RF). Nonetheless, I'm wondering if you could try restricting your Hop count to see if your "interference messages" decrease? I'm thinking that fewer Hop counts may limit the ability of other devices to interfere with the messaging.


Hey guys,

 

I know I am not in the league of the main poster of this topic, and the discussions are at the edge of my ability to understand, but I do have a question.

 

I get that you are still gathering data, but are we at a point yet where we can make a generalized statement like:

Sometimes Insteon does not work. It is not a 100% reliable protocol. Sometimes it is not noise or signal absorbers but the other Insteon devices themselves that can affect communications.

 

As many of you know, I have worked hard on communication issues from a commoner's perspective - using brute force and chance (trial and error, lots of error) to try to solve my issues; which is why this thread has been so interesting to me.

 

But if Insteon can interfere with itself, and we can say that at this point, it certainly changes the methodology I will employ when I am doing my huge trial-and-error communications corrections attempts. I know bad devices can cause comms failures, but what I am getting from this thread is that even 'good' Insteon devices can cause comm problems sometimes. Am I getting the gist of the discussion?


Hello Illusion,

 

I'm replying a bit out of turn here (ELA has the lead on this), but my time is limited. I can't speak for him, but it's safe to say that I'm using a microscope to look at the communication efficiency of my system. My tests are rather intensive and can last for over an hour (thousands of device interrogations). That accounts for days of normal system operation. I've had some recent problems with noisy devices and I see some things that I question, but in general, I am happy with the overall performance of my system.

 

While I am happy with the performance of my system, ELA is less so. As a result of his testing, I am looking much closer at my system. We see some things in common, and others we do not.

 

I would hope that the outcome of all of this would be some sort of "best practices" guide that forum members could use to construct and troubleshoot their systems. I don't think we're at that point yet. The device interference theory is a bit new. I would be careful not to generalize this across everyone's systems (we have a sample of two). I for one am still learning and comparing notes (typical Chicken Sh*@ engineer response).

 

The one thing that I believe I can say is that the newer devices (V5.X) appear to be able to respond more consistently. That probably does not come as news since you've already replaced older devices and observed the same.

 

IM


Hi IndyMike,

 

Thanks for your thoughts on this.

 

Unfortunately we do not have complete control over limiting the number of hops.

In case it was not clear I was attempting to limit the number of hops used by sending 0:0 initially. I was then going to work my way up. But retries get in the way...

 

While we can set the max hops number (and I do, all the time), we cannot disable the PLM's internal automatic retries. So when we try to limit the number of hops, that only works if no retries are required.

If the PLM retries due to a lack of response, it will automatically increase the number of hops used, up to 3.

 

I want to try not to speculate too far on this; however, I agree that we will not get a true answer without being able to diagnose this at the bit level. With my device I can trigger and look at the bit level, but that requires the corruption to occur at the exact same place in time each time it occurs, and that is rare in my testing.

 

While I wonder about possible simulcasting issues, I am also suspicious of the PLM "reaching too hard" to dig a message out of the noise level. I have seen several occurrences where the message is corrupted on the very first 0-hop reply. When I zoom in on this error, to the best of my capabilities, I see bit levels that start out at about 100 mV p-p and then decay during the stream to near 20 mV; they then climb in amplitude again near the end of the packet, back up near 100 mV p-p.


Hi Illusion,

 

Along with what IndyMike has said:

 

I do think there are some installations that do not have many issues. This could especially be true if the install were small and you were able to propagate all messages throughout the system without the need for hops or retries, using a hardware coupler.

In other words, if the hop "0" send and reply signal levels are strong enough that the PLM can read them without needing to reach into the noise level (somewhere below 100-200 mV as a guess; again, install dependent).

 

I am hesitant to make any blanket assertions that would cover all installs.

In the beginning of my efforts I was only concerned with signal suckers and noise interference.

 

The closer I looked, the more I suspected older device hardware/firmware as a possible contributor. I think we agree this certainly appears to be true?

 

Now I am also a bit concerned with apparent protocol failures. Again, these may never come into play in some installs. I know my newest purchased PLM has newer firmware than my two prototype testers.

Why can't we be privy to what the new changes are?

 

IndyMike said he is happy with the performance of his system. I am also happy with my system. I worked hard to get to this point and the current issue was a minor nuisance. I only elected to look closer because I could.

 

Each installation will be different. I now believe there are more variables that may affect any one system's performance than Smarthome would lead a person to believe.


I am happy with my system as well. I am back to 99.8-99.9% reliable. Replacing the two KPLs that were having trouble with new ones fixed my latest issues. My posts may make my system seem crappy, but it is not. I have over 200 nodes spread over two separately metered houses on the same transformer and a huge amount of interaction with the modules throughout a single day. I do not typically post about the success of my system, only the failures.


ELA, I agree that some devices with certain firmware have issues.

We have seen SwitchLincs with, I believe, v.35 giving problems, and I know my ApplianceLincs HW 4.1 FW v.32 give me more grief than my old HW 1.3 with FW v.28.

 

I also have a few of the 2412S PLMs with known firmware issues.

 

My small system works best with no Access Points {HW 1.0-1.6} and just the 2413S Dual Band for a few RemoteLincs.


Hi Brian,

Thanks for your input.

This current issue I have been posting about bothers me quite a bit. It has led me to believe that, at least in my system, and I fear a few more, you cannot rely on simulcasting as advertised. I mentioned earlier that my living room "should be" performing beautifully as a poster child for more devices = good. I am getting very strong hops along the entire route from my system PLM to devices in the living room and back. There is also a very strong RF path between all as well.

In a recent effort I added another RF access point near the service and a hardware coupler.

 

If a person could rely totally on the concept that more devices = better and increasing the dual mesh aspect = better then I would have to say, why am I not there?

 

I, and probably most others on this forum, have a system that is very reliant on the ISY's PLM. We want all devices to be able to respond back to the ISY's PLM for status, confirmations, and program direction.

This should not be a problem if hopping/simulcasting were working as advertised.

 

I am now of the opinion that an ISY-based install may require some special consideration. I believe it is important for the ISY-PLM to be as close to the central distribution point of the network as possible (the service cabinet). This gives that all-important device the best access to all other devices via the AC line (within reason).

 

I think most would probably agree this makes sense but would prefer not to go to extra efforts to make this happen. Perhaps some are just lucky that their ISY setup happened to be near their service cabinet, or some probably did it by design.

 

I was trying to avoid going to this extent but I am committing myself now at least to test this theory. Hopefully something else unexpected will not crop up as a result of this big change I am going to make.

 

I bought a long Ethernet cable and will move my ISY-connected PLM from my office/computer area, which is now about 70 ft from the cabinet, to within 10 ft of the cabinet. I will replace an existing Access Point that was there with the ISY's PLM. Of course, the ISY will need to come along as well (keeping the serial cable short).

I considered running a dedicated 120 VAC circuit to the existing location, but pretesting proved that it is not much of an improvement - too much series wire inductance leading into a large capacitive load.
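For anyone curious why the dedicated circuit did not pan out, a crude divider estimate tells the story: at the powerline carrier, the series inductance of a long run has a reactance comparable to the low impedance presented by a heavily (capacitively) loaded panel, so most of the carrier is dropped across the wire. Every value below except the 131.65 kHz carrier frequency is a ballpark guess on my part.

```python
# Crude estimate of carrier-voltage loss across a long branch-circuit
# run feeding a low-impedance (heavily loaded) panel. All values except
# the 131.65 kHz Insteon carrier are assumed ballpark figures.

import math

F_CARRIER = 131.65e3     # Hz, usual published Insteon powerline carrier
L_PER_FT = 0.2e-6        # H/ft, assumed loop inductance of the cable run
R_LOAD = 5.0             # ohms, assumed magnitude of the loaded-panel
                         # impedance at the carrier (lots of capacitive
                         # devices look like a few ohms up at 131 kHz)

def carrier_loss_db(run_ft):
    """Voltage-divider loss of the series run inductance into R_LOAD."""
    xl = 2 * math.pi * F_CARRIER * L_PER_FT * run_ft
    ratio = R_LOAD / math.hypot(R_LOAD, xl)
    return -20 * math.log10(ratio)

if __name__ == "__main__":
    for run in (10, 70):
        print(f"{run} ft run: ~{carrier_loss_db(run):.1f} dB of carrier loss")
```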

 

The theory here is that my hop "0" signal is currently very weak at many devices. The living room suffers worst because it has so many devices (capacitive loading).

By moving the PLM to near the service it will now drive all devices in parallel more directly. I expect more devices to then receive a greater-amplitude hop "0" signal.

 

The hope is that devices will no longer have to "overreach into the noise floor" to pick up the first hop.

I will post how it goes, using the very repeatable living room results as the first gauge of progress.


After several days of testing I have some more interesting results.

I had decided to test relocating my PLM to within 15 ft of the service cabinet, the theory being that it would be better able to drive all Insteon loads, in parallel, from this location.

The downside: a 100 ft Ethernet cable and moving the ISY into the garage.

 

I ran testing prior to the change and after the change with dramatic results in terms of making my living room situation clear up beautifully.

 

After the first test I discovered a new caveat to the corrupted messages that I have been registering.

Prior to moving the PLM I had lots of corrupted messages that contributed to failure rates. These are defined as either incorrect standard-length responses (cmd1 does not match the sent command) or unexpected extended msg responses that also do not match the sent commands.

 

After the change I noticed that I was now getting what I call "Extraneous msg responses" to the Get Engine message I was using to test with. By this I mean that I would get one correct response followed immediately by another, which was usually corrupt.

I changed my test code to treat this separately from corrupted responses. My reasoning: true corrupted responses were failures because I never received a correct response. Extraneous responses I am counting as successful communications because we are receiving the correct response in the first msg; while the second message is corrupt, it is ... extraneous.
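For anyone who wants to reproduce the bookkeeping, here is a tiny sketch of the classification rule just described - a correct first response counts as a success even if a corrupt "extraneous" response follows, and an exchange with no correct response at all counts as a failure. The response records in it are hypothetical, just enough structure to show the rule.

```python
# Sketch of the response-classification rule described above: an exchange
# counts as a success if the first response is correct, even when a
# corrupt "extraneous" response follows; it is a failure if no correct
# response ever arrives. The record format here is hypothetical.

def classify(responses, expected_cmd1):
    """responses: list of dicts like {"cmd1": 0x0D, "corrupt": False}."""
    if not responses:
        return "no response (failure)"
    first = responses[0]
    first_ok = (not first["corrupt"]) and first["cmd1"] == expected_cmd1
    if first_ok and len(responses) > 1:
        return "success with extraneous response"
    if first_ok:
        return "clean success"
    return "corrupted (failure)"

if __name__ == "__main__":
    GET_ENGINE = 0x0D   # cmd1 of the Get Engine query used in the tests
    print(classify([{"cmd1": 0x0D, "corrupt": False}], GET_ENGINE))
    print(classify([{"cmd1": 0x0D, "corrupt": False},
                    {"cmd1": 0x49, "corrupt": True}], GET_ENGINE))
    print(classify([{"cmd1": 0x49, "corrupt": True}], GET_ENGINE))
```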

 

Here are the results of before and after for comparison. Each device was tested for 100 message send/receives. Sending the Get Engine msg at hop setting 0:0.

 

comparebefore_afterPLMmove.jpg

 

 

After the tests I once again retested device #7 from the original computer location, just to confirm that it again failed as miserably as it did before the move. It once again got Corrupted msgs rather than Extraneous ones.

 

I think this pretty effectively confirms that devices in the living room were receiving a very weak hop #0. The devices were attempting to "overreach themselves," digging into the noise floor and producing corrupted results.

Even though hops 1-3 were very strong in amplitude, they were not cutting it!

 

As can be seen in the results, devices that were performing poorly greatly improved; note that they no longer needed to auto-retry at hops 2 or 3. The worst case now uses hop 1.

 

One other interesting bit of information.

 

I had mentioned in my early testing of this room that the recliner was a bad signal sucker. I had elected not to filter it because I could not register any difference with it unplugged in earlier tests (when the room was performing poorly overall).

 

After the PLM relocation the results were greatly improved, but one device, #10, was still struggling at 98% while all the others were at 100%. It just happens this is where the recliner is located. I added a FilterLinc to the recliner, and device #10 then joined all the others at the 100% level.

 

I consider this an extreme effort to improve system performance, and not something anyone should have to go through for a reliable system. While the system was pretty reliable as it was (I no longer noticed any issues visually), I was not happy with the testing statistics. I also wanted to prove or disprove the theory of centrally locating the ISY-PLM.

 

I now feel very comfortable recommending that, for ISY-based networks, the PLM be located as close as possible to the central distribution point of the circuits (usually the service cabinet). While this may not be convenient in many installs, and it was not in mine, the results (thus far) prove its worth. I will still keep watch to see that no other rooms have suffered a loss from this move, although I would only expect them to benefit as well.

 

At this point I truly hope to go silent on this thread except to answer any questions/comments. That is of course unless some other issues should rear their heads ..... I sure hope not!


  • 1 month later...

Hi Brian,

 

As I had posted here I believe the new firmware is enhanced:

http://forum.universal-devices.com/viewtopic.php?f=28&t=5923&p=53174#p53174

 

What prompted you to upgrade?

I would be curious to know if you observe any benefits from the newer firmware.

 

My network has been doing wonderfully and so I am now recovering from "getting too serious - when comm issues strike"

