
Brultech - ISY NodeLink nodes (GEM data) stop displaying data.


mblitz


Posted

Still getting error messages with a 30-second update rate.

 

Sun 2017/01/29 02:30:10 PM System -170001 UDQ: Queue(s) Full, message ignored

Sun 2017/01/29 02:30:23 PM System -170001 UDQ: Queue(s) Full, message ignored
Sun 2017/01/29 02:30:33 PM System -170001 UDQ: Queue(s) Full, message ignored
Sun 2017/01/29 02:30:54 PM System -170001 UDQ: Queue(s) Full, message ignored
Sun 2017/01/29 02:31:05 PM System -170001 UDQ: Queue(s) Full, message ignored
Sun 2017/01/29 02:31:22 PM System -170001 UDQ: Queue(s) Full, message ignored
Sun 2017/01/29 02:31:33 PM System -170001 UDQ: Queue(s) Full, message ignored
Sun 2017/01/29 02:31:43 PM System -170001 UDQ: Queue(s) Full, message ignored
Sun 2017/01/29 02:31:57 PM System -170001 UDQ: Queue(s) Full, message ignored
Sun 2017/01/29 02:32:22 PM System -170001 UDQ: Queue(s) Full, message ignored
Sun 2017/01/29 02:32:33 PM System -170001 UDQ: Queue(s) Full, message ignored
Sun 2017/01/29 02:32:59 PM System -170001 UDQ: Queue(s) Full, message ignored
Sun 2017/01/29 02:33:23 PM System -170001 UDQ: Queue(s) Full, message ignored
Sun 2017/01/29 02:33:33 PM System -170001 UDQ: Queue(s) Full, message ignored
Sun 2017/01/29 02:33:52 PM System -170001 UDQ: Queue(s) Full, message ignored
Sun 2017/01/29 02:34:02 PM System -170001 UDQ: Queue(s) Full, message ignored
Sun 2017/01/29 02:34:02 PM System -170001 UDQ: Queue(s) Full, message ignored
Sun 2017/01/29 02:34:13 PM System -170001 UDQ: Queue(s) Full, message ignored
Sun 2017/01/29 02:34:26 PM System -170001 UDQ: Queue(s) Full, message ignored
Sun 2017/01/29 11:39:08 AM System -170001 [TCP-Conn] -1/-140002, ISY
Sun 2017/01/29 01:19:38 PM System -170001 [TCP-Conn] -1/-140002, ISY
Sun 2017/01/29 02:35:25 PM System -170001 [TCP-Conn] -1/-140002, ISY
Sun 2017/01/29 03:00:12 PM System -170001 [uDSockets] RSub:25 error:6
Sun 2017/01/29 03:00:17 PM System -170001 [uDSockets] RSub:25 error:6
Posted

Hi Marty,

 

If you look at the timestamps, they are a maximum of 15 seconds apart (not 30). So, although the GEM is contributing to what you see, I very much think there's something else there that does things at 10-15 second intervals, and the GEM is just making it worse.
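For reference, the gaps between consecutive log entries can be computed directly from the timestamps (the format string is inferred from the log pasted above):

```python
from datetime import datetime

# A few consecutive timestamps copied from the ISY error log above
stamps = [
    "2017/01/29 02:30:10 PM",
    "2017/01/29 02:30:23 PM",
    "2017/01/29 02:30:33 PM",
    "2017/01/29 02:30:54 PM",
]

times = [datetime.strptime(s, "%Y/%m/%d %I:%M:%S %p") for s in stamps]
gaps = [(b - a).seconds for a, b in zip(times, times[1:])]
print(gaps)  # [13, 10, 21]
```

Running this over the whole log is a quick way to see whether the errors line up with a fixed reporting interval or something more irregular.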

 

With kind regards,

Michel

Posted

Quote (Michel):

"If you look at the timestamps, they are a maximum of 15 seconds apart (not 30). So, although the GEM is contributing to what you see, I very much think there's something else there that does things at 10-15 second intervals, and the GEM is just making it worse."

 

 

Hi Michel,

 

If I disable NodeLink (i.e. GEM), I don't get any error messages, period.  The GEM is sending data to NodeLink at 30-second intervals.  I don't know how often NodeLink sends data to the ISY.  I have confirmed that emoncms is updating every 30 seconds from NodeLink, so I would assume that NodeLink is using the same (30s) interval to the ISY.

 

If NodeLink were not a major contributor, I would expect to see at least some error messages without NodeLink, especially since I have backed off the GEM to 30s.

 

From what io_guy mentioned in earlier responses, NodeLink is sending individual updates for each channel, and has a type of back-off algorithm to pace updates.  It may be that NodeLink is sending the channel updates over a period of seconds, rather than all at the same relative time.  In the sample I posted above, the range is from 10s to 25s. In a sample a few minutes earlier than the one I posted, I see a 30s interval.

 

10s interval

Sun 2017/01/29 02:30:23 PM System -170001 UDQ: Queue(s) Full, message ignored

Sun 2017/01/29 02:30:33 PM System -170001 UDQ: Queue(s) Full, message ignored
 
25s interval
Sun 2017/01/29 02:31:57 PM System -170001 UDQ: Queue(s) Full, message ignored
Sun 2017/01/29 02:32:22 PM System -170001 UDQ: Queue(s) Full, message ignored
 
30s interval
Sun 2017/01/29 02:25:53 PM System -170001 UDQ: Queue(s) Full, message ignored
Sun 2017/01/29 02:26:23 PM System -170001 UDQ: Queue(s) Full, message ignored
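The pacing/back-off behaviour described above might look something like this sketch (the function names, pause, and back-off values are illustrative assumptions, not NodeLink's actual code):

```python
import time

def send_paced(updates, send, pause=0.1, backoff=30.0):
    """Send per-channel updates one at a time, pausing between sends.

    `send` is a hypothetical callable returning True on success; on a
    failed send the sender backs off before retrying. This is a sketch
    of the pacing idea discussed in this thread, not NodeLink's code.
    """
    for channel, value in updates:
        while not send(channel, value):
            time.sleep(backoff)   # give the ISY a break after a failure
        time.sleep(pause)         # spread updates out instead of bursting

# Example with a fake sender that always succeeds
sent = []
send_paced([("ch1", 120.5), ("ch2", 3.2)],
           lambda c, v: sent.append((c, v)) or True,
           pause=0)
print(sent)  # [('ch1', 120.5), ('ch2', 3.2)]
```

The trade-off is latency: with 128 updates and a 100ms pause, a full batch takes roughly 13 seconds to drain, which matches the kind of spread seen in the log intervals.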
 
I think the ISY has a problem consuming a single batch of 32 channels (i.e. 4 * 32 = 128 updates), and that it is not related to the frequency of batches (i.e. 10, 20 or 30s intervals).
I agree with io_guy that the ISY needs an API to allow batching updates.  The current implementation is not scalable.
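The "Queue(s) Full, message ignored" behaviour is consistent with a bounded inbound queue that drops messages when the consumer falls behind. A toy model (this is purely illustrative, not the ISY's actual implementation):

```python
from collections import deque

class BoundedQueue:
    """Toy model: if the consumer drains slower than the producer
    fills, messages beyond the queue depth are silently dropped,
    i.e. 'Queue(s) Full, message ignored'."""
    def __init__(self, maxlen=5):
        self.q = deque()
        self.maxlen = maxlen
        self.dropped = 0

    def offer(self, msg):
        if len(self.q) >= self.maxlen:
            self.dropped += 1      # "message ignored"
            return False
        self.q.append(msg)
        return True

q = BoundedQueue(maxlen=5)
for i in range(8):                 # a burst of 8 updates into a depth-5 queue
    q.offer(i)
print(q.dropped)  # 3
```

With 128 near-simultaneous per-channel updates, any fixed-depth queue smaller than the burst would drop messages regardless of how far apart the bursts are, which is why a batching API would help.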
 
cheers,
marty
Posted

Just for reference, I am running NodeLink with 36 GEM channels, 4 sets of MiLights, and DSCLink, and I don't get the errors you are reporting.

....Barry

Posted

As reported, io_guy has introduced algorithms to reduce traffic to only necessary data changes, but this brings to mind a situation I encountered in my travels.

 

I don't know what the front end of the GEM looks like, but I can imagine it, and that makes me think of this same situation I encountered even more, so I will relate it to you.

 

We had our master SCADA system lock up with massive data changes where the buffers overflowed and locked up the whole city and county reporting. This system collected data from about 40 remote terminal units, each gathering on average about 40 x 16-bit analogue data points and 120 single-bit binary points. I never stopped to count them, but let's just say thousands and thousands of data points. It consisted of dual powerful multi-processor boxes, with multiple front-end comm processors and rendering processors, to handle this much data without choking. Well, the system was choking.

 

Turns out, if you tie a remote system's (think GEM inputs) common input sensing conductor to a 60 Hz source, every input can show On and Off at 120 changes per second. (shhhhh... Engineers tried to share a form C contact between a 120 VAC source and a 24 VDC status input detection... oops! :) )

 

Just a thought: every poll update, NodeLink could be seeing a lot of jitter that it has to report to the ISY. With other processes going on, it could be enough to bog the ISY down.

 

Input values could be watched for changes every poll (jitter) to attempt to diagnose something like this. If the values are not changing every poll, then NodeLink likely isn't the cause, per io_guy's report.
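A change-only (deadband) filter of the kind described above could be sketched like this (the deadband value and function names are assumptions for illustration):

```python
def changed(prev, curr, deadband=0.5):
    """Report a value only when it moves more than `deadband` away from
    the last reported value; small per-poll wobble (e.g. 60 Hz jitter on
    analogue inputs) is suppressed. Deadband of 0.5 is an assumption."""
    return prev is None or abs(curr - prev) > deadband

last_reported = {}

def poll_update(channel, value):
    if changed(last_reported.get(channel), value):
        last_reported[channel] = value
        return True   # would be forwarded to the ISY
    return False      # jitter: suppressed

print(poll_update("ch1", 120.0))  # True  (first sample always reports)
print(poll_update("ch1", 120.2))  # False (within deadband)
print(poll_update("ch1", 121.0))  # True  (real change)
```

Logging how often `poll_update` returns True per poll cycle would show directly whether the inputs are genuinely changing every poll or just wobbling.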

 

Another method to check is to measure any AC ripple on the input lines to ground, to detect whether the whole analogue system is "alive". It could be internal to the GEM or related equipment too.

Posted

Hi io_guy,

 

Thanks so very much. So, the worst-case scenario is 4 x 32 = 128.

 

This is definitely a problem if all of them are changed and the report interval is very short.

 

With kind regards,

Michel

 

Michel, I used to pause 100ms between all sends, but per a previous conversation you asked me to pound the ISY and let it handle the traffic (so now I do).  If the ISY doesn't like the pounding, then it should fail a receive or buffer a response to take a break.  My single-core original Pi has no issue sending it out.

 

mblitz, please enable the ISY verbose log on the Main NodeLink tab and paste a snippet back when you are getting the queue errors.  I want to see what the ISY is returning.  If it's having network issues, then it should send a failed response, and NodeLink would then give it a 30s break.  That doesn't appear to be happening.

Posted

Hi io_guy,

 

Yes, thanks so very much. I really do not think sending 128 messages every 10-20 seconds would cause queue overflow. But, without knowing what else is happening, I cannot be sure. So, having the log snippet is quite valuable.

 

With kind regards,

Michel

Archived

This topic is now archived and is closed to further replies.
