Everything posted by bpwwer

  1. Pulling the ISY response times from the PG3 logs isn't that hard. Someone better at unix shell scripting than me could probably whip up a script that dumps just the timestamp and response time into a file that could then be imported into a spreadsheet to graph it (a rough sketch of one is below). Just seeing how it changes over time may provide some insight; for example, if every day at 4pm the response gets slower, then you'd know to look at programs that start at 4pm. PG3 saves 2 weeks worth of daily logs so you do have some historical data available to correlate with other events/logs. You can dump all the response time entries for the current day with the command:

     grep "ISY Response" /var/polyglot/pg3/logs/pg3-current.log

     Or, to get just the time and response time:

     grep "ISY Response" /var/polyglot/pg3/logs/pg3-current.log | cut -d ' ' -f 1,2,13

     This is from an ssh shell on the Polisy running PG3.
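     Here's a minimal Python sketch of such a script. It assumes the log line format shown above (date and time first, the response time at the end as "<number>ms"); the output filename is just an example.

         #!/usr/bin/env python3
         # Sketch: pull timestamp and ISY response time out of the PG3 log
         # and write them to a CSV that can be imported into a spreadsheet.
         import csv
         import re

         LOG = "/var/polyglot/pg3/logs/pg3-current.log"
         OUT = "isy-response-times.csv"   # example output name

         # date, time, then the trailing "<number>ms" value on the line
         pattern = re.compile(r"^(\S+)\s+(\S+).*ISY Response.*?([\d.]+)ms")

         with open(LOG) as log, open(OUT, "w", newline="") as out:
             writer = csv.writer(out)
             writer.writerow(["date", "time", "response_ms"])
             for line in log:
                 m = pattern.search(line)
                 if m:
                     writer.writerow([m.group(1).rstrip(","), m.group(2), m.group(3)])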
  2. Ok, try version 1.0.4.
  3. @simplextech The log shows this when trying to install:

     8/16/2022, 11:37:50 [pg3] info: [00:0d:b9:59:41:84_1] 'ST-Nuheat' installed into ISY successfully...
     8/16/2022, 11:37:50 [pg3] error: NSChild: ST-Nuheat(1) /bin/sh ./install.sh: /bin/sh: cannot open ./install.sh: No such file or directory
     8/16/2022, 11:37:50 [pg3] debug: NSChild: ST-Nuheat(1) /bin/sh ./install.sh: exited with cause code: 127
     8/16/2022, 11:37:50 [pg3] error: installNs: Error: Non-zero exit code: 127
         at ChildProcess.<anonymous> (/var/polyglot/node_modules/@universaldevices/pg3/lib/services/nodeservers.js:47:30)
         at ChildProcess.emit (node:events:537:28)
         at maybeClose (node:internal/child_process:1091:16)
         at Socket.<anonymous> (node:internal/child_process:449:11)
         at Socket.emit (node:events:537:28)
         at Pipe.<anonymous> (node:net:747:14)

     Then when it tries to start, it fails with:

     ReferenceError: Cannot access 'logger' before initialization
         at process.<anonymous> (/var/polyglot/pg3/ns/000db9594184_1/st-nuheat.js:1:6746)
         at process.emit (node:events:537:28)
         at process._fatalException (node:internal/process/execution:167:25)
     Node.js v18.5.0

     I don't think running version 3.1.2 would cause this, but I've also never tried any node.js node servers on it yet.
  4. Nope, I don't know anything about that node server. Have you tried uninstalling and reinstalling since you did the update?
  5. I understand that you have a very large system and it's possible that no one else has built one out to that same level. Unfortunately, folks won't normally be monitoring threads like this if they aren't having issues, so we may never know. My point is that the current assumption is that the box is just overloaded and something with more "horsepower" will work better. But what if that assumption is false? What if you have one program out of those 900+ that's doing something that is taking 90% of the CPU resources and causing everything else to wait, and changing that one program makes everything better? What if there's a bug in the ISY firmware that you just happen to trigger with one of your programs that causes the ISY to sit in a busy loop for 30 seconds over and over? I like to know the root cause before trying to come up with solutions. It may be that you've simply overloaded the box, but Insteon, Z-wave, and node server devices are typically very low resource drains on the system because of the communication delays. In the time it takes an Insteon device to receive a command and return a status response, the ISY can likely process hundreds of TCP network requests. But we're seeing the ISY take 100-200x the normal time to respond to TCP requests. What is it doing for all that time?
  6. The PG3 log may be too big to display. Try downloading it and PM it to me.
  7. What version of PG3 are you running? What's in the logs of the node servers that aren't starting (if there are any logs)? What's in the PG3 log?
  8. While anything is possible, it's just not real practical to try and reduce the data sent by the node server. WeatherFlow may be one of your more prolific node servers, but it is far from the most prolific. I've created stress test node servers that send far more data to the ISY than the WeatherFlow node server, probably at least 100x more, without problems. I don't believe your issues are solvable by changes to PG3 or the node servers. Perhaps there's a way to profile what the ISY is doing, isolate what's really causing it to be so busy, and re-work that, but I don't know. Without data to determine what the ISY is really doing that causes it to be so slow in responding, there's not much we can do about it. Maybe @Michel Kohanim has a way to debug issues like this or extended log info that would help.
  9. It would still be good to see the PG3 log file that includes an attempt at installing the node server. That's the only way for us to determine what's not working right. The one you attached above says "unavailable" so we can't look at it. Also, since you're running the test version, any feedback? The main changes are in how node servers are presented for installation from the store. It looks like the production version got overwritten with the test version. I'm working to get that corrected. Once the correct version is there, an update should revert it back to the production version.
  10. hmm, that's not good. That version should only be installed manually. What did you last do to update the Polisy? Click the "Upgrade Packages" button in the Admin Console?
  11. Try version 1.0.3
  12. @johnnyt I guess you missed it or I wasn't clear. The WeatherFlow hub sends the data every minute; that's not configurable in the hub or in the node server, it's just the way it works. You can configure how often the WeatherFlow node server queries for forecast data and that's it. Since I don't have any PG3 node servers running on an i994, I don't have any way to directly compare, but those times seem very long for a response. On my IoP, I will occasionally see requests take 200-300ms but most of the time it's 4 or 5ms. From the log snippet, it seems like the ISY is always busy and responding slowly. There's not really anything the node servers or PG3 can do to make the ISY respond faster. Slowing down node servers or having them send less data won't really help; it doesn't appear that they are the cause.
  13. 3.1.2 is a test version of PG3 for developers to test their node servers and provide feedback on the current direction of PG3. As such, there's no expectation that all current node servers will install and run on it. Is there some reason you are running the test version?
  14. Maybe I forgot to upload the new package. I just uploaded it so try again. If you can, test to verify which buttons on the remote correspond to the A-E buttons created for it. You'll probably have to create program(s) with the button press as the trigger as that's the only way they can be used (or maybe as scene controllers, but testing with programs is a bit easier). My guess is that A = open, B = up, C = down, D = close, and E = preset as I think that would match the 3 Button R/L Pico.
  15. You can always go to the PG3 Log menu and select download log there. That will download the PG3 logfile only and if a node server isn't starting, the error will be in the PG3 log, not the node server log.
  16. PG3 has multiple queues and a queue hierarchy. Each node server has its own queue and those queues feed into a common queue. It's mostly the common queue that determines how fast requests are sent to the ISY. It is certainly possible to push the ISY beyond what it is capable of handling and it seems like you have. If it is really taking 10-12 seconds to update just the WeatherFlow data then I think you're pushing the ISY way too hard. The WeatherFlow hub sends out data once a minute (that's not configurable) and on mine, all of the data that needs updating is handled within 1 second. You can look in the PG3 log for lines like:

      ISY Response: [Try: 1] [00:0d:b9:4e:3a:44] :: [200] :: 5.08479ms ...

      Those lines provide information on:
      1) How many tries it took before the ISY accepted and responded to the request. PG3 will make up to 3 attempts. If you're seeing a lot of 2 or 3 try responses, that's because the ISY is dropping the requests PG3 is sending.
      2) How long it took the ISY to respond to the request. The example above took 5ms. The time it takes the ISY to respond should correlate with how busy the ISY is.

      It may be interesting to look for periods where it takes a long time to respond and see if you can determine what the ISY was doing (from the ISY log/event viewer) at those points in time. That may give some insight into what's really causing the issues. (A small sketch that flags slow or retried responses is below.)
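      Along the same lines, here's a small Python sketch that just flags responses that needed a retry or were slow. It assumes the same log location and line format shown above; the 100ms threshold is an arbitrary starting point.

          #!/usr/bin/env python3
          # Sketch: print PG3 "ISY Response" lines that needed more than one
          # try or took longer than a threshold, to help spot busy periods.
          import re

          LOG = "/var/polyglot/pg3/logs/pg3-current.log"
          SLOW_MS = 100.0   # arbitrary threshold, adjust to suit your system

          pattern = re.compile(r"ISY Response: \[Try: (\d+)\].*?([\d.]+)ms")

          with open(LOG) as log:
              for line in log:
                  m = pattern.search(line)
                  if not m:
                      continue
                  tries, ms = int(m.group(1)), float(m.group(2))
                  if tries > 1 or ms > SLOW_MS:
                      print(line.rstrip())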
  17. @JimboAutomates version 1.0.2 of the Caseta node server is available. Let me know about the remote buttons. Currently, it creates 5 button nodes, each of which will trigger on DON. But it may be better to have one node with triggers of the correct type (open/close/raise/lower/preset) if those are the buttons that are sent through. Also, I'd need to know which button (A-E) corresponds to what button on the remote.
  18. I thought I had kept all the shade support from the PG2 version. Are you sure this wasn't something you added locally? This is what I see in the PG2 version:

      if device.get('type') == "SerenaHoneycombShade":
          NodeType = SerenaHoneycombShade
      elif device.get('type') == "QsWirelessShade":
          NodeType = QsWirelessShade

      if not NodeType:
          LOGGER.error("Unknown Node Type: {}".format(device))
          continue

      In any case, I added the TriathlonRollerShade so it should be recognized in version 1.0.2 (a sketch of that change is below). I've also added some support for the remote. But I'm not at all sure about that. The SmartBridge is showing that it has 5 buttons, but when I look up the device, it looks like it has 10 buttons. So I have no idea which of the 10 button states are actually getting reported by the SmartBridge. I suspect it is probably the open/close, raise/lower, preset that are passed through the bridge, but you'll have to experiment with it and let me know. My distributor doesn't seem to carry that product and my on-line searches haven't really found much other than it's a special order device. I'll let you know when I push out version 1.0.2 for you to test with.
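      Something like this sketch, assuming the same dispatch pattern as the snippet above (node_class_for is a hypothetical helper name used only for illustration; the node classes are defined elsewhere in the node server, and only the TriathlonRollerShade branch is new):

          def node_class_for(device):
              """Map a bridge device's type to a node class; None if unrecognized."""
              if device.get('type') == "SerenaHoneycombShade":
                  return SerenaHoneycombShade
              elif device.get('type') == "QsWirelessShade":
                  return QsWirelessShade
              elif device.get('type') == "TriathlonRollerShade":   # added in 1.0.2
                  return TriathlonRollerShade
              return None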
  19. This has nothing to do with the node server; it has no control over how the data is displayed in the AC.
  20. Fri 08/12/2022 11:02:52 AM : [D2D EVENT ] Event [n001_48711] [WINDDIR] [90] uom=76 prec=0
      Fri 08/12/2022 11:02:52 AM : [ n001_48711] WINDDIR 90 (uom=76 prec=0)
      Fri 08/12/2022 11:02:52 AM : [D2D*CMP 000F] STS [n001_48711] WINDDIR B Cannot convert values (from=0E to=4C)
      Fri 08/12/2022 11:02:52 AM : [D2D-CMP 000F] STS [n001_48711] WINDDIR op=5 Event(val=90 uom=76 prec=0) >= Condition(val=1 uom=14 prec=0) --> false
      Fri 08/12/2022 11:02:52 AM : [D2D EVENT ] Event [n001_48711] [GV3] [90] uom=76 prec=0
      Fri 08/12/2022 11:02:52 AM : [ n001_48711] GV3 90 (uom=76 prec=0)

      This is what I see in the Event Viewer. It looks like the IoP isn't handling the comparison of UOM for wind direction properly. @Michel Kohanim @Chris Jahn: the WINDDIR status value is set to UOM 76 (wind direction degrees) but it seems to be comparing it against UOM 14 (degrees) and failing. My program is simply doing "if winddir >= 1", just like @johnnyt's program above.
  21. Yes, but... the ISY and IoP don't really communicate state between them. So while the ISY can continue to manage z-wave devices, having them interact with other devices (scenes, programs, etc.) on the IoP isn't going to work. Given what you posted about your setup initially, I don't think you want to try to do this. I haven't read through everything above, but it sounds like you're trying to decide if you should spend the money on a z-wave stick for the Polisy now or wait for the new UDI z-wave dongle. My question is, what is compelling you to want to migrate now (as opposed to waiting a few months)? Is there something specific that IoP will solve now that the ISY isn't able to do? The big difference between Polisy and Eisy is performance. There are some I/O differences as well, but those will really only affect limited use cases. For node server development, there should be very little difference; node server development doesn't typically run into performance limitations on the Polisy. Just for reference, for PG3 development the UI build takes about 5 minutes on a Polisy and I expect that to improve by at least 10x on an Eisy.
  22. Thanks, that answers the question as to where the failure is. To me, it looks like the first program should evaluate to true and the second to false. You're saying it is doing the opposite. So either we're missing something or the logic is failing. Someone better versed in ISY programs will have to look at this. You could also try creating programs for each condition separately and see how they evaluate. Or try setting a variable to the eto value in the then or else sections. The node server doesn't have anything to do with the program evaluation other than providing the value, which it seems to be doing correctly.
  23. I don't believe the ISY/IOP rounds the value. I think it just truncates the display to the number of decimal places the node server specifies. The node server is rounding the value to 3 (maybe 2) decimal places. If the ISY/IOP was rounding up, the second program wouldn't be true. @CoLong please just post the value the node server sends; that's what's important here. (A quick numeric illustration of truncating vs. rounding follows this post.)
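      As a quick illustration of the difference, here is a tiny Python example; the 0.278 value and the 0.275 threshold are made up for illustration, not the actual values from the node server.

          # Hypothetical value; the real value the node server sends may differ.
          value = 0.278

          # Truncating the display to 2 decimal places vs. rounding to 2 places.
          truncated = int(value * 100) / 100   # 0.27
          rounded = round(value, 2)            # 0.28

          # A program testing, say, "value >= 0.275" (hypothetical threshold)
          # would evaluate true even though a truncated 2-decimal display
          # shows 0.27.
          print(truncated, rounded)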
  24. Either the program logic is correct and the value is between .270 and .299 and the displayed value is truncated, or the program logic is failing and the displayed value is correct. In either case, it's not a problem with the node server, so there's nothing I can do about it. Looking at what the node server actually sends to the IOP will tell us which of those cases is true and the appropriate ticket can be raised to UDI.
  25. Yes, UDI's upgrade script failed. As part of that upgrade it is supposed to re-install all the python packages so that the 3.9 versions are installed, but that doesn't seem to have happened. I don't believe there's any user-accessible way to fix this (right, @Michel Kohanim?) other than removing and re-installing each node server.