Everything posted by bpwwer

  1. What is the actual value of ETo being sent to the IOP? You're seeing what the AC displays, but not what the node server sent. My guess is that the value is between .270 and .299. It looks like the AC isn't honoring the display properties the node server requests. The node server says that value should be truncated to 2 decimal places, but it looks like it's truncating it to 1.
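     A minimal Python illustration of the truncation difference described above, assuming a hypothetical value in the .270-.299 range; truncating to 1 decimal place instead of the requested 2 is enough to turn 0.29x into 0.2:

     ```python
     import math

     def truncate(value, places):
         """Truncate (not round) a value to the given number of decimal places."""
         factor = 10 ** places
         return math.trunc(value * factor) / factor

     eto = 0.295  # hypothetical value in the .270-.299 range

     print(truncate(eto, 2))  # 0.29 -- the display the node server asked for
     print(truncate(eto, 1))  # 0.2  -- what the AC appears to be showing
     ```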
  2. Just FYI, the "errors" in the node server log aren't really errors. I just forgot to change them to debug messages.
  3. @Michel Kohanim What would cause the AC to show that it is writing to a node server node? Would there be something in the log or error log to explain why it's doing this?
  4. Not that I know of. The precipitation info uses a different query from the rest of the weather data. It seems that only the query used for the precipitation data changed.
  5. Yes, not having direct access to the equipment can make developing a node server more difficult. But more than that is the time it takes to develop and support it when you don't have easy access to the equipment. The bottom line with node servers is that unless the developer plans to use the node server themselves, it's just not really cost effective to develop one. A professional software developer makes something like $100-$150 per hour. Creating a node server will take between 50 and 100 hours, so the upfront cost to develop one is between $5,000 and $15,000. That means that just to break even, they'd need to sell a minimum of 100 copies at $50 per copy. From the sales data I have for the past 7 months, my highest-selling node server has sold fewer than 40 copies. I don't have access to any other developer's sales numbers, nor do we track installations to know how many "free" node servers are in use. So I don't really know the size of the potential market for any given node server. That being said, my decision to develop a node server isn't really based on ROI; I do most of them because I like developing them. I would certainly consider working on expanding the existing Vue node server to handle their other devices, but I really don't have the time right now to do that.
  6. It's not something I have time to take on at this time.
  7. Yes, it is the same issue. Version 3.0.23 should fix it.
  8. The Vue node server only supports the Vue Utility Connect energy monitor. https://www.emporiaenergy.com/how-the-vue-utility-connect-works It doesn't support any of the other Vue energy monitor devices.
  9. No idea what that means. From what I understand of that state, it means the ISY is trying to update the node configuration, and that should only be possible for Insteon and Z-Wave nodes. If you restart the AC, does it continue to display that?
  10. Version 2.0.7 of the node server should fix this. AERIS changed the format of the precipitation data in the query response.
  11. This is the problem:

     24:00:46 [pg3] error: ISY Response: [Try: 1] [00:0d:b9:53:d8:14] :: [404 - OK] :: 18.856618ms - http://192.168.2.117:8080/rest/ns/7/nodes/n007_controller/report/status/ETO/2.903759505852784/106

     The ISY is rejecting it. A 404 error means "not found". I did some experiments and it seems the ISY rejects the request if there are too many digits of precision in the value. I just pushed out version 3.0.22, which rounds the value, so this should be fixed (a sketch of the idea is below). If you refresh the node server store and then restart the WeatherFlow node server, it should auto-install the update.
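     A minimal Python sketch of what the rounding fix does conceptually, not the actual node server code. The URL format, address, and node IDs come from the log line above; the choice of 3 decimal places and the helper name are assumptions, and authentication is omitted:

     ```python
     import requests

     ISY = "http://192.168.2.117:8080"   # address taken from the log above

     def report_status(slot, node, driver, value, uom, precision=3):
         """Round the value before building the REST URL so the ISY
         doesn't reject it for having too many digits of precision."""
         value = round(value, precision)
         url = f"{ISY}/rest/ns/{slot}/nodes/{node}/report/status/{driver}/{value}/{uom}"
         return requests.get(url, timeout=5)   # authentication omitted for brevity

     # 2.903759505852784 becomes 2.904, which the ISY accepts
     report_status(7, "n007_controller", "ETO", 2.903759505852784, 106)
     ```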
  12. The setup looks correct. If PG3 is displaying a value and that value is changing from day to day, then the node server is working correctly. The node server never talks to the ISY directly; everything goes through PG3. So if PG3 has the right value and the ISY doesn't, it's not an issue with the node server. You didn't mention any other problems, but if it's just that one value that isn't getting sent to the ISY, that would be very, very strange. The only thing I can think of that might cause something like that would be if the node server's profile file on the ISY was missing the ETo definition, but the AC does get the proper definition. You can try re-sending the profile files to the ISY (PG3, node server details, upload profile button) and see if that helps. Otherwise, I'd suggest checking the node server log for when it sends the ETo value to PG3, which should happen within a minute or so of midnight. Using the timestamp from that, check the PG3 log for the same time and see what it does with the value. There should be log entries there showing it sending the value on to the ISY. Then at least we'll know whether the value is making it to the ISY or not.
  13. ETo is set to zero if there isn't any forecast data. The node server will enable the ETo calculation only if the custom parameter "Forecast" value matches a station ID returned when it queries the WeatherFlow servers for station metadata (sketched below).
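     A minimal sketch of that gating logic, assuming the station metadata is a list of entries with a station_id field (the function and parameter names here are hypothetical):

     ```python
     def eto_enabled(params: dict, stations: list) -> bool:
         """Enable ETo only when the "Forecast" custom parameter matches
         one of the station IDs from the WeatherFlow metadata query."""
         forecast = params.get("Forecast")
         station_ids = {str(s["station_id"]) for s in stations}
         return forecast is not None and str(forecast) in station_ids

     stations = [{"station_id": 12345}]                   # hypothetical metadata
     print(eto_enabled({"Forecast": "12345"}, stations))  # True  -> ETo is calculated
     print(eto_enabled({}, stations))                     # False -> ETo stays at zero
     ```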
  14. Unless you have a backup of Polyglot, you can't. The information about node servers is stored in a database, and when you factory reset the Polisy, you removed that database. If you have a Polyglot backup, restore it and you should be good. Otherwise, you'll have to manually remove the node servers from the ISY (node servers -> configuration -> slot #, and click delete) and then re-install them from Polyglot.
  15. The query isn't returning data in the format expected. If you turn on debug-level logging, the log will show the actual URL that is being used to query AERIS. Note that the URL will contain your secret API key. I'd need to see the output from that query to know specifically why it's failing.
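     For illustration only, a sketch of how a query like that might log its URL at debug level. The endpoint shape here is a guess; the real URL is whatever appears in the node server's debug log:

     ```python
     import logging
     import requests

     logging.basicConfig(level=logging.DEBUG)   # debug level exposes the full URL
     LOGGER = logging.getLogger(__name__)

     def query_aeris(client_id, client_secret, location):
         # Hypothetical endpoint; the actual query URL is in the debug log,
         # and it includes the secret API key, as shown here.
         url = (f"https://api.aerisapi.com/observations/{location}"
                f"?client_id={client_id}&client_secret={client_secret}")
         LOGGER.debug("AERIS query: %s", url)
         return requests.get(url, timeout=10).json()
     ```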
  16. There was a bug about 6 months back that set it to zero after the first day, but that should be fixed. I see mine is at zero also so I'll do some debugging. It does need forecast information to calculate ETo and it does the calculation at midnight using the previous day's information.
  17. Check the log, it will log any errors encountered while querying for that data.
  18. I only suggest that to try and determine if there is something specific you're doing or using that causes this. It may well be that the ISY is just overloaded. It has limited processing power and a fairly limited TCP/IP stack. The IoP will certainly help as it has significantly more processing power. I have a lot of influence over what is done in PG3 as I'm the only developer working on it, but not much influence over the ISY software.

     #1 PG3 does support the ability for a node server to restart itself, as that's currently the only way it can be implemented. The ISY doesn't have any built-in methods to control node servers and it would be a pretty big project to add that. We can discuss this, but I'll make no promises.

     #2 is a possibility. Right now, I don't know of anyone else having problems with the current startup process, and there are folks that run quite a few node servers. I've had nearly 25 starting and have not seen the issues you have. Of course this is with IoP with no programs running on it. Because node servers can vary quite a bit in how long they take to initialize, it may be hard to tune something like this. It can't be too long; at a couple of minutes each, you could have systems that take 30-60 minutes to start. A few seconds may or may not help.

     #3 is already being done. PG3 has both a per-node server queue and a global queue for messages being sent to the ISY, with retries if they fail (a toy sketch of the pattern is below). In your case, the ISY either doesn't respond (which will cause PG3 to give up) or it is responding with an error that indicates a retry wouldn't help. Your log did not show any cases where PG3 was retrying requests. It didn't look like there was enough traffic to trigger throttling.

     I do understand about all the programming needed to deal with HVAC. For a while I had a lot of programming for mine that took into account windows being open, inside vs. outside temperature difference, etc. The data was coming from WeatherFlow, my alarm sensors, and the thermostat. It controlled the thermostat and a whole-house fan. I disabled most of it at one point when a window sensor failed and I couldn't get the whole-house fan to turn off.
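     To make the queue-and-retry idea in #3 concrete, here's a toy Python sketch. PG3's actual implementation is different (it's written in Node.js, with both per-node server and global queues); this only illustrates the pattern, including giving up when a retry wouldn't help:

     ```python
     import queue
     import threading
     import time

     import requests

     class ISYSender:
         """Toy sketch of a queued sender with retries; not PG3's code."""

         def __init__(self, retries=3):
             self.q = queue.Queue()
             self.retries = retries
             threading.Thread(target=self._worker, daemon=True).start()

         def send(self, url):
             self.q.put(url)

         def _worker(self):
             while True:
                 url = self.q.get()
                 for attempt in range(1, self.retries + 1):
                     try:
                         r = requests.get(url, timeout=5)
                         if r.status_code == 200:
                             break              # success
                         if r.status_code == 400:
                             break              # "bad request": a retry won't help
                     except requests.RequestException:
                         pass                   # no response: retry after a pause
                     time.sleep(attempt)        # simple backoff between tries
                 self.q.task_done()
     ```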
  19. So I'm clear, the only thing running on the Polisy right now is PG3? I'm assuming you did some experiments that prompted you to set up the Polisy reboot when the ISY stops/starts. Why? What happens if you don't? PG3 shouldn't care if the ISY stops or crashes, other than it will fail to get info from it and fail to send information to it. I don't believe it opens any connections to the ISY that it expects to stay open until it is restarted. If it's not able to restore operation when the ISY comes back up, that would be a bug in PG3.

     Clearly data is being sent to the ISY; the event viewer log shows that, as do the node server and PG3 logs. If the ISY isn't doing anything with that data, then it's a problem with the ISY. As I mentioned previously, the PG3 log shows a lot of failures communicating with the ISY when it starts up. When PG3 starts, it starts all the node servers, which can generate quite a bit of traffic as it tries to make sure the ISY is fully up to date with each node server's current status. With all the random failures it's getting, it's not really surprising that you're having issues. That would explain why restarting the node server when things are a lot less busy allows it to fully update the ISY and work.

     You obviously have a complicated setup on the ISY, which means it's going to be difficult or impossible for anyone to replicate what you're seeing. There are a few things you could try that may help narrow down the problem.

     1. You could stop all but one node server, restart PG3, and see if you still get the communication failures on startup. (If you manually stop a node server, it won't automatically restart when PG3 starts.) You could try this with each node server; maybe only one really impacts the ISY.

     2. With a good backup of the ISY, you could try disabling or removing all programs, then restart PG3 and see if it still has failures starting. Maybe a program or set of programs is causing so much load that it can't handle starting multiple node servers.

     Based on what I've seen in the logs, it seems like the ISY is just overloaded and there may not be anything anyone can do about that.
  20. I just saw you posted the PG3 log in another thread so I can provide more updates here. Looking at the PG3 log, at 10:50:52 PG3 scheduled the request to send the temperature value to the ISY:

     7/21/2022, 10:50:52 [pg3] debug: Scheduling http://192.168.200.251:80/rest/ns/8/nodes/n008_s_2930073475/report/status/CLITEMP/23.6/4 to group statusGroup

     It got a response back from the ISY indicating it was successful:

     7/21/2022, 10:50:52 [pg3] debug: ISY Response: [Try: 1] [00:21:b9:02:55:cd] :: [200] :: 1526.422921ms - http://192.168.200.251:80/rest/ns/4/nodes/n004_controller/report/status/CLITEMP/26.93/4

     So again, I see no indication that anything is wrong. PG3 is getting updates from the OWM node server and passing those on to the ISY successfully. I'm confused, as the log shows PG3 communicating just fine with the ISY until 02:13:52; that's the first time a request from PG3 to the ISY failed. It continued failing until 09:53:05, when it looks like the Polisy was rebooted. PG3 restarted at 10:10:33. At 10:10:54 PG3 sends some info to the ISY and the ISY responds with success. At 10:10:57 PG3 sends a /rest/nodes request and the ISY fails to respond, but then other requests after that are successful. At 10:11:07 PG3 tries to add the OWM controller node to the ISY; this fails with a "bad request" error, however, it's not a bad request. This continues for about a minute with various "bad request" responses, successful responses, and no responses to PG3. This is all happening while node servers are trying to initialize. It appears that the ISY is struggling to keep up. It mostly clears up after that, but the ISY is still not responding to the /rest/nodes requests from PG3. For the next few minutes there are brief periods where the ISY fails to respond to PG3, but then it starts responding normally, probably as node servers finish their initialization. From that point everything is normal until it is restarted at 11:03. After the restart, I see the same errors happening where the ISY fails to respond to PG3. PG3 restarts place a heavy load on the ISY as node servers start and initialize. On your system, it seems like it takes 5-10 minutes after a PG3 restart before the load on the ISY decreases to what it can handle.
  21. Did you reload the browser window after restarting the Polisy? The uptime value has been a pain because the browser seems to cache info related to it, causing it to be wrong.
  22. Thanks for providing such detailed information! Would it be possible to get the PG3 log file for that time as well? You can PM it to me if you don't want to post it. From the node server log: until 09:50:05, the node server was working properly and sending data to PG3. It was unaware that there were issues with the ISY. At this point, the node server stopped (I'm guessing because you power cycled the Polisy). The node server starts again at 10:10:54. There are no issues with it starting, and it runs and sends data to PG3 until 10:50:59, when it is again stopped. I'm not sure how you are determining that no data was sent. At 10:50:52 - 10:50:59 the log shows it sending:

     Successfully set controller :: CLITEMP to 26.93 UOM 4
     Successfully set controller :: GV2 to 28.58 UOM 4
     Successfully set controller :: DEWPT to 20.52 UOM 4
     Successfully set forecast_0 :: DEWPT to 20.3 UOM 4
     Successfully set forecast_0 :: GV0 to 27.0 UOM 4
     Successfully set forecast_0 :: GV20 to 3.12 UOM 106
     Successfully set forecast_0 :: GV2 to 26.0 UOM 4

     This says the node server successfully sent those updates to PG3. With the info I have, I can't tell you what PG3 did with them. Also, the node server only sends values that changed (a minimal sketch of that is below). In this case, for the main node it was only the temperature, dewpoint, and feels-like temperature that changed, so that's what was sent. How often stuff changes is really dependent on how often OpenWeatherMap updates the info for your location, not on the node server's polling intervals. The node server starts again at 11:03:29. And again, there are no issues with it starting and running. From the log, the node server was running properly the whole time. I see no indication that a restart was needed.
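     The change-only reporting mentioned above can be sketched like this (assumed names, not the actual node server code):

     ```python
     class Node:
         """Only report a driver value to PG3 when it actually changed."""

         def __init__(self, interface):
             self.interface = interface   # stands in for the connection to PG3
             self._drivers = {}           # last value reported for each driver

         def set_driver(self, driver, value, uom):
             if self._drivers.get(driver) == value:
                 return                   # unchanged: nothing is sent to PG3
             self._drivers[driver] = value
             self.interface.report(driver, value, uom)
     ```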
  23. If it wasn't obvious, my test and maths seemed to disprove my original theory that the AC was the cause of slow program loading. For the 88 programs, the time was split almost evenly between waiting for the ISY to prepare the data and the actual download of the data. I used the developer tools in the browser to get the timing info for the /rest/programs request from the ISY. So I just tried the same thing with IOP running on a PolisyPro. Unfortunately, I don't have as many programs on that so I'm not sure this is a valid comparison, but it was many times faster: 26 programs in 4.5ms, again split fairly evenly between processing time and content download time. If the time scales linearly (a big assumption), it would be able to send 980 programs in about 170ms. That's roughly 20x faster than what I calculated for the i994. These tests were done between the device and my browser over a wired ethernet connection. Since PG3 can work with both IOP and i994 ISYs, it communicates with them over the network interface. It may be a bit more optimized if it's using the localhost address, but it still uses the network drivers. However, that may be offset by the fact that it's one network driver that's handling both ends of the communication (twice as much work) when both IOP and PG3 are running on a single Polisy.
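     For anyone who wants to repeat the measurement without the browser, a rough Python equivalent; the address and default credentials here are placeholders:

     ```python
     import time

     import requests

     # Time the full /rest/programs request (ISY prepare time plus download).
     url = "http://192.168.2.117/rest/programs"
     start = time.monotonic()
     r = requests.get(url, auth=("admin", "admin"), timeout=60)  # replace credentials
     elapsed = time.monotonic() - start
     print(f"{len(r.content)} bytes in {elapsed * 1000:.1f} ms")
     ```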
  24. I'm glad that UDI has reviewed the issue and at least has some idea of the cause. 980 programs is a lot of programs to format and send out, and I knew the AC was slow about loading programs, but I thought that was because the AC had to process them, more than the ISY having to send them. It can be difficult to understand the effects of scale for something like this without doing the math. 3 minutes seemed like a long time so I did a quick test. I only have 88 programs but it takes about 300ms to send that. So scaling that up to 980, I get close to 3.5 minutes. It will probably take more than just compressing the data to get reasonable performance with 2000 programs. I'm thinking it will need to reduce the size of the data by 20x or more. But looking at the data, it may be possible simply by reformatting the data into something far less verbose. An interesting problem for sure. Thanks for the effort to help resolve it.
  25. @johnnyt Thanks for the well written response. You make some very good points! In my defense, I didn't have anything to do with the original design of PG2 or PG3. I originally signed up to help test PG3 and now I'm the only developer working on it. There's enough work that I do have a tendency to try and avoid major design changes if I can.

     My concerns are specifically around adding the capability to automatically reboot/restart. Once you can automate a workaround for a problem, it tends to not be a problem for you anymore. Sure, no one would ask for a weekly restart, but if it happens and you don't really know it happened, are you really going to care? I specifically didn't say anything about power outages because the system should recover from those without any user intervention. If it isn't, we really need to understand why.

     I'm not completely opposed to automatic restarts. PG3 will automatically restart if it crashes, and a lot of work has gone into making it recover and continue if it does crash (the general pattern is sketched below). There's not really anything you can set up that would improve the process. Eventually, we should get to the same point with node servers, but we're a long way away from that now. Keep in mind that the production release of PG3 is just a little over 2 months old. It can take many iterations of a software product to work out the issues. With the number of different node servers and use cases, it is impossible for us to test even a small fraction of what it's capable of doing.

     I've been watching your other thread on ISY-Inventory with interest because I do want to know if there's anything we need to do in PG3 to prevent it from happening again. The key is to figure out what is really happening. A node server should not be able to crash an ISY. I do sometimes do stress testing with PG3 to see if I can make bad things happen. It's been a while since I've done any of that with an i994. It is possible to overload the i994's network interface, but I've not seen that crash the ISY. If you haven't, I highly recommend you submit a trouble ticket to UDI for the ISY crashes. Neither @simplextech nor I have access to ISY code to debug it. Best we can do is help to define a reproducible test case.
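     The restart-on-crash behavior described above follows a common supervisor pattern. A generic Python sketch of that pattern, not how PG3 actually does it, and the command line is hypothetical:

     ```python
     import subprocess
     import time

     # Re-run the supervised process whenever it exits with an error,
     # and stay stopped after a clean exit.
     while True:
         result = subprocess.run(["node", "pg3.js"])   # hypothetical command
         if result.returncode == 0:
             break          # clean shutdown: don't restart
         time.sleep(5)      # crashed: brief pause, then restart
     ```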