Everything posted by bpwwer
-
I used "same slot" because you could have a situation like this: where the node server was installed in more than one slot so you have to pick the correct slot to re-install from (4 or 15 in the example above).
-
From the admin console: Node Servers -> Configuration -> slot# -> Delete
-
PG3 queries the ISY/IoX device for the node servers installed on the device. It then compares those with what it has in its own database of installed node servers. If it doesn't find a node server in its database, then it assumes something else installed the node server on the ISY/IoX device and marks it as "Unmanaged". The node servers have to be deleted from the ISY/IoP to free up the slots. That can only be done by the same polyglot that installed them or by deleting them manually using the node server menu on the Admin Console. For node servers that were installed by PG3 before you replaced the SSD, you should be able to go through the node server store and re-install them to the same slot.
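The bookkeeping described above can be sketched roughly like this (illustrative Python, not PG3's actual code; the names and data shapes are assumptions):

```python
# Hedged sketch: classify each slot the ISY/IoX reports against the set of
# slots PG3's own database says it installed. Anything PG3 doesn't know
# about gets marked "Unmanaged".

def classify_slots(isy_slots, pg3_slots):
    """isy_slots: {slot_number: node_server_name} reported by the ISY/IoX.
    pg3_slots: set of slot numbers recorded in PG3's own database."""
    return {
        slot: "Managed" if slot in pg3_slots else "Unmanaged"
        for slot in isy_slots
    }

# Example: slots 4 and 7 were installed by this PG3, slot 15 by something else.
status = classify_slots({4: "Airthings", 7: "Roku", 15: "Airthings"}, {4, 7})
print(status)  # {4: 'Managed', 7: 'Managed', 15: 'Unmanaged'}
```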
-
I just realized what the problem is with auto-upgrades. When PG3 3.1.x was released, it included the ability to support multiple different versions of the same node server in the store. Prior to this, there could only be one version of a node server available. This change altered the format in which version information is maintained.

While PG3 3.1.x is able to work with node servers installed by prior versions, those versions don't have the information available to know which version was installed when comparing against the new node server store format. The version of Airthings that you have installed was 0.0.6, and that used the old-style version information. Version 1.0.0 of Airthings uses the new-style version information, where there could potentially be other versions available too. Because of this difference, PG3 doesn't really know if version 1.0.0 is an upgrade to 0.0.6.

I've been working with the new-style version info for so long now that I don't have any test cases like this. But now that I understand what is happening, I can probably tweak the logic to handle this better. In the meantime, the easy solution is to just re-install the node server to the same slot. That will update the node server on your system to the new-style version info.
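To illustrate the mismatch, here's a hedged sketch of comparing an old-style single-version record against a new-style multi-version store entry (the data shapes are assumptions for illustration, not PG3's real schema):

```python
# Hedged sketch: old-style store records carried a bare version string,
# new-style records (assumed here to be a dict with a "versions" list)
# can carry several. Deciding "is this an upgrade?" needs to handle both.

def parse_version(v):
    """Turn '1.0.0' into (1, 0, 0) so versions compare numerically."""
    return tuple(int(p) for p in v.split("."))

def is_upgrade(installed, store_entry):
    """installed: version string recorded at install time.
    store_entry: either an old-style bare version string, or a
    new-style dict listing every available version (assumption)."""
    if isinstance(store_entry, dict):       # new-style, multi-version
        latest = max(store_entry["versions"], key=parse_version)
    else:                                   # old-style, single version
        latest = store_entry
    return parse_version(latest) > parse_version(installed)

# The Airthings case from above: 0.0.6 installed, 1.0.0 in the new store.
print(is_upgrade("0.0.6", {"versions": ["0.0.6", "1.0.0"]}))  # True
```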
-
Should be fixed in PG3 3.1.16
-
Bought wrong Node Server (DavisWeather instead of WeatherLink)
bpwwer replied to TriLife's topic in DavisWeather
Yes, I'm the author, but UDI handles all the payment and licensing. You'll have to open a support ticket.
-
The error in the log doesn't appear to have anything to do with rain or temperature. It looks like it is trying to update the forecast information for a forecast node that doesn't exist. Either a forecast node didn't get created or the query is returning forecast data for more days than it should.
-
The node server API in the ISY/IoP has never had the ability to track node server state. Everything that has been done in both PG2 and PG3 has been a bit of a hack to try and provide that information, typically for use in programs.

First you have to define what it is you want. If you take a high-level view, a "node server" is really supposed to be just a bridge between the ISY/IoP and a device. So do you want the status of the bridge or the status of the device? (Or more likely both.) In an ideal world, the bridge status should be irrelevant as it should never fail, but in the real world that's pretty much never true. The ISY/IoP is basically a rules engine with Insteon and Z-Wave node servers built in. Where's the status for those built-in node servers?

With all of that said, PG3 does track connection status for the node servers, and PG3 does have an API that allows node server developers to expose this connection status if they want to. Like @Goose66, I believe the right solution would be to add node server status to the ISY/IoP API and have a system variable for each node server that can be used in programs. But the priority to add something like that is very low right now.
-
It's expecting a response with JSON data and whatever it got back wasn't what it expected. It's calling https://api.weatherlink.com/v1/NoaaExt.json?user=001D0A004ADE&pass=<your password>&apiToken=<your token> So you can enter that URL into a browser and see what it does. My guess would be that one of the parameters is wrong and it's returning an error message instead of the data.
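If you'd rather check from a script than a browser, here's a minimal sketch of validating the response body. The error-detection logic is an assumption for illustration; the real service's error format may differ:

```python
import json

def check_weatherlink_response(body):
    """Return the parsed JSON, or a diagnostic string if the service sent
    back something that isn't JSON (a common symptom of a wrong user,
    password, or API token parameter)."""
    try:
        return json.loads(body)
    except json.JSONDecodeError:
        return f"Not JSON -- server said: {body[:80]!r}"

# Good response: parsed data comes back as a dict.
print(check_weatherlink_response('{"temp_f": "71.2"}'))
# Bad response: an error message instead of data.
print(check_weatherlink_response("Invalid API token"))
```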
-
It's clear, but it's not the node server doing the grouping that way, it's Volumio. The node server does a browse at the top level and creates the entries based on the order it sees sources in that browse. Given that what is available in the list of sources will vary for each Volumio device, and the order may even depend on the order sources were added to Volumio, that's a fairly complicated ask.
-
Every application/channel installed on the Roku has an app ID. The node server queries the Roku for the list of apps when it first starts. That message means the Roku is reporting its active application is the one with id 562859, but that application id is not in the list of applications the node server originally received. I think this is because of a somewhat recent change in the Roku firmware related to screen savers, where it doesn't list them in the application query. I haven't really looked into it.
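For reference, Roku's External Control Protocol (ECP) serves the installed-app list and the active app as XML on port 8060. Here's a hedged sketch of the comparison the node server is making, using canned XML in place of a live query:

```python
import xml.etree.ElementTree as ET

# Canned examples of the two ECP responses (normally fetched from
# http://<roku-ip>:8060/query/apps and /query/active-app).
APPS_XML = """<apps>
  <app id="12">Netflix</app>
  <app id="837">YouTube</app>
</apps>"""
ACTIVE_XML = '<active-app><app id="562859">Screensaver</app></active-app>'

def parse_apps(xml_text):
    """Map app id -> app name from a /query/apps response."""
    return {a.get("id"): a.text for a in ET.fromstring(xml_text)}

apps = parse_apps(APPS_XML)
active_id = ET.fromstring(ACTIVE_XML)[0].get("id")

# The warning case from the post: the active app id (here a screen saver)
# never appeared in the startup app list.
if active_id not in apps:
    print(f"Active app id {active_id} is not in the app list")
```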
-
Yes, the hub will broadcast messages even if the Tempest battery dies. It just won't broadcast the Tempest data. The hub broadcasts data on a fixed schedule and the node server reports on a fixed schedule, so unless something changes (e.g. restarting the hub) the value shouldn't vary.
-
The on-line status is for the node server to PG3 connection, not the node server to WeatherFlow device connection. WeatherFlow doesn't provide any method to query if it is on-line or not. It simply broadcasts data over your network, using a method that doesn't allow for any type of acknowledgement that the data was received by anyone.

The seconds-since-seen value is how many seconds have elapsed since the node server last saw a broadcast message from the hub. The hub sends out a couple of different messages at different intervals, the main message being the one with the actual weather data in it. The main data message should come every 60 seconds. Rapid-wind messages are every 3 seconds (or not, depending on the state of the battery charge). And there are a couple of other messages it sends out.

Since broadcast messages can be lost or temporarily blocked on the network, there's no specific time. The value is going to vary depending on network load, when the hub was started, when the hub starts getting data from the Tempest device, and the state of the battery charge. With all that being said, if the time since last seen gets above 2 minutes, chances are pretty good that something is wrong.
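The "seconds since seen" logic can be sketched like this (illustrative only; the 120-second threshold mirrors the 2-minute rule of thumb above, and the class/method names are assumptions):

```python
import time

class HubMonitor:
    """Track when a hub broadcast was last seen and flag staleness."""
    STALE_AFTER = 120  # seconds; ~2 missed 60-second data messages

    def __init__(self):
        self.last_seen = None

    def on_broadcast(self, now=None):
        """Call whenever any broadcast message arrives from the hub."""
        self.last_seen = now if now is not None else time.monotonic()

    def seconds_since_seen(self, now=None):
        now = now if now is not None else time.monotonic()
        return None if self.last_seen is None else now - self.last_seen

    def looks_offline(self, now=None):
        s = self.seconds_since_seen(now)
        return s is not None and s > self.STALE_AFTER

# Deterministic example: a broadcast at t=0, checked at t=90 and t=200.
m = HubMonitor()
m.on_broadcast(now=0.0)
print(m.seconds_since_seen(now=90.0))  # 90.0
print(m.looks_offline(now=90.0))       # False
print(m.looks_offline(now=200.0))      # True
```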
-
When the Polisy reports a uuid of 00:00:00:00:00:01 or something like that, it means that some component of the Polisy didn't start or is missing. You can attempt to run the "Upgrade Packages" option from the admin console another time (or two) to see if it corrects it. The process can take a while (like 20+ minutes), so don't start an upgrade process and then reboot/power cycle the Polisy shortly thereafter. If you can't get the upgrade process to run, or running it twice doesn't fix the issue, you'll have to enter a support ticket at support@universaldevices.com
-
No, I'll clean up the change and push it to the production store.
-
According to the log file, none of the node servers are starting because python isn't available. Python is an OS-level component on the Polisy and is what actually runs the node server code.
STDERR: env: python3: No such file or directory
I'd try doing an "Upgrade Packages" again from the admin console. Maybe something failed the last time and some stuff didn't get installed that should have.
-
I think I see the problem. This is the first "roku" device it finds: http://192.168.1.134:40000, which doesn't look like a roku device at all. So something else is responding to the roku discovery broadcast. I just uploaded a new beta that may resolve the issue. All it really needs to do is skip this non-roku device and continue, and it should find the real roku devices.
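A hedged sketch of filtering discovery responses so a non-Roku responder gets skipped. The header names follow SSDP convention, where Rokus answer searches for roku:ecp; the exact check the node server uses is an assumption:

```python
# Hedged sketch: during SSDP discovery anything on the network may answer,
# so filter responses down to devices that actually identify as Rokus
# before querying them.

def looks_like_roku(headers):
    """headers: dict of SSDP response headers, keys lower-cased.
    Rokus carry 'roku:ecp' in USN and 'Roku' in the Server header."""
    usn = headers.get("usn", "").lower()
    server = headers.get("server", "").lower()
    return "roku:ecp" in usn or "roku" in server

# Example mirroring the post: a NAS answered first, then a real Roku.
responses = [
    {"location": "http://192.168.1.134:40000", "server": "SomeNAS/1.0"},
    {"location": "http://192.168.1.50:8060",
     "usn": "uuid:roku:ecp:X00000000000"},
]
rokus = [r for r in responses if looks_like_roku(r)]
print(rokus[0]["location"])  # http://192.168.1.50:8060
```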
-
You can try re-installing the beta version. Use the option to re-install it to the same slot you used before (non-production store -> Roku -> install -> re-install). This version should at least show what it finds during discovery.
-
Well, that change didn't do what I wanted, I'll have to make another version.
-
That error is happening in the roku python library that the node server is using. I could make the node server handle the error better, but I'm not sure I can do anything about the library failing. It is finding a Roku device (or devices), but the data it's getting back when querying the device is not what is expected. I just created a version with some error checking and additional debug. It's in the non-production store and is version 2.0.3 (refresh the store if it still shows version 2.0.2). You can install this version either by re-installing in the same slot or installing to a new slot, and then download and post the node server log here (I don't need the log package).
-
From this log it looks like PG3 didn't detect that there's a new version and thus it didn't do the auto-update. There are two places where it checks for updates: 1) when it pulls a new copy of the node server store data, it compares it with the copy currently in PG3 and, if it sees an installed node server with a new version, it creates the notice; 2) when a node server starts, it compares the version that was running with the current node server store version and auto-updates if they are different. It sounds like the first is working correctly but the second may still be buggy. I don't have time right now to look into it as I'm busy helping to get things ready for the eisy and we're on a deadline to get the initial image done. Once that's out of the way I can start looking into the issues again.
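The two checks can be sketched separately (illustrative Python, not PG3's code; the function names and data shapes are assumptions):

```python
# Hedged sketch of the two update checks described above.

def store_refresh_notices(installed, new_store):
    """Check #1: on a store refresh, emit a notice for every installed
    node server whose store version differs from what's installed.
    installed / new_store: {name: version_string} (assumed shape)."""
    return [
        f"{name}: {ver} -> {new_store[name]}"
        for name, ver in installed.items()
        if name in new_store and new_store[name] != ver
    ]

def should_auto_update(running_version, store_version):
    """Check #2: at node server start, update when the running version
    no longer matches the store version."""
    return running_version != store_version

# The case from the post: notice is created (check #1 works) ...
print(store_refresh_notices({"Airthings": "0.0.6"}, {"Airthings": "1.0.0"}))
# ... and check #2 should also fire on the next start.
print(should_auto_update("0.0.6", "1.0.0"))  # True
```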
-
Changing the configuration doesn't delete any nodes. You'd have to delete the node from PG3 which should then also delete them from the ISY. Both the ISY and PG3 keep a record of what nodes were created. If you delete from just ISY, it doesn't notify PG3 and thus PG3 continues to have a record of those nodes. Depending on what steps you actually took back then, this may be expected behavior.
-
It's not the node server, it's the ISY/IoP that won't accept the precision. The node server is converting units and the conversion results in more decimal places than the ISY/IoP can handle. Fixed in version 3.0.26.
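The fix amounts to rounding after the unit conversion. A minimal sketch, assuming one decimal place is what the ISY/IoP accepts (the actual precision depends on the driver definition):

```python
def c_to_f(celsius, max_places=1):
    """Convert Celsius to Fahrenheit, then round so the result doesn't
    carry more decimal places than the ISY/IoP will accept (one place
    here is an assumption for illustration)."""
    return round(celsius * 9 / 5 + 32, max_places)

# Unrounded, 21.347 C would become 70.4246 F, which has too many places.
print(c_to_f(21.347))  # 70.4
```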
-
Polisy - Support thread for: PG3 v3.1.6 - v3.1.16 (December 9, 2022)
bpwwer replied to bpwwer's topic in Polyglot v3 (PG3x)
OK, well I won't complain too much about it fixing itself. The update process can sometimes take a while and restarting PG3 can also take a while now. So it's possible that when you first tried, things weren't fully initialized and it just needed a bit more time to finish up.
-
I also have it installed and working with one Z-Wave switch. I have a room with an Insteon motion sensor that triggers the switch, and the switch is in an awkward position in the room, so I wanted it working now. I did an exclusion to remove the switch from the ISY994 and an inclusion with IoP, and it is working fine. I'm waiting for migration instructions for the rest of my Z-Wave devices.