Everything posted by bpwwer
-
Maybe. The ISY994's UUID was the same as its ethernet port's MAC address. But with the Polisy having 3 network interfaces (and the Pro having WiFi too), which MAC address is used as the UUID? The eisy has 2 MAC addresses, so same thing. I don't know how it determines which MAC address is the UUID (and I'm assuming it still uses one of them). PG3(x) simply queries the running IoX instance to get the UUID. It's possible that how the UUID is determined has changed over time too; I just don't know. But that could explain why there's a phantom UUID in the database.
-
Yes, running the package upgrade will upgrade everything to the latest versions. This is what UDI has recommended, but I understand your hesitation. It is possible to limit what gets updated using the command line tools, but this may cause other issues either now or with later upgrades, which is why it isn't recommended:

sudo pkg update
sudo pkg install pg3

However, installing the latest PG3 may also pull in other dependencies which may or may not work with the existing versions of packages on your system. You may be better off sticking with the version of PG3 that you have until you're ready to do the full upgrade.
-
Hello all,

We are very happy to introduce PG3 v3.1.19 with bug fixes and new features; see the changelog below.

To upgrade PG3: In the Admin Console, click on Configuration, Upgrade Packages (note, this can take many minutes, depending on how long it's been since you last updated). After the upgrade is complete you will need to restart PG3. From PG3 select System -> Restart Polyglot 3. Once PG3 has restarted, reload/refresh the browser page.

Changelog for 3.1.19
- Force Python udi_interface update to get the version that fixes the missing version on node servers.

Changelog for 3.1.18
- Add a rating system for node servers. Users may rate node servers they have purchased from the Purchases page.
- Display node server average rating on the node server Info page.
- Remove "Log Out Of Portal" buttons. Too many things won't work if logged out of the portal.
- Add a button to the node server info page that links to the node server software license.
- Don't assume that output on stderr means the node server has crashed.
- Add support for developers to add/edit devd configuration information for node servers.
- Update oauth2 via cloudlink support.
- Verification of the node server user config on IoX was wrong, resulting in excess node server configuration updates.
- Developers can now specify which controllers a node server works on (Polisy and/or eisy). By default, it's both.
- Add local store backup/restore feature for node server developers.

Support Thread:
-
Support thread for: PG3x 3.1.21 (January 23, 2023)
bpwwer replied to bpwwer's topic in Polyglot v3 (PG3x)
3.1.17 is the latest production version of PG3 for Polisy. 3.1.18 is in pre-release testing.
-
What happens if you right click on the ST-Inventory node in the AC and select query? Does it populate the values then? When I re-installed and started it, I saw the same thing you did: just the total nodes, node server nodes, and error log entries populated. As I tried to debug, I noticed that everything was working, but it wasn't sending the other values to IoX, which is actually correct for the way it was written. It should only send values that have changed, and none of the values were changing, so they were never sent. A query forces it to send all values. So I'm not sure if this is a bug or not. It seems to be working as designed, but I'm not sure the design is correct. A better design might be to always send the values even if they haven't changed.
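The report-on-change behavior described above can be sketched roughly like this. This is a minimal illustration of the pattern, not the actual udi_interface implementation; the class and method names here are simplified stand-ins.

```python
class Node:
    """Sketch of a node that only reports changed driver values to IoX."""

    def __init__(self):
        self.drivers = {}   # driver name -> last value reported
        self.sent = []      # stand-in for updates actually pushed to IoX

    def set_driver(self, driver, value, force=False):
        # Skip the update when the value hasn't changed (unless forced).
        # This is why unchanged values never populate after a restart.
        if not force and self.drivers.get(driver) == value:
            return False    # "No change in <driver>'s value" -- nothing sent
        self.drivers[driver] = value
        self.sent.append((driver, value))
        return True

    def query(self):
        # A query re-reports every driver, changed or not, which is why
        # querying the node in the admin console fills in blank values.
        for driver, value in self.drivers.items():
            self.sent.append((driver, value))
```

The design trade-off is visible here: skipping unchanged values saves traffic, but if IoX missed the first report, nothing re-sends it until the value actually changes or a query forces it.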
-
Yes, that change happened almost a year ago and, according to the logs, before the 1.0.0 release of the node server. Prior to that change there wasn't any code to update the online status, so as far as I can tell, that value would never change.
-
Looks like the event viewer reports the UOM in hex, not decimal, so 11 would be temperature, which makes sense; that's how the node ST value is defined. The moisture sensor status is defined as the moisture level. Everything in the node server looks correct to me; I don't know what you want me to "fix".
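To make the hex-vs-decimal point concrete: a "11" shown in the event viewer is hexadecimal, which is decimal 17 (which, if I recall UDI's UOM table correctly, is the code for temperature in degrees Fahrenheit):

```python
# The event viewer prints the UOM in hexadecimal, so a displayed "11"
# is actually decimal 17 -- a temperature UOM, not UOM 11.
uom_hex = "11"
uom_dec = int(uom_hex, 16)
print(uom_dec)  # 17
```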
-
It looks like programs are trying to convert the boolean status value of those nodes into something else. 11 is deadbolt status and 16 is European macroseismic (whatever that is). The conversion is failing (which makes sense). Check the programs with IDs 0251 and 0245.
-
We implemented the testing phase after all of the recent problems with upgrades, but this was after I had added the upgrade check in PG3(x) so it's detecting the upgraded version in testing now.
-
No, no correlation to the LiFX problem. I understand the frustration. The fact that some of them are working is actually a good sign; it means it can work, we just need to figure out why it's not working for all of them. The node server log with debug enabled is the most important piece of information for understanding what is happening, and you still haven't provided that.

Without getting into too many technical details, when PG3(x) installs a node server and the node server creates nodes, there are two main parts to this. The node server running on PG3(x) creates the node in the IoX. Updates to that node come from the node server, and commands to that node go to the node server. So both the node server and the IoX have an internal representation of the node. Some of the synchronization of this is managed by the node server and some by you, the user.

The Discover button (and the configuration) on the node server is how it determines what MH devices are out there. If it sees a new device, it creates a new node on the IoX. However, the opposite isn't true: it doesn't remove the node from the IoX if it doesn't see the device during discovery. That's where you would have to manually remove the node to keep things in sync. That's why the IoX may show 10 device nodes, but if the node server only discovers 4, you get 4 working and 6 not working.

The discovery process isn't perfect. It depends on a number of factors: how the devices themselves are designed, how busy the network is at any given time, etc. It's something like asking a group of people to say their names to see who is there: they start talking over each other and you may only be able to catch a few of the names. That's why you can also enter the devices in a list on the configuration page.

So I'll re-iterate: the node server debug log is how we determine what the node server is doing, and manually configuring the devices in the node server may help. Those two things are how we move forward.
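The one-way synchronization described above can be sketched as simple set operations. This is an illustration of the behavior, not the node server's actual code; the function and variable names are hypothetical.

```python
def sync_after_discovery(iox_nodes, discovered):
    """Illustrate the one-way sync between discovery and IoX nodes.

    iox_nodes:  set of device addresses that already exist as nodes in IoX
    discovered: set of device addresses found by the latest discovery
    """
    # Newly discovered devices get nodes created in IoX...
    to_create = discovered - iox_nodes
    iox_nodes = iox_nodes | to_create
    # ...but nodes for devices NOT seen this time are left in place.
    # They still show in the admin console, yet won't respond to commands.
    stale = iox_nodes - discovered
    return iox_nodes, stale
```

With 10 nodes already in IoX and only 4 devices discovered, `stale` ends up with 6 entries, matching the "4 working and 6 not working" situation above.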
-
When the right versions of the libraries are installed, the node server works. Getting the right versions installed when things seem to change daily on the system is the challenge.
-
@SteveBL Each slot on the dashboard display is a different node server. Slot 1 holds the Elk node server and now slot 3 holds the ISY Portal node server. There can be up to 100 different node servers installed on the IoX. The ISY Portal node server is separate from PG3 and is not managed by PG3 but rather by the ISY Portal. That's why it shows up as unmanaged in PG3.
-
Do you mean they all populated after 7 minutes or that you had to manually query them to get them to populate? In general, you shouldn't need to manually query to get them to populate. However, because of how node servers work and timing, it is possible that some initial values could be lost on startup and that the node server could take quite a while before it sends updates for some values. For example, WeatherFlow will send all of the weather values when it starts. If it sends those before the IoX is ready to receive them, it won't re-try until it has a new value to send. For things like wind speed, that could be in 60 seconds, but temperature could be 5 or 10 minutes. So it would slowly populate those values over time. For some of the weather service node servers, it can be 30 minutes before it gets an update. It's going to depend on how often the node server sends updated values.
-
I see some of the problem. When you run discover on the node server, it tries to discover all of your bulbs/strips. The devices it finds, it adds to its internal list. It will then create nodes for any new bulbs/strips that it discovers. However, it doesn't delete nodes that were found at one time but not found in the current scan. This means that the list of nodes in the AC may not represent the list of devices that the node server is currently communicating with. The nodes that aren't showing status aren't in the internal list, which is why it reports "not found" when you try to control them. The question is why isn't it discovering all the devices? I don't know. But you do have the option of manually adding them to the node server via the configuration screen. That may solve the problem or, if it really can't communicate with the device, you'll just get a different error: instead of "not found", it will be a failed communication error. But it's worth a try.
-
Probably not. The node server log doesn't really show historic log entries (unless you download it) and the PG3x log won't maintain that level, it will revert to info on restart.
-
No, 2.1.23 is still in release testing and nothing in that would be related to this issue. Can you check the node server log (Dashboard -> MagicHome details -> Log) and set that one to debug? Try the on/off commands again and either post the results of that log or download a log package and PM it to me.
-
Possibly. PG3x is more dependent on IoX and UDX, so issues with those can impact PG3x. If you do see the issue with the nodes not populating, try the following:

1. Set one of the node servers that isn't populating to debug log (if WeatherFlow is one, it's a good one to check) and monitor. If it's sending updates to PG3x, it should be obvious. If not, you'll likely see very little log activity.
2. If you see it sending to PG3x, open the PG3x log and set it to debug. You should see the HTTP requests to IoX with status 200, meaning it was able to send the update to the IoX. If not, copy a couple of entries and post them here.
3. Open the event viewer on the admin console and set it to level 3. You should see related entries for each request sent by PG3x.

Based on the results of each step, we can narrow down where the problem is happening.
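For step 2, a quick way to scan a saved PG3x debug log is to filter for "ISY Response" lines whose HTTP status is not 200. This is a hypothetical helper, not a PG3x tool; the `[200]` bracket format is assumed from the sample log lines posted elsewhere in this thread, so adjust the pattern if your log differs.

```python
import re

# Match "ISY Response: ... [<3-digit status>]" in a PG3x debug log line.
ISY_STATUS = re.compile(r"ISY Response:.*?\[(\d{3})\]")

def failed_isy_responses(lines):
    """Return the log lines whose ISY response status is not 200."""
    failures = []
    for line in lines:
        m = ISY_STATUS.search(line)
        if m and m.group(1) != "200":
            failures.append(line)
    return failures
```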
-
3.1.18 hasn't been released to production yet so if 3.1.16 is working then no PG3 upgrade is needed at this time.
-
I'm far from an expert on MH devices/app. I did create the current node server and, like I said, it has been working well for me with the two lights that I have. Have you looked at the node server's log file? It can provide a lot of information about what the node server is doing and whether or not it is even able to communicate with the devices. For example, in debug mode, I'll get these messages every few seconds as it checks the status of the lights:

2023-02-17 16:12:03,363 Thread-21705 udi_interface DEBUG rgbled:watcher: 70039f051de9: status 81 35 24 61 01 1E 00 00 00 00 07 FF 0F 6F
2023-02-17 16:12:03,364 Thread-21705 udi_interface DEBUG rgbled:levelOf: levelOf input: 0 0 0
2023-02-17 16:12:03,365 Thread-21705 udi_interface.node DEBUG node:setDriver: 70039f051de9:mh 192 168 92 94 No change in ST's value
2023-02-17 16:12:03,366 Thread-21705 udi_interface.node DEBUG node:setDriver: 70039f051de9:mh 192 168 92 94 No change in GV1's value
2023-02-17 16:12:03,367 Thread-21705 udi_interface.node DEBUG node:setDriver: 70039f051de9:mh 192 168 92 94 No change in GV2's value
2023-02-17 16:12:03,367 Thread-21705 udi_interface.node DEBUG node:setDriver: 70039f051de9:mh 192 168 92 94 No change in GV3's value

So I know it's communicating with the lights.
-
Hmm, that's not what Michel said when I asked him what reboot meant; he claimed it was a restart of the isy service. I've been trying both, and so far I've not seen any issues with WeatherFlow reporting data or the admin console displaying the data after either type of reboot.

It is possible to trace the flow of data from the node server to the IoX using the PG3(x) log files. The log levels need to be set to debug to do this. The node server log will report every time the node server sends an update to PG3(x). The PG3(x) log will report every time it sends an update to IoX and will also show whether it was successfully sent or not. The more node servers that are running, the more difficult it is to trace, but since WeatherFlow should be reporting data every 60 seconds it's not too bad.

For example, the WeatherFlow node server reports:

2023-02-17 15:59:08,766 MQTT udi_interface.interface INFO interface:_message: Successfully set 240018 :: WINDDIR to 2 UOM 76

then the PG3 log shows:

2/17/2023, 15:59:08 [pg3] debug: ISY Response: [Try: 1] [00:21:b9:02:61:f1] :: [200] :: 0.935454ms http://127.0.0.1:8080/rest/ns/1/nodes/n001_240018/report/status/WINDDIR/2/76
2/17/2023, 15:59:08 [pg3] debug: MQTT Results: [ns/status/00:21:b9:02:61:f1,1] :: {"set":[{"address":240018,"driver":"WINDDIR","value":"2","uom":76}]}
2/17/2023, 15:59:08 [pg3] debug: [PUBLISH: udi/pg3/ns/clients/00:21:b9:02:61:f1_1] {"set":[{"address":240018,"driver":"WINDDIR","value":"2","uom":76}]}

The first line says it sent it to the ISY and was successful (status 200). The second line is the results back to the node server, and the third line is it actually sending the results to the node server.
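The REST URL in that log line follows a predictable pattern, which makes it easy to recognize while tracing. A sketch of how such a URL is assembled, based only on the example above: the `n001_` prefix appears to encode the node server slot number, and I'm assuming the `1` in `/ns/1/` is the slot as well, so both are parameters here.

```python
def report_status_url(host, slot, address, driver, value, uom):
    """Build an IoX REST status-report URL matching the logged example.

    Assumption: the n%03d_ address prefix and the /ns/<n>/ path segment
    both come from the node server slot (slot 1 -> n001_ and /ns/1/).
    """
    node = f"n{slot:03d}_{address}"
    return (f"http://{host}/rest/ns/{slot}/nodes/{node}"
            f"/report/status/{driver}/{value}/{uom}")
```

So a WINDDIR update of 2 (UOM 76) from the node server in slot 1 produces exactly the URL seen in the PG3 debug log, which is what to look for when confirming updates are reaching IoX.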
-
@DennisC I've seen your ticket; your problem seems different, as the node servers are starting. I'm trying to reproduce it. Does "reboot" mean a system reboot or an IoX reboot?
-
Roku node server version 2.0.5 should fix the errors.
-
Which platform and what version of PG3? The latest versions are:

eisy: PG3x 3.1.22
polisy: PG3 3.1.17

The initial release of PG3x for eisy had a bug that would prevent all node servers from starting, but that has been fixed in the later releases.