Everything posted by bpwwer
-
I just tested it myself and everything worked exactly like I expected:
1. Changed the password on the admin account using the AC.
2. Restarted the AC to verify that I had to use the new password.
3. PG3 was reporting lots of 401 (authentication failed) errors, which is to be expected since it was still using the old password.
4. ISY -> Edit Current ISY, changed the password to the new password, and hit save.
5. PG3 stopped reporting errors and started communicating properly with the ISY.
All of the node servers continued to operate without any issues throughout this. I'm not sure what @Michel Kohanim meant, but the only reason to delete and re-install a node server would be if you changed to a different ISY. The node server doesn't know the ISY username or password, so changing that will not affect the node server. (In general; there are some node servers that communicate directly with the ISY, and those may need to be restarted to get the new username/password.) There may be something else going on with this that I'm not aware of, so I can't say for sure that an uninstall/re-install wasn't required, but just changing the ISY password does not require node servers to be uninstalled/re-installed. @Bumbershoot It sounds like there was something wrong with the credentials you entered in PG3. Possibly not even your fault. I seem to remember there being certain restrictions on what could be entered for the ISY password, so maybe what you actually typed wasn't what it ended up being set to. An example of this would be if the ISY limited the length of the password to 8 characters. The AC knows this limit, so it always truncates what you type to 8 characters. PG3 doesn't, so it would send the longer version and fail to authenticate.
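If there's ever doubt about which credentials the ISY is actually accepting (for example, whether a long password got truncated somewhere), you can check directly against the ISY's REST interface, independent of what the AC or PG3 has stored. This is just a minimal sketch, assuming the ISY is reachable at the address shown and answers /rest/nodes; the host, username, and password are placeholders:

```python
# Minimal sketch: verify which username/password the ISY actually accepts,
# independent of what the AC or PG3 has stored. Host and credentials below
# are placeholders -- substitute your own.
import requests

ISY_HOST = "192.168.1.10"   # placeholder ISY address
USERNAME = "admin"          # placeholder
PASSWORD = "new-password"   # placeholder

resp = requests.get(
    f"http://{ISY_HOST}/rest/nodes",
    auth=(USERNAME, PASSWORD),
    timeout=10,
)

if resp.status_code == 401:
    # The same failure PG3 reports when its stored password is stale
    print("Authentication failed (401) - the ISY did not accept these credentials")
else:
    resp.raise_for_status()
    print("Credentials accepted - the ISY answered the /rest/nodes query")
```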
-
If the password was changed on the ISY and PG3 was not updated with the new password, PG3 will report the errors in its log (probably lots and lots of 401 "failed to authenticate" errors). You should just be able to update the ISY account info in PG3 under ISY -> Edit Current ISY. If that didn't work, it's a bug. There shouldn't be any need to restart anything, let alone delete and re-install.
-
That's been the main goal of PG3. A lot of the node servers for PG2 seem to have been developed but then abandoned, since the time it takes to maintain them can quickly become larger than the initial development time. The expectation is that by being compensated for the effort, developers will be more likely to continue maintaining and supporting their node servers. At some point, the default configuration on a Polisy will switch to PG3 being installed and PG2 not being installed by default. Eventually, PG2 may be removed, but that will be a while. Since PG2 is an open source project, how long it remains viable to run on something other than a Polisy will be up to the "community".
-
In general, the PG3 log and/or the node server log will provide information to help understand what is happening.
-
You'd want to do it from PG2. The PG2 delete will delete the configuration in PG2 and delete the configuration and nodes on the ISY. The delete on the AC will only delete the configuration from the ISY.
-
You're welcome. There actually is an issue entered to restore individual node servers from PG2. Right now, resources to work on PG3 features are limited. A feature like that is a convenience for the initial migration and will likely never be used after that, so it's hard to justify spending resources on something like that.
-
Support thread for: ISY on Polisy (IoP) v5.4.4 (May 25, 2022)
bpwwer replied to Michel Kohanim's topic in IoX Support
Wake-on-LAN node server now in the PG3 node server store.
-
No, it's probably not an ISY thing. Either I still have something wrong in the profile files or you don't have the most recent profile files loaded in the AC. Here's what I see on mine: Try the "Upload Profile" button on the PG3 node details page and then restart the AC.
-
No, the restore is kind of all or nothing. As part of the restore, it changes the configuration on the ISY to point back to the PG3 version. If you then delete the PG3 version, it will delete the node server from both PG3 and the ISY, and the PG2 version will no longer work. Possibly you could then delete the PG2 version and re-install the PG2 version. If you just want to partially move to PG3, I'd recommend not trying to use the restore from PG2 backup: just delete the node server from PG2, install the PG3 version in the same slot, re-configure, and fix anything that broke. That's the cleanest way, and you can concentrate on making sure each one works correctly before moving on to the next.
-
There was a typo in one of the OpenWeatherMap files. I've fixed it, so the latest version, 3.1.3, should show connected/disconnected.
-
In general you can't. But since the migration process is not great, it's not impossible to end up in that situation. You should be fine restoring from a PG2 backup. Just make sure you delete the node servers from PG2 before doing the restore; the PG3 restore from a PG2 backup can't clean up what's on PG2 after the restore.
-
There have been changes to the way node servers work since this node server was ported. The node server is not responsible for that status value, PG3 is. I've updated the node server so it should work correctly with the latest PG3 design.
-
It is slot dependent. So if there is a PG2 node server in slot 2 that you are allowed to install, and you have a different node server in PG3 slot 2, the restore will overwrite the existing node server in PG3. However, if there isn't any slot "overlap" between what's in the PG2 backup and what's currently installed on PG3, then none of the existing PG3 node servers would be affected.
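If it helps to picture the overlap rule, here's a small hypothetical sketch; the slot numbers and node server names are made up for illustration and are not PG3's actual data model. Only slots that appear in both the PG2 backup and the current PG3 install get overwritten, and everything else is left alone:

```python
# Hypothetical illustration of the slot-overlap rule described above.
# These dicts are made up for the example; they are not PG3's real internals.
pg2_backup = {2: "WeatherNS", 5: "CameraNS"}       # slot -> node server in the PG2 backup
pg3_current = {2: "SomeOtherNS", 7: "ExistingNS"}  # slot -> node server currently on PG3

overwritten = {slot: ns for slot, ns in pg2_backup.items() if slot in pg3_current}
untouched = {slot: ns for slot, ns in pg3_current.items() if slot not in pg2_backup}

print("Overwritten by the restore:", overwritten)  # {2: 'WeatherNS'}
print("Left alone:", untouched)                    # {7: 'ExistingNS'}
```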
-
Added this to the issues list for the node server so I've captured the request.
-
Try version 2.0.4, I made some changes that I think should fix that.
-
Seems like that used to work. I wonder if a new version of Python is now more strict about that sort of thing.
-
Have you installed PG3? I don't believe it is being installed by default yet so you have to manually install it using the instructions in the release announcements for PG3 before you can use it.
-
No command sent by the ISY to a node is synchronous. In this case, the ISY sends the backup command to the node server node and does not wait for any response. The node server then queries every lighting-type node for its status, one by one. How long it takes to complete will depend on how many devices you have and what else the ISY is busy doing. Polling is not used: it only does the queries when asked, and the same applies to the restore.
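As a rough sketch of that sequential, non-polled pattern: the helper names, node addresses, and timing below are made up for illustration and are not the node server's actual API.

```python
# Rough sketch of the one-by-one query pattern described above.
# query_isy_node_status() and the node addresses are hypothetical placeholders,
# not the node server's real API.
import time

def query_isy_node_status(address: str) -> str:
    """Placeholder for a single status query to the ISY for one node."""
    time.sleep(0.1)            # each query takes real time on the ISY
    return "on"                # made-up result

def backup_lighting_state(lighting_nodes: list[str]) -> dict[str, str]:
    # Triggered by the backup command; the ISY does not wait for this to finish.
    saved = {}
    for address in lighting_nodes:   # sequential, one node at a time -- no polling
        saved[address] = query_isy_node_status(address)
    return saved

state = backup_lighting_state(["11 22 33 1", "11 22 34 1"])
print(state)
```

Total run time scales with the number of devices, which is why a large lighting setup can take a while to back up.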
-
Can't Install HolidaysGoogle... Error: Node server object missing uuid
bpwwer replied to StangManD's topic in HolidaysGoogle
You can send it directly via a PM here. Ideally, just download the PG3 log and attach that to the PM. Also, before you do that, can you verify what version of PG3 you are running?
-
Can't Install HolidaysGoogle... Error: Node server object missing uuid
bpwwer replied to StangManD's topic in HolidaysGoogle
I did the port of this node server from PG2 to PG3, so it's possible I broke something when I did the port. I don't use Google Calendars, so I'm not sure I can thoroughly test it myself. I have no issues installing it. The error "Node server object missing uuid" is not in any of the PG3 code. It also doesn't make any sense, because object names can't have spaces, so there aren't any objects in PG3 named "Node server". I don't doubt that you're getting an error, but with that information I'm not able to do anything to debug it. PG3 logs showing when you try to install it would help.
-
Support thread for: ISY on Polisy (IoP) v5.4.4 (May 25, 2022)
bpwwer replied to Michel Kohanim's topic in IoX Support
I'm wondering what a node server for WOL would look like... Configuration could be via custom parameters:
[host] : [mac address]
[host2] : [mac address2]
I would assume the goal would be to have a program action that sends the WOL packet to one of the configured devices. So would this create a node for each host, with that node having one command to send the WOL packet? Or would a single node with a dynamic parameter list for a send command be better? A node per host would be easier to implement, but that brings up the question for @Michel Kohanim: what is the node limit for IoP? It could be a pretty simple node server, so what's it worth?
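For what it's worth, the packet itself is trivial to send. Here's a minimal sketch of a standard Wake-on-LAN magic packet in Python, independent of any Polyglot API; the MAC and broadcast address are placeholders. It would map directly onto the "[host] : [mac address]" custom parameters above.

```python
# Minimal sketch of sending a standard Wake-on-LAN magic packet.
# The MAC address and broadcast address are placeholders; this is not
# tied to any Polyglot/PG3 API.
import socket

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    # Magic packet = 6 bytes of 0xFF followed by the target MAC repeated 16 times
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))

# Placeholder MAC -- would come from the custom parameters
send_wol("00:11:22:33:44:55")
```
-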
Yes, that sounds right. Yes, they will lose the reference, but if you re-install the node server into the same slot and it creates the same nodes with the same node addresses, that should "fix" most of those references. Also, you can remove node servers from the ISY/IoP directly using the Node Server -> Configure -> slot # -> Delete button. Use this to clean up any that don't have a Polyglot instance managing them. If a node server is being managed by a Polyglot instance and it gets removed from the ISY/IoP, the Polyglot delete should still delete it from the Polyglot database. So even if that one-to-one link between Polyglot and the ISY/IoP is broken, you should be able to clean it up without doing anything drastic.
-
I know this is confusing, but again, nothing about how node servers were implemented in the past was designed to support migration of node servers. When a node server is installed, it is installed on both the Polyglot instance and on the ISY. The ISY gets configured to point at the Polyglot instance and the Polyglot instance gets configured to point at the ISY. Let me go through a simple example. Here are the names I'll use to try to make it clear what I believe happened:

ISY994 - a 994-based ISY controller
ISYIOP - ISY controller running on a Polisy
POLISY - the Polisy controller
PG2 - Polyglot version 2 running on Polisy
PG3 - Polyglot version 3 running on Polisy
PG2NS1 - a node server from the PG2 node server store
PG3NS1 - the new version of the node server from the PG3 node server store

Originally, PG2NS1 was installed on ISY994 in slot #1. The configuration of that slot points back to PG2 on the Polisy. The PG2 database holds a reference that says PG2NS1 is installed on ISY994 in slot #1.

Looking at the PG3 dashboard when PG3 is configured for the ISY994, it will show PG2NS1 in slot #1 as "Unmanaged". That means that while PG3 knows something is installed in ISY994's slot #1, it has no reference in its database for it and thus cannot manage it.

Now we migrate ISY994 -> ISYIOP. The configuration on ISYIOP for the node server in slot #1 is copied from what was in ISY994. However, PG2 doesn't know that this happened; PG2 still has the reference to ISY994. Maybe the migration tool fixes the PG2 database entry as part of the migration, I don't know. If the node server continues to work after the ISY994 is unplugged, it must.

Now you add ISYIOP to PG3. Through the ISY menu you should be able to switch between the two ISY instances, and the dashboard will show what is installed in the currently selected ISY. NOTE: Both ISY994 and ISYIOP have the same PG2NS1 installed in slot #1 (you copied the configuration from ISY994 when you migrated to ISYIOP), so in PG3 the dashboard for each ISY should look the same. PG3 still does not have a reference to PG2NS1 in its database, so it remains "Unmanaged".

When you create a backup of PG3, you're not backing up the ISY(s). You're backing up the PG3 database and the installed node servers. At this point no node servers have been installed on PG3, so all you really did was back up the PG3 database and restore the PG3 database. NOTE: the backup doesn't care which ISY is selected, it backs up everything. I believe this is where you are now.

You have a couple of choices on how to proceed.

1) You can make a PG2 backup and attempt to restore that PG2 backup on PG3. The PG2 backup will be of PG2NS1 and its configuration from slot #1. When restoring this on PG3, PG3 will look to see if PG3NS1 exists, and if it does, it will install it in slot #1 of whichever ISY is currently selected, overwriting the PG2 configuration that is already there. The PG3 dashboard should then show PG3NS1 installed in slot #1. Switching PG3 to the other ISY will still show slot #1 as "Unmanaged" since you only restored the PG2 backup to the one ISY.

2) You can delete PG2NS1 from ISYIOP and then install PG3NS1 to slot #1 of ISYIOP. If the PG3 version of the node server is the same as the PG2 version, it should re-create the same nodes and everything will work. If it's not, you'll have to manually fix scenes and programs to use the PG3NS1 nodes instead of the PG2NS1 nodes.

To answer your last question, you can't migrate a PG3 node server from one ISY to another. PG3 can manage node servers installed on multiple ISYs (i.e. install/delete/start/stop), but manage and migrate are two different things.
-
That's strange. I'd like to understand better what happened the first time so that we can prevent that from happening to others in the future, but if you don't know, it's OK.
-
From the polyglot dashboard you can select the node server details and then the "Log" button. From there you can change the logging level. I believe it is set to Info by default. Changing to Error or Warning should reduce the amount of logging significantly.