Cannot remove "unmanaged" node servers



Hello Everyone.

Sorry for the basic question. I've reviewed the documentation and searched the Polyglot interface, but how do I push a version update?

My Polyglot is at PG3 v3.1.18, and I'm experiencing the issues above with no way to remove "Unmanaged" node servers.

Thanks kindly


Ok. Thought I had resolved this issue.

I updated my ISY in the Polyglot dashboard, thinking my node server connectivity would be corrected, since I had failed to update it when originally switching from my ISY to my new Polisy.

Now I'm stuck here: unable to force an update of Polyglot with a reboot. Unsure of next steps.

Any suggestions greatly appreciated. 

[screenshot attached]

Edited by PB11

@PB11 I split your post out from a solved topic, as you're asking about "Unmanaged" node servers.

What hardware are you using? My guess would be Polisy.

I believe if they are showing unmanaged, they were managed by PG2 and you're logging into PG3.

You either need to go back to PG2, remove them there, and then install them on the PG3 side. Otherwise, you should be able to remove them from the IoX Admin Console and Polyglot should update. I think @bpwwer can confirm that's an option.

As for the update issue, review the release notes for 3.1.19. If you've attempted the normal reboot process, then try forcing a Polisy reboot from the Admin Console's Configuration tab. A last-ditch effort would be a full power cycle (pull the power supply). Make sure you're allowing ample time for the updates, and that the Polisy gives the expected beeps when you reboot it.

 


@PB11 I have a client with a Polisy that also has seven unmanaged PG2 node servers. I went so far as to remove the node servers manually from the Polisy: I SSH'd in and deleted all the node server directories from the PG2 directory. I also deleted them via the Admin Console, but they are still there.

It's something I've been meaning to create a ticket for, but since it's not breaking anything at the moment, it's been a low priority. I believe it has something to do with the portal account used during login to the portal. Since I support multiple clients, it is not out of the question that I used the wrong portal account when updating their system, and now the node servers are somehow linked to that other portal account. I'm reaching, but since the node servers are no longer actually on the Polisy, it's all I can think of.

It's just a theory, but I can't come up with another explanation as to why the node servers show up in PG2, but don't actually exist anywhere else.

Please keep us updated on what you find.

Edited by kzboray

Unmanaged means that whatever instance of PG3 you're running is not able to manage (start/stop/remove) those node servers because they were installed on something else.

In general, removing them via the IoX Admin Console will remove the unmanaged entries from the PG3 database, with one exception.

There is one case where PG3 won't remove the unmanaged entries: when the IoX reports that no node servers are currently installed after previously reporting there were node servers installed.

From PG3's point of view, the IoX said it had node servers installed and now it says it has none. There are two reasons this could happen:

1) You, the user, purposely removed all the node servers from the IoX.

2) Something went wrong, the IoX has been factory reset, and you are about to restore the IoX from a backup.

PG3 doesn't know which of those two cases caused the IoX to report no node servers installed. If it assumes #1 and it wasn't, then you'll end up in a bad state. If it assumes #2 and it wasn't, you end up with the unmanaged entries, which is annoying but not fatal. So it assumes #2.

So how can you force it to clear out those slots? Install a node server in an unused slot. Then the IoX will report 1 node server installed, PG3 will assume the IoX configuration is valid, and it will update its database to clear out the unmanaged entries.


It doesn't really matter who or what installed the node servers. The flow looks like:

- PG3 queries IoX for node servers installed on IoX
   - IoX returns the list.  Based on the info, PG3 knows if it installed the node server or something else installed the node server
   - For all node servers not installed by this PG3, it creates an "Unmanaged" database entry to show that something is already installed in that slot.

- PG3 queries IoX again for node servers installed on IoX
   - IoX returns an empty list

This triggers the behavior.  It doesn't matter who originally installed the node server, all that matters is that the list returned by the IoX previously had entries and now it has no entries.

If the list had 10 entries and you remove one via PG2 or via admin console so that the list now has 9, PG3 will also remove that entry.

If the list had 10 entries and you remove 9 of those either via PG2 or via admin console, then PG3 will remove those 9 entries.

If the list had 10 entries and you remove all 10, then you've triggered this special case handling.

Maybe this is a better way to explain it: if the number of node servers in the PG3 database is > 0 and the IoX returns 0 node servers installed, PG3 assumes something has gone wrong and doesn't update its database. This is because it can't "un-update" the database if the IoX returning 0 node servers was a temporary error.
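Put another way, the sync rule described above could be sketched roughly like the following Python; to be clear, the function and data shapes here are my own guesses at the behavior, not PG3's actual code:

```python
def reconcile(db_slots, iox_slots):
    """Sketch of the sync rule described above (names are hypothetical).

    db_slots:  {slot: name} entries PG3 currently has in its database
    iox_slots: {slot: name} node servers the IoX just reported installed
    """
    if db_slots and not iox_slots:
        # Previously there were node servers, now the IoX reports none.
        # This could be a factory reset awaiting a restore (case #2 above),
        # so play it safe and leave the database untouched.
        return db_slots
    # Otherwise trust the IoX list: drop any entry, managed or unmanaged,
    # for a slot the IoX no longer reports as installed.
    return {slot: name for slot, name in db_slots.items() if slot in iox_slots}
```

Which is why installing one node server in an empty slot works: the reported list becomes non-empty again, so the stale unmanaged entries fall out on the next sync.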

This topic is now closed to further replies.
