Posts posted by Xathros
-
Hello everyone,
I just tried to take a backup of my system this afternoon and ran into a problem. See attached screenshot. I have tried multiple times and the backup failed at the same point on each try.
Here is an excerpt from the error log at the point of failure:
Thu 2023/09/07 12:35:56 PM 0 -170001 <s:Envelope><s:Body><u:GetSysConf xmlns:u="urn:udi-com:service:X_IoX_Service:1"><name>./FILES/CONF/NODES/UN0109.BIN</name></u:GetSysConf></s:Body></s:Envelope>
Thu 2023/09/07 12:35:56 PM 0 -110007 0
Any advice?
Thanks in advance.
-Xathros
-
54 minutes ago, GSutherland said:
I have never ssh'd from my Mac and have no idea how to do it... Can you give me some suggestions for Terminal to make sure I do it right?
Hello GSutherland,
Open the Terminal app on your Mac. It's a native app provided with macOS.
Type: ssh admin@<the-ip-address-of-your-eisy> (and press return)
The password is admin unless you have changed it. You won't see the password characters as you type them. (press return after typing it)
From there, enter the commands that bmercier posted above, pressing return after each. There may be errors or warnings reported; don't worry.
When completed, type: exit (and press return)
Then return to the PG3x dashboard and start your node server(s).
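Roughly, the whole session looks like this (just a sketch; 192.168.1.100 stands in for your eisy's actual address, and the middle step stands in for bmercier's commands above):
ssh admin@192.168.1.100
# enter the password when prompted; keystrokes are not echoed
# ...run the commands bmercier posted above, one per line, return after each...
exit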
Hope this helps.
-Xathros
-
I also performed the reinstall of aioquic and hit build errors along the way (log excerpt below). My node servers are now reconnected and appear to be working as expected.
Thanks!
-Xathros
Quote
Downloading pyOpenSSL-23.2.0-py3-none-any.whl (59 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 59.0/59.0 kB 1.8 MB/s eta 0:00:00
Building wheels for collected packages: aioquic
Building wheel for aioquic (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for aioquic (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [62 lines of output]
No `packages` or `py_modules` configuration, performing automatic discovery.
`src-layout` detected -- analysing ./src
discovered packages -- ['aioquic', 'aioquic.quic', 'aioquic.asyncio', 'aioquic.h0', 'aioquic.h3']
discovered py_modules -- []
running bdist_wheel
running build
running build_py
creating build
creating build/lib.freebsd-13.1-RELEASE-p7-amd64-cpython-39
creating build/lib.freebsd-13.1-RELEASE-p7-amd64-cpython-39/aioquic
copying src/aioquic/buffer.py -> build/lib.freebsd-13.1-RELEASE-p7-amd64-cpython-39/aioquic
copying src/aioquic/tls.py -> build/lib.freebsd-13.1-RELEASE-p7-amd64-cpython-39/aioquic
copying src/aioquic/__init__.py -> build/lib.freebsd-13.1-RELEASE-p7-amd64-cpython-39/aioquic
creating build/lib.freebsd-13.1-RELEASE-p7-amd64-cpython-39/aioquic/quic
copying src/aioquic/quic/events.py -> build/lib.freebsd-13.1-RELEASE-p7-amd64-cpython-39/aioquic/quic
copying src/aioquic/quic/configuration.py -> build/lib.freebsd-13.1-RELEASE-p7-amd64-cpython-39/aioquic/quic
copying src/aioquic/quic/__init__.py -> build/lib.freebsd-13.1-RELEASE-p7-amd64-cpython-39/aioquic/quic
copying src/aioquic/quic/packet.py -> build/lib.freebsd-13.1-RELEASE-p7-amd64-cpython-39/aioquic/quic
copying src/aioquic/quic/crypto.py -> build/lib.freebsd-13.1-RELEASE-p7-amd64-cpython-39/aioquic/quic
copying src/aioquic/quic/logger.py -> build/lib.freebsd-13.1-RELEASE-p7-amd64-cpython-39/aioquic/quic
copying src/aioquic/quic/packet_builder.py -> build/lib.freebsd-13.1-RELEASE-p7-amd64-cpython-39/aioquic/quic
copying src/aioquic/quic/recovery.py -> build/lib.freebsd-13.1-RELEASE-p7-amd64-cpython-39/aioquic/quic
copying src/aioquic/quic/rangeset.py -> build/lib.freebsd-13.1-RELEASE-p7-amd64-cpython-39/aioquic/quic
copying src/aioquic/quic/connection.py -> build/lib.freebsd-13.1-RELEASE-p7-amd64-cpython-39/aioquic/quic
copying src/aioquic/quic/stream.py -> build/lib.freebsd-13.1-RELEASE-p7-amd64-cpython-39/aioquic/quic
copying src/aioquic/quic/retry.py -> build/lib.freebsd-13.1-RELEASE-p7-amd64-cpython-39/aioquic/quic
creating build/lib.freebsd-13.1-RELEASE-p7-amd64-cpython-39/aioquic/asyncio
copying src/aioquic/asyncio/protocol.py -> build/lib.freebsd-13.1-RELEASE-p7-amd64-cpython-39/aioquic/asyncio
copying src/aioquic/asyncio/__init__.py -> build/lib.freebsd-13.1-RELEASE-p7-amd64-cpython-39/aioquic/asyncio
copying src/aioquic/asyncio/client.py -> build/lib.freebsd-13.1-RELEASE-p7-amd64-cpython-39/aioquic/asyncio
copying src/aioquic/asyncio/server.py -> build/lib.freebsd-13.1-RELEASE-p7-amd64-cpython-39/aioquic/asyncio
creating build/lib.freebsd-13.1-RELEASE-p7-amd64-cpython-39/aioquic/h0
copying src/aioquic/h0/__init__.py -> build/lib.freebsd-13.1-RELEASE-p7-amd64-cpython-39/aioquic/h0
copying src/aioquic/h0/connection.py -> build/lib.freebsd-13.1-RELEASE-p7-amd64-cpython-39/aioquic/h0
creating build/lib.freebsd-13.1-RELEASE-p7-amd64-cpython-39/aioquic/h3
copying src/aioquic/h3/events.py -> build/lib.freebsd-13.1-RELEASE-p7-amd64-cpython-39/aioquic/h3
copying src/aioquic/h3/connection.py -> build/lib.freebsd-13.1-RELEASE-p7-amd64-cpython-39/aioquic/h3
copying src/aioquic/h3/__init__.py -> build/lib.freebsd-13.1-RELEASE-p7-amd64-cpython-39/aioquic/h3
copying src/aioquic/h3/exceptions.py -> build/lib.freebsd-13.1-RELEASE-p7-amd64-cpython-39/aioquic/h3
running egg_info
writing src/aioquic.egg-info/PKG-INFO
writing dependency_links to src/aioquic.egg-info/dependency_links.txt
writing requirements to src/aioquic.egg-info/requires.txt
writing top-level names to src/aioquic.egg-info/top_level.txt
reading manifest file 'src/aioquic.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
adding license file 'LICENSE'
writing manifest file 'src/aioquic.egg-info/SOURCES.txt'
copying src/aioquic/_buffer.c -> build/lib.freebsd-13.1-RELEASE-p7-amd64-cpython-39/aioquic
copying src/aioquic/_crypto.c -> build/lib.freebsd-13.1-RELEASE-p7-amd64-cpython-39/aioquic
running build_ext
building 'aioquic._buffer' extension
creating build/temp.freebsd-13.1-RELEASE-p7-amd64-cpython-39
creating build/temp.freebsd-13.1-RELEASE-p7-amd64-cpython-39/src
creating build/temp.freebsd-13.1-RELEASE-p7-amd64-cpython-39/src/aioquic
cc -pthread -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -O2 -pipe -fstack-protector-strong -fno-strict-aliasing -fPIC -DPy_LIMITED_API=0x03070000 -I/usr/local/include/python3.9 -c src/aioquic/_buffer.c -o build/temp.freebsd-13.1-RELEASE-p7-amd64-cpython-39/src/aioquic/_buffer.o -std=c99
In file included from src/aioquic/_buffer.c:3:
/usr/local/include/python3.9/Python.h:11:10: fatal error: 'limits.h' file not found
#include <limits.h>
^~~~~~~~~~
1 error generated.
error: command '/usr/bin/cc' failed with exit code 1
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for aioquic
Failed to build aioquic
ERROR: Could not build wheels for aioquic, which is required to install pyproject.toml-based projects
-
I just discovered a better place for this at:
Sorry for the extra topic. Admins feel free to delete this thread.
-Xathros
-
Ditto. Just created a new thread on the subject before I discovered this thread. Sorry for muddying the waters. Here is the text of my other post:
I saw the announcement for 3.2.3 and decided to upgrade packages. I waited 35 minutes then logged into PG3x to find all my node servers disconnected. I have tried restarting the node servers, reinstalling all node servers and finally a full system reboot but all remain disconnected.
Any advice?
Thanks in advance.
-Xathros
-
58 minutes ago, bpwwer said:
Logging for node servers is done using a Python logging library. I don't believe it had support for compressing the old logs back when it was first used.
I've changed it now to only keep 14 days, but this type of change will only really affect new node server installations, not existing ones. I'm looking into compression to see if that's been added to the library.
I love how things get done around here! Issue discovered and a change in the pipeline less than 6 hrs later. Nowhere else does this happen!
What about updates/new versions of already-installed node servers? Or reinstalls of existing ones? Just curious. I suspect we'll be fine with 30 days of logs as long as they aren't set to debug for normal ops.
-Xathros
-
1 hour ago, bpwwer said:
I was wrong about the logging. The node server logging for PG3 node servers was copied from PG2 and has been configured to rotate daily and, I believe, keep 30 days of old logs. It's been like this for about 3 years (i.e., before PG3 existed).
It doesn't compress the logs.
As @Geddy says, there's no good reason to leave the level set to debug, as some node servers can be very chatty when set to that.
Having a 10G quota on the PG3 directory seems like a new(ish) thing too.
After this, I will endeavor to only set debug level for debugging, not for normal daily operations. 10 GB of uncompressed text is still an awful lot of text; it should be more than enough. I do think 1 to 2 weeks of retained logs would be sufficient. Just curious though, why not compress on rotation?
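For what it's worth, the stock Python logging handlers can be taught to compress on rotation with a couple of hooks. A minimal sketch (not PG3's actual code; the filename and retention are placeholders):
import gzip
import logging
import logging.handlers
import os
import shutil

# Rotate at midnight and keep 14 days of old logs.
handler = logging.handlers.TimedRotatingFileHandler(
    "nodeserver.log", when="midnight", backupCount=14
)

# The namer/rotator hooks make the handler gzip each rotated file.
def namer(default_name):
    return default_name + ".gz"

def rotator(source, dest):
    with open(source, "rb") as f_in, gzip.open(dest, "wb") as f_out:
        shutil.copyfileobj(f_in, f_out)
    os.remove(source)

handler.namer = namer
handler.rotator = rotator

log = logging.getLogger("nodeserver")
log.addHandler(handler)
log.setLevel(logging.INFO)
One caveat: on older Python versions, the handler's automatic cleanup of old files may not account for the added .gz suffix, so it's worth verifying that rotated logs actually get pruned.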
-Xathros
-
2 hours ago, Geddy said:
Yeah, it's a note to not leave logs in "debug" mode. Why set them all to that level and leave them that way? That's usually only meant for debugging a specific error. Leaving them in "info" or "error" mode, whichever is the default, should be fine for day-to-day operations. The few that I run are in "info" mode, as I don't worry about errors until I start having them. Then error or debug is what I would use to try to learn what's happening.
Glad Michel got you sorted out.
I totally agree. During setup of a new poly, I like to see everything to get a feel for what's going on under the hood. I knew the logs would be larger, but it's just text, it compresses well, and it's on a *nix platform, so they'll rotate and it will be fine - or so I thought. Everything is set to error now, and I've learned once again to never assume anything.
As usual, Michel and UDI's support is second to none.
-Xathros
-
Thanks all.
Michel remoted in and helped me out today.
41 minutes ago, bpwwer said:
No log data is stored in any database. However, it does keep a fair amount of log info, and debug logs can get quite large.
Most of the log data should already be compressed and I believe cleared once it's older than 2 weeks.
Also, even if the Kasa log exceeded some limit, it shouldn't be affecting PG3x's ability to start and run.
I'd need more information on what PG3x is doing, how you're determining that it's log file related, etc.
It was mostly my Kasa node server. I have 7 polys running and had all of the logs set to debug. Kasa, with 50 nodes, is *VERY* chatty and filled the 10G allotment for the polyglot filesystem. Between Kasa and the other 6 polys, it ran out of space with less than 2 weeks of logs! With no free space, PG3 would not start. It appears to me that in the current version, logs are not being compressed daily, if at all.
We deleted all the logs and I have set my log levels to error instead of debug.
I think there will be some changes coming to prevent this sort of thing from happening again.
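In the meantime, for anyone who wants to keep an eye on their own usage from an SSH session, something like the following should show where the space is going (a sketch; the nodeservers path is the one mentioned elsewhere in this thread):
df -h /var/polyglot
# per-node-server totals, largest offenders last
du -sh /var/polyglot/nodeservers/* | sort -h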
-Xathros
-
Hello everyone,
PG3 is down on my eisy. After some research, I have determined that PG3 is not starting because it has exceeded its disk quota. Looks like the database has ballooned and I've reached the 10G limit. Pretty sure this is my fault for having the log level set to debug on my Kasa node server. I didn't realize that the log data was stored in the database. My bad.
How do I clean up the log data, compact the DB, and get back online?
Any assistance is greatly appreciated.
-Xathros
-
Hello all-
All working now for me too. I did need to enable SNI on all of my webhook resources, which is new, but they work now after IFTTT's patch rollout.
Thanks all for the community support!
-Xathros
-
Hello guags99,
My first failure was:
Tue 2021/11/30 04:23:52 PM System -170001 [TCP-Conn] -256/-140002, Net Module Rule: 2
Also CST, and it was the first time a Maker event was called since the week before, as we were away and the events would not have fired while we were gone.
I saw mention of 11/29 earlier in this thread as well, so I suspect you're right.
-Xathros
-
Hello all,
I have tested with SNI checked and unchecked. Timeout set to 20,000 - no noticeable difference in time to failure from timeout of 1000. Where do I find the Client settings to check (or uncheck) the Verify box?
Seems like something changed outside the scope of the ISY for this to suddenly affect so many clients. Probably something on the IFTTT host side, but that's just a guess on my part. Thanks in advance for any guidance.
-Xathros
-
Hello all,
I'm seeing the same thing with all of my outbound maker network resources.
GET, POST, URL-encoded or not, SNI or not, they all fail. Every failure results in the same error in my error log - example follows:
Wed 2021/12/01 04:45:49 PM System -170001 <s:Envelope><s:Body><u:TestNetResource xmlns:u=
Wed 2021/12/01 04:45:49 PM System -170001 [TCP-Conn] -256/-140002, Net Module Rule: 5
I can call the webhook just fine from a browser on my computer.
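For anyone who wants to exercise the hook outside of a browser, it can also be hit from a terminal (a sketch; the event name and key are placeholders for your own IFTTT Maker values):
curl -X POST https://maker.ifttt.com/trigger/my_event/with/key/MY_KEY
# a JSON body can be attached much like the ISY resource does:
curl -X POST -H "Content-Type: application/json" -d '{"value1":"test"}' https://maker.ifttt.com/trigger/my_event/with/key/MY_KEY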
Any insight would be appreciated.
-Xathros
-
Me Too!
Spent about an hour troubleshooting. Should have come here first...
Thanks all!
-
All,
Thanks to everyone that jumped in here to help.
I just completed a remote support session with UDI. Looks like the database had become corrupted, likely due to a power failure. They very quickly had it repaired, and I'm back up and running with no lost config or anything.
As always, the support offered by UDI is unmatched by anyone!
-Xathros
-
14 minutes ago, Michel Kohanim said:
Ugh. Let's see if my second attempt to open a ticket was a success...
Thanks.
-Xath
-
2 hours ago, Bumbershoot said:
Since you can log in via SSH, you'd think that your network settings have persisted, so you might not have lost everything. You might check to see if your nodeservers are still intact. They're located here: "/var/polyglot/nodeservers". They should be owned by user 'polyglot'. This command at the prompt should show you that:
ls -alt /var/polyglot/nodeservers/
I believe that Polisy uses the ZFS file system, so the file system should be able to withstand random power outages without corruption. There was a recent upgrade to many of the FreeBSD system files, so maybe something went wrong during that upgrade. Maybe it's just a credential reset associated with that upgrade.
In any event, I'd prepare a support request...
Hi Bumbershoot,
My polys are there with proper ownership. When I run top, I see one of them running - the Kasa node server. It must not be fully running, though, as the nodes are not making it through to the ISY. I have already submitted a ticket with UDI. I'll follow up here as things progress.
Thanks!
-Xath
-
Sure did. Same result as with the user I had configured.
-
Hello all,
After a recent power outage at my lake house, I am no longer able to log into my Polisy via the web interface. I noticed I had no data from my node servers. I attempted to log into the Polisy and can't get past the username/password page. There is no error or any other kind of feedback when submitting credentials. I used my PDU to remotely reboot the Polisy and got the same result. I am able to log in via SSH, but only with the default creds admin/admin. The user I had created for myself no longer appears to exist. I followed the little bit of guidance I could find in the documentation to update and reboot from the CLI, only to find everything up to date and no change after reboot.
Any/all help here is appreciated.
Thanks in advance.
-Xathros
-
1 minute ago, Michel Kohanim said:
I suspect it's the cache in the browser. Can you please do any of the following and let me know:
1. Hit the refresh button on the browser
2. Clear the cache in the browser
3. If you have another browser, please try it
With kind regards,
Michel
Nailed it!
All better now.
-Xathros
-
43 minutes ago, asbril said:
Try "Restart Polyglot"
Thanks. I tried that and it's still showing the old FW version.
-Xathros
Posted in Support Thread for IoX 5.6.4 (IoX Support)
Hi DennisC,
Thanks. I figured out which node the file belongs with and then removed the file. The node still appears to operate normally and now my backup completes.
I'll watch for any other anomalies and hope that I haven't done any damage by removing the file.
-Xathros