Everything posted by bpwwer
-
Not just you; it appears to be broken. I changed the backend of the backup process but didn't touch the UI portion, and it's the UI portion that appears to be broken. It was working when I tested my changes. Go figure.
-
I just published version 2.0.1 that fixes at least some of the issues. Restarting the node server will automatically do the update to the new version.
-
You see a lot of these issues with the weather services for a few reasons:
1. The weather services tend to have a lot of data, so I tried to minimize how much was sent to the ISY; most node servers will only send updates when there are actual changes. This was done because the i994 ISY has limited network resources. It's probably less of an issue with IoP, but since the node servers still support both, it's not going to change right now.
2. The weather services tend to limit the number of API requests, so I don't enable interactive query; overuse of that could cause the limit to kick in and you'd end up getting no data until the limit was reset.
3. In some cases, the weather services just aren't sending that particular data for your specific location. In that case, the node server will send the default data when it starts and will never send it again until it is restarted, because it never gets updates from the service.
4. Or, like #3, the data is sent very infrequently. NOAA alerts, for example, are only sent when there is an alert published.
5. Most of the weather service node servers were written for an early, pre-alpha version of PG3 and do need updates to work better with the current version of PG3. That's on the list, but low priority at this time.
Also, since this is only an issue if the ISY is rebooted, the workaround at this time would be to restart the node servers after the ISY is rebooted. If you are having to reboot the ISY frequently for some reason, that is a bigger problem and should be addressed.
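The change-only update strategy described in #1 can be sketched in Python. This is just an illustration of the idea, not the actual PG3 API; the `send` callback here is a hypothetical stand-in for whatever the node server uses to push a driver value to the ISY.

```python
class ChangeOnlyReporter:
    """Caches the last value sent per driver and only forwards changes.

    This illustrates the trade-off described above: if the downstream
    consumer (the ISY) clears its state on reboot, it will show stale or
    empty values until the source value actually changes again.
    """

    def __init__(self, send):
        self._send = send   # hypothetical callable(driver, value) -> None
        self._last = {}     # driver name -> last value sent

    def set_driver(self, driver, value):
        # Only send when the value differs from what was last sent.
        if self._last.get(driver) != value:
            self._last[driver] = value
            self._send(driver, value)
```

With this design, a restarted ISY sees nothing for a driver whose value never changes again, which is exactly the symptom described above.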
-
You can delete them; it should be creating new logs each day. PG3 keeps its logs in /var/polyglot/pg3/logs
-
There are two different things being discussed above, but both may have the same root cause. If the ISY is not available when PG3 starts, that shouldn't affect PG3's ability to start node servers; if PG3 fails to start the node server, that is probably a bug. However, if the ISY is not available, then all status updates to the ISY will fail. Because PG3 typically only sends status updates when the status actually changes, if the ISY becomes available after PG3 has given up, PG3 won't try to send updates again until the status changes again.
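The failure mode described (updates lost while the ISY is unreachable, never retried) could in principle be avoided by remembering failed sends and flushing them on reconnect. The sketch below is purely illustrative of that idea, not how PG3 is actually implemented; the `send` callback is hypothetical and returns True only when the ISY accepted the update.

```python
class ResendingReporter:
    """Change-only reporter that remembers failed sends and retries them.

    `send(driver, value)` is a hypothetical transport call that returns
    True on success, False when the ISY is unreachable.
    """

    def __init__(self, send):
        self._send = send
        self._last_acked = {}   # driver -> last value the ISY accepted
        self._pending = {}      # driver -> value that still needs sending

    def set_driver(self, driver, value):
        if self._last_acked.get(driver) == value:
            # ISY already has this value; nothing to do.
            self._pending.pop(driver, None)
            return
        if self._send(driver, value):
            self._last_acked[driver] = value
            self._pending.pop(driver, None)
        else:
            # ISY unreachable: queue the value for a later retry.
            self._pending[driver] = value

    def on_isy_reconnect(self):
        # Flush everything the ISY never acknowledged.
        for driver, value in list(self._pending.items()):
            if self._send(driver, value):
                self._last_acked[driver] = value
                del self._pending[driver]
```

Without the pending-queue step, a value that fails to send while the ISY is down is simply dropped, which matches the behavior described above.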
-
I don't know what that node server does so I don't know.
-
The log files are in /var/polyglot/logs. They are supposed to restart each day and archive the previous day's log.
-
What I was trying to say above was that in many cases, this is the expected behavior with the current PG3 design.
-
Node server status is just the status of the MQTT link between PG3 and the node server; it only changes when the node server is started or stopped. Restarting the ISY has no impact on this. The ISY clears the status when restarted, and unless the node server is stopped and then restarted, it won't send any updates to the ISY. PG3 typically tries to minimize traffic to the ISY, so it won't send a status update unless the status has actually changed. Since the status hasn't changed, no update is ever sent. With PG2, node server status was handled differently and what it reported was different. There could be other differences around when it would or wouldn't send updates, since PG2 was not trying to minimize traffic like PG3 does. Thus PG2 and PG3 are not expected to behave the same.
-
Looks like one of the dependencies can no longer be installed on a Polisy. It will take some work to figure out if this can still be supported or not.
-
The node server store support is currently in flux as I work to improve it. One of the goals is to eventually remove the 'install local node server' button in favor of installing from the 'local store'. This will make the process for developers as close as possible to the process users will use to install node servers. However, with the current release, there isn't any way to get node servers into the local store. That's what I've been working on for a few days now. While the local store is primarily intended for developers, it can also be used to provide pre-loaded node servers on the Polisy, so that some node servers may be available even if the Polisy doesn't have an external network connection or the cloud-based node server store is down for some reason.
-
You go to the configuration page and enter your Emporia username and password and save.
-
This is maybe not the best place to discuss some of this; I've had some similar discussions over Slack. If we want to create nodes that represent the node servers, I think we should make that mandatory and provide a well-defined definition that lists what is required of those nodes. Today we have a mix of things: some node servers have a node just for node server status, some mix the status in with the device nodes, and some don't report status at all. In many cases, I've moved to not reporting status at all, because what we have as node server status today isn't as useful as many think. It really only tracks the MQTT connection between the node server and PG3 (which runs in its own thread in the node server), so everything else in the node server could be failing and your connection status would still be good.
-
If you want to make it a hard requirement, then yes. This is mostly just my opinion, but it seems like nodes are a representation of a device. If the device supports a heartbeat, then the node can also. Having a node represent the node server itself doesn't seem right to me. In general, the node server should be invisible. In support of this, I look at how Insteon and Z-Wave are implemented, and neither of those has "control" nodes that represent the controllers. I think of a node server as similar to a PLM or Z-Wave dongle, just implemented in software vs. hardware. I know this isn't a perfect analogy, but it seems pretty close. I know a node can represent anything, and if we really want to create nodes that represent the node servers, then I think it should have a standard node definition with standardized driver types, command types, and UOMs designed to represent that. One of the reasons I've been hesitant to create nodes for my node servers is that nodes are a limited resource, and using one just to report the status of my node server seemed like a waste of that resource. If this is no longer the case with IoP, then I'm less opposed to these types of nodes.
-
Scroll down farther, you can select temperature in either F or C, just the C values come first in the list.
-
Then the ISY needs a way to accept a heartbeat.
-
I'm not saying that nothing will work. It depends on the node server design and what the node server author does with it. But because node servers are multi-threaded, it is very possible that one thread will crash while the thread that has the connection to PG3 continues, and you'll never know that the node server isn't working. What we need is something to independently monitor the process and report back to the ISY when a node server is not functioning, but the ISY doesn't have any way to handle that yet.
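The multi-threaded failure mode described above (a worker thread dying while the connection thread stays up) is the classic case for a watchdog. Here's a minimal sketch of that idea in Python; `report_failure` is a hypothetical callback standing in for whatever independent monitor would notify the ISY, which, as noted, doesn't exist yet.

```python
import threading
import time

class Watchdog:
    """Flags a worker as dead if it stops 'petting' the watchdog in time.

    This is the kind of independent monitor described above: it doesn't
    care whether the MQTT/connection thread is still alive, only whether
    the worker thread is still making progress.
    """

    def __init__(self, timeout, report_failure):
        self._timeout = timeout
        self._report_failure = report_failure   # hypothetical notifier
        self._last_pet = time.monotonic()
        self._lock = threading.Lock()

    def pet(self):
        # The worker thread calls this each time it completes a cycle.
        with self._lock:
            self._last_pet = time.monotonic()

    def check(self):
        # A monitor thread calls this periodically; returns True if alive.
        with self._lock:
            stalled = time.monotonic() - self._last_pet > self._timeout
        if stalled:
            self._report_failure()
        return not stalled
```

The point of the sketch is that liveness is judged by work actually happening, not by a socket staying open, so a crashed worker is detected even while the connection thread keeps the status "green."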
-
That configuration looks ok to me. I can't actually get any data from your station with my token so I don't really know if it's OK on the WeatherFlow end.
-
Node servers are not required to have a node dedicated to providing node server status, nor are node servers required to report node server status to the ISY. Node server status (as it has typically been done) doesn't even really make sense with PG3. If the node server dies or crashes, there isn't a running node server to report that fact. PG3 will only change the status if someone actually presses the button to start/stop the node server, and if you're the one changing the status, you shouldn't need a notification to tell you that you changed it. It is possible for a node server to configure PG3 to send connection status to one of its node values. However, that only tracks the connection status between the node server and PG3. It is possible (and probably likely) that a node server can fail while still remaining connected to PG3.
-
It works fine for me even with your station ID and the error is coming from WeatherFlow. So it has to be your configuration. The only error in the node server is because it's not getting valid data from the WeatherFlow server.
-
Something is wrong with the station ID. When the node server contacts the WeatherFlow server to get the information about the station, the WeatherFlow server is responding with: That is coming from WeatherFlow, not the node server.
-
If you ever ran version 3.0.39 of PG3, then the PG3 database got corrupted and that would cause problems with all of the node servers installed on it. But it shouldn't have removed any of the directories. The only time it should be removing the directory would be when deleting the node server or possibly when restoring from backup.
-
All values for every node are currently numbers. It's up to the UI to convert those to text based on the mapping provided by the node server. It's possible that there's something in the mapping that is causing UD mobile to fail, but I believe the admin console isn't having problems with it. If conditions are showing up as unknown, then WeatherFlow probably added some new ones that I don't know about. If they add something new, the mapping has to be updated so that the node server knows what to do with it. Yes, it takes a while for the forecasts to populate. There's something wrong in the node server and it ends up waiting for one LongPoll interval before it makes the query to WeatherFlow for that info.
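The numeric-value-to-text mapping described above can be sketched like this. The condition codes and strings here are purely illustrative, not WeatherFlow's actual table; the point is what happens when the service adds a code the mapping doesn't know about.

```python
# Hypothetical mapping from numeric condition codes to display text.
# A real node server ships a table like this in its profile files; the UI
# (admin console, UD Mobile) converts the raw number to text using it.
CONDITION_TEXT = {
    0: "Clear",
    1: "Partly Cloudy",
    2: "Cloudy",
    3: "Rain",
}

def condition_to_text(code):
    # Any code missing from the table falls through to "unknown",
    # which is exactly the symptom described when the weather service
    # adds new conditions before the node server's mapping is updated.
    return CONDITION_TEXT.get(code, "unknown")
```

So "unknown" in the UI usually means the mapping is out of date on the node server side, not that the value itself failed to arrive.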
-
You have the parameters in the wrong order for this command. Should be: sudo service isy status
-
Just pushed version 2.0.4 which I think will resolve the issue.