
Does the NVMe post have lots of errors...


awysocki
Solved by Michel Kohanim

Recommended Posts

I was looking at the Increase Performance or Add Redundancy? | UD Developer Docs (isy.io) post.  I'm not a Unix/FreeBSD expert, but the script doesn't work: it has two functions, but the starting function is never called. :-(

The post tells you to create the shell script:

Quote

Open a file and call it setup_nvme_boot.sh. Then copy/paste the following, save, and exit.

But at the very bottom of the post, it says to run the script using '.' instead of '_':

Quote

 

Once done, type:

    ./setup.nvme.boot

Wait for the process to complete!

You are done!

 

And LAST.... you can only run this script ONCE if you choose the Performance option, but you can run it many times if you select the Redundancy option???

I don't want to brick my machine....

Thanks

Link to comment

[screenshot attachment]

 

@Michel Kohanim could you flesh this out a bit?  Does this mean it will copy your current system back to the eMMC and then, after a reboot, allow you to run the new script with its features?  No guarantees in life, I know, but is this a path for those of us who went in at the deep end with the first version to resize our new drive?  Sorry, the text seems to be saying this, but obviously I would like to be sure.

Much appreciated.

 

  • Like 1
Link to comment

Michel,

Love that we now have a supported way to use NVMe, thank you! I'd like to see an intermediate option between the mirror and no-mirror options, as I want the speed of the latter with the protection of the former. I think this could be accomplished well with an asynchronous "mirror" that syncs a couple of times a day. If my drive does fail, I'd much rather have a backup from some time in the last 24 hours than nothing, especially since there aren't a lot of changes happening on my eisy.

Since you're using ZFS, this should be reasonably easy to accomplish with ZFS send/receive. At a scheduled time, take a snapshot of the NVMe pool, send it to the eMMC, then delete the snapshot from the previous run. It could be a simple script you whip up, or you could leverage something like zrepl or sanoid.
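For what it's worth, the snapshot/send/destroy cycle described above can be sketched in a few lines of sh. To be clear, every name here is an assumption for illustration: the source dataset (SRC), the destination dataset on the eMMC (DST), and the snapshot naming scheme are hypothetical placeholders, not UDI's actual layout.

```shell
#!/bin/sh
# Hypothetical names -- adjust for the real eisy layout.
SRC="zudi"              # dataset on the NVMe pool (assumed)
DST="emmc/nvme-backup"  # dataset on the eMMC pool (assumed)

NEW="${SRC}@sync-$(date +%Y%m%d-%H%M)"

# Most recent previous sync snapshot, if any (list is sorted oldest-first).
PREV=$(zfs list -H -t snapshot -o name -s creation 2>/dev/null \
        | grep "^${SRC}@sync-" | tail -1)

zfs snapshot "$NEW"

if [ -n "$PREV" ]; then
    # Incremental: only the changes since the last run cross over.
    zfs send -i "$PREV" "$NEW" | zfs receive -F "$DST"
    zfs destroy "$PREV"          # keep just the latest sync snapshot
else
    # First run: send a full stream.
    zfs send "$NEW" | zfs receive -F "$DST"
fi
```

Run from cron a couple of times a day, this gives exactly the "backup from some time in the last 24 hours" behavior; zrepl and sanoid do essentially this, with retention policies and error handling layered on top.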

Thanks for considering!

  • Like 3
Link to comment
On 2/22/2024 at 3:36 PM, sjenkins said:

[screenshot attachment]

 

@Michel Kohanim could you flesh this out a bit?  Does this mean it will copy your current system back to the eMMC and then, after a reboot, allow you to run the new script with its features?  No guarantees in life, I know, but is this a path for those of us who went in at the deep end with the first version to resize our new drive?  Sorry, the text seems to be saying this, but obviously I would like to be sure.

Much appreciated.

 

@sjenkins & @Michel Kohanim, I am also curious about this...

Link to comment

@Michel Kohanim

Just ran through the new easy way.  I ran into an error, which I tracked down.  It's sort of my own fault, but I think you could improve the script a bit to avoid it.

The prompt reads:

Would you like to add an extra ZFS partition for your own purposes? (Y/n)

As a regular *nix user, I read this to mean there are two choices, yes or no, and that yes is the default since it is capitalized.  So I just hit Enter to accept the default, since that is what I wanted to do.  This resulted in the following error:

[: =: unexpected operator

Checking the code, this is because the check on the result isn't properly quoted.  It also ended up giving a result opposite to what I wanted.

Luckily, the rest of the script continues on smoothly, just without creating the extra partition.  I was able to go back, manually remove the partitions, and recreate them as the script would have; just a bit of an annoyance, and likely outside the skill set of many.

 

TLDR; I suggest changing line 352 in /usr/local/etc/udx.d/static/fl.ops from:

if [ $answer = "Y" ]

to something like:

if [ "$answer" = "Y" ] || [ "$answer" = "y" ] || [ "$answer" = "" ]

and while all the other questions in the script do properly quote $answer, it would be worth updating them to handle both cases, as well as Enter for accepting the default.
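As a sketch of the broader fix (the function name is hypothetical; in the real script the value comes from `read answer`), a quoted case statement accepts Y, y, yes, or a bare Enter in one place, and can never hit the unquoted-operand error because case doesn't word-split its subject:

```shell
#!/bin/sh
# Return 0 (yes) for Enter, Y, y, or any casing of "yes"; 1 otherwise.
# Hypothetical helper for illustration -- not from the UDI script.
read_yes_default() {
    answer="$1"   # in the real script this would come from: read answer
    case "$answer" in
        ""|[Yy]|[Yy][Ee][Ss]) return 0 ;;  # empty string = accept default
        *)                    return 1 ;;
    esac
}
```

This makes Enter do what the (Y/n) prompt implies, and sidesteps the `[: =: unexpected operator` failure entirely since no `[` test is involved.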

  • Like 3
Link to comment
  • 2 weeks later...

@Michel Kohanim

Ran into another small issue.  On reboots, my ZFS pool /storage wasn't mounting automatically.  It worked fine if I did a "zpool import storage" after boot.  I tracked the issue down to a missing /etc/zfs directory, which stopped the creation of the normal zpool.cache file.  It's possible it's something I missed in my manual steps recovering from the issue above, but I don't see it created in any of the scripts.

So I'd suggest adding a line to the scripts somewhere that creates /etc/zfs.
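A minimal sketch of that fix, assuming the extra pool is named storage as above:

```shell
# Make sure the cache directory exists, then point the pool's cachefile
# property at the standard location so the import persists across reboots.
sudo mkdir -p /etc/zfs
sudo zpool set cachefile=/etc/zfs/zpool.cache storage
```

Setting the cachefile property explicitly forces ZFS to (re)write /etc/zfs/zpool.cache immediately, so an export/import cycle shouldn't be needed afterward.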

 

  • Like 1
Link to comment

I suspect this will be an issue for anyone with a zpool beyond the default eisy one (zudi): folks that still boot from eMMC and added a drive, and folks that mirror or boot from NVMe and added an extra pool.  Basically, anyone that has an extra zpool (most likely called /storage).

Easy enough to test: just reboot and see if /storage is there (or visible in "zpool status").

Should be no harm in creating the directory:

"sudo mkdir /etc/zfs"

You may need to do a "sudo zpool export storage" then a "sudo zpool import storage" to get the correct cache file created once the directory is there.

Link to comment
This topic is now closed to further replies.
