
Configuring eisy to boot from an NVMe SSD without losing VMs



Figured out the issue. If you didn't do the partition version originally but did the simple version instead, this one won't work. I had to do the following (remember, this wipes out everything on the storage drive):

zpool destroy storage
gpart create -s gpt nvd0

After that I could run the script:

bash -c "source ~admin/setup_nvme_boot.sh; setup_nvme_boot"

For the faint of heart, wait for the packaged version later this week.
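
If you're not sure which version you originally ran, a quick way to check (just a sketch, assuming the pool is named storage and the NVMe shows up as nvd0) is to look at what the pool is built on:

# if zpool status lists the whole disk (nvd0) you did the simple version;
# if it lists a partition such as nvd0p2, you did the partition version
zpool status storage
# shows the GPT layout, or an error if the disk has no partition scheme yet
gpart show nvd0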

12 hours ago, sjenkins said:

Figured out the issue. If you didn't do the partition version originally but did the simple version instead, this one won't work. I had to do the following (remember, this wipes out everything on the storage drive):

zpool destroy storage

gpart create -s gpt nvd0

After that I could run the script:

bash -c "source ~admin/setup_nvme_boot.sh; setup_nvme_boot"

For the faint of heart, wait for the packaged version later this week.

Did you have to unmount 'storage' first?  After zpool destroy storage, I get a "cannot unmount '/storage': pool or dataset is busy."

@sjenkins Got it, I had to use: zpool destroy -f storage
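
For anyone else who hits "pool or dataset is busy," the usual culprit is a shell or process sitting inside /storage. A gentler check before reaching for -f (a sketch, not from the posts above) is:

# make sure your own shell is not parked inside the dataset
cd /
# list processes holding files open on the filesystem containing /storage
fstat -f /storage
# then retry without force
zpool destroy storage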

33 minutes ago, Whitehambone said:

Did you have to unmount 'storage' first?  After zpool destroy storage, I get a "cannot unmount '/storage': pool or dataset is busy."

Make sure you are not "in" the directory on your admin account before you go to sudo -i, or it won't let you unmount it.

Also, for those watching this thread: when this script is done, your mmcsd0 drive is "gone."

If you want to use this old, slow internal drive for storage, you can (I did) turn it into a drive for your own use (backups?).

Make sure you are in sudo -i, then:

zpool create storage /dev/mmcsd0
zpool list
chown admin:admin /storage
zpool export storage
zpool import storage

Now you have a /storage drive of ~50G to play with. It may not be of much use since you just added a faster 1T drive, but I was thinking of backing up the contents of my admin home directory there.
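
As a usage example (only a sketch; the archive name is made up, and it assumes the admin home directory lives under /home), backing up the admin home directory to the reclaimed pool could look like:

# write a compressed archive of the admin home directory to the eMMC-backed pool
tar -czf /storage/admin-home-backup.tar.gz -C /home admin
# confirm the archive landed
ls -lh /storage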

 

4 minutes ago, Michel Kohanim said:

Hello all,

Apologies for the delay and lack of presence here. The script that comes with udx in the next release is fully tested; it also creates BIOS labels and allows you to keep the eMMC intact (as a mirror).

With kind regards,
Michel

Does keeping the eMMC as a mirror with the new udx script still limit you to the size of the eMMC, and does the slower device (the eMMC) determine the system throughput?

12 minutes ago, Michel Kohanim said:

Hello all,

Apologies for the delay and lack of presence here. The script that comes with udx in the next release is fully tested; it also creates BIOS labels and allows you to keep the eMMC intact (as a mirror).

With kind regards,
Michel

Sorry to barrage you, Michel.

I had to look up BIOS labels, and I'm trying to figure out what they do for operations.

For those of us who jumped off the ledge and already did this, is there anything in the script that will 'haunt' us later as a change?


@sjenkins,

BIOS/EFI labels = if you are attached to a monitor + keyboard, you can keep pressing F7 during boot to get the boot menu. Without the labels, the entries look like (A443333/533324) ... with the labels, they look like:

eisy.nvme.boot.loader
eisy.boot.loader (original eMMC)

With regard to the script, it will destroy everything! That said, you will be asked whether or not you want to proceed.

With kind regards,
Michel
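
For anyone wanting to see these from a running system, two commands worth knowing (a sketch; not part of the udx script) are gpart show -l, which prints GPT partition labels, and FreeBSD's efibootmgr, which lists the EFI boot entries the F7 menu is built from:

# GPT partition labels on the NVMe
gpart show -l nvd0
# EFI boot manager entries, verbose
efibootmgr -v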

16 minutes ago, Whitehambone said:

Does keeping the eMMC as a mirror with the new udx script still limit you to the size of the eMMC, and does the slower device (the eMMC) determine the system throughput?

Yes and almost. I did my own testing and decided against the mirror. It still performed better than the eMMC alone, but in some instances mirror-less performed much better (massive disk writes).

With kind regards,
Michel
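
If you want to reproduce that kind of comparison yourself, a crude large-write test (just a sketch; the target path and size are arbitrary, and ZFS compression can make writes from /dev/zero look unrealistically fast) is something like:

# run from a directory on the pool you want to test;
# write ~4 GB sequentially and note the throughput dd reports,
# once with the mirror attached and once without, to compare
dd if=/dev/zero of=ddtest bs=1m count=4096
rm ddtest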

2 hours ago, sjenkins said:

Also, for those watching this thread: when this script is done, your mmcsd0 drive is "gone."

If you want to use this old, slow internal drive for storage, you can (I did) turn it into a drive for your own use (backups?).

Make sure you are in sudo -i, then:

zpool create storage /dev/mmcsd0
zpool list
chown admin:admin /storage
zpool export storage
zpool import storage

Now you have a /storage drive of ~50G to play with. It may not be of much use since you just added a faster 1T drive, but I was thinking of backing up the contents of my admin home directory there.

@sjenkins Thanks for the tip on reclaiming this storage space. I'm not sure if you plan to run Home Assistant on your eisy, but since I now have that space named 'storage' again, I would have to modify the ZFS pool name and mount path in the Home Assistant VM Helper Script, correct? Otherwise, it will install on the slow eMMC flash drive.

# Where do we want to store VM resources (ZFS pool name and mount path)
VMFILESET="storage/vms"
VMDIR="/storage/vms"
NAME      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
storage  57.5G   135K  57.5G        -         -    0%   0%  1.00x  ONLINE        -
zudi     50.5G  2.51G  48.0G        -      868G   20%   4%  1.00x  ONLINE        -

I would have to rename it to "zudi/vms." Have you looked at this helper script yet?

31 minutes ago, Whitehambone said:

@sjenkins Thanks for the tip on reclaiming this storage space. I'm not sure if you plan to run Home Assistant on your eisy, but since I now have that space named 'storage' again, I would have to modify the ZFS pool name and mount path in the Home Assistant VM Helper Script, correct? Otherwise, it will install on the slow eMMC flash drive.

# Where do we want to store VM resources (ZFS pool name and mount path)
VMFILESET="storage/vms"
VMDIR="/storage/vms"
NAME      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
storage  57.5G   135K  57.5G        -         -    0%   0%  1.00x  ONLINE        -
zudi     50.5G  2.51G  48.0G        -      868G   20%   4%  1.00x  ONLINE        -

I would have to rename it to "zudi/vms." Have you looked at this helper script yet?

Yes, the script will need to be changed to point to the new location, but you need to "make" a place for it first (see below).

First, though, did you notice the EXPANDSZ column? Your drive has not expanded to use the whole space available on the disk.

You can fix this with:

zpool set autoexpand=on zudi
zpool online -e zudi nvd0p2
# note: replace nvd0p2 with the device name listed under zudi in zpool status -v
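
To confirm the expansion took effect (a quick check, not part of the original steps):

# FREE should now reflect the full partition size, and EXPANDSZ should drop to -
zpool list zudi
# shows the vdev name (e.g. nvd0p2 or a gpt/ label) used in the command above
zpool status -v zudi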

 

Here is where you "make" a place to put your VMs or whatever. ZFS prefers to manage the whole pool rather than a bunch of gpart partitions (they don't talk to each other well), so what I did was create another dataset in the pool and mount it at a location I would use.

# create the dataset
zfs create zudi/extra
# mount it at the root of the filesystem
zfs set mountpoint=/extra zudi/extra
# set a quota on this dataset if you like
zfs set quota=200G zudi/extra
# permissions are nice
chown admin:admin /extra

Now you can use /extra (or whatever you call it) as a place for the VMs.
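
A quick way to confirm the dataset, mountpoint, and quota came out as intended (a sketch using standard zfs list options):

# name, space used, space available, quota, and mountpoint for the new dataset
zfs list -o name,used,avail,quota,mountpoint zudi/extra
# check the ownership on the mountpoint
ls -ld /extra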

3 hours ago, Michel Kohanim said:

@sjenkins,

BIOS/EFI labels = if you are attached to a monitor + keyboard, you can keep pressing F7 during boot to get the boot menu. Without the labels, the entries look like (A443333/533324) ... with the labels, they look like:

eisy.nvme.boot.loader
eisy.boot.loader (original eMMC)

Cool, I added a label to mine as well (with a bit of help from Google).

before:

root@eisy:~ # gpart show
=>        40  1953525088  nvd0  GPT  (932G)
          40      131072     1  efi  (64M)
      131112  1927282688     2  freebsd-zfs  (919G)
  1927413800     8388608     3  freebsd-swap  (4.0G)
  1935802408    17722720        - free -  (8.5G)

root@eisy:~ # gpart show -l
=>        40  1953525088  nvd0  GPT  (932G)
          40      131072     1  efi  (64M)
      131112  1927282688     2  (null)  (919G)
  1927413800     8388608     3  swap0  (4.0G)
  1935802408    17722720        - free -  (8.5G)

then:

root@eisy:~ # gpart modify -l eisy.nvme -i2 nvd0
nvd0p2 modified

after:

root@eisy:~ # gpart show -l
=>        40  1953525088  nvd0  GPT  (932G)
          40      131072     1  efi  (64M)
      131112  1927282688     2  eisy.nvme  (919G)
  1927413800     8388608     3  swap0  (4.0G)
  1935802408    17722720        - free -  (8.5G)
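
As an aside (not from the posts above): FreeBSD also exposes GPT labels as device nodes, so the new label should show up under /dev/gpt, though a label device may not appear while the partition is already open under its nvd0p2 name.

# GPT label device nodes (may be absent if the provider is in use by its raw name)
ls /dev/gpt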

Thanks again, @Michel Kohanim!

2 hours ago, sjenkins said:

Yes, the script will need to be changed to point to the new location, but you need to "make" a place for it first (see below).

First, though, did you notice the EXPANDSZ column? Your drive has not expanded to use the whole space available on the disk.

You can fix this with:

zpool set autoexpand=on zudi
zpool online -e zudi nvd0p2
# note: replace nvd0p2 with the device name listed under zudi in zpool status -v

 

Here is where you "make" a place to put your VMs or whatever. ZFS prefers to manage the whole pool rather than a bunch of gpart partitions (they don't talk to each other well), so what I did was create another dataset in the pool and mount it at a location I would use.

# create the dataset
zfs create zudi/extra
# mount it at the root of the filesystem
zfs set mountpoint=/extra zudi/extra
# set a quota on this dataset if you like
zfs set quota=200G zudi/extra
# permissions are nice
chown admin:admin /extra

Now you can use /extra (or whatever you call it) as a place for the VMs.

@sjenkins I followed both of your above recommendations, thank you! I am still having some trouble running the helper script. Based on the above, wouldn't the pool and mount path be:

# Where do we want to store VM resources (ZFS pool name and mount path)
VMFILESET="zudi/extra"
VMDIR="/zudi/extra"

 

3 hours ago, Whitehambone said:

@sjenkins I followed both of your above recommendations, thank you! I am still having some trouble running the helper script. Based on the above, wouldn't the pool and mount path be:

# Where do we want to store VM resources (ZFS pool name and mount path)
VMFILESET="zudi/extra"
VMDIR="/zudi/extra"

 

This is just opinion without knowing for sure, but if you followed the posts above, the dataset is already mounted at /extra, so that is the mount path to use.

I would try that.
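
Following that suggestion, the two variables in the helper script would presumably end up as below (a sketch; it assumes zudi/extra mounted at /extra as set up earlier in the thread):

# Where do we want to store VM resources (ZFS pool name and mount path)
VMFILESET="zudi/extra"
VMDIR="/extra"
# sanity check that the dataset really is mounted there
zfs get mountpoint zudi/extra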


An update: with all the "issues" I have seen, I put in a ticket to get information on my path for getting my eisy set up to boot from the NVMe with HA installed on it again. @Michel Kohanim told me all the scripts and information will be updated next week, so I will wait for the updates and hopefully have a smooth transition to getting it all set up again. Just to be clear, my eisy is running fine with HA on the NVMe; I just want to change things so the eisy uses the NVMe for everything, for the enhanced performance.
