apnar

Members · 211 posts


apnar's Achievements: Experienced (4/6), 55 Reputation

Community Answers

  1. That sucks. Are you still running in mirrored mode, or did you move fully to NVMe?
  2. What is the issue that is making you think you need to revert?
  3. The VNC server runs on the EISY itself, so you connect to the eisy IP on port 5900 with something like a RealVNC client. Think of it as a screen and keyboard connected to the VM, as opposed to VNC running within the VM. Just be sure to answer yes to that question in my script, since it isn't enabled by default.
  4. Yeah, I'd suggest starting from scratch. FYI, "vm info" will never show an actual IP address for these VMs. You can get the IP either from your router or you can connect to the console/vnc and look it up that way.
  5. Ok, please give the entire thing a go again (including another reboot), but capture the output of the script running; maybe there is something we can glean from that. When you say it was initially running, were you able to log in to Home Assistant? And when you say "after restarting", do you mean it worked fine until you restarted your eisy?
  6. @tibbar you seem to be stuck in an odd state; I wonder if there is an old attempt still running in the background. I'd try starting clean, rebooting, and trying again. This should get you to a close-to-clean state; run each command individually:

     sudo vm poweroff -f homeassistant
     sudo vm destroy -f homeassistant
     sudo vm switch destroy public
     sudo zfs destroy storage/vms
     sudo shutdown -r now

     FYI, the script starts the VM, so just give it some time after running the script and then try your access.
  7. @tazman thanks for giving it a try. Took me a bit, but I tracked down the issue. I use a significantly larger virtual disk in my setup, but had dropped it down to 16GB when I uploaded it, to match what was in the original script. Even though the OVA file from HA is only 800MB, they have it configured as a 32GB virtual disk, so when it was copied to the smaller 16GB volume it errored out. Guess that's what I get for thinking a small tweak wouldn't cause any issues. As to the error on line 126, I'm not seeing it, but I suspect it might be related to pasting the shell script on the forum, so I added some extra quoting, and I'll upload it as a file here in addition to pasting it. Updated version:

```sh
#!/bin/sh

# Where do we want to store VM resources (ZFS pool name and mount path)
VMFILESET="storage/vms"
VMDIR="/storage/vms"

# Home Assistant VM name and how much resources to allocate
HA_VM_NAME="homeassistant"
HA_VM_CPU="2"
HA_VM_MEM="1G"
HA_VM_DISC="32G"

# specify network interface - by default it's Ethernet re0
INTERFACE="re0"

if [ $(id -u) != "0" ]
then
  echo "Must be run with sudo"
  exit
fi

# Automatically pick the latest release from https://github.com/home-assistant/operating-system/releases/
HA_IMAGE_URL=$(curl -sL https://api.github.com/repos/home-assistant/operating-system/releases/latest | \
  grep "browser_download_url.*haos_ova.*qcow2.xz" | sed -e 's/.*: "\(.*\)"/\1/')

# Internal variables
TMPDIR=`mktemp -d`
IMAGE_NAME="${TMPDIR}/haos_ova-x86-64.img"
VM_CONF=${VMDIR}/${HA_VM_NAME}/${HA_VM_NAME}.conf

# make sure ifconfig_DEFAULT is not set as it causes tap0 interface issues
# ensure re0 is set to DHCP
sysrc -x ifconfig_DEFAULT
sysrc ifconfig_re0="DHCP"

echo "Make sure necessary packages are installed"
pkg install -y vm-bhyve edk2-bhyve wget qemu-tools

echo "Prepare /etc/rc.conf"
sysrc vm_enable="YES"
sysrc vm_dir="zfs:${VMFILESET}"
# this makes Home Assistant VM start up automatically on boot, comment out if this is not desired
sysrc vm_list=${HA_VM_NAME}

echo "Create ZFS fileset for VMs and prepare templates"
zfs create ${VMFILESET}
vm init
cp /usr/local/share/examples/vm-bhyve/*.conf ${VMDIR}/.templates/

# create VM networking (common for all VMs on the system)
vm switch create public
vm switch add public ${INTERFACE}

echo "Downloading image"
wget -O ${IMAGE_NAME}.xz ${HA_IMAGE_URL}
echo "Extracting..."
unxz ${IMAGE_NAME}.xz

echo "Creating a VM"
vm create -t linux-zvol -s ${HA_VM_DISC} ${HA_VM_NAME}

echo "Image info:"
qemu-img info ${IMAGE_NAME}
echo
echo "Copying image... (may take a bit of time)"
qemu-img convert ${IMAGE_NAME} /dev/zvol/${VMFILESET}/${HA_VM_NAME}/disk0
rm -rf ${TMPDIR}

GRAPHICS=0
echo -e "\n\nDo you want to enable unauthenticated VNC access to your Home Assistant virtual machine? (y/N)"
read answer
if [ "$answer" = "Y" ] || [ "$answer" = "y" ]
then
  GRAPHICS=1
  sysrc -f ${VM_CONF} graphics="yes"
fi

## Initial setup
TMPDIR=`mktemp -d`
# set zvol to full so partitions are visible
zfs set volmode=full ${VMFILESET}/${HA_VM_NAME}/disk0
sleep 2
# set baud rate for serial console
mount -t msdosfs /dev/zvol/${VMFILESET}/${HA_VM_NAME}/disk0p1 ${TMPDIR}
# add serial console to linux boot command line
sed -i '' -e 's/console=ttyS0/console=ttyS0,115200/' ${TMPDIR}/cmdline.txt
# umount device
umount ${TMPDIR}

SSH=0
echo -e "\n\nDo you want to enable SSH access to your Home Assistant virtual machine? (y/N)"
read answer
if [ "$answer" = "Y" ] || [ "$answer" = "y" ]
then
  SSH=1
  ## SSH Access
  # install ext4 fuse driver and load it
  pkg install -y fusefs-lkl
  kldload fusefs
  # mount vms /root/.ssh directory
  lklfuse -o type=ext4 /dev/zvol/${VMFILESET}/${HA_VM_NAME}/disk0p7 ${TMPDIR}
  sleep 5
  mkdir -p ${TMPDIR}/root/.ssh
  chmod 700 ${TMPDIR}/root
  chmod 755 ${TMPDIR}/root/.ssh
  # create a new SSH keypair and add it as an authorized key in the VM
  ssh-keygen -t ed25519 -N "" -C "Access Home Assistant running on eisy on port 22222" -f /home/admin/ha-ssh-key
  cp /home/admin/ha-ssh-key.pub ${TMPDIR}/root/.ssh/authorized_keys
  # set perms so admin user can access the ssh keys
  chown admin:admin /home/admin/ha-ssh-key*
  umount ${TMPDIR}
fi

## Clean up
# switch zvol mode back to normal
zfs set volmode=dev ${VMFILESET}/${HA_VM_NAME}/disk0
sleep 5
rmdir ${TMPDIR}

sysrc -f ${VM_CONF} loader="uefi"
sysrc -f ${VM_CONF} cpu=${HA_VM_CPU}
sysrc -f ${VM_CONF} memory=${HA_VM_MEM}

vm start ${HA_VM_NAME}
vm info ${HA_VM_NAME}
vm list

echo -e "\n\n##########################################################################\n"
echo "Please wait about 10 minutes and follow instructions at https://www.home-assistant.io/getting-started/onboarding/ to get your Home Assistant setup"
echo "ISY integration: https://www.home-assistant.io/integrations/isy994/"
echo -e "\nIf you need console access to the HA VM you can run \"sudo vm console ${HA_VM_NAME}\" then log in as root. Use \"~~.\" to exit."
if [ "$GRAPHICS" -eq "1" ]; then
  echo -e "\nYou can access your HA VM via VNC on port 5900 at your eisy's IP address."
fi
if [ "$SSH" -eq "1" ]; then
  echo -e "\nYou can access your HA VM via SSH using \"ssh -p22222 -i ha-ssh-key root@{HA VM IP}\""
fi
```

     md5 of the script is 93f64d030f2531578df06c46d35c8056. Attached: create_ha_vm.sh
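[Editor's note] Since an md5 is posted alongside the attachment, checking the downloaded file before running it is cheap. A minimal sketch: the `verify_md5` helper below is illustrative (not part of the posted script), and it uses GNU `md5sum`; the FreeBSD equivalent is `md5 -q file`.

```shell
#!/bin/sh
# Illustrative helper for checking a downloaded file against a published md5.
verify_md5() {
    file="$1"
    expected="$2"
    actual=$(md5sum "$file" | cut -d' ' -f1)
    [ "$actual" = "$expected" ]
}

# Demo with a throwaway file so the sketch runs anywhere; with the real
# attachment you'd run:
#   verify_md5 create_ha_vm.sh 93f64d030f2531578df06c46d35c8056
tmp=$(mktemp)
printf 'hello' > "$tmp"
if verify_md5 "$tmp" "5d41402abc4b2a76b9719d911017c592"; then
    echo "checksum OK"
else
    echo "checksum mismatch, re-download the script"
fi
rm -f "$tmp"
```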
  8. I finally had a few cycles to take a look at this. I wrote up a modified script that uses the VM-based Home Assistant images instead of the one meant for bare metal. I also added some other tweaks that let you choose, during install, whether to enable access to the VM via console, VNC, and SSH. Unfortunately there isn't an easy way to "fix" an existing install, so you'll need to start from scratch; you can try backing up and restoring your HA config, but no guarantees there. To start, the following will delete your current VM and all related configs. It will completely destroy your existing Home Assistant VM, so only run it if you want to start fresh or are sure you have all the backups you need:

     sudo vm poweroff -f homeassistant
     sudo vm destroy -f homeassistant
     sudo vm switch destroy public
     sudo zfs destroy storage/vms

     Here is a new create_ha_vm.sh script. Save it and run it with sudo. See updated version below. @Michel Kohanim it might be worth updating the blog post with this script so new installs get the proper image.
  9. The error is legit, but it is just a warning. The script on the blog uses the generic x64 OS image instead of one of the VM tailored images. They don’t have a FreeBSD VM image specifically but when I have some time I’ll see if I can get one of the other VM images running instead. For now it should be reasonably safe to ignore.
  10. 1) Yes. 2) In FreeBSD, ZFS keeps a cache of which pools are imported in the file /etc/zfs/zpool.cache. Since the /etc/zfs directory was missing, the file couldn't be created, so on boot nothing was imported or mounted. You first created the directory, but that alone didn't create the cache file; re-importing the pool triggered the cache file to be written.
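[Editor's note] The fix described above can be sketched as a short script, assuming the pool is named storage as elsewhere in this thread. The `regen_zpool_cache` name is mine, and the ZFS calls are guarded so the sketch is harmless where ZFS isn't installed; on the eisy you'd run it with sudo.

```shell
#!/bin/sh
# Sketch: recreate /etc/zfs so the pool cache file can be written, then
# export/import the pool to force zpool.cache to be regenerated.
regen_zpool_cache() {
    pool="$1"
    mkdir -p /etc/zfs            # directory the cache file lives in
    zpool export "$pool"         # drop the pool...
    zpool import "$pool"         # ...re-importing writes /etc/zfs/zpool.cache
}

if command -v zpool >/dev/null 2>&1; then
    regen_zpool_cache storage
    ls -l /etc/zfs/zpool.cache 2>/dev/null || echo "no cache file visible (run as root on the eisy)"
else
    echo "zpool not available here; run 'regen_zpool_cache storage' with sudo on the eisy"
fi
```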
  11. Try the commands I posted above to see if it imports. There was a bug in the scripts where it didn’t create the /etc/zfs directory and that stopped any non-root zfs pools from importing on reboot.
  12. Look at "zpool list" and see if storage is there. If not, try "sudo zpool import storage" and then look again. If you see it then, you are likely hitting the bug with the missing /etc/zfs directory, which causes /storage not to mount at boot. To fix it, run:

      sudo mkdir /etc/zfs
      sudo zpool export storage
      sudo zpool import storage

      Edit: added fix and sudo commands.
  13. Hmm. You can try dropping into the OS in HA by running "login". Then try pinging both your router and the IP of your eisy. If that's not working, something is up with the networking glue somewhere.
  14. I'd first make sure the MAC address listed is the one you have an assignment for on your router. Second, you can try a static IP in the VM just to test, by running something like this at the HA prompt (substituting the correct IPs):

      network update enp0s5 --ipv4-method static --ipv4-address 192.168.1.143/24 --ipv4-gateway 192.168.1.1 --ipv4-nameservers 192.168.1.1

      Edit: added /24 on IP.
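[Editor's note] On finding a VM's IP from its MAC (per answers 4 and 14, "vm info" never shows the IP): a rough sketch of filtering ARP output by MAC. The `find_ip_by_mac` helper is illustrative, not a vm-bhyve command, and the MAC and addresses are placeholders; on the eisy you'd pipe real `arp -an` output in instead of the sample text.

```shell
#!/bin/sh
# Sketch: pull the IP matching a given MAC out of `arp -an`-style output.
find_ip_by_mac() {
    mac="$1"
    # arp -an lines look like: ? (192.168.1.143) at 58:9c:fc:00:ab:cd on re0 ...
    awk -v m="$mac" '$0 ~ m { gsub(/[()]/, "", $2); print $2 }'
}

# Sample output for demonstration (placeholder addresses); for real use:
#   arp -an | find_ip_by_mac "<VM MAC from vm info>"
sample='? (192.168.1.143) at 58:9c:fc:00:ab:cd on re0 expires in 1188 seconds [ethernet]
? (192.168.1.1) at aa:bb:cc:dd:ee:ff on re0 permanent [ethernet]'

printf '%s\n' "$sample" | find_ip_by_mac "58:9c:fc:00:ab:cd"
```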