Everything posted by custom90gt

  1. Is anyone running any special ZFS vdevs? I want to set up a ZFS metadata special device with Unraid, but I'm not sure whether it's supported. I tried SLOG devices for fun in the past, but the array wouldn't start up properly (though that's likely my own fault). Is anyone out there running anything "non-GUI-supported"? Any hints on how to set this up? Thanks!
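
     For reference, on a plain OpenZFS setup a metadata special vdev is normally added with zpool add. This is a sketch only, not Unraid-specific advice: the pool name "tank", the device paths, and the 32K threshold are placeholder assumptions, and a vdev added outside the GUI may confuse Unraid's pool tracking:

         # Add a mirrored metadata special vdev to an existing pool.
         # Always mirror it: losing the special vdev loses the whole pool.
         zpool add tank special mirror /dev/nvme0n1 /dev/nvme1n1

         # Optionally route small file blocks to the fast devices too;
         # blocks <= 32K in this dataset then land on the special vdev.
         zfs set special_small_blocks=32K tank/dataset

         # Confirm the new vdev shows up in the layout
         zpool status tank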
  2. Hmmm, I have the same issue sending files to my backup server via SSH. I'll try rolling back to see if that fixes it.
  3. Sorry, stupid question: which file shows the /mnt// error?
  4. I did, but I didn't try to restore my backups. It's just weird that none of my backups worked at all.
  5. The last diags are from when I was trying to load the most recent backup from before I upgraded to RC4. I also tried earlier backups, to no avail. I don't know where the /mnt// error comes from; my only thought is that it has something to do with the new exclusive access, since none of my backups work. Sadly I've already got the server back up and running by manually reinstalling everything, so I'm not willing to break it down just yet. Once my backup server is done running some hardware tests in a few days, I'll try upgrading it to RC4 and see if things break once I select exclusive mode on it. If they do, I can do more testing for you.
  6. The problem is that I have many current backups and none of them will mount; the backup I tried to use was made two days ago. The one on my backup server from a week ago doesn't mount either. Not sure what's going on with it.
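
     One hedged way to check whether a given backup image is intact at all, assuming libvirt.img is an ordinary loopback filesystem image as on a stock Unraid install (the scratch mount point is a placeholder):

         # Identify the filesystem inside the image (expect btrfs or xfs)
         blkid /mnt/zfs/system_backup/libvirt/libvirt.img

         # Try mounting it read-only on a scratch directory and peek inside
         mkdir -p /mnt/imgtest
         mount -o loop,ro /mnt/zfs/system_backup/libvirt/libvirt.img /mnt/imgtest
         ls /mnt/imgtest
         umount /mnt/imgtest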
  7. Sadly it doesn't seem to be able to start after changing it to any of the backup images. I also tried copying the backup to /mnt/user/system/libvirt/libvirt.img and restarting, but still no VMs. I'm not able to get any of my dockers working either. I'm reinstalling them, but that's not my preferred thing for sure. harley-diagnostics-20230428-1327.zip
  8. Sadly I already did, and it didn't work out. I didn't try pointing to it directly, though. *on edit* I just tried pointing directly to the backup folder, and it gives me an error that it cannot start the libvirt service.
  9. You are right, I must have interrupted it or something:

     root@Harley:~# find /mnt -name libvirt.img
     /mnt/user/system/libvirt/libvirt.img
     /mnt/user/system_backup/libvirt/libvirt.img
     /mnt/zfs/system_backup/libvirt/libvirt.img
     /mnt/cache/system/libvirt/libvirt.img
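
     Given those paths, a minimal restore-by-copy sketch (assuming the VM service is stopped first, and that /mnt/zfs/system_backup holds a known-good copy; adjust the paths to whichever backup actually mounts):

         # Set aside the current image before overwriting it
         mv /mnt/user/system/libvirt/libvirt.img /mnt/user/system/libvirt/libvirt.img.bad

         # Copy a known-good backup over the live location,
         # then re-enable VMs from the GUI
         cp /mnt/zfs/system_backup/libvirt/libvirt.img /mnt/user/system/libvirt/libvirt.img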
  10. I went back to RC3, but everything was still broken. Sadly, that's the only thing that shows up after pasting in your command and waiting about five minutes.
  11. That diagnostic also had the "replaced" libvirt.img. Output:

      /mnt/user/system/libvirt/libvirt.img
  12. This may be why I lost all of my dockers and VMs...
  13. I changed the directory, hit apply, and then changed it back to default before downloading the diagnostics. Not sure where that extra forward slash is coming from. harley-diagnostics-20230428-0839.zip
  14. NAME                  USED  AVAIL  REFER  MOUNTPOINT
      cache                 151G   570G   160K  /mnt/cache
      cache/appdata        3.20G   570G  3.20G  /mnt/cache/appdata
      cache/domains        20.7G   570G  20.7G  /mnt/cache/domains
      cache/downloads      83.0G   570G  83.0G  /mnt/cache/downloads
      cache/isos           9.87G   570G  9.87G  /mnt/cache/isos
      cache/system          564K   570G   564K  /mnt/cache/system
      cache/temp           34.4G   570G  34.4G  /mnt/cache/temp
      optane               27.4G  79.2G   104K  /mnt/optane
      optane/plex_appdata  27.3G  79.2G  22.3G  /mnt/optane/plex_appdata
      zfs                  22.6T  29.0T   529K  /mnt/zfs
      zfs/appdata_backup   3.38G  29.0T  3.38G  /mnt/zfs/appdata_backup
      zfs/apps             77.6G  29.0T  77.6G  /mnt/zfs/apps
      zfs/army             2.37G  29.0T  2.37G  /mnt/zfs/army
      zfs/auto_stuff       11.7G  29.0T  11.7G  /mnt/zfs/auto_stuff
      zfs/backup_server     205K  29.0T   205K  /mnt/zfs/backup_server
      zfs/backups          83.4G  29.0T  83.4G  /mnt/zfs/backups
      zfs/books            11.6G  29.0T  11.6G  /mnt/zfs/books
      zfs/capture_backup   12.5G  29.0T  12.5G  /mnt/zfs/capture_backup
      zfs/documents        1.43G  29.0T  1.43G  /mnt/zfs/documents
      zfs/domains_backup   20.9G  29.0T  20.9G  /mnt/zfs/domains_backup
      zfs/exercises        23.0G  29.0T  23.0G  /mnt/zfs/exercises
      zfs/family_backups    432G  29.0T   432G  /mnt/zfs/family_backups
      zfs/flash_backup     1.28G  29.0T  1.28G  /mnt/zfs/flash_backup
      zfs/format           27.2G  29.0T  27.2G  /mnt/zfs/format
      zfs/games            6.80T  29.0T  6.70T  /mnt/zfs/games
      zfs/holly            7.63G  29.0T  7.63G  /mnt/zfs/holly
      zfs/home_videos      17.8G  29.0T  17.8G  /mnt/zfs/home_videos
      zfs/kid_movies       1.01T  29.0T  1.01T  /mnt/zfs/kid_movies
      zfs/kid_tv_shows     1000G  29.0T  1000G  /mnt/zfs/kid_tv_shows
      zfs/movies           5.84T  29.0T  5.84T  /mnt/zfs/movies
      zfs/music             141G  29.0T   141G  /mnt/zfs/music
      zfs/other_pictures   26.2G  29.0T  26.2G  /mnt/zfs/other_pictures
      zfs/pictures          269G  29.0T   269G  /mnt/zfs/pictures
      zfs/plex_backup      50.7G  29.0T  48.9G  /mnt/zfs/plex_backup
      zfs/portuguese       19.8G  29.0T  19.8G  /mnt/zfs/portuguese
      zfs/residency        7.74G  29.0T  7.74G  /mnt/zfs/residency
      zfs/school_stuff      344G  29.0T   344G  /mnt/zfs/school_stuff
      zfs/system_backup    1.03M  29.0T  1.03M  /mnt/zfs/system_backup
      zfs/system_files     11.9G  29.0T  11.9G  /mnt/zfs/system_files
      zfs/tv_shows         6.24T  29.0T  6.24T  /mnt/zfs/tv_shows
      zfs/westerns          170G  29.0T   170G  /mnt/zfs/westerns
  15. Maybe something is going on with my USB drive, but I just updated to RC4.1 and now my docker containers are missing, as are my VMs. harley-diagnostics-20230428-0710.zip
  16. Well, that makes sense. It's odd that it seems to have updated them from array to cache, though. Or maybe that's my imagination.
  17. It makes it so the average user can easily understand it. We get that you know how to do it with the old method; that's great.
  18. Maybe difficult to fully explain, but here is what I did:
      1. In one of my share folders (documents), I selected my ZFS pool as the primary storage pool.
      2. Hit apply.
      3. Used the "write settings to" function to try to copy the settings to the other applicable shares (games, movies, etc.).
      4. Checked the shares I had applied the settings to, and they were set to "cache" instead.
      I tried it multiple times, but it kept them as cache and not zfs. I just tried the "write settings to" function again and it kept the primary storage as zfs, so maybe it was just a fluke? Just a minor annoyance having to set the primary storage location on 20+ shares manually.
  19. Weird that it worked just fine until the upgrade. Is the array data stored somewhere else? That's the only thing that popped back up; however, the array was set to 26 devices or something like that, when I only have the one USB drive.
  20. So far I'm liking the new way of listing the primary and secondary pools. Also, with everything set to exclusive access, I have much less CPU usage. Excited to try some zfs transfers to see if it improves speed; I was getting around 550 MB/s transfers to my backup server.
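
      For reference, a zfs send/receive over SSH looks something like this (a sketch; the snapshot names, the backupserver hostname, and the backup/documents target dataset are placeholders, not this system's actual config):

          # Snapshot the dataset, then stream the snapshot to the backup server
          zfs snapshot zfs/documents@xfer1
          zfs send zfs/documents@xfer1 | ssh backupserver zfs receive -u backup/documents

          # Later sends can be incremental: only the delta between snapshots moves
          zfs snapshot zfs/documents@xfer2
          zfs send -i zfs/documents@xfer1 zfs/documents@xfer2 | ssh backupserver zfs receive backup/documents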
  21. Just upgraded from RC3 to RC4 and my 3 separate zfs pools are missing (cache, optane, and zfs). Thankfully I was able to just re-add them and the associated drives without losing anything, but it also means my share settings disappeared. Attaching my diagnostics if that's helpful. harley-diagnostics-20230427-1645.zip
  22. Sorry, this may be a difficult thing to explain. I have multiple zfs pools and a USB drive as my "array." After upgrading to RC4 I have to reset the settings for all of my shares. If I select one of my zfs pools (labeled "zfs"), apply the settings, and then click "write settings to" the other shares, it applies a different zfs pool ("cache") to the selected shares. I tried it a couple of times, but it does the same thing. I can attach pictures if that helps explain things.
  23. Just discovered another issue. If I set a share to my zfs pool and try to write that setup to other shares, it puts them on "cache" instead...
  24. Looks like just re-adding my drives to the pools causes them to show back up with all the datasets. Glad I didn't have to redo everything. I am re-adding permissions and settings to the shares, though.
  25. Bummer, after going from RC3 to RC4 my ZFS pools are missing.