hawihoney

Members
  • Content Count

    822
  • Joined

  • Last visited

Community Reputation

14 Good

About hawihoney

  • Rank
    Advanced Member

Converted

  • Gender
    Undisclosed


  1. What are the steps I need to take before moving from 6.8.3 to 6.9.0? I'm talking about passthrough of adapters (2x HBA for two VMs, 2x USB for two VMs, 1x GPU for Docker). Currently on 6.8.3 I have possibly redundant settings as follows:

     1.) /boot/syslinux/syslinux.cfg: xen-pciback.hide (hides both HBAs - is this one redundant?)

     [...]
     label unRAID OS
       menu default
       kernel /bzimage
       append xen-pciback.hide=(06:00.0)(81:00.0) initrd=/bzroot
     [...]

     2.) /boot/config/vfio-pci.cfg: BIND (hides both HBAs - or is this one redundant?)

     BIND=06:00.0 81:00.0

     3.) First VM: 1x HBA, 1x USB

     [...]
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
       </source>
       <alias name='hostdev0'/>
       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
     </hostdev>
     <hostdev mode='subsystem' type='usb' managed='no'>
       <source>
         <vendor id='0x0930'/>
         <product id='0x6544'/>
         <address bus='2' device='4'/>
       </source>
       <alias name='hostdev1'/>
       <address type='usb' bus='0' port='1'/>
     </hostdev>
     [...]

     4.) Second VM: 1x HBA, 1x USB

     [...]
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x81' slot='0x00' function='0x0'/>
       </source>
       <alias name='hostdev0'/>
       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
     </hostdev>
     <hostdev mode='subsystem' type='usb' managed='no'>
       <source>
         <vendor id='0x8564'/>
         <product id='0x1000'/>
         <address bus='2' device='10'/>
       </source>
       <alias name='hostdev1'/>
       <address type='usb' bus='0' port='1'/>
     </hostdev>
     [...]

     What do I need to remove/add before booting the new 6.9.0 system? Thanks in advance.
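     A minimal sketch for verifying after the 6.9 upgrade that the two HBAs really ended up bound to vfio-pci (the PCI addresses 06:00.0 and 81:00.0 are taken from the post above; adjust them if the hardware layout changes):

         # Show which kernel driver currently claims each HBA;
         # "Kernel driver in use: vfio-pci" means the device is reserved for passthrough.
         lspci -nnk -s 06:00.0
         lspci -nnk -s 81:00.0

         # List every PCI device currently bound to vfio-pci.
         ls /sys/bus/pci/drivers/vfio-pci/ | grep '0000:'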
  2. Some questions allowed?

     1.) The Plex folder on my cache contains "trillions" of directories and files. Those files change rapidly, and new files are added at high frequency. Does that mean the result would be trillions of hardlinks? Stupid question, I know, but I have never worked with rsync that way.

     2.) What about backups to remote locations? I feed backups to Unraid servers at two different remote locations (see below). Will this work and create the hardlinks at the remote location?

     rsync -avPX --delete-during --protect-args -e ssh "/mnt/diskx/something/" "user@###.###.###.###:/mnt/diskx/Backup/something/"

     Thanks in advance.
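     Assuming the hardlink-based backups discussed in this thread rely on rsync's --link-dest mechanism, a minimal sketch of how an incremental run creates hardlinks, locally and over ssh, could look like this (the dates and paths are placeholders):

         # Previous completed backup; files unchanged since then are stored as
         # hardlinks into this directory instead of full copies.
         rsync -avPX --delete-during \
             --link-dest=/mnt/diskx/Backup/something/2021-01-01 \
             /mnt/diskx/something/ /mnt/diskx/Backup/something/2021-01-02/

         # Same idea against a remote Unraid server; --link-dest is resolved on the
         # receiving side, so the hardlinks are created at the remote location.
         rsync -avPX --delete-during --protect-args -e ssh \
             --link-dest=/mnt/diskx/Backup/something/2021-01-01 \
             /mnt/diskx/something/ "user@###.###.###.###:/mnt/diskx/Backup/something/2021-01-02/"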
  3. Added it to User Scripts (Array Start). Will re-think once 6.9 goes stable. Thanks, man.
  4. Can you please elaborate a little? A quick Google search shows this as a BTRFS thing, while the 6.9 announcements read as if it were an incompatibility between certain SSDs (Samsung, etc.) and Unraid when different sector alignments are used - at least that's how I understand it.
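     Assuming the sector-alignment issue from the 6.9 notes is what is meant here (partition 1 starting at the legacy sector 64 versus the new 1MiB alignment at sector 2048), the current layout of a cache device can be checked from the console - a minimal sketch, the device name is an example:

         # Print the partition table of the cache SSD; the "Start" column shows
         # whether partition 1 begins at sector 64 (legacy layout) or 2048 (1MiB aligned).
         fdisk -l /dev/nvme0n1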
  5. Don't know. I just read the 6.9 beta notes. In one of the past beta releases a complete workflow was introduced to move data off these disks, reformat them and move the data back. Beta releases are no option here, so I'm looking for a way to do something similar on stable 6.8.3.

     ***EDIT*** Your values look as if you are using the cache disk as an actual cache: 500 TB read, 233 TB written looks reasonable. I use the cache pool as a Docker/VM store only. My values are 473 GB read, 22 TB written. That is definitely not reasonable.
  6. Can somebody please point me to documentation on how to avoid massive writes on NVMe M.2 cache SSDs running on Unraid 6.8.3? After reading the 6.9 beta notes I checked my cache disks and they are heavily affected. Two 1 TB disks in a BTRFS cache pool show 21 TB of writes in 4 months while holding around 250 GB. The models in use are "Samsung SSD 970 EVO Plus 1TB".

     - Available spare: 100%
     - Available spare threshold: 10%
     - Percentage used: 0%
     - Data units read: 924,128 [473 GB]
     - Data units written: 42,803,014 [21.9 TB]
     - Host read commands: 12,644,896
     - Host write commands: 505,220,723
     - Controller busy time: 324
     - Power cycles: 2
     - Power on hours: 190 (7d, 22h)

     Needless to say that I want to avoid looong outages. Many thanks in advance.
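     For monitoring those values from the console, a minimal sketch (the device name is an example; per the NVMe spec one data unit equals 1000 512-byte sectors, i.e. 512,000 bytes):

         # Dump the NVMe SMART/health attributes of a cache device.
         smartctl -A /dev/nvme0n1

         # Convert "Data Units Written" into terabytes written
         # (e.g. 42,803,014 units * 512,000 bytes is roughly 21.9 TB, matching the value above).
         UNITS=$(smartctl -A /dev/nvme0n1 | awk '/Data Units Written/ {gsub(",","",$4); print $4}')
         echo "$UNITS * 512000 / 1000^4" | bc -l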
  7. It was the long boot that made me nervous. I could SSH into the starting server, but the server killed the session after a minute or so. That was the point I took the screenshot with the PuTTY error; I was no longer able to SSH into the server. I started IPMI (never needed it for a looooong time) just to find out that there's no JRE on my new laptop. Argh, my fault. So I gave up and rushed downstairs, pulled the USB stick, and when I came up again, Unraid had started - without the /boot folder. So I rushed down again, pulled the stick, copied the old Unraid files back to the USB stick and booted. That came up fast and without a problem. In the meantime we chatted here. This morning I set all Dockers and VMs to not autostart; starting the fully loaded server needs even more time. Then I gave your files a second go. I pulled the USB stick from the server again, copied the new files to the stick and pushed it back into the server. This time I gave it moooore time before doing anything. And voila, after what seemed a very long time, the server came up. I double-checked the GPU UUID in your kernel helper GUI, and double-checked the device IDs for the 5 passed-through devices (2x HBAs, 1x GPU and 2x USB license sticks). Everything was identical. So I manually started all mounts, VMs and Dockers. Everything looks good now. No idea what's hanging during the boots, but for me it's fine. I don't start the server that often. With server-grade backplanes and HBAs you don't need to shut down the server for e.g. disk replacement in the JBODs; only disk replacement on the bare metal needs the array stopped. It's running mostly 24/365.
  8. I'm running your build now. As there is a parity sync running right now, I don't want to stress the server. NVENC and NVDEC seem to work - tested with my smartphone and a forced reduced bitrate. No errors/warnings in the syslog since the last boot.
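     In case it helps anyone verifying NVENC/NVDEC the same way: while a transcode is running, the transcoder process and the encoder/decoder utilisation can be watched on the host - a minimal sketch, assuming the bundled nvidia-smi tool is available on the console:

         # Show GPU utilisation and the processes currently using the card;
         # a hardware transcode typically shows up as a Plex transcoder process here.
         nvidia-smi

         # Continuously print per-GPU utilisation including encoder (enc) and
         # decoder (dec) columns.
         nvidia-smi dmon -s u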
  9. Did CHKDSK, stick is fine. This morning I did boot two times with your precompiled files and one time with my old environment (Unraid NVIDIA) with the same USB stick. It was a huge difference here. If you say it can't be, then it must be on my side. Can live with that.
  10. Switched from Unraid NVIDIA to your precompiled NVIDIA kernel 6.8.3 this morning - and nearly got a heart attack: booting the server took three times as long as before (LSIO Unraid NVIDIA). During the long boot I took some screenshots of error messages that scrolled past (see below). In a rush I took the Unraid USB stick out, copied the 8 old Unraid files over and put the USB stick back in. In the meantime Unraid came up - with an empty boot folder. That was when I realized how extremely long the boot process was. So I pulled the USB stick again, copied your 8 new files over, and restarted. This time I gave it a long time to boot before checking. It seems the system has come up now and is doing a parity check. Early in the morning, running up and down the stairs to the basement that many times, at my age. Phew. Please add a note about the long boot process. It may help people like me keep calm.
  11. Last question before I switch. In the Nvidia plugin description I found: Is the docker system modified too if using your prepared Nvidia build?
  12. I do have the GPU UUID in my Plex docker already. I took this UUID from the LSIO Nvidia plugin that I'm currently using. I asked my question because you mentioned somewhere in this thread that the GPU UUID might change when switching over to your approach. Just curious. Edit: What about device IDs like "IOMMU group 36:[1000:0097]"? Can they change too?
  13. What's the correct way to switch over from NVIDIA plugin to this one? Need support for Unraid 6.8.3 with latest NVIDIA drivers. - Do I need to uninstall the NVIDIA plugin first? - If yes, how to get the GPU UUID? - Are there additional differences between 6.8.3 with NVIDIA plugin and this one here? Any hints are highly appreciated.
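     For the GPU UUID question: once the NVIDIA driver from the build is loaded, the UUID can be read directly on the host, so it is easy to re-check after switching - a minimal sketch, run from the Unraid console:

         # List all NVIDIA GPUs with their UUIDs; the UUID printed here is the one
         # to pass to the Plex container (output format: GPU 0: <name> (UUID: GPU-...)).
         nvidia-smi -L

         # PCI vendor:device IDs shown in the IOMMU group listing are fixed per
         # card model and can be cross-checked with lspci.
         lspci -nn | grep -i nvidia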