tjb_altf4

Members · 1266 posts
Everything posted by tjb_altf4

  1. Remove the IP assignment for the VLAN in network settings.
  2. The vgpu_unlock is one thing... I understand devs don't want to poke the bear (Nvidia's legal department) by giving consumer hardware access to enterprise features, although there are legitimate use cases such as optimizing vGPU profiles. But vGPU support where it is fully supported and within Nvidia's licensing would be nice. I do have a Tesla P4 for this purpose, which I will be using with Proxmox on my new server setup, but I'd be happy to help with testing in Unraid should an official plugin be considered for development.
  3. Just to check the basics here: there should be at least one path mapped into the container that corresponds to the drive you have set up.
  4. You need the Advanced View toggled on (top right) when in the template view for your container. The setting will be visible in all path configs.
  5. Unraid GUI > Tools > New Permissions is a one-off tool to clean up permissions; it also shows the reference permissions/ownership that are set. The New Permissions tool changes file and directory ownership to nobody/users (i.e., uid/gid to 99/100), and sets permissions as follows: for directories, drwxrwxrwx; for read/write files, -rw-rw-rw-; for read-only files, -r--r--r--.
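For reference, that permission scheme can be reproduced by hand; a minimal sketch on a scratch directory (the /tmp path is just an example, and the chown needs root, so it is allowed to fail here):

```shell
# Recreate the New Permissions scheme on a scratch directory
# (example path; the real tool walks your array shares).
mkdir -p /tmp/perms_demo/sub
touch /tmp/perms_demo/sub/file.txt

# Directories: drwxrwxrwx
find /tmp/perms_demo -type d -exec chmod 777 {} +
# Read/write files: -rw-rw-rw-
find /tmp/perms_demo -type f -exec chmod 666 {} +
# Ownership nobody:users (uid 99 / gid 100) -- needs root, so outside
# Unraid this step may be skipped without affecting the mode changes.
chown -R 99:100 /tmp/perms_demo 2>/dev/null || true

stat -c '%a' /tmp/perms_demo/sub          # 777
stat -c '%a' /tmp/perms_demo/sub/file.txt # 666
```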
  6. I just reread your first post; I think you talked about two separate things. You stated the disk is mounted here: "/mnt/disks/212307B02", which reads like it has been mounted using UD. In that case, if you want to use that option, the docker path being passed needs the slave option to be used. EDIT: Hoopster just beat me lol
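For context, this is roughly what the slave access mode translates to on the docker command line; a sketch, where the image name and the /data container path are placeholders I've made up for illustration:

```shell
# Bind-mount the UD-mounted disk with slave propagation, so host-side
# (re)mounts under /mnt/disks propagate into the running container.
# "myimage" and "/data" are placeholders; the source path is from the post.
docker run -d \
  -v /mnt/disks/212307B02:/data:slave \
  myimage
```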
  7. You can create a cache pool that is standalone using the "Enable user share assignment: No" option, on the Shares page, these show up under the Disk Shares section instead of being part of a User Shares. An alternative solution is before we had multiple pools, we used Unassigned Devices to do the same thing, although I personally prefer using UD for transient devices rather than permanent ones.
  8. You need to stop the Docker service (not just the containers) to change that setting.
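The usual way is Settings > Docker > Enable Docker: No, but the same can be done from a terminal with Unraid's service script (Unraid-specific path; a sketch, not a substitute for the GUI setting):

```shell
# Stop the Docker service itself, not just the containers
/etc/rc.d/rc.docker stop
# ...change the setting, then bring the service back up
/etc/rc.d/rc.docker start
```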
  9. Upgraded my secondary server from 6.11.3, smooth sailing so far.
  10. Once a container is on its own IP you can't map ports, and you don't need to; it just uses the ports the application uses natively.
  11. TPM support was added in Unraid 6.10 to allow Win11 VM support; you just need to choose the BIOS with TPM support (OVMF TPM) for the VM. Also note the trial version of Unraid is fully unlocked but time limited, so you can test your hardware and the Unraid OS out before committing to a license.
  12. Upgraded my secondary server from 6.11.1, all smooth sailing
  13. The main workaround for a Chia copy workload is to copy to one drive at a time; if you use "most free" and have multiple drives with the same free space, it will round-robin between them. To minimize issues, I generally set min free space to 2x plot size, and copy in the last 1-2 plots manually.
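The 2x rule above works out like this (assuming a k=32 plot of roughly 109GB; substitute your own plot size):

```shell
# Minimum free space for the share, per the 2x-plot-size rule of thumb.
# 109GB is an assumed k=32 plot size, not a figure from the post.
PLOT_GB=109
MIN_FREE_GB=$((PLOT_GB * 2))
echo "min free space: ${MIN_FREE_GB}GB"   # min free space: 218GB
```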
  14. Noticed an issue where, if an upload gets an out-of-space error, the upload operation simply fails silently and does not notify the user.
  15. While running 6.11.1 I noticed this change has been implemented. Thank you!
  16. I've been through the full process on one of the disks, followed by balancing and was able to fill that disk all the way up with about 150MB to spare! Thanks @JorgeB
  17. OK deleted just over 100GB (1 file) on each and gave a balance up to 90, output looks like this now:

      root@jaskier:~# btrfs fi usage -T /mnt/disk13
      Overall:
          Device size:          16.37TiB
          Device allocated:     16.17TiB
          Device unallocated:  203.02GiB
          Device missing:          0.00B
          Used:                 15.97TiB
          Free (estimated):    407.27GiB  (min: 305.77GiB)
          Free (statfs, df):   407.27GiB
          Data ratio:               1.00
          Metadata ratio:           2.00
          Global reserve:      512.00MiB  (used: 0.00B)
          Multiple profiles:          no

                      Data      Metadata  System
      Id Path         single    DUP       DUP       Unallocated
      -- ---------  ---------  --------  --------  -----------
       1 /dev/md13   16.13TiB  38.00GiB  12.00MiB    203.02GiB
      -- ---------  ---------  --------  --------  -----------
         Total       16.13TiB  19.00GiB   6.00MiB    203.02GiB
         Used        15.94TiB  18.23GiB   1.73MiB

      root@jaskier:~# btrfs fi usage -T /mnt/disk14
      Overall:
          Device size:          16.37TiB
          Device allocated:     16.27TiB
          Device unallocated:  103.01GiB
          Device missing:          0.00B
          Used:                 16.07TiB
          Free (estimated):    305.57GiB  (min: 254.07GiB)
          Free (statfs, df):   305.57GiB
          Data ratio:               1.00
          Metadata ratio:           2.00
          Global reserve:      512.00MiB  (used: 0.00B)
          Multiple profiles:          no

                      Data      Metadata  System
      Id Path         single    DUP       DUP       Unallocated
      -- ---------  ---------  --------  --------  -----------
       1 /dev/md14   16.23TiB  38.00GiB  16.00MiB    103.01GiB
      -- ---------  ---------  --------  --------  -----------
         Total       16.23TiB  19.00GiB   8.00MiB    103.01GiB
         Used        16.04TiB  18.37GiB   1.80MiB
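For anyone following along, the "balance up to 90" step corresponds to something like this (disk path from the post; run it only against a mounted btrfs array disk, and expect it to take a while on a nearly full 16TB drive):

```shell
# Repack data block groups that are up to 90% used, returning
# allocated-but-unused chunks to the unallocated pool.
btrfs balance start -dusage=90 /mnt/disk13
# Then re-check allocated vs. unallocated space
btrfs filesystem usage -T /mnt/disk13
```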
  18. Ah ok, I see, I'll make some additional space... here is the current output:

      root@jaskier:~# btrfs fi usage -T /mnt/disk13
      Overall:
          Device size:          16.37TiB
          Device allocated:     16.32TiB
          Device unallocated:   51.02GiB
          Device missing:          0.00B
          Used:                 16.07TiB
          Free (estimated):    305.82GiB  (min: 280.31GiB)
          Free (statfs, df):   305.82GiB
          Data ratio:               1.00
          Metadata ratio:           2.00
          Global reserve:      512.00MiB  (used: 0.00B)
          Multiple profiles:          no

                      Data      Metadata  System
      Id Path         single    DUP       DUP       Unallocated
      -- ---------  ---------  --------  --------  -----------
       1 /dev/md13   16.28TiB  38.00GiB  12.00MiB     51.02GiB
      -- ---------  ---------  --------  --------  -----------
         Total       16.28TiB  19.00GiB   6.00MiB     51.02GiB
         Used        16.04TiB  18.36GiB   1.72MiB

      root@jaskier:~# btrfs fi usage -T /mnt/disk14
      Overall:
          Device size:          16.37TiB
          Device allocated:     16.37TiB
          Device unallocated:    1.01MiB
          Device missing:          0.00B
          Used:                 16.17TiB
          Free (estimated):    204.13GiB  (min: 204.13GiB)
          Free (statfs, df):   204.13GiB
          Data ratio:               1.00
          Metadata ratio:           2.00
          Global reserve:      512.00MiB  (used: 0.00B)
          Multiple profiles:          no

                      Data      Metadata  System
      Id Path         single    DUP       DUP       Unallocated
      -- ---------  ---------  --------  --------  -----------
       1 /dev/md14   16.33TiB  38.00GiB  16.00MiB      1.01MiB
      -- ---------  ---------  --------  --------  -----------
         Total       16.33TiB  19.00GiB   8.00MiB      1.01MiB
         Used        16.13TiB  18.48GiB   1.80MiB
  19. If you could post some details it would be greatly appreciated! One HDD already has 328GB free, the other 219GB, but I can delete a little more if you think it's needed.
  20. Currently trying to chase down an issue with a couple of btrfs-formatted drives. Most of my drives were formatted in 6.8.3, maybe the early 6.9 series, and I have been able to utilize them all the way down to the last 100MB in some cases; however, I have a couple of drives where I can't utilize the last ~200GB of space. I've done the typical tricks of high balance values and scrubs, which has worked for all other drives, but for these two drives, formatted in later Unraid versions, there seems to be some other constraint. Now, is there some format option that changed, or some free-space guarding in the kernel that I can bypass? Data is easily replaceable and is WORM, so the free space buffer is not needed for future use. Any ideas?

      Offending drives, btrfs filesystem df:

      Data, single: total=16.28TiB, used=16.04TiB
      System, DUP: total=6.00MiB, used=1.72MiB
      Metadata, DUP: total=19.00GiB, used=18.36GiB
      GlobalReserve, single: total=512.00MiB, used=0.00B

      Data, single: total=16.33TiB, used=16.13TiB
      System, DUP: total=8.00MiB, used=1.80MiB
      Metadata, DUP: total=19.00GiB, used=18.48GiB
      GlobalReserve, single: total=512.00MiB, used=0.00B

      versus an example good-utilization drive:

      Data, single: total=16.33TiB, used=16.33TiB
      System, DUP: total=8.00MiB, used=1.84MiB
      Metadata, DUP: total=21.00GiB, used=19.60GiB
      GlobalReserve, single: total=512.00MiB, used=0.00B