Leaderboard

Popular Content

Showing content with the highest reputation on 06/22/20 in Report Comments

  1. One use case I'll be using it for right off the bat is a separate cache for Docker and appdata formatted as XFS, to prevent the 10x-100x inflated writes that happen with a BTRFS cache. It is also a way of adding more than 30 drives if someone needs that. A second cache pool could be used as a more "classic" NAS with RAID, and apparently possible ZFS support in the future, really pushing into FreeNAS territory there. Or simply set up cache pools based on usage and speed needs: for example, a scratch pool that doesn't need redundancy, using RAID 0 on less trustworthy drives; another high-speed pool on NVMe drives for working projects; then a high-stability pool for normal array write caching, using RAID 1 on very reliable drives with a very low chance of failure. Those are just the first things that came to mind. If they build it, people will find uses for it, that's for sure. For example, it makes a tiered storage system fairly easy to implement in the future, and that is a use case I would use for sure: tiered storage automatically moves recently or frequently used data to faster pools and older, less used data to slower tiers (a rough sketch of that idea follows this list).
    2 points
  2. https://forums.unraid.net/bug-reports/prereleases/unraid-os-version-690-beta22-available-r955/?do=findComment&comment=9350
    1 point
  3. I agree; I didn't want to hack things apart too much trying to fix this, so that any official fix would still work properly. At this point the remount option is your best bet: it can be done with the array running, it doesn't change anything about the base Unraid workings, and it reverts to stock after a reboot, so there is no risk in giving it a try. You will just need to run it again after every reboot. If you only have a single cache drive, then reformatting as XFS is the only true fix; it dropped my writes from 7 GB/hour to less than 200 MB/hour (a quick way to measure your own write rate follows this list). For those of us with a cache pool there is no real fix yet, so we have to get creative, like me adding an SSD to the array formatted as XFS, which only works because I don't have a parity drive at the moment.
    1 point
  4. Really hoping the 5.8 kernel makes it into a release not too far down the line. As I understand it, the GPU reset patches submitted for 5.8, combined with the ongoing amdgpu driver work, should hopefully fix the Navi reset issue without a custom kernel being required.
    1 point
  5. Same for me, on an X570 Aorus Master. It's a pity I can't use the onboard sound as passthrough, and I'm really not comfortable running a custom kernel. I was hoping Unraid 6.9 would be the version that could unleash the power of my new Ryzen system without the need for custom kernels.
    1 point
  6. I appreciate you looking into it so quickly (really, wow). It looks like the first Reddit post I linked was updated just today to reflect those upstream changes to 5.8. For now I work around it by passing a USB headset through to the VM for sound (a command-line sketch of that follows this list), because I didn't feel comfortable running a custom kernel, even though it looks like many have done so with success. It isn't ideal, but it works. I'm sure a lot of people would find these patches useful given the popularity of the new Ryzen processors and X570 motherboards. Thank you for your time.
    1 point
  7. Also, any users who created what should be a redundant pool on v6.7.0 should convert metadata to raid1 now, since even after this bug is fixed any existing pools will remain as they were. Use:

         btrfs balance start -mconvert=raid1 /mnt/cache

     To check that the correct profile type is in use:

         btrfs fi usage -T /mnt/cache

     Example of a v6.7-created pool. Note that while data is raid1, metadata and system are single profile, i.e. part of the metadata sits on each device and will be incomplete if one of them fails; all chunk types need to be raid1 for the pool to be redundant:

                        Data       Metadata   System
           Id Path      RAID1      single     single    Unallocated
           -- --------- ---------  ---------  --------  -----------
            2 /dev/sdg1 166.00GiB    1.00GiB         -    764.51GiB
            1 /dev/sdi1 166.00GiB    1.01GiB   4.00MiB    764.50GiB
           -- --------- ---------  ---------  --------  -----------
              Total     166.00GiB    2.01GiB   4.00MiB      1.49TiB
              Used      148.08GiB  555.02MiB  48.00KiB
    1 point
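
On the tiered-storage idea in item 1: below is a minimal sketch of what a crude tier mover could look like once multiple pools exist, assuming hypothetical pool mount points /mnt/nvme_pool and /mnt/archive_pool and that access times (atime) are actually recorded on the fast pool. It only illustrates the concept; it is not anything Unraid ships.

    #!/bin/bash
    # Crude tiering sketch: demote files not accessed in the last 30 days from a
    # fast pool to a slow pool. Both mount points are hypothetical examples.
    FAST=/mnt/nvme_pool
    SLOW=/mnt/archive_pool

    cd "$FAST" || exit 1
    find . -type f -atime +30 -print0 |
      while IFS= read -r -d '' f; do
        mkdir -p "$SLOW/$(dirname "$f")"   # keep the relative directory layout
        mv -n "$f" "$SLOW/$f"              # -n: never overwrite an existing file
      done

A real setup would also need a promotion pass in the other direction and some care that share paths stay stable, but the demote loop is the core of the idea.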
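
On the write rates quoted in item 3 (7 GB/hour down to under 200 MB/hour): one way to measure the write rate to your own cache device is to sample the sectors-written counter in /proc/diskstats over an interval. The device name below is a placeholder; check yours with lsblk.

    #!/bin/bash
    # Sample how much is written to one device over an hour via /proc/diskstats.
    # In that file, field 3 is the device name and field 10 is sectors written
    # (512-byte sectors).
    DEV=sdb   # placeholder: substitute your cache SSD
    before=$(awk -v d="$DEV" '$3 == d {print $10}' /proc/diskstats)
    sleep 3600
    after=$(awk -v d="$DEV" '$3 == d {print $10}' /proc/diskstats)
    echo "$(( (after - before) * 512 / 1024 / 1024 )) MiB written in the last hour"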
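
On the USB-headset workaround in item 6: Unraid's VM template normally exposes attached USB devices as checkboxes, but roughly the same thing can be done from the command line through libvirt, as sketched below. The VM name and the vendor/product IDs are placeholders; take the real IDs from the lsusb output.

    # Find the headset's vendor:product ID (the IDs used below are placeholders)
    lsusb

    # Describe the USB device for libvirt and hot-attach it to the running VM
    cat > /tmp/headset.xml <<'EOF'
    <hostdev mode='subsystem' type='usb' managed='yes'>
      <source>
        <vendor id='0x046d'/>
        <product id='0x0a8f'/>
      </source>
    </hostdev>
    EOF
    virsh attach-device "Windows 10" /tmp/headset.xml --live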