Report Comments posted by trott

  1. Actually, even for appdata on the cache, there are two setups to test:

    1. map directly to /mnt/cache/xxx

    2. map to the user share /mnt/user/xxx with "Use cache disk" set to Only

     

    By testing these two setups, we might be able to isolate whether the issue is FUSE-related or not (a sketch of the two mappings follows).
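     

    A minimal sketch of the two container mappings (the container name, image, and /config path are hypothetical placeholders; adjust to your own appdata layout):

    # Setup 1: bind the cache disk directly, bypassing the /mnt/user FUSE (shfs) layer
    docker run -d --name=myapp -v /mnt/cache/appdata/myapp:/config myimage

    # Setup 2: bind the user share, which routes I/O through the FUSE layer
    docker run -d --name=myapp -v /mnt/user/appdata/myapp:/config myimage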

  2. Again, are you guys 100% sure this only impacts the SQLite DB? I don't have a cache now, so I download directly to the array using qBittorrent. Recently I found MakeMKV failed to remux some movies; I thought it might be an issue with the movies themselves, but I ran a force recheck on those torrents today, and it turned out they were not 100% complete.

     

    I have no proof that it is an Unraid issue, but it is not one torrent, it is several, and I have never had this issue before. I'm not happy about this because I don't know whether any other files were also corrupted during the move to Unraid; without checksums I have no way to check (see the sketch below).

     

    Frankly speaking, I think Unraid should pull back 6.7.2 until they fix this issue.
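     

    A minimal sketch of a checksum round-trip, assuming hypothetical paths (/mnt/old_disk as the source, /mnt/user/Movies as the destination share):

    # Record checksums on the source, with paths relative to the share root
    cd /mnt/old_disk && find . -type f -exec sha256sum {} + > /tmp/checksums.txt
    # After the move, verify every file landed intact on the array
    cd /mnt/user/Movies && sha256sum -c /tmp/checksums.txt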

  3. I want to report that enabling NCQ helps a lot with this issue: my MakeMKV got 35 MB/s, compared with 2.3 MB/s before in the same situation.

     

    But Unraid seems to have a bug enabling NCQ: even if you set the tunable (enable NCQ) to Yes in Disk Settings, the queue_depth is still 1. I had to manually set queue_depth in the CLI for each disk, like below:

     

    # Raise the SATA NCQ queue depth for /dev/sdf from 1 to 31
    echo 31 > /sys/block/sdf/device/queue_depth
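     

    A sketch for applying the same fix to every array disk at once (the sd[b-g] device range is hypothetical; match it to your own disks):

    for d in /sys/block/sd[b-g]; do
        echo 31 > "$d/device/queue_depth"
    done
    # Confirm the new depth took effect on each device
    grep . /sys/block/sd[b-g]/device/queue_depth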

  4. 3 hours ago, Marshalleq said:

    So I'm now back on 6.7.2 and already I'm seeing issues again. Specifically, while playing something on Plex, the mover process created a repeating image freeze/resume scenario on the client. I therefore had the opportunity to look at top and saw the wa (I/O wait) reach approximately 0.20 vs the idle wait of 0.03. While this may be indicative of the issue below, I'm looking deeper into it, as it's obviously not unusual for moving data from SSD to HDD to create a high I/O wait. Perhaps someone can check it on 6.6.7 for me; my recollection is this only got to about 0.10 on that.

     

    The patch mentioned above was put into the mainline kernel at 4.19.1, so I've upgraded to the Unraid beta, which has kernel 4.19.60. I assume that is later and therefore a good way to test whether this resolves the issue. Will keep you posted.

    6.7.2 already uses kernel version 4.19.56; if the fix landed in 4.19.1, it is already present in 4.19.56, so I don't think 4.19.60 will help (you can verify with the command below).
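     

    A quick way to confirm which kernel a given build is actually running (the versions are the ones mentioned above):

    # 6.7.2 should report 4.19.56; the beta should report 4.19.60
    uname -r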

  5. I'm now in the process of moving data to Unraid from old HDDs to do some testing, and I noticed the same issue: when there is write activity going on, the read speed is extremely slow (3-4 MB/s, sometimes down to KB/s), no matter which disk the data is read from; the result is always the same.

     

    When there is no write activity, the read speed returns to normal. I did 3 concurrent reads from 3 disk shares, and each read could reach 150-200 MB/s (a sketch of the test is below).
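     

    A minimal sketch of that concurrent-read test, assuming hypothetical test files on three data disks:

    # Read three large files in parallel and watch per-stream throughput
    dd if=/mnt/disk1/test.bin of=/dev/null bs=1M status=progress &
    dd if=/mnt/disk2/test.bin of=/dev/null bs=1M status=progress &
    dd if=/mnt/disk3/test.bin of=/dev/null bs=1M status=progress &
    wait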