
Posts posted by mpiva

  1. 19 minutes ago, Reynald said:

    Mover tuning plugin will not start by itself, yet.

     


    I'm interested to know how you achieved this. 

    Assuming you won't put more than 5-10% per hour on the cache, you can set an hourly mover schedule and set the threshold to 90~95 in mover tuning.
     

    I think that was the setting I had before, and that's why I rationalized it as auto-triggered.

    I was just editing my post when you responded.  

     

    Is the following possible?

     

    Probably what I want can be solved with multiple schedules:

     

    One at 4:00 AM with a 50% threshold (the real schedule),

    and another every hour with 95% (a pseudo auto-trigger).

     

    The rationale is that if you set 95% in the nightly schedule and the cache is only 80% full at night, the mover won't trigger; but if you then fill the remaining 20% the next day, you'll run out of space in the middle of the day, forcing the mover to run during the day, with the accompanying server degradation at that hour.

     

    And with 95% every hour, it will trigger whenever the cache reaches that threshold, independent of the nightly run, which is when any server degradation is preferable.

     

    With both schedules, you start every day with roughly 50% (45%) of the cache free to fill without worrying about the mover; the mover normally runs only at night, so there's no degradation during the day, and if you do nearly fill the cache, the hourly schedule acts as a fallback in case you pushed too much onto it (a rough sketch of that hourly fallback check is below).
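    To make the "pseudo auto-trigger" idea concrete, the hourly fallback is basically a threshold check like the sketch below. This is only a rough illustration: /mnt/cache and /usr/local/sbin/mover are the usual Unraid locations but should be verified on your box, and the real Mover Tuning plugin evaluates its threshold in its own way.

    #!/usr/bin/env python3
    """Rough sketch of the hourly 'pseudo auto-trigger': only kick the mover
    when cache usage is above a high-water mark."""
    import shutil
    import subprocess

    THRESHOLD = 95  # percent used before the fallback run fires

    usage = shutil.disk_usage("/mnt/cache")          # assumed cache mount point
    percent_used = usage.used / usage.total * 100

    if percent_used >= THRESHOLD:
        # Cache is nearly full: move now instead of waiting for the 4:00 AM run.
        # Path and "start" argument are assumptions; check how your Unraid invokes the mover.
        subprocess.run(["/usr/local/sbin/mover", "start"], check=False)
    else:
        print(f"cache at {percent_used:.1f}%, below {THRESHOLD}%; leaving it for the nightly run")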

     

     

  2. 1 hour ago, Kilrah said:

    There is never an automatic trigger, it's always schedule-based.

    Use min free space on the share to have things go to the array when it's too full.

     

    IDK, every time the cache is almost full it triggers automatically in my setup; it doesn't wait until the scheduled time. My question was about the percentage: whether it applies to the schedule, to the automatic trigger, or both.

     

    Sorry, I misunderstood; gotcha.

     

    How does min free space on the share work with the cache? I thought that parameter was related to the storage in the array.

     

    Probably what I want can be solved with multiple schedules.

     

    One at 4:00 AM with 50%

    and another every hour with 95%

     

    Again, the rationale is that if you set 95% in the nightly schedule and the cache is only 80% full at night, it won't trigger; but if you then fill the remaining 20% the next day, you'll run out of space in the middle of the day, forcing the mover to run during the day, with the accompanying server degradation at that hour.
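    For what it's worth, my current understanding of how the share's "minimum free space" setting interacts with the cache is roughly the toy model below: new writes land on the cache only while its free space stays above the minimum, otherwise they overflow straight to the array without involving the mover. This is just how I understand it, not Unraid's actual code.

    GiB = 1024 ** 3

    def choose_target(cache_free_bytes, min_free_bytes):
        """Toy model of the 'minimum free space' behaviour as I understand it."""
        return "cache" if cache_free_bytes > min_free_bytes else "array"

    print(choose_target(cache_free_bytes=120 * GiB, min_free_bytes=50 * GiB))  # -> cache
    print(choose_target(cache_free_bytes=30 * GiB,  min_free_bytes=50 * GiB))  # -> array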

     


  3. First, @Reynald thanks :)

     

    Question:

    What are the settings for the following scenario:

    I want the Mover to trigger automatically when the SSD(s) become almost full, e.g. at 95%.

    But at the same time, I want it scheduled every day at 4:00 AM, to move if usage is above a 30% threshold.

     

    The rationale is the following: when moving a large amount of data to the array, trigger automatically at 95%.

    But as a maintenance schedule, move the non-"prefer" cache files to the array at night (a small sketch of the combined decision is below).
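    Put as a toy decision table, the combination I'm after looks something like this. It's only an illustration of the scenario, not how the Mover Tuning plugin actually evaluates its settings:

    def should_move(hour, cache_used_pct):
        """Toy decision table for the scenario above (not the plugin's logic)."""
        if cache_used_pct >= 95:
            return True            # almost full: move now, whatever the time
        if hour == 4 and cache_used_pct > 30:
            return True            # 4:00 AM maintenance window
        return False

    print(should_move(hour=14, cache_used_pct=80))   # False: wait for the night
    print(should_move(hour=14, cache_used_pct=96))   # True: emergency run
    print(should_move(hour=4,  cache_used_pct=80))   # True: nightly maintenance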

     

     

    I partially fixed the issue by following Mathias Hueber's "Performance optimizations for gaming on virtual machines" (mathiashueber.com).

     

    On his site, he shows the topology of a Ryzen processor:

     

    [Image: Ryzen processor core/CCX topology diagram]

     

    If you look at his configuration, it uses the second CCX for the guest:

    [Image: his CPU pinning configuration, using the second CCX for the guest]

     

    The strange thing is that the ordering is the same in the web UI, which means cpuset = CPU number in the UI. With the above, all the HTs light up.

     

    But notice that the CPU pairing there is CPU 0 with CPU 8, and CPU 1 with CPU 9,

    while in the graphic above the pairing is [CPU 0, CPU 1], [CPU 2, CPU 3].

     

    [Image: CPU pairing as shown in the Unraid web UI]

     

    So my question is, which one of these is wrong?

     

    It seems the UI is showing the wrong pairing. I used the UI to create the cpuset, and it seems that created the bottleneck.

     

    I don't know if this is also replicated in CPU Pinning, but since the UI is the same, I guess so (the sketch below is how I'd check the real pairing from the host).
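    Rather than guessing from the UI, the kernel can tell you directly which logical CPUs are hyperthread siblings. This small sketch only reads standard Linux sysfs files, nothing Unraid-specific; run it from a terminal on the host:

    #!/usr/bin/env python3
    """Print the hyperthread sibling pairs reported by the kernel, e.g. '0,8' or '0-1'."""
    import glob

    pairs = set()
    for path in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/topology/thread_siblings_list"):
        with open(path) as f:
            pairs.add(f.read().strip())   # each file lists the siblings of that CPU

    for pair in sorted(pairs):
        print(pair)

    Whatever this prints is the pairing the cpuset/vcpupin pairs should follow; if it shows "0,8", "1,9", and so on, then the web UI's pairing matches the hardware.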

     

     

    This has been happening for a long time, since Windows 10; I've updated virtIO several times. Currently running version 100.90.104.21700 (2/23/2022).

     

    The strange thing is, it's not related to the SSD cache. It's simply that whenever the mover is running, the VM's virtio network drops to 4 MB/s, both in and out of Unraid.

     

    For example, if I mount an external share from another computer in the Unraid GUI and start copying to Unraid while the mover is running, I get 110 MB/s, probably maxing out the network. So there is no bottleneck on the SSD cache (2x SATA SSD in RAID 0).

     

    I tweaked iothreadpin on the VM with no improvement. I'm kind of baffled about where the bottleneck is. This is on a Ryzen 1700, and CPU usage while the mover is running is negligible.

     

    Of course I tested it without any other load, and the issue remains.

     

    Any clues as to where the bottleneck is?
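    For anyone who wants to see the collapse happen, this is the kind of quick check I'd run on the Unraid host while the mover is active: sample the interface counters once per second and watch the rate. It only reads /proc/net/dev, which any Linux kernel exposes; "br0" is an assumption for the VM's bridge, substitute whatever `ip link` shows the VM actually uses.

    #!/usr/bin/env python3
    """Print per-second RX/TX throughput of one interface, read from /proc/net/dev."""
    import time

    IFACE = "br0"  # hypothetical name; replace with the VM's actual bridge/interface

    def read_bytes(iface):
        """Return (rx_bytes, tx_bytes) counters for iface."""
        with open("/proc/net/dev") as f:
            for line in f:
                if line.strip().startswith(iface + ":"):
                    fields = line.split(":", 1)[1].split()
                    return int(fields[0]), int(fields[8])
        raise ValueError(f"interface {iface} not found")

    prev_rx, prev_tx = read_bytes(IFACE)
    while True:
        time.sleep(1)
        rx, tx = read_bytes(IFACE)
        print(f"rx {(rx - prev_rx) / 1e6:8.2f} MB/s   tx {(tx - prev_tx) / 1e6:8.2f} MB/s")
        prev_rx, prev_tx = rx, tx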

     

     

  6. Some background:

     

    I live in a country where HDDs are at least twice as expensive as in the USA, and with the Chia craze, prices are not coming down yet, at least here.

     

    That said.

     

    I have a dual-parity Unraid. About 3 months ago, one disk started to grow bad sectors: 250 at the time of writing this.

     

    Yesterday, something unexpected happened: two disks went bad and got desynced. SMART is OK on both. The strange thing is, reconstructing them ends after 10 minutes with 1024 errors on both, at the same time. Reading them directly seems to be fine, so I'm suspecting a power supply issue, since they're on different backplanes.

     

    That said again...

     

    Now I have two desynced disks that might or might not be reusable, and one disk with 250 bad sectors. I know a little about how reconstruction works, using XOR or Reed-Solomon, and all 3 disks are data disks. My questions are the following:

     

    At the sector positions where those 250 error sectors exist on the first disk, can the data be recovered somehow?

    And for the same sector positions on the desynced drives, is that data in danger?

     

    If the answer to the above questions is "you're fine", then I have no issues. If the answer is "you might lose that data", then I'm looking for the following:

     

    1) Is there a tool that can point out which files are in danger based on the error sectors, and can it also be extended to the same set of sectors on the other disks?

     

    2) Can you mount Unraid in read-only mode, mark those almost-broken disks as temporarily synced, and mount them as part of the array?

     

    That way they serve their limited purpose: those 250 sectors can be recovered, files can be copied to other media, and the disks can be replaced with no risk of losing data.

     

    This method could also serve to recover files when disks have errors in different places but are not dead.
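    To make the parity part of the question concrete, here is a toy sketch of how single-parity (XOR) reconstruction works; dual parity adds a second, Reed-Solomon style syndrome on top, which is what allows two simultaneous failures to be rebuilt. This is purely illustrative and not Unraid's actual implementation:

    #!/usr/bin/env python3
    """Toy XOR parity example: rebuild one 'missing' data disk from the others plus parity."""

    def xor_blocks(blocks):
        """Byte-wise XOR of equal-length byte strings."""
        out = bytearray(len(blocks[0]))
        for block in blocks:
            for i, byte in enumerate(block):
                out[i] ^= byte
        return bytes(out)

    # Three data "disks" holding one stripe (same byte positions across disks).
    disk1 = b"\x11\x22\x33\x44"
    disk2 = b"\xaa\xbb\xcc\xdd"
    disk3 = b"\x01\x02\x03\x04"

    # Parity is the XOR of all data disks at the same positions.
    parity = xor_blocks([disk1, disk2, disk3])

    # Pretend disk2 failed: XOR of the surviving disks plus parity gives it back.
    rebuilt = xor_blocks([disk1, disk3, parity])
    assert rebuilt == disk2
    print("rebuilt disk2:", rebuilt.hex())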

