Everything posted by Vetl

  1. I have searched this topic with a few keywords but was unable to see whether it has been answered before. Is there a way to stop the daily scheduled tasks from spinning up disks in the array to read movie file data?
  2. If you look in the Radarr settings under Tasks, there is a list of scheduled tasks; one of them refreshes the files on a daily basis, and that is the one that triggers reads on the files (see the task-list check after this list of posts). There are ways to update the DB values manually, but I would rather not and find something more efficient. Post your issue in that support thread, let's see what they say.
  3. Same here; half the drives in my 28-disk array spin up and movie files are accessed (meaning by Radarr). Seems like the daily task kicks in. @karter74 what images do you use, binhex? I tried posting this question in their support thread but no reply yet.
  4. How is this image different from the original one? I'm particularly interested in the scheduled tasks, whether they can be changed or were already changed in this image. The issue I'm having with the original one is that the daily tasks spin up my array, which is not good at all; does this image do the same thing?
  5. Ha, you were right, I just tested with a dummy file and it did work.
  6. You mean this one, right? "There is a quirk of the interaction between Linux and the Unraid user share system that users can encounter if they are working at the disk share level. The Linux command for 'move' is implemented by first trying a rename on the file (which is faster) and only if that fails does it do a copy-and-delete operation. This can mean when you try to move files from one user share to another from the command line, Linux will often simply rename the files so they have a different path on the same disk, in violation of any user share settings such as included disks. The workaround for this is to instead explicitly copy from source to destination so that new files get created following the user share settings, then deleting from the source." (That is the copy-then-delete approach sketched after this list.)
  7. Hello everyone, I just got puzzled by the behaviour of Krusader. I might be missing something about how I was moving files, but my brain freezes and fails to explain the results, which you might be able to. I moved files from a device pool allocated only for torrents to the NAS array, which is the actual array set up as Cache->Array. However, the files remained on the torrent device pool and were just moved into a NAS_Array folder on that device. These are the "move from" and "move to" locations. With my NAS_Array setup I would expect the files to be moved onto the cache device before the scheduler moves them to the array? Krusader, and my Windows PC when I navigate to the share, show those files as being in the NAS_Array location, but if I open the torrent device pool in a browser, that is where the files actually are. My shares are: / Pool devices: / NAS_Array configs:
  8. Hmmm, I tried all the x8 PCIe 3.0 ports with the same result; if I use the x16 slot it downgrades to Width x1. Could it be that I'm not using the bandwidth and it scaled down, i.e. when I start hitting the limit it will scale back up? Otherwise I guess I will have to add another card when I start hitting the limit. Will adding another card impact the array?
  9. Pretty soon I'll be at 10 disks in the array pushing about 120 MB/s, and I'm only about 50% through my data migration; this is a 46-bay case from microfocus. I have tried putting the CPU in performance mode and changing the IIO PCIe settings from Auto to x8 on those ports, but I'm still getting "Width x2 (downgraded)". Any way to push it to x8?
  10. Someone plugged the card into the only PCIe 2.0 slot on the MB, lol. Moved it to the PCIe 3.0 x8 slot, and now I'm getting the following: LnkSta: Speed 8GT/s, Width x2 (downgraded). The "Width x2 (downgraded)" part doesn't look good; should I be concerned? If yes, what would be the next steps? Reconstruct writing seems to be back to 120-130 MB/s, which is the max for those HDDs. How many HDDs can that controller handle? (See the lspci check after this list.) lspci_3.txt
  11. Well, the speed tests show pretty much similar reads on all drives; for the second part I don't know how to validate it.
  12. Please see the attached file (it was captured during read/modify/write mode, if that makes a difference). lspci.txt
  13. Hello everyone, I'm hoping someone can help troubleshoot where the bottleneck is on my system. As expected, read/modify/write was slow from the beginning, so I switched to the reconstruct method; it worked pretty well for some time, and I missed the point where it degraded (while adding more HDDs to the array). Now my speeds are around 40 MB/s, faster in read/modify/write mode and slower in reconstruct. I did a speed test of the drives; my parity drive is fast. I know my 4TB drives are not great, but when connected to a Windows machine I can read/write at 120 MB/s, so much faster than 40 MB/s. (See the drive read-test sketch after this list.) matrix-diagnostics-20240117-2155_reconst.zip matrix-diagnostics-20240117-2152_rd,wr,md.zip
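
For post 2: a quick way to confirm which Radarr job fires daily is to list the scheduled tasks and their intervals. This is only a sketch; it assumes Radarr's v3 API exposes a system/task endpoint, that Radarr is on its default port 7878, and YOUR_API_KEY is a placeholder for the key from Settings > General.

    # List Radarr's scheduled tasks and their intervals (returned as JSON)
    curl -s -H "X-Api-Key: YOUR_API_KEY" http://localhost:7878/api/v3/system/task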
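
For post 6: the workaround described in that quote is to copy into the destination user share, so Unraid's allocation settings decide where the new file lands, and only then delete the source. A minimal sketch; the paths and file name below are made-up examples, not anyone's real shares.

    # Instead of mv, copy into the user share, then remove the source only if the copy succeeded
    cp /mnt/disk1/torrents/file.mkv /mnt/user/NAS_Array/movies/ && rm /mnt/disk1/torrents/file.mkv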
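
For posts 9 and 10: to see whether the controller is still negotiating a narrow link, compare what the card advertises (LnkCap) with what it actually negotiated (LnkSta). The PCI address 01:00.0 is just an example; substitute the controller's address from plain lspci output.

    # Show advertised vs negotiated PCIe link speed/width for one device
    lspci -vv -s 01:00.0 | grep -E 'LnkCap:|LnkSta:'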
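
For post 13: one rough way to rule out a single slow drive is a sequential read test on each data disk while the array is otherwise idle. /dev/sdX is a placeholder; run it once per drive and compare the numbers.

    # Cached (-T) and buffered raw-device (-t) sequential read timings for one drive
    hdparm -tT /dev/sdX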