papnikol

Everything posted by papnikol

  1. Thanks to both of you. I still have the same problem. Maybe I should reinstall mc?
  2. So, before removing some disks, I am copying their data to a larger installed disk. As you can see in the following image, I am trying to move all files from disk 9 to disk 13. Both contain a '-MOVIES' directory at their root. I want the '-MOVIES' directory from disk 9 to move to disk 13 (I don't mind any overwrites). But, as you can see, somehow the '-MOVIES' directory from disk 9 is moved inside the '-MOVIES' directory on disk 13. What am I doing wrong?
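     If it helps, I could probably merge the two from the shell instead of mc. A rough sketch, assuming the standard Unraid /mnt/diskN mount points (paths are examples only):

         # copy the contents of disk 9's -MOVIES into disk 13's -MOVIES, overwriting where needed
         rsync -av --remove-source-files /mnt/disk9/-MOVIES/ /mnt/disk13/-MOVIES/
         # --remove-source-files deletes the copied files but leaves empty directories behind
         find /mnt/disk9/-MOVIES -type d -empty -delete

     The trailing slash on the source path is what makes rsync merge the contents instead of nesting the directory.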
  3. 1. I have no VMs. 2. I have already changed my MOBO/CPU/RAM to a much better configuration (but the 2 SASLP controllers remain): MSI RD480 / AMD Athlon 64 3700+ / 3GB -> Asrock FM2A88X Pro+ / AMD Athlon X4 840 Quad Core @ 3100MHz / 8GB. 3. These are the results for one of the controllers with the new configuration: There is more bandwidth due to the move from PCIe 1.0 to PCIe 2.x. But the bandwidth is still not balanced across all disks. Each drive's bandwidth is maximized in turn, until the last 2-3 drives are left with very little bandwidth available.
  4. Some thoughts: - You do not mention it, but I am guessing that the disconnection happens after the preclear is completed. Otherwise the signature not being valid is expected, since writing it is the last thing that happens when preclearing; if the disconnection happens earlier, you will naturally not have a valid signature. - Did you check the SMART stats on the drive? - The only solution I can think of, until someone more knowledgeable chimes in, is to try another drive, if you have one.
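     To check the SMART stats from the console, something like this should work (the device name is just an example):

         # full SMART report, including reallocated/pending sector counts
         smartctl -a /dev/sdX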
  5. I think no other task is consuming significant CPU time (I think mdrecoveryd and unraidd should be the ones running during a parity check): And, yes, it seems the parity check will last 2 days :(
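     For reference, this is roughly how I take a snapshot of what is running from the console:

         # one-shot, non-interactive process listing
         top -b -n 1 | head -n 20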
  6. I see. Thank you for your input. I happen to have an Athlon X4 840 available (quad core, 3.1GHz) and a mobo that supports it (Asrock FM2A88X Pro+, probably an unRAIDer's dream since it has 8 SATA3 ports). I was hoping to avoid the whole process but it seems I will have to give those a try....
  7. I thought of it, but it seems strange. The CPU usage is 100% when there are only 2 disks involved in the parity check operation, and it is lower when all 13 are reading/writing (reaching a collective read speed >500MB/s). And although it is a single-core CPU, it seems abnormal for it to be limited to 50MB/s per disk.
  8. One of my arrays (12 data disks & 1 parity disk) is showing very slow and strange parity check behavior. The setup is quite old and slow:
     Mobo: MSI RD-480
     CPU: AMD Athlon 64 3700+
     RAM: 3GB
     1st SASLP (PCIe 8x) has 4 disks
     2nd SASLP (PCIe 8x) has 6 disks
     Mobo controller has 3 disks
     The mobo has 2 PCIe 16x slots but they are v1. Also, the mobo controllers support only SATA1. Still, the behavior I see when doing a parity check seems strange. The array had 2x4TB disks and 11x2TB disks. I decided to replace most of them with larger 8TB disks and started by replacing the 4TB parity disk with an 8TB disk. The speed was slow, but when the array reached the 2TB point (which means that only the 8TB & 4TB disks are reading/writing) I was expecting it to pick up. And yet, it is only ~45MB/s. The same was happening when I did a parity check with the 4TB disk as parity. The problem should not be the low RAM (according to the pic I attached). Nor should it be the bandwidth of the SASLP card (both drives are on an SASLP). It seems that somehow each disk's bandwidth cannot exceed a 50MB/s limit. This is the write speed for the parity drive during the whole parity check up to now: Can someone help me pinpoint the bottleneck? PS: sorry for the long post but I am trying to be precise.
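     One thing I can still do is test each disk's raw sequential read speed outside of a parity check, to see whether the ~50MB/s ceiling follows the disk or the controller. A sketch (read-only; device names are examples):

         # quick buffered/cached read benchmark
         hdparm -tT /dev/sdX
         # longer sequential read of ~4GB, bypassing the page cache
         dd if=/dev/sdX of=/dev/null bs=1M count=4096 iflag=direct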
  9. Thanks for taking the time to check the diagnostics. I actually normally have the default set at 1hr; I had just changed it to 30min in order to check the problem I mentioned. I must admit that since I rebooted I haven't seen the spinups I was describing. Maybe I am obsessing, but after the reboot, without me accessing the disks in order to write to them, I saw a few writes only to the 3 XFS disks. Still, this might be a quirk of XFS...
  10. This is the nicest way I have ever been reminded that I can be stupid sometimes. I have now attached the diagnostics file to the original post. Unfortunately, this time the drives happen to be spun down, so it might not be helpful.
  11. Hi everyone, I have a 10 drive array (1P+9D) and I have noticed that all XFS drives tend to spin up and remain spun up. Meanwhile, similar ReiserFS drives do not spin up unexpectedly. All disks belong to the same share and have the same settings: Spin down delay: use default. Tools -> Disk Settings: Default spin down delay: 30 minutes, Enable spinup groups: No. Is there a reasonable explanation and a fix? I searched the forums and found nothing.... towerp-diagnostics-20180504-0801.zip
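      In case it is useful, this is roughly how I have been checking from the console whether a drive is actually spun up and whether anything is touching it (device name is an example):

          # reports "active/idle" or "standby" for the drive
          hdparm -C /dev/sdX
          # compare the read/write counters for the drive over time
          grep ' sdX ' /proc/diskstats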
  12. I know that gfjardim posted a fixed preclear version, but just FYI, the preclear of an 8TB Seagate Archive finished successfully with your patched preclear script (thanks). It took some time...: ...but that's partly because the particular array is slow (also, I used the script with the GUI plugin and I think the faster post-read option was not used). I am preclearing an 8TB WD Red NAS in another (faster) array, using the faster post-read option, and it is going faster (37hrs in, it is at 36% of the post-read).
  13. To make what binhex says even clearer:
      - There is one plugin by gfjardim that works as a front end to the preclear script. This works fine.
      - There are 3 actual scripts that can perform the preclear action:
        - The original script by the great JoeL (which has not been updated for some time)
        - The faster script by bjp999, which stresses the disk equally but is (obviously) faster
        - The script by gfjardim, which is probably the most efficient one and is used by default by the plugin
      At the moment the only script working is the bjp999 one, patched by binhex (link here). In order to take advantage of its faster post-read option it should be invoked with the -f argument, as in the sketch below. When gfjardim fixes the bugs (which will be announced in this thread), one should probably revert to using his script.
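      A minimal invocation sketch, assuming the patched script takes the device path as its final argument like the original preclear_disk.sh (device name is an example):

          # run the patched script with the faster post-read verification
          ./preclear_bjp.sh -f /dev/sdX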
  14. Thanks for the clarification, gfjardim. I did not expect that, tbh, since my respective speeds are ~180MB/s and ~60MB/s. That means that the (sum + verify zero sum) portion takes roughly twice as long as the reading portion...
  15. Got it, I will try it from the shell. Just FYI, I noticed that the option is not available with the unpatched faster preclear script either.
  16. BTW, I tried to start preclearing a second disk and noticed that I do not get the option for faster post-read (as this post mentions):
  17. I think 65MB/s might be normal at the end of the disk, but is that the case for rvoosterhout? Because I am also preclearing an 8TB Seagate Archive and, although speeds for preread and zeroing were normal, I am getting a postread speed of 57MB/s while I am only at 8% (I don't remember if I clicked the fast post-read option, but it is still too slow). EDIT: I might be misunderstanding something, because the web GUI shows this: but the log (pressing the eye icon) shows this:
  18. Will do. It will be some time as it is an 8TB disk. I will post the results in the thread you mentioned.
  19. I am trying it now. One difference I noticed is that the starting preread speed is different, although this could be inherent to the scripts. The gfjardim script started at 175MB/s and the bjp999 one started at 133MB/s.
  20. Wow, I like your graph much more than mine. So, my array has 2 AOC-SASLP controllers with 8 and 3 HDDs respectively. I moved one drive so that there are 7 and 4 HDDs on each controller, in order to exceed both controllers' max bandwidth. The balancing problem appears on both controllers. Maybe it is a problem with the controller settings. Maybe I should try disabling INT 13h (although I never understood what that is)....
  21. Thanks for your prompt response, it works. It is difficult to follow all this stuff if you get out of touch for a few months. Does the same patch work on the bjp999 script? i.e. could I try
      sed -i -e "s/print \$9 /print \$8 /" -e "s/sfdisk -R /blockdev --rereadpt /" preclear_bjp.sh
      instead of
      sed -i -e "s/print \$9 /print \$8 /" -e "s/sfdisk -R /blockdev --rereadpt /" preclear_disk.sh
      ?
  22. I tried using the JoeL or bjp999 preclear scripts, since the webgui gfjardim script is currently not working. I get errors with both of them. Have they become obsolete for Unraid 6.5?
  23. Hi. I ran diskspeed for a controller that has 8 disks attached (attached file). The controller is an AOC-SASLP-MV8 (PCIe x4). Do those results indicate that the 4 PCIe lanes are saturated by the first 4 drives? Or could it be something else?
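      A rough back-of-the-envelope check, assuming the slot is running at PCIe 1.0 speeds (~250MB/s per lane raw, call it ~800MB/s usable for x4 after overhead; these figures are approximations):

          # per-drive ceiling if all 8 drives read at once
          echo "$((800 / 8)) MB/s per drive"   # prints 100
          # if the first 4 drives each pull ~180MB/s, they alone nearly fill the link
          echo "$((4 * 180)) MB/s"             # prints 720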
  24. So there is no solution other than changing hardware?
  25. For a long time I have been having a problem with my unRAID setup. Every parity check finds a few errors (like 5-20 errors - the array consists of 6TB HDs), and every time they are in different positions, so the obvious assumption is that they are fake. I checked my memory, my disks and my PSU, and I found no problem. The only remaining culprits are the mobo or the CPU, which are difficult to check. I then stumbled on a suggestion from Settings > Fix Common Problems mentioning that I have a Marvell hard drive controller and that this has sometimes caused parity errors. Is there a simple way to check if this is the reason for my problem? PS: my mobo is the Asus P5Q Deluxe (Marvell controller: 88SE6121) and the CPU is the Core 2 Quad Q9550.
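      One way to narrow it down might be to identify which drives actually sit behind the 88SE6121 and then, as a test, move those drives to the Intel ports and re-run a couple of parity checks. A sketch of how to map /dev/sdX devices to controllers (output details may vary):

          # locate the Marvell controller and note its PCI address
          lspci | grep -i marvell
          # each symlink here includes the PCI address of the controller the disk is attached to
          ls -l /dev/disk/by-path/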