jvdivx

Everything posted by jvdivx

  1. Among other things, because I have heating problems in my system. SAS disks withstand higher temperatures (they can operate at more than 80º), their rated life is much longer, and they are designed for 24/7 operation. Here is a source comparing SATA and SAS disks: https://www.diffen.com/difference/SATA_vs_Serial_Attached_SCSI
  2. With -r I got the same result. I plan to replace the SATA drives with SAS drives; they are much more reliable and faster.
  3. This is the unsuccessful result of running the command on my system:

     sg_start --stop /dev/sdaa

     Dec 2 17:56:11 jvdivx-unraid kernel: sd 2:1:44:0: [sdaa] Spinning up disk...
     Dec 2 17:56:26 jvdivx-unraid kernel: .ready
     Dec 2 17:56:26 jvdivx-unraid kernel: sdaa: detected capacity change from 10000831348736 to 0
     Dec 2 17:56:26 jvdivx-unraid kernel: sd 2:1:44:0: [sdaa] 2441609216 4096-byte logical blocks: (10.0 TB/9.10 TiB)
     Dec 2 17:56:26 jvdivx-unraid kernel: sdaa: detected capacity change from 0 to 10000831348736
     Dec 2 17:56:26 jvdivx-unraid kernel: sdaa: sdaa1
     Dec 2 17:56:26 jvdivx-unraid kernel: sdaa: sdaa1
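     To see the spin-up happen right after the stop, the test can be run while following the kernel log (a sketch; adjust the device name to yours):

         # ask the drive to spin down, then watch for spin-up messages
         sg_start --stop /dev/sdaa
         dmesg --follow | grep sdaa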
  4. Because the disk's LED on the server does not go out; I can also see the disk as active in Grafana. SYSLOG:
  5. These are the results of some tests:

     root@jvdivx-unraid:~# sdparm --get STANDBY /dev/sdaa
         /dev/sdaa: HGST HUH721010AL4200 A21D
     STANDBY 0 [cha: y, def: 0, sav: 0]
     root@jvdivx-unraid:~# sdparm --set STANDBY=1 /dev/sdaa
         /dev/sdaa: HGST HUH721010AL4200 A21D
     root@jvdivx-unraid:~# sdparm --get STANDBY /dev/sdaa
         /dev/sdaa: HGST HUH721010AL4200 A21D
     STANDBY 1 [cha: y, def: 0, sav: 0]
     root@jvdivx-unraid:~# sdparm -S /dev/sdaa
         /dev/sdaa: HGST HUH721010AL4200 A21D
     Read write error recovery mode page:
       AWRE 1 [cha: y, def: 1, sav: 1]
       ARRE 1 [cha: y, def: 1, sav: 1]
       PER 0 [cha: y, def: 0, sav: 0]
     Disconnect-reconnect (SPC + transports) mode page:
       BFR 0 [cha: y, def: 0, sav: 0]
     Format (SBC) mode page:
       TPZ 10998 [cha: n, def:10998, sav:10998]
     Rigid disk (SBC) mode page:
       NOC 412982 [cha: n, def:412982, sav:412982]
     mode sense(10): transport: Host_status=0x07 [DID_ERROR] Driver_status=0x08 [DRIVER_SENSE]
     mode sense(10): transport: Host_status=0x07 [DID_ERROR] Driver_status=0x08 [DRIVER_SENSE]
     Verify error recovery (SBC) mode page:
       V_EER 0 [cha: n, def: 0, sav: 0]
     Caching (SBC) mode page:
       IC 0 [cha: y, def: 0, sav: 0]
       WCE 1 [cha: y, def: 0, sav: 1]
       RCD 0 [cha: y, def: 0, sav: 0]
     Control mode page:
       TST 0 [cha: n, def: 0, sav: 0]
       SWP 0 [cha: n, def: 0, sav: 0]
     Control extension mode page:
       DLC 0 [cha: n, def: 0, sav: 0]
     Application tag (SBC) mode page:
       AT_LAST 1 [cha: y, def: 1, sav: 1]
       AT_LAST.1 0 [cha: y, def: 0, sav: 0]
       AT_LBAT.1 0x0 [cha: y, def: 0x0, sav: 0x0]
       AT_LBA.1 0x0 [cha: y, def: 0x0, sav: 0x0]
       AT_COUNT.1 0x0 [cha: y, def: 0x0, sav: 0x0]
       AT_LAST.2 0 [cha: y, def: 0, sav: 0]
       AT_LBAT.2 0x0 [cha: y, def: 0x0, sav: 0x0]
       AT_LBA.2 0x0 [cha: y, def: 0x0, sav: 0x0]
       AT_COUNT.2 0x0 [cha: y, def: 0x0, sav: 0x0]
       AT_LAST.3 0 [cha: y, def: 0, sav: 0]
       AT_LBAT.3 0x0 [cha: y, def: 0x0, sav: 0x0]
       AT_LBA.3 0x0 [cha: y, def: 0x0, sav: 0x0]
       AT_COUNT.3 0x0 [cha: y, def: 0x0, sav: 0x0]
       AT_LAST.4 0 [cha: y, def: 0, sav: 0]
       AT_LBAT.4 0x0 [cha: y, def: 0x0, sav: 0x0]
       AT_LBA.4 0x0 [cha: y, def: 0x0, sav: 0x0]
       AT_COUNT.4 0x0 [cha: y, def: 0x0, sav: 0x0]
       AT_LAST.5 0 [cha: y, def: 0, sav: 0]
       AT_LBAT.5 0x0 [cha: y, def: 0x0, sav: 0x0]
       AT_LBA.5 0x0 [cha: y, def: 0x0, sav: 0x0]
       AT_COUNT.5 0x0 [cha: y, def: 0x0, sav: 0x0]
       AT_LAST.6 0 [cha: y, def: 0, sav: 0]
       AT_LBAT.6 0x0 [cha: y, def: 0x0, sav: 0x0]
       AT_LBA.6 0x0 [cha: y, def: 0x0, sav: 0x0]
       AT_COUNT.6 0x0 [cha: y, def: 0x0, sav: 0x0]
       AT_LAST.7 0 [cha: y, def: 0, sav: 0]
       AT_LBAT.7 0x0 [cha: y, def: 0x0, sav: 0x0]
       AT_LBA.7 0x0 [cha: y, def: 0x0, sav: 0x0]
       AT_COUNT.7 0x0 [cha: y, def: 0x0, sav: 0x0]
       AT_LAST.8 0 [cha: y, def: 0, sav: 0]
       AT_LBAT.8 0x0 [cha: y, def: 0x0, sav: 0x0]
       AT_LBA.8 0x0 [cha: y, def: 0x0, sav: 0x0]
       AT_COUNT.8 0x0 [cha: y, def: 0x0, sav: 0x0]
       AT_LAST.9 0 [cha: y, def: 0, sav: 0]
       AT_LBAT.9 0x0 [cha: y, def: 0x0, sav: 0x0]
       AT_LBA.9 0x0 [cha: y, def: 0x0, sav: 0x0]
       AT_COUNT.9 0x0 [cha: y, def: 0x0, sav: 0x0]
       AT_LAST.10 1 [cha: y, def: 1, sav: 1]
       AT_LBAT.10 0x0 [cha: n, def: 0x0, sav: 0x0]
       AT_LBA.10 0x0 [cha: n, def: 0x0, sav: 0x6989e07fd7f0000]
       AT_COUNT.10 0x800000000000ffff [cha: y, def: 0x800000000000ffff, sav: 0x300000000000000]
       AT_LAST.11 1 [cha: n, def: 0, sav: 0]
       AT_LBAT.11 0xffff [cha: n, def: 0x0, sav: 0x0]
       AT_LBA.11 0xffffffffffffffff [cha: y, def: 0xffffffffffffffff, sav: 0xa0489e07fd7f0000]
       AT_COUNT.11 0x800000000000ffff [cha: n, def: 0x0, sav: 0x100000000000000]
       AT_LAST.12 1 [cha: n, def: 0, sav: 0]
       AT_LBAT.12 0xffff [cha: n, def: 0x0, sav: 0x0]
       AT_LBA.12 0xffffffffffffffff [cha: n, def: 0x0, sav: 0x12f400000000000]
       AT_COUNT.12 0x800000000000ffff [cha: n, def: 0x0, sav: 0xffffffff00000000]
       AT_LAST.13 1 [cha: n, def: 0, sav: 0]
       AT_LBAT.13 0xffff [cha: n, def: 0x0, sav: 0x0]
       AT_LBA.13 0xffffffffffffffff [cha: n, def: 0x0, sav: 0x0]
       AT_COUNT.13 0x800000000000ffff [cha: n, def: 0x0, sav: 0x0]
       AT_LAST.14 1 [cha: n, def: 0, sav: 0]
       AT_LBAT.14 0xffff [cha: n, def: 0x0, sav: 0x0]
       AT_LBA.14 0xffffffffffffffff [cha: n, def: 0x0, sav: 0x0]
       AT_COUNT.14 0x800000000000ffff [cha: n, def: 0x0, sav: 0xffffffff]
       AT_LAST.15 1 [cha: n, def: 0, sav: 0]
       AT_LBAT.15 0xffff [cha: n, def: 0x0, sav: 0x0]
       AT_LBA.15 0xffffffffffffffff [cha: n, def: 0x0, sav: 0x0]
       AT_COUNT.15 0x800000000000ffff [cha: n, def: 0x0, sav: 0x0]
     mode sense(10): transport: Host_status=0x07 [DID_ERROR] Driver_status=0x08 [DRIVER_SENSE]
     mode sense(10): transport: Host_status=0x07 [DID_ERROR] Driver_status=0x08 [DRIVER_SENSE]
     mode sense(10): transport: Host_status=0x07 [DID_ERROR] Driver_status=0x08 [DRIVER_SENSE]
     mode sense(10): transport: Host_status=0x07 [DID_ERROR] Driver_status=0x08 [DRIVER_SENSE]
     mode sense(10): transport: Host_status=0x07 [DID_ERROR] Driver_status=0x08 [DRIVER_SENSE]
     mode sense(10): transport: Host_status=0x07 [DID_ERROR] Driver_status=0x08 [DRIVER_SENSE]
     mode sense(10): transport: Host_status=0x07 [DID_ERROR] Driver_status=0x08 [DRIVER_SENSE]
     mode sense(10): transport: Host_status=0x07 [DID_ERROR] Driver_status=0x08 [DRIVER_SENSE]
     mode sense(10): transport: Host_status=0x07 [DID_ERROR] Driver_status=0x08 [DRIVER_SENSE]
     mode sense(10): transport: Host_status=0x07 [DID_ERROR] Driver_status=0x08 [DRIVER_SENSE]
     mode sense(10): transport: Host_status=0x07 [DID_ERROR] Driver_status=0x08 [DRIVER_SENSE]
     mode sense(10): transport: Host_status=0x07 [DID_ERROR] Driver_status=0x08 [DRIVER_SENSE]
     Protocol specific logical unit mode page:
       LUPID 6 [cha: n, def: 6, sav: 6]
     Protocol specific port mode page:
       PPID 6 [cha: n, def: 6, sav: 6]
     Power condition mode page:
       PM_BG 0 [cha: y, def: 0, sav: 0]
     Power consumption mode page:
       ACT_LEV 0 [cha: n, def: 0, sav: 0]
     mode sense(10): transport: Host_status=0x07 [DID_ERROR] Driver_status=0x08 [DRIVER_SENSE]
     Informational exceptions control mode page:
       PERF 0 [cha: y, def: 0, sav: 0]
       EWASC 1 [cha: y, def: 1, sav: 1]
       DEXCPT 0 [cha: y, def: 0, sav: 0]
       MRIE 6 [cha: y, def: 3, sav: 6]
     Background control (SBC) mode page:
       S_L_FULL 0 [cha: y, def: 0, sav: 0]
     mode sense(10): transport: Host_status=0x07 [DID_ERROR] Driver_status=0x08 [DRIVER_SENSE]
     mode sense(10): transport: Host_status=0x07 [DID_ERROR] Driver_status=0x08 [DRIVER_SENSE]
     mode sense(10): transport: Host_status=0x07 [DID_ERROR] Driver_status=0x08 [DRIVER_SENSE]
     mode sense(10): transport: Host_status=0x07 [DID_ERROR] Driver_status=0x08 [DRIVER_SENSE]
     mode sense(10): transport: Host_status=0x07 [DID_ERROR] Driver_status=0x08 [DRIVER_SENSE]
     mode sense(10): transport: Host_status=0x07 [DID_ERROR] Driver_status=0x08 [DRIVER_SENSE]
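     Since setting STANDBY=1 in the mode page alone did not appear to spin the disk down, another avenue worth trying (a sketch, not verified on this firmware) is to request the standby power condition directly through START STOP UNIT:

         # request the standby_z power condition (power condition 3)
         sg_start --pc=3 /dev/sdaa

         # or issue a plain STOP UNIT via sdparm
         sdparm --command=stop /dev/sdaa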
  6. Sorry, I didn't see this thread, and I asked the same question here: https://forums.unraid.net/topic/85673-sas-disks-spin-down-68-rc7/ I have installed version 6.8.0-rc7, which has support for sdparm. Of the 27 disks I have in the array, 4 are SAS disks, but I can't spin them down. I tested the sg_start command in both version 6.7.2 and 6.8.0-rc7 without satisfactory results: the command executes without errors, but the disks do not spin down. If needed, I can help you as a tester for your developments. Thanks.
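     To exercise all four SAS disks in one go, a loop like this sketch could be used (the device names are hypothetical placeholders for my four SAS drives):

         for dev in /dev/sdaa /dev/sdab /dev/sdac /dev/sdad; do
             echo "== $dev =="
             sg_start --stop "$dev"          # ask the drive to spin down
             sleep 5
             sdparm --command=sense "$dev"   # check what the drive reports afterwards
         done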
  7. I tested the sg_start command in both version 6.7.2 and 6.8.0-rc7 without satisfactory results: the command executes without errors, but the disks do not spin down. Thanks.
  8. I have installed version 6.8.0-rc7, which has support for sdparm. Of the 27 disks I have in the array, 4 are SAS disks, but I can't spin them down. If needed, I can help you as a tester for your developments. Errors:
  9. I have managed to lower the temperature of the controllers, but I still have the same problem. The controller, the SAS expander, and the disks all have an acceptable temperature after more than 6 hours of operation.
  10. Well, that is very possible; I have temperature problems with the server in summer. It is possible that the 5-minute stop lowers the temperature enough for it to work properly again. If that is the case, as soon as autumn/winter arrives I will not need this workaround. Thank you.
  11. I have found a workaround while I look for the definitive solution. Using CA User Scripts, I scheduled a task that copies for 15 minutes, stops, and launches the copy again after 5 minutes, so I always get the top transfer rate (a sketch of the loop is below). I will adjust the intervals if I see that I can extend the copy windows. I have transferred 300 GB in 1 hour.
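      A minimal sketch of that scheduled loop, assuming rclone and placeholder paths (rclone skips files it has already copied, so restarting is cheap):

          #!/bin/bash
          # copy in 15-minute windows with 5-minute pauses in between
          while true; do
              timeout 15m rclone copy /mnt/source /mnt/disk3/dest
              sleep 5m
          done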
  12. I also changed the value of md_write_method, but it didn't improve much, because I don't use a parity disk and I only write to one disk at a time.
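      For reference, this is the kind of command used on Unraid to switch the write method (as I understand it, 1 selects reconstruct write and 0 the default read/modify/write):

          mdcmd set md_write_method 1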
  13. Results for everything the array holds at the moment:
  14. I don't think that solution is going to help me; I am doing a migration of more than 100 TB.
  15. Result for an SMR disk:
      Result for a non-SMR disk:
  16. The non-SMR disks I have already filled, and I had no problems. Even so, the friend who recommended this system to me has the same disks as I do and does not have these problems. If it were the SMR disks, they would not work well for more than 1 hour. The thing is that I have 18 SMR disks and I never had these problems on any other system. Many thanks.
  17. I have tried version 6.6.7 and exactly the same thing happens. Does anyone have any ideas? I am very interested in the product, but if I cannot solve this, it would be a serious obstacle to my decision to make the purchase.
  18. When it goes wrong, the rclone processes disappear from iotop, although rclone is still running. If I stop rclone, the situation is even stranger, since no process is supposed to be writing to the disk, but iotop still shows heavy I/O, and it usually stays like this for 5 minutes.
  19. Unfortunately, I only have those controllers.
  20. This is the result of iotop when everything works fine:
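      (For reference, a typical invocation for this kind of check; the exact flags used are not recorded here:)

          # show only processes currently doing I/O, with accumulated totals per process
          iotop -o -a -P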
  21. That's not my case. Only 4 files are being copied, all to one disk, and if I use rsync the same thing happens even if I write one file at a time. The funny thing is that it usually happens after more than 1 hour, and sometimes at 30 minutes. My 15-day trial license has run out. If I turn off the array, can I continue testing for some more time?
  22. The disks are in the server, connected to the same controller: the source disk is not assigned to the array, and the destination disk is in the array. The source disk is formatted btrfs (mixed) and the target disk xfs.

      Example of rclone working properly (for an hour and a quarter):

      Transferred:   446.711G / 6.819 TBytes, 6%, 100.116 MBytes/s, ETA 18h34m5s
      Errors:        0
      Checks:        386 / 386, 100%
      Transferred:   739 / 5331, 14%
      Elapsed time:  1h16m9s
      Transferring:
       * FolderX/***********: 98% /751.240M, 38.018M/s, 0s
       * FolderX/***********: 63% /751.004M, 30.781M/s, 8s
       * FolderX/***********: 88% /776.334M, 37.486M/s, 2s
       * FolderX/***********: 74% /748.467M, 26.427M/s, 7s

      The moment when the write speed to the disk drops to a minimum and no longer recovers:

      Transferred:   447.869G / 6.819 TBytes, 6%, 98.352 MBytes/s, ETA 18h53m51s
      Errors:        0
      Checks:        386 / 386, 100%
      Transferred:   742 / 5331, 14%
      Elapsed time:  1h17m43s
      Transferring:
       * FolderX/***********: 20% /559.182M, 2.834M/s, 2m36s
       * FolderX/***********: 12% /559.518M, 2.286M/s, 3m34s
       * FolderX/***********: 95% /559.471M, 1.920M/s, 13s
       * FolderX/***********: 11% /558.863M, 2.042M/s, 4m3s

      jvdivx-unraid-diagnostics-20190826-1236.zip
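      A sketch of the kind of rclone invocation behind these stats (the paths are placeholders; four concurrent transfers match the output above):

          rclone copy /mnt/source /mnt/disk3/dest --transfers 4 --progress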
  23. It was just a test to rule out problems with the array.
  24. Hello everyone, I am a new Unraid user, for now on a trial license, but I'm thinking of moving to the Plus version because I'm currently migrating my storage system to Unraid. I did some tests and Unraid seems to cover all my needs.

      I'm in the middle of a migration from my current system to Unraid and I have found some issues regarding write-speed drops. For the migration I'm moving the data from my current system to Unraid disks formatted in XFS, with no parity disks for the moment. With any copy utility (rsync, rclone), the speed drops from 100 MB/s to 10 MB/s after about an hour and no longer recovers until I stop the copy for a while and run it again. I have tried copying directly to a disk (/mnt/disk3), to a share (/mnt/user), and also mounting the XFS disks manually (with the array stopped); in all cases I got the same results.

      If I solve the problem I will buy the Plus license, since the system gives me everything I need: Docker, VMs, storage control, IPMI, etc...

      These are my hardware specs:
      Supermicro X9DRi-LN4+
      2 Intel Xeon CPU E5-2650 2.00GHz
      LSI Corp SAS2x36
      LSI Corp SAS2x28
      2 Adaptec RAID 71605
      3 10TB HGST disks
      18 8TB Seagate Archive disks

      Kind regards, Jvdivx.
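      For reference, a sketch of the kind of copy used in these tests (the paths are placeholders for the source mount and one array disk):

          # copy straight to a single array disk, bypassing the user share layer
          rsync -a --progress /mnt/source/ /mnt/disk3/data/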