Everything posted by LJQ

  1. 1. After powering on the server, clicking the icon next to a disk to spin it down manually produces a spin-down log entry, but the disk does not actually spin down.
     2. When I set the disk spin-down delay to 15 minutes for the first time and save, the log shows the system attempting to spin down all disks, but nothing happens.
     3. Once the disk settings have been modified, both manual and automatic spin-down fail completely, and clicking spin down no longer produces any log output at all.
     4. When I click spin down, the system only logs:
        Apr 11 10:04:34 LJQ-UNRAID emhttpd: spinning down /dev/sdb
        Apr 11 10:04:35 LJQ-UNRAID emhttpd: sdspin /dev/sdb down: 1
        When the system does not respond to the click, nothing is logged.
     5. After installing the SAS Spin Down plugin, manual and automatic spin-down for the SAS drives (disk1-disk6) in the array works normally again, while the SATA drives still show no response and no logs.
     Q: I still don't understand why the SATA drives (disk6-23) in the array will not spin down, or how to make them spin down. šŸ¤£ Attached are some log entries. ljq-unraid-syslog-20230411-0529.zip
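     As an extra data point, the drive's actual power state can be checked from the console, independent of the web UI; /dev/sdb is just the example device from the log above:
     # Check the current power state without waking the drive
     hdparm -C /dev/sdb                  # reports "active/idle" or "standby"
     smartctl -i -n standby /dev/sdb     # bails out early if the drive is already in standby
     # Ask the drive to enter standby directly, bypassing the web UI
     hdparm -y /dev/sdb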
  2. However, I ran into this problem on a fresh Unraid system as well. Even if the manual click failed because of a browser or OS issue, automatic spin-down should not have failed too. šŸ¤£ It seems I am stuck in this predicament for now. I will try a fresh Unraid 6.11.5 over the weekend and rule out other potential causes such as the browser or operating system. šŸ˜Ÿ Could you please tell me which keywords the diagnostic log prints when I click SPIN DOWN DEVICE? I will need to test this repeatedly and will report any progress.
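     For my own repeated testing, a filter on the live syslog should show whether anything fires when I click the button; the keywords below are taken from the emhttpd lines quoted in the first post above ("spinning down", "sdspin"), so the pattern is only as complete as those samples:
     # Follow the syslog and show only spin-down related lines while clicking SPIN DOWN DEVICE
     tail -f /var/log/syslog | grep -iE 'spinning down|spin down|sdspin'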
  3. I am sure I clicked the green dot next to the hard drive, and I have recorded another video as evidence. Automatic spin-down did not work either. I am also attaching the latest logs for your reference. 2023-04-06 21-55-57.mp4 ljq-unraid-diagnostics-20230406-2157.zip
  4. What are the keywords for the SAS plugin you mentioned? My array has 24 hard drives, some SAS and some SATA. I have just clicked spin down on three 16TB SATA drives:
     Disk 17 WUH721816ALE6L4_2PG5EGST - 16TB (sdb)
     Disk 18 WUH721816ALE6L4_2CHWBNTJ - 16TB (sdg)
     Disk 19 WUH721816ALE6L4_2BH7UM9N - 16TB (sdh)
     I am attaching the latest logs for your reference. ljq-unraid-diagnostics-20230406-1023.zip
  5. Well, the hardware hasn't changed, but I have switched to a new Unraid USB drive (with many plugins, but with no Docker containers or virtual machines enabled) and rolled back to version 6.11.5. Automatic spin-down and manual SPIN DOWN DEVICE still aren't working. Are there any other possible causes, and how can I troubleshoot this? I am attaching the latest logs for your reference. ljq-unraid-diagnostics-20230405-2211.zip
  6. Version: Unraid 6.12.0-rc2
     Description: Freshly installed, no plugins, Docker containers, or virtual machines; the only change after installation was setting the disk spin-down delay to 15 minutes.
     Problem: The hard drives never spin down, and manually clicking SPIN DOWN DEVICE in the interface does not spin them down either. What could be causing this?
     Hardware: EPYC 7551P, Supermicro H11SSL-i, Broadcom / LSI SAS3008 PCI-Express Fusion-MPT SAS-3 (rev 02); 24 disks connected to the motherboard through the LSI passthrough (HBA) card.
     Hardware details are in the screenshot, and the logs have been uploaded. tower-diagnostics-20230405-1744.zip
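     As a minimal sanity check on my side, assuming the global disk settings are saved to /boot/config/disk.cfg on the flash drive (the exact key name is an assumption, hence the loose grep):
     # Confirm that the 15-minute spin-down delay was actually written to the flash config
     grep -i spindown /boot/config/disk.cfg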
  7. After testing various scenarios, the results are as follows:
     1. Unraid A array (disk1-disk4) reading simultaneously == parallel rsync over 10G LAN ==> Unraid B (disk10-disk13) writing simultaneously: no noticeable bottleneck for either reading or writing.
     2. Unraid B cache pool (disk1-disk4) reading simultaneously == local ==> Unraid B cache pool (disk10-disk13) writing simultaneously: no noticeable bottleneck for either reading or writing.
     3. Unraid B array (disk1-disk4) reading simultaneously == local ==> Unraid B (disk10-disk13) writing simultaneously: no noticeable bottleneck for reading, but a significant bottleneck for writing.
     From these results we can perhaps draw the following conclusions:
     1. When multiple drives in an Unraid array are read and written simultaneously, writing can become a bottleneck, with the total write speed limited to 200-300MB/s.
     2. When multiple drives are written to or read from Unraid over the network, no noticeable bottleneck is observed.
     3. When multiple cache pools are set up in Unraid and reads and writes run between them simultaneously, no noticeable bottleneck is observed.
     4. When the dd command operates directly on /dev/sd* and bypasses the array's md driver, there is no speed bottleneck.
     This may indicate that the way Unraid arrays are implemented can become a bottleneck when a large number of disks are written simultaneously within the array.
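     As a follow-up idea, a synthetic version of test 3 that takes rsync out of the picture could look like the sketch below; the disk numbers, file size, and oflag=direct are all assumptions, not something I have run in exactly this form:
     # Write a large test file to several array disks at once through the md driver mounts (/mnt/diskN)
     for d in 10 11 12 13; do
         dd if=/dev/zero of=/mnt/disk$d/ddtest.bin bs=1M count=8192 oflag=direct &
     done
     wait
     # Clean up the test files afterwards
     rm -f /mnt/disk1{0,1,2,3}/ddtest.bin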
  8. Alright, in order to copy the data from the 8 old disks to the 8 new disks I have tried too many methods and spent a lot of time. I have now put the 8 old disks back into another host and am transferring the data to the new disks in Unraid over a 10-gigabit local area network. Once all the data has been transferred, I will find time to test the read and write behaviour of the Unraid array disks using the methods you suggested, and will update the thread with any new findings. Thank you again for your patient guidance.
  9. Thank you for your patience. In my previous post I mentioned using the dd command to copy disk sdh (8TB) directly to disk sdx (12TB) and then expand sdx, which resulted in Unraid not recognizing sdx; you responded to that as well. If I instead perform the following operations:
     nohup rsync -avP --append /dev/md1p1 /mnt/md10p1 > rsync1.log 2>&1 &
     nohup rsync -avP --append /dev/md2p1 /mnt/md11p1 > rsync2.log 2>&1 &
     .....
     I will not be bypassing the md driver, so should there be no speed bottleneck when copying multiple disks this way? And after expanding the sdx (12TB) disk with fdisk and xfs_growfs, will Unraid still fail to recognize it?
  10. So your meaning is: using the dd command bypasses the md driver, allowing maximum read and write speeds across multiple disks, while rsync and cp still go through the md driver and are therefore heavily limited? Based on this, can we draw the following conclusions:
      1. Unraid's use of the md driver may cause a bottleneck when reading and writing across multiple disks simultaneously, with total write speeds only reaching 200-300MB/s;
      2. When transferring data over a 10Gb network, reading from multiple Unraid disks and writing to TrueNAS, or reading from TrueNAS and writing to multiple Unraid disks, is faster than direct reads and writes between Unraid disks. Could this be because the network path partially bypasses the md driver?
      3. If the array is stopped and the XFS disks are mounted manually, and rsync is then used to transfer data across multiple disks, would this also bypass the md driver and give higher transfer speeds? However, would this operation carry any hidden risks?
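      Just to illustrate what I mean in point 3 (device names and mount points are invented here, and I have not verified that doing this outside the array is safe):
      # With the array stopped: mount the source read-only, the target read-write, then copy at file level
      mkdir -p /mnt/src1 /mnt/dst1
      mount -o ro /dev/sdh1 /mnt/src1
      mount /dev/sdx1 /mnt/dst1
      nohup rsync -avP /mnt/src1/ /mnt/dst1/ > rsync_direct1.log 2>&1 &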
  11. In other words: I first formatted the disk as XFS through the web interface and then used the dd command to copy the data, but a full-disk dd inevitably overwrites the target disk's partition table and filesystem, which is why Unraid can no longer recognize the XFS partition. Based on that conclusion, is full-disk dd copying not recommended in Unraid at all, or is it fine as long as the two disks are the same size? I'm still not entirely clear about Unraid's particular partition layout and identifiers.
  12. Unraid version: 6.11.5 / 6.12.0-rc2
      Background: I've set up an array with 24 disks. Eight of them are old drives, disk1-disk8, 8TB each, XFS, full of old data. Disk10-disk17 are new 12TB drives, XFS/ZFS, with no data. I want to copy all the data from disk1-disk8 to disk10-disk17 simultaneously.
      Problem encountered: No matter which method I use, the total copy speed across multiple disks is only 100-300MB/s, far below the disks' read and write speed. Copying from a single old drive to a single new drive reaches 250MB/s+, but the total does not increase when copying from several drives at once, and can even drop.
      Methods tried (ineffective):
      Running multiple rsync processes in the background to sync data simultaneously, such as:
      nohup rsync -avP --append /mnt/disk1/ /mnt/disk10 > rsync1.log 2>&1 &
      nohup rsync -avP --append /mnt/disk2/ /mnt/disk11 > rsync2.log 2>&1 &
      Running multiple cp processes in the background to copy data simultaneously, such as:
      nohup cp -R /mnt/disk1/* /mnt/disk10/ > cp1.log 2>&1 &
      nohup cp -R /mnt/disk2/* /mnt/disk11/ > cp2.log 2>&1 &
      Formatting disk10-disk17 as XFS, then performing a full data copy
      Downgrading Unraid to 6.11.5 or upgrading to 6.12.0-rc2
      Adjusting the process priority of the rsync/cp commands with nice/renice
      Raising zfs_arc_max from the default 32GB to 128GB
      Main host configuration: LSI2308 12Gb direct-attached (HBA) card, 258GB memory, AMD EPYC 7551P CPU. It shouldn't be a CPU, memory, or disk bottleneck.
      The only effective method: parallel dd commands, which reach 1.5-1.7GB/s in total:
      nohup dd if=/dev/sdh of=/dev/sdx bs=128K status=progress > dd1.log 2>&1 &
      nohup dd if=/dev/sdi of=/dev/sdy bs=128K status=progress > dd2.log 2>&1 &
      nohup dd if=/dev/sdj of=/dev/sdr bs=128K status=progress > dd3.log 2>&1 &
      nohup dd if=/dev/sdg of=/dev/sds bs=128K status=progress > dd4.log 2>&1 &
      nohup dd if=/dev/sdc of=/dev/sdt bs=128K status=progress > dd5.log 2>&1 &
      nohup dd if=/dev/sdd of=/dev/sdu bs=128K status=progress > dd6.log 2>&1 &
      nohup dd if=/dev/sde of=/dev/sdv bs=128K status=progress > dd7.log 2>&1 &
      nohup dd if=/dev/sdf of=/dev/sdw bs=128K status=progress > dd8.log 2>&1 &
      I will upload some screenshots and logs.
      Supplement:
      1. Previously, I started multiple rsync processes on a TrueNAS machine and copied data to another Unraid machine's disk1-disk4 over a 10Gb network. The speed reached 600MB/s+ with no bottleneck observed.
      2. I also started multiple rsync processes on an Unraid machine and copied disk1-disk8 data to another TrueNAS machine over a 10Gb network. The speed reached 700MB/s+ with no bottleneck observed.
      ljq-unraid-diagnostics-20230403-1751.zip
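      For anyone trying to reproduce this, a crude way to watch per-disk write throughput during the copies without installing extra tools is to sample /proc/diskstats (field 10 is sectors written, 512 bytes each); the 5-second window is arbitrary:
      # Rough per-disk write throughput over a 5-second window
      snap() { awk '$3 ~ /^sd[a-z]+$/ {print $3, $10}' /proc/diskstats; }
      before=$(snap); sleep 5; after=$(snap)
      join <(echo "$before" | sort) <(echo "$after" | sort) |
          awk '{printf "%s %.1f MB/s\n", $1, ($3 - $2) * 512 / 1024 / 1024 / 5}'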
  13. @Moderators So you mean that even if I format the sdx disk as XFS in Unraid first, using the dd command to copy the data afterwards will still leave the XFS partition unrecognizable? Is there any way around this, such as manually adjusting the partition layout and signature? I don't want to format the new hard drive yet again and re-copy 8TB of data with rsync or cp. šŸ¤£
  14. Unraid version: 6.11.5
      Steps performed:
      1. I built a 24-disk array, in which the sdh disk is 8TB, XFS, containing a large amount of data, and the sdx disk is 12TB, XFS, with no data;
      2. I used the dd command to copy all data from the sdh disk to the sdx disk;
      3. Since the sdx disk is 12TB, after the copy finished I used fdisk to adjust the partition, xfs_growfs to expand the XFS filesystem, and xfs_repair to check the filesystem;
      4. After all operations were completed, I manually mounted /dev/sdx1 at /mnt/sdx1 and could see the files on the sdx disk, so everything appears to be fine.
      Remaining problem: Whether I restart Unraid or rebuild the array, the Unraid interface always shows the sdx disk (with its filesystem already set to XFS) as "Unmountable: Unsupported partition layout".
      Some key commands:
      nohup dd if=/dev/sdh of=/dev/sdx bs=128K status=progress > dd1.log 2>&1 &
      fdisk /dev/sdx
      xfs_repair /dev/sdx1
      mount /dev/sdx1 /mnt/sdx1
      xfs_growfs /mnt/sdx1
      I have also uploaded the logs.
      Question: What do I need to do so that Unraid recognizes the XFS filesystem on the sdx disk? ljq-unraid-diagnostics-20230403-1330.zip
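      To compare the dd'd disk against a disk the array does mount, these read-only checks seem like a reasonable first step; /dev/sdb below just stands for any known-good array disk, so treat the device names as placeholders:
      # Partition table of the copied disk vs a disk Unraid mounts normally (start sector, size, type)
      fdisk -l /dev/sdx
      fdisk -l /dev/sdb
      # Filesystem signatures and UUIDs on the first partition of each
      blkid /dev/sdx1 /dev/sdb1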