electron286


Everything posted by electron286

  1. Are you using QEMU to create a virtual array that Unraid is then in turn working with? I am not sure of any real advantage there, but there are a bunch of potential issues if an array rebuild is ever needed. Is the cache drive getting direct access from Unraid? It looks like it is. In my tests, many NVMe drives will slow down at the elevated temperature you show in your earlier picture. If the problem is temperature related, additional heat sinking and/or airflow over the NVMe drive may resolve it.
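     If you want to confirm thermal throttling before buying heatsinks, watch the drive temperature during a long transfer. A minimal sketch from the Unraid console, assuming the drive shows up as /dev/nvme0 (adjust the device node to your system):

       # Print the NVMe SMART attributes, including the composite temperature
       smartctl -A /dev/nvme0
       # Run it again mid-transfer; a temperature climbing toward the drive's
       # throttle point while speeds fall is the thermal signature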
  2. I think the real answer is to look at why there is a mismatch between the parity and the data. Something happened, or they would match. If you run successive parity checks, WITHOUT PARITY CORRECTIONS BEING WRITTEN, and get identical results, even with sync errors, then yes, I agree it does not currently look like a hardware issue. If the results are NOT consistent, there probably is a hardware issue. Unless you have past logs to look at, to determine where the error occurred, you really do not know whether a data drive or a parity drive is in error. However, there are tools people have used in the past to identify which files may be affected in such situations. The data files can then be verified as correct or not, depending on how good your backup strategy is. This way you can verify that your data drives are correct, then rebuild your parity drive(s).

     Hardware issues and power bumps are the two main causes of bad data being written to either the data drives or the parity drives. Another often overlooked cause is the timing and voltage settings on motherboards. Some newer motherboards, many current ASUS boards for example, ship with defaults chosen with GAMERS in mind, favoring performance over reliability. Pushing the timing and voltage settings for better gaming performance is the opposite of what we should be seeking on a data server. We want stable, reliable, and repeatable results. Regardless, after data is written to the array, and the data and parity writes are complete, any and all parity checks afterwards should show NO sync errors. If there are errors, something is wrong, no matter how much everything else seems to be OK.

     On critical data, I even use PAR files to create additional protection and recovery files for sets of data files. This lets me verify the data files, and recover from damaged and even MISSING data files. I then store all of them, the data files and the PAR files, on the DATA drives in Unraid. There are many programs that work with PAR and PAR2 files. It is a similar concept to how the two parity drives work in Unraid, but at the file level instead of the drive level. QuickPar is one such utility, though I have not used that one myself.
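     For anyone who wants the same protection from the command line, par2cmdline does what QuickPar does. A minimal sketch, assuming the data lives in a share at /mnt/user/photos (the path and the 10% redundancy level are just examples):

       cd /mnt/user/photos
       # create recovery files with 10% redundancy alongside the data
       par2 create -r10 photos.par2 *.jpg
       # later: check that the data files are intact
       par2 verify photos.par2
       # rebuild damaged or missing files, if enough recovery blocks survive
       par2 repair photos.par2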
  3. While the N5105 is a very capable low-power CPU, the N100 seems to be a better option overall for Unraid. I am currently testing various ASMedia and JMicron controllers. The JMB585 is a great controller, and so far in my testing it works very well; it is a PCIe Gen 3 x2 bridge which converts to 5 SATA III ports. Regardless of the hate people give to multiplexers/port multipliers, I am next going to be testing stacked port multipliers combined with the JMB585. If done properly, paying attention to bandwidth and its limits, they can be used in a very usable array configuration that will be able to outperform physical spinning hard drives. Specifically, I am going to be testing the JMB575: 1 SATA port in (host) to 5 SATA ports out, all SATA III 6Gb/s capable.
  4. Cheap low-power motherboard NAS array options... Yes, port multipliers divide bandwidth. But when planned out properly they can be very useful for building an array. The ASMedia port multipliers do not seem as reliable so far when compared with the JMicron devices. There are also some very attractively priced N100 based boards now with JMicron PCIe to SATA controllers onboard. It is best to keep to one family of controller/multiplier when cascading devices, to maintain the best interoperability and reliability.

     I am starting tests with various ASMedia PCIe to SATA controllers for upgrading older PCI based systems. I am also looking at the N100 w/JMicron route for testing, which seems a better option for larger arrays. Obviously there is less bandwidth to work with than the LSI 12Gb SAS/SATA dual link via LSI port multipliers. But for a new low-power system, the N100 option looks very attractive. And if not pushed to limits that cause bandwidth throttling (see OPTION 2 below), with SPINNING hard drives the new cheaper option looks like it should even be able to do parity checks at speeds comparable to the LSI build! (possibly limited more by the new CPU)

     N100 based MB - NAS bandwidth calculations:
     w/ JMB585 PCIe-SATA bridge controller - PCIe Gen 3 x2 to 5x SATA III 6Gb/s
     ADD - JMB575 port multipliers - 1 to 5 ports SATA 6Gb/s
       Cascaded mode: up to 15 drives from 1 SATA port
       Cascaded mode: up to 75 drives from 5 JMB585 SATA ports!
     NOTE: 6Gb/s SATA = 600 MB/s max potential unshared bandwidth per SATA port

     OPTION 1
     PCIe Gen 3 x2 (PCI-E 3.0, 2 lanes = 2 GB/s) across 5 drives/ports = 400 MB/s per port
     5ea JMB575 multipliers, 1 per port from the JMB585 ports = 25 ports total
     400 MB/s per port in FULL USAGE, averaged = 80 MB/s per drive over 25 drives

     OPTION 2
     3ea JMB575 multipliers (1st level), 1 each on 3 of the JMB585 ports
     15ea JMB575 multipliers (2nd level), 1 per port on the 1st level JMB575s
     75 ports total
     1st level non-limiting - 600 MB/s per port (200 MB/s of PCIe bandwidth unused)
     2nd level - 600 MB/s per port in FULL USAGE, averaged = 120 MB/s per drive over 75 drives

     FULL USAGE = full array activity - parity check/build, drive rebuild
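     A quick way to sanity check this kind of planning is to script the division. A minimal sketch (the script name per_drive.sh is hypothetical, and integer shell math rounds down):

       # per_drive.sh - average per-drive bandwidth when drives share one link
       # usage: sh per_drive.sh <link_MB_per_sec> <drive_count>
       link_mb=$1
       drives=$2
       echo "$((link_mb / drives)) MB/sec per drive across $drives drives"

     For OPTION 1 above: sh per_drive.sh 2000 5 gives the 400 MB/sec per JMB585 port, and sh per_drive.sh 400 5 gives the 80 MB/sec per drive.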
  5. Missing drives in the Unassigned Devices window under the Main tab after updating to Unraid version 6.12.8 and updating the Unassigned Devices plugin to the version dated 2024.03.19 (updated from 6.6.7 and Unassigned Devices dated 2019.03.31). It looks like the SAMBA shares are still working on the drives that were previously shared, but MOST of the assigned drives did not show on the MAIN tab of Unraid; only two initially showed up. Each time I press the REFRESH DISKS AND CONFIGURATION icon, it adds ONE drive to the list... I am now seeing 7 of the 10 drives that are not in the array. MOST of them were precleared and ready to add to the array when needed; 3 were being used as unprotected temporary data drives for misc. uses. Is there a concern with the drives not showing up as expected? Also, previously under FS it listed precleared drives as being precleared. Has that functionality been deliberately removed? It was pretty convenient for hot spares. Thanks for the plugin, it has served me well for many years.
  6. If the SSD(s) are getting hot, they may be slowing down. Increased ventilation will often improve sustained transfer performance on SSDs.
  7. Various system component speeds - I made this little breakdown of various system components to assist my decision making on my upgrade. Depending on actual use, it can help decide where to focus hardware upgrades... I hope someone else may also benefit from it.

     Newer SSDs (SATA and NVMe), while not listed here, are so fast overall for actual needs that the faster options only really pay off in parity checks and array rebuilds. Read/write speeds of 500/400 MB/sec per device are screaming fast overall for most Unraid uses. It may still be very beneficial to performance to use a higher end, faster drive for the parity and/or cache drives. But watch the caveat: some faster drives also have reduced MTBF ratings...

     Here is the speed breakdown for various components, using general best-case data on a well designed system (PCI-E to replace old PCI or PCI-X SATA controller cards, and speed comparisons of various system components):

     SATA
       SATA 1 - 1.5 Gb/sec = 150 MB/sec
       SATA 2 - 3.0 Gb/sec = 300 MB/sec
       SATA 3 - 6.0 Gb/sec = 600 MB/sec

     Hard drives
       5400 RPM (up to ?) - up to 180-210 MB/sec
       7200 RPM - up to 1030 Mb/sec disc to buffer - up to 255 MB/sec - 204 MB/sec sustained

     Network limits (at 94% efficiency)
       10Mb = 1.18 MB/sec
       100Mb = 11.8 MB/sec
       1Gb = 118 MB/sec
       2.5Gb = 295 MB/sec
       10Gb = 1180 MB/sec

     PCI 32-bit 33 MHz - 133.33 MB/sec
       8 drives = 16.66 MB/sec per drive!
       4 drives = 33.33 MB/sec per drive!
       2 drives = 66.66 MB/sec per drive!
     PCI 32-bit 66 MHz - 266 MB/sec
       8 drives = 33.25 MB/sec per drive!
       4 drives = 66.5 MB/sec per drive!
       2 drives = 133 MB/sec per drive!
     PCI-X 64-bit 133 MHz - 1072 MB/sec
       8 drives = 134 MB/sec per drive!
       4 drives = 268 MB/sec per drive!
       2 drives = 536 MB/sec per drive!

     PCI-E 3.0 x1 = 1 GB/sec: 16 drives = 62.5 | 8 drives = 125 | 4 drives = 250 MB/sec per drive
     PCI-E 4.0 x1 = 2 GB/sec: 16 drives = 125 | 8 drives = 250 | 4 drives = 500 MB/sec per drive
     PCI-E 3.0 x2 = 2 GB/sec: 16 drives = 125 | 8 drives = 250 | 4 drives = 500 MB/sec per drive
     PCI-E 4.0 x2 = 4 GB/sec: 16 drives = 250 | 8 drives = 500 | 4 drives = 1000 MB/sec per drive
     PCI-E 3.0 x4 = 4 GB/sec: 16 drives = 250 | 8 drives = 500 | 4 drives = 1000 MB/sec per drive
     PCI-E 4.0 x4 = 8 GB/sec: 16 drives = 500 | 8 drives = 1000 | 4 drives = 2000 MB/sec per drive
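     The network rows above are just line rate times efficiency, divided by 8 bits per byte. A one-line sketch to redo the math for other speeds (the 94% efficiency figure is the assumption from the list, and integer shell math rounds down slightly):

       # usable MB/sec for a 1000 Mb/sec (1Gb) link at 94% efficiency
       mbit=1000
       echo "$((mbit * 94 / 100 / 8)) MB/sec"   # prints 117 (~118 MB/sec)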
  8. It looks like I will just need to copy/move the files over to new/reformatted drives on the array for use under V6.x. Not fun, but it looks like it is needed. It also seems it may be time to consider a motherboard/SATA controller upgrade (but probably after I migrate to V6.x, I think). 8 or 16 port SATA controllers on PCI-e would definitely increase parity check/rebuild speed. I was trying to delay any big changes since the hardware has been working so well for so many years... but looking at options now, I am not really sure.
  9. The oldest one is now running a Core 2 Duo (64-bit) E7200, with 2 GB of RAM, expandable up to 4 GB. It boots from a 4 GB flash drive. The SATA drives are running on:
     1 ea - SuperMicro SAT2-MV8, running in PCI mode (8 drives)
     1 ea - Promise FASTTRAK S150 TX4 PCI card (4 drives)
     The rest of the drives are on the motherboard SATA ports (4 drives)
  10. I bought 2 more licenses to bring two more servers online a few years back. One is seeing heavy use, the other only gets odd tests as I think of them.
  11. Ok, this may seem a bit odd, but most of my Unraid servers are way back on 4.5.3. They have been rock stable forever, with the exceptions of power outages, a failed motherboard, and a few power supplies here and there over the years. I am finally done with my tests on 6.x... and am ready to commit to retiring and upgrading from the 4.x world... But I see no link that actually takes me to any upgrade guide now. Am I missing something?
  12. Most of mine are way back on 4.5.3. Finally considering updating them...
  13. Thanks, I just sent you two e-mails, one for each server, since they have different controllers. I included the debug files for each server.
  14. No, it gives an error, so I played around until I got the flags set properly for my controller, using the controller type it was prompting for in the error. The following commands do properly return the respective drive serial numbers:
     smartctl -i /dev/twa1 -d 3ware,1
     smartctl -i /dev/twa1 -d 3ware,0
     smartctl -i /dev/twa1 -d 3ware,2
     smartctl -i /dev/twa0 -d 3ware,0
     smartctl -i /dev/twa0 -d 3ware,1
  15. I also see what looks like the same results on the 2nd server.
  16. I just saw there have been a few updates to the tool. I downloaded the latest version, and this is what I now get. It no longer stalls, but I have this:

       DiskSpeed - Disk Diagnostics & Reporting tool
       Version: 2.1
       Scanning Hardware
       12:44:12 Spinning up hard drives
       12:44:12 Scanning system storage
       12:44:25 Scanning USB Bus
       12:44:32 Scanning hard drives

       Lucee 5.2.9.31 Error (application)
       Message: Error invoking external process
       Detail: /usr/bin/lspci: option requires an argument -- 's'
       Usage: lspci [<switches>]
       Basic display modes:
       -mm    Produce machine-readable output (single -m for an obsolete format)
       -t     Show bus tree
       Display options:
       -v     Be verbose (-vv for very verbose)
       -k     Show kernel drivers handling each device
       -x     Show hex-dump of the standard part of the config space
       -xxx   Show hex-dump of the whole config space (dangerous; root only)
       -xxxx  Show hex-dump of the 4096-byte extended config space (root only)
       -b     Bus-centric view (addresses and IRQ's as seen by the bus)
       -D     Always show domain numbers
       Resolving of device ID's to names:
       -n     Show numeric ID's
       -nn    Show both textual and numeric ID's (names & numbers)
       -q     Query the PCI ID database for unknown ID's via DNS
       -qq    As above, but re-query locally cached entries
       -Q     Query the PCI ID database for all ID's via DNS
       Selection of devices:
       -s [[[[<domain>]:]<bus>]:][<slot>][.[<func>]]   Show only devices in selected slots
       -d [<vendor>]:[<device>][:<class>]              Show only devices with specified ID's
       Other options:
       -i <file>   Use specified ID database instead of /usr/share/misc/pci.ids.gz
       -p <file>   Look up kernel modules in a given file instead of default modules.pcimap
       -M          Enable `bus mapping' mode (dangerous; root only)
       PCI access options:
       -A <method>     Use the specified PCI access method (see `-A help' for a list)
       -O <par>=<val>  Set PCI access parameter (see `-O help' for a list)
       -G              Enable PCI access debugging
       -H <mode>       Use direct hardware access (<mode> = 1 or 2)
       -F <file>       Read PCI configuration dump from a given file

       Stacktrace: The Error Occurred in /var/www/ScanControllers.cfm: line 456
       454: <CFSET tmpbus=Replace(Key,":","-","ALL")>
       455: <CFFILE action="write" file="#PersistDir#/lspci-vmm-s_#tmpbus#_exec.txt" output="/usr/bin/lspci -vmm -s #Key#" addnewline="NO" mode="666">
       456: <cfexecute name="/usr/bin/lspci" arguments="-vmm -s #Key#" timeout="300" variable="lspci" />
       457: <CFFILE action="delete" file="#PersistDir#/lspci-vmm-s_#tmpbus#_exec.txt">
       458: <CFFILE action="write" file="#PersistDir#/lspci-vmm_#tmpbus#.txt" output="#lspci#" addnewline="NO" mode="666">
       called from /var/www/ScanControllers.cfm: line 455
       453: <!--- Get the controller information --->
       454: <CFSET tmpbus=Replace(Key,":","-","ALL")>
       455: <CFFILE action="write" file="#PersistDir#/lspci-vmm-s_#tmpbus#_exec.txt" output="/usr/bin/lspci -vmm -s #Key#" addnewline="NO" mode="666">
       456: <cfexecute name="/usr/bin/lspci" arguments="-vmm -s #Key#" timeout="300" variable="lspci" />
       457: <CFFILE action="delete" file="#PersistDir#/lspci-vmm-s_#tmpbus#_exec.txt">

       Java Stacktrace:
       lucee.runtime.exp.ApplicationException: Error invoking external process
         at lucee.runtime.tag.Execute.doEndTag(Execute.java:258)
         at scancontrollers_cfm$cf.call_000046(/ScanControllers.cfm:456)
         at scancontrollers_cfm$cf.call(/ScanControllers.cfm:455)
         at lucee.runtime.PageContextImpl._doInclude(PageContextImpl.java:933)
         at lucee.runtime.PageContextImpl._doInclude(PageContextImpl.java:823)
         at lucee.runtime.listener.ClassicAppListener._onRequest(ClassicAppListener.java:66)
         at lucee.runtime.listener.MixedAppListener.onRequest(MixedAppListener.java:45)
         at lucee.runtime.PageContextImpl.execute(PageContextImpl.java:2464)
         at lucee.runtime.PageContextImpl._execute(PageContextImpl.java:2454)
         at lucee.runtime.PageContextImpl.executeCFML(PageContextImpl.java:2427)
         at lucee.runtime.engine.Request.exe(Request.java:44)
         at lucee.runtime.engine.CFMLEngineImpl._service(CFMLEngineImpl.java:1090)
         at lucee.runtime.engine.CFMLEngineImpl.serviceCFML(CFMLEngineImpl.java:1038)
         at lucee.loader.engine.CFMLEngineWrapper.serviceCFML(CFMLEngineWrapper.java:102)
         at lucee.loader.servlet.CFMLServlet.service(CFMLServlet.java:51)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:729)
         at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:292)
         at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207)
         at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
         at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240)
         at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207)
         at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:212)
         at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:94)
         at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:492)
         at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:141)
         at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:80)
         at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:620)
         at org.apache.catalina.valves.RemoteIpValve.invoke(RemoteIpValve.java:684)
         at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:88)
         at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:502)
         at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1152)
         at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:684)
         at org.apache.tomcat.util.net.AprEndpoint$SocketWithOptionsProcessor.run(AprEndpoint.java:2464)
         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
         at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
         at java.lang.Thread.run(Thread.java:748)
       Timestamp: 9/8/19 12:44:32 PM PDT
  17. Sorry, it has been a little while since I checked for a reply. Here are the drive details of the first 3 drives, with the complete serial numbers for ease, all viewed with unRAID. Under drive details for the drives I see:

       (Note: using SMART controller type 3Ware 2, /dev/twa1)
       Model family:  SAMSUNG SpinPoint F3
       Device model:  SAMSUNG HD502HJ
       Serial number: S27FJ9FZ404491

       (Note: using SMART controller type 3Ware 1, /dev/twa1)
       Model family:  SAMSUNG SpinPoint F3
       Device model:  SAMSUNG HD502HJ
       Serial number: S27FJ9FZ404504

       (Note: using SMART controller type 3Ware 1, /dev/twa0)
       Model family:  Seagate Barracuda 7200.7 and 7200.7 Plus
       Device model:  ST3160828AS
       Serial number: 5MT44SV6

     And under the MAIN tab in Unraid, under Devices, I see this:

       Device    Identification
       Parity    1AMCC_FZ404491000000000000 - 500 GB (sdf)
       Parity 2  1AMCC_FZ404504000000000000 - 500 GB (sdg)
       Disk 1    1AMCC_5MT44SV6000000000000 - 160 GB (sdc)
  18. Also, after adding all your "local" drives (parity protected array, cache pool, and Unassigned Devices via the plugin), you can still add even more resources via the Remote SMB/NFS share and ISO image features. (I am not sure, but I think that is a standard feature now; I do not remember adding it as a plugin...)
  19. Of course everyone has different needs, and even different applications they use for functions similar to what someone else is using. I use PLEX to share my media collection with my other users, and for when I am not at home and want to access my media. I have a few arrays feeding my PLEX server, as well as a few Windows machines with standard drive shares also showing as part of my PLEX content. With PLEX I can add a very large number of "paths" to each share as it appears to my PLEX users. They have NO idea how many different servers my media is spread across. This works well both for the Windows-hosted PLEX server and when running PLEX as a Docker on Unraid. It is a bit easier with the Windows installation of PLEX, but by adding Remote SMB/NFS shares it works very well under the Dockerized PLEX too, though it is much more involved to set up.
  20. I personally get a bit nervous going over 16 drives in an Unraid array with 2 parity drives. I feel OK at about 24 drives, as long as the individual drives are not over 4 TB. I think this comes down more to a comfort level than real risk and recovery capability for many of us, however. I also have a nice large chassis connected to one of my servers that has 45 drive bays; it started as a test server to see how the performance of SAS hardware compared with the normal SATA I have been using. With this server I could have a total of 69 spinning drives on-line at one time if I chose to... but I doubt I ever would... I prefer running multiple servers, to spread the load and the possibility of failure across the hardware. If I have a drive failure, only one of the servers sees the rebuild load, instead of my full on-line resources. So while I doubt I would ever use more than 30 drives in an array, the option to do so would be welcome.
  21. Not sure if you have had a chance to look at my reply posts yet or not. I expect there may not be much that can be done with the server running the 9650SE-8LP controller, at least not with auto-identifying the drives. I would think it may still be possible to actually test the drives if it got past the drive identification phase, which is where I think it is hanging/terminating. On the other server I really have no idea what is happening, unless the program does not like that controller configuration either, as it does seem able to properly report the drive types with the "lshw -c disk" command.
  22. Windows 7 64-bit (pre-SP1 ISO) VM install. It took a while; it was easy to get the 32-bit Windows 7 VM running with a pre-SP1 ISO, but I was not able to get the 64-bit installer to see a drive to install to. After reading many threads on many sites with many complaints about the same problem, this thread finally got me where I needed to go. Part of the problem was that I am running a much newer version of Unraid compared with the age of the many threads I was reading. There are similarities, and also differences, that were making it a little harder to figure out what to do next. I tried installing drivers, where all I could do was load drivers and hope for a drive to finally show up that I could install to, but this does not seem to work with the pre-SP1 ISO media. I then saw the posts from 2015 August about editing the XML to change the bus for the <disk><target>; I thought that sounded like it might do what I needed, but I kept reading in case there was something about solutions for a newer Unraid version, closer to the 6.5.3 I was working with. Then I saw the post from "assassinmunky", posted 2017 May 05, about getting it to work "by changing the vDisk bus to SATA (from VirtIO)"! I had tried so many combinations of things that I was no longer sure what I had or had not tried, so I did this, and IT WORKED! (The option was not quite in the same place, but I found it!) I am now happily installing a 64-bit Windows 7 VM! I had tried the 32-bit since that was all I could seem to install, made all the updates to it, then launched the software I need to run, only to be shown again why I needed the 64-bit version... the software I need will only run in a 64-bit environment. So now, after the new 64-bit VM is running, I can go through all the updates on it, install my programs, and it should be all set! :-) Thanks everyone; even though I did not ask for help on this, it was all the great help provided over the years that let me find the help I needed! My VM adventure has now officially begun! :-)
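     For anyone searching for the same fix later, the change boils down to one line in the VM's XML (Unraid's VM edit page in XML view). A rough sketch of the relevant fragment; the disk path and dev name here are hypothetical examples for illustration, not copied from a real config:

       <disk type='file' device='disk'>
         <driver name='qemu' type='raw'/>
         <source file='/mnt/user/domains/Windows7/vdisk1.img'/>
         <!-- was bus='virtio'; the pre-SP1 Windows 7 installer has no VirtIO driver -->
         <target dev='hdc' bus='sata'/>
       </disk>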
  23. On the second system it looks normal, for what I would expect to see. Here is the output with the serial numbers anonymized. Running Unraid 6.6.7:

       # lshw -c disk
       *-disk:0   description: ATA Disk   product: WDC WD40EZRZ-00G   vendor: Western Digital   physical id: 0.0.0   bus info: scsi@3:0.0.0   logical name: /dev/sdd   version: 0A80   serial: WD-WCC*********   size: 3726GiB (4TB)   capacity: 3726GiB (4TB)   capabilities: 15000rpm gpt-1.00 partitioned partitioned:gpt   configuration: ansiversion=6 guid=dad24cf9-32fd-4f76-82ce-4141b844a1be logicalsectorsize=512 sectorsize=4096
       *-disk:1   description: SCSI Disk   product: ST4000NM0023   vendor: SEAGATE   physical id: 0.1.0   bus info: scsi@3:0.1.0   logical name: /dev/sde   version: GE09   serial: Z1Z*****   size: 3726GiB (4TB)   capabilities: 7200rpm gpt-1.00 partitioned partitioned:gpt   configuration: ansiversion=6 guid=3bc3749a-8974-42e8-822a-2c8c91479016 logicalsectorsize=512 sectorsize=512
       *-disk:2   description: SCSI Disk   product: ST4000NM0023   vendor: SEAGATE   physical id: 0.2.0   bus info: scsi@3:0.2.0   logical name: /dev/sdf   version: GE09   serial: Z1Z*****   size: 3726GiB (4TB)   capabilities: 7200rpm gpt-1.00 partitioned partitioned:gpt   configuration: ansiversion=6 guid=2b239099-face-4e70-87af-922b9c828b65 logicalsectorsize=512 sectorsize=512
       *-disk:3   description: SCSI Disk   product: HUS724040ALS640   vendor: HGST   physical id: 0.3.0   bus info: scsi@3:0.3.0   logical name: /dev/sdg   version: A280   serial: PCJ*****   size: 3726GiB (4TB)   capacity: 4859GiB (5217GB)   capabilities: 7200rpm partitioned partitioned:dos   configuration: ansiversion=6 logicalsectorsize=512 sectorsize=512
       *-disk     description: SCSI Disk   product: Cruzer Glide   vendor: SanDisk   physical id: 0.0.0   bus info: scsi@0:0.0.0   logical name: /dev/sda   version: 1.00   serial: ********************   size: 29GiB (31GB)   capabilities: removable   configuration: ansiversion=6 logicalsectorsize=512 sectorsize=512
       *-medium   physical id: 0   logical name: /dev/sda   size: 29GiB (31GB)   capabilities: partitioned partitioned:dos
       *-cdrom    description: DVD reader   product: DV-28E-V   vendor: TEAC   physical id: 0.0.0   bus info: scsi@1:0.0.0   logical name: /dev/sr0   version: 1.AB   capabilities: removable audio dvd   configuration: ansiversion=5 status=nodisc
       *-disk     description: ATA Disk   product: WDC WD1600AAJS-0   vendor: Western Digital   physical id: 0.0.0   bus info: scsi@4:0.0.0   logical name: /dev/sdb   version: 3A01   serial: WD-WCA*********   size: 149GiB (160GB)   capabilities: partitioned partitioned:dos   configuration: ansiversion=5 logicalsectorsize=512 sectorsize=512
       *-disk     description: ATA Disk   product: WDC WD1600AAJS-0   vendor: Western Digital   physical id: 0.0.0   bus info: scsi@5:0.0.0   logical name: /dev/sdc   version: 3A01   serial: WD-WCA*********   size: 149GiB (160GB)   capabilities: partitioned partitioned:dos   configuration: ansiversion=5 logicalsectorsize=512 sectorsize=512
  24. On the first system, I think the problem may be due to the controller in use; it masks the drive information. Here is the output with the serial numbers anonymized. Running Unraid 6.5.3:

       # lshw -c disk
       *-disk:0   description: SCSI Disk   product: 9650SE-8LP DISK   vendor: AMCC   physical id: 0.0.0   bus info: scsi@1:0.0.0   logical name: /dev/sdb   version: 3.08   serial: ********000000000000   size: 149GiB (160GB)   capabilities: partitioned partitioned:dos   configuration: ansiversion=5 logicalsectorsize=512 sectorsize=512
       *-disk:1   description: SCSI Disk   product: 9650SE-8LP DISK   vendor: AMCC   physical id: 0.1.0   bus info: scsi@1:0.1.0   logical name: /dev/sdc   version: 3.08   serial: ********000000000000   size: 149GiB (160GB)   capabilities: partitioned partitioned:dos   configuration: ansiversion=5 logicalsectorsize=512 sectorsize=512
       *-disk:2   description: SCSI Disk   product: 9650SE-8LP DISK   vendor: AMCC   physical id: 0.2.0   bus info: scsi@1:0.2.0   logical name: /dev/sdd   version: 3.08   serial: ********000000000000   size: 149GiB (160GB)   capabilities: partitioned partitioned:dos   configuration: ansiversion=5 logicalsectorsize=512 sectorsize=512
       *-disk:0   description: SCSI Disk   product: 9650SE-8LP DISK   vendor: AMCC   physical id: 0.0.0   bus info: scsi@4:0.0.0   logical name: /dev/sde   version: 3.08   serial: ********000000000000   size: 1397GiB (1500GB)   capabilities: partitioned partitioned:dos   configuration: ansiversion=5 logicalsectorsize=512 sectorsize=512 signature=03c10c2e
       *-disk:1   description: SCSI Disk   product: 9650SE-8LP DISK   vendor: AMCC   physical id: 0.1.0   bus info: scsi@4:0.1.0   logical name: /dev/sdf   version: 3.08   serial: ********000000000000   size: 465GiB (500GB)   capabilities: partitioned partitioned:dos   configuration: ansiversion=5 logicalsectorsize=512 sectorsize=512
       *-disk:2   description: SCSI Disk   product: 9650SE-8LP DISK   vendor: AMCC   physical id: 0.2.0   bus info: scsi@4:0.2.0   logical name: /dev/sdg   version: 3.08   serial: ********000000000000   size: 465GiB (500GB)   capabilities: partitioned partitioned:dos   configuration: ansiversion=5 logicalsectorsize=512 sectorsize=512
       *-disk     description: SCSI Disk   product: Cruzer Fit   vendor: SanDisk   physical id: 0.0.0   bus info: scsi@0:0.0.0   logical name: /dev/sda   version: 1.27   serial: ********************   size: 14GiB (16GB)   capabilities: removable   configuration: ansiversion=6 logicalsectorsize=512 sectorsize=512
       *-medium   physical id: 0   logical name: /dev/sda   size: 14GiB (16GB)   capabilities: partitioned partitioned:dos
       *-cdrom    description: DVD reader   product: DVD-ROM SR-8178   vendor: MATSHITA   physical id: 0.1.0   bus info: scsi@2:0.1.0   logical name: /dev/sr0   version: PZ16   serial: [   capabilities: removable audio dvd   configuration: ansiversion=5 status=nodisc
  25. This looks like a great tool! I only have two systems running a somewhat recent version of Unraid, however. I have loaded it onto both of my newer installs, 6.5.3 and 6.6.7, and I see the same thing on both...

       DiskSpeed - Disk Diagnostics & Reporting tool
       Version: beta 6a
       Scanning Hardware
       07:30:21 Spinning up hard drives
       07:30:21 Scanning system storage
       07:30:34 Scanning USB Bus
       07:30:40 Scanning hard drives

     Then it just sits there. How long does it normally sit there, and how long does it normally take for the cool screens to appear? Does it happen over time; what should we see as the data is being collected and the tests are running? I may have missed something, but I did not see anything that helps in this set of posts, at least nothing that I noticed or that stuck out.