electron286

Members
  • Content Count

    201
  • Joined

  • Last visited

  • Days Won

    1

electron286 last won the day on April 23

electron286 had the most liked content!

Community Reputation

6 Neutral

About electron286

  • Rank
    Advanced Member
  • Birthday 09/27/1961

Converted

  • Gender
    Male
  • Location
    USA
  1. Thanks, I just sent you two e-mails, one for each server; they have different controllers. I included the debug files for each server.
  2. No, it gives an error, so I played around until I got the flags set properly for my controller, the way the error message was prompting. The following commands do properly return the respective drive serial numbers:

     smartctl -i /dev/twa1 -d 3ware,1
     smartctl -i /dev/twa1 -d 3ware,0
     smartctl -i /dev/twa1 -d 3ware,2
     smartctl -i /dev/twa0 -d 3ware,0
     smartctl -i /dev/twa0 -d 3ware,1
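For reference, the per-port queries above can be wrapped in a small loop. This is only a sketch, assuming the same /dev/twa0 and /dev/twa1 device nodes and up to 8 ports per controller (the 9650SE-8LP's limit); it parses only smartctl's "Serial Number" line:

```shell
#!/bin/sh
# Sketch: print the serial number reported on every port of both 3ware
# controllers. /dev/twa0, /dev/twa1, and the 8-port limit are assumptions
# taken from the commands above; adjust for other hardware.
for ctl in /dev/twa0 /dev/twa1; do
  port=0
  while [ "$port" -lt 8 ]; do
    serial=$(smartctl -i "$ctl" -d "3ware,$port" 2>/dev/null \
      | awk -F': *' '/^Serial [Nn]umber/ {print $2}')
    # Only report ports that actually answered with a serial
    [ -n "$serial" ] && echo "$ctl port $port: $serial"
    port=$((port + 1))
  done
done
```

Ports with no drive attached simply produce no output, so the loop is safe to run against a partially populated controller.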
  3. I also see what looks like the same results on the 2nd server.
  4. I just saw there have been a few updates to the tool. I downloaded the latest version, and this is what I now get; it no longer stalls, but I have this:

     DiskSpeed - Disk Diagnostics & Reporting tool
     Version: 2.1

     Scanning Hardware
     12:44:12 Spinning up hard drives
     12:44:12 Scanning system storage
     12:44:25 Scanning USB Bus
     12:44:32 Scanning hard drives

     Lucee 5.2.9.31 Error (application)
     Message: Error invoking external process
     Detail: /usr/bin/lspci: option requires an argument -- 's'

     Usage: lspci [<switches>]

     Basic display modes:
     -mm        Produce machine-readable output (single -m for an obsolete format)
     -t         Show bus tree

     Display options:
     -v         Be verbose (-vv for very verbose)
     -k         Show kernel drivers handling each device
     -x         Show hex-dump of the standard part of the config space
     -xxx       Show hex-dump of the whole config space (dangerous; root only)
     -xxxx      Show hex-dump of the 4096-byte extended config space (root only)
     -b         Bus-centric view (addresses and IRQ's as seen by the bus)
     -D         Always show domain numbers

     Resolving of device ID's to names:
     -n         Show numeric ID's
     -nn        Show both textual and numeric ID's (names & numbers)
     -q         Query the PCI ID database for unknown ID's via DNS
     -qq        As above, but re-query locally cached entries
     -Q         Query the PCI ID database for all ID's via DNS

     Selection of devices:
     -s [[[[<domain>]:]<bus>]:][<slot>][.[<func>]]   Show only devices in selected slots
     -d [<vendor>]:[<device>][:<class>]              Show only devices with specified ID's

     Other options:
     -i <file>  Use specified ID database instead of /usr/share/misc/pci.ids.gz
     -p <file>  Look up kernel modules in a given file instead of default modules.pcimap
     -M         Enable `bus mapping' mode (dangerous; root only)

     PCI access options:
     -A <method>    Use the specified PCI access method (see `-A help' for a list)
     -O <par>=<val> Set PCI access parameter (see `-O help' for a list)
     -G             Enable PCI access debugging
     -H <mode>      Use direct hardware access (<mode> = 1 or 2)
     -F <file>      Read PCI configuration dump from a given file

     Stacktrace: The Error Occurred in /var/www/ScanControllers.cfm: line 456
     454: <CFSET tmpbus=Replace(Key,":","-","ALL")>
     455: <CFFILE action="write" file="#PersistDir#/lspci-vmm-s_#tmpbus#_exec.txt" output="/usr/bin/lspci -vmm -s #Key#" addnewline="NO" mode="666">
     456: <cfexecute name="/usr/bin/lspci" arguments="-vmm -s #Key#" timeout="300" variable="lspci" />
     457: <CFFILE action="delete" file="#PersistDir#/lspci-vmm-s_#tmpbus#_exec.txt">
     458: <CFFILE action="write" file="#PersistDir#/lspci-vmm_#tmpbus#.txt" output="#lspci#" addnewline="NO" mode="666">

     called from /var/www/ScanControllers.cfm: line 455
     453: <!--- Get the controller information --->
     454: <CFSET tmpbus=Replace(Key,":","-","ALL")>
     455: <CFFILE action="write" file="#PersistDir#/lspci-vmm-s_#tmpbus#_exec.txt" output="/usr/bin/lspci -vmm -s #Key#" addnewline="NO" mode="666">
     456: <cfexecute name="/usr/bin/lspci" arguments="-vmm -s #Key#" timeout="300" variable="lspci" />
     457: <CFFILE action="delete" file="#PersistDir#/lspci-vmm-s_#tmpbus#_exec.txt">

     Java Stacktrace:
     lucee.runtime.exp.ApplicationException: Error invoking external process
       at lucee.runtime.tag.Execute.doEndTag(Execute.java:258)
       at scancontrollers_cfm$cf.call_000046(/ScanControllers.cfm:456)
       at scancontrollers_cfm$cf.call(/ScanControllers.cfm:455)
       at lucee.runtime.PageContextImpl._doInclude(PageContextImpl.java:933)
       at lucee.runtime.PageContextImpl._doInclude(PageContextImpl.java:823)
       at lucee.runtime.listener.ClassicAppListener._onRequest(ClassicAppListener.java:66)
       at lucee.runtime.listener.MixedAppListener.onRequest(MixedAppListener.java:45)
       at lucee.runtime.PageContextImpl.execute(PageContextImpl.java:2464)
       at lucee.runtime.PageContextImpl._execute(PageContextImpl.java:2454)
       at lucee.runtime.PageContextImpl.executeCFML(PageContextImpl.java:2427)
       at lucee.runtime.engine.Request.exe(Request.java:44)
       at lucee.runtime.engine.CFMLEngineImpl._service(CFMLEngineImpl.java:1090)
       at lucee.runtime.engine.CFMLEngineImpl.serviceCFML(CFMLEngineImpl.java:1038)
       at lucee.loader.engine.CFMLEngineWrapper.serviceCFML(CFMLEngineWrapper.java:102)
       at lucee.loader.servlet.CFMLServlet.service(CFMLServlet.java:51)
       at javax.servlet.http.HttpServlet.service(HttpServlet.java:729)
       at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:292)
       at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207)
       at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
       at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240)
       at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207)
       at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:212)
       at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:94)
       at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:492)
       at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:141)
       at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:80)
       at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:620)
       at org.apache.catalina.valves.RemoteIpValve.invoke(RemoteIpValve.java:684)
       at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:88)
       at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:502)
       at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1152)
       at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:684)
       at org.apache.tomcat.util.net.AprEndpoint$SocketWithOptionsProcessor.run(AprEndpoint.java:2464)
       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
       at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
       at java.lang.Thread.run(Thread.java:748)

     Timestamp: 9/8/19 12:44:32 PM PDT
  5. Sorry, it has been a little while since I checked for a reply. Here are the drive details of the first 3 drives, with the complete serial numbers for ease, all viewed with unRAID.

     Under drive details for the drives I see:

     (Note: using as the SMART controller type: 3Ware 2 /dev/twa1)
     Model family: SAMSUNG SpinPoint F3
     Device model: SAMSUNG HD502HJ
     Serial number: S27FJ9FZ404491

     (Note: using as the SMART controller type: 3Ware 1 /dev/twa1)
     Model family: SAMSUNG SpinPoint F3
     Device model: SAMSUNG HD502HJ
     Serial number: S27FJ9FZ404504

     (Note: using as the SMART controller type: 3Ware 1 /dev/twa0)
     Model family: Seagate Barracuda 7200.7 and 7200.7 Plus
     Device model: ST3160828AS
     Serial number: 5MT44SV6

     And under the MAIN tab in Unraid, under Devices, I see this:

     Device     Identification
     Parity     1AMCC_FZ404491000000000000 - 500 GB (sdf)
     Parity 2   1AMCC_FZ404504000000000000 - 500 GB (sdg)
     Disk 1     1AMCC_5MT44SV6000000000000 - 160 GB (sdc)
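Comparing the two lists, the 1AMCC device IDs appear to simply embed the tail of each drive's real serial number, padded out with zeros. A hypothetical helper sketching that match (the padding rule is an assumption inferred from just these three pairs, and would over-strip a serial that genuinely ends in 0):

```shell
#!/bin/sh
# Match an AMCC/Unraid device ID like 1AMCC_FZ404491000000000000 against
# a full SMART serial like S27FJ9FZ404491. Assumption (from the three
# pairs above): the ID is the serial's tail, zero-padded on the right.
match_amcc_id() {
  # Strip the 1AMCC_ prefix and the trailing zero padding.
  # Caveat: a serial whose real tail ends in '0' would be over-stripped.
  id_core=$(printf '%s' "$1" | sed -e 's/^1AMCC_//' -e 's/0*$//')
  case "$2" in
    *"$id_core") echo match ;;
    *)           echo no-match ;;
  esac
}
match_amcc_id 1AMCC_FZ404491000000000000 S27FJ9FZ404491
```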
  6. Also, after adding all your "local" drives (parity-protected array, cache pool, Unassigned Devices via plugin), you can still add even more resources via the Add Remote SMB/NFS Share and ISO image features. (I am not sure, but I think that is a standard feature now; I do not remember adding it as a plugin...)
  7. Of course, everyone has different needs, and even different applications they use for functions that may be similar to what someone else is doing. I use PLEX to share my media collection with my other users, and for when I am not at home and want to access my media. I have a few arrays feeding my PLEX server, as well as a few Windows machines with standard drive shares also showing as part of my PLEX content. With PLEX I can add a very large number of "paths" to each share as it appears to my PLEX users; they have NO idea how many different servers my media is spread across. This works well both for the Windows-hosted PLEX server and for running PLEX as a Docker on Unraid. It is a bit easier with the Windows installation of PLEX, but by adding Remote SMB/NFS shares it works very well under the Dockerized PLEX too, though the setup is much more involved.
  8. I personally get a bit nervous going over 16 drives in an Unraid array. With 2 parity drives I feel OK at about 24 drives, as long as the individual drives are not over 4 TB. I think this comes down more to a comfort level than to real risk and recovery capability for many of us, however. I also have a nice large chassis, with 45 drive bays, connected to one of my servers; it started as a test server to see how the performance of SAS hardware compared with the normal SATA I have been using. With this server I could have a total of 69 spinning drives on-line at one time if I chose to... but I doubt I ever would. I prefer running multiple servers, to spread the load and the possibility of failure across the hardware. If I have a drive failure, only one of the servers sees the rebuild load, instead of my full on-line resources. So while I doubt I would ever use more than 30 drives in an array, the option to do so would be welcome.
  9. Not sure if you have had a chance to look at my reply posts yet or not. I expect there may not be much that can be done with the server I am running the 9650SE-8LP controller on, at least not for auto-identifying the drives. I would think it may still be possible to actually test the drives if it got past the drive-identification phase, which I think is where it is hanging/terminating. On the other server I really have no idea what is happening, unless there is an issue with the program not liking that controller configuration as well, since it does seem able to properly report the drive types with the "lshw -c disk" command.
  10. Windows 7 64-bit (pre-SP1 ISO) VM install. It took a while. It was easy to get the 32-bit Windows 7 VM running with a pre-SP1 ISO, but I was not able to get the 64-bit install to see a drive to install to. After reading many threads on many sites, with many complaints of the same problem, this thread finally got me where I needed to go.

      Part of the problem was that I am running a much newer version of Unraid now compared with the age of the many threads I was reading. There are similarities, and also differences, which made it a little harder to figure out what to do next. I tried installing drivers, where all I could do was load drivers and hope for a drive to finally show up that I could install to, but this does not seem to work with the pre-SP1 ISO media. I then saw the posts from August 2015 about editing the XML to change the bus for the <disk><target>; I thought that sounded like it might do what I needed, but I kept reading in case there was something about solutions for a newer Unraid version, closer to the 6.5.3 I was working with. Then I saw the post from "assassinmunky", posted 2017 May 05, about how he got it to work "by changing the vDisk bus to SATA (from VirtIO)"! I had tried so many combinations of things that I was no longer sure what I had or had not tried, so I did this, and IT WORKED! (The option was not quite in the same place, but I found it!) I am now happily installing a 64-bit Windows 7 VM!

      I had tried the 32-bit version since that was all I could seem to install, made all the updates to it, then launched the software I need to run, only to be shown again why I needed the 64-bit version... The software I need will only run in a 64-bit environment. So now, after the new 64-bit VM is running, I can go through all the updates on it, then install my programs, and it should be all set! :-) Thanks everyone! Even though I did not ask for help on this, it was from all the great help provided over the years that I was able to find the help I needed. My VM adventure has now officially begun! :-)
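For anyone hitting the same wall: the fix described above boils down to the bus attribute on the vDisk's <target> element in the VM's libvirt XML (newer Unraid versions expose the same thing as the "vDisk Bus" dropdown in the VM editor). This fragment is only illustrative; the file path and dev name are made up, not taken from my actual config:

```xml
<!-- Illustrative libvirt disk stanza; path and dev name are hypothetical.
     The key change is bus='sata' on <target>, replacing the default
     bus='virtio', so the pre-SP1 Windows 7 installer can see the disk. -->
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <source file='/mnt/user/domains/Windows7/vdisk1.img'/>
  <target dev='hdc' bus='sata'/>
</disk>
```

After Windows is installed and the VirtIO drivers are loaded inside the guest, the bus can be switched back to virtio for better performance.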
  11. On the second system, it looks normal, like what I would expect to see. Here is the output with the serial numbers anonymized. Running Unraid 6.6.7:

      # lshw -c disk
      *-disk:0
         description: ATA Disk
         product: WDC WD40EZRZ-00G
         vendor: Western Digital
         physical id: 0.0.0
         bus info: scsi@3:0.0.0
         logical name: /dev/sdd
         version: 0A80
         serial: WD-WCC*********
         size: 3726GiB (4TB)
         capacity: 3726GiB (4TB)
         capabilities: 15000rpm gpt-1.00 partitioned partitioned:gpt
         configuration: ansiversion=6 guid=dad24cf9-32fd-4f76-82ce-4141b844a1be logicalsectorsize=512 sectorsize=4096
      *-disk:1
         description: SCSI Disk
         product: ST4000NM0023
         vendor: SEAGATE
         physical id: 0.1.0
         bus info: scsi@3:0.1.0
         logical name: /dev/sde
         version: GE09
         serial: Z1Z*****
         size: 3726GiB (4TB)
         capabilities: 7200rpm gpt-1.00 partitioned partitioned:gpt
         configuration: ansiversion=6 guid=3bc3749a-8974-42e8-822a-2c8c91479016 logicalsectorsize=512 sectorsize=512
      *-disk:2
         description: SCSI Disk
         product: ST4000NM0023
         vendor: SEAGATE
         physical id: 0.2.0
         bus info: scsi@3:0.2.0
         logical name: /dev/sdf
         version: GE09
         serial: Z1Z*****
         size: 3726GiB (4TB)
         capabilities: 7200rpm gpt-1.00 partitioned partitioned:gpt
         configuration: ansiversion=6 guid=2b239099-face-4e70-87af-922b9c828b65 logicalsectorsize=512 sectorsize=512
      *-disk:3
         description: SCSI Disk
         product: HUS724040ALS640
         vendor: HGST
         physical id: 0.3.0
         bus info: scsi@3:0.3.0
         logical name: /dev/sdg
         version: A280
         serial: PCJ*****
         size: 3726GiB (4TB)
         capacity: 4859GiB (5217GB)
         capabilities: 7200rpm partitioned partitioned:dos
         configuration: ansiversion=6 logicalsectorsize=512 sectorsize=512
      *-disk
         description: SCSI Disk
         product: Cruzer Glide
         vendor: SanDisk
         physical id: 0.0.0
         bus info: scsi@0:0.0.0
         logical name: /dev/sda
         version: 1.00
         serial: ********************
         size: 29GiB (31GB)
         capabilities: removable
         configuration: ansiversion=6 logicalsectorsize=512 sectorsize=512
       *-medium
          physical id: 0
          logical name: /dev/sda
          size: 29GiB (31GB)
          capabilities: partitioned partitioned:dos
      *-cdrom
         description: DVD reader
         product: DV-28E-V
         vendor: TEAC
         physical id: 0.0.0
         bus info: scsi@1:0.0.0
         logical name: /dev/sr0
         version: 1.AB
         capabilities: removable audio dvd
         configuration: ansiversion=5 status=nodisc
      *-disk
         description: ATA Disk
         product: WDC WD1600AAJS-0
         vendor: Western Digital
         physical id: 0.0.0
         bus info: scsi@4:0.0.0
         logical name: /dev/sdb
         version: 3A01
         serial: WD-WCA*********
         size: 149GiB (160GB)
         capabilities: partitioned partitioned:dos
         configuration: ansiversion=5 logicalsectorsize=512 sectorsize=512
      *-disk
         description: ATA Disk
         product: WDC WD1600AAJS-0
         vendor: Western Digital
         physical id: 0.0.0
         bus info: scsi@5:0.0.0
         logical name: /dev/sdc
         version: 3A01
         serial: WD-WCA*********
         size: 149GiB (160GB)
         capabilities: partitioned partitioned:dos
         configuration: ansiversion=5 logicalsectorsize=512 sectorsize=512
  12. On the first system, I think the problem may be due to the controller that is in use; it masks the drive information. Here is the output with the serial numbers anonymized. Running Unraid 6.5.3:

      # lshw -c disk
      *-disk:0
         description: SCSI Disk
         product: 9650SE-8LP DISK
         vendor: AMCC
         physical id: 0.0.0
         bus info: scsi@1:0.0.0
         logical name: /dev/sdb
         version: 3.08
         serial: ********000000000000
         size: 149GiB (160GB)
         capabilities: partitioned partitioned:dos
         configuration: ansiversion=5 logicalsectorsize=512 sectorsize=512
      *-disk:1
         description: SCSI Disk
         product: 9650SE-8LP DISK
         vendor: AMCC
         physical id: 0.1.0
         bus info: scsi@1:0.1.0
         logical name: /dev/sdc
         version: 3.08
         serial: ********000000000000
         size: 149GiB (160GB)
         capabilities: partitioned partitioned:dos
         configuration: ansiversion=5 logicalsectorsize=512 sectorsize=512
      *-disk:2
         description: SCSI Disk
         product: 9650SE-8LP DISK
         vendor: AMCC
         physical id: 0.2.0
         bus info: scsi@1:0.2.0
         logical name: /dev/sdd
         version: 3.08
         serial: ********000000000000
         size: 149GiB (160GB)
         capabilities: partitioned partitioned:dos
         configuration: ansiversion=5 logicalsectorsize=512 sectorsize=512
      *-disk:0
         description: SCSI Disk
         product: 9650SE-8LP DISK
         vendor: AMCC
         physical id: 0.0.0
         bus info: scsi@4:0.0.0
         logical name: /dev/sde
         version: 3.08
         serial: ********000000000000
         size: 1397GiB (1500GB)
         capabilities: partitioned partitioned:dos
         configuration: ansiversion=5 logicalsectorsize=512 sectorsize=512 signature=03c10c2e
      *-disk:1
         description: SCSI Disk
         product: 9650SE-8LP DISK
         vendor: AMCC
         physical id: 0.1.0
         bus info: scsi@4:0.1.0
         logical name: /dev/sdf
         version: 3.08
         serial: ********000000000000
         size: 465GiB (500GB)
         capabilities: partitioned partitioned:dos
         configuration: ansiversion=5 logicalsectorsize=512 sectorsize=512
      *-disk:2
         description: SCSI Disk
         product: 9650SE-8LP DISK
         vendor: AMCC
         physical id: 0.2.0
         bus info: scsi@4:0.2.0
         logical name: /dev/sdg
         version: 3.08
         serial: ********000000000000
         size: 465GiB (500GB)
         capabilities: partitioned partitioned:dos
         configuration: ansiversion=5 logicalsectorsize=512 sectorsize=512
      *-disk
         description: SCSI Disk
         product: Cruzer Fit
         vendor: SanDisk
         physical id: 0.0.0
         bus info: scsi@0:0.0.0
         logical name: /dev/sda
         version: 1.27
         serial: ********************
         size: 14GiB (16GB)
         capabilities: removable
         configuration: ansiversion=6 logicalsectorsize=512 sectorsize=512
       *-medium
          physical id: 0
          logical name: /dev/sda
          size: 14GiB (16GB)
          capabilities: partitioned partitioned:dos
      *-cdrom
         description: DVD reader
         product: DVD-ROM SR-8178
         vendor: MATSHITA
         physical id: 0.1.0
         bus info: scsi@2:0.1.0
         logical name: /dev/sr0
         version: PZ16
         serial: [
         capabilities: removable audio dvd
         configuration: ansiversion=5 status=nodisc
  13. This looks like a great tool! I only have two systems that are running a somewhat recent version of Unraid, however. I have loaded this onto both of my newer installs, 6.5.3 and 6.6.7, and I see the same thing on both:

      DiskSpeed - Disk Diagnostics & Reporting tool
      Version: beta 6a

      Scanning Hardware
      07:30:21 Spinning up hard drives
      07:30:21 Scanning system storage
      07:30:34 Scanning USB Bus
      07:30:40 Scanning hard drives

      Then it just sits there. How long does it normally sit there, and how long does it normally take for the cool screens to appear? Does that happen over time; what should we see as the data is being collected and the tests are running? I may have missed something, but I did not see anything that helps in this set of posts, at least nothing that stuck out.
  14. Personally, over the years I have had more FAILED NEW DRIVES than used ones when pre-clearing them. I would never just trust a new drive, or an old drive, when adding it to an array. To me, the whole purpose of using Unraid is more data safety and peace of mind, not an increase in headaches and risk while gambling with my data.
  15. Just my 2 cents... I always pre-clear my drives, usually on a separate computer, to complete my initial stress testing. No, you do not need a dedicated computer for it; it only needs to be available for the dedicated purpose of running pre-clear while it is pre-clearing disks! I have a few computers with a flash drive sitting next to them. If I need one to pre-clear a drive, I just stick the USB flash drive in the computer, hook up the drive(s) I need to pre-clear, boot, and start pre-clearing. When I am done, I shut off the computer, remove the pre-cleared drives and the USB flash drive, and I am ready to use the computer again under its normal OS.

      One word of caution: either unplug the other hard drives in the computer before pre-clearing, or MAKE SURE you are TRIPLE-checking that you are selecting the correct drives to pre-clear!

      The alternative is to run the pre-clear on the Unraid machine that is getting the new drive. I find this method OK sometimes, but more often limiting, as I also need to stop the array first and shut down the computer. As long as the drive passes, no problem. If the drive fails, however, which can and does happen with new drives and with old drives being re-purposed, I have lost additional server time to additional power-downs. This is why I usually use a separate computer for pre-clears.

      Also, the computers running pre-clear do not need to be as powerful as the ones you would normally want to run Unraid on now. All the computers I run pre-clears on separately are just old P4 machines. They work very well for pre-clearing drives!
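As a belt-and-braces step for that "triple check", something like this sketch (field names per util-linux lsblk) lists every whole disk with its model and serial, so the pre-clear target can be confirmed by serial number rather than by a /dev/sdX letter that can change between boots:

```shell
#!/bin/sh
# List whole disks (not partitions or optical drives) with model and
# serial before pre-clearing. Pick the target by serial number, not
# just by its /dev/sdX letter.
lsblk -d -n -o NAME,MODEL,SERIAL,SIZE,TYPE | awk '$NF == "disk"'
```

`-d` suppresses partitions and `-n` the header line; the awk filter then drops non-disk devices such as /dev/sr0.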