electron286

Everything posted by electron286

  1. So far, with my testing, I would NOT recommend using more than 3 of the SATA ports on the JMB585 chip. That leaves about 200MB/s of unused bandwidth on the PCIe 3.0 x2 connection, and I like NO bottlenecks when possible. Using 4 or 5 ports has worked well in my tests, but there is a definite slowdown due to only having two lanes from the PCIe bus. Similarly with the JMB575 multipliers, I would not recommend using more than 3 of the 5 ports, so that the 600MB/s uplink is shared by only 3 SATA ports, providing an average of 200MB/s each. That would make this 24 port board usable, in my mind, for only 9 drives at what I would describe as nice quick speeds for parity build/check and drive rebuild (about 7 hours in tests for 4TB drive sizes, actually averaging about 150 MB/s per device, with mixed HDD and SSD devices).
     But used (even some new) LSI/Broadcom 9207-8i (or 8e for that matter) cards can be found rather cheaply. And if you have a motherboard with a couple of x16-size PCIe 3.0 slots open, each with 8 lanes or more available, two 9207-8i boards could give 16 SATA drives a MASSIVE 1000MB/s each of available bandwidth that would never be used, since each drive is limited to 600MB/s by its interface! Of course a SAS expander could be used with one controller instead, but the total price is about the same.
     The testing is fun, and I have learned from it so far, but ultimately realistic limitations apply at every stage: the bandwidth of the PCIe generation, the number of available lanes, any limitations of shared SATA or SAS links through the various controllers, expanders, and multipliers, and the total number of drives desired. And again there are big differences between SSDs and HDDs, and even between various models and brands of each. Surprisingly, many of the SATA SSDs I have tested so far have actually been slower than spinning hard drives in many of my tests. Even at 73MB/s per disk it would be a big speed increase on my oldest server, which is currently running old PCI mode controllers, but that is not where I want to target a hardware update.
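     As a quick sanity check on the port-count advice above, here is a minimal Python sketch of the sharing math, assuming roughly 2000 MB/s usable for a PCIe 3.0 x2 uplink and 600 MB/s per SATA III port (real-world figures will be a bit lower after protocol overhead):

     # Per-port bandwidth through a JMB585-style PCIe 3.0 x2 to 5-port SATA bridge
     # when several ports are busy at once. Figures are round approximations.
     PCIE3_X2_MBPS = 2000   # assumed usable uplink bandwidth
     SATA3_MBPS = 600       # per-port SATA III ceiling

     def per_port_bandwidth(active_ports, uplink=PCIE3_X2_MBPS):
         shared = uplink / active_ports
         return min(shared, SATA3_MBPS)

     for ports in range(1, 6):
         print(f"{ports} active ports -> ~{per_port_bandwidth(ports):.0f} MB/s per port")
     # 1-3 active ports stay at the 600 MB/s SATA ceiling; 4 drop to ~500, 5 to ~400.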
  2. Very cool! How well does the 9207-8i work with only 4 lanes via the M.2 slot?
  3. I have recently been testing the JMB585 controller on cards, paired with multiplier boards using the JMB575. So far all of my tests have shown very positive results, and fall into the speeds I calculated and expected. I have also been testing the ASMedia controller, with no issues yet. I just noticed this card about 30 minutes ago on Amazon, and realized it has everything combined into one card, instead of playing with one controller card and 5 separate multiplier boards. Plus, one of my concerns was heat on the controller and multiplier chips, and this board has a nice heatsink to take care of that concern too! Do I recommend this chipset solution? I am going to upgrade 1 or 2 of my older Unraid systems going this route, as it will drastically speed up parity checks and drive rebuilds if needed. But, as not many people are running the combination, I would say proceed with caution.
  4. SAS is available in faster speeds too. SAS allows higher transfer speeds (SAS-1, SAS-2, SAS-3, and SAS-4 support data bandwidths of 3, 6, 12, and 24 Gbits/sec, respectively), while SATA III is limited to 6 Gbits/sec, and it is unlikely we will see a faster SATA standard. When picking SAS HBAs and expanders, it is possible to find ones that support dual linking! Dual linking uses two SAS links for increased total bandwidth, PLUS it can provide interconnect redundancy. So you can choose hardware that will provide 24 Gbits/sec using two 12 Gbit links, and if one link is lost, all downstream drives will still be available at a reduced total bandwidth of 12 Gbits/sec. The same can be done with the older SAS-2 cards at half those data transfer speeds. By carefully choosing SAS/SATA capable boards, cables, and chassis, a very nice SATA or mixed SAS/SATA drive array can be set up with high data throughput.
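     A tiny Python sketch of the dual-link arithmetic, using the per-link Gbit/s figures above (straight multiplication, ignoring encoding and protocol overhead):

     def dual_link(gbit_per_link, links=2):
         # Aggregate bandwidth when healthy, and what remains if one link drops.
         return gbit_per_link * links, gbit_per_link * (links - 1)

     for gen, per_link in (("SAS-2", 6), ("SAS-3", 12), ("SAS-4", 24)):
         healthy, degraded = dual_link(per_link)
         print(f"{gen}: {healthy} Gbit/s dual-linked, {degraded} Gbit/s if one link is lost")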
  5. SSDs and newer NVMEs are seriously cool and FAST compared with older spinning HDDs. I am looking at options, and running tests, as I plan to upgrade my oldest Unraid server and bring it out of the dark ages (v4.5.3). It has been a good reliable server, and still is. For a quick reference point, here are various system speeds:
     SATA
     SATA 1 - 1.5 Gb/sec = 150 MB/sec
     SATA 2 - 3.0 Gb/sec = 300 MB/sec
     SATA 3 - 6.0 Gb/sec = 600 MB/sec
     Hard Drives (typical maximum speeds as of Jan 2024)
     5400 RPM - up to ? - up to 180-210 MB/sec
     7200 RPM - up to 1030 Mb/sec disc to buffer - up to 255 MB/sec - 204 MB/sec sustained
     NETWORK limits (94% efficiency)
     10Mb = 1.18 MB/sec
     100Mb = 11.8 MB/sec
     1Gb = 118 MB/sec
     2.5Gb = 295 MB/sec
     10Gb = 1180 MB/sec
     As can be seen quickly from the numbers above, HDDs can still be used with great throughput results on 1Gb and slower networks, and are even fully acceptable on 2.5Gb networks. They still cost less to purchase, but do consume more energy when running than SSDs/NVMEs. We will assume a case with decent airflow should be used for any drive array, to increase the usable life of the drives and reduce the likelihood of data corruption and loss. For HDDs, CMR (Conventional Magnetic Recording) is preferable to SMR (Shingled Magnetic Recording).
     The following are based on my experiences to date: HDDs typically give advance warning of failure, and from what I have seen, all or most of the data can usually be recovered from them if needed when they go bad. SSDs seem to usually fail abruptly, with no option to read data from them once they fail; I expect the same from NVMEs. HDDs have a higher likelihood of soft read errors (which can be recovered) than SSDs; read errors are not common on SSDs. HDDs are typically more consistent in transfer speeds than SSDs. So my original thought was that HDDs may still be the best overall for Unraid use, where the main purpose is to store and then stream movies/TV shows and music. SSDs would be faster for parity builds, checks, and drive rebuilds, but would see little performance increase for network usage.
     While testing different devices and array configurations, I was surprised when I came upon two 2.5" SSDs from different vendors that, when hit with a large queue of read requests during an array parity build/parity check, dropped to read speeds of 100MB/s and less! This happened in ALL all-SSD/NVME drive pool tests, which started out in the 500MB/s range for a little while. Removing BOTH of these drives allowed full array operation at 450MB/s+ for the full duration of the parity build/verification. Introducing either of the drives into the array dropped sustained speeds back down to 100MB/s and under.
     Now, placing two 4TB WD Black spinning drives in as parity drives of course limits the array speed for parity reads and writes to about 250MB/s. Interestingly, this added delay on the array prevents the two slow SSDs from the previous tests from becoming saturated with queued reads, allowing them to complete the full array parity build/verification cycles at 250MB/s! Adding another 4TB WD Black spinner to the data array caused no change in array speed. Adding a 4TB WD Red Plus to the data array slowed parity operations to about 200MB/s.
     These test speeds are consistent when the SATA SSDs and HDDs are used on motherboard or expansion card SATA III ports and are not limited by bandwidth, such as too many SATA ports going through a PCIe lane. More tests, data, and numbers to come.
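     For anyone who wants to reproduce the network numbers, here is a minimal Python helper using the same ~94% efficiency rule of thumb as the list above (line rate in Mbit/s, output in MB/s; results land within about a percent of the figures listed):

     EFFICIENCY = 0.94  # assumed usable fraction of the raw line rate

     def usable_mb_per_sec(line_rate_mbit, efficiency=EFFICIENCY):
         return line_rate_mbit * efficiency / 8

     for rate in (10, 100, 1000, 2500, 10000):
         print(f"{rate:>5} Mbit/s -> ~{usable_mb_per_sec(rate):.1f} MB/s usable")
     # A modern HDD at ~200 MB/s sustained can saturate 1Gb and nearly keep up with 2.5Gb.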
  6. Nice update. Happy to see you found the bottleneck.
     As newer standards and higher speeds come out for all the hardware, from the buses on the motherboards to faster Ethernet standards, faster SSDs, etc., it is sadly common to eventually hit unexpected bottlenecks. With Ethernet, some level of incompatibility sometimes pops up between brands of chipsets in the controllers, and even between switches, the frame sizes used, and the cache memory at all the connection points. Sometimes a large improvement can be seen by either increasing or reducing the frame size. Jumbo frames have some advantages in potentially reducing overhead, but they sometimes actually slow down data transfers due to the specific cache designs of various chipsets. Also, if there are data errors, a smaller packet is resent much quicker than a much larger jumbo frame, which can quickly result in much slower overall transfer speeds with jumbo frames if everything is not running 100% correctly.
     Notice the retries in your transfers. Something is definitely not happy. Even with your direct connection between computers you are seeing some retries for some reason. Cable types and terminations are of course the first place to check. Sadly, even factory built cables can at times be defective and not meet the standards.
     Your results remind me of when I was first switching over to gigabit on my network. Overall it seemed pretty great versus 100Mbit, but the numbers were not what I expected. I was seeing excessive retries going through my switches, and even using switches from multiple vendors yielded similar results. I found two cables that were more problematic than the rest, so I swapped them out. I also banned jumbo frames from my network, which also helped quite a bit. About 6 months later things started getting worse, with one computer after another dropping down to 100Mbit speed. So I bought some Intel Gb network cards to replace the Realtek ones for additional testing. With no other changes, network speeds were better than in any of my prior tests using the Realtek devices. I bought more Intel NICs in bulk to get better pricing and began to swap out all the rest of the Realtek Gb NICs. I did not switch them out immediately, but at first replaced the Realtek NICs as their performance died. In the end, about 60% of the Realtek NICs died within about 18 months of initial installation. Then I finished pulling out the rest and replaced them with the Intel NICs. I have run all Intel NICs since. This past year, I have finally bought some motherboards that have Realtek 2.5Gb NICs built in. I will be adding a 2.5Gb switch soon to actually stress test them.
     Going back to jumbo frames: unless your full network supports them, they can be problematic; even transitioning to routers and modems can be an issue and a source of lost performance. At best you would typically only get about a 10% overall speed boost with jumbo frames, which, for the sake of data integrity, quicker packet recovery when needed, and better compatibility, just does not make sense to me, to even think of enabling jumbo frames ever again. If I am setting up a system just for top speeds, sure, but for everyday use, no way.
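     On the "at best ~10%" point, a rough payload-efficiency comparison backs it up. The sketch below assumes 40 bytes of IPv4+TCP headers inside each frame and 38 bytes of Ethernet framing overhead on the wire (header, FCS, preamble, interframe gap), ignoring TCP options and VLAN tags, so treat the exact percentages as approximate:

     def payload_efficiency(mtu):
         ETH_OVERHEAD = 38    # Ethernet header + FCS + preamble + interframe gap
         IP_TCP_HEADERS = 40  # IPv4 + TCP, no options
         return (mtu - IP_TCP_HEADERS) / (mtu + ETH_OVERHEAD)

     std, jumbo = payload_efficiency(1500), payload_efficiency(9000)
     print(f"MTU 1500: {std:.1%} payload, MTU 9000: {jumbo:.1%} payload")
     print(f"Best-case bulk-transfer gain from jumbo frames: ~{jumbo / std - 1:.1%}")
     # Roughly 95% vs 99% payload, i.e. only a few percent of headroom for jumbo frames.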
  7. Are you using QEMU to create a virtual array that Unraid is then in turn working with? I am not sure of any real advantages there, but there are a bunch of potential issues if there is ever a need for an array rebuild. Is the cache drive getting direct access with Unraid? It looks like it is. In my tests, many NVME drives will slow down at the elevated temperature you show in your earlier picture. If it is temperature related, additional heat sinking and/or airflow cooling on the NVME may resolve your problem.
  8. I think the real answer is to look at why there is a mismatch between the parity and the data. Something happened, or they would match.
     If you run successive parity checks, WITHOUT PARITY CORRECTIONS BEING WRITTEN, and get identical results, even if there are sync errors, then yes, I agree it looks like there is not currently a hardware issue. If the results are NOT consistent, then there probably really is a hardware issue.
     Unless you have past logs to look at to determine where the error occurred, you really do not know whether it is a data or parity drive in error. However, there are tools people have used in the past to identify which files may be affected in such situations. The data files can then be verified to be correct or not, depending on how good you are with a backup strategy. This way you can verify your data drives are correct, then rebuild your parity drive(s).
     Hardware issues and power bumps are the two main causes of bad data being written to either the data drives or the parity drives. Another often overlooked cause is timing and voltage settings on motherboards. Some newer motherboards have default settings that are now chosen with GAMERS in mind, favoring performance over reliability; many ASUS motherboards are one example now. Pushing the timing and voltage settings for better gaming performance is the opposite of what we should be seeking on a data server. We want stable, reliable, and repeatable results.
     Regardless, after data is written to the array and the data and parity writes are complete, any and all parity checks afterwards should have NO sync errors. If there are errors, something is wrong, no matter how much things seem to be OK.
     On critical data, I even use PAR files to create additional protection and recovery option files for sets of data files. This allows me to verify the data files, and to recover from damaged and even MISSING data files. I then store all of them, the data files and the PAR files, on the data drives on Unraid. There are many programs that work with PAR and PAR2 files. It is a similar concept to how the 2 parity drives work in Unraid, but at the file level instead of the drive level. QuickPar is one such utility, though I have not used that one myself.
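     To make the parity/recovery idea concrete, here is a toy Python sketch of single (XOR) parity, the same basic idea Unraid's first parity drive uses at the block level; PAR2 uses a more general Reed-Solomon scheme at the file level, but the recovery concept is similar. The block contents are made up for the example:

     from functools import reduce

     def xor_blocks(blocks):
         # XOR corresponding bytes across equal-length blocks.
         return bytes(reduce(lambda a, b: a ^ b, group) for group in zip(*blocks))

     data = [b"disk1 block 0001", b"disk2 block 0001", b"disk3 block 0001"]
     parity = xor_blocks(data)

     # Pretend disk2 died: rebuild its block from the survivors plus parity.
     rebuilt = xor_blocks([data[0], data[2], parity])
     assert rebuilt == data[1]
     print("rebuilt block:", rebuilt)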
  9. While the N5105 is a very capable low power CPU, the N100 seems to be a better option overall for Unraid. I am currently testing various ASMedia and JMicron controllers. The JMB585 is a great controller, and so far in my testing it works very well; it is a PCIe Gen 3 x2 bridge which provides 5 SATA III ports. Regardless of the hate people give to port multipliers, I am also next going to test stacked port multipliers combined with the JMB585. If done properly, paying attention to bandwidth and limits, they can be used in a very usable array configuration that will be able to outperform physical spinning hard drives. Specifically, I am going to be testing with the JMB575: 1 SATA port in (host) to 5 SATA ports, all SATA III 6Gb/s capable.
  10. Cheap low power motherboard NAS array options...
     Yes, port multipliers divide bandwidth. But when planned out properly they can be very useful for building an array. The ASMedia port multipliers do not seem as reliable so far when compared with the JMicron devices. Also, there are some very attractively priced N100 based boards now with JMicron PCIe-SATA controllers onboard. It is also best to keep to one family of controller/multiplier when cascading devices, to maintain the best interoperability and reliability.
     I am starting tests with various ASMedia PCIe to SATA controllers for use in upgrading older PCI based systems. I am also looking at the N100 w/JMicron route for testing, which seems a better option for larger arrays. Obviously there is less bandwidth to work with than the LSI 12Gb SAS/SATA dual link via LSI port multipliers. But for a new low power system, the N100 option looks very attractive. And if not pushed to limits causing bandwidth throttling (see option 2 below), with SPINNING hard drives, the new cheaper option looks like it should be able to do parity checks at speeds comparable to the LSI build (possibly limited more by the new CPU).
     N100 based MB - NAS bandwidth calculations:
     w/ JMB585 PCIe-SATA bridge controller - PCIe Gen 3 x2 to 5x SATA III 6Gb/sec
     ADD - JMB575 port multipliers - 1 to 5 ports SATA 6Gb/s
     Cascaded mode: up to 15 drives from 1 SATA port
     Cascaded mode: up to 75 drives from 5 JMB585 SATA ports!
     NOTE: 6Gb/sec SATA = 600 MB/sec max potential unshared bandwidth per SATA port
     OPTION 1
     5ea JMB575 multipliers, 1 per port from the JMB585 ports - 25 ports total
     PCIe Gen 3 x2 (2 lanes = 2GB/sec) shared by 5 JMB585 ports = 400 MB/sec per port
     400 MB/sec per port in FULL USAGE AVERAGED = 80 MB/sec per drive averaged over 25 drives
     OPTION 2
     3ea JMB575 multipliers (1st level), 1 per port from 3 JMB585 ports
     15ea JMB575 multipliers (2nd level), 1 per port from the 1st level JMB575 multipliers - 75 ports total
     1st level non-limiting - 600 MB/sec per port (200 MB/sec unused bandwidth from PCIe)
     2nd level 600 MB/sec per port in FULL USAGE AVERAGED = 120 MB/sec per drive averaged over 75 drives
     FULL USAGE = full array activity - parity check/build, drive rebuild
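     Here is a minimal Python sketch of the sharing math above, assuming roughly 2000 MB/s usable on the JMB585's PCIe 3.0 x2 uplink and 600 MB/s per SATA III link; with every drive busy at once, the per-drive figure is the tightest share along the path from drive to PCIe bus:

     SATA = 600.0      # MB/s per SATA III link
     PCIE_X2 = 2000.0  # assumed usable MB/s for PCIe 3.0 x2

     def per_drive_full_load(shared_links):
         # shared_links: (link capacity in MB/s, drives sharing that link) for
         # each hop between a drive and the PCIe bus; the tightest share wins.
         return min(capacity / drives for capacity, drives in shared_links)

     # Option 1: 25 drives -> 5x JMB575 -> 5 JMB585 ports -> PCIe 3.0 x2
     opt1 = per_drive_full_load([(SATA, 1), (SATA, 5), (PCIE_X2, 25)])
     print(f"Option 1: ~{opt1:.0f} MB/s per drive with all 25 drives busy")

     # Option 2: 75 drives behind two JMB575 levels on 3 JMB585 ports. With every
     # drive busy at once, the first-level 600 MB/s uplinks become the pinch point;
     # the 120 MB/s figure above applies when only one second-level branch per
     # first-level multiplier is active at a time.
     opt2 = per_drive_full_load([(SATA, 1), (SATA, 5), (SATA, 25), (PCIE_X2, 75)])
     print(f"Option 2: ~{opt2:.0f} MB/s per drive with all 75 drives busy")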
  11. Missing drives in the Unassigned Devices window under the Main tab after updating to Unraid version 6.12.8 and updating the Unassigned Devices plugin to the version dated 2024.03.19 (updated from 6.6.7 and Unassigned Devices dated 2019.03.31).
     It looks like the SAMBA shares are still working on the drives that were previously shared, but MOST of the assigned drives did not show on the MAIN tab of Unraid; only two initially showed up. Each time I press the REFRESH DISKS AND CONFIGURATION icon, it adds ONE drive to the list... I am now seeing 7 of the 10 drives not in the array. Most of them were precleared and ready to add to the array when needed; 3 were being used as unprotected temporary data drives for misc. uses. Is there a concern with the drives not showing up as expected?
     Also, previously under FS it listed precleared drives as being precleared. Has that functionality been deliberately removed? It was pretty convenient for hot spares.
     Thanks for the plugin, it has served me well for many years.
  12. If the SSD(s) are getting hot, they may be slowing down. Increased ventilation will often improve the performance of sustained data transfer on SSDs.
  13. Various system component speeds -
     I made this little breakdown of various system components to assist my decision making on my upgrade. Depending on actual use, it can help decide where to focus hardware upgrades... I hope someone else may also benefit from it.
     Newer SSDs (SATA and NVME), while not listed here, are so fast overall for actual needs that the faster options only really benefit parity checks and array rebuilds. Read/write speeds of 500/400 MB/sec per device are screaming fast overall for most Unraid uses. It may still be very beneficial to performance, however, to use a higher end and faster drive for parity and/or cache drives. But watch the caveat: some faster drives also have reduced MTBF ratings...
     Here are the speed breakdowns for various components, using general best case data on a well designed system: PCI-E to replace old PCI or PCI-X SATA controller cards - and - speed comparisons of various system components.
     SATA
     SATA 1 - 1.5 Gb/sec = 150 MB/sec
     SATA 2 - 3.0 Gb/sec = 300 MB/sec
     SATA 3 - 6.0 Gb/sec = 600 MB/sec
     Hard Drives
     5400 RPM - up to ? - up to 180-210 MB/sec
     7200 RPM - up to 1030 Mb/sec disc to buffer - up to 255 MB/sec - 204 MB/sec sustained
     NETWORK limits (94% efficiency)
     10Mb = 1.18 MB/sec
     100Mb = 11.8 MB/sec
     1Gb = 118 MB/sec
     2.5Gb = 295 MB/sec
     10Gb = 1180 MB/sec
     PCI 32-bit 33 MHz
     133.33 MB/sec = 8 drives = 16.66 MB/sec per drive!
     133.33 MB/sec = 4 drives = 33.33 MB/sec per drive!
     133.33 MB/sec = 2 drives = 66.66 MB/sec per drive!
     PCI 32-bit 66 MHz
     266 MB/sec = 8 drives = 33.25 MB/sec per drive!
     266 MB/sec = 4 drives = 66.5 MB/sec per drive!
     266 MB/sec = 2 drives = 133 MB/sec per drive!
     PCI-X 64-bit 133 MHz
     1072 MB/sec = 8 drives = 134 MB/sec per drive!
     1072 MB/sec = 4 drives = 268 MB/sec per drive!
     1072 MB/sec = 2 drives = 536 MB/sec per drive!
     PCI-E 3.0 1 lane = 1GB/sec = 16 drives = 62.5 MB/sec per drive!
     PCI-E 4.0 1 lane = 2GB/sec = 16 drives = 125 MB/sec per drive!
     PCI-E 3.0 1 lane = 1GB/sec = 8 drives = 125 MB/sec per drive!
     PCI-E 4.0 1 lane = 2GB/sec = 8 drives = 250 MB/sec per drive!
     PCI-E 3.0 1 lane = 1GB/sec = 4 drives = 250 MB/sec per drive!
     PCI-E 4.0 1 lane = 2GB/sec = 4 drives = 500 MB/sec per drive!
     PCI-E 3.0 2 lanes = 2GB/sec = 16 drives = 125 MB/sec per drive!
     PCI-E 4.0 2 lanes = 4GB/sec = 16 drives = 250 MB/sec per drive!
     PCI-E 3.0 2 lanes = 2GB/sec = 8 drives = 250 MB/sec per drive!
     PCI-E 4.0 2 lanes = 4GB/sec = 8 drives = 500 MB/sec per drive!
     PCI-E 3.0 2 lanes = 2GB/sec = 4 drives = 500 MB/sec per drive!
     PCI-E 4.0 2 lanes = 4GB/sec = 4 drives = 1000 MB/sec per drive!
     PCI-E 3.0 4 lanes = 4GB/sec = 16 drives = 250 MB/sec per drive!
     PCI-E 4.0 4 lanes = 8GB/sec = 16 drives = 500 MB/sec per drive!
     PCI-E 3.0 4 lanes = 4GB/sec = 8 drives = 500 MB/sec per drive!
     PCI-E 4.0 4 lanes = 8GB/sec = 8 drives = 1000 MB/sec per drive!
     PCI-E 3.0 4 lanes = 4GB/sec = 4 drives = 1000 MB/sec per drive!
     PCI-E 4.0 4 lanes = 8GB/sec = 4 drives = 2000 MB/sec per drive!
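     The per-drive figures in the list above are just bus bandwidth divided by drive count. A short Python generator reproduces them, using the same round figures of 1 GB/s per PCIe 3.0 lane and 2 GB/s per PCIe 4.0 lane (real throughput is a bit lower after protocol overhead):

     BUSES = {
         "PCI 32-bit 33 MHz": 133.33,
         "PCI 32-bit 66 MHz": 266.0,
         "PCI-X 64-bit 133 MHz": 1072.0,
     }
     for lanes in (1, 2, 4):
         BUSES[f"PCI-E 3.0 x{lanes}"] = 1000.0 * lanes
         BUSES[f"PCI-E 4.0 x{lanes}"] = 2000.0 * lanes

     for name, bandwidth in BUSES.items():
         for drives in (2, 4, 8, 16):
             print(f"{name}: {drives:>2} drives = {bandwidth / drives:7.2f} MB/s per drive")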
  14. It looks like I will just need to copy/move the files over to new/re-formatted drives on the array for use under v6.x. Not fun, but it looks like it is needed. It also seems it may be time to consider a motherboard/SATA controller upgrade (but probably after I migrate to v6.x, I think). 8 or 16 port SATA controllers on PCIe will definitely increase parity check/rebuild speed. I was trying to delay any big changes since the hardware has been working so well for so many years... but looking at options now, I am not really sure.
  15. The oldest one is now running a Core 2 Duo (64-bit) E7200 with 2 GB of RAM, expandable up to 4 GB. Unraid is on a 4GB flash drive.
     SATA drives are running on:
     1 ea - SuperMicro SAT2-MV8 (running in PCI mode) (8 drives)
     1 ea - Promise FastTrak S150 TX4 PCI card (4 drives)
     The rest of the drives are on the motherboard SATA ports (4 drives)
  16. I bought 2 more licenses to bring two more servers online a few years back. One is seeing heavy use, the other only gets odd tests as I think of them.
  17. OK, this may seem a bit odd, but most of my Unraid servers are way back on 4.5.3. They have been rock stable forever, with the exceptions of power outages, a failed motherboard, and a few power supplies here and there over the years. I am finally done with my tests on 6.x... and am ready to commit to retiring and upgrading from the 4.x world... But I see no link that actually takes me to any upgrade guide now. Am I missing something?
  18. most of mine are way back on 4.5.3 Finally considering updating them...
  19. Thanks, I just sent you two e-mails, one for each server, they have different controllers. I included the debug files for each server.
  20. No, it gives an error, so I played around until I got the flags set properly for my controller as it was prompting in the error. The following commands do properly return the respective drive serial numbers:
     smartctl -i /dev/twa1 -d 3ware,1
     smartctl -i /dev/twa1 -d 3ware,0
     smartctl -i /dev/twa1 -d 3ware,2
     smartctl -i /dev/twa0 -d 3ware,0
     smartctl -i /dev/twa0 -d 3ware,1
  21. I also see what looks like the same results on the 2nd server.
  22. I just saw there have been a few updates to the tool. Downloaded the latest version and this is what I now get, it no longer stalls, but I have this; DiskSpeed - Disk Diagnostics & Reporting tool Version: 2.1 Scanning Hardware 12:44:12 Spinning up hard drives 12:44:12 Scanning system storage 12:44:25 Scanning USB Bus 12:44:32 Scanning hard drives Lucee 5.2.9.31 Error (application) MessageError invoking external process Detail/usr/bin/lspci: option requires an argument -- 's' Usage: lspci [<switches>] Basic display modes: -mm Produce machine-readable output (single -m for an obsolete format) -t Show bus tree Display options: -v Be verbose (-vv for very verbose) -k Show kernel drivers handling each device -x Show hex-dump of the standard part of the config space -xxx Show hex-dump of the whole config space (dangerous; root only) -xxxx Show hex-dump of the 4096-byte extended config space (root only) -b Bus-centric view (addresses and IRQ's as seen by the bus) -D Always show domain numbers Resolving of device ID's to names: -n Show numeric ID's -nn Show both textual and numeric ID's (names & numbers) -q Query the PCI ID database for unknown ID's via DNS -qq As above, but re-query locally cached entries -Q Query the PCI ID database for all ID's via DNS Selection of devices: -s [[[[<domain>]:]<bus>]:][<slot>][.[<func>]] Show only devices in selected slots -d [<vendor>]:[<device>][:<class>] Show only devices with specified ID's Other options: -i <file> Use specified ID database instead of /usr/share/misc/pci.ids.gz -p <file> Look up kernel modules in a given file instead of default modules.pcimap -M Enable `bus mapping' mode (dangerous; root only) PCI access options: -A <method> Use the specified PCI access method (see `-A help' for a list) -O <par>=<val> Set PCI access parameter (see `-O help' for a list) -G Enable PCI access debugging -H <mode> Use direct hardware access (<mode> = 1 or 2) -F <file> Read PCI configuration dump from a given file StacktraceThe Error Occurred in /var/www/ScanControllers.cfm: line 456 454: <CFSET tmpbus=Replace(Key,":","-","ALL")> 455: <CFFILE action="write" file="#PersistDir#/lspci-vmm-s_#tmpbus#_exec.txt" output="/usr/bin/lspci -vmm -s #Key#" addnewline="NO" mode="666"> 456: <cfexecute name="/usr/bin/lspci" arguments="-vmm -s #Key#" timeout="300" variable="lspci" /> 457: <CFFILE action="delete" file="#PersistDir#/lspci-vmm-s_#tmpbus#_exec.txt"> 458: <CFFILE action="write" file="#PersistDir#/lspci-vmm_#tmpbus#.txt" output="#lspci#" addnewline="NO" mode="666"> called from /var/www/ScanControllers.cfm: line 455 453: <!--- Get the controller information ---> 454: <CFSET tmpbus=Replace(Key,":","-","ALL")> 455: <CFFILE action="write" file="#PersistDir#/lspci-vmm-s_#tmpbus#_exec.txt" output="/usr/bin/lspci -vmm -s #Key#" addnewline="NO" mode="666"> 456: <cfexecute name="/usr/bin/lspci" arguments="-vmm -s #Key#" timeout="300" variable="lspci" /> 457: <CFFILE action="delete" file="#PersistDir#/lspci-vmm-s_#tmpbus#_exec.txt"> Java Stacktracelucee.runtime.exp.ApplicationException: Error invoking external process at lucee.runtime.tag.Execute.doEndTag(Execute.java:258) at scancontrollers_cfm$cf.call_000046(/ScanControllers.cfm:456) at scancontrollers_cfm$cf.call(/ScanControllers.cfm:455) at lucee.runtime.PageContextImpl._doInclude(PageContextImpl.java:933) at lucee.runtime.PageContextImpl._doInclude(PageContextImpl.java:823) at lucee.runtime.listener.ClassicAppListener._onRequest(ClassicAppListener.java:66) at 
lucee.runtime.listener.MixedAppListener.onRequest(MixedAppListener.java:45) at lucee.runtime.PageContextImpl.execute(PageContextImpl.java:2464) at lucee.runtime.PageContextImpl._execute(PageContextImpl.java:2454) at lucee.runtime.PageContextImpl.executeCFML(PageContextImpl.java:2427) at lucee.runtime.engine.Request.exe(Request.java:44) at lucee.runtime.engine.CFMLEngineImpl._service(CFMLEngineImpl.java:1090) at lucee.runtime.engine.CFMLEngineImpl.serviceCFML(CFMLEngineImpl.java:1038) at lucee.loader.engine.CFMLEngineWrapper.serviceCFML(CFMLEngineWrapper.java:102) at lucee.loader.servlet.CFMLServlet.service(CFMLServlet.java:51) at javax.servlet.http.HttpServlet.service(HttpServlet.java:729) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:292) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207) at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:212) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:94) at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:492) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:141) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:80) at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:620) at org.apache.catalina.valves.RemoteIpValve.invoke(RemoteIpValve.java:684) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:88) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:502) at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1152) at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:684) at org.apache.tomcat.util.net.AprEndpoint$SocketWithOptionsProcessor.run(AprEndpoint.java:2464) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) at java.lang.Thread.run(Thread.java:748) Timestamp9/8/19 12:44:32 PM PDT
  23. Sorry, it has been a little while since I checked for a reply. Here are the drive details of the first 3 drives, with the complete serial numbers for ease, all viewed with unRAID...
     Under the drive details for the drives I see:
     (Note: using SMART controller type 3Ware 2 /dev/twa1)
     Model family: SAMSUNG SpinPoint F3
     Device model: SAMSUNG HD502HJ
     Serial number: S27FJ9FZ404491
     (Note: using SMART controller type 3Ware 1 /dev/twa1)
     Model family: SAMSUNG SpinPoint F3
     Device model: SAMSUNG HD502HJ
     Serial number: S27FJ9FZ404504
     (Note: using SMART controller type 3Ware 1 /dev/twa0)
     Model family: Seagate Barracuda 7200.7 and 7200.7 Plus
     Device model: ST3160828AS
     Serial number: 5MT44SV6
     And under the MAIN tab in Unraid under Devices, I see this:
     Device / Identification
     Parity - 1AMCC_FZ404491000000000000 - 500 GB (sdf)
     Parity 2 - 1AMCC_FZ404504000000000000 - 500 GB (sdg)
     Disk 1 - 1AMCC_5MT44SV6000000000000 - 160 GB (sdc)
  24. Also, after adding all your "local" drives (parity protected array, cache pool, and Unassigned Devices via the plugin), you can still add even more resources via the add Remote SMB/NFS Share and ISO Image features. (I am not sure, but I think that is a standard feature now; I do not remember adding it as a plugin...)
  25. Of course everyone has different needs, and even different applications they use for functions that may be similar to what someone else is using. I use PLEX to share my media collection with my other users, and for when I am not at home and want to access my media. I have a few arrays feeding into my PLEX server, as well as a few Windows machines with standard drive shares also showing as part of my PLEX content. With PLEX I can add a very large number of "paths" to each share as it shows to my PLEX users. They have NO idea how many different servers my media is spread across. This works well both for the Windows hosted PLEX server and for running PLEX as a Docker on Unraid. It is a bit easier with the Windows installation of PLEX, but with added Remote SMB/NFS shares it works very well under the Dockerized PLEX too, though it is much more involved to set up.