electron286

Members
  • Content Count: 193
  • Joined
  • Last visited
  • Days Won: 1

electron286 last won the day on April 23

electron286 had the most liked content!

Community Reputation: 6 Neutral

About electron286
  • Rank: Advanced Member
  • Birthday: 09/27/1961

Converted
  • Gender: Male
  • Location: USA
  1. Not sure if you have had a chance to look at my reply posts yet. I suspect there may not be much that can be done with the server running the 9650SE-8LP controller, at least not for auto-identifying the drives. It should still be possible to actually test the drives if the tool got past the drive-identification phase, which is where I think it is hanging/terminating. On the other server I really have no idea what is happening, unless the program also dislikes that controller configuration, since "lshw -c disk" does properly report the drive types there.
  2. Windows 7 64-bit (pre-SP1 ISO) VM install. It took a while. Getting the 32-bit Windows 7 VM running with a pre-SP1 ISO was easy, but the 64-bit installer could not see a drive to install to. After reading many threads on many sites with many complaints of the same problem, this thread finally got me where I needed to go. Part of my problem was that I am running a much newer version of Unraid than the threads I was reading were written for; there are similarities, but also differences that made it harder to figure out what to try next. I tried loading drivers during setup, hoping a drive would finally show up to install to, but that does not seem to work with pre-SP1 ISO media. I then saw the posts from 2015 August about editing the XML to change the bus for the <disk><target> element (a sketch of that change follows this post), which sounded like it might do what I needed, but I kept reading in case there was a solution closer to the 6.5.3 version I was working with. Finally I saw the post from "assassinmunky", posted 2017 May 05, about getting it to work "by changing the vDisk bus to SATA (from VirtIO)"! I had tried so many combinations of things that I was no longer sure what I had or had not tried, so I did this, and IT WORKED! (The option was not quite in the same place, but I found it!) I am now happily installing a 64-bit Windows 7 VM! I had installed the 32-bit version since that was all I could seem to install, applied all the updates, then launched the software I need to run, only to be reminded why I needed the 64-bit version: the software will only run in a 64-bit environment. So now that the new 64-bit VM is running, I can go through all the updates, install my programs, and it should be all set! :-) Thanks everyone; even though I did not ask for help on this, it was all the great help provided over the years that let me find what I needed. My VM adventure has now officially begun! :-)
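For anyone else who hits this, the change lives in the VM's XML definition (Unraid exposes it through the VM editor's XML view, or via virsh edit). A minimal sketch of the before/after disk stanza; the image path and target letters here are made-up examples, not my actual config:

    <!-- Before: VirtIO bus, which a pre-SP1 Windows 7 installer has no driver for -->
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/domains/Win7/vdisk1.img'/>
      <target dev='vda' bus='virtio'/>
    </disk>

    <!-- After: SATA bus, which the stock installer can see out of the box -->
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/domains/Win7/vdisk1.img'/>
      <target dev='sda' bus='sata'/>
    </disk>

The usual advice is that once Windows is installed you can load the VirtIO drivers inside the guest and switch the bus back for better disk performance.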
  3. On the second system, it looks normal, matching what I would expect to see. Here is the output with the serial numbers anonymized. Running Unraid 6.6.7:

    # lshw -c disk
      *-disk:0
           description: ATA Disk
           product: WDC WD40EZRZ-00G
           vendor: Western Digital
           physical id: 0.0.0
           bus info: scsi@3:0.0.0
           logical name: /dev/sdd
           version: 0A80
           serial: WD-WCC*********
           size: 3726GiB (4TB)
           capacity: 3726GiB (4TB)
           capabilities: 15000rpm gpt-1.00 partitioned partitioned:gpt
           configuration: ansiversion=6 guid=dad24cf9-32fd-4f76-82ce-4141b844a1be logicalsectorsize=512 sectorsize=4096
      *-disk:1
           description: SCSI Disk
           product: ST4000NM0023
           vendor: SEAGATE
           physical id: 0.1.0
           bus info: scsi@3:0.1.0
           logical name: /dev/sde
           version: GE09
           serial: Z1Z*****
           size: 3726GiB (4TB)
           capabilities: 7200rpm gpt-1.00 partitioned partitioned:gpt
           configuration: ansiversion=6 guid=3bc3749a-8974-42e8-822a-2c8c91479016 logicalsectorsize=512 sectorsize=512
      *-disk:2
           description: SCSI Disk
           product: ST4000NM0023
           vendor: SEAGATE
           physical id: 0.2.0
           bus info: scsi@3:0.2.0
           logical name: /dev/sdf
           version: GE09
           serial: Z1Z*****
           size: 3726GiB (4TB)
           capabilities: 7200rpm gpt-1.00 partitioned partitioned:gpt
           configuration: ansiversion=6 guid=2b239099-face-4e70-87af-922b9c828b65 logicalsectorsize=512 sectorsize=512
      *-disk:3
           description: SCSI Disk
           product: HUS724040ALS640
           vendor: HGST
           physical id: 0.3.0
           bus info: scsi@3:0.3.0
           logical name: /dev/sdg
           version: A280
           serial: PCJ*****
           size: 3726GiB (4TB)
           capacity: 4859GiB (5217GB)
           capabilities: 7200rpm partitioned partitioned:dos
           configuration: ansiversion=6 logicalsectorsize=512 sectorsize=512
      *-disk
           description: SCSI Disk
           product: Cruzer Glide
           vendor: SanDisk
           physical id: 0.0.0
           bus info: scsi@0:0.0.0
           logical name: /dev/sda
           version: 1.00
           serial: ********************
           size: 29GiB (31GB)
           capabilities: removable
           configuration: ansiversion=6 logicalsectorsize=512 sectorsize=512
         *-medium
              physical id: 0
              logical name: /dev/sda
              size: 29GiB (31GB)
              capabilities: partitioned partitioned:dos
      *-cdrom
           description: DVD reader
           product: DV-28E-V
           vendor: TEAC
           physical id: 0.0.0
           bus info: scsi@1:0.0.0
           logical name: /dev/sr0
           version: 1.AB
           capabilities: removable audio dvd
           configuration: ansiversion=5 status=nodisc
      *-disk
           description: ATA Disk
           product: WDC WD1600AAJS-0
           vendor: Western Digital
           physical id: 0.0.0
           bus info: scsi@4:0.0.0
           logical name: /dev/sdb
           version: 3A01
           serial: WD-WCA*********
           size: 149GiB (160GB)
           capabilities: partitioned partitioned:dos
           configuration: ansiversion=5 logicalsectorsize=512 sectorsize=512
      *-disk
           description: ATA Disk
           product: WDC WD1600AAJS-0
           vendor: Western Digital
           physical id: 0.0.0
           bus info: scsi@5:0.0.0
           logical name: /dev/sdc
           version: 3A01
           serial: WD-WCA*********
           size: 149GiB (160GB)
           capabilities: partitioned partitioned:dos
           configuration: ansiversion=5 logicalsectorsize=512 sectorsize=512
  4. On the first system, I think the problem may be due to the controller in use; it masks the drive information. Here is the output with the serial numbers anonymized. Running Unraid 6.5.3:

    # lshw -c disk
      *-disk:0
           description: SCSI Disk
           product: 9650SE-8LP DISK
           vendor: AMCC
           physical id: 0.0.0
           bus info: scsi@1:0.0.0
           logical name: /dev/sdb
           version: 3.08
           serial: ********000000000000
           size: 149GiB (160GB)
           capabilities: partitioned partitioned:dos
           configuration: ansiversion=5 logicalsectorsize=512 sectorsize=512
      *-disk:1
           description: SCSI Disk
           product: 9650SE-8LP DISK
           vendor: AMCC
           physical id: 0.1.0
           bus info: scsi@1:0.1.0
           logical name: /dev/sdc
           version: 3.08
           serial: ********000000000000
           size: 149GiB (160GB)
           capabilities: partitioned partitioned:dos
           configuration: ansiversion=5 logicalsectorsize=512 sectorsize=512
      *-disk:2
           description: SCSI Disk
           product: 9650SE-8LP DISK
           vendor: AMCC
           physical id: 0.2.0
           bus info: scsi@1:0.2.0
           logical name: /dev/sdd
           version: 3.08
           serial: ********000000000000
           size: 149GiB (160GB)
           capabilities: partitioned partitioned:dos
           configuration: ansiversion=5 logicalsectorsize=512 sectorsize=512
      *-disk:0
           description: SCSI Disk
           product: 9650SE-8LP DISK
           vendor: AMCC
           physical id: 0.0.0
           bus info: scsi@4:0.0.0
           logical name: /dev/sde
           version: 3.08
           serial: ********000000000000
           size: 1397GiB (1500GB)
           capabilities: partitioned partitioned:dos
           configuration: ansiversion=5 logicalsectorsize=512 sectorsize=512 signature=03c10c2e
      *-disk:1
           description: SCSI Disk
           product: 9650SE-8LP DISK
           vendor: AMCC
           physical id: 0.1.0
           bus info: scsi@4:0.1.0
           logical name: /dev/sdf
           version: 3.08
           serial: ********000000000000
           size: 465GiB (500GB)
           capabilities: partitioned partitioned:dos
           configuration: ansiversion=5 logicalsectorsize=512 sectorsize=512
      *-disk:2
           description: SCSI Disk
           product: 9650SE-8LP DISK
           vendor: AMCC
           physical id: 0.2.0
           bus info: scsi@4:0.2.0
           logical name: /dev/sdg
           version: 3.08
           serial: ********000000000000
           size: 465GiB (500GB)
           capabilities: partitioned partitioned:dos
           configuration: ansiversion=5 logicalsectorsize=512 sectorsize=512
      *-disk
           description: SCSI Disk
           product: Cruzer Fit
           vendor: SanDisk
           physical id: 0.0.0
           bus info: scsi@0:0.0.0
           logical name: /dev/sda
           version: 1.27
           serial: ********************
           size: 14GiB (16GB)
           capabilities: removable
           configuration: ansiversion=6 logicalsectorsize=512 sectorsize=512
         *-medium
              physical id: 0
              logical name: /dev/sda
              size: 14GiB (16GB)
              capabilities: partitioned partitioned:dos
      *-cdrom
           description: DVD reader
           product: DVD-ROM SR-8178
           vendor: MATSHITA
           physical id: 0.1.0
           bus info: scsi@2:0.1.0
           logical name: /dev/sr0
           version: PZ16
           serial: [
           capabilities: removable audio dvd
           configuration: ansiversion=5 status=nodisc
  5. This looks like a great tool! I only have two systems running a somewhat recent version of Unraid, however. I have loaded this onto both of my newer installs, 6.5.3 and 6.6.7, and I see the same thing on both...

    DiskSpeed - Disk Diagnostics & Reporting tool
    Version: beta 6a
    Scanning Hardware
    07:30:21 Spinning up hard drives
    07:30:21 Scanning system storage
    07:30:34 Scanning USB Bus
    07:30:40 Scanning hard drives

then it just sits there. How long does it normally sit there, and how long does it normally take for the cool screens to appear? Does the display build over time; what should we see as the data is being collected and the tests are running? I may have missed something, but I did not see anything in this set of posts that helps, at least nothing that I noticed or that stuck out.
  6. Personally, over the years I have had more FAILED NEW DRIVES than used ones when pre-clearing them. I would never just trust a new drive, or an old one, when adding it to an array. To me the whole purpose of using Unraid is more data safety and peace of mind, not an increase in headaches and risk while gambling with my data.
  7. Just my 2 cents... I always pre-clear my drives, usually on a separate computer, to complete my initial stress testing. No, you do not need a dedicated computer for it; it only needs to be available for the dedicated purpose of running pre-clear while it is pre-clearing disks! I have a few computers with a flash drive sitting next to them; IF I need them to pre-clear a drive, I just stick the USB flash in the computer, hook up the drive(s) I need to pre-clear, boot, and start pre-clearing (a rough console sketch follows this post). When I am done, I shut off the computer, remove the pre-cleared drives and the USB flash, and the computer is ready for its normal OS again. One word of caution: either unplug the other hard drives in the computer before pre-clearing drives, or MAKE SURE YOU ARE TRIPLE checking that you are selecting the correct drives to pre-clear! The alternative is to run the pre-clear on the Unraid machine that is getting the new drive. I find this method OK sometimes, but more often limiting, since I also need to stop the array first and shut down the computer. As long as the drive passes, no problem. If the drive fails, however, which can and does happen with new drives or old drives being re-purposed, I have lost additional server time to additional power downs. This is why I usually use a separate computer for pre-clears. Also, the computers running pre-clear do not need to be as powerful as the one you would want to run Unraid on now. All the computers I run pre-clears on separately are just old P4 machines, and they work very well for pre-clearing drives!
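For reference, a run on one of those spare machines looks roughly like this from the console, using Joe L.'s preclear_disk.sh script. The device name is a placeholder, and the flags shown are the ones I believe the script supports, so double-check against its help text before running anything:

    # List the drives the script considers candidates for clearing
    preclear_disk.sh -l

    # Confirm the model/serial of the target before touching anything
    smartctl -i /dev/sdX

    # Run a full pre-clear cycle (pre-read, zero, post-read) on the drive
    preclear_disk.sh /dev/sdX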
  8. It looks like your planned server should be fine, given the limited information you provided about your intended use. "Smoothly" is a bit subjective, especially with no additional details. There are many users running much less capable server configurations with no usability issues; however, any system can be loaded down with enough demand and too many processes. There are also quite a few users running pfSense on Unraid. I have no other details on that, since I do not use it, but you may find some information here that may help;
  9. If you are talking about a separate NAS on your network, and NOT the Unraid server, then NO, your NAS does not count, as it is not part of the Unraid setup. A NAS attached over the LAN is not part of the Unraid array, nor is it an unassigned drive in the Unraid installation (internal or external). The NAS is a separate device on the network and does not count toward, or against, any drive allowances in your Unraid system. You CAN, however, map other network drives via the Unassigned Devices plug-in to be used with Unraid as a drive resource OUTSIDE of the protected array (a rough sketch of the equivalent manual mount follows this post). These mappings still DO NOT count toward or against your drive allowances, but they will consume additional resources on your Unraid server, including additional RAM. If you are talking about installing Unraid ON your NAS, then each drive in the NAS WOULD be counted in the drive allowances for the server, since they would then be part of the Unraid installation. Each drive could be configured either as part of the array or outside of it as needed. Hope that helps. Feel free to ask anything else, and please provide a little more detail on how you plan to use your Unraid server and your NAS in relation to each other if I was unable to answer your question properly. Also please look at this; https://unraid.net/pricing and the older page https://lime-technology.com/wp/pricing/ which also has this; What are "attached storage devices"? They refer to the total number of storage devices physically attached to the server before you start the array. This number is in addition to your USB flash device used to boot unRAID (e.g. 4 HDDs + 2 SSDs + unRAID boot device = 6 attached devices). Non-storage devices such as GPUs do not count against this limit.
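For context, what the Unassigned Devices plug-in does for a remote share boils down to a standard CIFS (or NFS) mount. A hand-done equivalent might look like the following; the address, share name, credentials, and mount point are all made-up examples:

    # Mount an SMB share from a LAN NAS, outside the protected array
    mkdir -p /mnt/remotes/mynas_media
    mount -t cifs //192.168.1.50/media /mnt/remotes/mynas_media \
          -o username=nasuser,password=secret

    # Unmount when finished
    umount /mnt/remotes/mynas_media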
  10. I am not sure, BUT I think there may be something going on here with the combination of the specific driver in use, the drive controller, the PCI-e chips on the motherboard and their drivers, the BIOS implementations (motherboard and drive controller), and even interrupt processing. Long list, I know. I have done quite a bit of testing with various setups over the years and have found both the SuperMicro AOC-SAS2LP-MV8 AND the LSI SAS 9207-8i/e cards to be very capable and stable. However, that has not been the case for many users. When starting initial configurations for testing, I (usually) first check which slots on the motherboard provide the highest bandwidth and the least chip-to-chip translation of data (a quick way to check is shown after this post). I have seen on some motherboards that one slot will work better than another, and that depending on the controller, a card may be more usable in one slot than another controller design would be. The interesting thing is that many people have had symptoms similar to what you were seeing when running the SuperMicro AOC-SAS2LP-MV8, and they have switched to the LSI SAS 9207-8i/e cards with no issues at all! The general feeling on most Linux-based system forums seems to be that the LSI SAS 9207-8i/e cards are more stable and usable than the SuperMicro AOC-SAS2LP-MV8 cards. It also seems that the LSI SAS 9207-8i/e cards, especially the 8e, are faster, more expandable, and suffer less performance impact when using SAS port multiplexers/expanders. Sadly, with all the "compatible" and interchangeable hardware out there that we can swap around and mix and match, there will always be potential for incompatibility. For example, I have had more erratic performance issues with AMD-based systems than Intel-based systems over the years (since the K6-2 was released; prior to that, AMD processors had no negatives compared with Intel processors in my tests). AMD systems are very usable for most applications, just typically not as consistent in performance, a carry-over from the "enhancements" made in the K6-2 to give faster processes streamlined paths for an overall performance boost, at the cost of repeatable and predictable output speeds.
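If you want to verify what a slot is actually giving a card, lspci will show the link width/speed the controller negotiated (LnkSta) next to what it is capable of (LnkCap). The PCI address below is just an example; find yours with a plain lspci first:

    # Locate the storage controller's PCI address
    lspci | grep -i -e sas -e raid

    # Compare capability vs. the negotiated link for that device
    lspci -vv -s 01:00.0 | grep -E 'LnkCap|LnkSta'

If LnkSta shows fewer lanes or a lower speed than LnkCap, the card is sitting in (or sharing) a slot that is holding it back.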
  11. I just looked at the server link, and they are now even cheaper! While not current hardware, cheap is not a bad place to start for a new server with decent hardware. I would not use the Adaptec RAID controller for anything but a cache pool in this server. The six on-board SATA controllers on the motherboard are decent, but I think only two of them are higher speed; I could be wrong there, however. I would pull the modem card out. I would also add an LSI SAS 9207-8i HBA and cables and use it for the front drive bays, using them for the Unraid array.
  12. Plex works well in a Docker; in my opinion it is just not as easy as in Windows. I only have one computer I regularly use with Windows 10, and I am still trying to accept it, but so far I really hate it, and with each forced update I hate it more. My Plex server under Windows runs on Windows 8.1. Many people really like Plex running in a Docker on Unraid, and it works very well for them. If all my media were on one Unraid server that also ran Plex, I could see having NO complaints with it at all. It is pretty cool being able to watch the system resources in Unraid while Plex is transcoding a stream in a Docker. :-)

I personally like to keep my old systems in use until I either outgrow the use or build another system that eventually takes it over. I enjoy putting together a new system, from new or used parts, to test new configurations on; if they work out well, they are kept, and if not, they become another test bed for the next application. It sounds like you are all set with options to try out, and it will be interesting to hear what you finally decide! Please let us know what you try, and what you do or do not like as a result of your testing. It is always nice to hear what other people have tested and the decision process used to make their final choices. Good luck! I hope you find a combination that works very well for you with no downsides!

One more thing to consider: there is a very cool Host Bus Adapter (HBA) card available for internal drives, with another version for external drive expansion; the LSI SAS 9207-8i (internal port version) and LSI SAS 9207-8e (external port version). These are PCI-e version 3.0 cards, capable of fully utilizing a PCI-e 8x slot! They are getting old now, but are still a current product, so they can be found new for close to $100, and for less used! I am not sure what you have in your i3 server, but this could be a real way to build it up without removing anything that is currently in use! If you wanted, you could even use the SAS 9207-8e version of the card and connect a nice big 24 port JBOD chassis for very large expansion capability. This is what I am currently in the process of testing, and so far I am very happy with my initial tests! It may turn into my main Unraid server when I have finished, and may also become my second full-time Plex server! If I like it enough, it may eventually replace my current Plex server, but time will tell... That would also mean that in the future my other Unraid servers would be relegated to backup use instead of full active storage. The JBOD chassis I grabbed was an old retired 45-bay SuperMicro 4U unit; old enough that the server farm it was in needed to upgrade, but it matches perfectly with the SAS 9207-8e cards I also picked up used for under $35 each. After all, why would a server farm keep running a slow card with only 8 internal 6Gb/s SAS channels when there are so many faster and MUCH MORE EXPENSIVE options out there? I am happy; it makes for some nice cheap grabs on the used market!
  13. I fully agree with jonathanm. I was a bit surprised by the 50% free space figure myself. The only time I have had that much free space on an Unraid server was on a new build. I do have drives ready to drop in, that I have pre-cleared, but I do not count them, since they are ready in case a drive fails OR I need to increase my storage capacity. Most of my servers came on-line as I needed to expand and got to the point that, though the prior system was not yet fully used in terms of what it COULD do, it seemed better to start a new server build with new capabilities and more potential for expansion. I love having servers sitting powered off as backups; that is what my older Unraid builds do most of the time, sit without power.

As far as Plex is concerned, I have played with it in a Docker and find it not as easy to work with as running a Plex server under Windows. Since I have multiple servers, my Plex needs include making multiple servers visible to Plex. It is much easier for me to just map the UNC path in Plex under Windows and have it work. With a Dockerized version, I first need to make the resource available in Unraid using the Unassigned Devices plugin, which works well enough, but then I still need to map the resource in Plex. More steps, and also somewhat more difficult steps, than under Windows. I just have not seen a real reason why moving Plex fully under Unraid would have ANY real advantage for me. To me, multiple machines seem to work better and provide more stability for my uses. Of course this will not be the case for everyone, and I am still testing to see if I can change my mind about it. It sounds really nice to have everything on one server; I am just not sure the advantages will outweigh the disadvantages for me.

I also like using multiple computers in a rack with a KVM switch for ripping my DVDs and Blu-rays. It seems to be much less of a workflow bottleneck than trying to use fewer computers with multiple optical drives. The result is I have my media on hard drives much quicker, and then I can either move the files as-is to my servers AND/OR compress them in batches to be placed on the servers later.
  14. What I did when I finally had a FLASH drive die on me on one of my Unraid servers. It was March 2019, and my main unRaid server, or is that Unraid now..., died. I tried a couple of times to reboot it, removed power from the power supply and pressed the power button to drain it, waited 10 minutes, everything... but nothing I did allowed me to turn the server back on. OK, I guess the power supply may be bad, or I finally had a motherboard/CPU die on me. No big deal, just time to replace the bad parts.

I look at my Plex server and see that I also seem to be having issues with one of my other Unraid servers. OK, probably just a reboot to clear cache memory, as I think it does not have much RAM and it has been a LONG time since a re-boot, even though I do not ever remember having a problem with that server before. I open a web browser and a file manager window on one of the networked computers upstairs, where I live, to look at the Unraid server in the basement. Then I see something that makes me both curious and a bit nervous: I see NO FLASH drive on the server. I try to re-boot from the main web console (NOTE: this is a server running a PRO license on version 4.5.3, as is the prior listed server that will not power up). The server does not respond as expected and does not even want to shut down remotely. At this point I am getting more concerned. This second server is not my main server; I would describe it as my third-priority server, PLUS it contains multiple backups and archives of retired sets of video files that were replaced by, or are master and uncompressed copies from, my TV and movie collection. It is also the server I use for "work" storage: files I am in the middle of processing and organizing before I finalize them and add them to my Plex library for my family to access.

I walk down the stairs and find that a reboot at the computer only yields a message that a boot device cannot be found. OK, no big deal; I have had to change boot devices before on this computer, since it seems to like to re-arrange the boot order of hard drives (the USB FLASH is treated as a hard drive on this computer), sometimes after a power off following a power outage. The next problem I run into, however, is not normal: I am unable to see the USB flash drive to select it to boot from! I take the flash drive and plug it into a few more computers, all with the same result, THEY DO NOT SEE IT! It is just as if I am not plugging the drive in at all. Absolutely NOTHING! It looks like it is time to replace my old trusty 2GB SanDisk Cruzer.

I know there is no problem getting a replacement key; that was my first concern years ago, and I wrote an email about it before buying my first license. I received a nice reply explaining that I would just need to write an email if I ever had a flash drive fail, including the GUID of the new flash drive as well as the order details of the original being replaced. So I have never worried about it. Since then, a very nice automated key replacement method has been added, which makes life even easier and quicker. But remember, I am still using this flash with a much older version of Unraid, 4.5.3, which has no such method for automatic on-line replacement. OK, this server is taking over repair priority now; the main server will just have to wait. For details on the main server repair see this post;

Here are the details on this server, with the flash drive that failed and has now been replaced...
It is an old SuperMicro server that had been retired; I got it for a great deal a few years back.

OS at time of building: Unraid 4.5.3
CPU: AMD Dual-Core Opteron(tm) 2212 HE
Motherboard: SuperMicro H8DME-2 REV 2.01
RAM: 8GB
Case: SuperMicro SC846 series
Drive Cage(s): stock - part of case chassis
Power Supply: stock - 2 ea, redundant configuration
SATA Expansion Card(s): 3 ea SuperMicro SAT2-MV8 PCI-X cards
Cables: stock
Fans: stock
Parity Drive: WDC_WD20EFRX
Data Drives: 2 ea WDC_WD10EACS, 1 ea WDC_WD1000FYPS, 1 ea TOSHIBA_DT01ACA2, 1 ea ST2000DM001, 2 ea WDC_WD20EFRX, 1 ea WDC_WD2003FYPS, 1 ea Hitachi_HUA72302, 4 ea HGST_HDS724020AL, 2 ea Hitachi_HDS72202, 2 ea Hitachi_HUA72202
Cache Drive: -none-
Total Drive Capacity: 31 TB, 18 of 24 bays
Drives Outside of Array: 1 ea Hitachi_HUA722020AL, 1 of 24 bays
Primary Use: Backups and temporary work space
Likes: Nice and reliable
Dislikes: Takes a LONG time to boot
Add Ons Used: unMenu, Preclear
Future Plans: Add more drives as needed till full, only 5 bays open (which will also need an update to 6.x)
Boot (peak): -not measured-
Idle (avg): -not measured-
Active (avg): -not measured-
Light use (avg): -not measured-

Since I did not want to wait for a replacement key for my new USB flash drive, I decided to download and boot a new copy of version 6.x Unraid; that way I would be able to get a replacement key quickly via the web interface. I had never done this before, but I was sure it would be quick and easy, which it was! The only concern I had was that I was not sure whether a new key would also work with older versions of Unraid. My replacement key does. I am still not sure if a brand-new key would or not; I think it likely would, but I doubt anyone will buy a new key to use on old 4.x Unraid anyway.

I followed the procedures to install 6.x on a new flash drive, using the v1.5b USB creator under Windows. I made a couple of USB drives, one with the Stable version download option and one with the local ZIP FILE option. I was a little surprised that only the download option allowed the feature to customize the install, but this was not a concern, and I can see that if you are making a new flash from an archived ZIP you wish to duplicate, the option to customize may not really be wanted.

I then plugged the new USB flash into the server, without remembering whether the hardware was even new enough to run 64-bit, but it was. I was able to set the BIOS for booting and booted into the new 6.x world on this old server with no problems. Of course I was in TRIAL mode, with no new key yet and my drive array not yet mapped. I then copied my backed-up key for the dead USB flash drive to the config directory on the root of the new USB flash drive (sketched after this post). I rebooted and was then presented with the invalid key message and the ability to replace the key on-line. A really good place for the steps to follow is here; https://wiki.unraid.net/UnRAID_6/Changing_The_Flash_Device Though I did not follow that guide fully, the steps are basically the same. I walked through the process to request a replacement key and was able to install the new key on the new flash within just a few more minutes. I re-booted again to make sure everything was working, that all my drives were there, and that they were undefined in version 6.x. This way I knew all was good up to this point, and that I could also update to version 6 in the future, or immediately if the following steps to go back to 4.x failed.
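In case it saves someone time: the registration is just a .key file inside the config folder on the flash drive, so restoring a backed-up key is a file copy. A rough sketch, assuming the flash is mounted at /boot and with example file names:

    # Copy the backed-up registration key onto the new flash drive
    cp /path/to/backup/Pro.key /boot/config/

    # After the reboot, Unraid reports the key invalid for the new GUID
    # and offers the on-line replacement; the new key lands in the same spot
    ls /boot/config/*.key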
This is when I ventured back in time, hoping my new key would work with the old 4.x version I had been using. I am pretty good with backups, and with periodic directory captures of my systems; not as often as I probably should, but not bad. I had MOST of the contents of the old flash drive backed up, since I had long ago created an automated process to build a new USB flash from the install files for unRaid 4.5.3, which most of my servers run. It even sets the Go script and copies over preclear and unMenu, so I have everything needed for a new server... I had just never thought about a rebuild of an old server before. OOPS! No backups of the configuration files I was running, and NO backup of the cool image of the server used in MyMain. The picture I will maybe worry about in the future, maybe not.

I put 4.5.3 on the new USB flash drive, edited the ident.cfg in the config directory with the correct server name (a fragment is shown after this post), kept the new key in the config directory as well, rebooted, and I was able to see my server running again under 4.5.3, properly licensed as PRO! Of course I needed to change the drive mappings, and then I would be ready for the parity rebuild. If I were only using user shares, it would have been easy, but I use drive shares AND user shares depending on the need. (Warning: be careful, as always, in this situation, as data CAN AND WILL be lost if you copy from one type to the other! - YOU HAVE BEEN WARNED.) I needed all my data on the same drives as before to keep everything working properly on my network. I pulled up my most recent directory tree listing of my drives, and it helped with all but the 5 newest drives. For the last 5 drives I needed to do a little more work, looking at my Plex server mappings and my compression tools' configurations. Note that my Plex server is on a separate computer with access to all three of my main Unraid servers' shares. From there I was able to see exactly which drives should be mapped to which Unraid drive slot! :-)

I mapped them, verified that everything on the network that uses resources on this server was again fully functional, and that all media for this server was properly in place in Plex. I then stopped the Unraid server, enabled parity, brought the server back online, and started a parity build. The end result is that everything is now working again on this server, which then let me move back to getting the main server running again (see the link above for that story). Maybe I should look at adding a second CPU and more memory to this server; that would probably be really nice if/when I upgrade it to 6.x of Unraid.
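For anyone repeating this, the ident.cfg edit is trivial; it is a shell-style settings file on the flash. A fragment like the one below is all I changed, with a placeholder name (the exact entries vary by Unraid version, so treat this as a sketch):

    # /boot/config/ident.cfg (fragment, illustrative)
    NAME="Tower3"          # the server name seen on the network
    WORKGROUP="WORKGROUP"  # Samba workgroup, left at the default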
  15. Very true. And since I do not have more PCI slots than I need to use, I also did not look at which channels may be shared with other devices. I guess I should at least look at my BIOS setup to see if I can squeeze a little more performance out of it. Yes, the bottlenecks are there. It has worked well with no performance issues; it would be nice if parity checks were a little faster, but it has not caused any problems.

I will say that switching over to the EVGA 112-CK-NF77-A1 https://www.evga.com/articles/374.asp motherboard (Intel Socket 775, Core 2 Duo E7200 @ 2.53GHz, 2 GB of RAM) has yielded a performance boost over the ECS HT2000 AMD690GM-M2 Rev 1.0A http://www.ecs.com.tw/ECSWebSite/Product/Product_Overview/0/1/789 motherboard (AMD Socket AM2 Athlon 64 X2, 1 GB of RAM) I had been running. I was really looking around to see if I could find a motherboard kicking around, or cheap, with a comparable or better CPU and a full PCI-X slot for my SAT2-MV8, but no such luck, and since this is my main server, I really needed to get the hardware swap done quickly to bring it back on-line. After all, I do have family all over that use Plex... So again, while I could really upgrade further, stability and usability are primary. Now that it is working well again, I hope it will be another 6-plus years before I need to really touch it again. With the age of the Intel-based MB and CPU, I am not really sure what to expect for remaining life, however. From older hardware I would expect more than 15 years of additional life, but with where this was in the RoHS cycle when it was all built, it really could fail within the next 3 years. (Oh, the joys of RoHS and reduced reliability.)