Everything posted by captain_video

  1. I'm using three Supermicro AOC-SASLP-MV8 controllers with the .21 firmware on an Asus A88X-PRO motherboard. The server is a Supermicro 24-bay 4U rack with a SAS846TQ rev 1.02 backplane. I was thinking of upgrading to a version 3.0 backplane and replacing the controllers with the Dell H310s listed here: https://www.ebay.com/itm/Dell-H310-6Gbps-SAS-HBA-w-LSI-9211-8i-P20-IT-Mode-for-ZFS-FreeNAS-unRAID/162834659601?epid=19006955695&hash=item25e9b3b911:g:3TgAAOSwTf9ZWHPf:sc:USPSPriorityMailSmallFlatRateBox!21043!US!-1 The reason I'm looking to upgrade is to increase the transfer rates to the drives, since all of the drives are SATA III versions. The rev 3.0 backplane is configured for SATA III drives and the controllers are already loaded with the IT firmware, so it looks like it could be a simple plug-and-play upgrade. Does this seem like a reasonable approach to increase my transfer rates, or am I just throwing money at a problem that may not be the solution? Is there a different 8-port controller that's recommended?
  2. I've got the linuxserver.io docker installed on my unRAID server (version 6.3.5). I was able to get the libraries set up, but my Nvidia Shield doesn't acknowledge the server as being online. The Shield shows the unRAID server in the Settings, but can't see it as being online. The Plex WebUI sees the Shield as an available player, but it's not listed under the Devices tab. When I checked the Plex Docker log (attached) it shows an IPv6 error. I'm wondering if this is the issue. If so, how do I go about fixing it?
  3. I was planning to upgrade my unRAID setup with dual parity drives. Currently, the largest drives in the array are 4TB, including the parity drive. I was going to install a 2nd 4TB parity drive but I just spotted a deal on an 8TB HGST NAS drive on Newegg that was too good to pass up. I'm always looking to plan ahead for future upgrades and increasing the size of the parity drive is the first step. With my existing setup, a data rebuild or parity check takes approximately 23 hours. If I install the 8TB drive as parity, can I expect that time to double or will it just stop checking parity or rebuilding data after it reaches the 4TB boundary?
  4. Wasn't the data about hard drive failures from Backblaze debunked a while back? FWIW, I have over a dozen Seagate ST4000DM000 drives in my array that I pulled from new external enclosures that have been going strong for several years without a single failure.
  5. So I can only upgrade two drives at a time based on two parity disks? I've been trying to find info on using two parity drives but it doesn't seem to be in the documentation.
  6. I'm in the process of upgrading about 7 drives in my 29-drive array. I also plan on adding a 2nd parity drive to the setup. If I install the 2nd parity drive and then run a parity check, would I then be able to swap out multiple drives simultaneously and rebuild the data? Is there a limit on how many drives I can replace at the same time? I currently have a 24-bay Supermicro server case plus a 5-in-3 Supermicro drive cage (i.e., five 3.5" drive bays that fit into three standard 5.25" bays in a PC case). My system consists of three 8-port Supermicro SATA controllers, six onboard SATA ports, and a 2-port PCI-e controller. 24 of the drives (1 parity and 23 data) are connected to the three 8-port controllers. The other five data drives are connected to the onboard SATA ports, as is a single cache drive. I will connect the cache drive to the 2-port controller and then use the vacated 6th port on the motherboard for the 2nd parity drive. This still allows for a 2nd cache drive that can be added to the 2-port controller.
  7. I converted all of the drives in my 24-disk array to the XFS file system a while back. I upgraded several drives while doing the conversion and it was a long, painstaking process that spanned several weeks. I believe this is the process I used, but it's been a while. You will need at least one new drive to insert into the array, but you could use the parity drive as a data drive for this process instead of using a new drive. Just be aware that the array will be unprotected during the conversion process if you do this. If you do not have an extra SATA port available for a 25th drive, then using the parity drive may be your only solution. The last drive left over after converting all of the drives will then become the new parity drive, so you'll need to work out the sequence for converting the drives. If you're upgrading any of the drives with larger ones, then it makes the process a little less complicated.

     I find that taking a screen shot of the web GUI with all of the drives listed with their serial numbers makes it easy to map the process and keep track of which drives have been formatted and copied and which slots they're assigned to. You may want to take a fresh screen shot after each conversion so you always have the current configuration available as a reference.

     This is basically how I performed the upgrade process. You pre-clear and format a new drive and then copy the data from the smallest drive to it. If the target drive is one you had previously used in the array, you do not need to do a pre-clear. Just format it as an XFS drive and assign it to an available slot. If using the parity drive as a target drive, un-assign it as the parity drive and reassign it to a different slot. When the copy is complete, format the drive that you just copied the data from with XFS. Copy the data from the next larger (or same size) drive to the newly formatted XFS drive. Wash, rinse, and repeat until all drives have been formatted and the data is copied. Assign the last drive in the array as the new parity drive and let it rebuild parity. There's no need to reformat the parity drive. Once it's been assigned as the parity drive, it just automatically becomes the repository for the parity data, regardless of how it's been configured.

     I used a program called TeraCopy in Windows for copying all of the data using a Windows PC. I opened two windows on my PC and navigated to both the source and target drives listed on my network on the TOWER server. This is where you need to know which slots the drives are assigned to, because Windows Explorer only shows the disk numbers (i.e., disk 1, disk 2, etc.). I just dragged and dropped the data from the source drive to the target drive and TeraCopy takes over and manages the copy process. It's a freeware program that works pretty well. I'm sure there are similar programs out there, so use whichever one suits your fancy. (If you'd rather run the copy directly on the server console instead of over the network, see the rsync sketch at the end of this list.)
  8. I've set up multiple shares on my server that made sense at the time. After many years of using unRAID I've found that the names for the shares I set up don't really match the content of the folders anymore so I'd like to rename them. Can I simply go into the shares page and select each share and then rename it? If not I assume my only other option would be to create a new share and simply move the data from the old share folder to the new one and then delete the old share.
  9. Sorry about the semantics. I'm so used to using preclear I forgot that the terminology changes once the drive is inserted into the array. I was unaware that the array isn't taken offline with the newer versions when clearing a disk. That simplifies things considerably and solves my problem, assuming there's nothing wrong with the drives themselves. I've really got to start reading the release notes when I upgrade my software. Update: I installed five disks in the external bays and assigned each one to a slot. I was presented with a warning that all data on the drives would be erased when the array was started. I started the array and the disks are currently being cleared while still leaving the array active and unaffected. I can access files and play videos from the server with no problems. This is exactly what I wanted to do so it's all good. Like I said, I hadn't read the release notes on any current versions of unRAID so I wasn't aware that it had been fixed to clear the drives and still allow the array to be used normally. I was so used to it being unavailable during the clearing process that it never occurred to me that the authors may have changed that functionality. Is it any wonder why I love this program? Been using it for over nine years now and it was the best money I've ever spent on a piece of software. It just keeps getting better.
  10. Got it. I was hoping I wouldn't have to let the array preclear it because it takes it out of commission while the drive is being precleared. Thanks for all of the input. I appreciate all of the comments and suggestions.
  11. All of the drives in question have been partitioned and formatted in unRAID so they will not be precleared automatically when added to a new slot. They were all new, blank drives that had been precleared prior to insertion into the array for the first time. I have never had an issue using the preclear plug-in or the old command line version on a new drive, only with drives that had been removed from the array and replaced. None of the disks in question had reported any errors. I agree that it is disconcerting about the drives that are not completing the preclear process. I intend to run a complete diagnostic on them to see if they're reporting any issues. This whole situation is confusing and quite perplexing. I've re-purposed drives from the array before and never had any issues.
  12. OK, that makes sense about the rebuild with a replacement disk. Thanks for the clarification. However, that brings me back to my original predicament. I can't seem to preclear a disk that was already precleared and then had data added after being inserted into the array. I basically want to wipe it clean so I can install it as a new disk without affecting parity, as you mentioned. I've tried it with several different disks and the preclear function either fails, stalls, or is extremely slow. The last disk I tried to preclear was only 52% through the pre-read process before it stalled. That took 19 hours for a 1.5TB disk.

      Doesn't unRAID write the clear signature to the disk in an area that would be unaffected by a low level format, or am I off base with my thought process? This is why I was asking whether a low level format would zero the disk and still allow unRAID to see it as a precleared disk. If the rest of the disk is zeroed out, then wouldn't that basically be seen by unRAID as a new disk that had been precleared? I'm just trying to figure out how to make this work without having to do a parity rebuild or do a new configuration from scratch.

      In a nutshell, how does one install a drive in an unRAID array that had previously been used in the same array but was replaced, so it is no longer part of the current configuration? I'd like to clear whatever data is on the drive so unRAID sees it as a blank, precleared drive (one way to zero a drive manually from the command line is sketched at the end of this list). Doing a preclear on the drive doesn't appear to be a workable option, at least not using the plug-in. I also tried running the command line preclear script but the drive does not show up as a candidate for being precleared, which makes sense since it had already been precleared previously. I don't really want to install it and do a new configuration because it would do a new parity build with duplicate data on both the old drive and the one that replaced it.

      I'm starting to think that I may have to install all of the drives in the new enclosure, start with a new configuration, and let it rebuild parity even if the drives contain data. Once the parity rebuild is complete I can access the individual drives and delete the data from the drives. Parity will be updated to reflect the deleted data. This would limit the system to a single parity rebuild and still allow me to access the data. Does this seem like the only scenario that will work or do I have other options?
  13. Hmmm, OK. On a side note, I reinstalled one of the drives that failed the preclear attempt and it treated it like a new precleared drive and rebuilt the data from parity. The drive had been in the array previously in a different location and I replaced it with a new, larger drive. After it rebuilt the data on the new drive from parity, I attempted to perform a preclear on the old drive to write all zeroes to it. The preclear got hung up and subsequent attempts to preclear it failed. I took a risk and swapped out the old drive with a smaller one in the array to see what would happen. I figured if it tried to rerun a parity check I'd just stop the array, replace the old drive, restore the original configuration, and then perform a new parity check. Instead, it treated the old drive as a precleared drive and restored the data, even though the drive had data from its former location. I assume the preclear wiped the MBR so it saw it as a blank drive and rebuilt the data on the old drive using parity.
  14. The enclosure is a Supermicro CSE-M35T-1B 5-drive bay that has five discrete SATA inputs. I'm just setting the enclosure on top of the server chassis and using extension cables for the power inputs and 36-inch SATA cables for the signal connections. My server board uses three Supermicro AOC-SASLP-MV8 8-port SATA controllers for the drives in the internal array and parity drive, and a single dual-port SATA III controller (not currently connected to anything). The motherboard has six onboard SATA III ports, one of which is currently used for a 250GB SSD cache drive. I plan to use the five unused onboard SATA ports to connect directly to the enclosure, so speed shouldn't be an issue. Doesn't a low level format write all zeroes to the disk? I'm using the HDD LLF Low Level Format tool from this site: http://hddguru.com/software/ The drives were healthy and working fine before I pulled them from the array. I only replaced them with larger drives for increased capacity.
  15. I just picked up an external 5-bay disk enclosure I'd like to add to my 24-bay server. I have a bunch of smaller disks that came out of my unRAID array that I had previously updated and want to use in the external enclosure. I have been attempting to run a preclear on the drives and it either fails or runs painfully slow. I'm using the preclear plug-in on unRAID Server Pro version 6.3.2 with a 24-disk array (including a single parity drive and a cache drive). I have tried using both current versions of the plug-in. Do I even need to run a preclear on the drives? My main concern is that they still contain data that would be a duplicate of what's already on the replacement drives. If I add them to the array I assume that it will automatically rebuild parity with the data that's on the drives. I suppose I could stop the parity rebuild and then delete the data manually, but chances are I'd have to rebuild parity again from scratch since it's unlikely I could halt it in time. I'm trying to figure out the best approach to reintroducing drives with existing data that I would like to clear. Can I just run a low-level format and then insert them into the array without having to do another pre-clear?
  16. I have tried connecting the new drive to a spare SATA port on the motherboard with all other drives connected. When I run the command "preclear_disk.sh -l" to list available drives to preclear, the new drive does not appear in the list. IIRC, the only drive displayed is the cache drive, but it's been a while since I've tried it. I'm just curious if there's some kind of workaround that allows me to keep all of the system drives connected and still have the new drive recognized (a couple of commands for checking whether the kernel sees the drive at all are sketched at the end of this list). I have only recently installed the preclear plugin, but I haven't tried it yet. All previous preclear functions have been run from a command line.
  17. I'm running unRAID Pro 6.1.7 with one parity drive, 23 data drives, and one cache drive. If I want to perform a preclear_disk on a new drive, I've had to disconnect the cache drive and use that port to attach the new drive to preclear. I'm using three Supermicro AOC-SASLP-MV8 8-port SATA controllers for the 24 drives and one port on the motherboard for the cache drive. I've got five more SATA ports on the motherboard, but if I try to use one of them to preclear the new drive, it's not recognized by unRAID. Is there any way I can use the extra ports on the motherboard to preclear a drive without having to swap it out with one of the other system drives? Right now it appears that the number of ports recognized by unRAID maxes out at 25.
  18. I reinstalled the third AOC-SASLP-MV8 controller card and booted up the system. Two of the cards initialized during POST and displayed the drives connected to them. All of the yellow LEDs on each card lit up as it was being scanned. The third card did not light up at all during POST. After the two cards were initialized, the system rebooted and started booting unRAID. When it progressed to the point of polling all of the drives, I noticed that the LEDs on the third card lit up in sequence. UnRAID finished booting and it appeared that all of the drives were showing up. I checked the web GUI on my main PC and all of the drives were there. I started the array and everything came online normally. I then stopped the array and powered it down to install my 250GB SSD as the cache drive. Everything booted back up fine and I assigned the new drive as the cache drive. I forgot to set the file system for the drive so it formatted it as btrfs by default. Since the rest of the array had been formatted as xfs, I stopped the array and reformatted the cache drive with xfs. Now I'm up and running with the new motherboard and SSD cache drive using three 8-port controllers. I'm a happy camper.
  19. Good question, but one that I can't answer. The third controller would never seem to initialize (i.e., no scanning of the controller during POST and no LEDs on the controller lighting up). I never let it fully boot into unRAID to see if the drives were all being recognized. I suppose I could reconnect the third controller and see if that works. Thanks for the tip. It's certainly worth a shot. Unfortunately, I probably won't be able to get to it until the weekend.
  20. I upgraded my old server motherboard to an Asus A88X-PRO board with an AMD A10 7700K CPU. The board has three PCI-e x16 slots (two 2.0 and one 3.0), two PCI-e x1 slots, and two PCI slots. I installed three Supermicro AOC-SASLP-MV8 controllers in the x16 slots and it only recognizes two of them. I've checked all of the UEFI BIOS settings and forced it to use the onboard APU graphics in case it was looking for a graphics card in the first x16 slot. I've tried various combinations of controllers in all slots and it works fine with just two controllers, regardless of which slots they're installed in. When I add the third controller it still only sees two of them. The controllers were tested previously in my old configuration and they all work fine. I actually have four identical controller cards, all upgraded to firmware 0.21 with INT13 disabled and RAID disabled, and only two of them are being recognized.

      I had maxed out the SATA ports and PCI-e slots on my old board to get a 24-drive setup (two 8-port SATA controllers, six onboard SATA ports, and one dual-port Sil3132 PCI-e x1 SATA controller). I wanted to add a cache drive but had nothing to connect it to. I was hoping the Asus board would allow me to use three 8-port controllers and free up the onboard SATA ports. Right now I'm back to a similar setup using all six onboard ports, two 8-port controllers, and one 2-port controller. I can add another 2-port controller for the cache drive, but I would prefer to use the three 8-port controllers and keep the onboard ports available for any future expansion or additional cache drives.

      Is there something that would be preventing the third controller from being recognized? Am I missing some setting in the UEFI BIOS that would cause this? This one has me completely stumped. I should add that the server is running fine with the current setup. I'm loving the new unRAID 6.0. I just finished converting all of my reiserfs drives to xfs and it's working great!
  21. Very weird thing happened to me yesterday. I went back and read through some of the older threads about turning off RAID capability on the AOC-SASLP-MV8 cards and the fact that they will cause a long boot time if it isn't disabled. I had implemented this on both of my original controllers quite some time ago, yet the long boot time persisted. I decided to take another crack at it and I found my old firmware 0.15 files with the edited 6480.txt file. I created a new boot flash drive with the files on it and set about updating the card. As expected, it failed the first attempt so I ran it again. This time it succeeded. I swapped the card with one currently running in my unRAID server and booted it up. This time it booted smoothly with no delay after scanning the controllers. I'm kicking myself for not trying this again after it didn't work the first time I tried it. For several years now I've been dealing with extremely long boot times because of this. Live and learn.
  22. The same thing holds true when using multiple SASLP cards. You get a single BIOS screen when hitting Ctrl+M and you can toggle between the cards. Both cards are scanned and the drives are displayed during POST.
  23. I've been running two Supermicro AOC-SASLP-MV8 controllers in my server for many years. A few years ago I upgraded the firmware to version 0.21. Since that time it takes the server about 7-8 minutes to boot. It scans each controller and then displays the Ctrl+M message at the bottom of the screen and just sits there. After a while it reboots, scans the 2-port controller, and then boots up unRAID. My motherboard has six SATA ports, so the 2-port card along with the two 8-port cards gives me a total of 24 SATA ports for one parity drive and 23 data drives. The motherboard only has two PCIe x16 slots and one x1 slot as well as a single PCI slot. I've been wanting to add a cache drive to the array but had no ports available.

      I found a good deal on an Asus A88X-PRO motherboard and an AMD A10 7700K processor at MicroCenter. The plan was to pick up a third AOC-SASLP-MV8 controller and run the cache drive from one of the onboard ports. I picked up a used card on ebay that turned out to be DOA. I tried it in several motherboards, including the current server board, and it wasn't even recognized. The seller was kind enough to give me a partial refund and told me to keep the card so as not to have to pay for return shipping. The added bonus was that it came with both high and low profile brackets. I picked up another used card on ebay and this one worked fine and also had the 0.21 firmware. I did have to go into the BIOS and disable the INT13 setting, but at least I didn't have to deal with the BIOS update. In fact, it works so well that I no longer have a long wait to boot up the server. Once it scans both controllers it reboots, scans the 2-port controller, and goes straight into unRAID.

      I did some further experimentation using combinations of the original cards and the two new ones I purchased. The bad card simply would not allow itself or the other controller to be recognized at bootup, so it is definitely a dead card. I decided to keep the new good card and the original good card that didn't hang on bootup. I have since purchased another used card on ebay and I'm keeping my fingers crossed that this one will work as well. The whole point of this diatribe is to make people aware that buying used controllers can sometimes be a crapshoot. I know at least one of the two cards originally installed in the server was purchased new from Newegg, but I don't recall where the other was purchased (it may also have come from Newegg).
  24. It would appear that I was premature in my concern. I removed one of the drives from my array to free up a slot for transferring data. I initially installed a new drive in the free slot, formatted it with XFS, and transferred the data from an existing drive to the new one. I ran into the invalid configuration error when swapping the new drive with the original source drive, causing me to reset the configuration and reassign all of the drives in the array. I just performed a disk copy from another existing drive to the original source drive and then swapped the two drive positions. The array came up and just recorded the new drive positions with no errors. Apparently, it's only an issue when introducing a new drive into the configuration. If using existing drives it recognizes the fact that you moved them to different slots and allows the array to start normally.
  25. I recently upgraded to unRAID 6.0 and I'm in the process of changing the filesystem from reiserfs to XFS. I've got multiple shares set up and I want to be able to keep them intact as much as possible during the upgrade so as to keep the server functional during the transition. Here's the process I'm using for the conversion:
      1. One slot is used as the destination drive (slot 16) for all data transfers.
      2. Destination drive is formatted with XFS.
      3. Source drive (slot 23) data is copied to a sub-folder on the destination drive (slot 16).
      4. Source drive and destination drive are swapped after the copy is complete. The destination drive is now in the same location as the original drive (slot 23) so that shares are intact.
      5. Old source drive is configured to be formatted with XFS in the destination drive slot (slot 16).
      6. Array will not start and an invalid configuration error is displayed due to too many drive changes.
      7. Go to the Tools tab and apply New Config.
      8. Reassign all drives to their original locations with the source and destination drives swapped (slots 16 and 23).
      9. Start the array and format the destination drive (formerly drive 23, now in slot 16) with XFS.
      10. Data in the sub-folder on the previous destination drive (formerly drive 16, now in slot 23) is moved to the root of the drive and the sub-folder is deleted.
      11. Wash, rinse, and repeat using a different source drive (slots 1-15 & 17-22) and the same destination drive location (slot 16).
      My question is whether there's a way I can swap two drives and start the array without having to reset the configuration each time? I know I could simply keep the drives in their original locations and maintain the same configuration, but then my shares are no longer valid as some drives are excluded from some shares, so it gets a bit messy. BTW, I'm using TeraCopy in Windows for the data transfers and it works great.
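
A note on the disk-to-disk copy step described in the conversion posts above (7 and 25): TeraCopy over the network works, but the same copy can be run directly on the unRAID console. This is only a minimal sketch, assuming the standard /mnt/diskN mount points; the disk numbers shown are placeholders, so substitute your own source and destination slots.

    # Copy everything from the source disk to the destination disk, preserving
    # permissions, timestamps, and extended attributes (-a and -X); -v lists files as they go.
    # Trailing slashes matter: this copies the contents of disk23 into the root of disk16.
    rsync -avX /mnt/disk23/ /mnt/disk16/

    # Optional second pass as a dry run (-n) to list anything that still differs
    # before reformatting the source disk.
    rsync -avXn /mnt/disk23/ /mnt/disk16/

Running the copy on the server keeps the data off the network entirely, which is usually faster than a Windows-side copy, though you lose TeraCopy's GUI progress view.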
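On the question in post 12 about wiping a previously used drive: one generic way to zero a disk manually is plain dd, sketched below. This is a hedged example, not the preclear script itself; /dev/sdX is a placeholder device name, and zeroing the disk this way does not write the preclear signature, so unRAID will still run its own clear when the disk is added to a parity-protected array (which, per post 9, no longer takes the array offline on current versions).

    # Identify the target drive first; this destroys everything on it.
    lsblk -o NAME,SIZE,MODEL

    # Write zeroes across the whole device (replace sdX with the drive you verified above).
    dd if=/dev/zero of=/dev/sdX bs=1M status=progress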
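For posts 16 and 17, where a new drive on a spare motherboard port doesn't show up as a preclear candidate, a quick sanity check is to confirm the kernel sees the drive at all, independent of the preclear script. These are generic Linux commands, nothing unRAID-specific assumed:

    # Show every block device the kernel has enumerated, with size, model, and mount point.
    lsblk -o NAME,SIZE,MODEL,MOUNTPOINT

    # The stable by-id names include model and serial number, which makes it easy
    # to match a listing to the physical drive; filter out the partition symlinks.
    ls -l /dev/disk/by-id/ | grep -v -- -part

If the drive appears here but not in the preclear listing, the port and cabling are fine and the issue is with how the script selects candidate drives rather than with the hardware.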