
Talos

Members
  • Posts: 104
  • Joined
  • Last visited

Everything posted by Talos

  1. Yeah, that was my concern. With two drives showing the seek errors, and only being able to replace them one at a time while keeping the array parity protected, I'm not sure the other drive would survive a parity rebuild... I was thinking I would have to break the array, restart parity, and then copy the info from the removed drives as you said via Unassigned Devices. The data on those two erroring drives is just movies/TV shows, which I've made a full listing of, so worst case scenario I can just reacquire it if a drive dies completely in the middle of the process. So by doing a "New Config" it will not erase any of the existing disks in my array, and it will allow me to build new parity based on the new drives I've inserted plus those I'm keeping from the existing array?
  2. Hello,

     Recently I had two drives in my array show up as failing SMART due to Seek Error Rate thresholds being exceeded. They've since stopped showing this, but I don't trust those drives now, and I'd been planning on changing up my array anyway, so I bought 4x 16TB drives to replace those two and some others in my array.

     My current configuration is:

     Parity - 18TB
     Disk 1 - 8TB
     Disk 2 - 6TB - this is one of the dying drives
     Disk 3 - 6TB - this is the other dying drive
     Disk 4 - 3TB
     Disk 5 - 3TB
     Disk 6 - 3TB
     Disk 7 - 6TB
     Disk 8 - 8TB
     Disk 9 - 18TB
     Disk 10 - 6TB

     I want to remove the two drives that showed the SMART errors and consolidate the 3TB and 6TB drives onto single 16TB drives, shrinking my array from 11 drives down to 8 and ending up with the following configuration:

     Parity - 18TB
     Disk 1 - 16TB
     Disk 2 - 16TB
     Disk 3 - 16TB
     Disk 4 - 16TB
     Disk 5 - 8TB
     Disk 6 - 8TB
     Disk 7 - 18TB

     I'm hoping someone can suggest the best method to do this. I've only ever expanded my array by adding more or larger drives over the past 15 years; I've never actually shrunk the number of drives in the array. My end goal is a smaller case and a less power-hungry setup, to try to save on some electricity running costs as well as claim back some desk space. I've already pre-cleared the 4x 16TB drives and they are all sweet. Any guidance would be greatly appreciated. Cheers,
  3. Thanks for clarifying my options itimpl. I finished the array expansion overnight and all is working as expected. Cheers!
  4. Hi,

     I currently have a 9-drive array. It consists of:

     Parity - 6TB (X300 Toshiba)
     Disk 1 - 6TB (X300 Toshiba)
     Disk 2 - 6TB (X300 Toshiba)
     Disk 3 - 6TB (X300 Toshiba)
     Disk 4 - 3TB (Hitachi 7200rpm)
     Disk 5 - 3TB (Hitachi 7200rpm)
     Disk 6 - 3TB (Hitachi 7200rpm)
     Disk 7 - 3TB (Hitachi 7200rpm)
     Disk 8 - 3TB (Hitachi 7200rpm)

     I've just moved my system into a new case (a Meshify 2), which gives me capacity for 11 drives. I've bought two new 8TB IronWolfs and plan to expand my array by two drives. I've already pre-cleared both new IronWolf drives and both came back clean. The new array configuration I want is:

     Parity - 8TB (new IronWolf)
     Disk 1 - 6TB (same position)
     Disk 2 - 6TB (same position)
     Disk 3 - 6TB (same position)
     Disk 4 - 3TB (same position)
     Disk 5 - 3TB (same position)
     Disk 6 - 3TB (same position)
     Disk 7 - 3TB (same position)
     Disk 8 - 3TB (same position)
     Disk 9 - 8TB (new IronWolf)
     Disk 10 - 6TB (old parity)

     I've had a search on the forum and seen the posts about the Parity Swap method, and other people saying to just replace the parity drive, but I'm confused and don't want to risk blanking my array and losing all my data. Previously when I've upgraded my drives I've built a whole new server and copied the data from the old server to the new one, but I don't have the luxury of doing that this time. So what is the best method for me to 1) upgrade my parity to support the larger drives in the array, and 2) repurpose the old parity into the array and add in the second IronWolf? Any help would be greatly appreciated. Cheers!

     Edit: Forgot to mention I'm running Unraid Pro 6.8.3 currently.
  5. For my personal situation, I run my server off a UPS, so I haven't had an ungraceful powerdown for many years now. I also only use it as a media/file server; I don't run any complex Dockers or VMs or anything that risks crashing the server. This is mainly because I only run it off a Celeron 550 with 8GB RAM, so it doesn't have heaps of excess grunt for those. Perhaps the negatives of BTRFS wouldn't be a big obstacle for me compared to the positive of bitrot detection. Might do a bit more reading while I wait for the pre-clears to finish on those drives to see which path I decide to take.

     Edit: OK, I've done a bit more reading and I think I will just stick with XFS and run the risk with bitrot.
  6. OK cool, thanks mate... So maybe my best approach might be to build a new array with the new drives and copy the content over to those, then add in the other drives one by one, copying the data over to each after I reformat them to the new filesystem. So now, which filesystem do I go with? Is BTRFS better with its bitrot detection, or does the stability of XFS make it the preferred filesystem?
  7. Hi guys,

     I've been running Unraid for a long time now (~15 years) and have reached the capacity of my current server. I currently have 9x 3TB drives in there and have just bought 4x 6TB drives to start upgrading the capacity; they are currently undergoing preclear to ensure there are no issues with them. My existing array is using the ReiserFS filesystem, and I've noticed some posts indicating that XFS or BTRFS are the preferred options nowadays. I don't have slots in my server to include more drives, so I will be replacing 4 of the existing 9 drives (including the parity). I've been trying to find a definitive guide on how to upgrade my server capacity and also change the filesystem, but I'm still confused. I obviously have to replace my parity drive first with a 6TB drive to make sure it is the largest drive in the system, but when I go to replace the other drives one by one and rebuild my data onto them, will it rebuild back to ReiserFS, or can I tell it to change to XFS/BTRFS? And if so, which is the best option to choose?

     Cheers!
     Talos
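A rebuild always recreates the old filesystem, so conversions are normally done by copying disk-to-disk rather than rebuilding. The sketch below shows one such pass with a checksum verification at the end; the /mnt/diskN paths in the comments are illustrative assumptions, and temporary directories stand in so the script is safe to run.

```shell
# Hedged sketch of one ReiserFS-to-XFS conversion pass: copy an old disk's
# contents onto a freshly XFS-formatted replacement, then verify with a
# checksum dry run. /mnt/diskN paths are illustrative only.
OLD=$(mktemp -d)    # stands in for e.g. /mnt/disk4 (ReiserFS)
NEW=$(mktemp -d)    # stands in for e.g. /mnt/disk9 (newly formatted XFS)
echo "episode 01" > "$OLD/show.mkv"

rsync -a "$OLD"/ "$NEW"/    # copy everything, preserving attributes

# Verify: -c forces checksum comparison, -n makes it a dry run, -i itemises
# any differences. An empty result means the copy matches the source.
MISMATCH=$(rsync -nrci "$OLD"/ "$NEW"/)
echo "mismatched files: ${MISMATCH:-none}"
```

Only after the verify comes back clean would the old disk be reformatted and become the target for the next disk's copy.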
  8. This would have to be damn near a perfect case for most setups. I've only just recently dropped from 3x 5-in-3 hotswaps to 3x 3-in-3 hotswaps. If this had been available at the time, for sure I would have grabbed it instead. Will definitely consider this for any future builds I do.

     Sent from my Galaxy Nexus using Tapatalk
  9. Thanks Joe, just what I wanted to hear. I was a bit worried by the rather large numbers at the end of the read and seek error lines. Still don't know enough about interpreting these SMART reports...
  10. I've just whacked in a Seagate 1.5TB 11-series drive to use as my parity drive and run preclear over the drive overnight. Thought I'd get the results below checked before I assign the drive, as it reported back a few more things than my WD10EADS drives did when I ran them through preclear. Does the drive appear OK to use, or is there something I should be looking at?

      ===========================================================================
      =                unRAID server Pre-Clear disk /dev/sdg
      =                       cycle 1 of 1
      = Disk Pre-Clear-Read completed                                 DONE
      = Step 1 of 10 - Copying zeros to first 2048k bytes             DONE
      = Step 2 of 10 - Copying zeros to remainder of disk to clear it DONE
      = Step 3 of 10 - Disk is now cleared from MBR onward.           DONE
      = Step 4 of 10 - Clearing MBR bytes for partition 2,3 & 4       DONE
      = Step 5 of 10 - Clearing MBR code area                         DONE
      = Step 6 of 10 - Setting MBR signature bytes                    DONE
      = Step 7 of 10 - Setting partition 1 to precleared state        DONE
      = Step 8 of 10 - Notifying kernel we changed the partitioning   DONE
      = Step 9 of 10 - Creating the /dev/disk/by* entries             DONE
      = Step 10 of 10 - Testing if the clear has been successful.     DONE
      = Disk Post-Clear-Read completed                                DONE
      Disk Temperature: 32C, Elapsed Time: 19:23:44
      ============================================================================
      ==
      == Disk /dev/sdg has been successfully precleared
      ==
      ============================================================================
      S.M.A.R.T. error count differences detected after pre-clear
      note, some 'raw' values may change, but not be an indication of a problem
      54c54
      <   1 Raw_Read_Error_Rate     0x000f   100   100   006    Pre-fail  Always       -       17417
      ---
      >   1 Raw_Read_Error_Rate     0x000f   118   100   006    Pre-fail  Always       -       181712693
      58c58
      <   7 Seek_Error_Rate         0x000f   100   253   030    Pre-fail  Always       -       732
      ---
      >   7 Seek_Error_Rate         0x000f   100   253   030    Pre-fail  Always       -       141171
      63,66c63,66
      < 188 Unknown_Attribute       0x0032   100   253   000    Old_age   Always       -       0
      < 189 High_Fly_Writes         0x003a   100   100   000    Old_age   Always       -       0
      < 190 Airflow_Temperature_Cel 0x0022   069   069   045    Old_age   Always       -       31 (Lifetime Min/Max 26/31)
      < 195 Hardware_ECC_Recovered  0x001a   100   100   000    Old_age   Always
      ---
      > 188 Unknown_Attribute       0x0032   100   100   000    Old_age   Always       -       0
      > 189 High_Fly_Writes         0x003a   088   088   000    Old_age   Always       -       12
      > 190 Airflow_Temperature_Cel 0x0022   068   060   045    Old_age   Always       -       32 (Lifetime Min/Max 26/40)
      > 195 Hardware_ECC_Recovered  0x001a   052   044   000    Old_age   Always
      69,72c69,72
      < 199 UDMA_CRC_Error_Count    0x003e   200   253   000    Old_age   Always       -       0
      < 240 Head_Flying_Hours       0x0000   100   253   000    Old_age   Offline      -       256289288486912
      < 241 Unknown_Attribute       0x0000   100   253   000    Old_age   Offline      -       0
      < 242 Unknown_Attribute       0x0000   100   253   000    Old_age   Offline      -       776
      ---
      > 199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       0
      > 240 Head_Flying_Hours       0x0000   100   253   000    Old_age   Offline      -       210333138419731
      > 241 Unknown_Attribute       0x0000   100   253   000    Old_age   Offline      -       3172977585
      > 242 Unknown_Attribute       0x0000   100   253   000    Old_age   Offline      -       120762575
      ============================================================================
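The huge raw numbers above look alarming but are mostly operation counters. On many Seagate drives the raw Seek_Error_Rate value is widely interpreted as a packed field: actual seek errors in the bits above 32, total seeks performed in the low 32 bits. Seagate does not publish this layout per model, so treat the sketch below as a community rule of thumb rather than a documented format.

```shell
# Hedged sketch: decode a Seagate Seek_Error_Rate raw value under the
# commonly cited interpretation (errors in the high bits, total seeks in
# the low 32 bits). This layout is an assumption, not vendor-documented.
raw=141171                       # raw Seek_Error_Rate from the diff above
errors=$(( raw >> 32 ))          # high bits: genuine seek errors
seeks=$(( raw & 0xFFFFFFFF ))    # low 32 bits: total seeks performed
echo "seek errors: $errors, total seeks: $seeks"
```

Decoded this way, 141171 is roughly 141 thousand seeks with zero errors, which matches the reassurance given in the thread.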
  11. Heh... Thanks heaps for the explanation, Joe. Guess I saw the CRC error count line, panicked, and assumed it was bad. I will have a bit more of a read of the wiki link.
  12. Ran preclear on my first disk (a brand new WD10EADS) in my new system last night. It came back with a few errors. Below is a copy-and-paste from the telnet session.

      System specs are as follows:
      Asus P5Q-Deluxe
      E6400 underclocked to 1600MHz
      2GB G.Skill DDR2-800
      2x Adaptec 1430SA RAID cards
      3x Norco SS-500 hotswap modules
      Seasonic M12-700W
      Lian Li PC-A17 case
      AHCI enabled in BIOS
      Adaptec RAID BIOSes disabled

      ===========================================================================
      =                unRAID server Pre-Clear disk /dev/sda
      =                       cycle 1 of 1
      = Disk Pre-Clear-Read completed                                 DONE
      = Step 1 of 10 - Copying zeros to first 2048k bytes             DONE
      = Step 2 of 10 - Copying zeros to remainder of disk to clear it DONE
      = Step 3 of 10 - Disk is now cleared from MBR onward.           DONE
      = Step 4 of 10 - Clearing MBR bytes for partition 2,3 & 4       DONE
      = Step 5 of 10 - Clearing MBR code area                         DONE
      = Step 6 of 10 - Setting MBR signature bytes                    DONE
      = Step 7 of 10 - Setting partition 1 to precleared state        DONE
      = Step 8 of 10 - Notifying kernel we changed the partitioning   DONE
      = Step 9 of 10 - Creating the /dev/disk/by* entries             DONE
      = Step 10 of 10 - Testing if the clear has been successful.     DONE
      = Disk Post-Clear-Read completed                                DONE
      Disk Temperature: 27C, Elapsed Time: 15:01:48
      ============================================================================
      ==
      == Disk /dev/sda has been successfully precleared
      ==
      ============================================================================
      S.M.A.R.T. error count differences detected after pre-clear
      note, some 'raw' values may change, but not be an indication of a problem
      54c54
      <   1 Raw_Read_Error_Rate     0x002f   100   253   051    Pre-fail  Always       -       0
      ---
      >   1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       0
      63c63
      < 193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       6
      ---
      > 193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       7
      67c67
      < 199 UDMA_CRC_Error_Count    0x0032   200   253   000    Old_age   Always       -       0
      ---
      > 199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0
      ============================================================================

      These errors are bad, right?
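Reading a SMART line like the ones above, the health call comes from the normalised VALUE versus THRESH columns, not the raw number on the right: an attribute only counts as failing when VALUE drops to or below THRESH. A minimal sketch of that rule, using one line copied from the diff above (in practice you would feed in output from `smartctl -A`):

```shell
# Hedged sketch of the VALUE-vs-THRESH rule: SMART flags an attribute as
# failing only when its normalised VALUE falls to or below THRESH; the raw
# column is vendor-specific and often just a counter. Sample line taken
# from the preclear diff above.
line='199 UDMA_CRC_Error_Count 0x0032 200 200 000 Old_age Always - 0'
set -- $line                 # split the line into whitespace-separated fields
value=$4                     # normalised current value (here 200)
thresh=$6                    # failure threshold (here 000)
if [ "$value" -gt "$thresh" ]; then
  echo "attribute healthy: VALUE $value is above THRESH $thresh"
else
  echo "attribute failing: VALUE $value at or below THRESH $thresh"
fi
```

By this rule, every changed attribute in the diff above is still healthy; the 253-to-200 shifts are just initial placeholder values settling after first use.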