stv29 Posted February 12, 2017
I'm sorry to be so needy recently; I very rarely tinker with hardware, but necessity arose. My parity drive was on its last spins, and I decided to move the entire system to a new case. Everything went fine until it was time to replace the parity drive. I previously had it in a RAID0 on an ARC-1222, 2×4 TB drives for 8 TB total. The only thing I did was swap the dying drives and pre-clear for two rounds; I left the truncation off. Now, when I select it as the parity drive and start the array, it acts like it is going to start, then stops and brings me back to the main screen to start the server again. It does the same thing when I assign it as a regular disk and start. Leaving it unassigned, I can start the array unprotected. What did I do wrong? I've attached my log from a fresh reboot and starting as a parity drive. Any assistance or insights someone can offer is greatly appreciated. Thank you!
Sys_Log.txt
JorgeB Posted February 12, 2017
This should help: https://lime-technology.com/forum/index.php?topic=48508.msg500313#msg500313
stv29 Posted February 12, 2017 Author
I changed the "Display world-wide-name" device ID setting from "Disabled" to "Automatic", but unfortunately I still can't assign the parity drive. It does the same as before.
Sys_Log.txt
JorgeB Posted February 12, 2017
Strange, are you sure you applied the setting?
stv29 Posted February 12, 2017 Author
Yes, sir. I attached an image of the current setting. Is this correct?
JorgeB Posted February 12, 2017
Correct. No idea then, unless there's a regression bug in v6.3.
stv29 Posted February 12, 2017 Author
Would initiating the "new config" do any harm? I knew I was in for a lengthy parity rebuild, but I certainly don't want to lose anything.
stv29 Posted February 13, 2017 Author
I tried "new config"; it didn't work. I've started syncing everything with my offsite backup and will start over with this one. Thank you for your help!
JorgeB Posted February 13, 2017
Try with v6.2.4 to see if it's a v6.3 issue/bug.
stv29 Posted February 21, 2017 Author
Tried with 6.2.4, same settings as previously suggested, no success.
JorgeB Posted February 21, 2017
I'm guessing the problem is the invalid partition error after unRAID creates it:

Feb 12 11:17:37 Galactica emhttp: shcmd (55): sgdisk -o -a 64 -n 1:64:0 /dev/sdb |& logger
Feb 12 11:17:38 Galactica kernel: sdb: sdb1
Feb 12 11:17:38 Galactica root: Creating new GPT entries.
Feb 12 11:17:38 Galactica root: The operation has completed successfully.
Feb 12 11:17:38 Galactica emhttp: shcmd (56): udevadm settle
Feb 12 11:17:38 Galactica emhttp: invalid partition(s)

Post the output of:

sfdisk /dev/sdb

Check that parity is still sdb; if not, use the current identifier.
stv29 Posted February 21, 2017 Author
On 21/02/2017 at 7:47 PM, johnnie.black said:
Post the output of: sfdisk /dev/sdb
Check that parity is still sdb; if not, use the current identifier.

Thank you for the reply. I'm currently pre-clearing all the disks; I moved everything off the server to start fresh. While poking around the forum I found that Areca has an updated driver, and that v6.0 (what I came from before upgrading) had native support. When I'm able to rebuild the kernel after the pre-clear, I'll attempt the updated driver as well.
stv29 Posted February 23, 2017 Author
I've cleared all the drives and started fresh with 6.2.4, using the module from Areca's site for this version. I tried 6.3.2, but the module was in an improper format. (I also tried 6.3.2 without Areca's drivers, following the thread below, without success.) I set the display name to automatic, installed my Pro key, and assigned the parity drive and one data drive. Same results as before. I have confirmed the parity drive is still assigned to sdb. Any guidance is greatly appreciated. Thank you!
Log.txt
JorgeB Posted February 23, 2017
On 21/02/2017 at 7:47 PM, johnnie.black said:
Post the output of:
sfdisk /dev/sdb
JorgeB Posted March 17, 2017
On 23/02/2017 at 10:09 PM, stv29 said:
Any guidance is greatly appreciated. Thank you!

In case it still matters, the problem is that your Areca controller is presenting 4 KiB logical sectors:

Feb 12 14:55:49 Galactica kernel: sd 1:0:0:0: [sdb] 1953508992 4096-byte logical blocks: (8.00 TB/7.28 TiB)

unRAID currently only supports 512-byte sectors; I don't know if that setting is configurable on the controller.
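(Editor's aside: the size in that kernel log line is easy to sanity-check — 1953508992 logical blocks of 4096 bytes each works out to the full 8 TB of the 2×4 TB RAID0 set, confirming the controller really is exporting 4 KiB sectors rather than the usual 512 bytes:)

```shell
# Multiply the kernel-reported block count by the 4096-byte logical
# sector size to reproduce the capacity figures in the log line.
bytes=$((1953508992 * 4096))
echo "$bytes bytes"                    # 8001572831232 bytes
echo "$((bytes / 1000000000000)) TB"   # 8 TB  (decimal, truncated -> "8.00 TB")
echo "$((bytes / 1099511627776)) TiB"  # 7 TiB (binary, truncated from ~7.28)
```

Had the controller exported 512-byte sectors, the same 8 TB volume would have shown roughly 15.6 billion blocks instead.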
stv29 Posted March 21, 2017 Author
It is, and that worked!