PeterB Posted March 2, 2011 One small question - what has become of /proc/acpi/sleep? It seems to have vanished in 5. See here: http://acpi.sourceforge.net/documentation/sleep.html I thought that it might be something like that. unMENU still tries to invoke sleep. I guess that the s3sleep scripts will require some attention! I wonder whether s2ram is still appropriate.
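For reference: recent kernels removed /proc/acpi/sleep in favour of the sysfs interface, so s3sleep-style scripts need /sys/power/state instead. A minimal shell sketch (the suspend command itself is left as a comment -- it requires root and would actually put the machine to sleep):

```shell
# Probe the sysfs power interface that replaced /proc/acpi/sleep.
if [ -r /sys/power/state ]; then
    states=$(cat /sys/power/state)     # typically something like "mem disk"
    echo "supported sleep states: $states"
else
    states=""
    echo "no /sys/power/state on this system"
fi
# To enter S3 (suspend-to-RAM), an s3sleep script would run, as root:
#   echo -n mem > /sys/power/state
```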
SCSI Posted March 2, 2011 Successfully updated to 5b6 from 4.7 on my server. I did not have to re-add my disks. All six of my HDs were detected and valid. Currently running a parity check.
PeterB Posted March 2, 2011 For some reason, mover is not being run automatically on 5.0b6. It was fine on b4. I have the schedule set to 0 */2 * * *. In other words, it should run every 2 hours. It hasn't run at 8am, 10am or 12. I have just clicked 'Apply' on the Mover Settings, and I have a log entry: Mar 2 12:20:23 Tower emhttp: shcmd (128): crontab -c /etc/cron.d - <<< "# Generated mover schedule: 0 */2 * * * /usr/local/sbin/mover 2>&1 |logger" I'll wait and see whether it runs at 2pm. It worked when I invoked it manually at 8:10 am. ================================================================================================== The conclusion is that, for some reason, after upgrading from 5.0b4 to 5.0b6 the schedule set for mover was being ignored - mover just wasn't being run. Simply clicking 'Apply' on the mover settings restored normal functionality, which has survived a reboot.
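As a sanity check on the schedule itself, the */2 hour field expands exactly as PeterB expects: every even hour, with minute field 0 meaning "on the hour". A quick shell expansion (no crontab is touched):

```shell
# Expand the hour field of "0 */2 * * *" the way cron does: start at 0,
# advance by the step after the slash, stop at 23.
field='*/2'
step=${field#*/}                          # strips the leading "*/", leaving 2
hours=$(seq 0 "$step" 23 | tr '\n' ' ')
echo "mover would run at hours: $hours"   # 8, 10 and 12 are all expected fire times
```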
SSD Posted March 2, 2011 Download Please read: There's a bug that started with 5.0-beta4 where the driver is writing the 'config/super.dat' file in the wrong format. If you are running -beta4, -beta5, -beta5a, or -beta5b, you must delete the file 'config/super.dat' before booting 5.0-beta6. This means you must re-assign all your hard drives again (sorry). If you are coming from 4.7, you will not need to do this. Please read the Release Notes located on the unRAID wiki. In particular, uninstall or disable any 3rd party add-ons until the add-on author has verified correct operation with this release. Tom - Could you provide a little more detail on this bug, in particular what is needed to avoid it? Some people have begun using the 5.0 betas, but with the MBR issue in beta6 are hesitant to upgrade. They might like to stick with beta4 or beta5b until the dust settles on beta6 and beyond. I saw a user post a problem when he added a disk to slot 20. If you avoid that slot, can you avoid the bug? If not, what is required to avoid it? Thanks!
TheMantis Posted March 2, 2011 I also have a number of disks indicated as being wrong. I suspect that they all have HPAs. Everything is working great in 4.7. It's no big drama at the moment. Here's a screen capture:
TheMantis Posted March 2, 2011 And a system log: 5_0_beta6_log.pdf
SuperW2 Posted March 2, 2011 Upgraded from Beta5b... assigned disks and immediately 7 of my 17 disks now say "Unformatted"... It also appears that a Parity Sync has started and is running... I'm guessing this is bad... Syslog attached. syslog.zip
BRiT Posted March 2, 2011 [quoting SuperW2 above] Is there anything in common with your disks showing up as "Unformatted"? Were they all Sector 63 aligned previously? I followed Joe L's advice and used his MBR utility to repair my drives. The MBR repair utility runs in seconds, but it's the reiser filesystem check (reiserfsck) utility that takes a while to run on larger drives.
SuperW2 Posted March 2, 2011 [quoting BRiT's question above] No idea if they were Sector 63 aligned previously (I don't even know what that means, actually)... I know they were "alive" and running in a working array seconds before upgrading to Beta6.
BRiT Posted March 2, 2011 On what version of unRAID were these drives initially created? Was it on or before version 4.6 or even 5.0beta3? I'm going out on a limb and saying they were indeed Sector 63 Aligned. If they were Sector 64 / 4K Aligned then the filesystem would still exist with previous data. Here are the lines from your syslog showing their MBRs being written over into a Sector 64 / 4K Alignment: Mar 1 21:39:53 Media emhttp: writing mbr on disk 1 (sda) with partition 1 offset 64 Mar 1 21:39:53 Media emhttp: re-reading (sda) partition table Mar 1 21:39:53 Media kernel: sda: sda1 Mar 1 21:39:54 Media emhttp: writing mbr on disk 2 (sdl) with partition 1 offset 64 Mar 1 21:39:54 Media emhttp: re-reading (sdl) partition table Mar 1 21:39:54 Media kernel: sdl: sdl1 Mar 1 21:39:55 Media emhttp: writing mbr on disk 3 (sdc) with partition 1 offset 64 Mar 1 21:39:55 Media emhttp: re-reading (sdc) partition table Mar 1 21:39:55 Media kernel: sdc: sdc1 Mar 1 21:39:56 Media emhttp: writing mbr on disk 5 (sdk) with partition 1 offset 64 Mar 1 21:39:56 Media emhttp: re-reading (sdk) partition table Mar 1 21:39:56 Media kernel: sdk: sdk1 Mar 1 21:39:57 Media emhttp: writing mbr on disk 7 (sdm) with partition 1 offset 64 Mar 1 21:39:57 Media emhttp: re-reading (sdm) partition table Mar 1 21:39:57 Media kernel: sdm: sdm1 Mar 1 21:39:58 Media emhttp: writing mbr on disk 8 (sde) with partition 1 offset 64 Mar 1 21:39:58 Media emhttp: re-reading (sde) partition table Mar 1 21:39:58 Media kernel: sde: sde1 Mar 1 21:39:59 Media emhttp: writing mbr on disk 9 (sdo) with partition 1 offset 64 Mar 1 21:39:59 Media emhttp: re-reading (sdo) partition table Mar 1 21:39:59 Media kernel: sdo: sdo1 After stopping the array, if you follow the directions Joe L posted on reconstructing the MBR on those drives then you should be good to go without data loss.
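The offset-63-versus-64 distinction in those log lines lives in a single 4-byte little-endian field of the MBR. A self-contained shell sketch that builds a fake MBR and reads that field back (on a real drive you would point od at /dev/sdX instead, as root -- the device names in this thread are SuperW2's, not yours):

```shell
# Build a synthetic 512-byte MBR whose first partition starts at LBA 63,
# then read that field back -- the same value emhttp rewrote to 64 above.
img=$(mktemp)
dd if=/dev/zero of="$img" bs=512 count=1 2>/dev/null
# Partition entry 1 begins at byte 446 (0x1BE); its 32-bit little-endian
# start-LBA field sits 8 bytes in, i.e. at byte 454. 0x0000003f = 63.
printf '\x3f\x00\x00\x00' | dd of="$img" bs=1 seek=454 conv=notrunc 2>/dev/null
start=$(od -An -t u4 -j 454 -N 4 "$img" | tr -d ' ')
echo "partition 1 starts at sector $start"   # 63 = old unRAID layout, 64 = 4K-aligned
rm -f "$img"
```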
SuperW2 Posted March 2, 2011 [quoting BRiT's MBR analysis above] The drives have been in there for quite a while... I'm sure since 4.x, but not sure exactly...
Ok, but first, I'm looking for the remedial Linux-commands-for-Dummies wiki page... I can telnet to my server via PuTTY, but don't have a clue how to access my flash drive to execute the app from Joe L's post that I copied over from my Windows PC... I only see the following when logging in as "root"... Media login: root Linux 2.6.36.2-unRAID. root@Media:~# ls initconfig@ mdcmd* powerdown@ samba@ root@Media:~#
SuperW2 Posted March 2, 2011 cd /boot ls Thanks... So I did the "test" thing on the first one... root@Media:/boot# unraid_partition_disk.sh /dev/sda ######################################################################## Model Family: Seagate Barracuda 7200.10 family Device Model: ST3750640AS Serial Number: 3QD02DRS Firmware Version: 3.AAC User Capacity: 750,156,374,016 bytes Disk /dev/sda: 750.2 GB, 750156374016 bytes 1 heads, 63 sectors/track, 23256336 cylinders, total 1465149168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Device Boot Start End Blocks Id System /dev/sda1 64 1465149167 732574552 83 Linux Partition 1 does not end on cylinder boundary. ######################################################################## ============================================================================ == Disk /dev/sda is NOT partitioned for unRAID properly. == expected start = 63, actual start = 64 == expected size = 1465149105, actual size = 1465149104 ============================================================================
BRiT Posted March 2, 2011 Try the following: ls /boot cd /boot ls Hopefully you will see the unraid_partition_disk.sh file. If so, then you can invoke it by using /boot/unraid_partition_disk.sh along with the needed parameters, such as your various drives one at a time [/dev/sda , /dev/sdl , /dev/sdc , /dev/sdk , /dev/sdm , /dev/sde , /dev/sdo ]. Also, do NOT use -A; you NEED it to reconstruct the MBR at Sector 63! Using the -A parameter leaves you in exactly the same situation. You first want to check the drive, then partition and write the MBR, and then run a filesystem check on the 1st partition (note the 1 in the device specifier /dev/sda1): unraid_partition_disk.sh /dev/sda unraid_partition_disk.sh -p /dev/sda reiserfsck /dev/sda1
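The three commands above can be wrapped in a loop that only prints what would be run for each affected drive -- a cautious sketch: nothing executes until you paste a line yourself, and the device list here is SuperW2's, so substitute your own:

```shell
# Print BRiT's three-step repair sequence for each drive, one at a time.
for dev in sda sdl sdc sdk sdm sde sdo; do
    echo "/boot/unraid_partition_disk.sh /dev/$dev      # 1. inspect the current MBR"
    echo "/boot/unraid_partition_disk.sh -p /dev/$dev   # 2. rewrite the MBR at sector 63"
    echo "reiserfsck /dev/${dev}1                       # 3. check the filesystem on partition 1"
done
```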
SuperW2 Posted March 2, 2011 [quoting BRiT's instructions above] Yowza, this is getting involved... Here are the results of unraid_partition_disk.sh -p /dev/sda; reiserfsck /dev/sda1 is running now root@Media:/boot# unraid_partition_disk.sh -p /dev/sda ######################################################################## Model Family: Seagate Barracuda 7200.10 family Device Model: ST3750640AS Serial Number: 3QD02DRS Firmware Version: 3.AAC User Capacity: 750,156,374,016 bytes Disk /dev/sda: 750.2 GB, 750156374016 bytes 1 heads, 63 sectors/track, 23256336 cylinders, total 1465149168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Device Boot Start End Blocks Id System /dev/sda1 64 1465149167 732574552 83 Linux Partition 1 does not end on cylinder boundary. ######################################################################## ============================================================================ Note: this procedure does not touch any existing file-system. It is intended to repair the partition table in the MBR. File system repairs may also be required. The entire 512 bytes of the MBR will be overwritten if you use this program.
You will end up with a single partition starting on the second cylinder to the end of the drive. Definitions for any other partition will be erased. No other bytes on the disk will be affected. (Existing file-system data is not touched... no byte > address 512 is touched) ============================================================================ Are you absolutely sure you want to write the MBR to partition this drive? (Answer 'Yes' to continue. Capital 'Y', lower case 'es'): Yes 48+0 records in 48+0 records out 48 bytes (48 B) copied, 0.0167704 s, 2.9 kB/s 1+0 records in 1+0 records out 446 bytes (446 B) copied, 2.5919e-05 s, 17.2 MB/s 1+0 records in 1+0 records out 1 byte (1 B) copied, 0.000410892 s, 2.4 kB/s 1+0 records in 1+0 records out 1 byte (1 B) copied, 0.000414972 s, 2.4 kB/s 1+0 records in 1+0 records out 1 byte (1 B) copied, 0.000487828 s, 2.0 kB/s 16+0 records in 16+0 records out 16 bytes (16 B) copied, 5.8341e-05 s, 274 kB/s Restarting udevd is STRONGLY discouraged and not supported. If you are sure you want to do this, use 'force-restart' instead. ============================================================================ == Partitioning of /dev/sda complete == ============================================================================ Disk /dev/sda: 750.2 GB, 750156374016 bytes 1 heads, 63 sectors/track, 23256336 cylinders, total 1465149168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Device Boot Start End Blocks Id System /dev/sda1 63 1465149167 732574552+ 83 Linux Partition 1 does not end on cylinder boundary.
BRiT Posted March 2, 2011 The reiserfsck takes a while to run on 2TB Green drives, around 37 minutes. It should be quicker on the smaller 750GB 7200rpm drives.
SuperW2 Posted March 2, 2011 [quoting BRiT above] The first one (sda) just finished the reiserfsck... I started the array momentarily after it was done and sda was no longer listed as unformatted (and quite full, actually)... Looks like I have a few hours of reiserfsck in front of me... Thanks for the help... Is there some way I could have avoided this (or for others to avoid this during upgrades)?
BRiT Posted March 2, 2011 I'm glad it seems you're on your way to recovering from this situation. Short of not running beta versions on production arrays, no. I looked through your syslog and your activities seem very straightforward. Upon booting, you went about re-adding the drives to your array one by one. At the last step of starting the re-assigned array, some of your drives were not detected as having an existing unRAID MBR on them, so a new MBR was written (at Sector 64). I have not combed through your syslog yet to find any correlation between the drives hit and those not hit.
sacretagent Posted March 2, 2011 Went back to beta4 and now my disk 2 is accessible again... rebuilding superblocks... not sure if my data is still alive LOL. In beta6 I was not even able to run the reiserfs rebuild-tree. Joe's partition script also gave me info in beta4, where in beta6 it said all 0, so I assume there is a BIG bug in beta6. Just a bit more info: EARS 1 TB drive, but NO jumper and not MBR-4k-aligned. Started having trouble with a second EARS drive in my array; that's why I reverted.
PeterB Posted March 2, 2011 [quoting PeterB's earlier mover post] Yes, mover is now running, as intended, every two hours. I suspect that the trigger was clicking on 'Apply'. So now I just need to see whether it will start running again after a reboot.
gfjardim Posted March 2, 2011 I had a formatted disk under 5b4 and a parity re-sync under 5b6. Maybe the installation instructions should be modified. Can't we remove super.dat, reassign the drives and then use the "trust my parity" procedure? I did that with 5b5a and it worked.
ehfortin Posted March 2, 2011 Hi, [quoting BRiT's repair instructions above] It took some time to figure out that the unraid_partition_disk script had been updated between the time I downloaded it yesterday morning and... later the same day. I was unable to get success with the original one. Now, I have an interesting situation with my cache drive. At first, I figured it was not a problem as I knew it was empty, so I tried just to reformat the disk with unRAID. Well... it stays in the unformatted state. This morning, I tried to use Joe's script with apparent success (the script did create an MBR that is unRAID compliant). I ran the reiserfsck and everything was fine. I'm even able to mount it and see that it is empty. However, unRAID continues to say it is unformatted.
Here is the extract from the script: root@fileserver:/boot# ./unraid_partition_disk.sh /dev/sdb ######################################################################## Model Family: Seagate Barracuda 7200.10 family Device Model: ST3250410AS Serial Number: 9RY1CT09 Firmware Version: 3.AAC User Capacity: 250,059,350,016 bytes Disk /dev/sdb: 250.1 GB, 250059350016 bytes 1 heads, 63 sectors/track, 7752336 cylinders, total 488397168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Device Boot Start End Blocks Id System /dev/sdb1 63 488397167 244198552+ 83 Linux Partition 1 does not end on cylinder boundary. ######################################################################## ============================================================================ == == DISK /dev/sdb IS partitioned for unRAID properly == expected start = 63, actual start = 63 == expected size = 488397105, actual size = 488397105 == ============================================================================ I've retried to format it from unRAID and, after saying it is formatting, it still shows unformatted, even if I stop and start the array or if I reboot. I'm including the syslog. However, I'm pretty sure I've identified why the problem is occurring but don't know how to fix it. When I look in the log (and if I try to mount it manually), I get the error "Mar 2 09:00:23 fileserver logger: mount: unknown filesystem type 'ddf_raid_member'". Manually, I can force the type to be reiserfs and it works fine. It seems like unRAID is doing a mount without specifying that the type is reiserfs, so... it gets the ddf_raid_member that auto-detection finds. Who knows how to change that? In fdisk, I can see the partition ID is 83 (Linux) like all the others. But that doesn't relate to the actual file system on the disk, as the Linux type works with ext2, ext3, ext4, reiserfs, etc. I also have another question.
The default in unRAID 5b6 is to format the MBR 4K-aligned. What if I put in an older 1 TB drive that was not in that new format? Does it work, or do I have to change the default to MBR-unaligned for it to work? Thank you. ehfortin zipfile.zip
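A sketch of the manual workaround ehfortin describes: forcing the filesystem type so mount skips auto-detection of the stale 'ddf_raid_member' signature. The commands are printed rather than executed here, and the device and mountpoint are examples taken from the post, not fixed names -- substitute your own:

```shell
dev=/dev/sdb1      # cache drive partition (example from the post)
mnt=/mnt/cache     # example mountpoint
# With an explicit -t reiserfs, mount(8) never consults the leftover
# DDF raid signature that auto-detection trips over.
echo "mkdir -p $mnt"
echo "mount -t reiserfs $dev $mnt"
```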
bubbaQ Posted March 2, 2011 Tom, how about a command-line option for emhttp to not start the array? Particularly when testing betas, I would like to be able to start emhttp and see the status of all the drives before starting the array. Perhaps you could even add something to the config for the last version used, so that whenever a version change is detected, emhttp does not autostart the array and instead gives the user a message explaining possible issues.
ehfortin Posted March 2, 2011 [quoting bubbaQ above] Yes, it would be a good idea not to try to autostart the array when there is a version change. By displaying the release notes (or something like them) and showing what emhttp thinks is appropriate, but letting people decide whether to start the array based on that info, it may reduce problems. At least, people who see errors and are not too sure what to do could ask questions before trying it and/or revert back to the original version until they are more confident with the new version. ehfortin
lionelhutz Posted March 2, 2011 Tom, I think a beta release such as this should have the download page you link: http://lime-technology.com/download/doc_details/21-unraid-server-version-50-beta6-aio But maybe it should not be listed here at all: http://lime-technology.com/download/cat_view/49-unraid-server In other words, a beta release should not be so easily accessed by the general public. They should have to come here and find the link in this thread. In this manner, they will also have the opportunity to read this thread and decide whether they truly do want to use the release. Alternatively, put a big bold warning "This software is test software. It might cause you to lose all your data" on the above download page. I'm also not sure it's very appropriate to call it "Hot" and "New" either. To me, much new software (very typical of free software) is continually released as beta, even though the releases are really production releases. This is causing too many people to not know what a beta release even means anymore. Peter