unRAID Server Release 6.0-beta3-x86_64 Available


limetech


When I added a 4TB parity drive a month or two ago I did as you outlined - I added the drive to my v6.0 system, pre_cleared it, stopped the array, added the drive and re-started it. However, UnRAID wanted to clear the drive again, which took almost 6 hours, and my array was unavailable during that time. Once the clear completed I was put back to the screen where I could start the array again. That time when I clicked start I was able to format the drive and was operational in a few minutes.

 

I thought I was avoiding this issue this time by doing the preliminary work on another system, but obviously that was not the case.

 

I ended up starting the array without the new disk, as I didn't want the system to be unusable during hours when the family was still awake. I am thinking of adding the disk and letting the GUI re-clear it later tonight, but I hope it will only take the 6 hours again, and not the 36-40 hours a pre_clear takes for a 4TB drive.
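The gap between those two numbers is roughly what the I/O arithmetic predicts: the GUI clear writes zeros across the disk once, while a typical pre_clear cycle does a full pre-read, a zero pass, and a full post-read - three passes or more. A rough sketch of the estimate, assuming a sustained rate of about 150 MB/s (an assumed figure, not a measurement):

DISK_BYTES = 4e12        # 4 TB drive
SPEED = 150e6            # assumed sustained bytes/sec

single_pass_hours = DISK_BYTES / SPEED / 3600
print(f"one full pass:       {single_pass_hours:.1f} h")     # ~7.4 h, in line with the ~6 h GUI clear
# pre_clear: pre-read + zero + post-read, assumed three passes per cycle
print(f"one pre_clear cycle: {3 * single_pass_hours:.1f} h") # ~22 h; slower verify passes or extra
                                                             # cycles push this toward 36-40 h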

 

This is because the disk was not clear when it was added to the array.

 

I added a new 4TB drive to my system. I pre_cleared it on an unRAID 5.0.5 system and also mounted it so I could format it before moving it over to my existing array.

 

 

Hopefully you've found the reasoning behind this and it's fixed in beta4.

 

This is not a bug. The disk was not clear prior to addition to the array. Once the disk is mounted and formatted it is no longer clear.
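To make the reasoning concrete: a cleared disk is all zeros, and zeros contribute nothing to XOR parity, so a clear disk can join the array without parity being recalculated. Formatting writes non-zero filesystem structures, which would silently invalidate parity. A minimal sketch of the idea in Python (plain byte-wise XOR parity, for illustration only - not unRAID's actual code):

from functools import reduce

def parity(*disks):
    # Byte-wise XOR parity across equal-length disk images.
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*disks))

d1, d2 = b"\x12\x34", b"\xab\xcd"
p = parity(d1, d2)

cleared = b"\x00\x00"                  # an all-zero (cleared) new disk
assert parity(d1, d2, cleared) == p    # parity unchanged: safe to add without clearing

formatted = b"\xeb\x52"                # a formatted disk carries non-zero data...
assert parity(d1, d2, formatted) != p  # ...so parity would no longer be valid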


When I added a 4TB parity drive a month or two ago I did as you outlined - I added the drive to my v6.0 system, pre_cleared it, stopped the array, added the drive and re-started it. However, UnRAID wanted to clear the drive again, which took almost 6 hours, and my array was unavailable during that time. Once the clear completed I was put back to the screen where I could start the array again. That time when I clicked start I was able to format the drive and was operational in a few minutes.

I am even more confused!  You talk about a pre-clear when adding a parity drive, but that only happens when adding a data drive.  I do not understand the workflow you have been using.

 

I thought I was avoiding this issue this time by doing the preliminary work on another system, but obviously that was not the case.

The preclear does avoid the long downtime issue, but there is obviously something we do not understand about your workflow that is causing it to happen for you.


Hi,

 

I'm using 6.0-beta3 and have a lot of errors in the syslog, all beginning with [fffff...]. You can see it here:

 

Mar 23 08:07:22 MASSENGRAB kernel:  [ffffea0000000000-ffffea00087fffff] PMD -> [ffff880217000000-ffff88021ebfffff] on node 0 (Errors)

 

This message is the first error during booting. After booting completes and I try to fill the server, I get more errors:

 

Mar 23 08:16:15 MASSENGRAB ntpd_intres[1450]: host name not found: pool.ntp.org
Mar 23 08:17:08 MASSENGRAB kernel: BUG: Bad page state in process smbd  pfn:3d47f
Mar 23 08:17:08 MASSENGRAB kernel: page:ffffea0000f51fc0 count:0 mapcount:0 mapping:          (null) index:0x2
Mar 23 08:17:08 MASSENGRAB kernel: page flags: 0x4000000000100000(unevictable)
Mar 23 08:17:08 MASSENGRAB kernel: Modules linked in: md_mod coretemp hwmon kvm_intel kvm i2c_i801 i2c_core ata_piix ahci libahci e1000e(O) acpi_cpufreq mperf (Drive related)
Mar 23 08:17:08 MASSENGRAB kernel: CPU: 0 PID: 1530 Comm: smbd Tainted: G          O 3.10.24p-unRAID #13 (Errors)
Mar 23 08:17:08 MASSENGRAB kernel: Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./Z87 Extreme6, BIOS P2.30 12/26/2013
Mar 23 08:17:08 MASSENGRAB kernel:  000000000000f210 ffff8802147eda50 ffffffff8149830e ffff8802147eda68
Mar 23 08:17:08 MASSENGRAB kernel:  ffffffff810904fb 0000000000000001 ffff8802147edb28 ffffffff810907ea
Mar 23 08:17:08 MASSENGRAB kernel:  ffffea0000f526a0 ffff8802147edfd8 0000000000000002 000000001f212840
Mar 23 08:17:08 MASSENGRAB kernel: Call Trace: (Errors)
Mar 23 08:17:08 MASSENGRAB kernel:  [<ffffffff8149830e>] dump_stack+0x19/0x1b (Errors)
Mar 23 08:17:08 MASSENGRAB kernel:  [<ffffffff810904fb>] bad_page+0xca/0xe3 (Errors)
Mar 23 08:17:08 MASSENGRAB kernel:  [<ffffffff810907ea>] get_page_from_freelist+0x236/0x491 (Errors)
Mar 23 08:17:08 MASSENGRAB kernel:  [<ffffffff8109127b>] __alloc_pages_nodemask+0x152/0x858 (Errors)
Mar 23 08:17:08 MASSENGRAB kernel:  [<ffffffff810c863d>] ? mem_cgroup_charge_common+0x77/0x83 (Errors)
Mar 23 08:17:08 MASSENGRAB kernel:  [<ffffffff8108b768>] ? find_get_page+0x19/0x6a (Errors)
Mar 23 08:17:08 MASSENGRAB kernel:  [<ffffffff8108c167>] grab_cache_page_write_begin+0x6b/0xbe (Errors)
Mar 23 08:17:08 MASSENGRAB kernel:  [<ffffffff811f0055>] fuse_perform_write+0x193/0x44c (Errors)
Mar 23 08:17:08 MASSENGRAB kernel:  [<ffffffff811f0554>] fuse_file_aio_write+0x246/0x28d (Errors)
Mar 23 08:17:08 MASSENGRAB kernel:  [<ffffffff810cc5f1>] do_sync_write+0x7a/0x9f (Errors)
Mar 23 08:17:08 MASSENGRAB kernel:  [<ffffffff810ccb41>] vfs_write+0xc2/0x170 (Errors)
Mar 23 08:17:08 MASSENGRAB kernel:  [<ffffffff810cd131>] SyS_pwrite64+0x52/0x82 (Errors)
Mar 23 08:17:08 MASSENGRAB kernel:  [<ffffffff8149c729>] system_call_fastpath+0x16/0x1b (Errors)
Mar 23 08:17:08 MASSENGRAB kernel: Disabling lock debugging due to kernel taint
Mar 23 08:17:08 MASSENGRAB kernel: BUG: Bad page state in process shfs  pfn:3d466
Mar 23 08:17:08 MASSENGRAB kernel: page:ffffea0000f51980 count:0 mapcount:0 mapping:          (null) index:0x2
Mar 23 08:17:08 MASSENGRAB kernel: page flags: 0x4000000000100000(unevictable)
Mar 23 08:17:08 MASSENGRAB kernel: Modules linked in: md_mod coretemp hwmon kvm_intel kvm i2c_i801 i2c_core ata_piix ahci libahci e1000e(O) acpi_cpufreq mperf (Drive related)
Mar 23 08:17:08 MASSENGRAB kernel: CPU: 1 PID: 1403 Comm: shfs Tainted: G    B      O 3.10.24p-unRAID #13 (Errors)
Mar 23 08:17:08 MASSENGRAB kernel: Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./Z87 Extreme6, BIOS P2.30 12/26/2013
Mar 23 08:17:08 MASSENGRAB kernel:  000000000000f8f4 ffff880210b1da10 ffffffff8149830e ffff880210b1da28
Mar 23 08:17:08 MASSENGRAB kernel:  ffffffff810904fb 0000000000000001 ffff880210b1dae8 ffffffff810907ea
Mar 23 08:17:08 MASSENGRAB kernel:  ffff880210b1da58 ffff880210b1dfd8 0000000000000002 00000000f13605f4
Mar 23 08:17:08 MASSENGRAB kernel: Call Trace: (Errors)
Mar 23 08:17:08 MASSENGRAB kernel:  [<ffffffff8149830e>] dump_stack+0x19/0x1b (Errors)
Mar 23 08:17:08 MASSENGRAB kernel:  [<ffffffff810904fb>] bad_page+0xca/0xe3 (Errors)
Mar 23 08:17:08 MASSENGRAB kernel:  [<ffffffff810907ea>] get_page_from_freelist+0x236/0x491 (Errors)
Mar 23 08:17:08 MASSENGRAB kernel:  [<ffffffff8109127b>] __alloc_pages_nodemask+0x152/0x858 (Errors)
Mar 23 08:17:08 MASSENGRAB kernel:  [<ffffffff81134f7e>] ? do_journal_end.isra.25+0xc4e/0xc78 (Errors)
Mar 23 08:17:08 MASSENGRAB kernel:  [<ffffffff8108b768>] ? find_get_page+0x19/0x6a (Errors)
Mar 23 08:17:08 MASSENGRAB kernel:  [<ffffffff8108c167>] grab_cache_page_write_begin+0x6b/0xbe (Errors)
Mar 23 08:17:08 MASSENGRAB kernel:  [<ffffffff8111e876>] reiserfs_write_begin+0x5a/0x1bc (Errors)
Mar 23 08:17:08 MASSENGRAB kernel:  [<ffffffff8108bc8b>] generic_file_buffered_write+0x10a/0x23b (Errors)
Mar 23 08:17:08 MASSENGRAB kernel:  [<ffffffff8108cfee>] __generic_file_aio_write+0x2a4/0x2dc (Errors)
Mar 23 08:17:08 MASSENGRAB kernel:  [<ffffffff8108d07c>] generic_file_aio_write+0x56/0xa4 (Errors)
Mar 23 08:17:08 MASSENGRAB kernel:  [<ffffffff810cc5f1>] do_sync_write+0x7a/0x9f (Errors)
Mar 23 08:17:08 MASSENGRAB kernel:  [<ffffffff810ccb41>] vfs_write+0xc2/0x170 (Errors)
Mar 23 08:17:08 MASSENGRAB kernel:  [<ffffffff810cd131>] SyS_pwrite64+0x52/0x82 (Errors)
Mar 23 08:17:08 MASSENGRAB kernel:  [<ffffffff8149c729>] system_call_fastpath+0x16/0x1b (Errors)
Mar 23 08:17:08 MASSENGRAB kernel: BUG: Bad page state in process shfs  pfn:3d467
Mar 23 08:17:08 MASSENGRAB kernel: page:ffffea0000f519c0 count:0 mapcount:0 mapping:0000000000100000 index:0x2

 

What does it mean? Something with the CPU?

 

Best, helgebernd

syslog-2014-03-23.txt


When I added a 4TB parity drive a month or two ago I did as you outlined - I added the drive to my v6.0 system, pre_cleared it, stopped the array, added the drive and re-started it. However, UnRAID wanted to clear the drive again, which took almost 6 hours, and my array was unavailable during that time. Once the clear completed I was put back to the screen where I could start the array again. That time when I clicked start I was able to format the drive and was operational in a few minutes.

I am even more confused!  You talk about a pre-clear when adding a parity drive, but that only happens when adding a data drive.  I do not understand the workflow you have been using.

 

I thought I was avoiding this issue this time by doing the preliminary work on another system, but obviously that was not the case.

The preclear does avoid the long downtime issue, but there is obviously something we do not understand about your workflow that is causing it to happen for you.

 

Sorry.... Nothing like trying to dig yourself out of a hole only to realize you are heading in the wrong direction.  :o

 

I made the comments I am trying to articulate here:

 

Re: unRAID Server Release 6.0-beta3-x86_64 Available

« Reply #286 on: February 07, 2014, 09:47:42 AM »

 

I had added a 4TB parity drive (which I did pre_clear before using - don't you do this for parity drives as well as data drives, to ensure the drive is solid?), and pre_cleared the 3TB drive which had been the parity drive before re-adding it as a data drive. At that time I only had my 6.0 server, so I pre_cleared the 3TB disk, stopped the array, added the drive, and tried to start the array, but it wanted to clear the drive again, which took 6 hours to complete. Because the array was in a starting state (I guess), none of my shares were visible.

 

Once the GUI clear finished I had to click Start on the array again, which happened quickly and I could format the new disk.

 

This was what I had been trying to avoid this time, but I am having other issues. As mentioned, I wanted to go through the clear process last night while everyone was sleeping; however, I couldn't stop the array, as it was stuck unmounting user shares. I had forgotten to shut down my VM, so I did that, but nothing changed.

 

I finally ended up rebooting the server and when it came back up I stopped the parity check, and tried to stop the array, but it still kept complaining about unmounting user shares, which was strange. I did this twice more with the same result, and left it doing a parity check which is still going.

 

I don't understand why, immediately after a reboot (and I had not restarted the ArchVM or anything), it failed to unmount the disks.


I'm using 6.0-beta3 and have a lot of errors in the syslog, all beginning with [fffff...].

 

This message is the first error during booting. After booting completes and I try to fill the server, I get more errors:

 

Mar 23 08:17:08 MASSENGRAB kernel: BUG: Bad page state in process smbd  pfn:3d47f
...
Mar 23 08:17:08 MASSENGRAB kernel: CPU: 0 PID: 1530 Comm: smbd Tainted: G          O 3.10.24p-unRAID #13 (Errors)
...
Mar 23 08:17:08 MASSENGRAB kernel: Disabling lock debugging due to kernel taint
Mar 23 08:17:08 MASSENGRAB kernel: BUG: Bad page state in process shfs  pfn:3d466
...
Mar 23 08:17:08 MASSENGRAB kernel: CPU: 1 PID: 1403 Comm: shfs Tainted: G    B      O 3.10.24p-unRAID #13 (Errors)
...
Mar 23 08:17:08 MASSENGRAB kernel: BUG: Bad page state in process shfs  pfn:3d467
...

 

What does it mean? Something with the CPU?

 

Probably not the CPU.  In your syslog is the following:

Mar 23 08:07:22 MASSENGRAB kernel: ACPI Warning: 0x000000000000f040-0x000000000000f05f SystemIO conflicts with Region \_SB_.PCI0.SBUS.SMBI 1 (20130328/utaddress-251)
Mar 23 08:07:22 MASSENGRAB kernel: ACPI: If an ACPI driver is available for this device, you should use it instead of the native driver

 

Normally, these ACPI Warning messages seem harmless, but in your case I'm not so sure, because it sounds like it may be SMB related, and the first process reported with a bad page state was smbd.  That's not a direct link at all, but it is still suspicious.  And I'm not expert enough to say what the conflicting module/driver is.

 

My recommendation is to check for a BIOS update for your motherboard.

 

Another reason to locate a BIOS update is the following:

Mar 23 08:07:22 MASSENGRAB kernel: [Firmware Bug]: ACPI: BIOS _OSI(Linux) query ignored

Again, I'm not an expert, and I have no idea what the significance of this is, but a BIOS update may fix it.

 

Mar 23 08:07:22 MASSENGRAB kernel: FAT-fs (sda1): Volume was not properly unmounted. Some data may be corrupt. Please run fsck.

Run checkdisk on your flash drive.

 

I agree with this, although I'm not sure it's related.

 

I've been seeing this message rather often lately, in numerous syslogs, but without the kind of problems reported that you would normally expect with a corrupted boot file system, at least not the kind of problems that were easily and directly associated with it.  On the other hand, there is a possible cause, in that so many users with older versions of UnRAID have had to rerun syslinux, typically on a Windows machine, and may not have properly run the safe removal tool.  I'd like to hear if the Check Disk actually finds anything wrong, and whether it clears this bad shutdown check.


Anyone have any idea what all this means in my log?  This occurred about 8 times in a row.  I've had this happen before and it all occurs at midnight.

 

I am running one VM.  I have 8G of memory.  4G is dedicated to unRAID, and 1G is for the VM.

 

Plugins are Dynamix, SNAP and apcupsd.

 

Mar 24 00:59:23 MediaServer kernel: swapper/0: page allocation failure: order:0, mode:0x20
Mar 24 00:59:23 MediaServer kernel: CPU: 0 PID: 0 Comm: swapper/0 Tainted: G           O 3.10.24p-unRAID #13
Mar 24 00:59:23 MediaServer kernel: Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./Z87 Extreme6, BIOS P2.30 12/26/2013
Mar 24 00:59:23 MediaServer kernel:  0000000000012b30 ffff880132e03b90 ffffffff8149830e ffff880132e03c18
Mar 24 00:59:23 MediaServer kernel:  ffffffff8108ef64 ffffffff00000002 ffffffff00000001 0000000081687e10
Mar 24 00:59:23 MediaServer kernel:  0000000000000010 ffffffff816897f8 ffff880000000002 0000000100000030
Mar 24 00:59:23 MediaServer kernel: Call Trace:
Mar 24 00:59:23 MediaServer kernel:    [] dump_stack+0x19/0x1b
Mar 24 00:59:23 MediaServer kernel:  [] warn_alloc_failed+0x118/0x12c
Mar 24 00:59:23 MediaServer kernel:  [] __alloc_pages_nodemask+0x736/0x858
Mar 24 00:59:23 MediaServer kernel:  [] __netdev_alloc_frag+0x68/0x113
Mar 24 00:59:23 MediaServer kernel:  [] __netdev_alloc_skb+0x39/0xd7
Mar 24 00:59:23 MediaServer kernel:  [] igb_clean_rx_irq+0xe9/0x673 [igb]
Mar 24 00:59:23 MediaServer kernel:  [] igb_poll+0x389/0x608 [igb]
Mar 24 00:59:23 MediaServer kernel:  [] ? xen_hypercall_sched_op+0xa/0x20
Mar 24 00:59:23 MediaServer kernel:  [] net_rx_action+0xa6/0x153
Mar 24 00:59:23 MediaServer kernel:  [] __do_softirq+0xc5/0x17f
Mar 24 00:59:23 MediaServer kernel:  [] call_softirq+0x1c/0x30
Mar 24 00:59:23 MediaServer kernel:  [] do_softirq+0x3c/0x82
Mar 24 00:59:23 MediaServer kernel:  [] irq_exit+0x44/0x89
Mar 24 00:59:23 MediaServer kernel:  [] xen_evtchn_do_upcall+0x2b/0x37
Mar 24 00:59:23 MediaServer kernel:  [] xen_do_hypervisor_callback+0x1e/0x30
Mar 24 00:59:23 MediaServer kernel:    [] ? xen_hypercall_sched_op+0xa/0x20
Mar 24 00:59:23 MediaServer kernel:  [] ? xen_hypercall_sched_op+0xa/0x20
Mar 24 00:59:23 MediaServer kernel:  [] ? xen_safe_halt+0x10/0x18
Mar 24 00:59:23 MediaServer kernel:  [] ? default_idle+0x9/0xd
Mar 24 00:59:23 MediaServer kernel:  [] ? arch_cpu_idle+0x13/0x1e
Mar 24 00:59:23 MediaServer kernel:  [] ? cpu_startup_entry+0xcb/0x12a
Mar 24 00:59:23 MediaServer kernel:  [] ? rest_init+0x6d/0x6f
Mar 24 00:59:23 MediaServer kernel:  [] ? start_kernel+0x37a/0x385
Mar 24 00:59:23 MediaServer kernel:  [] ? repair_env_string+0x58/0x58
Mar 24 00:59:23 MediaServer kernel:  [] ? x86_64_start_reservations+0x2a/0x2c
Mar 24 00:59:23 MediaServer kernel:  [] ? xen_start_kernel+0x50b/0x517
Mar 24 00:59:23 MediaServer kernel: Mem-Info:
Mar 24 00:59:23 MediaServer kernel: DMA per-cpu:
Mar 24 00:59:23 MediaServer kernel: CPU    0: hi:    0, btch:   1 usd:   0
Mar 24 00:59:23 MediaServer kernel: CPU    1: hi:    0, btch:   1 usd:   0
Mar 24 00:59:23 MediaServer kernel: DMA32 per-cpu:
Mar 24 00:59:23 MediaServer kernel: CPU    0: hi:  186, btch:  31 usd: 180
Mar 24 00:59:23 MediaServer kernel: CPU    1: hi:  186, btch:  31 usd:  14
Mar 24 00:59:23 MediaServer kernel: Normal per-cpu:
Mar 24 00:59:23 MediaServer kernel: CPU    0: hi:  186, btch:  31 usd:  26
Mar 24 00:59:23 MediaServer kernel: CPU    1: hi:  186, btch:  31 usd:   0
Mar 24 00:59:23 MediaServer kernel: active_anon:11615 inactive_anon:317 isolated_anon:0
Mar 24 00:59:23 MediaServer kernel:  active_file:268759 inactive_file:522798 isolated_file:0
Mar 24 00:59:23 MediaServer kernel:  unevictable:108543 dirty:100420 writeback:15521 unstable:0
Mar 24 00:59:23 MediaServer kernel:  free:5075 slab_reclaimable:35506 slab_unreclaimable:8081
Mar 24 00:59:23 MediaServer kernel:  mapped:5210 shmem:4040 pagetables:2101 bounce:0
Mar 24 00:59:23 MediaServer kernel:  free_cma:0
Mar 24 00:59:23 MediaServer kernel: DMA free:15076kB min:32kB low:40kB high:48kB active_anon:12kB inactive_anon:0kB active_file:56kB inactive_file:140kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15984kB managed:15900kB mlocked:0kB dirty:40kB writeback:12kB mapped:0kB shmem:12kB slab_reclaimable:556kB slab_unreclaimable:44kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
Mar 24 00:59:23 MediaServer kernel: lowmem_reserve[]: 0 3158 3766 3766
Mar 24 00:59:23 MediaServer kernel: DMA32 free:4776kB min:6568kB low:8208kB high:9852kB active_anon:26868kB inactive_anon:924kB active_file:1069464kB inactive_file:2081236kB unevictable:1124kB isolated(anon):0kB isolated(file):0kB present:3326180kB managed:3234248kB mlocked:8kB dirty:396596kB writeback:61784kB mapped:6068kB shmem:12656kB slab_reclaimable:110408kB slab_unreclaimable:11428kB kernel_stack:424kB pagetables:4292kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
Mar 24 00:59:23 MediaServer kernel: lowmem_reserve[]: 0 0 607 607
Mar 24 00:59:23 MediaServer kernel: Normal free:448kB min:1264kB low:1580kB high:1896kB active_anon:19580kB inactive_anon:344kB active_file:5516kB inactive_file:9816kB unevictable:433048kB isolated(anon):0kB isolated(file):0kB present:4708352kB managed:622432kB mlocked:4kB dirty:5044kB writeback:288kB mapped:14772kB shmem:3492kB slab_reclaimable:31060kB slab_unreclaimable:20852kB kernel_stack:872kB pagetables:4112kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
Mar 24 00:59:23 MediaServer kernel: lowmem_reserve[]: 0 0 0 0
Mar 24 00:59:23 MediaServer kernel: DMA: 11*4kB (UE) 5*8kB (UM) 7*16kB (EM) 3*32kB (EM) 5*64kB (UEM) 5*128kB (UM) 2*256kB (UE) 0*512kB 3*1024kB (UEM) 3*2048kB (UEM) 1*4096kB (R) = 15076kB
Mar 24 00:59:23 MediaServer kernel: DMA32: 0*4kB 1*8kB (R) 0*16kB 1*32kB (R) 0*64kB 1*128kB (R) 0*256kB 1*512kB (R) 0*1024kB 0*2048kB 1*4096kB (R) = 4776kB
Mar 24 00:59:23 MediaServer kernel: Normal: 35*4kB (EM) 45*8kB (M) 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 500kB
Mar 24 00:59:23 MediaServer kernel: 904225 total pagecache pages
Mar 24 00:59:23 MediaServer kernel: 0 pages in swap cache
Mar 24 00:59:23 MediaServer kernel: Swap cache stats: add 0, delete 0, find 0/0
Mar 24 00:59:23 MediaServer kernel: Free swap  = 0kB
Mar 24 00:59:23 MediaServer kernel: Total swap = 0kB


I decided to migrate one of my test boxes from 5.0.5 up to beta 3 of 6.0. I can no longer ssh into the box. Physical terminal login still works fine. However certain binaries that worked fine before the move appear to be broken (such as nano & clear to name a few). Attempting to execute them results in an error: "cannot execute binary file".

 

Doing a quick search pointed me to the fact that Slackware 14.1 64-bit (which unRAID 6 appears to be based on) is, by default, purely 64-bit with no 32-bit support out of the box. I came across a few guides on how to manually replace the necessary libraries to support both 32- and 64-bit executables (desirable until 64-bit native binaries, plugins, etc., are available across the board), but I really don't want to start going down that road unless I absolutely have to, especially on a beta build.

 

Maybe I am on a completely wrong path, please feel free to point this Slackware n00bie in the right direction. If I am not wrong, any chance you can add the necessary libraries to provide some much needed legacy support? Thanks! 8)
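A quick way to confirm the 32-bit diagnosis: "cannot execute binary file" on a 64-bit-only system usually means the binary is 32-bit ELF. The file command will tell you directly; as an alternative, this small Python sketch reads the ELF header's class byte (offset 4: 1 = 32-bit, 2 = 64-bit):

import sys

def elf_class(path):
    # Return '32-bit', '64-bit', or 'not ELF' based on the ELF header.
    with open(path, "rb") as f:
        header = f.read(5)
    if header[:4] != b"\x7fELF":
        return "not ELF"
    return {1: "32-bit", 2: "64-bit"}.get(header[4], "unknown")

print(elf_class(sys.argv[1]))   # e.g. python3 elfclass.py /usr/bin/nano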


When I added a 4TB parity drive a month or two ago I did as you outlined - I added the drive to my v6.0 system, pre_cleared it, stopped the array, added the drive and re-started it. However, UnRAID wanted to clear the drive again, which took almost 6 hours, and my array was unavailable during that time. Once the clear completed I was put back to the screen where I could start the array again. That time when I clicked start I was able to format the drive and was operational in a few minutes.

I am even more confused!  You talk about a pre-clear when adding a parity drive, but that only happens when adding a data drive.  I do not understand the workflow you have been using.

 

I thought I was avoiding this issue this time by doing the preliminary work on another system, but obviously that was not the case.

The preclear does avoid the long downtime issue, but there is obviously something we do not understand about your workflow that is causing it to happen for you.

 

Sorry.... Nothing like trying to dig yourself out of a hole only to realize you are heading in the wrong direction.  :o

 

I made the comments I am trying to articulate here:

 

Re: unRAID Server Release 6.0-beta3-x86_64 Available

« Reply #286 on: February 07, 2014, 09:47:42 AM »

 

I had added a 4TB parity drive (which I did pre_clear before using - don't you do this for parity drives as well as data drives, to ensure the drive is solid?), and pre_cleared the 3TB drive which had been the parity drive before re-adding it as a data drive. At that time I only had my 6.0 server, so I pre_cleared the 3TB disk, stopped the array, added the drive, and tried to start the array, but it wanted to clear the drive again, which took 6 hours to complete. Because the array was in a starting state (I guess), none of my shares were visible.

 

Once the GUI clear finished I had to click Start on the array again, which happened quickly and I could format the new disk.

 

This was what I had been trying to avoid this time, but I am having other issues. As mentioned, I wanted to go through the clear process last night while everyone was sleeping; however, I couldn't stop the array, as it was stuck unmounting user shares. I had forgotten to shut down my VM, so I did that, but nothing changed.

 

I finally ended up rebooting the server and when it came back up I stopped the parity check, and tried to stop the array, but it still kept complaining about unmounting user shares, which was strange. I did this twice more with the same result, and left it doing a parity check which is still going.

 

I don't understand why, immediately after a reboot (and I had not restarted the ArchVM or anything), it failed to unmount the disks.

 

Did you get your question(s) answered yet? The formatting in your post seems to be wonky, so I'm not sure what you are still asking.

 

Assuming you are not pre-clearing a drive just to test it/burn it in:

 

Parity drives do not need to be pre-cleared. Upon replacing your parity drive (or adding one, if none existed), your unRAID array will start to calculate parity. This WILL take a while, but if I recall, you can still use the array while it is doing its math; it will just be slow.

 

Data drives can be pre-cleared (or not). If you do NOT pre-clear, then unRAID will read/fill the entire drive to make sure that it is fully blank, and (I think) set it all to zeros so it doesn't have to do anything extra with parity. This will take quite a while (your previous figure of 6 hours sounds right), and your array cannot start until it finishes.

The pre-clear script, however, will leave a special signature on the drive, which unRAID will see, and it will not try to do anything special to the disk. Your array should then start quickly, allow you to format the drive with its filesystem, and off you go. This is often preferred, since your system only needs to be down to install the HDD (either to start preclearing it, or just to power it up if you used another system to do the preclear). Pre-clear often takes 24+ hours on 2TB+ HDDs, depending on I/O speed.
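Conceptually, unRAID only needs to read the first sector to make that decision. A hedged Python sketch of what such a check could look like (the byte layout below is illustrative only - the real preclear signature has its own specific layout, which this does not claim to reproduce):

def looks_precleared(mbr: bytes) -> bool:
    # Illustrative only: checks a 512-byte MBR for an "empty but signed" state.
    # The actual preclear signature layout differs; treat these offsets as assumptions.
    boot_sig_ok = mbr[510:512] == b"\x55\xaa"        # standard MBR boot signature
    code_area_zero = all(b == 0 for b in mbr[:446])  # no boot code on a cleared disk
    return len(mbr) == 512 and boot_sig_ok and code_area_zero

# with open("/dev/sdX", "rb") as f:                  # hypothetical device node
#     print(looks_precleared(f.read(512)))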

 

You also asked about why you couldn't stop your array right after starting it. I would ensure that you are not running any plugins, etc., and that there are no systems trying to use those shares. Any chance you have some other system automatically doing something to those shares when they are available? There are links around here to track down what is still using your shares, which files are open, and so on. I don't have them handy, but it's worth looking into if you are trying to solve this issue.
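One generic way to see what still has files open under a share, without extra tools, is to walk /proc yourself - the same technique lsof uses, in miniature (the share path here is an example; run as root):

import os

MOUNT = "/mnt/user"   # example share root

for pid in filter(str.isdigit, os.listdir("/proc")):
    fd_dir = f"/proc/{pid}/fd"
    try:
        for fd in os.listdir(fd_dir):
            target = os.readlink(os.path.join(fd_dir, fd))
            if target.startswith(MOUNT):
                with open(f"/proc/{pid}/comm") as f:
                    name = f.read().strip()
                print(pid, name, target)
    except OSError:
        continue   # process exited or permission denied; skip it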

 

I haven't seen anyone say they ran a preclear on a v5 system, then installed the drive into a v6 system and it worked without issue. I'm sure someone has done it, and I have no reason to doubt it, but since I've never seen someone say they did it, I can't tell you with 100% certainty that it works. (Again, I assume it does; I just don't have enough spare unRAID boxes to test it!)


I decided to migrate one of my test boxes from 5.0.5 up to beta 3 of 6.0. I can no longer ssh into the box. Physical terminal login still works fine. However certain binaries that worked fine before the move appear to be broken (such as nano & clear to name a few). Attempting to execute them results in an error: "cannot execute binary file".

 

Doing a quick search pointed me to the fact that Slackware 14.1 64-bit (which unRAID 6 appears to be based on) is, by default, purely 64-bit with no 32-bit support out of the box. I came across a few guides on how to manually replace the necessary libraries to support both 32- and 64-bit executables (desirable until 64-bit native binaries, plugins, etc., are available across the board), but I really don't want to start going down that road unless I absolutely have to, especially on a beta build.

 

Maybe I am on a completely wrong path, please feel free to point this Slackware n00bie in the right direction. If I am not wrong, any chance you can add the necessary libraries to provide some much needed legacy support? Thanks! 8)

Making 64 bit unRAID run 32 bit executables is probably the wrong path. Most of the popular plugins are already on the forum in 64 bit. What are you missing?
Doing a quick search pointed me to the fact that Slackware 14.1 64-bit (which unRAID 6 appears to be based on) is, by default, purely 64-bit with no 32-bit support out of the box. I came across a few guides on how to manually replace the necessary libraries to support both 32- and 64-bit executables (desirable until 64-bit native binaries, plugins, etc., are available across the board), but I really don't want to start going down that road unless I absolutely have to, especially on a beta build.

 

Maybe I am on a completely wrong path, please feel free to point this Slackware n00bie in the right direction. If I am not wrong, any chance you can add the necessary libraries to provide some much needed legacy support? Thanks! 8)

 

Whether to add 32-bit support is a philosophical decision.  There has already been discussion on this forum, and the great majority were in favour of maintaining a pure 64-bit environment, especially since the support for 32-bit binaries is not always perfect.

 

If your problem is support of existing 32bit plugins, then try contacting the author.  Nearly all plugins can be 'converted' simply by replacing the old 32bit package downloads with the 64bit equivalents.  The other point to remember is that, with the availability of VMs, many existing plugins are better offloaded from the unRAID machine and run in appropriate VMs.  If you're not intending to run VMs, then there is little compulsion to move from v5 to v6 - especially during the beta phase.


Did you get your question(s) answered yet? The formatting in your post seems to be wonky, so I'm not sure what you are still asking.

 

Assuming you are not pre-clearing a drive just to test it/burn it in:

 

Parity drives do not need to be pre-cleared. Upon replacing your parity drive (or adding one, if none existed), your unRAID array will start to calculate parity. This WILL take a while, but if I recall, you can still use the array while it is doing its math; it will just be slow.

 

Data drives can be pre-cleared (or not). If you do NOT pre-clear, then unRAID will read/fill the entire drive to make sure that it is fully blank, and (I think) set it all to zeros so it doesn't have to do anything extra with parity. This will take quite a while (your previous figure of 6 hours sounds right), and your array cannot start until it finishes.

The pre-clear script, however, will leave a special signature on the drive, which unRAID will see, and it will not try to do anything special to the disk. Your array should then start quickly, allow you to format the drive with its filesystem, and off you go. This is often preferred, since your system only needs to be down to install the HDD (either to start preclearing it, or just to power it up if you used another system to do the preclear). Pre-clear often takes 24+ hours on 2TB+ HDDs, depending on I/O speed.

 

You also asked about why you couldn't stop your array right after starting it. I would ensure that you are not running any plugins, etc., and that there are no systems trying to use those shares. Any chance you have some other system automatically doing something to those shares when they are available? There are links around here to track down what is still using your shares, which files are open, and so on. I don't have them handy, but it's worth looking into if you are trying to solve this issue.

 

I haven't seen anyone say they ran a preclear on a v5 system, then installed the drive into a v6 system and it worked without issue. I'm sure someone has done it, and I have no reason to doubt it, but since I've never seen someone say they did it, I can't tell you with 100% certainty that it works. (Again, I assume it does; I just don't have enough spare unRAID boxes to test it!)

 

Thanks for the follow-up. As to whether I got an answer, no, not really, but I've worked around it all the same.

 

I will be adding another 4TB disk soon, so I will try the pre_clear on the 6.0 system and see if I have the same issue of the GUI wanting to re-clear it, or if it was a one-off thing.

 

As for being unable to stop the array, I have no clue what was going on. I don't have any plugins (all are in a VM which was not started yet), and I only have XBMC clients connecting to the server. I suppose one of the XBMC clients could have been trying to update its library (I don't have a central one yet), but it was very strange, as I've never had that issue before immediately after a reboot.

 

I ended up letting the parity check finish and then stopped the array, added the 4TB data drive, and let the GUI re-clear it. It was a long and painful day, but it's behind me now. :)

 

 


Doing a quick search pointed me to the fact that Slackware 14.1 64-bit (which unRAID 6 appears to be based on) is, by default, purely 64-bit with no 32-bit support out of the box. I came across a few guides on how to manually replace the necessary libraries to support both 32- and 64-bit executables (desirable until 64-bit native binaries, plugins, etc., are available across the board), but I really don't want to start going down that road unless I absolutely have to, especially on a beta build.

 

Maybe I am on a completely wrong path, please feel free to point this Slackware n00bie in the right direction. If I am not wrong, any chance you can add the necessary libraries to provide some much needed legacy support? Thanks! 8)

 

Whether to add 32-bit support is a philosophical decision.  There has already been discussion on this forum, and the great majority were in favour of maintaining a pure 64-bit environment, especially since the support for 32-bit binaries is not always perfect.

 

If your problem is support of existing 32bit plugins, then try contacting the author.  Nearly all plugins can be 'converted' simply by replacing the old 32bit package downloads with the 64bit equivalents.  The other point to remember is that, with the availability of VMs, many existing plugins are better offloaded from the unRAID machine and run in appropriate VMs.  If you're not intending to run VMs, then there is little compulsion to move from v5 to v6 - especially during the beta phase.

 

I would completely support you on that. While I have barely any plugins installed under v5.0.x, the main reason to switch to v6.x would be to get the two plugins out of unRAID and install the programs in a VM, thereby running a stable and clean unRAID server with everything I used as plugins running in a VM. That should increase stability for my main concern, storage (unRAID), and allow even better options for most of the software currently around as plugins (no need to wait for somebody to write a plugin or update it - you just install the program you want in a VM). Obviously v6 is still a fairly early beta, and on top of that there is a steep learning curve (just going through that myself, re: Xen). But in the end it should pay off nicely with increased stability and a much broader choice of apps to be used... straight out of the box (so to say).

If you intend to use v6/Xen with your old plugins (updated 64-bit ones), you are missing the whole point of this new unRAID version.

 

my 2c, L

If you intend to use v6/Xen with your old plugins (updated 64-bit ones), you are missing the whole point of this new unRAID version.

 

To be fair, there are one or two addons which are appropriate for the host machine.  I would argue strongly that apcupsd is one, which is why it was the first conversion I performed.  Screen and ssh are other candidates (although I believe that ssh is now in the standard unRAID build).


Is this beta safe?

By definition, since it is a beta, there is an element of risk.  I would avoid using it on a really important system.

 

Having said that, from feedback so far, if you run a vanilla unRAID system with no plugins, then for most people it runs trouble-free.  A few people have reported issues with particular hardware configurations, but they tend to show up immediately.

 

I am personally running it with no signs of trouble, with the Dynamix enhanced GUI installed.  I am booting with Xen enabled (which is optional), which is allowing me to experiment with running VMs - I currently have some Windows ones running for reasonably serious usage and some others for experimenting.


Is this beta safe?

 


I have been running it for about a month, including a parity check and even upsizing a disk (rebuild), with no problems on the unRAID side of things. The VM requires a little tinkering when the machine is rebooted, due to the unRAID shares not being immediately available to it or something like that. For me, this mostly just means I have to manually restart Transmission on the VM; the other apps seem to handle it on their own.

Just doing a fresh install to see what happens...

 

 

Well... didn't seem to make a difference!! I'll have to stick with AFP for now, I guess!

 

Seems all my issues stemmed from a pending disk failure - it red-balled a couple of days ago and is now replaced and rebuilt. Transfers over SMB seem to be working fine now!!

 

Please disregard all my previous posts; nothing was related to v6b3!! I guess I should have completed more testing before posting!!


Pretty new to unRAID, but having read a lot and seen many videos, I'm very aware of what's happening (if that makes sense).

Anyway, I have a problem that I'm not sure what it stems from...

 

After installing and running the 6.0 beta, installing unmenu, and using the pkg installer to install screen for preclearing a few disks... screen does not work at all. It will install, but it does not execute; it just says "cannot execute binary file". I've also tried installpkg on a manually downloaded tgz, but to no avail... any ideas??  :-\


Pretty new to unRAID, but having read a lot and seen many videos, I'm very aware of what's happening (if that makes sense).

Anyway, I have a problem that I'm not sure what it stems from...

 

After installing and running the 6.0 beta, installing unmenu, and using the pkg installer to install screen for preclearing a few disks... screen does not work at all. It will install, but it does not execute; it just says "cannot execute binary file". I've also tried installpkg on a manually downloaded tgz, but to no avail... any ideas??  :-\

unmenu has not been updated to install 64 bit packages. You can't run 32 bit applications on 64 bit unRAID 6.

 

For 64 bit screen see here and here

