64-bit unRAID running natively on Arch Linux with full hypervisor support




Yeah, I found that and it didn't fix the problem; I also updated grub2 manually.

 

That's about the update script not working, not even about the issue I had.

 

;(

 

Edit: the links you gave me aren't even about the same error report that I had...

 

grub2; keep getting stuck at "loading initial ram-disk" for my custom kernel, even when I use the unRAID version and copy the .config then change nothing

Link to comment

Whether item 3 is just the current format in 64-bit, or, more hopefully, what we are talking about here, is not clear.

It is pretty safe to assume that the first 64-bit release is going to be standard unRAID in 64-bit flavor.

 

Unfortunately I know you are correct, but I am still hopeful this will happen sooner rather than later, as v5, the webGUI and plugins, plus the lack of passthrough on my system (i5 2500K), have made me lose interest in unRAID at the moment.

Link to comment

Yeah, I found that and it didn't fix the problem; I also updated grub2 manually.

 

Edit: the links you gave me aren't even about the same error report that I had...

 

I see that now.

 

is a nice idea; I've made one off Slack before but never really dug deep into it.

 

Been playing with archunraid recently... either I'm not smart enough for Arch or for grub2; I keep getting stuck at "loading initial ram-disk" for my custom kernel, even when I use the unRAID version and copy the .config then change nothing.

 

You don't provide a lot of info, but glad that syslinux works for you. Grub is optional, as you know.

 

Any luck with compiling unRAID? Integrating it in Arch?

Link to comment

Longer form version: grub gets to "loading initial ram-disk" when using a custom kernel, even when I use the exact kernel version that unRAID uses and copy the .config from unRAID then change nothing (both with and without the md files). Edit: didn't play around too heavily though; I'm using VMware Workstation 10.1 with the "other Linux 3.x 64-bit kernel" option, so maybe there's something in there grub2 doesn't like.

 

Yeah, got it all mostly going in the end, althoughhhh...

You can't symlink on FAT32, and syslinux only works ON FAT32, so while grub is optional, syslinux can't be used for that :) (When I manually created the boot/config folder and tried to open the webGUI I got errors about being unable to read/create super.dat, so I couldn't actually save device assignments. Stopped looking into it after that; couldn't be bothered digging into permissions and traces for that. Killed the VM and will have another go, perhaps with lilo this time.)

 

Edit: had done fullslackware64 before and did it again, but I just don't like Arch or Slackware... so gonna have some goes at the other distros and see what I like best :> I'm usually a Debian guy, but CentOS/openSUSE looks like fun; used SUSE a few years back and liked the idea...

Editedit: was using arch64, FYI.

 

Editediteditedit: (my brain is disjointed) always fighting the urge to go LFS/Gentoo for super-anal minimalism vs. actually picking something easy/simple to update/replace.

 

Editediteditediteditediteditediteditediteditediteditediteditedit: once I figured out to use syslinux I managed it with newer kernels etc., so there's something weird there.

Link to comment

Longer form version: grub gets to "loading initial ram-disk" when using a custom kernel, even when I use the exact kernel version that unRAID uses and copy the .config from unRAID then change nothing (both with and without the md files). Edit: didn't play around too heavily though; I'm using VMware Workstation 10.1 with the "other Linux 3.x 64-bit kernel" option, so maybe there's something in there grub2 doesn't like.

 

Grub (as packaged with Arch and other distros) looks for the kernel a certain way. Did you install the os-prober package? It might have better success, but I wouldn't count on it.

 

Otherwise, use ABS and edit the PKGBUILD when you compile the Kernel in Arch. It will create it the way grub wants it.

 

Syslinux is "dumb" and doesn't assume anything, and you have more "control" over it. That's probably why you were successful.
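
For reference, the usual Arch way to do that is roughly the following (a minimal sketch, assuming the stock grub package; run as root):

    pacman -S os-prober                      # lets grub-mkconfig detect other installed kernels/OSes
    grub-mkconfig -o /boot/grub/grub.cfg     # regenerate the menu so the new kernel entry is picked up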

 

Yeah, got it all mostly going in the end, althoughhhh...

You can't symlink on FAT32, and syslinux only works ON FAT32, so while grub is optional, syslinux can't be used for that :) (When I manually created the boot/config folder and tried to open the webGUI I got errors about being unable to read/create super.dat, so I couldn't actually save device assignments. Stopped looking into it after that; couldn't be bothered digging into permissions and traces for that. Killed the VM and will have another go, perhaps with lilo this time.)

 

Mount the unRAID flash drive via fstab and it can read FAT32 fine (assuming you have that filesystem enabled in the kernel).
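
As a sketch, the fstab entry would look something like this (the device node and mount point are placeholders; use the flash drive's actual partition, or a LABEL=/UUID= identifier):

    # /etc/fstab
    /dev/sdX1   /flash   vfat   defaults   0   0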

 

Edit: had done fullslackware64 before and did it again, but I just don't like Arch or Slackware... so gonna have some goes at the other distros and see what I like best :> I'm usually a Debian guy, but CentOS/openSUSE looks like fun; used SUSE a few years back and liked the idea...

 

If you can compile kernels, install Arch and Slackware.... Why bother with unRAID?

 

You are jumping through a lot of hoops for an old / outdated filesystem (Reiser) running on Raid 5.

 

Use RAID 5/6/7 with XFS, EXT4, LVM or ZFS instead in a native Linux distro. It will be A LOT faster and easier, and has better webGUIs to manage it.
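
For anyone curious, that kind of stack is built along these lines (illustrative only; device names, array size and mount point are placeholders, not a tested recipe):

    mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde   # dual-parity array
    pvcreate /dev/md0                              # put LVM on top so volumes can be grown later
    vgcreate storage /dev/md0
    lvcreate -l 100%FREE -n media storage
    mkfs.xfs /dev/storage/media                    # or ext4, per preference
    mkdir -p /mnt/media && mount /dev/storage/media /mnt/media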

 

Editediteditedit: (my brain is disjointed) always fighting the urge to go LFS/Gentoo for super-anal minimalism vs. actually picking something easy/simple to update/replace.

 

Gentoo is great but Arch is minimalistic too. You can compile everything in it or use the package manager. Same as Gentoo.

 

My Arch takes up less than 10 GB of space (I could make it less than 2 if I wanted) and runs with 1 GB of RAM with XBMC, MySQL, webGUIs, etc. going. It also has 5 VMs (2 are XBMC), and total CPU utilization is less than 15% with everything going at once (playing 1080p movies in XBMC on the server, 2 XBMC VMs, and doing a "balance" in RAID, etc.).

Link to comment

Mount the unRAID flash drive via fstab and it can read FAT32 fine (assuming you have that filesystem enabled in the kernel).

Yeah, had it mounted as /flash then symlinked config to /boot/config (as mentioned in one of the older guides; probably unneeded, but it was getting late at night :>).
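
For reference, that arrangement amounts to something like this (a sketch, assuming the fstab entry from the post above):

    mount /flash                       # the FAT32 flash drive, from fstab
    ln -s /flash/config /boot/config   # the symlink itself lives on the ext filesystem, so FAT32's lack of symlink support doesn't matter here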

 

Edit: had done fullslackware64 before and did it again, but I just don't like Arch or Slackware... so gonna have some goes at the other distros and see what I like best :> I'm usually a Debian guy, but CentOS/openSUSE looks like fun; used SUSE a few years back and liked the idea...

If you can compile kernels, install Arch and Slackware.... Why bother with unRAID?

You are jumping through a lot of hoops for an old / outdated filesystem (Reiser) running on Raid 5.

Use RAID 5/6/7 with XFS, EXT4, LVM or ZFS instead in a native Linux distro. It will be A LOT faster and easier, and has better webGUIs to manage it.

Well, it's not RAID 5; it's JBOD with parity + FUSE. I only use it for media storage so it's fine, and I like the idea that if the machine + some disks die I still have full copies of the data on the other disks (it's the "best" option I've found for this; I don't like FlexRAID). I just want the option to run up some lab VMs on there whenever I want to test something (and to move SickBeard etc. off my XBMC machine). Mainly running unRAID for media storage with drives spun down, but a nice RAID 5-style ZFS array for my VMs/datastore is my final goal, which is why I've kept an eye on this. And I like where Xen/KVM is going with GPU passthrough; VMware is definitely slack with that (and I get why, no one is ever going to use that on an enterprise server).

Editediteditedit: (my brain is disjointed) always fighting the urge to go LFS/Gentoo for super-anal minimalism vs. actually picking something easy/simple to update/replace.

Gentoo is great but Arch is minimalistic too. You can compile everything in it or use the package manager. Same as Gentoo.

My Arch takes up less than 1 GB of space and runs with 1 GB of RAM with XBMC, MySQL, webGUIs, etc. going. It also has 5 VMs (2 are XBMC), and total CPU utilization is less than 15% with everything going at once (playing 1080p files on the server, 2 XBMCs, and doing a "balance" in RAID, etc.).

Nice, will look further into it. After getting "lost" in the bootloader issues and compiling again and again to try to track the issue, I was over Arch; need to take a breather then have another go ;) Worst case I use lilo, I guess (or track down why Workstation hates archgrub, as slackgrub worked fine).

 

 

My final goal used to be ESXi with all this shit passed through, but as you say each free version of ESXi gets "meh'er", and I prefer not to pass through; it's more complexity for no gain if you can get it all going in the native OS.

Link to comment

Longer form version: grub gets to "loading initial ram-disk" when using a custom kernel, even when I use the exact kernel version that unRAID uses and copy the .config from unRAID then change nothing (both with and without the md files). Edit: didn't play around too heavily though; I'm using VMware Workstation 10.1 with the "other Linux 3.x 64-bit kernel" option, so maybe there's something in there grub2 doesn't like.

 

Grub (as packaged with Arch and other distros) looks for the kernel a certain way. Did you install the os-prober package? It might have better success, but I wouldn't count on it.

 

Otherwise, use ABS and edit the PKGBUILD when you compile the Kernel in Arch. It will create it the way grub wants it.

 

Syslinux is "dumb" and doesn't assume anything, and you have more "control" over it. That's probably why you were successful.

 

 

I did not. Yeah, I tried the ABS method as well, same issue...

 

When I was messing around with archgrub2 I tried both ext2 and ext4 (both of which were compiled directly into the kernel).

Link to comment

You could always run unRAID in a VM and slowly migrate over time to openSUSE (Linux) handling your storage. It can do ZFS, RAID 5 (single parity), RAID 6 (dual parity), RAID 7 (triple parity), BTRFS (RAID 5/6 should be stable this year), have warm spares (Linux would remove the bad drive, add the warm spare automatically and notify you), deduplication, encryption, CoW, compression, hardware monitoring, etc., and manage all of it via ONE webGUI that is very easy to use / "sexy".

 

unRAID is what I would consider a "starter home" for people who are looking to do a simple NAS on their home network. However, with the advances in hardware / software / virtualization / XBMC / Usenet / etc., many of you want to do more / combine a lot of "things" into one machine. Due to the development cycle and the use of a bzroot with a read-only filesystem, unRAID is never going to be able to deliver / update / innovate like a professional, full-blown Linux server distro can.

 

Is ZFS as easy as unRAID to add and remove drives at will? 3 years ago, I went with unRAID because of how easy it is. At this point, like you have mentioned, I am looking for more out of my server. I have tons of plugins running on unRAID and I would rather have them run elsewhere. As I pointed out to you in a different thread, I am looking at building a new server. Now would be the time for me to think about migrating my data elsewhere if there is a good solution. Don't get me wrong, I love unRAID, but it might make more sense to me and others to have our data on a stable Linux build, if it had the same functions, instead of running VMs just for storage.

Link to comment

Yeah, I tried the ABS method as well, same issue...

 

When I was messing around with archgrub2 I tried both ext2 and ext4 (both of which were compiled directly into the kernel).

 

Did you add the filesystems you use and the kernel drivers to the initramfs (using the mkinitcpio.conf file)?

Might not have. It was weird that it just hangs there (for 8+ hours; left it overnight) rather than giving any error. I used the same FS as the default kernel, which still worked.
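
For reference, on Arch that means listing the modules in /etc/mkinitcpio.conf and rebuilding the image (a minimal sketch; the exact module names depend on the filesystems in use):

    # in /etc/mkinitcpio.conf — filesystems the kernel needs before the real root is mounted
    MODULES="ext4 vfat"

    # then rebuild the initramfs for the kernel preset in use (e.g. the stock 'linux' preset)
    mkinitcpio -p linux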

Edit: maybe I was using MBR rather than GPT; grub2 definitely reads like it prefers GPT.

 

Followed the directions for both ABS and "traditional" from the Arch site...

 

 

As said, now that I've poked around a bit in Arch it might work if I try again.

Link to comment

Is ZFS as easy as unRAID to add and remove drives at will?

 

You can add to and expand the array, but (as far as I know) there's no easy way to remove drives as the data is striped (I assume you're doing the ZFS equivalent of RAID 5/6/10 - RAID-Z rather than a mirrored zpool).

 

"if installing under vmware add BusLogic to modules" note in the installer... testing

Link to comment

Is ZFS as easy as unRAID to add and remove drives at will?

 

You can add to and expand the array, but (as far as I know) there's no easy way to remove drives as the data is striped (I assume you're doing the ZFS equivalent of RAID 5/6/10 - RAID-Z rather than a mirrored zpool).

 

"if installing under vmware add BusLogic to modules" note in the installer... testing

 

With ZFS your pool size is set for life when it is built. It can't be expanded in the same way that unRAID can.

 

Expansion = copy all data off pool, delete pool, create new larger / smaller pool, copy back all yo shit.

 

RAID 5/6 with LVM on top is a different story though; it knocks the pants off unRAID for performance, and RAID 6 gives dual parity. However, all drives must be spinning due to the striping.

 

HTH

 

Sent from my Nexus 5 using Tapatalk

 

Link to comment

I'm back from my Xmas travels and have an exam on the 6th, but after that it's full steam ahead once I've begged and borrowed the components for a dev machine.

 

Waiting to hear back from Tom on things; losing a bit of hope as it's been nearly 2 weeks since he last made contact. I'm all for collaboration, but it takes two to tango.

 

So, in light of this I will be pushing ahead with a fork of unRAID myself, much in the same way as other Linux distros have forked. Mine will be a three-pronged approach...

 

First, get unraid-fork packaged for an alpha release. I estimate mid to late Jan for this.

 

Second, tidy up any loose ends and reach a state where this can be 'given back' to the community entirely open sourced with some documentation by the end of Feb.

 

Third, take what I've learned and push forward with a replacement, modernised version of emhttp that I will write. It is still very early days on point 3, mind you...

 

So with this all in mind, and leveraging this awesome community and what was discussed earlier in this thread, I feel comfortable accepting donations. I realise this model isn't for everyone but I would really appreciate any contributions towards the development costs as I'm a full time student right now without a job.

 

If you'd like to donate please do so via PayPal. My address is

 

[email protected]

 

Thanks for reading, and thank you from the bottom of my heart if you do donate.

 

Sent from my Nexus 5 using Tapatalk

 

 

Link to comment

Fwiw (and imv obviously) announcing a fork/rewrite when those 2 weeks include the Xmas hols seems a little impatient and is not obviously a course of action conducive to a working relationship.

 

Nevertheless I look forward to the results. Have you decided on a distro to base it on? Or will you publish a description of what it is you hope to implement exactly?

 

Sent from my Nexus 7 using Tapatalk

 

Link to comment

Fwiw (and imv obviously) announcing a fork/rewrite when those 2 weeks include the Xmas hols seems a little impatient and is not obviously a course of action conducive to a working relationship.

1. Considering Tom is technically inclined... I'm sure he has heard of "Out of Office" replies before and knows how to set one up. Not sure what line of business you are in, but EVERYONE I know and deal with at client sites (even assistants and "nonessential" personnel) uses them.

 

2. You must be new here... It's not uncommon for people to have PMs ignored and for Tom to disappear for months on end without a single post / message.

 

3. Tom hasn't said no anywhere in this thread in regards to "forking" unRAID.

 

Nevertheless I look forward to the results. Have you decided on a distro to base it on? Or will you publish a description of what it is you hope to implement exactly?

 

Yeah, what are you going to choose? Slackware? Debian? CentOS / Scientific (Red Hat clones)? Gentoo? ClearOS? Mageia?

Link to comment

Fwiw (and imv obviously) announcing a fork/rewrite when those 2 weeks include the Xmas hols seems a little impatient and is not obviously a course of action conducive to a working relationship.

1. Considering Tom is technically inclined... I'm sure he has heard of "Out of Office" replies before and knows how to set one up. Not sure what line of business you are in, but EVERYONE I know and deal with at client sites (even assistants and "nonessential" personnel) uses them.

 

2. You must be new here... It's not uncommon for people to have PMs ignored and for Tom to disappear for months on end without a single post / message.

 

3. Tom hasn't said no anywhere in this thread in regards to "forking" unRAID.

Yes, I am new, so I have no idea of the history. If that is normal then the impatience is understandable. My first impressions of unRAID as a piece of software have been that it desperately needs modernising, hence the interest in this thread.

 

What is the reason to run this as a distro rather than just packaging it in some particular distro format along with a guide to how it was packaged? Personally I would prefer to just be able to grab a package to install on the distro of my choice rather than having to cut over to some arbitrary distro.

Link to comment

Is ZFS as easy as unRAID to add and remove drives at will?

You can add to and expand the array, but (as far as I know) there's no easy way to remove drives as the data is striped (I assume you're doing the ZFS equivalent of RAID 5/6/10 - RAID-Z rather than a mirrored zpool).

With ZFS your pool size is set for life when it is built. It can't be expanded in the same way that unRAID can.

Expansion = copy all data off pool, delete pool, create new larger / smaller pool, copy back all yo shit.

RAID 5/6 with LVM on top is a different story though; it knocks the pants off unRAID for performance, and RAID 6 gives dual parity. However, all drives must be spinning due to the striping.

HTH

Aah, fair enough; I always figured ZFS had the same "re-stripe" options as RAID 5/6 usually does.

 

 

With Arch and using the ABS system, how'd you manage the md driver patches? Copy + paste seems to create errors in the makepkg component of ABS :>

Link to comment

With Arch and using the ABS system, how'd you manage the md driver patches? Copy + paste seems to create errors in the makepkg component of ABS :>

 

Made some patch files that edited the Kbuild and Makefile so I could compile unRAID and keep /dev/mapper, LVM support, etc.
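
In ABS terms that usually amounts to listing the patch in the PKGBUILD and applying it before the build (a hypothetical excerpt; the patch file name and its contents are placeholders, not the actual patches described above):

    # PKGBUILD (excerpt)
    source=("unraid-md.patch")                      # hypothetical patch touching the md driver's Kbuild/Makefile
    prepare() {
      cd "${srcdir}/linux-${pkgver}"
      patch -p1 -i "${srcdir}/unraid-md.patch"      # apply before makepkg runs the kernel build
    }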

 

You could always do Raid 5/6 with LVM if ZFS isn't your cup of tea.

Link to comment

Is ZFS as easy as unRAID to add and remove drives at will?

 

You can add to and expand the array, but (as far as I know) there's no easy way to remove drives as the data is striped (I assume you're doing the ZFS equivalent of RAID 5/6/10 - RAID-Z rather than a mirrored zpool).

 

"if installing under vmware add BusLogic to modules" note in the installer... testing

 

With ZFS your pool size is set for life when it is built. It can't be expanded in the same way that unRAID can.

 

Expansion = copy all data off pool, delete pool, create new larger / smaller pool, copy back all yo shit.

 

RAID 5/6 with LVM on top is a different story though; it knocks the pants off unRAID for performance, and RAID 6 gives dual parity. However, all drives must be spinning due to the striping.

 

Not quite true.

ZFS is a combo of a filesystem and RAID.

You can easily expand pools made of single disks or made of mirrored disks.

What makes it great is the copy-on-write feature.

IMHO the best feature of unRAID-NG would be the capability to use filesystems other than the current ReiserFS with "emhttp-NG".

I'd certainly build an array based on the unRAID md module (RAID part) with ZFS (filesystem part) on the individual disks of that array (one disk -> one pool).
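
To illustrate the "easily expand" point above: growing a pool built from mirrors is just adding another vdev (a sketch; device names are placeholders):

    zpool create tank mirror /dev/sdb /dev/sdc      # initial pool: one mirrored vdev
    zpool add tank mirror /dev/sdd /dev/sde         # later: grow the pool by adding a second mirror

Removing a data vdev again, though, was not supported at the time, which is the striping caveat mentioned earlier in the thread.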

Link to comment

With Arch and using the ABS system, how'd you manage the md driver patches? Copy + paste seems to create errors in the makepkg component of ABS :>

Made some patch files that edited the Kbuild and Makefile so I could compile unRAID and keep /dev/mapper, LVM support, etc.

 

Nice, nice. Able to point me in a good doc direction for that? (Or post the patch files?) While I'm pretty good technically, kernel patches and programming are not my area, so the higher-level docs went over my head.

Edit: I've always just made the changes I wanted manually; haven't delved into an "ABS-type" kernel compilation before.

Link to comment

Grumpy, if I were to use your openSUSE and KVM guide, how easy would it be to move the setup later on to this new unRAID version (if it ever happens)?

 

Why would you?

 

You would be going back in time to a read-only filesystem and plugins, and be solely dependent on one person for everything.

 

Does openSUSE have 10,000+ developers, testers, innovation, updates, patches, testing, package maintainers, etc. or does unRAID?

 

You need to think of unRAID as a NAS storage device utilizing a very "stripped down" Linux to do it and not a Linux Distro. I doubt it will ever evolve into that.

 

 

As Fx stated before, unRAID is something more than RAID 5. Its JBOD approach is pretty useful when adding new drives. HOWEVER, if I could get this with another distro/solution, I'd have a go at it. The thing is that I haven't found one, and so far unRAID is the NAS solution that best fits my needs.

 

My point is... why twist yourself and your hardware into a pretzel trying to make it accommodate unRAID?

That's not really what I'm trying to do; I might not have made myself clear. Virtualization, IMHO, is the way to go. Right now I have a VM for media storage (unRAID), a VM for VM storage (NAS4Free), a VM for MySQL, another for a Plex server, a testing VM, Usenet, etc. My next goal was to install pfSense and maybe Untangle.

Bottom line, I'm not trying to accommodate my setup to unRAID, but rather unRAID to my setup, because it's the best media storage solution I could find.

 

Link to comment

I'm back from my Xmas travels and have an exam on the 6th, but after that it's full steam ahead once I've begged and borrowed the components for a dev machine.

 

Waiting to hear back from Tom on things; losing a bit of hope as it's been nearly 2 weeks since he last made contact. I'm all for collaboration, but it takes two to tango.

 

So, in light of this I will be pushing ahead with a fork of unRAID myself, much in the same way as other Linux distros have forked. Mine will be a three-pronged approach...

 

First, get unraid-fork packaged for an alpha release. I estimate mid to late Jan for this.

 

Second, tidy up any loose ends and reach a state where this can be 'given back' to the community entirely open sourced with some documentation by the end of Feb.

 

Third, take what I've learned and push forward with a replacement, modernised version of emhttp that I will write. It is still very early days on point 3, mind you...

 

So with this all in mind, and leveraging this awesome community and what was discussed earlier in this thread, I feel comfortable accepting donations. I realise this model isn't for everyone but I would really appreciate any contributions towards the development costs as I'm a full time student right now without a job.

 

If you'd like to donate please do so via PayPal. My address is

 

[email protected]

 

Thanks for reading and for donating from the bottom of my heart if you do.

 

Sent from my Nexus 5 using Tapatalk

Donated; hopefully not the first, and certainly not the last ;-)!

Link to comment

ZFS is a combo of a filesystem and RAID.

 

Agreed.

 

You can easily expand pools made of single disks or made of mirrored disks.

 

You and I think it is easy but I'm not sure the average unRAID user would.

 

What makes it great is the copy-on-write feature.

 

Encryption, compression, checksums on data and metadata, etc. are good ones too.

 

IMHO the best feature of unRAID-NG would be the capability to use filesystems other than the current ReiserFS with "emhttp-NG".

I'd certainly build an array based on the unRAID md module (RAID part) with ZFS (filesystem part) on the individual disks of that array (one disk -> one pool).

 

Why not run Linux with ZFS and use napp-it at that point?

 

I would choose BTRFS. You get most of the pluses of ZFS (some new things it can't do) minus the hassles of zpools / vdevs.
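
A rough sketch of the btrfs side of that (device names are placeholders; btrfs RAID 5/6 was still experimental at the time, hence the RAID 1 profile here):

    mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc   # metadata and data mirrored across two drives
    mkdir -p /mnt/pool && mount /dev/sdb /mnt/pool
    btrfs device add /dev/sdd /mnt/pool              # drives can be added later...
    btrfs balance start /mnt/pool                    # ...and the data re-spread across them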

Link to comment

Why not run Linux with ZFS and use napp-it at that point?

 

I would choose BTRFS. You get most of the pluses of ZFS (some new things it can't do) minus the hassles of zpools / vdevs.

 

napp-it is (Open)Solaris only... been there, done it.

I am currently using Linux with LUKS/dm-crypt and ZoL on top.

My critical data is on a RAID-Z3 pool with triple parity... my media files are on a RAID-Z1.

I agree that with ZFS the "habit" of pooling the drives is not that flexible, and this is why I would love to go back to unRAID-NG 8)
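
For the curious, that layering looks roughly like this (a sketch only; device names and pool layout are placeholders, and a RAID-Z3 vdev wants several member disks):

    cryptsetup luksFormat /dev/sdb                   # encrypt each member disk
    cryptsetup luksOpen /dev/sdb crypt_sdb           # unlocked device appears under /dev/mapper
    # ...repeat for the other members, then build the ZoL pool on the mapped devices
    zpool create tank raidz3 /dev/mapper/crypt_sdb /dev/mapper/crypt_sdc /dev/mapper/crypt_sdd /dev/mapper/crypt_sde /dev/mapper/crypt_sdf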

Link to comment
