unRAID Server release 4.3-beta4 available



Download.

 

Finally, this release includes a new feature to speed up write operations (and more), called 'Cache Disk' (explained below).  Other changes include an update to the latest Linux kernel and Samba releases, and a few other bug fixes.

 

Reminder: this is a beta release - there may still be a few 'rough edges'  ;)

 


 

unRAID OS Cache Disk Feature

 

This release includes an innovative new feature that will greatly increase perceived system write performance.

 

You may now assign one of your hard drives to be a "cache" disk.  A cache disk is a hard drive that is not part of the normal parity-protected array.  When a cache disk exists in the system, it is visible under My Network Places as a disk share named "cache" (provided disk shares are being exported).  You may read/write the cache share just as you would any other disk share.  Since the cache disk is outside the array, writes will be much faster, but of course if this disk fails, all of its data may be lost.

 

The real power of a cache disk is realized when User Shares are enabled.  The cache disk may be automatically "included" in every user share.  Hence any object (file or directory) created on a user share is created on the cache disk, provided enough space exists.  Therefore, when you browse a user share via My Network Places, the listings will transparently include objects on the cache disk as well as on the other data disks.

 

In order to prevent the cache disk from filling up, we have created a new utility called the "mover".  The mover is a process which periodically moves objects from the cache disk to the array proper.  You can set a schedule which defines when the mover will "wake up".  The default schedule is to wake up at 3:40AM every day.

 

Since there is a lag between the time objects are created on the cache disk and when they are moved to the array, it may be desirable to disable the cache disk for certain shares.  For this purpose, there is a setting for each user share to disable use of the cache disk for that share.

 

To provide redundancy for the cache disk, it may be possible to set up a "raid-1" array on your motherboard or controller card.  Another possibility is to assign the cache disk to an external SATA raid array.  We also plan to provide software raid-1 for the cache disk in a future unRAID OS release.

 

Notes & current limitations:

 

* The cache disk feature is for Pro licenses only.  Including the cache disk, the maximum number of hard drives supported by unRAID OS is now 17.  Note: the 17th hard disk can only be used as a cache disk - the maximum array width is still 16 hard drives.

 

* The "model/serial number" display for the cache disk is slightly different.

 

* The cache disk statistics will not be cleared when you click 'Clear Statistics'.

 

* The "Cache disk floor" setting defines the minimum amount of free space on the cache disk which must exist in order for a new object to be created on the cache disk.  This setting does not apply when accessing the cache disk directly via the 'cache' share.

 

* When the mover moves files from the cache disk to the array, a pair of syslog entries are made for each file.  There is currently no control for turning this off, though an advanced user can do so by editing the script "/usr/local/sbin/mover", removing the "-v" option on the mv command.  Also, the mover does not delete the directory structure on the cache disk, thus it is possible for many empty directories to accumulate on the cache disk over time.  This will be addressed in a future unRAID OS release.
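For illustration only, the move step inside that script presumably looks something like the sketch below (an assumption about its structure, not the verbatim contents; /mnt/cache and /mnt/user0 are the mount points described further down in these notes):

    # sketch: walk the cache disk and move each file into the array view
    find /mnt/cache -depth -type f -print | while IFS= read -r FILE
    do
        DEST="/mnt/user0${FILE#/mnt/cache}"
        mkdir -p "$(dirname "$DEST")"
        mv -v "$FILE" "$DEST"   # removing the -v here silences the per-file syslog entries
    done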

 

* The mover will not move any top-level directories which begin with a '.' character.  Such directories will not exist in normal use, but an advanced user may use this knowledge to create directories which won't get moved.

 

* The mover will not move any files that exist in the root of the cache disk.  Such files will not exist in normal  use, but an advanced user may use this knowledge to create files which won't get moved (for example, a swap file).
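For example, a swap file in the cache disk root could be set up roughly as follows (a sketch only; the 1GB size and the file name are arbitrary, and /mnt/cache is the mount point described further down):

    # create and enable a 1GB swap file in the root of the cache disk;
    # because it sits in the root, the mover will leave it in place
    dd if=/dev/zero of=/mnt/cache/swapfile bs=1M count=1024
    mkswap /mnt/cache/swapfile
    swapon /mnt/cache/swapfile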

 

* If the mover finds no files to move, and the disks are spun-down, the disks will not spin up.

 

* The format of the "Mover schedule" string follows the linux "crontab" format.  In a future unRAID OS release, we plan on making this more user-friendly, but here is a description of the format:

 

The crontab format consists of 5 fields separated by spaces.  Individual fields may contain a time, a time range, a time range with a skip factor, a symbolic range for the day of week and month in year, and additional subranges delimited with commas. If you specify both a day in the month and a day of week, the result is effectively OR'ed: the crontab entry will be run on the specified day of week and on the specified day in the month.  A field consisting of an asterisk ('*') indicates "every" time of that field.

 

Examples:

 

# MIN HOUR DAY MONTH DAYOFWEEK

# at 3:40 a.m. every day

40 3 * * *

 

# every two hours at the top of the hour

0 */2 * * *

 

# every two hours from 11p.m. to 7a.m., and at 8a.m.

0 23-7/2,8 * * *

 

# at 11:00 a.m. on the 4th and on every mon, tue, wed

0 11 4 * mon-wed

 

# 4:00 a.m. on january 1st

0 4 1 jan *

 

* When a cache disk is assigned and formatted, a new entry exists in the /mnt directory:

/mnt/cache

 

* When a cache disk is assigned and formatted, and user shares are enabled, another new entry exists in the /mnt directory.  This mount point is a view of the user shares which doesn't include the cache disk:

/mnt/user0

 

* The mover is just a script called "/usr/local/sbin/mover" which invokes 'find' to traverse the cache disk and move files to the array using the 'mv' command.  Advanced users may edit this script to fine-tune the mover.  For example, it's possible to set conditions such as "move only files older than N days", or "only move files greater than N bytes in size", etc.  Refer to the script itself and the 'man' page of the 'find' command.
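For instance, limiting the move to files older than 7 days and larger than 10MB amounts to adding a couple of predicates to that 'find' invocation, along these lines (hypothetical values; shown here as a standalone command that merely prints what would be selected):

    # list regular files on the cache disk older than 7 days AND larger than 10MB
    find /mnt/cache -depth -type f -mtime +7 -size +10M -print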

 


 

unRAID Server 4.3-beta4 Release Notes

Upgrade Instructions (Please Read Carefully)
--------------------------------------------
If you are currently running unRAID Server 4.2-beta1 or higher (including 4.2.x 'final'), please copy the following files from the new release to the root of your Flash device:
    bzimage
    bzroot

If you are currently running unRAID server 4.0 or 4.1, please copy the following files from the new release to the root of your Flash device:
    bzimage
    bzroot
    syslinux.cfg
    menu.c32
    memtest

This can be done either by plugging the Flash into your PC or by copying the files to the 'flash' share on your running server.  The server must then be rebooted.

If you are currently running unRAID Server 3.0-beta1 or higher, please follow these steps to upgrade:

1. Referring to the System Management Utility 'Main' page, make a note of each disk's model/serial number; you will need this information later.

2. Shut down your server, remove the Flash and plug it into your PC.

3. Right-click your Flash device listed under My Computer and select Properties.  Make sure the volume label is set to "UNRAID" (without the quotes) and click OK.  You do NOT need to format the Flash.

4. Copy the files from the new release to the root of your Flash device.

5. Right-click your Flash device listed under My Computer and select Eject.  Remove the Flash, install in your server and power-up.

6. After your server has booted up, the System Management Utility 'Main' page will probably show no devices; this is OK.  Navigate to the 'Devices' page and, using the model/serial number information gathered in step 1, assign each of your hard drives to the correct disk slot.

7. Go back to the 'Main' page and your devices should appear correctly.  You may now Start the array.


If you are installing this release to a new Flash, please refer to instructions on our website at:

http://www.lime-technology.com/wordpress/?page_id=19


Changes from 4.3-beta3 to 4.3-beta4
-----------------------------------

New feature: cache disk support.

Improvement: enable SMART before reading disk temperature.

Improvement: upgrade from linux kernel 2.6.24.3 to 2.6.24.4 (refer to http://lwn.net/Articles/274741).

Improvement: upgrade from Samba 3.0.28 to Samba 3.0.28a (addresses some Vista issues, refer to http://us1.samba.org/samba/history/samba-3.0.28a.html).

Improvement: added back a few more missing libraries needed for certain user customizations.

Bug Fix: Support normal expansion of array when Parity is not installed.


Nice Job.  ;D

 

> We also plan to provide software raid-1 for the cache disk in a future unRAID OS release.

 

What about RAID-0?  If the cache disk is all about temporary space and write speed,

would it be better to put RAID-0 back into the kernel?

 

Once RAID-0 is back in the kernel, then multiple spindles for PARITY also become a possibility.  ;)

 

In my network I have a special cache/RAID-0 array just for the purpose of my rips and video encoding.

 

Another thought is, why RAID-1 on a disk when the array itself already provides protection?  ???

 

 

 

 

 


No use for Raid-0 yet because speed is limited by network throughput - even a single old 7200RPM IDE drive is faster than real-world GigE.  When 10GigE becomes more common, it might be worth considering.

 

As for no array Raid-1 - we're working on it.  Our philosophy is to release stuff incrementally.  In its current form, this feature is very useful, so there's no point in delaying it further - may as well get it out there & get some feedback.

 

Also, a big consideration, perhaps the biggest consideration, for any new feature is simplicity.  The original vision for unRAID was to not even have any UI - it would be an appliance that just "worked".  Well, that's unrealistic, but you really have to make an effort to fight complexity.  The 'split-level' concept is a good example of something that is too complex, & we are thinking all the time about how to simplify it or design it away.  The cache disk feature is also borderline too complex, & that's one reason its implementation has been delayed until now.


As an unRAID user for the past few years, I am really glad to see these updates and increased functionality being added. Way to go!

 

As for the cache disk, I think this is a great idea. Another thought - a lot of users keep a spare drive around in case of drive failures. Would it be possible for the unraid box to automatically substitute this cache drive into the array and rebuild when a drive fails?


As for the cache disk, I think this is a great idea. Another thought - a lot of users keep a spare drive around in case of drive failures. Would it be possible for the unraid box to automatically substitute this cache drive into the array and rebuild when a drive fails?

 

Hey that's a pretty good idea!


As for the cache disk, I think this is a great idea. Another thought - a lot of users keep a spare drive around in case of drive failures. Would it be possible for the unraid box to automatically substitute this cache drive into the array and rebuild when a drive fails?

 

Hey that's a pretty good idea!

 

My vote for the logic would be as follows:

 

* If the cache drive is as large or larger than parity, proceed with the exchange and rebuild parity on what was previously the cache drive

* If not, check user preferences

      If user prefers system to stay live, allow the unraid to continue with full use of the drives with no parity protection

      If user prefers system to shut down, elegantly power down

 

This second set of logic should work whether or not there is a cache drive.  It works best when there is a messaging system that can be employed so, for example, an email can go out saying "parity drive down, shutting down system" or "parity drive down, system remains active".

 

 

Bill

 

 


* The mover will not move any files that exist in the root of the cache disk.  Such files will not exist in normal use, but an advanced user may use this knowledge to create files which won't get moved (for example, a swap file).

 

This is great.... a sanctioned place for the swapfile... and other linux-y things  ;D

 


Is there a way to use/divide up the cache drive so that it can be a boot drive, thereby saving writes to the FLASH?

Seems there are many complaints about bad flash after a period of time.

 

If the cache had two partitions, 1 FAT32 for the boot partition and 1 REISER for cache, then we have a backup to our flash or an alternate boot location.

 

I've been considering grub as a boot loader with reiserfs compiled in so I can boot off the hard drives.


Is there a way to use/divide up the cache drive so that it can be a boot drive, thereby saving writes to the FLASH?

Seems there are many complaints about bad flash after a period of time.

 

Flash failures are not from being written - unRAID OS very rarely writes to the Flash.

 

If the cache had two partitions, 1 FAT32 for the boot partition and 1 REISER for cache, then we have a backup to our flash or an alternate boot location.

 

I've been considering grub as a boot loader with reiserfs compiled in so I can boot off the hard drives.

 

There are many uses for this feature... stay tuned...  :)


First, great idea to add the cache feature!  I have been playing with the idea of using my new unRAID server as a storage server for my SageTV DVR.  Speed would be a factor, and the file safety features were not of great value for short-term TV recordings.  This new approach might be a great way to use unRAID to record content without losing speed, but adding some safety to longer-term recordings.  I'll have to put my thinking cap on ;)  Great work!

As for the cache disk, I think this is a great idea. Another thought - a lot of users keep a spare drive around in case of drive failures. Would it be possible for the unraid box to automatically substitute this cache drive into the array and rebuild when a drive fails?

 

Hey that's a pretty good idea!

 

My vote for the logic would be as follows:

 

* If the cache drive is as large or larger than parity, proceed with the exchange and rebuild parity on what was previously the cache drive

* If not, check user preferences

      If user prefers system to stay live, allow the unraid to continue with full use of the drives with no parity protection

      If user prefers system to shut down, elegantly power down

 

This second set of logic should work whether or not there is a cache drive.  It works best when there is a messaging system that can be employed so, for example, an email can go out saying "parity drive down, shutting down system" or "parity drive down, system remains active".

 

 

Bill

 

 

 

I would be very much in favour of such a system!


I've been doing this for over a month with an extra disk on my unRAID box - and running a "mv" overnight to move the data to the array.  It works great!  (Thanks to Joe L. for walking me through the linux part).  I've been calling it "staging" instead of "cache".

 

Suggest you do a "sync" as part of the mover to flush all the buffers to disk, just in case.
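A minimal sketch of that, assuming the mover is a plain shell script, is a single line appended at the end:

    # flush any writes still sitting in the buffers out to the disks
    sync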

 

With user shares disabled, can you create directories like "disk1", "disk2", .... on the cache disk and have it sync those to physical drives?


The real power of a cache disk is realized when User Shares are enabled.  The cache disk may be automatically "included" in every user share.  Hence any object (file or directory) created on a user share is created on the cache disk, provided enough space exists.  Therefore, when you browse a user share via My Network Places, the listings will transparently include objects on the cache disk as well as on the other data disks.

 

 

Sounds like a great tool, but what I don't understand is: if I have a user share that spans a lot of disks and my cache disk is only a fraction of the size, what is the point of "any object created on a user share is created on the cache disk, provided enough space exists"?

 

My cache disk will certainly not be big enough.  Do you mean that anything created on the cache disk under a user share is mirrored on the listings for that share?

 

So I have user shares movies and tv.  On my cache disk I have movies.  I place "movie 1" on the cache disk in movies and it will show up in network places in movies?

 

 


Do you mean that anything created on the cache disk under a user share is mirrored on the listings for that share?

Correct.

 

So I have user shares movies and tv.  On my cache disk I have movies.  I place "movie 1" on the cache disk in movies and it will show up in network places in movies?

Correct.

 

Actually you don't have to explicitly copy anything to the cache disk.  If the cache disk is enabled for a share, then when you create any new file or directory on that share, the system will try to create it on the cache disk first.  The only reason it wouldn't create it on the cache disk is if there is not enough free space left on the cache disk.

 

Sounds like a great tool, but what I don't understand is: if I have a user share that spans a lot of disks and my cache disk is only a fraction of the size, what is the point of "any object created on a user share is created on the cache disk, provided enough space exists"?

 

There are 3 main reasons why we implemented this feature:

 

1. To widen the bottleneck when writing several media streams to the server simultaneously, e.g., multiple HiDef video feeds.

 

2. To speed up initial loading of the array.  Assuming a 1TB cache disk, and average network throughput of 50MB/sec, it would take a little over 5 hours to fill up the cache disk (and then probably 15 to drain it to the array).
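For reference, the rough arithmetic behind those figures (treating 1TB as 1,000,000MB):

    1,000,000 MB / 50 MB/s  =  20,000 s, i.e. about 5.6 hours to fill the cache disk
    1,000,000 MB / 15 hours =  roughly 18.5 MB/s, the parity-protected write rate implied by the drain estimate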

 

3. To provide a place to put a swap file.


OK, I might be thick, but please bear with me:

 

I am using SageTV as a PVR, and I am considering using unRAID to record shows to instead of a local disk.  This has a couple of advantages (and some disadvantages that the cache might resolve) in my opinion.  I would be able to have all recordings saved on and served from my unRAID server, offering a safety net and ample storage for both SD and HD shows.

 

Here are my questions - Sage uses the directory specified to record AND play back shows.  In fact, Sage clients generally access the shows for playback directly from the directory and not through the Sage "server".  My concern is how Sage would save the shows to the cache and then play a recording the next day from the unRAID array IF the directory path has changed.  So would Sage record to:

  • //server/tv/cache, or
  • would the path be //server/tv and unRAID would know to save it first to the cache, and then move it to the array based on the schedule ?

 

How does unRAID know to save the file in the cache first?

 

I also have an application that monitors the recording directory and "scans" new recorded shows for commercial detection/skipping. I would have the application monitor the cache for any new recordings, but the application generates a small file that must be saved in the same location as the show to tell Sage about the commercial timemarks.

 

Confused?  I am, and I know what I am trying to say  :-\

 

Would I be able to use the new cache feature the way it is implemented today?


I too use SageTV with unRAID, works great!

 

I believe you are forgetting how User Shares work.  You do not specify reading or writing to the physical disks, whether data disks or the cache disk.  You specify reading and writing to the User share path, and unRAID decides which physical drive is used.  With the new Cache drive, the only thing new that you have to do is add the Cache disk to the Included drives for your User Share, and it will be transparently managed.  When you write to your TV share path, it will save it to the Cache disk at 2 to 4 times the previous speed, and then you will read it back from the same TV share path, without caring whether it is coming from the Cache disk or has been moved to a data disk.  Read speed should be the same either way.

 


Just a thought after thinking about RAID1 on the cache disk,

I thought I would stimulate thought on SAFE50 and SAFE33.

 

On the new DSPRO external drive chassis I have, this feature is available.

It seems like it would go well inside unraid for the Cache disks whereby

part of the pair could be used for Cache (raid1) and part of the pair could be used as raid0/spanned for parity.

Sort of like the Intel Matrix Raid.

 

With a pair of 1TB drives, we would have 1TB for parity, 500GB mirrored for Cache using SAFE50.

Or possibly.

1.4TB for Parity, 300GB mirrored cache using SAFE33.

 

I figure if I put applications and data on the cache drive, they will be spinning pretty frequently.

So if the pair were divided, my parity pair would be spinning also.

 

Perhaps in the wrong thread, but I figured I would add it to where the conversation discussed RAID1 on the cache.

Feel free to move it.

 

 

 


I too use SageTV with unRAID, works great!

 

Great to hear!!!  I was concerned about the speed/load of writing the new recordings, having ShowAnalyser "scan" new recordings, and a couple of clients watching content from the server.  I currently have 5 SD tuners and will be adding two new R5000 HD tuners to feed my habit ;)  I was planning on putting the SD tuners on the Sage server and the two R5000s, along with DirMon and ShowAnalyser, on another machine, all using my brand-spanking-new unRAID server to store the content.  I am just a little worried that the load over the Gig network connection and the pipe to the hard drives might be a bottleneck.

 

I believe you are forgetting how User Shares work.  You do not specify reading or writing to the physical disks, whether data disks or the cache disk.  You specify reading and writing to the User share path, and unRAID decides which physical drive is used.  With the new Cache drive, the only thing new that you have to do is add the Cache disk to the Included drives for your User Share, and it will be transparently managed.  When you write to your TV share path, it will save it to the Cache disk at 2 to 4 times the previous speed, and then you will read it back from the same TV share path, without caring whether it is coming from the Cache disk or has been moved to a data disk.  Read speed should be the same either way.

 

Actually, I haven't forgotten how User Shares work ... I am still trying to figure it out ;D

 

The way you describe it working is the way I was hoping and thinking it would work.  I just wasn't sure if the cache disk was also in the mix.

 

Right now, my unRAID server is only storing movies (500+) for Sage.  Next step is to move my music library to unRAID, and then I will tackle the migration of the Sage shows and add the R5000s to the mix.  I still have a couple of weeks of tinkering ;)

 

Thanks for your help.


Since I don't allow writing to a user share, I have the following questions, because it seems you might require user shares to have writing enabled.

 

When you enable the disk cache for a certain share, a folder by that same name is created in the disk cache directory.  This happens automatically.  Then I can write directly to //tower/diskcache (unlike writing to the actual disk, parity is not updated with this), and at the scheduled time, the mover will move the corresponding files to the user share disks on the array, which will also build/update the parity.  How would this work if you don't have writing enabled?  How would it know where to move it (since I only write to a specific disk when needed, would I be able to select which disk I want it written to when the mover moves the data)?

 

Thanks


I too use SageTV with unRAID, works great!

 

I believe you are forgetting how User Shares work.  You do not specify reading or writing to the physical disks, whether data disks or the cache disk.  You specify reading and writing to the User share path, and unRAID decides which physical drive is used.  With the new Cache drive, the only thing new that you have to do is add the Cache disk to the Included drives for your User Share, and it will be transparently managed.  When you write to your TV share path, it will save it to the Cache disk at 2 to 4 times the previous speed, and then you will read it back from the same TV share path, without caring whether it is coming from the Cache disk or has been moved to a data disk.  Read speed should be the same either way.

 

 

Right, except that you don't add the Cache disk to the "Included disks".  Instead, there's a drop-down box for each share that enables or disables the cache disk for that share.  At first we did implement it exactly as you describe, using the name "disk0" instead of "cache", but there are other reasons why this wasn't the best way to do it.


Since I don't allow writing to a user share, I have the following questions, because it seems you might require user shares to have writing enabled.

 

When you enable the disk cache for a certain share, a folder by that same name is created in the disk cache directory.  This happens automatically.  Then I can write directly to //tower/diskcache (unlike writing to the actual disk, parity is not updated with this), and at the scheduled time, the mover will move the corresponding files to the user share disks on the array, which will also build/update the parity.  How would this work if you don't have writing enabled?  How would it know where to move it (since I only write to a specific disk when needed, would I be able to select which disk I want it written to when the mover moves the data)?

 

Thanks

 

If your user shares are read-only, then no objects will get created via that share on any disk.  If disk shares are write-enabled, then you could write to a user share directory on the cache disk, or any other disk.  Put another way, the access controls on a user share only apply to access via Samba (i.e., Windows Networking).

 

If you have set the 'excluded disks' string for a share to name all the disks, then the mover will get an error on every file it tries to move.

