unRAID Server release 4.5-beta7 available


limetech


There are optimized XOR functions built into the Linux kernel that are part of the crypto package but are also used to calculate parity.  These functions had an argument list like this: (size, count, ptr), where size is the buffer size, ptr is an array of buffer pointers, and count is the number of pointers in ptr.  By convention, ptr[0] is the destination buffer into which ptr[1], ptr[2], etc. are XOR'ed.  Well, somewhere along the line someone changed the argument list to this: (size, count, dest, ptr), where everything is the same except that the destination buffer is specified as its own argument and the ptr list only specifies 'source' buffers.  The unRAID bug was that instead of including the proper kernel crypto header files, I had just copied the xor function declaration into a private header file, so the code did not pick up the change in the argument definition.  So to fix this I had to include the proper kernel header file and then change the code to conform to the new xor function argument list... make sense?

 

Yes, that does actually make sense... after I read it a couple of times to make sure I was understanding it correctly.

 

Thanks for the explanation.


I'm still only getting 16MB/s uploading to the cache drive. I was on 4.4.2, and to update I shut down unRAID and copied the 4.5-beta7 bzimage and bzroot files to the flash drive, overwriting the old files. Do I need to do something different?

 

What you did should be fine. However, to verify, go to the unRAID main page in your browser and check the version number at the top right of the page.


My xor parity calculation rate has dropped in beta 7. I noticed the syslog reports a different "best" xor function routine. I ran a parity check and the figures seem comparable (no surprise, since the XOR operation is fast compared to read/write disk operations). Just thought I'd mention it.

 

CPU0: AMD Sempron Processor LE-1200 stepping 01

 

Total of 1 processors activated (4200.35 BogoMIPS).

 

Oct 14 10:46:56 Tower kernel: xor: automatically using best checksumming function: pIII_sse

Oct 14 10:46:56 Tower kernel: pIII_sse : 6468.800 MB/sec

Oct 14 10:46:56 Tower kernel: xor: using function: pIII_sse (6468.800 MB/sec)

 

Previously it was using p5_mmx at around 7196 MB/sec.

 

CPU0: AMD Sempron Processor LE-1200 stepping 01

Total of 1 processors activated (4200.36 BogoMIPS)

Oct 11 00:12:07 Tower kernel: md: xor using function: p5_mmx (7196.000 MB/sec)

 

Doesn't seem to have made a noticeable difference in parity check speed.

 

 

 


Just upgraded from 4.4.2 mainly because I wanted the fill-up allocation method for user shares.

 

I was also glad to see this fixed in this beta -> http://lime-technology.com/forum/index.php?topic=4272.0

 

However, read speed from user shares over NFS has suffered a lot. I've gone from ~16MB/s in 4.4.2 to ~7MB/s in this beta. Any ideas on this speed drop or anything I could do to fix it?

 

A couple questions.  First, what NFS client are you using to connect to the unRAID server?

 

Second, is there a particular need to use NFS?  The reason I ask is that NFS will probably always be "problematic" via User Shares.  This is because the User Share file system uses the Linux FUSE module, and FUSE has known issues with NFS (though it's getting better with each new FUSE release).  Note also that NTFS-3G (Linux NTFS file system support) is also built upon FUSE, and I'm a bit concerned about possible NTFS-3G/NFS problems, but most people would rather have NTFS file system support than NFS support... get my dilemma?


My xor parity calculation rate has dropped in beta 7. I noticed the syslog reports a different "best" xor function routine. I ran a parity check and the figures seem comparable (no surprise, since the XOR operation is fast compared to read/write disk operations). Just thought I'd mention it.

 

CPU0: AMD Sempron Processor LE-1200 stepping 01

 

Total of 1 processors activated (4200.35 BogoMIPS).

 

Oct 14 10:46:56 Tower kernel: xor: automatically using best checksumming function: pIII_sse

Oct 14 10:46:56 Tower kernel: pIII_sse : 6468.800 MB/sec

Oct 14 10:46:56 Tower kernel: xor: using function: pIII_sse (6468.800 MB/sec)

 

Previously it was using p5_mmx at around 7196 MB/sec.

 

CPU0: AMD Sempron Processor LE-1200 stepping 01

Total of 1 processors activated (4200.36 BogoMIPS)

Oct 11 00:12:07 Tower kernel: md: xor using function: p5_mmx (7196.000 MB/sec)

 

Doesn't seem to have made a noticeable difference in parity check speed.

 

Yes, I noticed that another 'feature' added to the kernel at some point was to predefine the xor algorithm based on CPU architecture type.  For x86 it's set to pIII_sse.  There's no easy way to change this without modifying the kernel source, but I don't think the speed is going to make any difference whatsoever in the unRAID application.


Upgraded to beta 7; everything is running OK.

 

But when I try to stop the array, it keeps saying that disks are unmounting, and the array doesn't stop. I'm running the LS script to cache folders.

 

What do I need to do?

 

Look in the forum for the new cache_dirs script. Joe L. has updated it to work better with the new version of unRAID.


Upgraded to beta 7; everything is running OK.

 

But when I try to stop the array, it keeps saying that disks are unmounting, and the array doesn't stop. I'm running the LS script to cache folders.

 

What do I need to do?

Log in via telnet or on the system console.  Stop your added script(s); the array will then stop.

 

The new version of unRAID waits for the disks to not be "busy", and it will wait forever for the disks to not be busy, or until your syslog fills all available memory... (whichever comes first) :(

 

Joe L.


My writes are quite a bit better than in the last beta, so that's almost fixed (although for some reason writing directly to the cache is slower than to shares)... however, my reads are hurting pretty badly on both Server 08 and Windows 7 builds (both 64-bit).  Streaming seems OK, but when I'm actually taking something off the server I'm getting between 7-13MB/s tops.  This is consistent in both Total Commander and normal Windows 7/Server 08 copying.  At the moment I'm the only one accessing it, and there isn't any other serious activity going on.

 

Abit AB9 Pro

2x Rosewill RC-213 cards (1 drive each)

Mix of 9 hard drives, 2TB and 1TB (WD GPs and Seagate 7200.11s with updated firmware)

Corsair 550VX PSU

4GB of Crucial 6400 DDR2 memory

Celeron 440 CPU

 

I don't remember my reads being this bad in the past, but I haven't been taking much off the server, so I wouldn't have noticed if they were.  My logs are a bit messy now because I was copying over a bit more than my server could handle (a drive is on the way), so I ran out of space without realizing it after rebooting a night or two ago.  Also, I don't like all the fake duplicate messages, and what is the "Oct 14 10:13:57 Tower last message repeated 2 times" line all about? It's repeated a lot throughout today.  Here's a dirty and a clean syslog: http://www.charlesjorourke.com/hosting/syslog/ .


...  Also, I don't like all the fake duplicate messages, and what is the "Oct 14 10:13:57 Tower last message repeated 2 times" line all about? It's repeated a lot throughout today.  Here's a dirty and a clean syslog: http://www.charlesjorourke.com/hosting/syslog/ .

 

Why do you say "fake" duplicates?

 

A 'duplicate object' message has this format:

 

/mnt/diskX/ShareName/object...

 

It is generated when the exact same file exists on 2 or more disks; the 'diskX' part indicates where the duplicate is.

 

For example suppose we have this situation:

 

disk4/Video/Vacations/maui.avi

disk5/Video/Vacations/maui.avi

disk6/Video/Vacations/maui.avi

 

When you traverse the 'Video' user share, you will only see one "Vacations/maui.avi" file. It will be the one on disk4 (because that's the lowest-numbered disk it appears on), and that is the one read back if you view it.  Additionally, in the system log you would see two entries:

  duplicate object: /mnt/disk5/Video/Vacations/maui.avi

  duplicate object: /mnt/disk6/Video/Vacations/maui.avi

 

These messages are saying that a file with that exact path/name already exists on a lower-numbered disk than the one indicated in the message; i.e., the message doesn't tell you which disk has the first occurrence of the file.

 

To fix this you would go to the disk5 share, navigate to the Video/Vacations directory, and delete maui.avi.  Then do the same thing on the disk6 share.  Of course, it might be wise to first look at Video/Vacations/maui.avi on disk4 and make sure that's the one you want to keep.

 

One more thing: if you are using the cache drive, it behaves like "disk0"; that is, if in addition to the above, the file Video/Vacations/maui.avi also existed on the cache drive, then you would see 3 "duplicate object" messages... I'll leave it as an exercise to the reader (if you got this far) to determine which disk would be the first duplicate :)

 

Also note, in the cache drive case, when the mover fires up it will move the file off the cache drive and onto one of the array disks, choosing the disk according to the share allocation policy.  It is possible that an existing file of the same name is overwritten (if it happens to exist on the chosen disk), or another duplicate can be created.  In the latter case, the newly moved file may or may not be the one that appears in directory listings, depending on whether it wound up on the lowest-numbered drive.  Make sense?

 


I've already sorted out the true dupes (I had a few occur because of the full disks... I need to tweak my allocation a little)... but for some reason (and this has happened for a long while; I ignore it at this point), whenever I move a large amount of data (a 14GB mkv, for instance) off the cache and into the shares, I receive those dupe messages. I've checked over and over each time, and there are never multiple copies of the same file on different disks. Once I restart, I won't see those dupe messages again until I move something else. It's a strange false positive, but it never affects me... it just makes a mess of the log file.

 

But regardless, I'm not really concerned with that issue. What I am concerned with is the poor read performance I've noticed... if other users could test what kind of speeds they get copying files off their shares/drives, I would love to see some results. I'd like some closure as to whether this is an unRAID issue or a hardware issue.


Are the 'duplicates' only for files being moved from the cache drive to the array by the mover?  AND do you also have a background process 'scanning' the directories to keep file references in memory (to avoid disk spin-up)?  If the answer to both is 'yes', then that explains the duplicates: while the mover is moving a large file, a directory scan takes place which indeed finds two copies of the same file, the original on the cache disk and the new one being created on the array disk.  No real solution to this except maybe stopping the scanner while the mover is running (it can check for /var/run/mover.pid perhaps), or excluding the cache disk while the mover is running (by using /mnt/user0 as the root for the scan).

 

As for transfer performance - I'm currently looking into that.


Yes, it's most likely Joe's cache script that's causing the dupe messages... like I said, I've never found it to be a real issue. The only times it really happened I was aware of it and corrected the issues. Good to know you're looking into the read issues. I checked with a buddy of mine and he is having similar read issues, but he's unsure whether it can be due solely to software (he has new hardware that's known to have network issues).


Just upgraded from 4.4.2 mainly because I wanted the fill-up allocation method for user shares.

 

I was also glad to see this fixed in this beta -> http://lime-technology.com/forum/index.php?topic=4272.0

 

However, read speed from user shares over NFS has suffered a lot. I've gone from ~16MB/s in 4.4.2 to ~7MB/s in this beta. Any ideas on this speed drop or anything I could do to fix it?

 

A couple questions.  First, what NFS client are you using to connect to the unRAID server?

 

Linux kernel client.

 

Second, is there a particular need to use NFS?  The reason I ask is that NFS will probably always be "problematic" via User Shares.  This is because the User Share file system uses the Linux FUSE module, and FUSE has known issues with NFS (though it's getting better with each new FUSE release).  Note also that NTFS-3G (Linux NTFS file system support) is also built upon FUSE, and I'm a bit concerned about possible NTFS-3G/NFS problems, but most people would rather have NTFS file system support than NFS support... get my dilemma?

 

The need for NFS is because I have only Linux (and Unix) machines here. Samba and CIFS are kind of "foreign" to this environment (although, granted, there are Unix extensions to them). In general I can understand that unRAID is targeted more at a Windows world (and HTPC usage). That's actually fine, no complaints; however, unRAID can be great for holding more than just your media collection. Anyway.

 

I did some tests with both NFS and CIFS:

 

tom linux # dd if=/mnt/storage_smb/series/FamilyGuy/Season_5/5x18\ -\ Meet\ The\ Quagmires.avi of=/dev/null

354748+0 records in

354748+0 records out

181630976 bytes (182 MB) copied, 17.5493 s, 10.3 MB/s

tom linux # dd if=/mnt/storage/series/FamilyGuy/Season_5/5x18\ -\ Meet\ The\ Quagmires.avi of=/dev/null

354748+0 records in

354748+0 records out

181630976 bytes (182 MB) copied, 28.5523 s, 6.4 MB/s

 

CIFS gives ~4MB/s more speed than NFS. What I've noticed versus 4.4.2 is that shfs now tops CPU usage (~90%), against ~50% in 4.4.2.

 

Anyway, I was happy with ~17MB/s in 4.4.2, so hopefully it can still reach that level in new builds.

 

I'll try and see if I can do something to optimize NFS both on the server and client.

 

(EDIT: Just to clarify, the above tests were done from a remote machine.)

 

Thanks for your time!


Are the 'duplicates' only for files being moved from cache drive to array by the mover?  AND, do you also have a background process 'scanning' the directories to try to keep file references in memory (to avoid disk spin up)? 

<snip>

No real solution to this except maybe stop the scanner while the mover is running (can check for /var/run/mover.pid perhaps)

 

A new version of cache_dirs is now available that does exactly as you suggested.  As soon as it notices a running "mover" process, determined by the /var/run/mover.pid file, it pauses any further scans until the mover process is complete.

 

You may still get some "dupe" messages if the mover process starts up while cache_dirs is in the middle of a scan, but once that scan cycle is complete, that should be it until the mover process exits.

 

The same "dupe" messages can also occur if you use ANY process to access the directories involved, so looking for a movie to play via your media player can still trigger a "dupe file" message, because while the file is being moved from the cache drive to a data drive it does exist, with the same name, on both disks until the copy is complete and the file on the cache drive deleted.

 

The new version of cache_dirs is available here.  Thanks for the idea, Tom.

 

Joe L.


 

I was so hoping that we'd get to kernel 2.6.31 this time... and maybe also Samba 3.4.

 

Reading through the changelogs, I think that may solve the freezing problems I have when transferring huge amounts of data from Windows XP to my tiny unRAID box with the "not-yet-officially-supported" motherboard...

 

Any chance we might get kernel 2.6.31.x and Samba 3.4.x in the near future?

 


Actually, Linux 2.6.31.x wasn't stable until a few days ago.  It had a nasty bug that caused kernel panics on systems with 4GB of memory and PAE enabled. The fix was released less than two weeks ago.

 

Here's the history:

4 days ago v2.6.31.4

9 days ago v2.6.31.3

11 days ago v2.6.31.2

 

I can't really blame LimeTech for not wanting to roll out a long awaited beta on a kernel which until recently was subject to massive panic deaths.


Hey, silly question. I just switched to Linux 100% in the house.

How do you install unRAID on a USB stick if you are running Ubuntu?

The instructions are for XP.

 

 

Actually, it's the same :-)

 

Install syslinux (check your distro's repositories; maybe it's already installed) and then plug in the stick. After it gets mounted, note the path by running "df" from a console:

 

Filesystem          1K-blocks      Used Available Use% Mounted on

/dev/md2            154170968 139368784  14802184  91% /

udev                    10240        84    10156  1% /dev

/dev/sdc1            312559096  71936332 240622764  24% /mnt/pig

shm                    1687088      8332  1678756  1% /dev/shm

tom:/home/evas/      239146240 100609440 126388848  45% /mnt/tom

desmond:/mnt/user/storage/

                    1465104384 647637248 817467136  45% /mnt/storage

/dev/sdd1              1978056  1400032    578024  71% /media/SONY_STICK

 

It should be mounted under /media.

 

Then, just run syslinux:

 

syslinux /media/SONY_STICK

 

Then, copy the contents of unraid zip to the stick.

 

Don't forget to re-label the stick "UNRAID". If GNOME doesn't provide a GUI way to do it, you can do it with mlabel (part of the mtools package):

 

sudo mlabel -i /dev/sdd1 ::unraid

 

It's been a while since I've done this, so it might not work on the first try; let me know if you get stuck somewhere. Note that syslinux might need root privileges (run it with sudo).

 


This version seems to be randomly stopping the array for me, especially when resuming from suspend with my browser on http://tower/main.html, or when I connect via smb://tower for the first time.

 

Running OS X 10.6 and Safari.  Going to revert to 4.5-beta6 and see if it continues to happen, as it's possible it's something entirely client-related.

 

EDIT: Eventually narrowed this down to Safari's "Top Sites" feature, which preloads your top 12 websites in the background when you open it.


Archived

This topic is now archived and is closed to further replies.

