unRAID Server Release 6.2 Stable Release Available


limetech


Is it possible to simply assign my "Unassigned Devices/Docker" SSD as a cache drive?  Then I think it's possible to go to every user share and disable cache drive use, right?  Kind of like a cache drive without any caching?

 

You do not have to set any shares to use the cache drive if you don't want to.  Other than perhaps your VM share/folder, it would seem.

Neat, thanks.  Sounds like that's the best option for me, then.  I can get docker.img to a supported location, keep using my SSD, and still be able to write all data directly to the protected array without it hitting the cache drive first.

 

Just have to update all my paths.  Shouldn't be too bad, I hope.

 

I run in this configuration; for my use case there is no benefit to setting any user shares to use the cache disk.  Cache is just for appdata, Docker, and VMs.
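For anyone wanting to replicate this, a rough sketch of the setup in the webGui (option labels and paths are from memory and illustrative; they may differ slightly by version):

  • Assign the SSD as the cache device on the Main page
  • For every data user share, set "Use cache disk" to No, so writes go straight to the protected array
  • Point appdata and the Docker image at the cache, e.g. /mnt/cache/appdata and /mnt/cache/docker.img (the image location is under Settings -> Docker)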


 

Kind of disappointed I need to either add a cache drive I don't want, or leave an array disk spinning 24/7 to use Docker now.  Not terribly excited with either option.

 

 

Well, how else are you going to run Docker?  You can't run it without having a drive, and the drive will be active as the apps within the Docker containers will be accessing it.

 

You also don't HAVE to use the cache drive for caching; you could put a small SSD in there and ONLY use it for Docker and appdata.  A small 60GB or 120GB SSD is cheap; over here in the UK you can get a 120GB SSD from a decent brand for less than £40.

 

What you're trying to achieve is like having a car, wanting to drive it, but not being terribly excited by starting the engine.


Maybe it's time the unRAID webGui supported a third device type.  Currently we have Array Disks and Cache Disks.  Perhaps it's time to add an "Appdata" disk slot to the management so people are not forced into this situation?

 

I bet the Unassigned Devices route was disabled as it's a workaround for device limits in the lower tiers.


The update went mostly fine and all my things are currently live and working.  However, I've noticed that all my Docker apps have the update-ready message next to them... but every one of them results in the error "layers from manifest don't match image configuration" when I attempt to update.

 

http://lime-technology.com/forum/index.php?topic=40937.msg481138#msg481138

 

Just a reminder to all upgrading users, *please* do read both of the first 2 posts of this announcement thread.  The first has important information about upgrade tasks needed, and the second is the Additional Upgrade Advice post, with additional notes and clarifications that have been found since the release, updated from time to time.  The issue above is a common one, and mentioned there, as well as in the Docker FAQ.

I bet the Unassigned Devices route was disabled as it's a workaround for device limits in the lower tiers.

That is incorrect, as unassigned devices still count against the device limits.  The actual reason is a timing issue: Docker containers and VMs are typically started before Unassigned Devices gets around to mounting devices.

 

My guess is that if you do your own mounting in the 'go' file (and a corresponding umount in the 'stop' file) you can make sure the mount happens before emhttp gets started (and thus before the array starts).  However, that is beyond the expertise of most users.
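For the adventurous, a minimal sketch of what that could look like (the device ID and mount point here are hypothetical; substitute your own, and back up the go file first).  In /boot/config/go, before the line that starts emhttp:

mkdir -p /mnt/ssd                                           # illustrative mount point
mount /dev/disk/by-id/ata-ExampleSSD_12345-part1 /mnt/ssd   # hypothetical device ID

And the matching cleanup in /boot/config/stop:

umount /mnt/ssd   # release the drive at shutdown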


I'm lucky in that UD mounts my SSD before services are started.  I have two 120GB SSDs.  Btrfs is not an option because installing certain VMs (e.g. Mythbuntu) on a btrfs filesystem breaks backups: stopping the newly installed VM and copying the raw or qcow image to the array or another drive fails with I/O errors.

 

An appdata drive would be a nice option, as would an option in UD to mount certain drives before the array starts.


6.2 Single vs. Dual Parity testing on the X7SPA-HF-D525 (Atom D525)

@Garycase

 

Reminder that the Atom D525 is a 1.8 GHz dual core with Hyper-Threading, a 13-watt maximum TDP, and a measly PassMark score of about 700.

 

Write Speed Test

  • Write speed to the cached share; all other hardware and the test file (a 4 GB ISO) the same
  • v5: near 100 MB/s
  • v6.1.9: max of 40 MB/s
  • v6.2: 57 MB/s

 

v6.2 write speeds are still not as fast as v5; however, 57 MB/s is about 43% quicker than v6.1.9's 40 MB/s!  The low cache write speeds have been my biggest complaint with v6, but the benefits far outweigh this obstacle.

 

Parity Check and Build Times

  • Single Disk Parity Check: 10 hours, 4 minutes @ 110 MB/s
  • Dual Disk Parity Build: 17 hours, 46 minutes.  CPU at 22-48% use
  • Dual Disk Parity Check: 15 hours, 33 minutes @ 71 MB/s.  CPU 2 threads at 100%, 2 threads at 40% 

 

A dual parity check takes about 54% longer than single parity (15h33m vs. 10h04m, i.e. 933 vs. 604 minutes).

 

File Read during Parity Check

One use case for this server is storing the children's movies as unconverted ISOs for playback in the home (a mix of DVD and Blu-ray, all purchased, with the originals stored in the crawl space).

  • v6.2 single parity: DVDs play with no problem; 1080p files stutter for a few frames every 8 minutes
  • v6.2 dual parity: all movies fail playback after 3-5 seconds; MP3s fail to play back after a few songs; RAW image files cannot be opened for editing

 

File Copy during Parity Check

Note that I do NOT normally copy files (to the cache) during parity checks; however, the mover is suspended during parity checks, so in theory this should be OK.

  • v6.2 single parity: 42 MB/s
  • v6.2 dual parity: 34 MB/s

 

My choices with 6.2 are to keep single parity and retain read access during parity checks (forgoing two-disk protection), or run dual parity and accept that the server cannot be used during the monthly check.

 

I think I can live with the kids not having movies, and with having to work from a local copy, until 4 pm on the first day of each month.

 

Fantastic work LimeTech... to be able to run dual parity on a lowly D525 Atom is just wonderful!


v6.2 write speeds are still not as fast as v5; however, it is about 43% quicker than v6.1.9!  The low cache write speeds have been my biggest complaint with v6, but the benefits far outweigh this obstacle.

 

I strongly recommend looking into adjusting your disk tunables; they can make a dramatic difference.  It's possible you may be able to get close to v5 speeds again.
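For anyone who wants to experiment from the console first, the tunables can also be poked at runtime with unRAID's mdcmd utility (the values below are illustrative starting points, not recommendations, and changes made this way do not persist across a reboot; the permanent settings live under Settings -> Disk Settings):

/root/mdcmd set md_num_stripes 4096   # size of the stripe buffer pool (illustrative value)
/root/mdcmd set md_sync_window 2048   # stripes reserved for parity syncs/checks (illustrative value)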


I wouldn't call failure to play MP3s or movies with dual parity a success.

The failures to sustain a read only occur WHILE the dual parity check is running.

The parity check runs from 1:30 am to 4 pm... once a month.

 

For now I am choosing not to have server access during one day a month; the evening is still open for use.

 

The kids nap between 12:30 and 3:30... and only get a couple of movies a week as 'screen time' privileges.  So... worst case, the kids don't get a movie on a Saturday/Sunday morning if it falls on the first of the month.

 

For me, this means I need to be mindful to bring any work files over to the local machine the night before the monthly parity check... not ideal... but I can live with it for now as a tradeoff for two-disk failure protection, until I can put together a more robust server and consolidate two of the desktops into it.

 

Keep in mind... this is on a D525 Atom!

 

I strongly recommend looking into adjusting your disk tunables; they can make a dramatic difference.  It's possible you may be able to get close to v5 speeds again.

 

Thanks for this... I will start messing with the tunables ASAP via http://lime-technology.com/wiki/index.php/Improving_unRAID_Performance#User_Tunables

 

I will likely start with the Tunables Tester utility... looks like the 6.2 version is being tested for release here https://lime-technology.com/forum/index.php?topic=29009.0

OK, so I've been slightly screwed over by the dodgy dynamix update for 6.1.9 (CA auto-updates dynamix by default).  I've seen the fix for this, but thought what the hell, this is a sign from the god-geek that I should just embrace 6.2.0 and plunge straight in :-).  The issue I have is that I don't now have a working UI in which to prep for the update, so I can't get to the VM screen to disable auto boot on startup (part of the procedure for 6.2.0, I noted, is don't auto-start VMs before the tweak is performed).  I've done a quick text search across the flash drive but cannot find any reference to auto boot settings.

So the question, in short, is: does anybody know if it's possible to temporarily disable auto boot of a named VM through the CLI?  I'm no libvirt expert, so it could very well be buried in there somewhere.


So the question, in short, is: does anybody know if it's possible to temporarily disable auto boot of a named VM through the CLI?  I'm no libvirt expert, so it could very well be buried in there somewhere.

 

VMs usually start when the array starts, yes?  You can turn off auto array start by editing config/disk.cfg on the flash drive and making sure it says startArray="no", not startArray="yes".  But I'm not sure if that is specifically what you want.
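With the webGui down, that edit can be made from the console or over SSH; a minimal sketch, assuming the flash drive is mounted at /boot as usual:

cp /boot/config/disk.cfg /boot/config/disk.cfg.bak            # keep a backup first
sed -i 's/^startArray="yes"/startArray="no"/' /boot/config/disk.cfg
grep startArray /boot/config/disk.cfg                         # confirm the change took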


- Unlike 6.1.9, the Docker system in 6.2 no longer supports locating the docker.img file on a disk mounted with the Unassigned Devices plugin.  You must locate it either on the Cache drive or on the array.

 

Even if this was never officially supported, this regression is REALLY sad.

 

No, that restriction is not true.  We'll update the OP.


So the question, in short, is: does anybody know if it's possible to temporarily disable auto boot of a named VM through the CLI?  I'm no libvirt expert, so it could very well be buried in there somewhere.

 

Courtesy of bonienl: from another computer on the same network,

ssh root@<your-server-address>

Type in your root password, then:

rm /boot/config/plugins/dynamix.plg

Then reboot into 6.1.9, disable VM autostarting, etc.

...

Have fun upgrading to 6.2 :)

 


VMs usually start when the array starts, yes?  You can turn off auto array start by editing config/disk.cfg on the flash drive and making sure it says startArray="no", not startArray="yes".  But I'm not sure if that is specifically what you want.

 

That would work, IF I could then get access to the VM in order to make the video driver modifications.  I can't remember for sure: can I modify a VM's config with the array stopped?


Courtesy of bonienl:

rm /boot/config/plugins/dynamix.plg

Then reboot into 6.1.9, disable VM autostarting, etc.

 

Thanks, yeah, I did spot that, but I've decided to go ahead and just go for the update to 6.2.0 (update button pressed), so I'm just looking for steps to disable VM boot; the rest I should be able to handle.
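One avenue worth trying, assuming unRAID's per-VM autostart is backed by libvirt's own autostart flag (an assumption; it may instead be tracked in unRAID's own config), is to toggle it with virsh once libvirt is running:

virsh list --all                  # list defined VMs and their states
virsh autostart --disable MyVM    # "MyVM" is a hypothetical VM name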


No, that restriction is not true.  We'll update the OP.

Many, many people have trouble with this; some do not.  It's probably a race condition, but I still felt my contribution was justified.

 


 

What you're trying to achieve is like having a car, wanting to drive it, but not being terribly excited by starting the engine.

 

Someone clearly never watched the Flintstones, lol.
