unRAID Server Release 5.0-rc16b Available



@binhex - Thanks.  I noticed Tom's post, but on first read it wasn't clear to me exactly what Tom thought the correct definition of free space should be.  That's why I added my reply, to clearly state what I believe it should be.

[EDIT: after composing my replies, I reread Tom's post and it made more sense :-)  ]

Here is a link: http://lime-technology.com/forum/index.php?topic=28353.30

 

@garycase - The definition I presented matches the behavior of Windows Server environments, and likely any environment in which shared network folders reside on a single logical volume and no disk quotas are defined. 

  • Every share shows free space as the maximum that could be written to that share at this very moment.
  • If you write a file to one of those shares, free space drops accordingly on all shares that reside on the same logical server volume.
  • Total available free space on the server does not equal the sum of all reported free space on all shares.

To compute an accurate Total Available Free Space in unRAID's default web GUI, you'd have to go to the Main tab, then manually total the values in the Free column for all rows where Device = disk[1..n].  Again, in my experience this matches other server environments.
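
For illustration, here's a minimal sketch of that manual total, assuming the standard unRAID mount points /mnt/disk1..N (the disk count passed in below is just a placeholder, not anything unRAID exposes):

import os

def total_array_free(num_disks):
    """Sum free bytes across the data disks, mirroring a manual total
    of the Free column on the Main tab."""
    total = 0
    for n in range(1, num_disks + 1):
        st = os.statvfs(f"/mnt/disk{n}")
        total += st.f_bavail * st.f_frsize  # free blocks x fragment size
    return total

print(f"Total array free: {total_array_free(3) / 1e9:.1f} GB")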

 

The cache drive overflow behavior does complicate matters a bit.  I believe cache space should be excluded from the free space of the array shares, since it's not protected space, and protection is the whole point of shares on the protected array.


I agree that's a reasonable way to display the free space ... but as I noted, it's not really a very good metric as long as there are multiple shares all using the same pool of free space.

 

The only thing that really matters is how much free space is on the server ... and that's very easy to see at a glance without worrying about a per-share value.

 


I agree that the Free Space computation in the latest version doesn't make sense.

Right, as already mentioned, a bug got introduced.

 

For a share on the protected array, free space should equal the sum of all free space on all drives on which the share is allowed to write.

 

Cache space should never be included in these totals, even though unRAID will allow writing beyond the array limit until the cache drive is full.  When we write to a share on the protected array, we expect it to be protected immediately or after the next scheduled mover script execution.  Thus, the array should be presented as full when there is no more protected space available for it to use.
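
A minimal sketch of that rule, under assumed mount paths (the included_disks list below stands in for whatever disks the share's include/exclude settings allow; it is not actual unRAID internals):

import os

def share_free_space(included_disks):
    """Free space for a share = sum of free space on the array disks the
    share may write to; the cache disk is deliberately left out."""
    return sum(os.statvfs(d).f_bavail * os.statvfs(d).f_frsize
               for d in included_disks)

# e.g. share_free_space(["/mnt/disk1", "/mnt/disk2"])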

 

To do this we need to limit the amount of space taken by a share on the cache disk to the amount of space left for the share on the array.  This is very hard, because once a new file gets opened on the cache disk it can keep growing until the client decides to stop writing it.  That is, we're not told in advance how big the file will be.  The code would have to check free space after each write, and I think this would be expensive in terms of execution overhead.
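
To make that overhead concrete, here's a rough sketch (not unRAID's actual code) of what a per-write check would look like; the extra statvfs() syscall on every chunk is the cost being described:

import os

CHUNK = 64 * 1024  # bytes per write

def copy_with_space_check(src_path, cache_path, array_mount):
    """Copy to the cache, but stop once the growing file would no longer
    fit in the share's remaining protected array space."""
    written = 0
    with open(src_path, "rb") as src, open(cache_path, "wb") as dst:
        while chunk := src.read(CHUNK):
            dst.write(chunk)
            written += len(chunk)
            st = os.statvfs(array_mount)  # repeated per chunk: the overhead
            if written > st.f_bavail * st.f_frsize:
                raise OSError("no protected array space left for this share")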

 

The other problem that can happen is that an existing file on one of the array disks can get extended, taking up free space.

 

Well, it's possible, but I'm not sure it's worth it.  Also, what if the cache is 'mirrored'?  Then the urgency of moving to the array decreases.

 


Well, it's possible, but I'm not sure it's worth it.  Also, what if the cache is 'mirrored'?  Then the urgency of moving to the array decreases.

 

Definitely do NOT think "... it's worth it"  ==>  there are FAR more significant things I (and I suspect most others) would prefer you spend your time on!!  [e.g. UPS support, e-mail notification, etc.]    In fact, if the system had automatic e-mail notifications for failing/failed drives, it would probably be pretty easy to add a "running out of space" e-mail alert whenever the total available array space dropped below some (possibly configurable) number (e.g. 100GB).
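
Something along these lines would do it; the threshold, addresses, and SMTP host below are all placeholder values, not anything unRAID ships with:

import os
import smtplib
from email.message import EmailMessage

THRESHOLD = 100 * 10**9  # 100 GB; could be made configurable

def check_and_alert(disks, smtp_host="localhost"):
    free = sum(os.statvfs(d).f_bavail * os.statvfs(d).f_frsize for d in disks)
    if free < THRESHOLD:
        msg = EmailMessage()
        msg["Subject"] = f"unRAID low on space: {free / 1e9:.0f} GB free"
        msg["From"] = "unraid@example.com"
        msg["To"] = "admin@example.com"
        msg.set_content("Total available array space is below the alert threshold.")
        with smtplib.SMTP(smtp_host) as s:
            s.send_message(msg)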

 


Also, what if the cache is 'mirrored'?  Then the urgency of moving to the array decreases.

 

Absolutely ==>  AND it makes using a cache a much more attractive option, since cached writes would no longer be "at risk"!!

 

Ever since you mentioned the btrfs cache pool feature, I've been anxiously awaiting this capability (as I don't use a cache for that very reason).    I note your web page STILL says "... Our latest unRAID Server release supports several advanced features including a cache pool. For example, you can configure your 10 high capacity hard drives in an unRAID array, and configure 4 fast 2.5" SSDs in a RAID-1 btrfs cache pool."

 

... is that still going to be true for v5.0 ??

 


Upgrade went fine - kicked off a parity check as I hadn't run one in a while, and a few hours later the web page was unresponsive.  I do have Simple Features installed, so I will need to remove that and test, but everything else seems to be working well (minus known issues).

 

 


The GUI has gone unresponsive for me on 3 different machines.  All was fine last night, went to bed, got up this morning and all 3 just spin.

 

Anyone else?

 

John

 

Just checked mine and it is fine using the stock web GUI.  Are you using stock or Simple Features?

 

Reason I ask is that in the last couple of RCs I've found Simple Features to be a bit unstable, so I removed it.  Will revisit post-final, once a new version comes out.

 


 

It's unlikely to be Simple Features.  SF will either run, or not run.  There may be display bugs introduced because Tom has changed something that hasn't been changed in my code yet, but the Simple Features base package cannot introduce instability, as it basically runs off the same code and in the same manner as the stock GUI.

 

Cheers

For a share on the protected array, free space should equal the sum of all free space on all drives on which the share is allowed to write.

 

Cache space should never be included in these totals, even though unRAID will allow writing beyond the array limit until the cache drive is full.  When we write to a share on the protected array, we expect it to be protected immediately or after the next scheduled mover script execution.  Thus, the array should be presented as full when there is no more protected space available for it to use.

To do this we need to limit the amount of space taken by a share on the cache disk to the amount of space left for the share on the array.  This is very hard, because once a new file gets opened on the cache disk it can keep growing until the client decides to stop writing it.  That is, we're not told in advance how big the file will be.  The code would have to check free space after each write, and I think this would be expensive in terms of execution overhead.

 

Tom, thanks for sharing details.

 

Are we talking about real-time free space reporting (such as Windows Explorer would display), or are we talking about the data listed on the Shares tab in the web GUI?  I thought it was the latter, which should be an easier task than real-time.

 

Also, I wasn't asking you to prevent unRAID from allowing writes to the cache beyond what would fit in the protected array space for the share.  I agree that this would be difficult and would slow down write operations.  I do think that the space consumed by files written (and closed) on the cache drive should be subtracted from the free space available on the relevant share.  When the protected share is full and has overflowed onto the cache drive, I think the free space should show negative if possible, or zero if a negative value would cause errors elsewhere.
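
In other words, a display rule as simple as this (a sketch of the suggestion, not actual GUI code; both parameters are hypothetical names):

def displayed_free(array_free, cache_used_by_share, allow_negative=True):
    """Subtract the space consumed on the cache by the share's files;
    clamp at zero only if a negative value would cause errors elsewhere."""
    free = array_free - cache_used_by_share
    return free if allow_negative else max(0, free)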


It looks like I am still plagued by the "Transport endpoint not connected" issue.

I have attached my SYSLOG in case anyone wants to look it over. I am *not* running Plex, that's the difference between mine and the others.

 

Tom is well aware of this issue => that's probably the main reason at this point that v5.0 isn't ready for release.  (I can't say that for sure, but I know he's been working hard to resolve this specific issue.)
