unRAID Server Release 5.0-rc16b Available



I don't understand what you guys are saying about the way free space is calculated. Can someone explain it in more detail? I expect to see info on how much disk space is free per disk. Others seem to expect it to show something different?


Tom ----

 

I was under the impression that version 5 would allow us to use a maximum of seven hard drives and that the user would be allowed to select whether to use the seventh drive as a cache drive or a data drive.  It does not appear (to me) that this feature has been added in this release.  Am I missing something, is that feature still to be added, or have you changed your mind?


check your share settings to verify split levels, disk assignments (includes/excludes), etc.

 

Disk assignments are all set to All, except Time Machine, which is set to just disk 8. That's a 3TB drive, which is registering 3.94TB free?

 

In the stock webGui, click on each share link and take a look at the 'included disks/excluded disks' mask; you'll probably see the problem.  There's a bug in the way SF handles keywords like "all".


I don't understand what you guys are saying about the way free space is calculated. Can someone explain it in more detail? I expect to see info on how much disk space is free per disk. Others seem to expect it to show something different?

The 'free space' calculation is a little dodgy...

 

For a cache-only share (i.e., a share where the top-level directory only exists on the cache drive):

- used space tells you how much storage on the cache drive the share is taking

- free space tells you how much free space there is on the cache drive.

 

The reason this is "dodgy": suppose you have two cache-only shares, "share1" and "share2".  share1 is using 10GB and share2 is using 20GB of a 50GB cache drive.  Calculating share1 free space shows:

Total: 10GB, Free: 20GB

and calculating share2 free space shows:

Total: 20GB, Free: 20GB

 

Total used space is 30GB, which is correct, but clearly there is not 40GB free space.  The amount of free space left for share1 to use is affected by how much free space share2 uses and vice-versa.  So free space is really "potential" free space.
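The double-counting can be shown with a tiny sketch (hypothetical code using the numbers from the example above; this is not unRAID's actual implementation):

```python
# Hypothetical numbers from the example: a 50GB cache drive
# holding two cache-only shares.
cache_size = 50                               # GB
share_used = {"share1": 10, "share2": 20}     # GB used per share

# Real free space left on the drive:
cache_free = cache_size - sum(share_used.values())   # 50 - 30 = 20

# Each share's report includes the same drive-level free space:
for name, used in share_used.items():
    print(f"{name} -> Total: {used}GB, Free: {cache_free}GB")

# Adding the per-share "free" figures double-counts the drive:
reported_free = cache_free * len(share_used)  # 40GB "free" on paper
actual_free = cache_free                      # only 20GB in reality
```

Each share's "free" figure is individually correct as potential space, but the figures overlap because both shares draw from the same drive.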

 

Same thing happens on other shares.  The general algorithm is this:

 

for every disk (including cache) that the share currently exists on:

  include that disk's free space in the free space available for the share

 

for every disk (not including cache) that the share does not currently exist on:

  if it's possible the disk could be used for the share, include that disk's free space

 

What does it mean for a disk to be possibly used for a share (but isn't currently)?  It means that the include/exclude masks do not prohibit the disk from being used eventually.

 

One final note.  A "split0" share (a share whose split level is set to 0) behaves like a cache-only share: it only includes the disks where the share currently exists in its calculation.
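As a rough sketch of that rule (hypothetical function and disk names, not the actual emhttp code):

```python
def share_free_space(disk_free, share_disks, include=(), exclude=(),
                     split0_or_cache_only=False):
    """Sum 'potential' free space for a share, per the rule above.

    disk_free: dict of disk name -> free GB (may include "cache")
    share_disks: disks the share currently has files on
    include/exclude: the share's disk masks (empty include = all)
    """
    free = 0
    for disk, gb in disk_free.items():
        if disk in share_disks:
            free += gb                       # share already lives here
        elif not split0_or_cache_only and disk != "cache":
            # could the share eventually expand onto this disk?
            if (not include or disk in include) and disk not in exclude:
                free += gb
    return free

disks = {"disk1": 100, "disk2": 50, "cache": 20}
print(share_free_space(disks, {"disk1"}))            # 150: disk2 counts too
print(share_free_space(disks, {"disk1"},
                       split0_or_cache_only=True))   # 100: existing disks only
```

Note how the second call models the split0/cache-only case: disks the share might expand onto are simply never counted.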

 

So there's still some questions:

- for a normal share that has some files still on the cache - should the cache disk be included in free space calculation?  Probably not, but it is currently.

- for a normal share that does not have any files on the cache - cache disk free space is currently not included.

 

Speaking of the cache disk: let's say a certain cache-able share has 10GB total free space, and you have an empty 50GB cache drive.  Currently the system will let you fill up that cache drive with new data for the share until you hit the cache drive space limit (50GB).  But if you exceed 10GB, then the 'mover' is going to fail.  So, should the amount of space a share can use on the cache drive have a "floor" of the size of free space for the share on the array?  Probably, but the current code does not check this.
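The "floor" described there could be sketched like this (a hypothetical check, not current unRAID behavior; as noted above, the current code does not do this):

```python
def cache_write_limit(cache_free, array_free_for_share):
    """How much new data a cached share should accept, in GB.

    Capped at the array free space for the share, so the mover
    is never handed more data than the array can absorb.
    """
    return min(cache_free, array_free_for_share)

# 50GB free on cache but only 10GB of array space for the share:
print(cache_write_limit(50, 10))   # 10 -- past this point the mover would fail
```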

 


Tom ----

 

I was under the impression that version 5 would allow us to use a maximum of seven hard drives and that the user would be allowed to select whether to use the seventh drive as a cache drive or a data drive.  It does not appear (to me) that this feature has been added in this release.  Am I missing something, is that feature still to be added, or have you changed your mind?

Not changing anything until "cache pool" is implemented.



Thanks, Tom, for the explanation.  That makes sense.


Anyone else get this message?  I stopped the array and performed a clean shutdown prior to upgrading to rc16b.  Granted...I do have a non-standard configuration.  I PXE boot unRAID in ESXi.  Not sure if that is a factor but I'm pretty sure I have not seen this error before.  FYI...I do not see this error referenced in the syslog...only the console.

 

corrupt_zps52667edc.png

 

John

 

 


Also, kinda curious about the highlighted message below.  Any concerns?  Disk 2 is present and accounted for.  Not sure why I have that message in the syslog (attached).

 

Now that I think about it too, my array was not autostarted after I upgraded to RC16b.  Has anyone else experienced this?

 

Jul  4 13:52:38 unRAID emhttp: Device inventory:

Jul  4 13:52:38 unRAID emhttp: WDC_WD20EARS-00MVWB0_WD-WMAZA3373683 (sdb) 1953514584

Jul  4 13:52:38 unRAID emhttp: WDC_WD20EADS-00W4B0_WD-WCAVY6098461 (sdc) 1953514584

Jul  4 13:52:38 unRAID emhttp: Hitachi_HDS722020ALA330_JK11D1B8HXGZ6Z (sdd) 1953514584

Jul  4 13:52:38 unRAID emhttp: WDC_WD1001FALS-00J7B0_WD-WMATV0720742 (sde) 976762584

Jul  4 13:52:38 unRAID emhttp: ST31000528AS_6VPB4VB8 (sdf) 976762584

Jul  4 13:52:38 unRAID emhttp: WDC_WD20EARX-00PASB0_WD-WCAZAF042490 (sdg) 1953514584

Jul  4 13:52:38 unRAID emhttp: SAMSUNG_HD204UI_S2H7J1BB121032 (sdh) 1953514584

Jul  4 13:52:38 unRAID emhttp: get_device_size: open: No such file or directory

Jul  4 13:52:38 unRAID emhttp: device /dev/sdi problem getting size

Jul  4 13:52:38 unRAID emhttp: ST2000DM001-1CH164_W1E2FV13 (sdj) 1953514584

Jul  4 13:52:38 unRAID kernel: mdcmd (1): import 0 8,48 1953514552 Hitachi_HDS722020ALA330_JK11D1B8HXGZ6Z

Jul  4 13:52:38 unRAID kernel: md: import disk0: [8,48] (sdd) Hitachi_HDS722020ALA330_JK11D1B8HXGZ6Z size: 1953514552

Jul  4 13:52:38 unRAID kernel: mdcmd (2): import 1 8,16 1953514552 WDC_WD20EARS-00MVWB0_WD-WMAZA3373683

Jul  4 13:52:38 unRAID kernel: md: import disk1: [8,16] (sdb) WDC_WD20EARS-00MVWB0_WD-WMAZA3373683 size: 1953514552

Jul  4 13:52:38 unRAID kernel: mdcmd (3): import 2 0,0

Jul  4 13:52:38 unRAID kernel: md: disk2 missing

Jul  4 13:52:38 unRAID kernel: mdcmd (4): import 3 8,144 1953514552 ST2000DM001-1CH164_W1E2FV13

Jul  4 13:52:38 unRAID kernel: md: import disk3: [8,144] (sdj) ST2000DM001-1CH164_W1E2FV13 size: 1953514552

Jul  4 13:52:38 unRAID kernel: mdcmd (5): import 4 8,96 1953514552 WDC_WD20EARX-00PASB0_WD-WCAZAF042490

Jul  4 13:52:38 unRAID kernel: md: import disk4: [8,96] (sdg) WDC_WD20EARX-00PASB0_WD-WCAZAF042490 size: 1953514552

Jul  4 13:52:38 unRAID kernel: mdcmd (6): import 5 8,112 1953514552 SAMSUNG_HD204UI_S2H7J1BB121032

Jul  4 13:52:38 unRAID kernel: md: import disk5: [8,112] (sdh) SAMSUNG_HD204UI_S2H7J1BB121032 size: 1953514552

Jul  4 13:52:38 unRAID kernel: mdcmd (7): import 6 8,80 976762552 ST31000528AS_6VPB4VB8

Jul  4 13:52:38 unRAID kernel: md: import disk6: [8,80] (sdf) ST31000528AS_6VPB4VB8 size: 976762552

Jul  4 13:52:38 unRAID kernel: mdcmd (8): import 7 8,32 1953514552 WDC_WD20EADS-00W4B0_WD-WCAVY6098461

Jul  4 13:52:38 unRAID kernel: md: import disk7: [8,32] (sdc) WDC_WD20EADS-00W4B0_WD-WCAVY6098461 size: 1953514552

Jul  4 13:52:38 unRAID kernel: mdcmd (9): import 8 8,64 976762552 WDC_WD1001FALS-00J7B0_WD-WMATV0720742

Jul  4 13:52:38 unRAID kernel: md: import disk8: [8,64] (sde) WDC_WD1001FALS-00J7B0_WD-WMATV0720742 size: 976762552

 

John

syslog.txt


Also, kinda curious about the highlighted message below.  Any concerns?  Disk 2 is present and accounted for.  Not sure why I have that message in the syslog (attached).

 

Just before your highlighted line is:

Jul  4 13:52:38 unRAID emhttp: get_device_size: open: No such file or directory

Jul  4 13:52:38 unRAID emhttp: device /dev/sdi problem getting size

 

Later the drive comes online:

Jul  4 13:53:10 unRAID emhttp: WDC_WD20EARS-00MVWB0_WD-WMAZA3260729 (sdi) 1953514584

 

Something is not right with that drive, cabling, port, etc.  Check all cabling, maybe move it to a different SATA port, and keep an eye on it; the drive might be on its way out.



 

Thanks Tom.  I think I may try to make that drive fail sooner rather than later.  :)

 

Serial Number          Model Number                Warranty Status                Warranty Exp Date

WMAZA3260729        WD20EARS-00MVWB0    IN LIMITED WARRANTY      01/24/2014

 


Thanks Tom.  I think I may try and make that drive fail sooner than later.  :)

 

Serial Number          Model Number                Warranty Status                Warranty Exp Date

WMAZA3260729        WD20EARS-00MVWB0    IN LIMITED WARRANTY      01/24/2014

 

 

You can always just file an RMA and send it back now. I've sent back drives to WD and Seagate that were exhibiting weird behavior but otherwise "worked" and passed a SMART test. Just tell them the drive is dropping out erratically and they'll approve it and send you a replacement. Since it's WD you can even do an advanced replacement. You may get lucky and get a bigger drive back, as has been known to happen sometimes. I sent in a 3TB Green that was acting similar to how yours is now and they sent me a 4TB Hitachi 5K4000 in return.


Agree with mrow => just go ahead and RMA it, or at least just use it occasionally.   

 

You may want to connect it to a PC first and run Data Lifeguard on it ... if it passes all the tests [quick, extended, then write zeroes to the full drive, then repeat the short & extended] then it's actually fine for use outside of UnRAID.    If you don't have enough backup drives for all your data, just add it to that pool.

 

Otherwise, an Advanced Replacement RMA with WD works very well ... and as noted, they often send newer, higher-capacity drives [I've had this happen on 3 occasions].

 


I think I found a problem ?!

 

I have been using RC12a till now and have automated mounting commands for output of YAMJ to my PopcornHour media players:

mount -t cifs -o user=nmt,password=1234 //192.168.2.115/Share /mnt/cache/unRAID_Apps/PCH_MasterBedRoom
mount -t cifs -o user=nmt,password=1234 //192.168.2.120/Share /mnt/cache/unRAID_Apps/PCH_LivingRoom

 

in RC12a they work fine and the remote locations mount perfectly.

 

in RC16b they do not mount and the command returns:

 

mount error(13): Permission denied
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

 

I reverted back to RC12a, rebooted and again the mounting works fine.

 

something changed in the new kernel?

 

I came across this: https://bbs.archlinux.org/viewtopic.php?id=160047

does "sec=ntlm" make sense?

 

I will try it...

 

EDIT

Ok, I checked the "sec=ntlm" option and it seems to solve the problem. mounts are working.

mount -t cifs -o user=nmt,password=1234,sec=ntlm //192.168.2.115/Share /mnt/cache/unRAID_Apps/PCH_MasterBedRoom
mount -t cifs -o user=nmt,password=1234,sec=ntlm //192.168.2.120/Share /mnt/cache/unRAID_Apps/PCH_LivingRoom

 



Yeah, here's the commit that did this:

https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=81bcd8b795229c70d7244898efe282846e3b14ce

 

Good example of fixing something that wasn't broke IMHO.  If users wanted/needed better authentication it's easy enough to set it up, don't need someone fixing it for us.  Oh well.


Anyone else get this message?  I stopped the array and performed a clean shutdown prior to upgrading to rc16b.  Granted...I do have a non-standard configuration.  I PXE boot unRAID in ESXi.  Not sure if that is a factor but I'm pretty sure I have not seen this error before.  FYI...I do not see this error referenced in the syslog...only the console.

 

That is your flash drive, and it does appear in your syslog here:

Jul  4 13:52:38 unRAID kernel: sd 2:0:3:0: [sdj] Attached SCSI disk

Jul  4 13:52:38 unRAID kernel: FAT-fs (sda): Volume was not properly unmounted. Some data may be corrupt. Please run fsck.

Jul  4 13:52:38 unRAID logger: /etc/rc.d/rc.inet1:  /sbin/ifconfig lo 127.0.0.1

 

It just means the FAT file system on your flash drive was not closed properly.  This typically occurs when you take the flash drive to another machine, update its files, then pull it without doing the 'Safe to Remove' procedure.  Just take it back there and run the appropriate CheckDisk/Scandisk/fsck on it.


Hello,

 

I have just upgraded from rc14 to rc16b.  In rc14 I noticed this but thought it was a fluke.  It is still present in rc16b, so I thought I would mention it.

 

Please take a look at the picture below:

 

5rje.jpg

 

I have a 4GB array right now (yes its tiny)

Next, you will notice that while a parity check is running, it shows Total Size: 2TB (yes, terabytes).

 

This is a bit confusing, but I believe this area is showing the status of the parity check, and therefore the size of the parity drive, correct?  That value would not be the "total size" of the array, as the label suggests.  If that's the case, shouldn't "Total Size" in this section be relabeled "Parity Drive Size" to avoid confusion?

 

Also I know this is a formatting issue, but shouldn't all of these labels on the left be moved over closer to their values instead of sitting off on the far left like that?

 

I know these are small potatoes compared to what you are really trying to fix, but shouldn't these small issues also be addressed before release?  These formatting errors have been around for a very long time; they don't hurt anything, but I think they should be fixed before release.

 

I am performing a Parity Check right now but I don't think I will find anything wrong.

 

I have been following along for quite a while and Tom, I do have to say, the wait has been worth it.  Very good job.

 

-- Sideband Samurai



I hope this doesn't affect my Netgear NTV550; I'll have to test tonight.

This topic is now closed to further replies.