unRAID Server Release 6.2.0-rc1 Available



Hey there,

 

I just updated my system and ran into a VM-related issue.

I'm running two Windows 10 VMs. One of them (always the same one) gets halted (yellow pause symbol in the web UI).

There is no information in the system log.

It happens a few seconds after the following message in the VM log:

 

ehci warning: guest updated active QH

 

My diagnostics file is attached below.

 

The "crashing" VM is "Guest.log".  The functioning one is "SkyFire (Guest mode).log".

tower-diagnostics-20160709-1011.zip


Every time I restart there is a share called disks under /mnt. Is this something new? What's it for?

 

That's part of Unassigned Devices, I believe.  I have my unassigned disk mount point set to 'virtualisation', so I get a share at /mnt/virtualisation/ instead of /mnt/disks/Samsung_SSD_840_EVO_250GB_S1DBNSBDB0737Z.


Every time I restart there is a share called disks under /mnt. Is this something new? What's it for?

 

That's part of Unassigned Devices, I believe.  I have my unassigned disk mount point set to 'virtualisation', so I get a share at /mnt/virtualisation/ instead of /mnt/disks/Samsung_SSD_840_EVO_250GB_S1DBNSBDB0737Z.

 

Ah, makes sense then. I've got that plugin installed. Thanks.


Every time I restart there is a share called disks under /mnt. Is this something new? What's it for?

 

It's not really a share, but a mount point created by the unassigned devices plugin for its sole use.  This keeps unassigned devices mounts separate from the /mnt/disk* and /mnt/user/ mount points used by unRAID.


...  But a feature that didn't make it into -rc1 was to automatically expand the docker/libvirt image files when they reach a certain fullness...

Full docker image files are often a sign that the user has not configured their Docker containers to write data to unRAID storage.

appdata - to store, well, the appdata

domains - to store vdisk images

system - to store loopback image files along with other goodies planned for the future

isos - to store ISO image files

 

Each of these shares has various characteristics.  In particular, all but 'isos' are marked 'cache prefer'.  This is a new cache mode that attempts to create files on the cache first, but if the cache is not present or is full, on the array.  In addition, when the 'mover' runs it will attempt to copy files from the array to the cache.
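Purely as an illustration (this is not unRAID's actual code), the 'prefer' decision described above boils down to something like the following sketch; the /mnt/cache and /mnt/disk* paths and the helper names are assumptions:

import os
import shutil

CACHE = "/mnt/cache"
ARRAY_DISKS = ["/mnt/disk1", "/mnt/disk2", "/mnt/disk3"]  # illustrative array members

def has_room(mount, needed_bytes):
    # True if 'mount' exists as a mount point and has enough free space.
    return os.path.ismount(mount) and shutil.disk_usage(mount).free >= needed_bytes

def choose_target(share, needed_bytes):
    # 'prefer' mode: try to create the file on the cache first, and fall
    # back to an array disk only if the cache is absent or full.
    for mount in [CACHE] + ARRAY_DISKS:
        if has_room(mount, needed_bytes):
            return os.path.join(mount, share)
    raise OSError("no device has enough free space")

# The mover then works the other way for 'prefer' shares: files it finds on
# the array are copied back to the cache whenever the cache has room.
print(choose_target("appdata", 10 * 1024 ** 2))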

How are you handling appdata that uses symlinks? Since the fuse file system doesn't do symlinks, right now we are specifying the /mnt/diskX or /mnt/cache location so symlinks within the appdata structure will work. Moving the appdata folder around will break things unless fuse gets symlink support so we can use /mnt/user/* paths.

 

fuse/shfs does support symlinks but not hard links.  Maybe these apps can be configured not to use hard links?  Another approach would be to define a loopback file for appdata, mounted at /mnt/user/appdata and formatted with btrfs, or even a loopback file for each container mounted at /mnt/user/appdata/container-name.  This would be to containers what vdisks are to VMs.  One nice thing about this is that you could snapshot those loopback files and/or use reflink copies.
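For anyone curious what that loopback idea would look like in practice, here is a rough sketch (it assumes root, btrfs-progs, and purely illustrative paths and sizes; this is not an official unRAID feature):

import subprocess

IMAGE = "/mnt/user/system/appdata.img"   # illustrative image location
MOUNTPOINT = "/mnt/user/appdata"         # illustrative mount point
SIZE = "20G"                             # illustrative size

def run(cmd):
    # Run a command and fail loudly if it returns non-zero.
    subprocess.run(cmd, check=True)

# Create a sparse image file, format it as btrfs, and loop-mount it.
run(["truncate", "-s", SIZE, IMAGE])
run(["mkfs.btrfs", "-f", IMAGE])
run(["mount", "-o", "loop", IMAGE, MOUNTPOINT])

# Anything inside the image then supports btrfs features such as cheap
# reflink copies, e.g.: cp -r --reflink=always <appdata-dir> <appdata-dir>.bak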

In the meantime, however, there is still the issue with links.  This issue affects many apps to a greater or lesser degree, from occasional odd little bugs all the way up to the app just plain not working, and that includes your own supported apps (notably Plex).  While it's not an ideal situation, a ton of problems would be solved if the GUI only allowed disk shares to be selected for the /config mappings.

 

This issue affects your own container (notably Plex) and is compounded by the fact that the placeholder for the default appdata mapping shows /mnt/user/appdata, and the template also defaults to /mnt/user/appdata if no default appdata mapping has been set.  Whether you like it or not, Plex works far, far better using /mnt/cache/appdata.


...  But a feature that didn't make it into -rc1 was to automatically expand the docker/libvirt image files when they reach a certain fullness...

Full docker image files are often a sign that the user has not configured their Docker containers to write data to unRAID storage.

True, but don't you remember the early days of v6, before docker.img came into being, back when Docker actually just sat on a btrfs cache disk?  There would have been zero problems with it filling up due to misconfigured apps.

 

From a support point of view, I would love an auto-expanding image file as it would lessen the problems.


Hey there,

 

I just updated my system and ran into a VM-related issue.

I'm running two Windows 10 VMs. One of them (always the same one) gets halted (yellow pause symbol in the web UI).

There is no information in the system log.

It happens a few seconds after the following message in the VM log:

 

ehci warning: guest updated active QH

 

My diagnostics file is attached below.

 

The "crashing" VM is "Guest.log".  The functioning one is "SkyFire (Guest mode).log".

 

Your disk4 is at 100% capacity (20KB free) and this is where the vdisk for Guest lives.  Try freeing up some space on that disk to see if the issue resolves itself.

 

Unrelated, Disk1 is at 99% full (19GB free).  Personally, I strive to keep disk usage below 90% because anything above that could degrade performance for writes on certain filesystems.
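For anyone who wants to spot drives creeping past that threshold, a minimal sketch along these lines would do (the /mnt/disk* layout is unRAID's usual one; the 90% figure is just the rule of thumb above):

import glob
import shutil

THRESHOLD = 0.90  # warn above 90% used, per the rule of thumb above

for disk in sorted(glob.glob("/mnt/disk[0-9]*")):
    usage = shutil.disk_usage(disk)
    used = (usage.total - usage.free) / usage.total
    if used > THRESHOLD:
        print(f"{disk}: {used:.0%} used ({usage.free / 1024 ** 3:.1f} GiB free)")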


appdata - to store, well, the appdata

domains - to store vdisk images

system - to store loopback image files along with other goodies planned for the future

isos - to store ISO image files

 

Each of these shares has various characteristics.  In particular, all but 'isos' are marked 'cache prefer'.  This is a new cache mode that attempts to create files on the cache first, but if the cache is not present or is full, on the array.  In addition, when the 'mover' runs it will attempt to copy files from the array to the cache.

How are you handling appdata that uses symlinks? Since the fuse file system doesn't do symlinks, right now we are specifying the /mnt/diskX or /mnt/cache location so symlinks within the appdata structure will work. Moving the appdata folder around will break things unless fuse gets symlink support so we can use /mnt/user/* paths.

 

fuse/shfs does support symlinks but not hard links.  Maybe these apps can be configured not to use hard links?  Another approach would be to define a loopback file for appdata, mounted at /mnt/user/appdata and formatted with btrfs, or even a loopback file for each container mounted at /mnt/user/appdata/container-name.  This would be to containers what vdisks are to VMs.  One nice thing about this is that you could snapshot those loopback files and/or use reflink copies.

In the meantime, however, there is still the issue with links.  This issue affects many apps to a greater or lesser degree, from occasional odd little bugs all the way up to the app just plain not working, and that includes your own supported apps (notably Plex).  While it's not an ideal situation, a ton of problems would be solved if the GUI only allowed disk shares to be selected for the /config mappings.

 

This issue affects your own container (notably Plex) and is compounded by the fact that the placeholder for the default appdata mapping shows /mnt/user/appdata, and the template also defaults to /mnt/user/appdata if no default appdata mapping has been set.  Whether you like it or not, Plex works far, far better using /mnt/cache/appdata.

Had some time, and found a reference in the Plex FAQ regarding symlinks / hardlinks.

 

https://support.plex.tv/hc/en-us/articles/201424327-Why-do-I-get-metadata-but-no-images-for-my-library-items-

File System Doesn't Support Symlinks/Hardlinks

The metadata–and particularly artwork–stored on your Plex Media Server makes use of symbolic links/hard links for referencing the same item from multiple locations.

Some filesystems such as Microsoft's Resilient File System (ReFS) or "drive pooling" implementations don't correctly handle things in such a way that the links used for the metadata work correctly.

If you place your Plex Media Server data directory on such a file system or drive pool, you may run into problems.

 

The net result is that your supported container and your recommended defaults are incompatible with each other.


Unrelated, Disk1 is at 99% full (19GB free).  Personally, I strive to keep disk usage below 90% because anything above that could degrade performance for writes on certain filesystems.

 

I believe this is a new recommendation? Can you expand on this in the context of RFS and XFS? Happy to start a new thread if needed as it sounds like quite an important consideration.


I just decided to check my system logs, only to find thousands of error lines, all like this:

Jul  9 01:57:51 Archangel kernel: pcieport 0000:00:02.0: AER: Multiple Corrected error received: id=0010
Jul  9 01:57:51 Archangel kernel: pcieport 0000:00:02.0: PCIe Bus Error: severity=Corrected, type=Data Link Layer, id=0010(Receiver ID)
Jul  9 01:57:51 Archangel kernel: pcieport 0000:00:02.0:   device [8086:2f04] error status/mask=00000040/00002000

 

Attached are my system diagnostics; I get several of these errors every second or so.

Others may have more expert opinions on this; here are my thoughts:

 

* Probably harmless, in that the AER function (Advanced Error Reporting) is detecting and correcting an issue.

 

* But indirectly not so harmless, because at the current rate you're going to explode the logging space with these; the syslog is growing at just under 1MB per hour.

 

* "Bad TLP" means a bad Transaction Layer packet was detected, by CRC error or wrong sequence number.  "Bad DLLP" means a bad Data Link Layer packet was detected, same reasons.  So these sound similar to CRC errors on SATA cables, except it's on a PCI bus or connection.

 

* Suggestions:

- you've started a parity check; that doesn't seem like a good idea at the moment, so until this is figured out I'd cancel it

- try isolating it to a particular PCI device: with no other drive activity happening, see if these errors occur when you read or write to a single drive, and cycle through all of your drives

- then try I/O through all other PCI-attached devices (graphics units, USB devices); experiment at will and see what affects the production of these messages, positively or negatively.  I can't say you'll actually pin the cause down, but you may eliminate a number of possibilities.

 

Having just played a game on both systems, I can say the error is not caused by my GPUs or the USB controller (no connected devices).

 

The actual error is coming from one of the onboard PLX chips that manages the PCI devices.  This was a driver issue the last time I saw these types of errors, so perhaps it is an issue with the new Adaptec driver?  I haven't seen any slowdowns or other issues on the system, but as you say, the log file is ballooning in size and the main system log is now so long that Chrome starts to lock up when I open the page.

 

The controller was plugged in before I did the update to this version and already had its drivers, so that is the only thing I can think of, unless the drive cables could be suspect.  As it happens, my server is moving to a new case in the next two weeks and will be using a different cable and backplane; this cable did come from a working system, although that system was running Windows.

 

Jamie


As a warning for the new 'Prefer' setting for 'Use cache disk': do not use it if your share is bigger than the cache disk.  It will try to copy the whole share to the cache disk, filling it up.  If the mover runs, it will try to move data off the cache and you basically get a loop, moving stuff off and back on.

 

I thought I'd put it out there.
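If anyone wants a quick sanity check before flipping a share to 'Prefer', a rough sketch like this could compare the share's size against the cache's free space (the paths and share name are only illustrative, and this assumes unRAID's usual mount layout):

import os
import shutil

def tree_size(path):
    # Total size in bytes of all regular files under 'path'.
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # file vanished or unreadable; skip it
    return total

share = "/mnt/user/Movies"  # illustrative share
cache_free = shutil.disk_usage("/mnt/cache").free
share_size = tree_size(share)

if share_size > cache_free:
    print(f"{share} ({share_size / 1024 ** 3:.1f} GiB) will not fit on the cache "
          f"({cache_free / 1024 ** 3:.1f} GiB free), so 'Prefer' would fill it up.")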


As a warning for the new 'Prefer' setting for 'Use cache disk': do not use it if your share is bigger than the cache disk.  It will try to copy the whole share to the cache disk, filling it up.  If the mover runs, it will try to move data off the cache and you basically get a loop, moving stuff off and back on.

 

I thought I'd put it out there.

 

Thanks for the heads up on this. Seems it needs some refinements.

 


fuse/shfs does support symlinks but not hard links.

 

Are you sure?  I don't know a lot about FUSE and I've only just heard of SHFS.  I have read that SHFS stopped development in 2004 and has been superseded by SSHFS.  In my searching for a solution or workaround I've seen numerous pages mentioning hard link support for FUSE and SSHFS.  If that's the case, replacing SHFS with SSHFS (if possible) should allow hard linking.


fuse/shfs does support symlinks but not hard links.

 

Are you sure?  I don't know a lot about FUSE and I've only just heard of SHFS.  I have read that SHFS stopped development in 2004 and has been superseded by SSHFS.  In my searching for a solution or workaround I've seen numerous pages mentioning hard link support for FUSE and SSHFS.  If that's the case, replacing SHFS with SSHFS (if possible) should allow hard linking.

 

I know I'm not the right one to speak up, as I know very little about this, but I did find this old comment:

...

What I understand is that while a hardlink in the fuse shfs or underlying tmpfs might be possible, it's all volatile and would be lost on any reboot. (wiki states that the shfs is created on top of a tmpfs) A hardlink from the ReiserFS of the cache drive to the shfs just isn't possible and wouldn't work anyway cause on reboot the inode (or equivalent) in the shfs might not belong to the same target file anymore.

...

I don't know where that wiki quote about tmpfs is found, but it does sound correct.  The User Share file system is completely transient and is completely destroyed when you stop the array, so all hard and soft links are gone and all inodes would be different each session.  You'd need a mechanism for saving the links and then recreating them on array start.

 

I did see comments that the FUSE/SSHFS combination could support hard links, but I don't know how it would work in our context.  Perhaps it has some persistence mechanism?  And even if it's possible to change over to it, it would need major beta testing.

 

Correction: I believe symlinks are actual little files, stored on the physical drives, so they would be preserved across array sessions.


fuse/shfs does support symlinks but not hard links.

 

Are you sure?  I don't know a lot about FUSE and I've only just heard of SHFS.  I have read that SHFS stopped development in 2004 and has been superseded by SSHFS.  In my searching for a solution or workaround I've seen numerous pages mentioning hard link support for FUSE and SSHFS.  If that's the case, replacing SHFS with SSHFS (if possible) should allow hard linking.

 

I know I'm not the right one to speak up, as I know very little about this, but I did find this old comment:

...

What I understand is that while a hardlink in the fuse shfs or underlying tmpfs might be possible, it's all volatile and would be lost on any reboot. (wiki states that the shfs is created on top of a tmpfs) A hardlink from the ReiserFS of the cache drive to the shfs just isn't possible and wouldn't work anyway cause on reboot the inode (or equivalent) in the shfs might not belong to the same target file anymore.

...

I don't know where that wiki quote about tmpfs is found, but it does sound correct.  The User Share file system is completely transient and is completely destroyed when you stop the array, so all hard and soft links are gone and all inodes would be different each session.  You'd need a mechanism for saving the links and then recreating them on array start.

 

I did see comments that the FUSE/SSHFS combination could support hard links, but I don't know how it would work in our context.  Perhaps it has some persistence mechanism?  And even if it's possible to change over to it, it would need major beta testing.

 

Yeah, I saw that post as well, but I wasn't sure what that means, if anything, for SSHFS.  I'm hopeful that switching to SSHFS would allow hard linking, but I'd really be interested in what Tom has to say about it.  Even if it isn't doable, I wonder if there might be some workaround.  I have a script I wrote (well, really a guy on Stack Overflow wrote) that will find the direct disk share path of a file in a user share.  If SSHFS is a no-go, perhaps there might be some way to intercept a command to hard link a file and rewrite it to hard link the file from the disk share, since disk shares can be hard linked and are still valid when viewed from a user share.  So when a program tries to hard link /mnt/user/my_share/myfile.doc, unRAID could intercept that command, find out that /mnt/user/my_share/myfile.doc resides in the disk share at /mnt/disk4/my_share/myfile.doc, and hard link that instead.
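Not the poster's actual script, but a rough sketch of that resolve-then-link idea, assuming unRAID's usual /mnt/disk*, /mnt/cache and /mnt/user layout, could look like this:

import glob
import os

def resolve_disk_copy(user_path):
    # Return (mount, disk_path) for the /mnt/diskN or /mnt/cache copy that
    # actually holds /mnt/user/<share>/<file>. Assumes unRAID's usual layout.
    relative = os.path.relpath(user_path, "/mnt/user")
    for mount in sorted(glob.glob("/mnt/disk[0-9]*")) + ["/mnt/cache"]:
        candidate = os.path.join(mount, relative)
        if os.path.exists(candidate):
            return mount, candidate
    raise FileNotFoundError(user_path)

def hardlink_via_disk(user_src, user_dst):
    # Hard links must stay on one filesystem, so create the link on the same
    # disk that already holds the source, mirroring the user-share layout.
    mount, src = resolve_disk_copy(user_src)
    dst = os.path.join(mount, os.path.relpath(user_dst, "/mnt/user"))
    os.makedirs(os.path.dirname(dst), exist_ok=True)
    os.link(src, dst)
    return dst

# e.g. hardlink_via_disk("/mnt/user/my_share/myfile.doc",
#                        "/mnt/user/my_share/myfile-link.doc")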


After upgrading to RC1, when I go to install a Docker container using a template that I have not used before, all the VARIABLE sections are missing the actual variable names?

 

Is this by design??

 

I've attached two images... one is from RC1 and shows no variable names, just the default value... and the other shows what it used to look like.

 

When I edit existing docker configs I see the correct layout... so this is ONLY happening when I'm installing something new that has never been installed.

Selection_006.png

Selection_005.png


Unrelated, Disk1 is at 99% full (19GB free).  Personally, I strive to keep disk usage below 90% because anything above that could degrade performance for writes on certain filesystems.

 

I believe this is a new recommendation? Can you expand on this in the context of RFS and XFS? Happy to start a new thread if needed as it sounds like quite an important consideration.

 

I would like more info on this too.  I have a 4TB XFS disk that is 99% full of Blu-ray rips; some posts on these forums suggest this is fine as the data on that drive is static and my system only reads it.


After upgrading to RC1, when I go to install a Docker container using a template that I have not used before, all the VARIABLE sections are missing the actual variable names?

 

Is this by design??

 

I've attached two images... one is from RC1 and shows no variable names, just the default value... and the other shows what it used to look like.

 

When I edit existing docker configs I see the correct layout... so this is ONLY happening when I'm installing something new that has never been installed.

 

Not the behavior I'm seeing.  Which container are you adding?


As a warning for the new 'Prefer' setting for 'Use cache disk': do not use it if your share is bigger than the cache disk.  It will try to copy the whole share to the cache disk, filling it up.  If the mover runs, it will try to move data off the cache and you basically get a loop, moving stuff off and back on.

 

I thought I'd put it out there.

 

No, it won't move back and forth unless you change the setting.  If the cache fills up, it just doesn't move the file(s).
