unRAID Server Release 6.2.0-rc1 Available



I had it as well. I'm not sure if this one has been around for a while or not, but I have a "domains" share labeled "saved VM instances" as well

 

Jamie

 

I deleted that folder off my cache as well, but I thought I'd created that one tbh....  ::)

The creation of the folders has been mentioned in the beta announcement thread. Search for 'Automatic System Shares' in this thread

 

That's most likely the cause of the shares you're seeing.

 

Yes, I understand that, but I already have shares defined in the needed places, and the last two betas did not create these extra shares alongside my existing ones - only RC1 did

 

Myk

 

I had the same experience.

Link to comment

Why did an "isos" share get created all of a sudden?

 

I already have a default ISO storage path/share defined...

 

Myk

 

We are on a mission to reduce the number of clicks it takes to get from installation of unRAID OS on a USB flash device, to getting a Trial key, to getting VM's and docker containers running.  That is, we are trying to make the Trial user experience the best it can be.

 

As you guys know, it is necessary and desirable to have a number of shares set up in order to run VM's and containers.  We didn't want to have to educate a new user about what shares to create, cache modes, loopback images, etc., just to get a container running or a VM installed.

 

Hence we're taking the Windows "My Documents" approach [don't shoot me, this was Jon's idea - but a good one].

 

The idea is that we will auto-create a set of shares to make things easier:

 

appdata - to store, well, the appdata

domains - to store vdisk images

system - to store loopback image files along with other goodies planned for the future

isos - to store ISO image files

 

Each of these shares has various characteristics.  In particular, all but 'isos' are marked 'cache prefer'.  This is a new cache mode that attempts to create files on the cache first, falling back to the array if the cache is not present or is full.  In addition, when the 'mover' runs it will attempt to copy files from the array to the cache.  This handles the case where a Trial user creates a minimal server, say with only one or two storage devices (no cache), and then later decides to add a cache disk/pool.  It provides a way to move files that benefit from being on the cache onto the cache.  (The 'mover' takes into account open and loopback-mounted files and won't move those.)
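To make the 'prefer' behaviour concrete, here is a rough sketch in Python of how placement and the mover's copy-back pass could work for such a share. This is an illustration only, not Limetech's actual mover code; the paths, the single-array-disk simplification, and the free-space floor are assumptions.

import os
import shutil

CACHE = "/mnt/cache"           # cache disk/pool (assumed mount point)
ARRAY = "/mnt/disk1"           # simplification: a single array disk
MIN_FREE = 2 * 1024**3         # assumed free-space floor to keep on the cache

def prefer_target(share, relpath):
    # 'prefer' mode: new files go to the cache first, falling back to the
    # array when the cache is absent or too full.
    if os.path.isdir(CACHE) and shutil.disk_usage(CACHE).free > MIN_FREE:
        return os.path.join(CACHE, share, relpath)
    return os.path.join(ARRAY, share, relpath)

def is_in_use(path):
    # Stand-in: the real mover checks open files and loopback mounts.
    return False

def mover_pass(share):
    # For a 'prefer' share the mover works array -> cache: copy files back
    # onto the cache once it exists and has room, skipping in-use files.
    src_root = os.path.join(ARRAY, share)
    if not (os.path.isdir(src_root) and os.path.isdir(CACHE)):
        return
    for dirpath, _dirs, files in os.walk(src_root):
        for name in files:
            src = os.path.join(dirpath, name)
            if is_in_use(src):
                continue
            if shutil.disk_usage(CACHE).free - os.path.getsize(src) < MIN_FREE:
                return                    # cache full enough: stop for this pass
            dst = os.path.join(CACHE, share, os.path.relpath(src, src_root))
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            shutil.move(src, dst)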

 

As we continue with the evolution of the 'My Documents' concept, we expect some changes to the auto-created shares as we move along.  Right now, in this -rc1 release, we always create those shares, and we realize this might not be ideal... we're working on it...

Link to comment

The creation of the folders has been mentioned in the beta announcement thread. Search for 'Automatic System Shares' in this thread

 

That's most likely the cause of the shares you're seeing.

 

I get that, but this is how mine have been defined since I can't remember when.  After upgrading to this release, though, it created shares I neither require nor want.  I understand the need for some automation and it will certainly help those new to unRAID, but I'm not sure the behaviour in this release is entirely what was expected when people already have their own shares defined.

 

[attached screenshot of the existing share definitions: B7PuSRr.png]

Link to comment

As we continue with the evolution of the 'My Documents' concept, we expect some changes to the auto-created shares as we move along.  Right now, in this -rc1 release, we always create those shares, and we realize this might not be ideal... we're working on it...

 

OK, makes sense.  Thanks for explaining.  What about a toggle or something in settings?

 

Autocreate shares?

Link to comment

Your server must have access to the Internet to use the unRAID 6.2 rc.

 

Can you please define the precise mechanism for key validation?  Has it changed from the way it worked in the betas?

 

No.  The server will attempt to validate upon first boot with a timeout of 30 seconds.  Thereafter, as long as you don't reboot, everything is good to go.  If it can't validate upon first boot, the array won't start, but each time you navigate or refresh the webGui it will attempt validation again (with a very short timeout).  Once validated, it won't phone home again until the next boot.
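Just to illustrate the behaviour described above (a 30-second attempt at boot, then quick re-tries on each webGui page load, and no further phone-home once validated), a minimal sketch follows. The URL and function names are hypothetical; this is not the actual validation code.

import urllib.request

VALIDATION_URL = "https://keys.example.invalid/validate"   # hypothetical endpoint
_validated = False

def try_validate(timeout):
    # One validation attempt with the given socket timeout (seconds).
    global _validated
    if _validated:
        return True            # already validated: nothing more until next boot
    try:
        with urllib.request.urlopen(VALIDATION_URL, timeout=timeout) as resp:
            _validated = (resp.status == 200)
    except OSError:
        _validated = False
    return _validated

def on_first_boot():
    # Long attempt at boot; if it fails, the array stays stopped.
    return try_validate(timeout=30.0)

def on_webgui_page_load():
    # Short re-try on every page view until validation succeeds.
    return try_validate(timeout=3.0)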

 

The Internet connection here is almost as unreliable as our power supply - and when the Internet is down we often revert to watching a movie.  However, if the server won't start up without Internet, then that will no longer be possible.

You have the option of running a 'stable' release for your movie-critical server, or maybe investing in a UPS so the server doesn't need to reboot?  Maybe only reboot when you know you have an internet connection?

 

Will this requirement for Internet still be present in 'Final'?  If so, I will, regrettably, have to remain on v6.1.9 or investigate alternative storage solutions.

As of today I can say "no", but we have to reserve the right to change things if it makes business sense.  But you can be assured of this: we will never screw over our existing customers.

Link to comment

appdata - to store, well, the appdata

domains - to store vdisk images

system - to store loopback image files along with other goodies planned for the future

isos - to store ISO image files

 

Each of these shares has various characteristics.  In particular, all but 'isos' are marked 'cache prefer'.  This is a new cache mode that attempts to create files on the cache first, falling back to the array if the cache is not present or is full.  In addition, when the 'mover' runs it will attempt to copy files from the array to the cache.

How are you handling appdata that uses symlinks? Since the fuse file system doesn't do symlinks, right now we are specifying the /mnt/diskX or /mnt/cache location so symlinks within the appdata structure will work. Moving the appdata folder around will break things unless fuse gets symlink support so we can use /mnt/user/* paths.
Link to comment

As we continue with the evolution of the 'My Documents' concept, we expect some changes to the auto-created shares as we move along.  Right now, in this -rc1 release, we always create those shares, and we realize this might not be ideal... we're working on it...

 

OK, makes sense.  Thanks for explaining.  What about a toggle or something in settings?

 

Autocreate shares?

 

If they are not defined in your configuration, make the default shares and then define them.  If they are defined, don't create the shares?

 

Myk

Link to comment

appdata - to store, well, the appdata

domains - to store vdisk images

system - to store loopback image files along with other goodies planned for the future

isos - to store ISO image files

 

Each of these shares has various characteristics.  In particular, all but 'isos' are marked 'cache prefer'.  This is a new cache mode that attempts to create files on the cache first, falling back to the array if the cache is not present or is full.  In addition, when the 'mover' runs it will attempt to copy files from the array to the cache.

How are you handling appdata that uses symlinks? Since the fuse file system doesn't do symlinks, right now we are specifying the /mnt/diskX or /mnt/cache location so symlinks within the appdata structure will work. Moving the appdata folder around will break things unless fuse gets symlink support so we can use /mnt/user/* paths.

 

That is a good point.

Link to comment

appdata - to store, well, the appdata

domains - to store vdisk images

system - to store loopback image files along with other goodies planned for the future

isos - to store ISO image files

 

Each of these shares has various characteristics.  In particular, all but 'isos' are marked 'cache prefer'.  This is a new cache mode that attempts to create files on the cache first, falling back to the array if the cache is not present or is full.  In addition, when the 'mover' runs it will attempt to copy files from the array to the cache.

How are you handling appdata that uses symlinks? Since the fuse file system doesn't do symlinks, right now we are specifying the /mnt/diskX or /mnt/cache location so symlinks within the appdata structure will work. Moving the appdata folder around will break things unless fuse gets symlink support so we can use /mnt/user/* paths.

 

That is a good point.

 

I've seen Plex, UniFi, and a few others I can't recall have problems with this.

Link to comment

appdata - to store, well, the appdata

domains - to store vdisk images

system - to store loopback image files along with other goodies planned for the future

isos - to store ISO image files

 

Each of these shares has various characteristics.  In particular, all but 'isos' are marked 'cache prefer'.  This is a new cache mode that attempts to create files on the cache first, falling back to the array if the cache is not present or is full.  In addition, when the 'mover' runs it will attempt to copy files from the array to the cache.

How are you handling appdata that uses symlinks? Since the fuse file system doesn't do symlinks, right now we are specifying the /mnt/diskX or /mnt/cache location so symlinks within the appdata structure will work. Moving the appdata folder around will break things unless fuse gets symlink support so we can use /mnt/user/* paths.

 

That is a good point.

 

I've seen Plex, UniFi, and a few others I can't recall have problems with this.

 

MusicBrainz, and anything that contains Postgres

Link to comment

Thanks for the response.

 

Your server must have access to the Internet to use the unRAID 6.2 rc.

 

Can you please define the precise mechanism for key validation?  Has it changed from the way it worked in the betas?

 

No.  The server will attempt to validate upon first boot with a timeout of 30 seconds.  Thereafter, as long as you don't reboot, everything is good to go.  If it can't validate upon first boot, the array won't start, but each time you navigate or refresh the webGui it will attempt validation again (with a very short timeout).  Once validated, it won't phone home again until the next boot.

 

Hmmm ... those timeouts may be a problem.  Often it takes 30 seconds or more for a socket to connect.  What I believe is happening is that the ISP's CGN (carrier-grade NAT) exhausts the available port numbers for the NAT translation, and the connection has to wait for a port to become available.  How many people find that they have to wait 30-40 seconds for a Google search to return results?  (When the ISP first introduced CGN, I found that paying extra for a static IP solved the problem but, of late, I find that even my static IP is going through some address translation - at least, that's what I believe from the fact that my traceroutes show some 10.x.x.x addresses before they get outside of the ISP's domain.)

The Internet connection here is almost as unreliable as our power supply - and when the Internet is down we often revert to watching a movie.  However, if the server won't start up without Internet, then that will no longer be possible.

You have the option of running a 'stable' release for your movie-critical server, or maybe investing in a UPS so the server doesn't need to reboot?

I have a UPS, but it will only support about 30 minutes of running.  However, I have it set to power off after a 5-minute outage - I have to keep some reserve for the occasions when the power goes off and on several times in the space of a couple of hours, not allowing the UPS batteries to recharge.

 

It's now almost a year since I had a parity check run to completion.  What this does imply, of course, is that unRAID/apcupsd has been so reliable that, despite frequent power outages, I have not had to perform a recovery in a very long time.

 

Maybe only reboot when you know you have an internet connection?

I don't voluntarily reboot very often!

 

Will this requirement for Internet still be present in 'Final'?  If so, I will, regrettably, have to remain on v6.1.9 or investigate alternative storage solutions.

As of today I can say "no", but we have to reserve the right to change things if it makes business sense.  But you can be assured of this: we will never screw over our existing customers.

That's good to know - thanks for the reassurance.  I have had a spare drive waiting to install as a second parity drive since before the first v6.2 'public' beta was announced!  I hope that I can put it to use soon!

Link to comment

I just decided to check my system logs and found thousands of error lines, all like this:

Jul  9 01:57:51 Archangel kernel: pcieport 0000:00:02.0: AER: Multiple Corrected error received: id=0010
Jul  9 01:57:51 Archangel kernel: pcieport 0000:00:02.0: PCIe Bus Error: severity=Corrected, type=Data Link Layer, id=0010(Receiver ID)
Jul  9 01:57:51 Archangel kernel: pcieport 0000:00:02.0:   device [8086:2f04] error status/mask=00000040/00002000

 

Attached are my system diagnostics; I get several of these errors every second or so.

 

Jamie

archangel-diagnostics-20160709-0201.zip

Link to comment

I just decided to check my system logs and found thousands of error lines, all like this:

Jul  9 01:57:51 Archangel kernel: pcieport 0000:00:02.0: AER: Multiple Corrected error received: id=0010
Jul  9 01:57:51 Archangel kernel: pcieport 0000:00:02.0: PCIe Bus Error: severity=Corrected, type=Data Link Layer, id=0010(Receiver ID)
Jul  9 01:57:51 Archangel kernel: pcieport 0000:00:02.0:   device [8086:2f04] error status/mask=00000040/00002000

 

Attached are my system diagnostics; I get several of these errors every second or so.

Others may have more expert opinions on this; here are my thoughts:

 

* Probably harmless, in that the AER function (Advanced Error Reporting) is detecting and correcting an issue.

 

* But indirectly not so harmless, because at the current rate you're going to exhaust the logging space; the syslog is growing at a rate of just under 1 MB per hour.

 

* "Bad TLP" means a bad Transaction Layer packet was detected, by CRC error or wrong sequence number.  "Bad DLLP" means a bad Data Link Layer packet was detected, same reasons.  So these sound similar to CRC errors on SATA cables, except it's on a PCI bus or connection.

 

* Suggestions:

- you've started a parity check; that doesn't seem like a good idea at the moment - until this is figured out, I'd cancel it

- try isolating it down to a particular PCI device - with no other drive activity happening, see if these errors occur when you read or write to a single drive, and cycle through all of your drives (see the small log-counting sketch after this list)

- then try I/O through all other PCI-attached devices, graphics cards, USB devices; experiment at will and see what affects the production of these messages, positively or negatively.  I can't say you'll actually pin the cause down, but you may eliminate a number of possibilities.
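To support the isolation step above, here is a small sketch that tallies the corrected PCIe AER messages per device ID from the syslog, so you can see which device the errors follow while you exercise one drive or controller at a time. The syslog path is the usual one; adjust if yours differs.

import re
from collections import Counter

SYSLOG = "/var/log/syslog"     # adjust if your syslog lives elsewhere
pattern = re.compile(r"pcieport (\S+): PCIe Bus Error: severity=Corrected")

counts = Counter()
with open(SYSLOG, errors="replace") as log:
    for line in log:
        match = pattern.search(line)
        if match:
            counts[match.group(1)] += 1

for device, n in counts.most_common():
    print(device, n, "corrected errors")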

Link to comment

appdata - to store, well, the appdata

domains - to store vdisk images

system - to store loopback image files along with other goodies planned for the future

isos - to store ISO image files

 

Each of these shares has various characteristics.  In particular, all but 'isos' are marked 'cache prefer'.  This is a new cache mode that attempts to create files on the cache first, falling back to the array if the cache is not present or is full.  In addition, when the 'mover' runs it will attempt to copy files from the array to the cache.

How are you handling appdata that uses symlinks? Since the fuse file system doesn't do symlinks, right now we are specifying the /mnt/diskX or /mnt/cache location so symlinks within the appdata structure will work. Moving the appdata folder around will break things unless fuse gets symlink support so we can use /mnt/user/* paths.

 

fuse/shfs does support symlinks but not hard links.  Maybe these apps can be configured to not use hard links?  Another approach would be to define a loopback file for appdata mounted at /mnt/user/appdata formatted with btrfs, or even a loopback file for each container mounted at /mnt/user/appdata/container-name.  This would be to containers what vdisks are to VM's.  One nice thing about this is that you could snapshot those loopback files and/or use reflink copies.
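Purely as a sketch of that idea (not an announced feature), creating one such per-container loopback might look like this. The 20 GB size and the image path are placeholders, and everything here needs root; the commands themselves (truncate, mkfs.btrfs, loop mount, reflink copy) are standard.

import subprocess
from pathlib import Path

def create_appdata_loopback(container, size_gb=20):
    # Create a sparse btrfs image and loop-mount it under /mnt/user/appdata/<name>.
    image = Path("/mnt/user/system/appdata-images") / (container + ".img")
    mountpoint = Path("/mnt/user/appdata") / container
    image.parent.mkdir(parents=True, exist_ok=True)
    mountpoint.mkdir(parents=True, exist_ok=True)
    # Sparse file: it only consumes space as the container writes data.
    subprocess.run(["truncate", "-s", str(size_gb) + "G", str(image)], check=True)
    subprocess.run(["mkfs.btrfs", "-q", str(image)], check=True)
    subprocess.run(["mount", "-o", "loop", str(image), str(mountpoint)], check=True)

def snapshot_loopback(image, snapshot):
    # A reflink copy of the quiesced image file is a cheap point-in-time
    # snapshot, assuming the filesystem holding the image supports reflinks (btrfs).
    subprocess.run(["cp", "--reflink=always", image, snapshot], check=True)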

Link to comment

appdata - to store, well, the appdata

domains - to store vdisk images

system - to store loopback image files along with other goodies planned for the future

isos - to store ISO image files

 

Each of these shares has various characteristics.  In particular, all but 'isos' are marked 'cache prefer'.  This is a new cache mode that attempts to create files on the cache first, falling back to the array if the cache is not present or is full.  In addition, when the 'mover' runs it will attempt to copy files from the array to the cache.

How are you handling appdata that uses symlinks? Since the fuse file system doesn't do symlinks, right now we are specifying the /mnt/diskX or /mnt/cache location so symlinks within the appdata structure will work. Moving the appdata folder around will break things unless fuse gets symlink support so we can use /mnt/user/* paths.

 

fuse/shfs does support symlinks but not hard links.  Maybe these apps can be configured to not use hard links?  Another approach would be to define a loopback file for appdata mounted at /mnt/user/appdata formatted with btrfs, or even a loopback file for each container mounted at /mnt/user/appdata/container-name.  This would be to containers what vdisks are to VM's.  One nice thing about this is that you could snapshot those loopback files and/or use reflink copies.

Those apps have no such options. 

 

This sounds like too much new functionality that you need to implement in an RC just because you don't want to respect what the existing users already have configured, for some sense of being noob friendly.

 

I think everyone will hate having to create a 100-120 GB appdata loopback file for Plex or Emby metadata.

 

Also, please don't force BTRFS on us for our app data. There are existing reasons many have moved to using XFS on the cache drive instead. Let the existing users decide when they trust BTRFS enough to use it; don't make that decision for them.

 

Thanks for understanding user choice.

Link to comment

appdata - to store, well, the appdata

domains - to store vdisk images

system - to store loopback image files along with other goodies planned for the future

isos - to store ISO image files

 

Each of these shares has various characteristics.  In particular, all but 'isos' are marked 'cache prefer'.  This is a new cache mode that attempts to create files on the cache first, falling back to the array if the cache is not present or is full.  In addition, when the 'mover' runs it will attempt to copy files from the array to the cache.

How are you handling appdata that uses symlinks? Since the fuse file system doesn't do symlinks, right now we are specifying the /mnt/diskX or /mnt/cache location so symlinks within the appdata structure will work. Moving the appdata folder around will break things unless fuse gets symlink support so we can use /mnt/user/* paths.

 

fuse/shfs does support symlinks but not hard links.  Maybe these apps can be configured to not use hard links?  Another approach would be to define a loopback file for appdata mounted at /mnt/user/appdata formatted with btrfs, or even a loopback file for each container mounted at /mnt/user/appdata/container-name.  This would be to containers what vdisks are to VM's.  One nice thing about this is that you could snapshot those loopback files and/or use reflink copies.

Those apps have no such options. 

 

This sounds like too much new functionality that you need to implement in an RC just because you don't want to respect what the existing users already have configured, for some sense of being noob friendly.

What we implemented doesn't affect existing users at all except that extraneous shares (in their case) may get created.  As mentioned earlier, we will find a way to refine that.

 

I think everyone will hate having to create a 100-120 GB appdata loopback file for Plex or Emby metadata.

Meh, it doesn't need to be anywhere near that big, and it's not really an issue if it's done automatically.  Besides, it was just a thought.  But a feature that didn't make it into -rc1 was to automatically expand the docker/libvirt image files when they reach a certain fullness.  For btrfs this can be done online with the file system mounted.  This feature could be leveraged on per-container loopbacks.
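For what it's worth, the online-grow step described there could look roughly like the sketch below: check how full the mounted image is, enlarge the sparse backing file, tell the loop device about the new size, then grow the btrfs filesystem while it stays mounted. The threshold and growth amount are made-up numbers, not how the shipping feature will behave.

import shutil
import subprocess

def maybe_grow_image(image_path, mountpoint, grow_at=0.85, grow_by_gb=10):
    # Grow the loopback-backed filesystem online once it is `grow_at` full.
    usage = shutil.disk_usage(mountpoint)
    if usage.used / usage.total < grow_at:
        return False
    # Enlarge the sparse backing file...
    subprocess.run(["truncate", "-s", "+" + str(grow_by_gb) + "G", image_path], check=True)
    # ...make the loop device pick up the new capacity...
    out = subprocess.run(["losetup", "-j", image_path],
                         capture_output=True, text=True, check=True).stdout
    loopdev = out.split(":", 1)[0]
    subprocess.run(["losetup", "-c", loopdev], check=True)
    # ...and grow the mounted btrfs filesystem to fill it.
    subprocess.run(["btrfs", "filesystem", "resize", "max", mountpoint], check=True)
    return True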

 

Also, please don't force BTRFS on us for our app data. There are existing reasons many have moved to using XFS on the cache drive instead. Let the existing users decide when they trust BTRFS enough to use it; don't make that decision for them.

Nothing is forced.  Well, the docker loopback has to be btrfs because it needs the subvolume capability to implement the layers.  I guess we don't offer anything other than btrfs for the libvirt loopback, but really, the only thing stored in there of any consequence is the per-VM OVMF firmware variables.

 

Thanks for understanding user choice.

The idea is to provide sensible defaults and let users choose to override if they decide to put the time and effort into understanding any benefits.  But as much as possible, most features should work "out of the box".

Link to comment

As we continue with the evolution of the 'My Documents' concept, we expect some changes to the auto-created shares as we move along.  Right now, in this -rc1 release, we always create those shares, and we realize this might not be ideal... we're working on it...

 

OK, makes sense.  Thanks for explaining.  What about a toggle or something in settings?

 

Autocreate shares?

 

If they are not defined in your configuration, make the default shares and then define them.  If they are defined, don't create the shares?

 

Myk

 

Yes, surely it would be easy to check whether the paths are already defined, and only create the shares if they're not.
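Roughly the logic being asked for, as a sketch; it assumes share definitions live as .cfg files under /boot/config/shares (adjust if that's not where yours are), and only creates the defaults that are missing.

from pathlib import Path

SHARE_CFG_DIR = Path("/boot/config/shares")      # assumed location of share configs
DEFAULT_SHARES = ["appdata", "domains", "system", "isos"]

def missing_default_shares():
    # Return only the default shares the user has not already defined.
    existing = {p.stem.lower() for p in SHARE_CFG_DIR.glob("*.cfg")}
    return [name for name in DEFAULT_SHARES if name.lower() not in existing]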

Link to comment

I just decided to check my system logs and found thousands of error lines, all like this:

Jul  9 01:57:51 Archangel kernel: pcieport 0000:00:02.0: AER: Multiple Corrected error received: id=0010
Jul  9 01:57:51 Archangel kernel: pcieport 0000:00:02.0: PCIe Bus Error: severity=Corrected, type=Data Link Layer, id=0010(Receiver ID)
Jul  9 01:57:51 Archangel kernel: pcieport 0000:00:02.0:   device [8086:2f04] error status/mask=00000040/00002000

 

Attached are my system diagnostics; I get several of these errors every second or so.

Others may have more expert opinions on this; here are my thoughts:

 

* Probably harmless, in that the AER function (Advanced Error Reporting) is detecting and correcting an issue.

 

* But indirectly not so harmless, because at the current rate you're going to exhaust the logging space; the syslog is growing at a rate of just under 1 MB per hour.

 

* "Bad TLP" means a bad Transaction Layer packet was detected, by CRC error or wrong sequence number.  "Bad DLLP" means a bad Data Link Layer packet was detected, same reasons.  So these sound similar to CRC errors on SATA cables, except it's on a PCI bus or connection.

 

* Suggestions:

- you've started a parity check; that doesn't seem like a good idea at the moment - until this is figured out, I'd cancel it

- try isolating it down to a particular PCI device - with no other drive activity happening, see if these errors occur when you read or write to a single drive, and cycle through all of your drives

- then try I/O through all other PCI-attached devices, graphics cards, USB devices; experiment at will and see what affects the production of these messages, positively or negatively.  I can't say you'll actually pin the cause down, but you may eliminate a number of possibilities.

 

I didn't see this until now, so the parity check did finish. After it finished, the errors seem to have calmed down and appear at a much slower rate.

 

This system has been on the beta for a while, and the last time I saw these types of errors was when I had a bad NVIDIA driver in a VM. Neither VM has had new drivers, so that shouldn't be causing it.

 

The USB controller has been connected for a while but has nothing plugged into it, which leads me to believe it may be the new Adaptec HBA that we just got drivers for.

Link to comment

Up and running.  Installed a second parity drive too.  Does dual parity give me anything other than more resiliency for my data?  Wasn't there a fast write option at one stage?

 

The fast write option is not related to dual parity; go to the Settings -> Disk Settings page and set Tunable (md_write_method) to "reconstruct write" (AKA Turbo Write).

Link to comment
This topic is now closed to further replies.