Posts posted by ptr727
-
Thx, I'll give it a try tonight when I get home.
-
I see, does this imply that the /dev/foo identifiers need to match, or can btrfs figure out how to match the UUID with the devid?
E.g. what if I swap drive bay positions, or a different controller changes the device identifier?
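For what it's worth, here is a quick way to see how btrfs tracks members (a sketch, assuming btrfs-progs is installed; output will differ per system). The pool is identified by a filesystem UUID stamped into every member's superblock, and each member carries its own devid, so the /dev names are incidental:

```shell
# Print each btrfs filesystem UUID with its member devices and devids;
# the /dev paths shown are just where each superblock was found this boot:
btrfs filesystem show
# blkid confirms that all members of one pool report the same filesystem UUID:
blkid -t TYPE=btrfs
```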
-
Thx, out of curiosity, how does the OS know about the other partitions?
-
How can I mount the BTRFS cache volume from Ubuntu?
My cache consists of 4 x 1TB SSD drives.
Looking at the btrfs docs, it looks like I need to know the disk layout when I mount the disks.
What would the btrfs mount options be for a 4 drive cache volume created by Unraid?
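From what I have read (a sketch, not verified against an Unraid-created pool; /dev/sdb1 and /mnt/cache are placeholders), no special per-drive options should be needed, because mounting any one member assembles the whole multi-device pool:

```shell
# Make the kernel aware of all btrfs member devices (udev normally does this):
btrfs device scan
# Mounting any single member pulls in the other three via the shared UUID:
mkdir -p /mnt/cache
mount -t btrfs /dev/sdb1 /mnt/cache
# Equivalent, independent of device naming (substitute the real pool UUID):
mount -t btrfs UUID=<pool-uuid> /mnt/cache
```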
-
Switching my systems to the LSI HBA corrected the behavior; see my blog post for details: https://blog.insanegenius.com/2020/01/10/unraid-repeat-parity-errors-on-reboot/
-
4 minutes ago, trurl said:
That "config" as you are calling it is stored on flash as a template. The template is used to fill in the form on the Add Container page. Apps on the Community Apps page know about templates that have already been created by the docker authors, and that is how it is able to help you install a new docker.
But anytime you use the Add Container page, whether for one of the Unraid supported dockers from Community Apps, or for something on dockerhub, the settings you make on the Add Container page is stored as a template on flash and it can be reused to get those same settings for the Add Container page.
So even those docker hub containers can be setup with the same settings as before.
I hear you, but that is not what I see: at least one of my manually created containers, and one from Docker Hub via the Apps search, are not listed on the Previous Apps page (these containers do not have Unraid templates).
Anyway, restoring to the last known state is not the same as restoring to a versioned config; e.g. if I restore container data to date X, I may want to restore the container config to date X or to date Y.
But, I'll leave it at that.
-
13 minutes ago, trurl said:
Your knowledge is incomplete. It does have a copy of the old config because it is on flash and it goes to flash and gets it and uses it to reinstall your docker just as it was.
Thanks, I wish I'd known that (I bet many people don't, and like me they may look for it in the backup / restore section).
But there is no history of any of my Docker Hub-only containers, and there is no historic versioning (or am I going to find it when I try to use it?), so I still think it would be a good idea to implement docker (and maybe VM) config backup and restore along with the appdata used by the containers.
-
8 minutes ago, trurl said:
These are already saved on flash, and you can reuse them without going to all that trouble of setting each one up again. The simplest way is to just use the Previous Apps feature on the Apps page.
The config may have been on the flash before, but after losing the cache and restoring it, there is no docker config, and bringing back the apps leaves them with default configs, not the old ones.
Previous Apps makes it easier to see what I previously installed; to my knowledge it does not have a copy of the old config.
Yes, I could restore the flash with appdata, or I could manually copy config files (I don't even know where to start), or ... the backup app could do it for me.
-
Hi, I lost my cache volume (something went wrong during a disk replace), restored appdata from backup, but all my docker configs were gone.
With lots of effort I recreated each container's config, custom network bridges, environment variables, volume mappings, etc.
For docker, the container configs are as important as the appdata; can an option be added to back up and restore container configs along with appdata? (The same really applies to VM configs.)
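In case it helps others: my understanding (an assumption, worth verifying on your own flash drive) is that dockerMan keeps the user container templates on flash, so a crude interim workaround is to copy that directory alongside the appdata backup:

```shell
# Assumed template location; verify the path exists on your system first:
ls /boot/config/plugins/dockerMan/templates-user/
# Keep a copy next to the appdata backup (destination path is just an example):
cp -r /boot/config/plugins/dockerMan/templates-user/ /mnt/user/backup/docker-templates/
```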
-
Hi, after running an extended test and clicking view results, the UI loads a few thousand lines and then becomes unresponsive, and the main Unraid UI is also unresponsive.
I assume the results file is too big for the method being used to display the contents; maybe a download instead of an inline display would be a better option.
Is there a log file on the filesystem I can view instead?
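As a workaround, the self-test results can be read with smartctl from the CLI instead of the UI (replace /dev/sdb with the disk in question):

```shell
# Summary of recent SMART self-tests (status, lifetime hours, first error LBA):
smartctl -l selftest /dev/sdb
# Full report written to a file rather than rendered in the browser:
smartctl -x /dev/sdb > /tmp/sdb-smart.txt
```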
-
Thanks, where is the template code hosted? I asked saspus, the author of the container, and he knew nothing of the Unraid template.
-
Ok, it seems to be user error, both mine for not noticing it and the template creator's.
In the container config the cache and logs folders are mapped to "appdata/duplicacy/...", while the config folder is mapped to "appdata/Duplicacy".
Will fix template mappings.
-
Using Unraid 6.7.2.
I installed the Duplicacy container using the Unraid template.
Appdata is mapped to "appdata/Duplicacy", after starting the container I noticed another folder named "appdata/duplicacy", using a different owner.
root@Server-1:/mnt/user/appdata# ls -la
total 16
drwxrwxrwx 1 nobody users  36 Jan  6 07:47 .
drwxrwxrwx 1 nobody users  42 Jan  6 07:35 ..
drwxrwxrwx 1 nobody users 116 Jan  6 07:54 Duplicacy
drwxrwxrwx 1 root   root   18 Jan  6 07:47 duplicacy
root@Server-1:/mnt/user/appdata/duplicacy# ls -la
total 0
drwxrwxrwx 1 root   root   18 Jan  6 07:47 .
drwxrwxrwx 1 nobody users  36 Jan  6 07:47 ..
drwxrwxrwx 1 nobody users  18 Jan  6 07:47 cache
drwxrwxrwx 1 nobody users  88 Jan  6 07:59 logs
root@Server-1:/mnt/user/appdata/Duplicacy# ls -la
total 16
drwxrwxrwx 1 nobody users  116 Jan  6 07:54 .
drwxrwxrwx 1 nobody users   36 Jan  6 07:47 ..
drwx------ 1 nobody users   50 Jan  6 07:47 bin
-rw------- 1 nobody users 1117 Jan  6 07:54 duplicacy.json
-rw------- 1 nobody users  950 Jan  6 07:47 licenses.json
-rw-r--r-- 1 root   root    33 Jan  6 07:47 machine-id
-rw-r--r-- 1 nobody users  144 Jan  6 07:47 settings.json
drwx------ 1 nobody users   34 Jan  6 07:47 stats
It appears that the container created new content, and that Docker or Unraid mapped it using different paths, bifurcating the storage location.
When my backup completes I will modify the container config to use all lowercase, and I will merge the files.
It is very strange that a container can create content outside of a mapped volume by using a differently-cased version of the same mapped volume path.
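For anyone puzzled by the same thing, the mechanics seem to be: Linux paths are case-sensitive, so "appdata/Duplicacy" and "appdata/duplicacy" are two unrelated bind-mount sources, and Docker silently creates a missing source directory (as root). A minimal demonstration:

```shell
# Two differently-cased names are two distinct directories on Linux:
base=$(mktemp -d)
mkdir -p "$base/appdata/Duplicacy" "$base/appdata/duplicacy"
ls "$base/appdata"   # lists both: Duplicacy and duplicacy
```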
Is this an issue with Unraid, an issue with Docker, or user error?
-
The 9340 flashed to IT mode acts like a more expensive 9300, so unless that is not supported, it should work fine; i.e. the objective is no parity errors on reboot?
The problem with SSD drives appears to be EVO specific: none of my EVO drives are detected by the LSI controller, only the Pro drives are.
I am busy swapping EVO's for Pro's in the 4 x 1TB cache, one drive at a time.
How long should it take to rebuild the BTRFS volume? It has been running for 12+ hours, and I can't see any progress indicator.
-
I got two LSI SAS9340-8i (I need mini SAS HD connectors) ServeRAID M1215 cards in IT mode (Art of Server on eBay), but I can't get the card to recognize my Samsung SSD drives.
So I'm in a tough spot: the Adaptec HBA produces 5 parity errors per boot, and the recommended catch-all LSI in IT mode does not detect my SSD drives.
-
The systems do use similar disks (12TB Seagate, 4TB Hitachi, and 1TB Samsung), similar processors, similar memory, and similar motherboards.
It could be that the Adaptec driver and the SAS2LP driver have a similar problem, or it could be Unraid; causation vs. correlation. E.g. how long did it take to fix the SQLite bug caused by Unraid, and experienced only by some?
How can I find out which files are affected by the parity repair, so that I can determine the impact of corruption and the possibility of a restore from backup?
How can I see which driver Unraid is using for the Adaptec controller, so that I can tell whether it is a common driver or an Adaptec-specific driver?
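To partially answer my own driver question, the kernel exposes which driver is bound to each controller (a sketch; the exact controller entries will differ per system):

```shell
# List PCI devices together with the "Kernel driver in use" line for each:
lspci -k
# Short driver name per SCSI host adapter:
cat /sys/class/scsi_host/host*/proc_name
```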
-
I enabled syslog, did a controlled reboot, started a check, and again got 5 errors:
Jan 3 10:03:07 Server-2 kernel: md: recovery thread: P corrected, sector=1962934168
Jan 3 10:03:07 Server-2 kernel: md: recovery thread: P corrected, sector=1962934176
Jan 3 10:03:07 Server-2 kernel: md: recovery thread: P corrected, sector=1962934184
Jan 3 10:03:07 Server-2 kernel: md: recovery thread: P corrected, sector=1962934192
Jan 3 10:03:07 Server-2 kernel: md: recovery thread: P corrected, sector=1962934200
Nothing extraordinary in syslog, attached, diagnostics also attached.
I looked at the other threads that report similar 5 errors after reboots, blaming the SAS2LP / Supermicro / Marvell driver / hardware as the cause.
I find it suspicious that the problem was attributed to a specific driver / hardware combination, given that it started happening in Unraid v6 and that it also happens with my Adaptec hardware. I can't help but think it is a more generic issue in Unraid, e.g. handling of SAS backplanes, spindown, caching, taking the array offline, parity calculation, etc.
Especially since it appears the parity errors are at the same reported locations.
-
12 minutes ago, itimpi said:
As long as you are running Unraid 6.7.2 or later you can configure Settings->Syslog to keep a copy that survives a reboot (and is appended to after the reboot).
If I enable the local syslog server, does Unraid automatically use it, or is there another config?
How reliable is syslog vs. an option to just write to local disk, for troubleshooting crashes or shutdowns?
-
Server-1 uses an 81605ZQ and Server-2 uses a 7805Q controller.
The parity check just completed, I'll do one more while the server is up, then reboot, followed by another check.
How do I get the logs to persist across reboots? I really need to see what happens at shutdown.
-
Before the power outage the servers were up for around 240-something days, with no parity errors.
Note: I said 6.7.0 earlier, it is actually 6.7.2.
Supermicro 4U chassis with SM X10SLM+-F motherboards, Xeon E3 processors, and Adaptec Series 8 RAID controllers in HBA passthrough mode. One server has 12TB parity + 3 data disks (a mixture of 4TB and 12TB) with a 4 x 1TB SSD cache; the other has 2 x 12TB parity + 16 data disks (a mixture of 4TB and 12TB) with a 4 x 1TB SSD cache.
-
I have two servers running 6.7.2 (corrected), both connected to a UPS. During an extended power outage two weeks ago the UPS orchestrated a graceful shutdown, and the first scheduled parity check after restart reported 5 errors on each server, with exactly the same sector details.
Jan 1 06:09:23 Server-1 kernel: md: recovery thread: PQ corrected, sector=1962934168
Jan 1 06:09:23 Server-1 kernel: md: recovery thread: PQ corrected, sector=1962934176
Jan 1 06:09:23 Server-1 kernel: md: recovery thread: PQ corrected, sector=1962934184
Jan 1 06:09:23 Server-1 kernel: md: recovery thread: PQ corrected, sector=1962934192
Jan 1 06:09:23 Server-1 kernel: md: recovery thread: PQ corrected, sector=1962934200
Jan 1 04:42:39 Server-2 kernel: md: recovery thread: P corrected, sector=1962934168
Jan 1 04:42:39 Server-2 kernel: md: recovery thread: P corrected, sector=1962934176
Jan 1 04:42:39 Server-2 kernel: md: recovery thread: P corrected, sector=1962934184
Jan 1 04:42:39 Server-2 kernel: md: recovery thread: P corrected, sector=1962934192
Jan 1 04:42:39 Server-2 kernel: md: recovery thread: P corrected, sector=1962934200
Both servers use the same model 12TB parity disks, one has 1 parity drive, the other 2 parity drives.
It seems highly unlikely that this is actual corruption; more likely it is some kind of logical issue?
Any ideas?
-
There are docker options that are not exposed in the GUI, e.g. tmpfs, user, dependencies, etc.
Having the ability to switch a container setup between the vanilla GUI and compose YAML text would be ideal, as it allows native configuration without needing to use the CLI or the cumbersome extra command options field in the GUI.
The management code can always apply filters or sanitization, such that e.g. options like restart are exposed in the GUI, or invalid configs are detected. Alternatively, the config may simply be GUI or YAML, where if YAML it is entirely under the control of the user.
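As a concrete illustration (the container name, image, and values are hypothetical), the options above are plain docker run flags with no dedicated GUI field, and would map one-to-one onto compose YAML:

```shell
# tmpfs mount, run-as user, and restart policy in a single docker run command:
docker run -d --name=myapp \
  --tmpfs /tmp:rw,size=64m \
  --user 99:100 \
  --restart unless-stopped \
  myimage:latest
```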
-
Ok thx, so CLI use only then.
-
Ok, it is ugly, but I'll give it a try.
I really wish we could just use compose files.
[Plugin] CA Appdata Backup / Restore v2
in Plugin Support
Posted · Edited by ptr727
Hi, not sure if it is related to this plugin, but this AM I noticed that none of my dockers are running.
Actually only one is running, postfix, but that does not use any appdata storage.
I looked at the log, and it looks like the backup ran and last reported verifying the backup, but I'm not sure why the containers were not restarted afterwards.
The CA backup status tab says verifying, but it has been more than 2 hours of verifying.
Is it really verifying? If so, should the containers not be restarted after the backup rather than after the verify, else they will be offline much longer than needed?
Any ideas how to find out whether verify is really running, or whether something went wrong?