Posts posted by ptr727

  1. Hi, I'm not sure if it is related to this plugin, but this morning I noticed that none of my Docker containers are running.

    Actually only one is running, postfix, but that does not use any appdata storage.

    I looked at the log; it shows that the backup ran and last reported verifying the backup, but I'm not sure why the containers were not restarted afterwards.

    Feb  2 02:00:01 Server-1 Plugin Auto Update: Checking for available plugin updates
    Feb  2 02:00:03 Server-1 Plugin Auto Update: community.applications.plg version 2020.02.01 does not meet age requirements to update
    Feb  2 02:00:04 Server-1 Plugin Auto Update: Community Applications Plugin Auto Update finished
    Feb  2 03:00:01 Server-1 CA Backup/Restore: #######################################
    Feb  2 03:00:01 Server-1 CA Backup/Restore: Community Applications appData Backup
    Feb  2 03:00:01 Server-1 CA Backup/Restore: Applications will be unavailable during
    Feb  2 03:00:02 Server-1 CA Backup/Restore: this process.  They will automatically
    Feb  2 03:00:02 Server-1 CA Backup/Restore: be restarted upon completion.
    Feb  2 03:00:02 Server-1 CA Backup/Restore: #######################################
    Feb  2 03:00:02 Server-1 CA Backup/Restore: Stopping Duplicacy
    Feb  2 03:00:02 Server-1 CA Backup/Restore: docker stop -t 60 Duplicacy
    Feb  2 03:00:02 Server-1 CA Backup/Restore: Stopping nginx
    Feb  2 03:00:06 Server-1 kernel: veth8a1bd58: renamed from eth0
    Feb  2 03:00:06 Server-1 CA Backup/Restore: docker stop -t 60 nginx
    Feb  2 03:00:06 Server-1 CA Backup/Restore: Stopping plex
    Feb  2 03:00:10 Server-1 kernel: vethf123b74: renamed from eth0
    Feb  2 03:00:10 Server-1 CA Backup/Restore: docker stop -t 60 plex
    Feb  2 03:00:10 Server-1 CA Backup/Restore: postfix set to not be stopped by ca backup's advanced settings.  Skipping
    Feb  2 03:00:10 Server-1 CA Backup/Restore: Stopping radarr
    Feb  2 03:00:14 Server-1 kernel: veth3440e0e: renamed from eth0
    Feb  2 03:00:14 Server-1 CA Backup/Restore: docker stop -t 60 radarr
    Feb  2 03:00:14 Server-1 CA Backup/Restore: Stopping sabnzbd
    Feb  2 03:00:18 Server-1 kernel: vethd3efd80: renamed from eth0
    Feb  2 03:00:18 Server-1 CA Backup/Restore: docker stop -t 60 sabnzbd
    Feb  2 03:00:18 Server-1 CA Backup/Restore: Stopping sonarr
    Feb  2 03:00:22 Server-1 kernel: veth323bd38: renamed from eth0
    Feb  2 03:00:22 Server-1 CA Backup/Restore: docker stop -t 60 sonarr
    Feb  2 03:00:22 Server-1 CA Backup/Restore: Stopping vouch-proxy
    Feb  2 03:00:23 Server-1 kernel: veth21f55b3: renamed from eth0
    Feb  2 03:00:23 Server-1 CA Backup/Restore: docker stop -t 60 vouch-proxy
    Feb  2 03:00:23 Server-1 CA Backup/Restore: Backing up USB Flash drive config folder to 
    Feb  2 03:00:23 Server-1 CA Backup/Restore: Using command: /usr/bin/rsync  -avXHq --delete  --log-file="/var/lib/docker/unraid/ca.backup2.datastore/appdata_backup.log" /boot/ "/mnt/user/backup/Unraid/USB" > /dev/null 2>&1
    Feb  2 03:00:23 Server-1 CA Backup/Restore: Changing permissions on backup
    Feb  2 03:00:23 Server-1 CA Backup/Restore: Backing up libvirt.img to /mnt/user/backup/Unraid/libvirt/
    Feb  2 03:00:23 Server-1 CA Backup/Restore: Using Command: /usr/bin/rsync  -avXHq --delete  --log-file="/var/lib/docker/unraid/ca.backup2.datastore/appdata_backup.log" "/mnt/user/system/libvirt/libvirt.img" "/mnt/user/backup/Unraid/libvirt/" > /dev/null 2>&1
    Feb  2 03:00:27 Server-1 CA Backup/Restore: Backing Up appData from /mnt/user/appdata/ to /mnt/user/backup/Unraid/appdata/[email protected]
    Feb  2 03:00:27 Server-1 CA Backup/Restore: Using command: cd '/mnt/user/appdata/' && /usr/bin/tar -cvaf '/mnt/user/backup/Unraid/appdata/[email protected]/CA_backup.tar.gz' --exclude 'docker.img'  * >> /var/lib/docker/unraid/ca.backup2.datastore/appdata_backup.log 2>&1 & echo $! > /tmp/ca.backup2/tempFiles/backupInProgress
    Feb  2 04:00:01 Server-1 Docker Auto Update: Community Applications Docker Autoupdate running
    Feb  2 04:00:01 Server-1 Docker Auto Update: Checking for available updates
    Feb  2 04:00:10 Server-1 Docker Auto Update: Installing Updates for code-server nginx sabnzbd
    Feb  2 04:00:40 Server-1 Docker Auto Update: Community Applications Docker Autoupdate finished
    Feb  2 04:40:01 Server-1 apcupsd[10670]: apcupsd exiting, signal 15
    Feb  2 04:40:01 Server-1 apcupsd[10670]: apcupsd shutdown succeeded
    Feb  2 04:40:04 Server-1 apcupsd[13274]: apcupsd 3.14.14 (31 May 2016) slackware startup succeeded
    Feb  2 04:40:04 Server-1 apcupsd[13274]: NIS server startup succeeded
    Feb  2 06:18:13 Server-1 CA Backup/Restore: Backup Complete
    Feb  2 06:18:13 Server-1 CA Backup/Restore: Verifying backup
    Feb  2 06:18:13 Server-1 CA Backup/Restore: Using command: cd '/mnt/user/appdata/' && /usr/bin/tar --diff -C '/mnt/user/appdata/' -af '/mnt/user/backup/Unraid/appdata/[email protected]/CA_backup.tar.gz' > /var/lib/docker/unraid/ca.backup2.datastore/appdata_backup.log & echo $! > /tmp/ca.backup2/tempFiles/verifyInProgress
    

    The CA Backup status tab says it is verifying, but it has been verifying for more than 2 hours.

    Is it really still verifying? If so, shouldn't the containers be restarted after the backup rather than after the verify, otherwise they are offline much longer than needed?

    Any ideas on how to find out whether the verify is really running, or whether something went wrong?
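    Based on the verify command in the log above, one rough way to check whether the verify is still alive might be to look at the PID it records (a sketch, assuming the PID file and log path shown in the log lines above are still in use):

    # PID recorded by the verify step (path taken from the log above)
    cat /tmp/ca.backup2/tempFiles/verifyInProgress
    # Check whether that process is still running and is the tar --diff
    ps -fp "$(cat /tmp/ca.backup2/tempFiles/verifyInProgress)"
    # Watch the verify log; if its size stops changing, the verify is likely stuck
    ls -l /var/lib/docker/unraid/ca.backup2.datastore/appdata_backup.log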

  2. 4 minutes ago, trurl said:

    That "config" as you are calling it is stored on flash as a template. The template is used to fill in the form on the Add Container page. Apps on the Community Apps page know about templates that have already been created by the docker authors, and that is how it is able to help you install a new docker.

     

    But anytime you use the Add Container page, whether for one of the Unraid supported dockers from Community Apps, or for something on Docker Hub, the settings you make on the Add Container page are stored as a template on flash and can be reused to get those same settings for the Add Container page.

     

    So even those docker hub containers can be setup with the same settings as before.

    I hear you, but that is not what I see: at least one of my manually created containers, and one installed from Docker Hub via the Apps search, are not listed on the Previous Apps page (these containers do not have Unraid templates).

    Anyway, restoring to the last known state is not the same as restoring a versioned config, e.g. if I restore container data to date X, I may want to restore the container config to date X or date Y.

    But, I'll leave it at that.

  3. 13 minutes ago, trurl said:

    Your knowledge is incomplete. It does have a copy of the old config because it is on flash and it goes to flash and gets it and uses it to reinstall your docker just as it was.

    Thanks, I wish I'd known that (I bet many people don't, and like me they may look for it in the backup/restore section).

    But there is no history for any of my Docker Hub-only containers, and there is no historic versioning either (or am I going to find it when I try to use it?), so I still think it would be a good idea to implement Docker (and maybe VM) config backup and restore along with the appdata used by the containers.

  4. 8 minutes ago, trurl said:

    These are already saved on flash, and you can reuse them without going to all that trouble of setting each one up again. The simplest way is to just use the Previous Apps feature on the Apps page.

    The config may have been on the flash before, but after losing the cache and restoring it, there is no Docker config, and bringing back the apps leaves them with default configs, not the old ones.

    Previous Apps makes it easier to see what I previously installed; to my knowledge it does not keep a copy of the old config.

    Yes, I could restore the flash along with appdata, or I could manually copy the config files (I don't even know where to start), or ... the backup app could do it for me.

  5. Hi, I lost my cache volume (something went wrong during a disk replacement), restored appdata from backup, but all my Docker configs were gone.

    With lots of effort I recreated each container's config, custom network bridges, environment variables, volume mappings, etc.

     

    For Docker, the container configs are as important as the appdata. Can an option be added to back up and restore the container configs along with appdata? (The same really applies to VM configs.)
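    As a stopgap, a manual copy along the lines of the rsync commands the plugin already runs could capture the templates too (a sketch; it assumes the user-defined templates live under /boot/config/plugins/dockerMan/templates-user/, and the destination folder is just an example):

    # Copy the Docker container templates (the "config") off the flash drive
    # alongside the appdata backup; adjust the paths to match your layout
    rsync -avq --delete /boot/config/plugins/dockerMan/templates-user/ \
        /mnt/user/backup/Unraid/docker-templates/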

     

  6. Hi, after running an extended test and clicking View Results, the UI loads a few thousand lines, then becomes unresponsive, and the main Unraid UI is also unresponsive.

    I assume the results file is too big for the method being used to display the contents; maybe offering a download instead of displaying it inline would be a better option.

    Is there a log file on the filesystem I can view instead?
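    In the meantime, a crude way to hunt for the results on the filesystem might be something like this (a sketch; the search locations and time window are guesses, since I don't know where the results are actually written):

    # List files written in the last two hours under common log and plugin locations
    find /var/log /tmp /boot/config/plugins -type f -mmin -120 -exec ls -lh {} \; 2>/dev/null
    # Then page through the candidate file instead of loading it in the browser
    less /var/log/results.log   # hypothetical file name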

  7. Using Unraid 6.7.2.

    I installed the Duplicacy container using the Unraid template.

    Appdata is mapped to "appdata/Duplicacy"; after starting the container I noticed another folder named "appdata/duplicacy" with a different owner.

     

    root@Server-1:/mnt/user/appdata# ls -la
    total 16
    drwxrwxrwx 1 nobody users  36 Jan  6 07:47 .
    drwxrwxrwx 1 nobody users  42 Jan  6 07:35 ..
    drwxrwxrwx 1 nobody users 116 Jan  6 07:54 Duplicacy
    drwxrwxrwx 1 root   root   18 Jan  6 07:47 duplicacy
    
    root@Server-1:/mnt/user/appdata/duplicacy# ls -la
    total 0
    drwxrwxrwx 1 root   root  18 Jan  6 07:47 .
    drwxrwxrwx 1 nobody users 36 Jan  6 07:47 ..
    drwxrwxrwx 1 nobody users 18 Jan  6 07:47 cache
    drwxrwxrwx 1 nobody users 88 Jan  6 07:59 logs
    
    root@Server-1:/mnt/user/appdata/Duplicacy# ls -la
    total 16
    drwxrwxrwx 1 nobody users  116 Jan  6 07:54 .
    drwxrwxrwx 1 nobody users   36 Jan  6 07:47 ..
    drwx------ 1 nobody users   50 Jan  6 07:47 bin
    -rw------- 1 nobody users 1117 Jan  6 07:54 duplicacy.json
    -rw------- 1 nobody users  950 Jan  6 07:47 licenses.json
    -rw-r--r-- 1 root   root    33 Jan  6 07:47 machine-id
    -rw-r--r-- 1 nobody users  144 Jan  6 07:47 settings.json
    drwx------ 1 nobody users   34 Jan  6 07:47 stats

     

    It appears that the container created new content, and that Docker or Unraid mapped it using a different path, splitting the storage across two locations.

    When my backup completes I will modify the container config to use all lowercase, and I will merge the files.

     

    It is very strange that a container can create content outside of a mapped volume by using a different case version of the same mapped volume path.

    Is this an issue with Unraid, an issue with Docker, or user error?
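    For reference, the merge I have in mind is roughly the following (a sketch; it assumes merging into the lowercase path that the updated container mapping will use, and that nobody:users is the ownership the container expects):

    # Stop the container before touching its appdata
    docker stop -t 60 Duplicacy
    # Merge the mixed-case folder into the lowercase one, then fix ownership
    rsync -avX /mnt/user/appdata/Duplicacy/ /mnt/user/appdata/duplicacy/
    chown -R nobody:users /mnt/user/appdata/duplicacy
    # Remove the mixed-case folder only after verifying the merge
    rm -r /mnt/user/appdata/Duplicacy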

  8. The 9340 flashed to IT mode acts like a more expensive 9300, so unless that is not supported it should work fine, i.e. the objective is no parity errors on reboot?

    The problem with SSD drives appears to be EVO specific: none of my EVO drives are detected by the LSI controller, only the Pro drives are.

     

    I am busy swapping EVOs for Pros in the 4 x 1TB cache, one drive at a time.

    How long should it take to rebuild the BTRFS volume? It has been running for 12+ hours, and I can't see any progress indicator.
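    For anyone else wondering, progress can apparently be checked from the CLI with something like the following (a sketch, assuming the cache pool is mounted at /mnt/cache and the swap ran as a btrfs replace or as a remove/add with a balance):

    # Progress of an in-place device replace
    btrfs replace status /mnt/cache
    # Progress of a balance, if the swap triggered a rebalance instead
    btrfs balance status /mnt/cache
    # Per-device error counters and space usage while the pool rebuilds
    btrfs device stats /mnt/cache
    btrfs filesystem usage /mnt/cache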

  9. The systems do use similar disks (12TB Seagate, 4TB Hitachi, and 1TB Samsung), similar processors, similar memory, and similar motherboards.

     

    It could be that the Adaptec driver and the SAS2LP driver have a similar problem, or it could be Unraid; causation vs. correlation. E.g. how long did it take to fix the SQLite bug caused by Unraid and experienced by only some users?

     

    How can I find out which files are affected by the parity repair, so that I can determine the impact of the corruption and the possibility of restoring from backup?

    How can I see which driver Unraid is using for the Adaptec controller, so that I can tell whether it is a common driver or an Adaptec-specific driver?
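    For the driver question, something like this should show which kernel driver is bound to the controller (a sketch using standard Linux tools, nothing Unraid-specific):

    # Show PCI devices together with the kernel driver in use for each
    lspci -k | grep -i -A 3 'adaptec\|raid'
    # Or list the driver name reported by each SCSI host adapter
    cat /sys/class/scsi_host/host*/proc_name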

     

  10. I enabled syslog, did a controlled reboot, started a check, and again got 5 errors:

    Jan  3 10:03:07 Server-2 kernel: md: recovery thread: P corrected, sector=1962934168
    Jan  3 10:03:07 Server-2 kernel: md: recovery thread: P corrected, sector=1962934176
    Jan  3 10:03:07 Server-2 kernel: md: recovery thread: P corrected, sector=1962934184
    Jan  3 10:03:07 Server-2 kernel: md: recovery thread: P corrected, sector=1962934192
    Jan  3 10:03:07 Server-2 kernel: md: recovery thread: P corrected, sector=1962934200

     

    Nothing extraordinary in the syslog (attached); diagnostics are also attached.

     

    I looked at the other threads that report similar 5 errors after reboots, blaming the SAS2LP / Supermicro / Marvell driver / hardware as the cause.

    I find it suspicious that the problem was attributed to specific driver / hardware when it started happening in Unraid v6, and it also happens with my Adaptec hardware. I can't help but think it is a more generic issue in Unraid, e.g. the handling of SAS backplanes, spindown, caching, taking the array offline, parity calculation, etc.

    Especially since it appears the parity errors are at the same reported locations.

    syslog server-2-diagnostics-20200103-2156.zip

  11. 12 minutes ago, itimpi said:

    As long as you are running Unraid 6.7.2 or later you can configure Settings->Syslog to keep a copy that survives a reboot (and is appended to after the reboot).

    If I enable the local syslog server, does Unraid automatically use it, or is there another config setting?

    How reliable is using syslog compared to an option that just writes to local disk when troubleshooting crashes or shutdowns?

     

     

  12. Before the power outage the servers were up for around 240-something days, with no parity errors.

     

    Note, I said 6.7.0, actually 6.7.2.

    Both are Supermicro 4U chassis with SM X10SLM+-F motherboards, Xeon E3 processors, and Adaptec Series 8 RAID controllers in HBA passthrough mode; one has 12TB parity + 3 x mixture of 4TB and 12TB data disks with a 4 x 1TB SSD cache, the other has 2 x 12TB parity + 16 x mixture of 4TB and 12TB data disks with a 4 x 1TB SSD cache.

     

     

     

     

  13. I have two servers running 6.7.2 (corrected), both connected to a UPS. After an extended power outage two weeks ago, with a graceful shutdown orchestrated by the UPS, the first scheduled parity check after restart reported 5 errors on each server, with exactly the same sector details.

     

    Jan 1 06:09:23 Server-1 kernel: md: recovery thread: PQ corrected, sector=1962934168
    Jan 1 06:09:23 Server-1 kernel: md: recovery thread: PQ corrected, sector=1962934176
    Jan 1 06:09:23 Server-1 kernel: md: recovery thread: PQ corrected, sector=1962934184
    Jan 1 06:09:23 Server-1 kernel: md: recovery thread: PQ corrected, sector=1962934192
    Jan 1 06:09:23 Server-1 kernel: md: recovery thread: PQ corrected, sector=1962934200
    
    Jan 1 04:42:39 Server-2 kernel: md: recovery thread: P corrected, sector=1962934168
    Jan 1 04:42:39 Server-2 kernel: md: recovery thread: P corrected, sector=1962934176
    Jan 1 04:42:39 Server-2 kernel: md: recovery thread: P corrected, sector=1962934184
    Jan 1 04:42:39 Server-2 kernel: md: recovery thread: P corrected, sector=1962934192
    Jan 1 04:42:39 Server-2 kernel: md: recovery thread: P corrected, sector=1962934200

     

    Both servers use the same model of 12TB parity disks; one has 1 parity drive, the other has 2.

    It seems highly unlikely that this is actual corruption; it seems more likely to be some kind of logical issue?

     

    Any ideas?

     

  14. There are Docker options that are not exposed in the GUI, e.g. tmpfs, user, dependencies, etc.

     

    Having the ability to switch a container setup between the vanilla GUI and compose-style YAML text would be ideal, as it allows native configuration without needing to use the CLI or the cumbersome extra command options in the GUI.

     

    The management code can always apply filters or sanitization, such that e.g. options like restart are still exposed in the GUI, or invalid configs are detected. Alternatively, the config could simply be either GUI or YAML, where if it is YAML it is entirely under the control of the user.
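    To make the request concrete, these are the kinds of options I mean; today they have to be crammed into the extra parameters string rather than having their own fields or YAML keys (a sketch; the container name, image, and values are placeholders):

    # Run options with no dedicated fields in the Add Container page
    docker run -d --name example \
        --user 99:100 \
        --tmpfs /tmp:rw,size=256m \
        --restart unless-stopped \
        example/image:latest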
