ptr727

Everything posted by ptr727

  1. Switching my systems to the LSI HBA corrected the behavior; see my blog post for details: https://blog.insanegenius.com/2020/01/10/unraid-repeat-parity-errors-on-reboot/
  2. I hear you, but that is not what I see: at least one of my manually created containers, and one from Docker Hub via the apps search, are not listed on the Previous Apps page (these containers do not have Unraid templates). Anyway, restoring to the last known config is not the same as restoring a versioned config, e.g. if I restore container data to date X, I may want to restore the container config to date X or date Y. But I'll leave it at that.
  3. Thanks, I wish I'd known that (I bet many people don't, and like me they may look for it in the backup / restore section). But there is no history of any of my Docker Hub-only containers, and there is no historic versioning either (or am I going to find it when I try to use it?), so I still think it would be a good idea to implement Docker (and maybe VM) config backup and restore along with the appdata used by the containers.
  4. The config may have been on the flash before, but after losing the cache and restoring it, there is no Docker config, and bringing back the apps leaves them with default configs, not the old ones. Previous Apps makes it easier to see what I previously installed; to my knowledge it does not keep a copy of the old config. Yes, I could restore the flash along with appdata, or I could manually copy config files (I don't even know where to start), or ... the backup app could do it for me.
  5. Hi, I lost my cache volume (something went wrong during a disk replacement) and restored appdata from backup, but all my Docker configs were gone. With a lot of effort I recreated each container's config: custom network bridges, environment variables, volume mappings, etc. For Docker, the container configs are as important as the appdata; can an option be added to back up and restore container configs along with appdata? (The same really applies to VM configs.)
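     For illustration, a minimal sketch of the kind of config capture I mean, assuming it runs as a script alongside the appdata backup (the destination path is hypothetical):
        # Dump each container's full configuration (networks, environment
        # variables, volume mappings) to JSON next to the appdata backup,
        # so the containers can be recreated after a cache loss.
        BACKUP_DIR=/mnt/user/backup/docker-config   # hypothetical destination
        mkdir -p "$BACKUP_DIR"
        for name in $(docker ps -a --format '{{.Names}}'); do
            docker inspect "$name" > "$BACKUP_DIR/$name.json"
        done
        # Custom bridge networks are part of the config too.
        for net in $(docker network ls --format '{{.Name}}'); do
            docker network inspect "$net" > "$BACKUP_DIR/network-$net.json"
        done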
  6. Hi, after running an extended test and clicking view results, the UI loads a few thousand lines and then becomes unresponsive, and the main Unraid UI also becomes unresponsive. I assume the results file is too big for the method being used to display the contents; maybe a download instead of an inline display would be a better option. Is there a log file on the filesystem I can view instead?
  7. This is an annoyance that should take but a few minutes to fix, please.
  8. Thanks, where is the template code hosted? I asked saspus, the author of the container, and he knew nothing of the Unraid template.
  9. Ok, it seems to be user error, both mine for not noticing it and the template creator's. In the container config the cache and logs folders are mapped to "appdata/duplicacy/...", while the config folder is mapped to "appdata/Duplicacy". I will fix the template mappings.
  10. Using Unraid 6.7.2. I installed the Duplicacy container using the Unraid template. Appdata is mapped to "appdata/Duplicacy"; after starting the container I noticed another folder named "appdata/duplicacy", with a different owner.
      root@Server-1:/mnt/user/appdata# ls -la
      total 16
      drwxrwxrwx 1 nobody users 36 Jan 6 07:47 .
      drwxrwxrwx 1 nobody users 42 Jan 6 07:35 ..
      drwxrwxrwx 1 nobody users 116 Jan 6 07:54 Duplicacy
      drwxrwxrwx 1 root root 18 Jan 6 07:47 duplicacy
      root@Server-1:/mnt/user/appdata/duplicacy# ls -la
      total 0
      drwxrwxrwx 1 root root 18 Jan 6 07:47 .
      drwxrwxrwx 1 nobody users 36 Jan 6 07:47 ..
      drwxrwxrwx 1 nobody users 18 Jan 6 07:47 cache
      drwxrwxrwx 1 nobody users 88 Jan 6 07:59 logs
      root@Server-1:/mnt/user/appdata/Duplicacy# ls -la
      total 16
      drwxrwxrwx 1 nobody users 116 Jan 6 07:54 .
      drwxrwxrwx 1 nobody users 36 Jan 6 07:47 ..
      drwx------ 1 nobody users 50 Jan 6 07:47 bin
      -rw------- 1 nobody users 1117 Jan 6 07:54 duplicacy.json
      -rw------- 1 nobody users 950 Jan 6 07:47 licenses.json
      -rw-r--r-- 1 root root 33 Jan 6 07:47 machine-id
      -rw-r--r-- 1 nobody users 144 Jan 6 07:47 settings.json
      drwx------ 1 nobody users 34 Jan 6 07:47 stats
      It appears that the container created new content, and that Docker or Unraid mapped it using a different path, bifurcating the storage location. When my backup completes I will modify the container config to use all lowercase and then merge the files. It is very strange that a container can create content outside of a mapped volume by using a different-case version of the same mapped volume path. Is this an Unraid issue, a Docker issue, or user error?
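      For reference, a sketch of the merge I have in mind once the backup completes (assuming the container is named Duplicacy in the template, and that it is stopped first):
        # Stop the container before touching its data (name is an assumption).
        docker stop Duplicacy
        # Merge the mixed-case folder into the all-lowercase one, remove the
        # duplicate, and normalize ownership to match the rest of appdata.
        rsync -a /mnt/user/appdata/Duplicacy/ /mnt/user/appdata/duplicacy/
        rm -rf /mnt/user/appdata/Duplicacy
        chown -R nobody:users /mnt/user/appdata/duplicacy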
  11. The 9340 flashed to IT mode acts like the more expensive 9300, so unless that is not supported it should work fine, i.e. the objective is no parity errors on reboot? The problem with SSD drives appears to be EVO specific: none of my EVO drives are detected by the LSI controller, only the Pro drives. I am busy swapping EVOs for Pros in the 4 x 1TB cache, one drive at a time. How long should rebuilding the BTRFS volume take? It has been running for 12+ hours and I can't see any progress indicator.
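      In case it helps, a hedged way to check progress from the console, assuming the pool is mounted at /mnt/cache and Unraid is doing a btrfs balance or device replace under the hood:
        # If the pool is being rebuilt with a balance, this shows percent done.
        btrfs balance status /mnt/cache
        # If a device replace is running instead, this shows its progress.
        btrfs replace status /mnt/cache
        # Per-device allocation, useful to watch data moving between drives.
        btrfs filesystem show /mnt/cache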
  12. I got two LSI SAS9340-8i (I need mini SAS HD connectors) ServeRAID M1215 cards in IT mode (Art of Server on eBay), but I can't get the cards to recognize my Samsung SSD drives. So I'm in a tough spot: the Adaptec HBA gives 5 parity errors per boot, and the recommended catch-all LSI in IT mode does not detect my SSD drives.
  13. The systems do use similar disks (12TB Seagate, 4TB Hitachi, 1TB Samsung), similar processors, similar memory, and similar motherboards. It could be that the Adaptec driver and the SAS2LP driver have a similar problem, or it could be Unraid; causation vs. correlation. E.g. how long did it take to fix the SQLite bug that was caused by Unraid but experienced only by some? How can I find out what files are affected by the parity repair, so that I can determine the impact of the corruption and the possibility of restoring from backup? How can I see what driver Unraid is using for the Adaptec controller, so that I can tell whether it is a common driver or an Adaptec-specific driver?
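      On the driver question, a hedged way to check from the console which kernel driver is bound to the controller (the module name below is my assumption):
        # List PCI devices with the kernel driver in use; the Adaptec entry
        # should show something like "Kernel driver in use: aacraid".
        lspci -k | grep -i -A 3 adaptec
        # If it is aacraid, the loaded module details can be checked with:
        modinfo aacraid | head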
  14. I enabled syslog, did a controlled reboot, started a check, and again got 5 errors:
      Jan 3 10:03:07 Server-2 kernel: md: recovery thread: P corrected, sector=1962934168
      Jan 3 10:03:07 Server-2 kernel: md: recovery thread: P corrected, sector=1962934176
      Jan 3 10:03:07 Server-2 kernel: md: recovery thread: P corrected, sector=1962934184
      Jan 3 10:03:07 Server-2 kernel: md: recovery thread: P corrected, sector=1962934192
      Jan 3 10:03:07 Server-2 kernel: md: recovery thread: P corrected, sector=1962934200
      Nothing extraordinary in the syslog, attached; diagnostics also attached. I looked at the other threads that report similar 5 errors after reboots, blaming the SAS2LP / Supermicro / Marvell driver / hardware as the cause. I find it suspicious that the problem is attributed to specific drivers / hardware when it started happening in Unraid v6 and it also happens with my Adaptec hardware. I can't help but think it is a more generic issue in Unraid, e.g. handling of SAS backplanes, spindown, caching, taking the array offline, parity calculation, etc., especially since the parity errors appear at the same reported locations.
      syslog
      server-2-diagnostics-20200103-2156.zip
  15. If I enable the local syslog server, does Unraid automatically use it, or is there another config? How reliable is syslog compared to an option to just write to local disk during crash or shutdown troubleshooting?
  16. Server-1 uses an 81605ZQ and Server-2 uses a 7805Q controller. The parity check just completed; I'll do one more while the server is up, then reboot, followed by another check. How do I get the logs to persist across reboots? I really need to see what happens at shutdown.
  17. Before the power outage the servers were up for around 240 days with no parity errors. Note, I said 6.7.0; it is actually 6.7.2. Both are Supermicro 4U chassis with SM X10SLM+-F motherboards and Xeon E3 processors, and Adaptec Series 8 RAID controllers in HBA passthrough mode. One server has 12TB parity + 3 data disks (a mixture of 4TB and 12TB) and a 4 x 1TB SSD cache; the other has 2 x 12TB parity + 16 data disks (a mixture of 4TB and 12TB) and a 4 x 1TB SSD cache.
  18. I have two servers running 6.7.2 (corrected), connected to a UPS. There was an extended power outage two weeks ago with a graceful shutdown orchestrated by the UPS, and the first scheduled parity check after the restart reported 5 errors on each server, with exactly the same sector details.
      Jan 1 06:09:23 Server-1 kernel: md: recovery thread: PQ corrected, sector=1962934168
      Jan 1 06:09:23 Server-1 kernel: md: recovery thread: PQ corrected, sector=1962934176
      Jan 1 06:09:23 Server-1 kernel: md: recovery thread: PQ corrected, sector=1962934184
      Jan 1 06:09:23 Server-1 kernel: md: recovery thread: PQ corrected, sector=1962934192
      Jan 1 06:09:23 Server-1 kernel: md: recovery thread: PQ corrected, sector=1962934200
      Jan 1 04:42:39 Server-2 kernel: md: recovery thread: P corrected, sector=1962934168
      Jan 1 04:42:39 Server-2 kernel: md: recovery thread: P corrected, sector=1962934176
      Jan 1 04:42:39 Server-2 kernel: md: recovery thread: P corrected, sector=1962934184
      Jan 1 04:42:39 Server-2 kernel: md: recovery thread: P corrected, sector=1962934192
      Jan 1 04:42:39 Server-2 kernel: md: recovery thread: P corrected, sector=1962934200
      Both servers use the same model 12TB parity disks; one has 1 parity drive, the other 2. It seems highly unlikely that this is actual corruption, and more likely some kind of logical issue? Any ideas?
  19. There are Docker options that are not exposed in the GUI, e.g. tmpfs, user, dependencies, etc. Having the ability to switch a container setup between the vanilla GUI and Compose YAML text would be ideal, as it allows native configuration without needing to use the CLI or the cumbersome additional command options in the GUI. The management code can always apply filters or sanitization, such that e.g. options like restart are still exposed in the GUI, or invalid configs are detected. Alternatively, the config may simply be GUI or YAML, where if it is YAML it is entirely under the control of the user.
  20. Ok thx, so CLI use only then.
  21. Ok, it is ugly, but I'll give it a try. I really wish we could just use compose files.
  22. I am trying to create a container that uses systemd and requires tmpfs mappings.
      Docker run example:
      docker run -d \
        --name=dwspectrum-test-container \
        --restart=unless-stopped \
        --network=host \
        --tmpfs /run \
        --tmpfs /run/lock \
        --tmpfs /tmp \
        -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
        -v "/.mount/media:/config/DW Spectrum Media" \
        -v /.mount/config:/opt/digitalwatchdog/mediaserver/var \
        ptr727/dwspectrum
      Docker Compose example:
      services:
        dwspectrum:
          image: ptr727/dwspectrum
          container_name: dwspectrum-test-container
          hostname: dwspectrum-test-host
          domainname: foo.net
          build: .
          volumes:
            - /sys/fs/cgroup:/sys/fs/cgroup:ro
            - ./.mount/media:/config/DW Spectrum Media
            - ./.mount/config/:/opt/digitalwatchdog/mediaserver/var
          tmpfs:
            - /run
            - /run/lock
            - /tmp
          restart: unless-stopped
          network_mode: host
          ports:
            - 7001:7001
      How do I add tmpfs mappings in Unraid's Docker configuration, or do I need to specify them as additional options on the command line?
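      For anyone following along, a hedged sketch of the additional-options route, assuming the template's Extra Parameters field (Advanced View) passes raw flags straight through to docker run:
        # Added to the container template's "Extra Parameters" field:
        --tmpfs /run --tmpfs /run/lock --tmpfs /tmp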
  23. Yes, but this functionality is not available for log files that are not natively integrated into the Unraid GUI. If the code that displays the logs could be used to display the contents of other log files, that would be equally ideal.
  24. I am looking for an easy way to view log files in a web browser. Something where I can browse for log files, e.g. from Unraid or from container appdata, and get a "tail"-like live view, ideally with context highlighting, in my web browser. Something like https://dozzle.dev/ or Papertrail. Any ideas?
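      For the container-log half of this, a hedged example of running Dozzle itself as a container (image name and default port taken from its documentation, not from an Unraid template):
        # Dozzle tails the logs of all running containers in the browser.
        docker run -d \
          --name=dozzle \
          -v /var/run/docker.sock:/var/run/docker.sock:ro \
          -p 8080:8080 \
          amir20/dozzle
        # Then browse to http://<server-ip>:8080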
  25. Yep, I understand the meaninglessness of mapping ports in bridge mode, but without it the web UI link does not work. Of all my containers running in bridge mode, Home Assistant is the only one where the launch web UI link does not appear. Most other containers do list port mappings such that they work correctly in host or bridge mode. It would be great if the web UI link always worked, maybe by using the port mapping, or by using the IP address assigned in the bridge setup.