ptr727

Posts posted by ptr727

  1. Seems like a lot of dangling volumes for the once or twice I changed a tag, and I can't control what the container author does.

    Should these not be cleaned up the same way orphan containers are detected and cleaned?

     

    I'll delete them for now.

  2. I noticed that my server has many, many dangling docker volumes:

     

    root@Server-1:~# docker volume ls --filter dangling=true
    DRIVER              VOLUME NAME
    local               0ba5094dffcf33bd70ec9bb78ed0d3641a4fb64e0c791429b38df5153a66af50
    local               0bae1eb00b48ae1ef5355af7dd7eca6f4f3ac13ce800b66a5102479a6859673c
    local               1c45b2c3475f654a3105aa0190e4afa61af2683a87969685e892246f54257d38
    local               1eb893aac8214c83c0c1073958583c41566ef174421bc1b05da931ea37884e7a
    local               1fc2714cc077fec6a8bf8d73a9c6a0dcca5f2dca6e0eabb41accd94d043645d8
    local               2b54d36658f495bab15960e2551fed5b7b56f309a89cf86530dfca1241ac96ea
    local               3c2940350c48c07ccb49c1468f775df8d0dcbfbe8676c10fccc1e68b5b86dbee
    local               3cb4872b5aceb2652f7aebdbb3ba2d8570c2a646b748f516077c141429c5d648
    local               3dd25a2b301027698e470040271ccc15ce128e07d05d6318d7804ca53e0ac1b5
    local               3eb3b12de3266f890fbed2dc5817b924b1be22c7cf79558aad83b61dc759894b
    local               4cedcb55b0e0d461729aee3b3edff6bc6e0917348a8554ddead92dc5d4c6ea97
    local               5daada5a741ec7b46b7861f1bbafce4e2cec6eec6c587714f5cebdbbe6f4aa31
    local               6a499eef70c54c7ba5404cd1a9cb04209d4225d53ce754d3ef90d84c37519a4a
    local               6ca7dc1f22d83997cc3de9aca32ca426710d4592617c743dcf64df4d51901284
    local               7a5f7dbed474d98ff640da70ef6f8b6c391563302ff5134c33d4657e15768199
    local               8b48a369ed5bd2975d0c07c127c078af2ad87043fb147bf7ccf1094b29309bff
    local               8b8299336f3d24384299acfec10c6debc354ae92fc966be09e368214eece1556
    local               8cb9b784c3a731786f3c0e9cb38c88d776e365e24b82732508d9d4abd9feb75d
    local               8e2120cc6746af544b55521d1b1dff58f56f860347b024fd623c459bdea47fe0
    local               8ea8ba7266017e65d51c1756d8c8c78d7297a0cf279c42ee793c5ef2d6b9c6d4
    local               08b2822c3b25578568a4342e15f4a605d97e751e68e76fbe55c919bc693a3bbd
    local               9c6be94de472ff22bd23c9f35cc0edc4f2e9cb86c57b727de3a756e71a80155e
    local               9f552dcba0debad6b8f1db8d77599e238ffe6f3772e838f189961c0071951bef
    local               9fc7f40d03c3d0d4a1750c8903bd9cf677ca4e72f0d775519b6c9bfc8a2b7657
    local               12a82918e7bca4b0ba3ec9c06b6405ec1f3043d922eba9672cdcbdba7b392680
    local               16d47df3ecdd99db8d5298a8c7cf4c943e8c1d86a8ee74b0792168043d949dc4
    local               17d2ba123a57d82add7b8f66ab434f63795d6b8fdbd335d3be18d4d85a8e19a3
    local               21cf82ec509bfbaa294d87ceec9eee50ed78e0317b2fba4e42e2e398376d5fcb
    local               26aa0f1615d09c7099509cde919b1029d4a69806e7f7ce28d79e462ae404c942
    local               32d7562f889adc79b35e1f5b7c34bd3cb9a809d3b6edb8886f7fc2c74bc51852
    local               33f7181599fed19baaf6cc584c88c1f64218d689c6ceb7d4b3e2b5413fbd10b5
    local               0036ca6a315c699d0e0b6cdc5c5c9e9320b40841815d19d786477d589d4a632d
    local               39d634a4f88618dcf8297e9090a630cbf8ad0cc3bbf8a8fb4ea8b0534dfed336
    local               040d22a1827ed6a895ba06f79c2b61b5d78067b9b28c2f17aaa4eef52b419845
    local               42d8335863f313a7c9891f269e86f7fc1a12843d635376165cad096e04d194a2
    local               050dba4623bbcadae9152051cc530aba2af2c304026513183ef5ae401457231f
    local               58bfcad418a3ab7201b61985b27f9002b3982f94aedafc4c7cc18d817b2e40ad
    local               62e3b3eb4cc9b7ccd2441e5135be0b91b214d04058e34f6f7891f0755b4dfde9
    local               66f6108f42a512ee40a48e49666de464732625a058b0a888fee390c81a82c159
    local               74be80d32a57abddddc12059a01f1abcba37d95679fd0299a22d9d1b48ae4a95
    local               84c779adda114875b8080c7ef4c377b0ef422946a69b401ce59afe2be3f0886d
    local               90cf4300ff4183743094837f0567ea5d66965257721aa70b56438fdc80f8fe69
    local               91ab878addc510405fa5786d8eeafebe7f2390860bc44a6cfe036ba79fd1aade
    local               94aac047de515d6433804416643546d018dd6ab3e34ae1a1f1fa3f03a3761889
    local               109a2c3a458e704e1b366ecfe6cfb126aa8adff39633fc9559a780fd00daa6f2
    local               0274f5b53786839283743d4aace871da57109cb749f5ed1106352df3f9354048
    local               363f7410e68a05eb8208998bdf5f360917fe7192806650d7e9991a5f562eb44a
    local               513bbc6d1113d20f1f1fb048c6ca6253e7406f213699b5d472361eb0bee45f3d
    local               554cb6440e38b9f866127558fde189009ccffe2fffe020402791ed7307cbc2cf
    local               633ea99bbd1b14288cc33135dad25ebf206a5301c606b2289f7563989e42ea1f
    local               778e95f47bfa7babbde920f31a66d8e191117a062f6b937fdd90a70567034865
    local               799f0dd7615758b833e85fbc502c62015233e237f644958431199990d8b9d36b
    local               900d3f693c9382801068835650fa22457b14d3f8a29b830ec1b878786474b2d9
    local               906eec9f15bbb386ae75e4ecff4866f626e469be5d843077ce1f3d6def18c13f
    local               1720e7fefa26139af28c724d91dbe02930e27a2768553ad69af6c941c365e174
    local               2778bbf1356bf1633b4ae9ec1bdd5823c6c28062c808c458a4aab64c02cc66ec
    local               9921f44aa7d00b3a6d01c5326f925debd5285978318622b0de2f5f3c3c4f4d50
    local               38042c4a79002076313400fa79275c80a508574b485be6a9171dfe6ce0520fa3
    local               592267e0fbfafb21c228e4664447bb9254565a6c06b6d4de0ee5095756375730
    local               2456727b799ec90e06b8bf9d3d12adab69ec7d2151822fc358d38b045c5ac6cd
    local               6272694e114f4c5a1c683966914dc276a5b2786d59551ed80d89dd7f9f2b4882
    local               6335079e4e4873c6fb472b9bef1136a2e86e26a57cdbfc24fc30c21ad60717b4
    local               7607692bd6877301f868f703ab7df771c9666f00f20e96bf0fa21fe64f2b5483
    local               8815124c23bee7a8d6b55e75a74a9c52e5589706e5f2c5064296c88726169775
    local               27659576bd05e56efe5c72d5f0fd58a64a9d863c870487cf8c0e7b5842f31015
    local               1651077376dbc05b5726c0a43adbf3821c0c2c197ba25c86f1f76e1ed4567e44
    local               a3d763caa0822612da8b65a3e11a6a95d3416846206b7e4361713a6425752418
    local               a8e5380221990c90defed125f69e206c9ad4ae045088bf4462580b46f7e18778
    local               a31a3811cb19e54dad492bac34c6f771c13ad316cd3c00790936acedb724d593
    local               a44d2a96cf79c51e2fab7a02541c7b05c80df4bb59ed3a87ac6fbee1168e2a95
    local               a99b757e20052a5f064f34f55b7f6bdd7f4ab545c278a15e70303cc6b4b42e98
    local               a7176c2660c7c5fa618d821ea95446140b0905c54e978e026f1026af9e3e19be
    local               aa3fda870c05b95f1d3adeb84e74f98bde648920a73f2b30f3151d76b5fdc51f
    local               aaddef3b9e5014dce7dc517622b772cbffff53178c3fcf9eee2f20fae882ac02
    local               b4edbbfad833f10c9394a5bcd1925721a66cc322530d06ab6e85ee14ebfe236d
    local               b6b2b18c620e0d2f8832777ab83e2619671ca38954b5c80d2c8c04a3c908376d
    local               b14657bd1542b35573e64c852cf543f32951dd6e0f1a888dd388f06ff8ab61f3
    local               bc8e8bc066509bc65ab5d453b17988538d5df8b87ffeb3c8841a0fae953138c7
    local               bd894ed6bdec599b047952184d01f9c3ef19da7ff80ca46cd94d2af2cf4c06a1
    local               bf1166b85b0de409235c342d7821d3c0a88a72dfec2cca2c3181c2e053bd6d19
    local               c2e0e6f4e7f5e8943f31cfc4c31a8187a70e7aa563d223e37d1295f11a267bbf
    local               c3c0269d87808ff1b046262aab24d9c2978c02a109d3c8104626eb0613db43fb
    local               c5db1ff205df3454739bd2d016e87a6930d75da484afa296827177a0575c359f
    local               c8fcb8c363c0a3a821d1a26def8d6e8a1ae30ea454d43431a82b6307ecf1c8b0
    local               c76f173d9ed52a1e98703cb1c7239593faf674e94a5d649efe158f81aceb7fb7
    local               c88c0f4b2533f9f127e52a7f8fa0bc7a1c8d28bb5fe633fd7d668c02c8272381
    local               c696b2343bd41fb38b894173c02e8f682d81202dade5962503f3d3764e22f823
    local               c4247ed18ad96b4a4be4c290b3c2599e23ad7e69c5ae2b61e9adae9934c2c65e
    local               c9253951bc3f5ca0a96ad65d1601696a1a4e5dfdfe03ac44bd6c7889dd2a936e
    local               ced572fad9d73e2cfe406831e6e9dd2777da590e499464987cd269bf4b0ebbde
    local               d2b06f94e8d02fb91559f6289a59007ca5fc5ad8a41128a0e8092bc6b2b52f9d
    local               d20e4304b0f635353ef04fb23002fd13f299fb450417da444eddf9ce7a24038e
    local               d113c687080fa7d4457b2d6c4e05e6eb01791e02deba23cce4e9c0e08bd4d93c
    local               da610ec7c395feb90453ad2db60195c17758c3a6d3e28230b5f057af8ab4c2bf
    local               dbcaf5db103cc231f3cee1effd6e3f79eac886adb564bf61039f88088864368d
    local               de9742bdcb34d5e9796d2148e016e1410f8b2b8d9c0c7471720c1ee271470e9f
    local               e4ec369f8b353e9a6ba14026bf54fe8f2717deb2ac3b1bb02b97764dfc8bfaea
    local               e5e24d890c9d739b2265a020deb52f7662e1fb16336cf6a23d7d26aa3f7f26e5
    local               e10b43b39c99eae92f5c46994c3e21f7f41eb90c968401f312105617a109b8e7
    local               e17b5e18e4af64ba42a31023d18c7d2b79b98ccf2fe6b65648d5b9f009c3c024
    local               e39ab9979529fe059185773bf79bc48eb84e519742e4c5f4e34b98470fe4f32f
    local               e88f619472bd5359e8a0406aa162d62b13d84f41673f1f776afa09dfd55a81b8
    local               eb11645d3333bfd60e72e2e829766d0ba03849c9f19d7fa57612d67e28b14e39
    local               ecd8b3e9b7594b5dcd98d9bdaad762c3acf1b662c42423a975f2ffcecbbeb1b7
    local               ed4a6e0d8ee2382f9b039b5c8bdbb1b4fd3a2035e134a402d72b56f34f90bed9
    local               efb9064152f3fd682dde869ddbeff87b0ae54ea2cd746be065006c9d3359c416
    local               f2cd3b92f38c2200305f2ab46a86724bd9611ff7b9b244906efff1e877b2e708
    local               f4cbfa96bf3d7b5ee8853dc0a6c74396316d23472a8b00dcfef76c53a16ff10f
    local               f5cd96bc2ca512aa40f91f63bc2c38b40bfd48829a2828b3c488e51df6b1fdd7
    local               f7be593ee92d98ec3c337456873d47c5c5c7e6a2c6b6663beb7d65569e16843c
    local               f025a437b1b036915d972a7e8d679eae9f07946a56674e4224fd91d89a565d86
    local               f93f44310e76ebe885563dfc8becb2664f5bfee3b0ca0d0893137d4302e0e169
    local               f609c86470b20aefa1d48366b01e65c00ebc6055f623938d68cfc692c7fb7831
    local               f72969cc93564e720998f8ed7019649e204727b95a3172b0e0f7c7d3bc7d7c55
    local               f7864781b0787de129eb194c9b3847fdf676552557a4752a46dd2c7ea1164268
    local               fa3f212b923f11e2f78c412e717219fe2512388b55ca22b50fa4ce5015b716ca
    local               fc77d523159dc21408437aa7f380439f92c7e0b415d0ee35ac4ca573a6001e4a
    local               ff7cbd0df2a61036bfa689ea5faa492a2ade39970f5f791f8812677cd19e9325
    root@Server-1:~#

     

    Why are the volumes left dangling?

    Can I safely delete them using docker volume prune?
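
    For reference, the cleanup I have in mind is along these lines (my understanding is that docker volume prune only removes volumes not referenced by any container):

    ```
    # List volumes not referenced by any container (same as above)
    docker volume ls --filter dangling=true

    # Remove all unreferenced volumes (prompts for confirmation)
    docker volume prune

    # Or remove them explicitly by name
    docker volume rm $(docker volume ls -q --filter dangling=true)
    ```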

  3. 24 minutes ago, ionred said:

    Is your garage climate controlled? Here in Florida, that would destroy the equipment just getting it near a garage in summer. 😓💦

     

    Unfortunately not, but here (90266) we only get about two months of 90F+ in my garage; the rest of the year it is in the mid 70s.

    My garage is insulated and drywalled, I have a 24/7 extractor fan in the ceiling, four top-mounted extractor fans on the rack, and SNMP and email-alert environment monitoring in the rack. On really hot days I run a high-speed floor fan and leave the garage door open about 6".

    The biggest problem I have is when we park cars with hot engines in the garage; the servers don't like that. I've trained the family to let the cars cool down before parking inside in the summer.

    I wish I had planned better when we built the house and run ducting from the rack to the outside.

  4. Hi, I'm wondering if BTRFS is the right choice for a resilient cache / storage solution?

     

    I run two Unraid servers: the primary has 40TB of disk plus a 2TB cache (4 x 1TB SSD), the secondary 26TB of disk plus a 2TB cache (4 x 1TB SSD). On two occasions I've lost my entire cache volume due to one of the drives "failing". I say failing, but really both times it was my own fault: I didn't want to shut down, I pulled the wrong drive, and I immediately plugged it back in. But this is no different to a drive failing or a connection failing; pulling disks during certification of large resilient storage systems is a perfectly good test.

     

    One would expect the loss of one disk in a four-disk BTRFS RAID10 config to be a non-issue. Not so: first the log started showing BTRFS corruption errors that were not being auto-fixed, then I ran a cache scrub, which reported no errors while errors kept appearing in the log, then a scrub with repair, which reported repaired. Then I started getting docker write failures; the cache had gone read-only and BTRFS was corrupt.
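
    For context, the command-line equivalent of the scrub steps above is roughly this (a sketch, assuming the cache pool is mounted at /mnt/cache):

    ```
    # Per-device error counters for the cache pool
    btrfs device stats /mnt/cache

    # Foreground scrub with per-device statistics
    btrfs scrub start -B -d /mnt/cache

    # Progress/status if the scrub runs in the background
    btrfs scrub status /mnt/cache
    ```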

    In both cases I resorted to rebuilding the cache from scratch and restoring appdata backups; I lost the VMs (unlike docker stop/restart, there is no easy way to back up VMs).

     

    I've run hardware RAID for a long time, including hardware that uses SSD caching; I've lost disks and pulled disks, but in all cases the array eventually came back on its own.

    I simply do not have the same trust in Unraid's cache; I think it is fragile and unreliable to the point where it needs to be backed up constantly.

     

    I'd like to see Unraid/Limetech publish their resiliency and performance test plans: what is tested for, what are the known failure scenarios, what are the known recoverable scenarios, and are my expectations of resiliency and performance unfounded?

     

    And this is not about BTRFS, it is about Unraid. I don't care what Unraid uses for the cache volume: it could have supported SSDs in data volumes and no cache would be required, or it could have used ZFS and we would have different problems. BTRFS was an Unraid choice, and I find it fragile.

     

    What are your experiences with cache resiliency?

  5. 6 hours ago, mishmash- said:

    Note that partly due to legacy issues from upgrading from HDD to SSD array, I still use a cache drive SSD in BTRFS raid1. But I think I actually like having a cache SSD with an array SSD, as the cache can be trimmed, and is constantly seeing tiny writes etc. I might upgrade it to an NVME for next time. In reality though I think it does not matter at all on having cache+array ssd or just full ssd array no cache.

    How do you have SSDs in the array? My understanding is that SSDs are not supported in the array; something to do with how parity calculations are done.

  6. I am having a weird problem that I can't explain.

     

    I use HandBrakeCLI to convert a file \\server\share\foo.mkv to \\server\share\foo.tmp

    I rename \\server\share\foo.tmp to \\server\share\foo.mkv

    I store the \\server\share\foo.mkv modification time for later use.

    The \\server\share\foo.mkv attributes are read by MediaInfo, FFprobe, and MKVMerge.

    I come back later, and the stored time no longer matches the file modified time.

     

    No other apps are modifying the file, and I can't figure out why the modification time would change.

    I am running from Win10 x64 to Unraid over SMB to a user share.

    I ran Process Monitor on the Win10 machine, filtering by the file name, and after HandBrakeCLI exits, no modifications are made from the Win10 system to the file.

    The pattern is: HandBrakeCLI open/write/close, my code open/read attributes/close, FFprobe/MediaInfo/MKVMerge open/read/close.

    Then, after a wait, my code does another open/read attributes/close, and the timestamp has changed from the previously read timestamp.

     

    I wrote a little monitoring app that will compare the file modified time with the previous time every second.

    The pattern is always the same: the file modified time changes on HandBrakeCLI exit, then it changes two more times.

     

    E.g.

    4/30/2020 10:31:25 PM : 4/30/2020 10:31:21 PM != 4/27/2020 7:40:43 PM
    4/30/2020 10:31:45 PM : 4/30/2020 10:31:36 PM != 4/30/2020 10:31:21 PM
    4/30/2020 10:31:48 PM : 4/30/2020 10:31:48 PM != 4/30/2020 10:31:36 PM

     

    E.g.

    4/30/2020 10:54:02 PM : 4/30/2020 10:54:01 PM != 4/27/2020 7:40:43 PM
    4/30/2020 10:54:12 PM : 4/30/2020 10:54:03 PM != 4/30/2020 10:54:01 PM
    4/30/2020 10:54:17 PM : 4/30/2020 10:54:17 PM != 4/30/2020 10:54:03 PM

     

    E.g.

    4/30/2020 11:16:13 PM : 4/30/2020 11:16:12 PM != 4/27/2020 7:40:43 PM
    4/30/2020 11:16:24 PM : 4/30/2020 11:16:15 PM != 4/30/2020 11:16:12 PM
    4/30/2020 11:16:26 PM : 4/30/2020 11:16:26 PM != 4/30/2020 11:16:15 PM

     

    I am speculating that the file modification is happening on the Unraid side.

    A wild guess: maybe the FUSE code buffers the write, the last buffered write updates the modified time, and when Samba comes back later it reads the now-updated time instead of the time at SMB file close?

     

    Any other ideas?
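
    One check that could be done from the Unraid side is to poll the file's mtime directly on the user share and log when it changes (a rough sketch; the path is an example):

    ```
    #!/bin/bash
    # Poll the file's modification time once per second on the Unraid side
    # and log whenever it changes (path is an example).
    FILE="/mnt/user/share/foo.mkv"
    LAST=""
    while sleep 1; do
        NOW=$(stat -c %y "$FILE")
        if [ "$NOW" != "$LAST" ]; then
            echo "$(date '+%F %T') mtime changed: '$LAST' -> '$NOW'"
            LAST="$NOW"
        fi
    done
    ```

    If the mtime also changes here after the SMB close, the change is happening server-side rather than being made by anything on the Win10 machine.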

  7. I have some code that uses the .NET Core FileSystemWatcher to trigger when changes are made to directories.

    When the directory is an SMB share on Unraid, and changes are made from a docker container to the underlying directory, the SMB share does not trigger the change notification.

     

    E.g.

    SMB share \\server\share\foo points to /mnt/user/foo

    Windows client monitors for changes in \\server\share\foo

    Docker container writes changes to /mnt/user/foo

    Windows client is not notified of the changes.

     

    Is this expected behavior with Samba on Linux (I did not test), or is it something with Unraid user shares not triggering Samba change notifications?
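
    One way to narrow it down on the server side would be to watch the underlying path with inotify while the container writes (a sketch, assuming inotify-tools is available on the server):

    ```
    # Watch the underlying user share path for changes made by the container
    # (requires inotify-tools; /mnt/user/foo matches the example above)
    inotifywait -m -r -e create,modify,delete,move /mnt/user/foo
    ```

    If events show up here but the Windows client still gets nothing, the gap would be between those events and Samba's change notify, rather than in FileSystemWatcher itself.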

  8. Trying to do GPU-only folding; it does not seem to make any progress.

    Log shows:

    16:40:33:Enabled folding slot 00: READY gpu:0:GP106GL [Quadro P2000] [MED-XN71]  3935
    16:40:33:ERROR:No compute devices matched GPU #0 NVIDIA:7 GP106GL [Quadro P2000] [MED-XN71]  3935.  You may need to update your graphics drivers.

     

    Is it a driver issue, or are there just no work units for a P2000?

  9. 5 minutes ago, Roxedus said:

    As I said in the second comment on this post, you should get going by defining runtime, and add the variables for capabilities and devices. as you would to your mediaserver

    I'll take a look, but for a moment imagine somebody like me reading the blog post and wanting to contribute: I come to the forum, I see pages and pages of posts with no single place pointing to instructions, and after reading a couple of pages, you know what they do, they leave.

    Update: I added the same runtime and device parameters as for my Plex docker, still no web UI, and even the app says the web UI is broken. The Folding@home forum post is 6 pages long, with no single point of instructions.
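
    For anyone else trying this, the parameters in question are along these lines (a sketch; the docker run form and image name are placeholders for the template fields):

    ```
    # Confirm the driver sees the card and get its UUID if needed
    nvidia-smi -L

    # Equivalent docker run form of the template settings (image name is a placeholder)
    docker run -d --name=folding \
      --runtime=nvidia \
      -e NVIDIA_VISIBLE_DEVICES=all \
      -e NVIDIA_DRIVER_CAPABILITIES=all \
      linuxserver/foldingathome
    ```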

  10. New user, installed on two similar systems; the only difference is the number of drives.

    Default ports do not work for the host; I had to change the port number from 18888 to 8888, else I kept getting connection refused.

    First server: no problem running tests.

    Second server crashes in what appears to be a timeout waiting for the 20 spinning drives to spin up:

    ```

    DiskSpeed - Disk Diagnostics & Reporting tool
    Version: 2.4
     

    Scanning Hardware
    08:25:25 Spinning up hard drives
    08:25:25 Scanning system storage

    Lucee 5.2.9.31 Error (application)

    Message: timeout [90000 ms] expired while executing [/usr/sbin/hwinfo --pci --bridge --storage-ctrl --disk --ide --scsi]

    Stacktrace: The Error Occurred in
    /var/www/ScanControllers.cfm: line 243

    241: <CFOUTPUT>#TS()# Scanning system storage<br></CFOUTPUT><CFFLUSH>
    242: <CFFILE action="write" file="#PersistDir#/hwinfo_storage_exec.txt" output=" /usr/sbin/hwinfo --pci --bridge --storage-ctrl --disk --ide --scsi" addnewline="NO" mode="666">
    243: <cfexecute name="/usr/sbin/hwinfo" arguments="--pci --bridge --storage-ctrl --disk --ide --scsi" variable="storage" timeout="90" /><!--- --usb-ctrl --usb --hub --->
    244: <CFFILE action="delete" file="#PersistDir#/hwinfo_storage_exec.txt">
    245: <CFFILE action="write" file="#PersistDir#/hwinfo_storage.txt" output="#storage#" addnewline="NO" mode="666">
     

    called from /var/www/ScanControllers.cfm: line 242

    240:
    241: <CFOUTPUT>#TS()# Scanning system storage<br></CFOUTPUT><CFFLUSH>
    242: <CFFILE action="write" file="#PersistDir#/hwinfo_storage_exec.txt" output=" /usr/sbin/hwinfo --pci --bridge --storage-ctrl --disk --ide --scsi" addnewline="NO" mode="666">
    243: <cfexecute name="/usr/sbin/hwinfo" arguments="--pci --bridge --storage-ctrl --disk --ide --scsi" variable="storage" timeout="90" /><!--- --usb-ctrl --usb --hub --->
    244: <CFFILE action="delete" file="#PersistDir#/hwinfo_storage_exec.txt">
     

    Java Stacktrace: lucee.runtime.exp.ApplicationException: timeout [90000 ms] expired while executing [/usr/sbin/hwinfo --pci --bridge --storage-ctrl --disk --ide --scsi]
      at lucee.runtime.tag.Execute._execute(Execute.java:241)
      at lucee.runtime.tag.Execute.doEndTag(Execute.java:252)
      at scancontrollers_cfm$cf.call_000006(/ScanControllers.cfm:243)
      at scancontrollers_cfm$cf.call(/ScanControllers.cfm:242)
      at lucee.runtime.PageContextImpl._doInclude(PageContextImpl.java:933)
      at lucee.runtime.PageContextImpl._doInclude(PageContextImpl.java:823)
      at lucee.runtime.listener.ClassicAppListener._onRequest(ClassicAppListener.java:66)
      at lucee.runtime.listener.MixedAppListener.onRequest(MixedAppListener.java:45)
      at lucee.runtime.PageContextImpl.execute(PageContextImpl.java:2464)
      at lucee.runtime.PageContextImpl._execute(PageContextImpl.java:2454)
      at lucee.runtime.PageContextImpl.executeCFML(PageContextImpl.java:2427)
      at lucee.runtime.engine.Request.exe(Request.java:44)
      at lucee.runtime.engine.CFMLEngineImpl._service(CFMLEngineImpl.java:1090)
      at lucee.runtime.engine.CFMLEngineImpl.serviceCFML(CFMLEngineImpl.java:1038)
      at lucee.loader.engine.CFMLEngineWrapper.serviceCFML(CFMLEngineWrapper.java:102)
      at lucee.loader.servlet.CFMLServlet.service(CFMLServlet.java:51)
      at javax.servlet.http.HttpServlet.service(HttpServlet.java:729)
      at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:292)
      at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207)
      at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
      at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240)
      at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207)
      at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:212)
      at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:94)
      at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:492)
      at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:141)
      at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:80)
      at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:620)
      at org.apache.catalina.valves.RemoteIpValve.invoke(RemoteIpValve.java:684)
      at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:88)
      at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:502)
      at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1152)
      at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:684)
      at org.apache.tomcat.util.net.AprEndpoint$SocketWithOptionsProcessor.run(AprEndpoint.java:2464)
      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
      at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
      at java.lang.Thread.run(Thread.java:748)
     

    Timestamp: 2/20/20 8:29:32 AM PST

    ```

  11. 8 minutes ago, Squid said:

    [attached screenshot]

    Ah, I was about to say that it only allows the top level to be excluded, not child folders, but then I noticed the edit box, so I can add my own path instead of using the GUI.

    Would be cool if the GUI allowed sub-folder selection, but I'll wait for restore to complete, then try to exclude the Plex metadata folder.

    Thx

  12. 11 minutes ago, Squid said:

    The backup took 3:18, you can expect the verify to take that long.

     

    It would be a somewhat safe assumption that when starting back up, any given app is going to wind up making changes to the appdata due to its start procedure (ie: rescanning media, etc).  If you start the apps before verification, then any verification is going to fail.

     

    Turn off verification.  

    Ah, so verify is comparing the files, not just verifying integrity, got it.

    I bet it is the Plex metadata that is taking forever. Is there a way to exclude the metadata from the backup (it can be re-downloaded), maybe a path exclusion?
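
    For reference, the sub-folder I have in mind is something like this (an example path; the exact location depends on how the Plex container maps /config under appdata):

    ```
    # Example of the metadata sub-folder to exclude (path is an example and
    # depends on the container's /config mapping under appdata)
    ls "/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Metadata"
    ```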