jaso
Everything posted by jaso

  1. Solved by rolling back to the previous version, by changing the template entry for 'repository' from linuxserver/sickchill to linuxserver/sickchill:2024.3.1-ls192
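The same rollback idea, sketched from the command line (the repository and tag are from the post above; on Unraid you would normally just edit the template's "Repository" field rather than run docker by hand):

```shell
# Minimal sketch: pin the image to a known-good tag instead of :latest,
# so "check for updates" cannot pull the broken build again.
REPO="linuxserver/sickchill"
GOOD_TAG="2024.3.1-ls192"              # last known-good build from the post
PINNED="${REPO}:${GOOD_TAG}"
echo "Set the template Repository field to: ${PINNED}"
# the equivalent manual pull would be:  docker pull "$PINNED"
```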
  2. Like the title says, sickchill won't start since the last sickchill update. I have updated Unraid from 6.12.9 to 6.12.10, but it still won't start. The following errors were appearing in my Unraid syslog, before and after the upgrade of Unraid. I have confirmed the errors are sourced from sickchill by starting and stopping each docker. The errors only appear when sickchill is started (and stop when sickchill is stopped):

     Apr 9 20:57:32 Tower kernel: traps: python3[11080] trap invalid opcode ip:15276047f14e sp:7ffc5821ba10 error:0 in etree.cpython-311-x86_64-linux-musl.so[15276044d000+32a000]
     Apr 9 20:57:34 Tower kernel: traps: python3[11220] trap invalid opcode ip:148d0ee7f14e sp:7ffd670c8300 error:0 in etree.cpython-311-x86_64-linux-musl.so[148d0ee4d000+32a000]
     Apr 9 20:57:38 Tower kernel: traps: python3[11325] trap invalid opcode ip:14c0dca7f14e sp:7ffec052d360 error:0 in etree.cpython-311-x86_64-linux-musl.so[14c0dca4d000+32a000]
     Apr 9 20:57:42 Tower kernel: traps: python3[11418] trap invalid opcode ip:146ccec7f14e sp:7ffdcacc9050 error:0 in etree.cpython-311-x86_64-linux-musl.so[146ccec4d000+32a000]
     Apr 9 20:57:45 Tower kernel: traps: python3[11550] trap invalid opcode ip:14d71687f14e sp:7ffe90072fc0 error:0 in etree.cpython-311-x86_64-linux-musl.so[14d71684d000+32a000]
     Apr 9 20:57:48 Tower kernel: traps: python3[11640] trap invalid opcode ip:14a154e7f14e sp:7fffe7b38550 error:0 in etree.cpython-311-x86_64-linux-musl.so[14a154e4d000+32a000]

     I have attached screenshots of:
     * unraid logs
     * sickchill docker settings
     * sickchill logs

     I am assuming it's probably some silly config setting I am neglecting, but I just can't figure it out. It would be great if someone could point me in the right direction. Cheers, Jaso
  3. Just wanted to do two things:

     1. A big thank you to itimpi for the unRAIDFindDuplicates script. I have had a few copy/move errors over the last decade and itimpi's script just found nearly 400GB of dupes scattered over my 42TB unraid array.
     2. I banged together a little script that looks at the output of itimpi's script, and deletes the dupes.

     Note that you must do a bit of cleaning of itimpi's output file first - delete everything except the file paths. That is, remove the lines at the beginning of duplicates.txt that look like this (also delete file size warnings, and the lines for files associated with the warnings):

     COMMAND USED: ./unRAIDFindDuplicates.sh
     Duplicate Files
     ---------------

     Here is my script - I called it 'delete-dupes.sh'. Execute it like this: bash ./delete-dupes.sh '/boot/duplicates.txt'

     #!/bin/bash
     # Check if the file exists
     if [ ! -f "$1" ]; then
         echo "File not found!"
         exit 1
     fi
     # Read the file line by line
     while IFS= read -r line; do
         # Check if the line is empty
         if [ -n "$line" ]; then
             # Prepend "/mnt/user/" to the line
             path="/mnt/user/$line"
             # Delete the file path
             rm -v "$path"
         fi
     done < "$1"

     Be careful. If you execute the delete-dupes script twice in a row it will delete the remaining (now unique) files. I had thousands of files that were duplicated; without the script I would have been manually deleting duplicate files for weeks. Thanks again itimpi!
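If you want a safety net against that run-it-twice footgun, the same loop can be wrapped in a hypothetical dry-run function (names are mine, not from the post) that only prints until you explicitly ask it to delete:

```shell
# Hypothetical dry-run variant of delete-dupes.sh: prints the paths it
# would remove; pass "delete" as the second argument to actually delete.
delete_dupes() {
    list="$1"
    mode="${2:-dry-run}"
    if [ ! -f "$list" ]; then
        echo "File not found!" >&2
        return 1
    fi
    while IFS= read -r line; do
        # Skip empty lines, prepend the share root as in the original
        [ -z "$line" ] && continue
        path="/mnt/user/$line"
        if [ "$mode" = "delete" ]; then
            rm -v -- "$path"
        else
            echo "would delete: $path"
        fi
    done < "$list"
}
```

Run `delete_dupes /boot/duplicates.txt` first, eyeball the output, then run `delete_dupes /boot/duplicates.txt delete` once you're happy.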
  4. Sickchill died for me recently after updating to the latest docker version of sickchill via the unraid docker screen "check for updates" button. I am on Unraid 6.9.1. Sickchill container ID: 46be26380d7a. I have restarted the sickchill docker a few times but no joy. Any ideas?

     Edit: I also tried editing the docker config to run sickchill in "Privileged" mode. Still no joy.

     There are a few errors appearing when I view the logs:

     warnings.warn("urllib3 ({}) or chardet ({})/charset_normalizer ({}) doesn't match a supported "
     Checking poetry sickchill installed: True
     /usr/lib/python3.9/site-packages/requests/__init__.py:102: RequestsDependencyWarning: urllib3 (1.26.7) or chardet (5.0.0)/charset_normalizer (2.0.7) doesn't match a supported version!
     warnings.warn("urllib3 ({}) or chardet ({})/charset_normalizer ({}) doesn't match a supported "
     Traceback (most recent call last):
       File "/usr/bin/SickChill", line 8, in <module>
         sys.exit(main())
       File "/usr/lib/python3.9/site-packages/SickChill.py", line 345, in main
         SickChill().start()
       File "/usr/lib/python3.9/site-packages/SickChill.py", line 90, in start
         settings.DATA_DIR = choose_data_dir(settings.PROG_DIR)
       File "/usr/lib/python3.9/site-packages/sickchill/helper/common.py", line 404, in choose_data_dir
         if location.joinpath(check).exists():
       File "/usr/lib/python3.9/pathlib.py", line 1424, in exists
         self.stat()
       File "/usr/lib/python3.9/pathlib.py", line 1232, in stat
         return self._accessor.stat(self)
     PermissionError: [Errno 13] Permission denied: '/root/sickchill/sickbeard.db'

     EDIT 2: A new image of sickchill was released [Image ID: 357610432]. Pulled new image. Everything fixed.
  5. Here's a snapshot of my unRaid network settings. Is yours different?:
  6. My understanding of the root cause is that <something> is impeding your unraid from getting packets to/from the servers that host the docker images/updates. That something could be mis-configuration of routes, your ISP playing silly-buggers, lots of things. To take your ISP out of the equation (at least a little bit) you could change the DNS settings from the ISP's default to something like Google DNS:

     IPv4 DNS server: 8.8.8.8
     IPv4 DNS server 2: 8.8.4.4

     Or, OpenDNS:

     IPv4 DNS server: 208.67.222.222
     IPv4 DNS server 2: 208.67.220.220

     You can make this change just for unraid: Settings > Network settings > IPv4 DNS server and IPv4 DNS server 2. Alternatively, you can update the DNS servers at the router and make the change for your entire household network. When I had the same issue that you are facing, I found that option C from the previous post, in combination with using OpenDNS as my DNS servers (on both router and unraid), fixed my problem. Good luck.

     If this doesn't work, I'd suggest the following to narrow down the root cause: Shut down *everything* else in your network and try again with just unraid and the router. Log on to the unraid local UI and see if you can get it to update docker images. If still no good, I'd build a second unraid box for testing purposes to see if it can host a docker image and get updates. If the second unraid works AOK you are looking at a FUBAR config somewhere in your original unraid... HTH, Jaso
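A quick sanity check before changing anything (this assumes `nslookup` is available on the box, which it is on Unraid): ask each candidate resolver directly for a host the Docker update check needs to reach.

```shell
# Query each public resolver directly for Docker's registry hostname.
# A resolver that answers here rules itself out as the broken link.
check_resolvers() {
    for dns in 8.8.8.8 8.8.4.4 208.67.222.222 208.67.220.220; do
        if nslookup registry-1.docker.io "$dns" >/dev/null 2>&1; then
            echo "$dns OK"
        else
            echo "$dns FAILED"
        fi
    done
}
check_resolvers
```

If the public resolvers answer but your ISP's default doesn't, switching DNS as described above is very likely to fix it.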
  7. Regarding the router troubleshooting. When you say 'reset my router' - did you: a) turn the router off and on again b) reset the router back to factory settings c) reset the router back to factory settings AND update to latest router firmware I found that c) was what worked for me. There is one other thing to try, but if you can confirm the above questions we can go from there. Cheers, Jaso
  8. My router was not playing nice. Not exactly sure what I did to make it fail, but a firmware upgrade to latest and a factory reset did the trick. Everything AOK now.
  9. Solved. My router was not playing nice. Not exactly sure what I did to make it fail, but a firmware upgrade to latest and a factory reset did the trick.
  10. So it gets even weirder - I checked Settings --> Fix Common Problems and got this: Maybe I just need to reset my router. Maybe back to factory condition and set it up again. Sigh. (Note that I am not seeing any symptoms of a misbehaving router on the 30+ other devices on the network)
  11. ooh. I am getting this too (just posted here). Now that you mention it, it started just after I updated community apps. Mmm, suspicious. I wonder if it's a weird combination of apps/dockers/plugins/configs not playing nicely?
  12. Hi everyone, I am getting this error whenever I check for docker updates:

     Jun 22 20:52:37 Tower nginx: 2020/06/22 20:52:37 [error] 8724#8724: *3066436 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 10.10.0.12, server: , request: "POST /plugins/dynamix.docker.manager/include/DockerUpdate.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock", host: "10.10.0.3", referrer: "http://10.10.0.3/Docker"

     It stays in "checking" for ages, then reverts to orange text "not available".

     P.S. I am also getting a weird error in the "transmission VPN" docker. Do you think these could be related? (It dumps the connection after one minute, tries to reconnect, dumps the connection after one minute). I haven't changed any configs for docker overall or the transmission_VPN docker.

     Mon Jun 22 16:47:43 2020 [UNDEF] Inactivity timeout (--ping-exit), exiting

     Both errors started at the same time. A few days ago. Kind Regards, Jason
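A rough way to separate "Docker manager plugin problem" from "network problem" (this assumes `curl` is available): the update check ultimately talks HTTPS to Docker's registry, so a hang or timeout here points at DNS/routing rather than at the plugin.

```shell
# Probe Docker's registry endpoint with a hard 10-second timeout.
# Any HTTP status (even 401, which is normal for unauthenticated
# requests) means DNS + TLS + routing all worked; "unreachable" or a
# long stall mirrors the upstream timeout in the nginx error above.
probe_registry() {
    curl -sS -o /dev/null -m 10 -w '%{http_code}\n' https://registry-1.docker.io/v2/ \
        || echo "unreachable"
}
probe_registry
```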
  13. Yep - it seems to have been a cabling issue. I recall moving the box around when I was diagnosing the raid card fault. After checking each sata cable and power cable and turning it back on (and being super gentle sliding the box back into place) everything was AOK.
  14. I've got the diags zip from before I shut it down and after I turned it on again. I've also uploaded an image of a notification I got saying "Array turned good". It seems that the array is pretty happy with itself...but I haven't had the balls to actually start the array yet - maintenance mode or normal. Cheers, Jaso tower-diagnostics-20200607-1502 BEFORE.zip tower-diagnostics-20200607-2215 AFTER REBOOT.zip
  15. @johnnie.black I've attached the syslog. Do you need any of the other diag files? Cheers - much appreciated. syslog.2.txt
  16. So 4 days ago one of my raid cards died. (Topic thread here FYI). It looks as if some bad juju is going down at my place, because one of the disks that I moved to another slot is reporting ReiserFS errors. It looks like I have to do a reiserfsck as doco'd here and here. My understanding is that I should be in maintenance mode, and do the reiserfsck on the managed disk (i.e. /mnt/mdX) as that will maintain parity. My question: is this an OK action to take while I have a disk being emulated? (I am still waiting for the replacement raid card to arrive via mail). For now I have shut the device down and am reading as much as I can to ensure I don't make things worse. I had to do a hard shutdown as the file system errors were blocking a graceful shutdown.

     Lots of these in my syslog:

     Jun 5 19:55:20 Tower kernel: REISERFS error (device md4): zam-7001 reiserfs_find_entry: io error

     and these:

     Jun 5 20:55:11 Tower kernel: sd 3:0:0:0: [sdg] tag#10 UNKNOWN(0x2003) Result: hostbyte=0x04 driverbyte=0x00
     Jun 5 20:55:11 Tower kernel: sd 3:0:0:0: [sdg] tag#10 CDB: opcode=0x85 85 06 20 00 00 00 00 00 00 00 00 00 00 40 e0 00
     Jun 5 20:55:11 Tower kernel: md: do_drive_cmd: disk4: ATA_OP e0 ioctl error: -5
     Jun 5 20:55:12 Tower emhttpd: error: mdcmd, 2723: Input/output error (5): write

     Reading was mostly working (MC had a few issues as I was navigating around the disk...), but I could watch some vids and view some pics on that drive. Just couldn't write to that disk. Kind Regards, Jaso
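The maintenance-mode check described above can be sketched like this (the device name matches this post's disk4; adjust to your own disk number):

```shell
# Check the parity-protected md device, NOT the raw /dev/sdX device -
# going through /dev/mdX is what keeps parity in sync. --check is a
# read-only pass; review its report before escalating to --fix-fixable.
DEV=/dev/md4
if [ -b "$DEV" ]; then
    reiserfsck --check "$DEV"
else
    echo "$DEV not present - start the array in maintenance mode first"
fi
```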
  17. Figured out what the problem was. Dead RAID card. /mnt/disk6 and /mnt/cache were both being served by a generic 2x sata card. It just up and died after 5 years of top-notch service. Will have to wait for a few days for a new raid card to arrive. In the meantime I've moved my cache ssd to another sata slot, and /mnt/disk6 is being emulated for now... Cheers, jaso
  18. My unraid server had some trouble earlier:

     Unraid Cache disk message: 03-06-2020 19:06 Warning [TOWER] - Cache pool BTRFS missing device(s) Samsung_SSD_860_EVO_500GB_S4BENG0KC05104W (sdg)
     Unraid Disk 6 error: 03-06-2020 19:07 Alert [TOWER] - Disk 6 in error state (disk dsbl) WDC_WD40EZRX-00SPEB0_WD-WCC4E52UR3RJ (sdh)
     Unraid array errors: 03-06-2020 19:07 Warning [TOWER] - array has errors Array has 1 disk with read errors

     I used Tools > Diagnostics > Download to grab all the logs and config. Then I thought I'd shut down the array to do some troubleshooting. Unfortunately I am now stuck in a constant loop of "Array Stopping • Retry unmounting disk share(s)...".

     From the syslog:

     Jun 3 17:55:35 Tower kernel: mdcmd (772): spindown 6
     Jun 3 19:05:39 Tower kernel: ata5.00: exception Emask 0x52 SAct 0xfc0 SErr 0xffffffff action 0xe frozen
     Jun 3 19:05:39 Tower kernel: ata5: SError: { RecovData RecovComm UnrecovData Persist Proto HostInt PHYRdyChg PHYInt CommWake 10B8B Dispar BadCRC Handshk LinkSeq TrStaTrns UnrecFIS DevExch }
     Jun 3 19:05:39 Tower kernel: ata5.00: failed command: READ FPDMA QUEUED
     Jun 3 19:05:39 Tower kernel: ata5.00: cmd 60/20:30:40:d9:08/00:00:16:00:00/40 tag 6 ncq dma 16384 in
     Jun 3 19:05:39 Tower kernel: res 40/00:00:00:4f:c2/00:00:00:00:00/00 Emask 0x56 (ATA bus error)
     Jun 3 19:05:39 Tower kernel: ata5.00: status: { DRDY }
     Jun 3 19:05:39 Tower kernel: ata5.00: failed command: READ FPDMA QUEUED
     Jun 3 19:05:39 Tower kernel: ata5.00: cmd 60/08:38:d8:b5:6c/00:00:04:00:00/40 tag 7 ncq dma 4096 in
     Jun 3 19:05:39 Tower kernel: res 40/00:ff:00:00:00/00:00:00:00:00/00 Emask 0x56 (ATA bus error)
     Jun 3 19:05:39 Tower kernel: ata5.00: status: { DRDY }

     then a bit later in the syslog:

     Jun 3 19:05:39 Tower kernel: ata5.00: status: { DRDY }
     Jun 3 19:05:39 Tower kernel: ata5: hard resetting link
     Jun 3 19:05:39 Tower kernel: ahci 0000:02:00.0: AHCI controller unavailable!
     Jun 3 19:05:40 Tower kernel: ata5: failed to resume link (SControl FFFFFFFF)
     Jun 3 19:05:40 Tower kernel: ata5: SATA link down (SStatus FFFFFFFF SControl FFFFFFFF)
     Jun 3 19:05:46 Tower kernel: ata5: hard resetting link
     Jun 3 19:05:46 Tower kernel: ahci 0000:02:00.0: AHCI controller unavailable!
     Jun 3 19:05:47 Tower kernel: ata5: failed to resume link (SControl FFFFFFFF)
     Jun 3 19:05:47 Tower kernel: ata5: SATA link down (SStatus FFFFFFFF SControl FFFFFFFF)
     Jun 3 19:05:47 Tower kernel: ata5: limiting SATA link speed to <unknown>
     Jun 3 19:05:52 Tower kernel: ata5: hard resetting link
     Jun 3 19:05:52 Tower kernel: ahci 0000:02:00.0: AHCI controller unavailable!
     Jun 3 19:05:53 Tower kernel: ata5: failed to resume link (SControl FFFFFFFF)
     Jun 3 19:05:53 Tower kernel: ata5: SATA link down (SStatus FFFFFFFF SControl FFFFFFFF)
     Jun 3 19:05:53 Tower kernel: ata5.00: disabled
     Jun 3 19:05:53 Tower kernel: ahci 0000:02:00.0: AHCI controller unavailable!
     Jun 3 19:05:53 Tower kernel: sd 4:0:0:0: [sdg] tag#6 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
     Jun 3 19:05:53 Tower kernel: sd 4:0:0:0: [sdg] tag#6 Sense Key : 0x5 [current]
     Jun 3 19:05:53 Tower kernel: sd 4:0:0:0: [sdg] tag#6 ASC=0x21 ASCQ=0x4
     Jun 3 19:05:53 Tower kernel: sd 4:0:0:0: [sdg] tag#6 CDB: opcode=0x28 28 00 16 08 d9 40 00 00 20 00
     Jun 3 19:05:53 Tower kernel: print_req_error: I/O error, dev sdg, sector 369678656
     Jun 3 19:05:53 Tower kernel: BTRFS error (device sdg1): bdev /dev/sdg1 errs: wr 0, rd 1, flush 0, corrupt 0, gen 0
     Jun 3 19:05:53 Tower kernel: sd 4:0:0:0: [sdg] tag#7 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
     Jun 3 19:05:53 Tower kernel: sd 4:0:0:0: [sdg] tag#7 Sense Key : 0x5 [current]
     Jun 3 19:05:53 Tower kernel: sd 4:0:0:0: [sdg] tag#7 ASC=0x21 ASCQ=0x4
     Jun 3 19:05:53 Tower kernel: sd 4:0:0:0: [sdg] tag#7 CDB: opcode=0x28 28 00 04 6c b5 d8 00 00 08 00
     Jun 3 19:05:53 Tower kernel: print_req_error: I/O error, dev sdg, sector 74233304
     Jun 3 19:05:53 Tower kernel: BTRFS error (device sdg1): bdev /dev/sdg1 errs: wr 0, rd 2, flush 0, corrupt 0, gen 0
     Jun 3 19:05:53 Tower kernel: sd 4:0:0:0: [sdg] tag#8 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
     Jun 3 19:05:53 Tower kernel: sd 4:0:0:0: [sdg] tag#8 Sense Key : 0x5 [current]
     Jun 3 19:05:53 Tower kernel: sd 4:0:0:0: [sdg] tag#8 ASC=0x21 ASCQ=0x4
     Jun 3 19:05:53 Tower kernel: sd 4:0:0:0: [sdg] tag#8 CDB: opcode=0x28 28 00 05 20 56 88 00 00 08 00
     Jun 3 19:05:53 Tower kernel: print_req_error: I/O error, dev sdg, sector 86005384

     and then a little bit later:

     Jun 3 19:05:53 Tower kernel: sd 4:0:0:0: [sdg] tag#11 CDB: opcode=0x2a 2a 00 01 de 5e 08 00 02 00 00
     Jun 3 19:05:53 Tower kernel: print_req_error: I/O error, dev sdg, sector 31350280
     Jun 3 19:05:53 Tower kernel: BTRFS error (device sdg1): bdev /dev/sdg1 errs: wr 3, rd 2, flush 0, corrupt 0, gen 0
     Jun 3 19:05:53 Tower kernel: ata5: EH complete
     Jun 3 19:05:53 Tower kernel: sd 4:0:0:0: rejecting I/O to offline device
     Jun 3 19:05:53 Tower kernel: print_req_error: I/O error, dev sdg, sector 86005384
     Jun 3 19:05:53 Tower kernel: BTRFS error (device sdg1): bdev /dev/sdg1 errs: wr 3, rd 3, flush 0, corrupt 0, gen 0
     Jun 3 19:05:53 Tower kernel: sd 4:0:0:0: rejecting I/O to offline device
     Jun 3 19:05:53 Tower kernel: print_req_error: I/O error, dev sdg, sector 75279920
     Jun 3 19:05:53 Tower kernel: BTRFS error (device sdg1): bdev /dev/sdg1 errs: wr 4, rd 3, flush 0, corrupt 0, gen 0
     Jun 3 19:05:53 Tower kernel: ata5.00: detaching (SCSI 4:0:0:0)
     Jun 3 19:05:53 Tower kernel: print_req_error: I/O error, dev sdg, sector 75281280
     Jun 3 19:05:53 Tower kernel: BTRFS error (device sdg1): bdev /dev/sdg1 errs: wr 5, rd 3, flush 0, corrupt 0, gen 0
     Jun 3 19:05:53 Tower kernel: print_req_error: I/O error, dev sdg, sector 27140976
     Jun 3 19:05:53 Tower kernel: BTRFS error (device sdg1): bdev /dev/sdg1 errs: wr 6, rd 3, flush 0, corrupt 0, gen 0
     Jun 3 19:05:53 Tower kernel: BTRFS error (device sdg1): bdev /dev/sdg1 errs: wr 7, rd 3, flush 0, corrupt 0, gen 0
     Jun 3 19:05:53 Tower kernel: BTRFS: error (device sdg1) in btrfs_commit_transaction:2267: errno=-5 IO failure (Error while writing out transaction)
     Jun 3 19:05:53 Tower kernel: BTRFS info (device sdg1): forced readonly
     Jun 3 19:05:53 Tower kernel: BTRFS warning (device sdg1): Skipping commit of aborted transaction.
     Jun 3 19:05:53 Tower kernel: BTRFS: error (device sdg1) in cleanup_transaction:1860: errno=-5 IO failure
     Jun 3 19:05:53 Tower kernel: BTRFS info (device sdg1): delayed_refs has NO entry
     Jun 3 19:05:53 Tower kernel: loop: Write error at byte offset 14237696, length 4096.
     Jun 3 19:05:53 Tower kernel: loop: Write error at byte offset 20107264, length 4096.
     Jun 3 19:05:53 Tower kernel: loop: Write error at byte offset 2207744000, length 4096.
     Jun 3 19:05:53 Tower kernel: BTRFS warning (device loop2): chunk 13631488 missing 1 devices, max tolerance is 0 for writeable mount
     Jun 3 19:05:53 Tower kernel: BTRFS: error (device loop2) in write_all_supers:3716: errno=-5 IO failure (errors while submitting device barriers.)

     I grabbed the syslog again, in an attempt to see what was causing the "unmounting loop":

     Jun 3 20:06:47 Tower kernel: print_req_error: I/O error, dev loop2, sector 2969408
     Jun 3 20:06:50 Tower emhttpd: Unmounting disks...
     Jun 3 20:06:50 Tower emhttpd: shcmd (91679): umount /mnt/disk4
     Jun 3 20:06:50 Tower root: umount: /mnt/disk4: target is busy.
     Jun 3 20:06:50 Tower emhttpd: shcmd (91679): exit status: 32
     Jun 3 20:06:50 Tower emhttpd: shcmd (91680): umount /mnt/cache
     Jun 3 20:06:50 Tower root: umount: /mnt/cache: target is busy.
     Jun 3 20:06:50 Tower emhttpd: shcmd (91680): exit status: 32
     Jun 3 20:06:50 Tower emhttpd: Retry unmounting disk share(s)...
     Jun 3 20:06:52 Tower kernel: btrfs_dev_stat_print_on_error: 110 callbacks suppressed
     Jun 3 20:06:52 Tower kernel: BTRFS error (device sdg1): bdev /dev/sdg1 errs: wr 42, rd 38010, flush 0, corrupt 0, gen 0

     I'd prefer a graceful shutdown rather than a hard restart. Anyone got any ideas how to unmount disk4 and my cache? Kind Regards, jaso
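For a "target is busy" loop like the one above, the usual first step is to find what still has the mounts open. A sketch (fuser and lsof ship with Unraid; loop2 in the log is typically docker.img, so stopping the Docker service often releases the mount by itself):

```shell
# List processes holding each stuck mount point open.
show_holders() {
    mnt="$1"
    echo "--- $mnt ---"
    fuser -vm "$mnt" 2>/dev/null || echo "fuser: nothing reported (or not mounted here)"
    lsof +f -- "$mnt" 2>/dev/null | head -n 20
}
show_holders /mnt/disk4
show_holders /mnt/cache
# last resort before a hard reset (detaches now, cleans up when free):
# umount -l /mnt/disk4
```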
  19. The jQuery web UI is a bit too fancy for my ebook reader (Kobo Aura H2O). I can see some bits of the UI quite clearly, but the bright blue bar that contains the downloads link renders as a grey blob on the Kobo, and I can't actually initiate a download to my e-reader. Does anyone know if there is a simpler, old-fashioned URL I could hit to view a much simpler version of the UI?
  20. FIXED: I had to a) stop calibre-web; b) delete the app.db file; and c) restart calibre-web.

     I installed Calibre-Web after being informed by Fix-Common-Problems that my previous Calibre docker was deprecated. I can't log in to the Calibre-Web server. When I try the default admin/admin123 I get the message "Wrong Username or Password". Not sure how to proceed. I assume it's a PEBCAK error, but I can't see where I've gone wrong. I thought I'd be able to update the secrets file, but I'm not sure what to put in there, or in what format. It's currently empty:

     root@1e58b165290d:/config# cat client_secrets.json
     {}
     root@1e58b165290d:/config#

     FYI I am running UnRaid 6.8.2. Here is a pic of the login screen:
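The three-step fix above, as a helper sketch (the container name and appdata path are assumptions based on the linuxserver template; match them to your own setup before calling it):

```shell
# Stop the container, move app.db aside (back it up rather than delete),
# and restart; calibre-web then recreates a fresh app.db, and the
# default admin/admin123 login works again.
reset_calibre_web_login() {
    appdata="${1:-/mnt/user/appdata/calibre-web}"   # assumed appdata path
    docker stop calibre-web &&
        mv "$appdata/app.db" "$appdata/app.db.bak" &&
        docker start calibre-web
}
```

Defined here but not run; call `reset_calibre_web_login` on the Unraid host when you're ready.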
  21. Oh. My. God. What the?! I did not know that was a thing. I futzed around for hours yesterday re-installing and re-configuring just one relatively uncomplicated docker. After reading your post and poking around for a bit I found CA's "Previous Apps" section. (It's kind-of hidden away - I've never explored that area before). Anyway, I just installed my remaining dockers at the click of a few buttons. Saved me hours of time trying to match up weird configs I came up with years ago. Thanks folks. I am a seriously happy Unraid user. Cheers, Jaso
  22. Thanks Squid, that did the trick. (After another restart all my docker images have disappeared, but I figure I can rebuild them and keep the data if I get the config the same as before.)
  23. Hi, I recently upgraded my cache drive from a 250GB SSD to a 500GB SSD, following the guide here --> https://wiki.unraid.net/Replace_A_Cache_Drive

     I configured the new drive to be formatted using BTRFS - the previous drive was XFS. Everything was going fine. Array started AOK. VMs up and running, but I got a series of errors about the cache drive:

     Your existing Docker image file needs to be recreated due to an issue from an earlier beta of Unraid 6. Failure to do so may result in your docker image suffering corruption at a later time. Please do this NOW
     Docker service failed to start
     Unraid unable to write to docker image

     Here are the things I've tried:
     * I deleted the docker image and restarted docker. No joy. (Docker service failed to start)
     * I moved the location of my docker.img from cache\Apps\Docker\docker.img to cache\appdata\Docker\docker.img. No joy. (Docker service failed to start)
     * I increased the size of my docker image from 15GB to 30GB. No joy. (Docker service failed to start)
     * I did a full stop array/restart machine. No joy. (Docker service failed to start)

     Really not sure what to do any more. Any assistance would be appreciated. I've attached a diagnostics file in case you can glean anything from that. Regards, Jaso tower-diagnostics-20190201-0659.zip
  24. I am running 6.1.3. Same thing happening with me. Lost webgui and access to file shares. Most container-based (docker) systems still work aok. Has happened maybe once every 6 to 8 weeks.

     Containers:
     * Calibre-server (not responding)
     * CrashPlan (unsure)
     * SABnzbd (was AOK)
     * SickBeard (was AOK)
     * Transmission (was AOK)

     Had to do a hard reset to get it back, which took ~10 minutes before the array started. The only plug-in I have is OpenVPN Client. I just noticed that there is an update for OpenVPN Client: 2015.10.11 --> 2015.12.23. Also, as I am using Unraid 6.1.3, I will update to Unraid 6.1.7 as soon as the parity check completes. Only posting in case it helps someone diagnose what is happening.