rtho782

Everything posted by rtho782

  1. I think this is fine, just don't announce you're being acquired by Broadcom next week!!
  2. I'm also getting permissions errors. I've deleted and recreated the docker and appdata folders to no avail.
     Warning: Failed to create the file serverinstall_100_2298: Permission denied
     curl: (23) Failure writing output to destination
     + chmod +x serverinstall_100_2298
     chmod: cannot access 'serverinstall_100_2298': No such file or directory
     + curl -o server-icon.png 'https://www.feed-the-beast.com/_next/image?url=https%3A%2F%2Fapps.modpacks.ch%2Fmodpacks%2Fart%2F96%2Fstoneblock_logo.png&w=64&q=75'
     Warning: Failed to create the file server-icon.png: Permission denied
     curl: (23) Failure writing output to destination
     + ./serverinstall_100_2298 --path /data --auto
     /launch.sh: line 13: ./serverinstall_100_2298: No such file or directory
     + rm -f user_jvm_args.txt
     + [[ -n NAGP Server Powered by Printyplease ]]
     + sed -i '/motd\s*=/ c motd=NAGP Server Powered by Printyplease' /data/server.properties
     sed: can't read /data/server.properties: No such file or directory
     + [[ -n world ]]
     + sed -i '/level-name\s*=/ c level-name=world' /data/server.properties
     sed: can't read /data/server.properties: No such file or directory
     + [[ -n ChairmanMeow782 ]]
     + echo ChairmanMeow782
     + awk -v RS=, '{print}'
     /launch.sh: line 24: ops.txt: Permission denied
     + [[ true = \f\a\l\s\e ]]
     + echo eula=true
     /launch.sh: line 27: eula.txt: Permission denied
     + [[ -n -Xms6144m -Xmx14336m ]]
     + echo -Xms6144m -Xmx14336m
     + awk -v RS=, '{print}'
     /launch.sh: line 30: user_jvm_args.txt: Permission denied
     + curl -o log4j2_112-116.xml https://launcher.mojang.com/v1/objects/02937d122c86ce73319ef9975b58896fc1b491d1/log4j2_112-116.xml
     Warning: Failed to create the file log4j2_112-116.xml: Permission denied
     curl: (23) Failure writing output to destination
     + chmod +x start.sh
     chmod: cannot access 'start.sh': No such file or directory
     + ./start.sh
     /launch.sh: line 35: ./start.sh: No such file or directory
     + cd /data
     + [[ -f serverinstall_100_2298 ]]
     + rm -f 'serverinstall*' 'forge-*.jar'
     + curl -o serverinstall_100_2298 https://api.modpacks.ch/public/modpack/100/2298/server/linux
     Warning: Failed to create the file serverinstall_100_2298: Permission denied
     curl: (23) Failure writing output to destination
     [the container then retries and the same sequence of permission-denied errors repeats]
     ** Press ANY KEY to close this window **
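     (For anyone hitting the same wall: every line above is the container failing to write inside /data, so a rough first step is fixing ownership on the host folder mapped to /data. This is only a sketch: /mnt/user/appdata/ftb-stoneblock is a made-up path, and 99:100 is Unraid's usual nobody:users default, so adjust both to match your own template.)
        chown -R 99:100 /mnt/user/appdata/ftb-stoneblock      # placeholder path; use the folder your template maps to /data
        chmod -R u+rwX,g+rwX /mnt/user/appdata/ftb-stoneblock
        # then restart the container so /launch.sh can actually write its files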
  3. This also happened when I updated to 6.10.2, so it seems Plex not working is somewhat common?
  4. Diagnostics attached. I deleted docker.img and reinstalled all my Docker applications. All work fine except Plex, which doesn't ever come up. The log doesn't seem to have any errors:
     [s6-init] making user provided files available at /var/run/s6/etc...exited 0.
     [s6-init] ensuring user provided files have correct perms...exited 0.
     [fix-attrs.d] applying ownership & permissions fixes...
     [fix-attrs.d] done.
     [cont-init.d] executing container initialization scripts...
     [cont-init.d] 40-plex-first-run: executing...
     Plex Media Server first run setup complete
     [cont-init.d] 40-plex-first-run: exited 0.
     [cont-init.d] 45-plex-hw-transcode-and-connected-tuner: executing...
     [cont-init.d] 45-plex-hw-transcode-and-connected-tuner: exited 0.
     [cont-init.d] 50-plex-update: executing...
     [cont-init.d] 50-plex-update: exited 0.
     [cont-init.d] done.
     [services.d] starting services
     Starting Plex Media Server.
     [services.d] done.
     And yet it doesn't work at all. Any ideas? tacgnol-core-diagnostics-20220707-0706.zip
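     (A rough way to check whether the server ever starts listening, assuming the container is simply named "plex", uses host networking, and sits on the default port 32400; all of those depend on your template, so treat this as a sketch.)
        docker ps --filter name=plex                                 # is the container actually running?
        docker logs --tail 50 plex                                   # anything after "Starting Plex Media Server."?
        curl -sI http://localhost:32400/web/index.html | head -n 1   # a live server should answer with an HTTP status line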
  5. One of my cache drives occasionally drops out. I don't quite know why, but it's happened twice in a 3 month period. Last time I replaced cables etc. The cache is mirrored. Each time the drive comes back if I reboot the system, but I have to remove it from the array, balance the cache to single drive, then add it back and balance back to RAID1. This process scares me. I'd like to throw a 3rd drive in, and not use RAID5 (as the FAQ cautions against this) but instead have 3 copies. Is this possible? If not in the UI, in the CLI? Diagnostics attached in case they are relevant. tacgnol-core-diagnostics-20220530-2321.zip
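     (For reference, btrfs does have a three-copy profile, raid1c3, on kernels 5.5 and newer. A hedged CLI sketch only, assuming the pool is mounted at /mnt/cache and /dev/sdX1 stands in for the third drive's partition:)
        btrfs device add /dev/sdX1 /mnt/cache                                # add the third device to the pool
        btrfs balance start -dconvert=raid1c3 -mconvert=raid1c3 /mnt/cache   # convert data and metadata to three copies
        btrfs filesystem df /mnt/cache                                       # confirm the new profile took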
  6. I ended up pulling the "failed" drive, letting the pool rebalance to a single drive, then putting the "failed" drive back (with a new cable, obviously) and rebalancing to RAID1 again, as sketched below. I had to rebuild docker.img, but everything else seems fine.
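     (The same dance is possible from the CLI; a hedged sketch only, assuming the pool is mounted at /mnt/cache, the flaky device is already out of the pool, and /dev/sdX1 is a placeholder for it once it's back on the new cable:)
        btrfs balance start -f -dconvert=single -mconvert=single /mnt/cache   # -f is required when reducing metadata redundancy
        btrfs device add /dev/sdX1 /mnt/cache                                 # re-add the drive
        btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache        # convert back to two copies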
  7. Update: The cache seems to have gone read-only and won't fix itself. Scrub just aborts. What do? tacgnol-core-diagnostics-20220418-2014.zip
  8. I have two cache pools: one is effectively a write cache for the array and is striped; the other is where my appdata sits and is mirrored. I'm getting a lot of errors from the appdata cache filling up syslog:
     BTRFS warning (device sdb1): lost page write due to IO error on /dev/sdc1 (-5)
     BTRFS error (device sdb1): error writing primary super block to device 2
     I can see these errors from the btrfs command too:
     root@Tacgnol-Core:/var/log# btrfs dev stats /mnt/appdatacache/
     [/dev/sdb1].write_io_errs    0
     [/dev/sdb1].read_io_errs     0
     [/dev/sdb1].flush_io_errs    0
     [/dev/sdb1].corruption_errs  0
     [/dev/sdb1].generation_errs  0
     [/dev/sdc1].write_io_errs    21657204
     [/dev/sdc1].read_io_errs     4497345
     [/dev/sdc1].flush_io_errs    1108503
     [/dev/sdc1].corruption_errs  0
     [/dev/sdc1].generation_errs  0
     The other pool is fine. These two SSDs are relatively new, and I suspect it's a cable issue, which is nice and easy to replace, but how do I tell it to "rebuild" the array, or whatever it is going to need to do?
     Data, RAID1: total=313.00GiB, used=213.75GiB
     System, RAID1: total=32.00MiB, used=80.00KiB
     Metadata, RAID1: total=2.00GiB, used=1.67GiB
     GlobalReserve, single: total=387.48MiB, used=0.00B
     Diagnostics attached. tacgnol-core-diagnostics-20220418-1903.zip
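     (For anyone searching later: once the cable is swapped and the device is back online, the usual route is to zero the counters and run a scrub, which on a RAID1 pool rewrites the stale device from the good copy. A hedged sketch, assuming the pool stays mounted at /mnt/appdatacache:)
        btrfs dev stats -z /mnt/appdatacache     # reset the per-device error counters
        btrfs scrub start -B /mnt/appdatacache   # -B runs in the foreground; mismatches are repaired from the good mirror
        btrfs scrub status /mnt/appdatacache     # check afterwards for uncorrectable errors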
  9. worker process: ./nchan-1.2.7/src/store/spool.c:479: spool_fetch_msg: Assertion `spool->msg_status == MSG_INVALID' failed.
     2021/12/11 22:50:04 [alert] 22207#22207: worker process 21417 exited on signal 6
     worker process: ./nchan-1.2.7/src/store/spool.c:479: spool_fetch_msg: Assertion `spool->msg_status == MSG_INVALID' failed.
     2021/12/11 22:50:05 [alert] 22207#22207: worker process 21456 exited on signal 6
     worker process: ./nchan-1.2.7/src/store/spool.c:479: spool_fetch_msg: Assertion `spool->msg_status == MSG_INVALID' failed.
     2021/12/11 22:50:06 [alert] 22207#22207: worker process 21501 exited on signal 6
     [... the same assertion / signal-6 pair repeats every second or two, with a different worker PID each time, through ...]
     worker process: ./nchan-1.2.7/src/store/spool.c:479: spool_fetch_msg: Assertion `spool->msg_status == MSG_INVALID' failed.
     2021/12/11 22:50:52 [alert] 22207#22207: worker process 24710 exited on signal 6
     This seems to be spamming my nginx/error.log
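     (As I understand it, these assertions come from the nchan module the Unraid webGUI's nginx uses for live page updates. The usual stop-gap is to empty the log before /var/log fills and restart nginx; a hedged sketch, assuming the stock Unraid rc script location:)
        du -h /var/log/nginx/error.log           # see how much of the log tmpfs it has eaten
        truncate -s 0 /var/log/nginx/error.log   # empty the file in place
        /etc/rc.d/rc.nginx restart               # restart the webGUI's nginx (path assumed from stock Unraid)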
  10. Spamming it got me there. tacgnol-core-diagnostics-20211211-1930.zip
  11. I'll step back to symptoms: about 50% of the time, UI pages won't load; just the banner appears with nothing below it. I can't manage to successfully download a diagnostics file because of this. Docker containers also constantly become inaccessible at random.
  12. What are the processes below that are eating all of my CPU?
  13. Yes, it seems it does. The fact that, after many years and many topics on this subject, Unraid cannot automatically kill whatever is preventing a umount is an absolute joke. I guess I have two days of throttled IO and no parity protection ahead of me.
  14. Will this mean a full parity check?
  15. lsof | grep /mnt finds nothing. who finds only the terminal I'm looking at, sitting at ~.
  16. Nope, only one session connected and it's at ~
  17. Diagnostics attached. I don't think it was that in the end; I managed to remove the plugin, but it still won't unmount. I can't see any open files... tacgnol-core-diagnostics-20211207-2304.zip
  18. I can't stop my array (despite stopping all Dockers and VMs first), seemingly because the unraid.net My Servers plugin is trying to run a flash backup. This is preventing Unraid from unmounting the disks. I have tried to uninstall the plugin, but it won't uninstall, I assume because it's busy doing stuff. How do I kill it? Preferably with fire.
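     (For the record, the blunt way to see what still has the disks open, sketched with the standard lsof/fuser tools and /mnt/user as the example mount point; adjust to whichever mount refuses to unmount:)
        lsof /mnt/user 2>/dev/null   # given a mount point, lsof lists every open file on that filesystem
        fuser -vm /mnt/user          # list the processes using the mount
        # last resort, this kills those processes outright:
        # fuser -kvm /mnt/user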
  19. Standard SAS connectors. For me, I have an LSI 9201-16e driving a NetApp DS4246 disk shelf, which is a SAS shelf; both SATA and SAS drives connect into it.
  20. Disk Errors
      I have done so, but it shows no errors.
  21. Haha, honestly these 12TB SAS drives show the same performance as the WD Golds in my array (much better than the shucked 12TB "Red" drives) and were £185 each, so about the same as a 12TB external drive when they come on offer via Amazon. Going forward, all my new drives will be SAS. If you want SAS drives for testing, I have a glut of 450GB 15k rpm SAS drives that I have been slowly throwing in the bin, as they are utterly useless to me due to their tiny size. I'm sure the seek times are awesome at 15k, but even the sequential reads are not great due to the low density.
  22. Disk Errors
      I have one disk, an older 4TB WD Black, in my array with read errors. There were three of these drives in the array; this one and another had previously shown read errors, and the other also had reallocated sectors, so I swapped it out. The drive in question has no SMART data that would indicate a failure to me. Most of the drives in this array are in the same NetApp DS4243 disk shelf on the same LSI 9201-16e controller, so it's not cabling. The array is currently in a parity rebuild (but has double parity anyway), as I pulled an old 450GB SAS drive from it to put a 12TB SAS drive in its place. I do have another 12TB SAS drive precleared and ready to add to the array that I could swap the 4TB with, but I'm not sure if I'm being overly cautious. The last time I saw read errors was also during a parity rebuild. Are these read errors recovered errors or unrecovered errors? Diagnostics attached.
      https://imgur.com/MaXkvye (screenshot of the Main page)
      https://imgur.com/1ScIKew (drive-to-controller mappings; the SSDs on the LSI controller are not in the disk shelf, all the HDDs are)
      tacgnol-core-diagnostics-20210316-0710.zip
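      (One way to dig deeper than the dashboard counter is the drive's own error logs via smartctl; a hedged sketch, with /dev/sdX as a placeholder for the suspect drive. Behind an LSI HBA in IT mode the drives are usually addressed directly, so no -d option should be needed:)
        smartctl -x /dev/sdX        # full report: for SATA, check pending/reallocated sectors and the ATA error log
        smartctl -l error /dev/sdX  # the error log on its own (on SAS drives this is the corrected/uncorrected counter table)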
  23. Disregard my last post; I think it was because that drive was in the middle of preclearing and so didn't respond in a timely manner. It's working this morning, having finished its preclear. Edit: Missing an image for the V1 WD Red Pro; the V2s have images. I guess it should be the same image: https://imgur.com/Ygu6oYU
  24. Does this tool dislike SAS drives? It seems to run until it hits my first SAS drive, then freezes before eventually timing out:
     DiskSpeed - Disk Diagnostics & Reporting tool
     Version: 2.9.1
     Scanning Hardware
     20:51:34 Spinning up hard drives
     20:51:35 Scanning system storage
     20:51:35 Scanning USB Bus
     20:51:36 Scanning hard drives
     20:51:41 Scanning storage controllers
     20:51:43 Scanning USB hubs & devices
     20:51:45 Scanning motherboard resources
     20:51:47 Fetching known drive vendors from the Hard Drive Database
     20:51:49 Found controller SAS2116 PCI-Express Fusion-MPT SAS-2 [Meteor]
     20:51:50 Found drive Western Digital WD120EDAZ Rev: 81.00A81 Serial: 5PJY7WNB (sdaa), 1 partition
     20:51:51 Found drive Western Digital WD6002FFWX Rev: 83.H0A83 Serial: NCH3EUUZ (sdab), 1 partition
     20:51:51 Found drive Western Digital WD6002FFWX Rev: 83.H0A83 Serial: NCH386ZZ (sdac), 1 partition
     20:51:53 Found drive Western Digital WD6001FFWX Rev: 81.00A81 Serial: WXB1HB4JUJ9W (sdad), 1 partition
     20:51:53 Found drive Western Digital WD6002FFWX Rev: 83.H0A83 Serial: K1GR5MUD (sdae), 1 partition
     20:51:53 Found drive Western Digital WD6002FFWX Rev: 83.H0A83 Serial: NCH3875Z (sdaf), 1 partition
     20:51:56 Found drive HGST HUH721212AL4200 Rev: 0 Serial: AAG9TLXH (sdag), 0 partitions
     Lucee 5.2.9.31 Error (application)
     Message: timeout [90000 ms] expired while executing [/sbin/parted -m /dev/sdah unit B print free]
     Stacktrace: The Error Occurred in /var/www/ScanControllers.cfm: line 1706
     1704: <CFIF DriveID NEQ "">
     1705: <!--- Fetch partition information --->
     1706: <cfexecute name="/sbin/parted" arguments="-m /dev/#DriveID# unit B print free" variable="PartInfo" timeout="90" />
     1707: <CFFILE action="write" file="#PersistDir#/parted_#DriveID#.txt" output="#PartInfo#" addnewline="NO" mode="666">
     1708: <CFSET TotalPartitions=0>
     called from /var/www/ScanControllers.cfm: line 1635
     1633: </CFIF>
     1634: </CFLOOP>
     1635: </CFLOOP>
     1636:
     1637: <!--- Admin drive creation --->
     Java Stacktrace:
     lucee.runtime.exp.ApplicationException: timeout [90000 ms] expired while executing [/sbin/parted -m /dev/sdah unit B print free]
       at lucee.runtime.tag.Execute._execute(Execute.java:241)
       at lucee.runtime.tag.Execute.doEndTag(Execute.java:252)
       at scancontrollers_cfm$cf.call_000163(/ScanControllers.cfm:1706)
       at scancontrollers_cfm$cf.call(/ScanControllers.cfm:1635)
       at lucee.runtime.PageContextImpl._doInclude(PageContextImpl.java:933)
       at lucee.runtime.PageContextImpl._doInclude(PageContextImpl.java:823)
       at lucee.runtime.listener.ClassicAppListener._onRequest(ClassicAppListener.java:66)
       at lucee.runtime.listener.MixedAppListener.onRequest(MixedAppListener.java:45)
       at lucee.runtime.PageContextImpl.execute(PageContextImpl.java:2464)
       at lucee.runtime.PageContextImpl._execute(PageContextImpl.java:2454)
       at lucee.runtime.PageContextImpl.executeCFML(PageContextImpl.java:2427)
       at lucee.runtime.engine.Request.exe(Request.java:44)
       at lucee.runtime.engine.CFMLEngineImpl._service(CFMLEngineImpl.java:1090)
       at lucee.runtime.engine.CFMLEngineImpl.serviceCFML(CFMLEngineImpl.java:1038)
       at lucee.loader.engine.CFMLEngineWrapper.serviceCFML(CFMLEngineWrapper.java:102)
       at lucee.loader.servlet.CFMLServlet.service(CFMLServlet.java:51)
       [... remaining Tomcat servlet/connector frames omitted ...]
       at java.lang.Thread.run(Thread.java:748)
     Timestamp: 3/14/21 8:53:26 PM GMT
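     (To separate the tool from the drive, the exact call DiskSpeed timed out on can be run by hand against the device from the trace; a hedged sketch:)
        timeout 90 parted -m /dev/sdah unit B print free   # does the same command hang outside DiskSpeed too?
        lsblk -o NAME,SIZE,TYPE,TRAN /dev/sdah             # does the kernel still see the drive and its transport?
        smartctl -x /dev/sdah | head -n 40                 # is the drive itself responsive at all?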
  25. Excellent, it's back up and running. I've got Duplicati backing up the USB to Google Drive daily now. I think I got that working before, but promptly forgot what it was installed for and deleted it!! I'm going to put this image here in case I have future problems and can't remember drive assignments. It's running a parity check due to the unclean shutdown, but that's fine. Thanks for all your help!