rtho782
Members
Posts: 49
Joined: -
Last visited: -
rtho782's Achievements: Rookie (2/14)
Reputation: 10
-
I think this is fine, just don't announce you're being acquired by Broadcom next week!!
-
I'm also getting permission errors. I've deleted and recreated the docker and appdata folders to no avail.

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
Warning: Failed to create the file serverinstall_100_2298: Permission denied
  0 5356k    0  3669    0     0  40766      0  0:02:14 --:--:--  0:02:14 40766
curl: (23) Failure writing output to destination
+ chmod +x serverinstall_100_2298
chmod: cannot access 'serverinstall_100_2298': No such file or directory
+ curl -o server-icon.png 'https://www.feed-the-beast.com/_next/image?url=https%3A%2F%2Fapps.modpacks.ch%2Fmodpacks%2Fart%2F96%2Fstoneblock_logo.png&w=64&q=75'
Warning: Failed to create the file server-icon.png: Permission denied
 15  4806   15   756    0     0   6750      0 --:--:-- --:--:-- --:--:--  6750
curl: (23) Failure writing output to destination
+ ./serverinstall_100_2298 --path /data --auto
/launch.sh: line 13: ./serverinstall_100_2298: No such file or directory
+ rm -f user_jvm_args.txt
+ [[ -n NAGP Server Powered by Printyplease ]]
+ sed -i '/motd\s*=/ c motd=NAGP Server Powered by Printyplease' /data/server.properties
sed: can't read /data/server.properties: No such file or directory
+ [[ -n world ]]
+ sed -i '/level-name\s*=/ c level-name=world' /data/server.properties
sed: can't read /data/server.properties: No such file or directory
+ [[ -n ChairmanMeow782 ]]
+ echo ChairmanMeow782
+ awk -v RS=, '{print}'
/launch.sh: line 24: ops.txt: Permission denied
+ [[ true = \f\a\l\s\e ]]
+ echo eula=true
/launch.sh: line 27: eula.txt: Permission denied
+ [[ -n -Xms6144m -Xmx14336m ]]
+ echo -Xms6144m -Xmx14336m
+ awk -v RS=, '{print}'
/launch.sh: line 30: user_jvm_args.txt: Permission denied
+ curl -o log4j2_112-116.xml https://launcher.mojang.com/v1/objects/02937d122c86ce73319ef9975b58896fc1b491d1/log4j2_112-116.xml
Warning: Failed to create the file log4j2_112-116.xml: Permission denied
curl: (23) Failure writing output to destination
+ chmod +x start.sh
chmod: cannot access 'start.sh': No such file or directory
+ ./start.sh
/launch.sh: line 35: ./start.sh: No such file or directory
+ cd /data
+ [[ -f serverinstall_100_2298 ]]
+ rm -f 'serverinstall*' 'forge-*.jar'
+ curl -o serverinstall_100_2298 https://api.modpacks.ch/public/modpack/100/2298/server/linux

The whole sequence then repeats with exactly the same errors (only the transfer numbers differ), ending with:

/launch.sh: line 35: ./start.sh: No such file or directory
** Press ANY KEY to close this window **
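For anyone else hitting this: every failure above is the container being unable to write to /data, so it looks like an ownership problem on the host side. A sketch of what I'd try (the host path and container name are assumptions, adjust to your template; 99:100 is Unraid's default nobody:users):

# hypothetical host path mapped to the container's /data
DATA_PATH=/mnt/cache/appdata/stoneblock

# hand the tree to the UID/GID the container runs as (Unraid default: nobody:users)
chown -R 99:100 "$DATA_PATH"
chmod -R u+rwX,g+rwX "$DATA_PATH"

# restart the container and watch the installer run again
docker restart stoneblock   # container name is an assumption
docker logs -f stoneblock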
-
This also happened after my update to 6.10.2, so it seems Plex not working is somewhat common?
-
Diagnostics attached. I deleted docker.img and reinstalled all my docker applications. All work fine except Plex, which doesn't ever come up. The log doesn't seem to have any errors:

[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 40-plex-first-run: executing...
Plex Media Server first run setup complete
[cont-init.d] 40-plex-first-run: exited 0.
[cont-init.d] 45-plex-hw-transcode-and-connected-tuner: executing...
[cont-init.d] 45-plex-hw-transcode-and-connected-tuner: exited 0.
[cont-init.d] 50-plex-update: executing...
[cont-init.d] 50-plex-update: exited 0.
[cont-init.d] done.
[services.d] starting services
Starting Plex Media Server.
[services.d] done.

And yet it doesn't work at all. Any ideas?

tacgnol-core-diagnostics-20220707-0706.zip
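In case it helps with suggestions, this is how I've been poking at it (the container name "plex" and curl being present inside the image are assumptions; 32400 is Plex's standard port):

# is the container running at all, or restart-looping?
docker ps -a --filter name=plex

# does the server answer inside the container? /identity is a lightweight local endpoint
docker exec plex curl -s http://localhost:32400/identity

# is anything listening on the mapped port from the host side?
netstat -tlnp | grep 32400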
-
One of my cache drives occasionally drops out. I don't quite know why, but it's happened twice in a 3-month period. Last time I replaced cables etc. The cache is mirrored. Each time, the drive comes back if I reboot the system, but I have to remove it from the pool, balance the cache to a single drive, then add it back and balance back to RAID1. This process scares me. I'd like to throw a third drive in and, since the FAQ cautions against RAID5, have three copies instead. Is this possible? If not in the UI, in the CLI? Diagnostics attached in case they are relevant. tacgnol-core-diagnostics-20220530-2321.zip
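From some reading since posting: btrfs has a raid1c3 profile that keeps three copies of everything (it needs kernel 5.5 or newer, which current Unraid releases have). If the UI can't do it, I'm hoping something like this from the CLI would (device name and mount point are assumptions):

# add the third SSD to the pool
btrfs device add /dev/sdX1 /mnt/appdatacache

# convert data and metadata from two copies to three
btrfs balance start -dconvert=raid1c3 -mconvert=raid1c3 /mnt/appdatacache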
-
I ended up pulling the "failed" drive, letting it rebalance to a single drive, then putting the failed drive back (with a new cable, obviously) and rebalancing to RAID1 again. I had to rebuild docker.img, but everything else seems fine.
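For the record, the CLI equivalent of that dance is roughly this, as far as I understand it (the UI drove it for me; device name and mount point are examples):

# drop the mirror to a single copy; -f is required when reducing metadata redundancy
btrfs balance start -f -dconvert=single -mconvert=single /mnt/appdatacache

# detach the flaky device, swap the cable, then re-add it
btrfs device remove /dev/sdc1 /mnt/appdatacache
btrfs device add /dev/sdc1 /mnt/appdatacache

# and mirror everything again
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/appdatacache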
-
Update: The cache seems to have gone read-only and won't fix itself. Scrub just aborts. What should I do? tacgnol-core-diagnostics-20220418-2014.zip
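What I'm looking at while it's wedged, in case any of it points somewhere (mount point assumed):

# why/when did the scrub abort?
btrfs scrub status /mnt/appdatacache

# per-device error counters
btrfs dev stats /mnt/appdatacache

# kernel messages around the time it flipped read-only
dmesg | tail -n 50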
-
I have two cache arrays: one is effectively a write cache for the array and is striped; the other is where my appdata sits and is mirrored. I'm getting a lot of errors filling up syslog from the appdata cache:

BTRFS warning (device sdb1): lost page write due to IO error on /dev/sdc1 (-5)
BTRFS error (device sdb1): error writing primary super block to device 2

I can see these errors from the btrfs command too:

root@Tacgnol-Core:/var/log# btrfs dev stats /mnt/appdatacache/
[/dev/sdb1].write_io_errs    0
[/dev/sdb1].read_io_errs     0
[/dev/sdb1].flush_io_errs    0
[/dev/sdb1].corruption_errs  0
[/dev/sdb1].generation_errs  0
[/dev/sdc1].write_io_errs    21657204
[/dev/sdc1].read_io_errs     4497345
[/dev/sdc1].flush_io_errs    1108503
[/dev/sdc1].corruption_errs  0
[/dev/sdc1].generation_errs  0

The other array is fine. These two SSDs are relatively new, and I suspect it's a cable issue, which is nice and easy to replace, but how do I tell it to "rebuild" the array, or whatever it needs to do?

Data, RAID1: total=313.00GiB, used=213.75GiB
System, RAID1: total=32.00MiB, used=80.00KiB
Metadata, RAID1: total=2.00GiB, used=1.67GiB
GlobalReserve, single: total=387.48MiB, used=0.00B

Diagnostics attached. tacgnol-core-diagnostics-20220418-1903.zip
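From what I've read, if it really is just the cable, the mirror should be repairable in place once the link is solid again; something like this (mount point as above):

# zero the error counters so any new errors stand out (-z resets after printing)
btrfs dev stats -z /mnt/appdatacache

# scrub rewrites any bad copy on sdc1 from the good copy on sdb1
btrfs scrub start /mnt/appdatacache
btrfs scrub status /mnt/appdatacache   # check progress and results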
-
ker process: ./nchan-1.2.7/src/store/spool.c:479: spool_fetch_msg: Assertion `spool->msg_status == MSG_INVALID' failed.
2021/12/11 22:50:04 [alert] 22207#22207: worker process 21417 exited on signal 6
ker process: ./nchan-1.2.7/src/store/spool.c:479: spool_fetch_msg: Assertion `spool->msg_status == MSG_INVALID' failed.
2021/12/11 22:50:05 [alert] 22207#22207: worker process 21456 exited on signal 6
ker process: ./nchan-1.2.7/src/store/spool.c:479: spool_fetch_msg: Assertion `spool->msg_status == MSG_INVALID' failed.
2021/12/11 22:50:06 [alert] 22207#22207: worker process 21501 exited on signal 6

...and so on: the same assertion/"exited on signal 6" pair repeats every second or two, each time with a new worker PID, continuously through at least 22:50:52.

This seems to be spamming my nginx/error.log.
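As a stopgap until whatever is crashing nchan gets fixed, I'm assuming the web UI's nginx can be bounced with Unraid's usual rc script (the path is an assumption), and the runaway log truncated:

# restart the Unraid webserver so the crashed workers stop respawn-looping
/etc/rc.d/rc.nginx restart

# truncate the log that's filling /var/log
: > /var/log/nginx/error.log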
-
Spamming it got me there. tacgnol-core-diagnostics-20211211-1930.zip
-
I'll step back to the symptoms: about 50% of the time, UI pages won't load; just the banner loads, with nothing below. I can't manage to successfully download a diagnostics file because of the above. Docker containers constantly become inaccessible at random.
-
Yes, it seems it does. The fact that, after many years and many topics on this subject, Unraid cannot automatically kill whatever is preventing a umount is an absolute joke. I guess I have two days of throttled IO and no parity protection ahead of me.
-
Will this mean a full parity check?
-
lsof | grep /mnt finds nothing. who finds only the terminal I'm looking at, at ~.
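Other ways I know of to hunt for whatever is holding the mount busy, in case lsof is missing something (paths are examples; fuser ships in psmisc):

# everything with a file open anywhere on the filesystem containing /mnt/user
fuser -vm /mnt/user

# recursive open-file scan under /mnt (can be slow on large trees)
lsof +D /mnt 2>/dev/null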