gowg

Community Answers

  1. Solved. I ran memtest and encountered thousands of errors within seconds. I pulled the RAM sticks one by one, and the very last one was the culprit. My server is stable now, thanks. Edit: I also had to run a parity check and restore some backups, since the bad RAM had corrupted a bunch of stuff. (See the scrub sketch after this list for checking the cache pool.)
  2. Can I post the log file here? It seems to have LAN IP addresses; is it safe to post? (See the sanitizing sketch after this list.) Symptoms: the server works fine for a couple of hours, but then all Docker containers stop and the Docker service becomes unavailable, and eventually the server becomes unresponsive. Here are some choice bits from the log:

     Jul 1 23:18:45 quad-unraid kernel: Code: 73 76 8b 0e 48 8d 5c c6 10 31 d2 89 f8 f7 f1 44 8b 0c 93 45 85 c9 74 60 44 89 c8 2b 46 04 83 cf 01 48 01 c8 89 fd 48 8d 1c 83 <44> 8b 23 48 83 c3 04 44 89 e0 83 c8 01 39 c5 75 2f 49 8b 42 58 45

     This error appears dozens of times in a row, on a different core each time (it is the last thing in the log file). Also:

     Jul 1 22:59:52 quad-unraid kernel: BTRFS info (device nvme0n1p1): bdev /dev/nvme0n1p1 errs: wr 0, rd 0, flush 0, corrupt 65, gen 0

     and:

     Jul 1 23:18:20 quad-unraid kernel: docker_load[7974]: segfault at f9 ip 0000000000457780 sp 00007ffc3812d1f0 error 4 in bash[426000+c5000] likely on CPU 15 (core 3, socket 0)
  3. Same issue. I tried the Docker new perms tool and it didn't fix it. I later realized that the containers were complaining about things in /appdata, and since I don't know how to fix that, I'm reverting to 6.9. (See the permissions sketch after this list.)
  4. ich777, thanks for all your hard work. Would you be able to add a Natural Selection 2 server? It works with SteamCMD. (See the SteamCMD sketch after this list.)
  5. Did you ever figure this out? I'm having the same issue.
  6. Ahhh, I forgot that I had another cache pool for my /backups share. And I think CA appdata backup, urbackup, and the mover were all running at once. I'm going to change the schedule and then I'll report back with my findings.
     edit: my Windows VM was also running a defrag on a 4GB vdisk, hahaha. Disabled that.
     edit2: I seem to remember Plex's /appdata being a metadata disaster for the poor mover; hundreds of files couldn't be moved. I can't see them in the mover log because it's so long, though. Or maybe it's fixed? Any way to find out? (See the log-filtering sketch after this list.)
     edit3: and a parity-sync was running as well... facepalm
  7. Good question; I have never fiddled with that tool, no. Could it be something related to the mover? The mover has been extremely slow lately: it's been running for 9 hours, and the cache was only ~100GB full when it started. Also, the cache pool is two 1TB NVMe drives in RAID1. I enabled logging; should I put the syslog here? I'm not sure if it's sanitized, and I didn't see an option for that.
  8. Just happened to see this in htop. Should I be concerned?? lol
     Also: my mover is running and taking a lot longer than it usually does. It usually finishes within an hour or two, but it's been almost 5 hours now. I'm also seeing 20% CPU usage on a 12-core 5900X, with no VMs or Dockers running.
     edit: OK, the CPU usage is 'CA appdata backup'. I wonder if it's also running that command.
  9. ohh, I didn't know that it made a RAID1 array if I added another cache drive. Thanks for your help; I'll start the steps to remove it. There's no way to just make the cache pool JBOD? (See the balance sketch after this list.)
  10. Thanks for the reply! I added a cache drive, so it's a pool of 2 drives: 500GB + 128GB, both SSDs.
  11. I'm having this issue on 6.9.2, any way to fix it?
  12. I'm fairly certain, yeah. And you can test DeepStack by just installing its Docker container (and following the GPU instructions, just 3 steps iirc) and the "DeepStack UI"; both are in the Unraid app store. The DeepStack UI allows you to easily submit your own image to the AI and test it (see the curl sketch after this list). Thanks for all your help.
  13. COMMAND
        python3 /app/intelligencelayer/shared/detection.py
        python3 /app/intelligencelayer/shared/face.py
      Shucks, seems to be the same. (See the nvidia-smi note after this list.)
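
For item 1: a minimal sketch of how corruption left behind by bad RAM can be checked on the btrfs cache pool mentioned in this thread. The /mnt/cache path is an assumption (adjust to your pool name); memtest itself runs from the boot menu, outside the OS.

    # Scrub the btrfs cache pool: reads everything, verifies checksums, and
    # repairs from the other copy where the pool has redundancy (RAID1).
    btrfs scrub start /mnt/cache     # /mnt/cache is an assumed pool path
    btrfs scrub status /mnt/cache    # progress plus error/corruption counts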
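
For items 2 and 7, on whether a syslog containing LAN addresses is safe to post: private 192.168.x.x addresses aren't reachable from the internet, but if you'd rather mask them anyway, a sed pass works. A sketch; the log path and output filename are assumptions.

    # Replace private 192.168.x.x addresses with a placeholder before posting;
    # add similar expressions for 10.x.x.x if your LAN uses that range.
    sed -E 's/192\.168\.[0-9]+\.[0-9]+/192.168.x.x/g' /var/log/syslog > syslog-sanitized.txt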
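
For item 3: containers complaining about /appdata after an upgrade is usually an ownership/permissions problem. A minimal sketch, assuming appdata lives at /mnt/user/appdata and the container runs as Unraid's default nobody:users (UID 99, GID 100); the path and IDs are assumptions, and some containers deliberately run as other users, so check each container's template before chowning.

    # Reset ownership on a single container's config directory.
    # 'SOMECONTAINER' is a hypothetical name; chown one app at a time rather
    # than all of appdata, since a few containers need different owners.
    chown -R 99:100 /mnt/user/appdata/SOMECONTAINER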
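
For item 4: a rough sketch of the SteamCMD invocation such a container would run to fetch the NS2 dedicated server. NS2_SERVER_APPID is a placeholder (look up the real dedicated-server app ID on SteamDB), and the install directory is an assumption.

    # Download/update the Natural Selection 2 dedicated server files.
    steamcmd +force_install_dir /serverdata/ns2 \
             +login anonymous \
             +app_update NS2_SERVER_APPID validate \
             +quit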
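
For item 6's "any way to find out?": with mover logging enabled, the mover writes its per-file activity to the syslog, so you can filter instead of scrolling. A sketch; it assumes the log is at /var/log/syslog and that mover entries match a pattern like "move", so eyeball a few lines first to confirm the tag.

    # Show only mover lines that mention plex, instead of reading the whole log.
    grep -i move /var/log/syslog | grep -i plex | less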
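
For item 9: the GUI of that era had no JBOD toggle, but since the pool is btrfs you can rebalance the data profile from raid1 to single, which spreads data across the drives without mirroring (JBOD-like) while keeping metadata mirrored. A sketch, assuming the pool is mounted at /mnt/cache; take a backup first, since a single-profile pool no longer survives a drive failure.

    # Convert pool data to 'single' while leaving metadata as raid1.
    btrfs balance start -dconvert=single -mconvert=raid1 /mnt/cache
    btrfs filesystem usage /mnt/cache   # verify the new profiles afterwards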
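
For item 12: besides the DeepStack UI, you can send a test image straight to DeepStack's object-detection endpoint. A sketch, assuming the container is reachable on localhost:5000 (the port mapping is an assumption; use whatever your Docker template maps).

    # POST an image to DeepStack's detection endpoint; the JSON response
    # lists detected objects with confidence scores.
    curl -X POST -F image=@test.jpg http://localhost:5000/v1/vision/detection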
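
For item 13: seeing the python3 processes in htop doesn't show whether they're running on the GPU; nvidia-smi does. This assumes the Nvidia driver is installed on the host (e.g. via the Nvidia driver plugin).

    # If DeepStack is actually using the GPU, its python3 processes show up
    # in the process table at the bottom of nvidia-smi's output.
    nvidia-smi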