gowg

Everything posted by gowg

  1. Solved. I ran memtest and encountered thousands of errors within seconds. I pulled the RAM sticks one by one, and the very last one was the culprit. My server is stable now, thanks. Edit: I also had to run a parity check and restore some backups, since the bad RAM had corrupted a bunch of stuff.
  2. Can I post the log file here? It seems to have LAN IP addresses; is it safe to post? Symptoms: the server works fine for a couple of hours, but then all Dockers stop, the Docker service becomes unavailable, and then the server becomes unresponsive. Here are some choice bits from the log:

     Jul 1 23:18:45 quad-unraid kernel: Code: 73 76 8b 0e 48 8d 5c c6 10 31 d2 89 f8 f7 f1 44 8b 0c 93 45 85 c9 74 60 44 89 c8 2b 46 04 83 cf 01 48 01 c8 89 fd 48 8d 1c 83 <44> 8b 23 48 83 c3 04 44 89 e0 83 c8 01 39 c5 75 2f 49 8b 42 58 45

     This error appears dozens of times in a row, on a different core each time (it's the last thing in the log file). Also:

     Jul 1 22:59:52 quad-unraid kernel: BTRFS info (device nvme0n1p1): bdev /dev/nvme0n1p1 errs: wr 0, rd 0, flush 0, corrupt 65, gen 0

     and:

     Jul 1 23:18:20 quad-unraid kernel: docker_load[7974]: segfault at f9 ip 0000000000457780 sp 00007ffc3812d1f0 error 4 in bash[426000+c5000] likely on CPU 15 (core 3, socket 0)
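     For anyone following along: one quick check is whether that BTRFS "corrupt" counter keeps climbing, which you can watch with the device stats command (this assumes the cache pool is mounted at /mnt/cache; adjust for your setup):

       # print per-device write/read/flush/corruption/generation error counters
       btrfs device stats /mnt/cache
       # re-run after a while: a rising "corrupt" count means corruption is ongoing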
  3. Same issue. I tried the Docker new perms tool and it didn't fix it. I later realized that the containers were complaining about things in /appdata, and I don't know how to fix that, so I'm reverting to 6.9.
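     In case it helps anyone else hitting this: the brute-force fix people suggest for appdata permission complaints is handing everything back to Unraid's default nobody:users owner. A sketch only, assuming appdata lives at /mnt/user/appdata; stop your containers first, and note that some containers (databases especially) expect different owners:

       # reset ownership on appdata to Unraid's default user/group
       chown -R nobody:users /mnt/user/appdata
       # make sure owner and group can read and write everywhere
       chmod -R u+rw,g+rw /mnt/user/appdata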
  4. ich777, thanks for all your hard work. Would you be able to add a Natural Selection 2 server? It works with steamcmd.
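     For reference, installing it by hand with steamcmd is roughly the sketch below; <NS2_APPID> is a placeholder for the NS2 dedicated server app ID (look it up on SteamDB, I haven't verified it), and the install path is arbitrary:

       # download/update the dedicated server files with an anonymous login
       steamcmd +force_install_dir /serverdata/ns2 +login anonymous +app_update <NS2_APPID> validate +quit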
  5. Did you ever figure this out? I'm having the same issue.
  6. Ahhh, I forgot that I had another cache pool for my /backups share. And I think CA Appdata Backup, urbackup, and the mover are all running at once. I'm going to change the schedule and then I'll report back with my findings. Edit: my Windows VM was also doing a defrag on a 4GB vdisk, hahaha. Disabled that. Edit 2: I seem to remember Plex's /appdata being a metadata disaster for the poor mover; hundreds of files couldn't be moved. I can't see them in the mover log because it's so long, though. Or maybe it's fixed? Any way to find out? Edit 3: and a parity sync was running as well... facepalm.
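     Re: edit 2, a way to find out without scrolling the whole thing: with mover logging enabled, the mover writes its per-file activity to the syslog, so grepping should surface just the failures. A rough sketch; the exact wording of the skip/error lines may vary by Unraid version:

       # pull mover activity out of the syslog, then keep only the unhappy lines
       grep -i move /var/log/syslog | grep -iE "fail|error|skip"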
  7. Good question, I have never fiddled with that tool, no. Could it be something related to the mover? The mover has been extremely slow lately: it's been running for 9 hours, and the cache was only ~100GB full when it started. Also, the cache pool is two 1TB NVMe drives in RAID1. I enabled logging; should I put the syslog here? I'm not sure if it's sanitized, I didn't see an option for that.
  8. Just happened to see this in htop. Should I be concerned?? lol. Also: my mover is running and taking a lot longer than it usually does. It usually finishes within an hour or two, but it's been almost 5 hours now. I'm also seeing 20% CPU usage on a 12-core 5900X, with no VMs or Dockers running. Edit: OK, the CPU usage is 'CA Appdata Backup'. I wonder if it's also running that command.
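     In case it's useful: a quick way to confirm what's chewing CPU, with full command lines, is to sort ps output by CPU share:

       # list the top ten CPU consumers with their full command lines
       ps aux --sort=-%cpu | head -n 10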
  9. Ohh, I didn't know that it made a RAID1 array when I added another cache drive. Thanks for your help. I'll start the steps to remove it. Is there no way to just make the cache pool JBOD?
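     For posterity: there is a way; btrfs can convert the pool's data profile from raid1 to single with a balance (newer Unraid exposes this in the pool's balance options). The raw command is roughly the sketch below, assuming the pool is mounted at /mnt/cache; take a backup first:

       # convert data to the single (JBOD-style) profile, keep metadata mirrored
       btrfs balance start -dconvert=single -mconvert=raid1 /mnt/cache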
  10. Thanks for the reply! I added a cache drive, so it's a pool of two drives: 500GB + 128GB, both SSDs.
  11. I'm having this issue on 6.9.2; is there any way to fix it?
  12. I'm fairly certain, yeah. And you can test DeepStack by just installing its Docker (and following the GPU instructions, just 3 steps IIRC) and "DeepStack UI". Both are in the Unraid app store. The DeepStack UI allows you to easily feed your own image to the AI and test it. Thanks for all your help.
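      For reference, outside the Unraid template the GPU build is also just one docker command. A sketch based on DeepStack's published image; the --gpus flag assumes a working NVIDIA container runtime, and the host port is up to you:

        # run DeepStack's GPU image with object detection enabled
        docker run --gpus all -e VISION-DETECTION=True -p 5000:5000 deepquestai/deepstack:gpu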
  13. COMMAND
      python3 /app/intelligencelayer/shared/detection.py
      COMMAND
      python3 /app/intelligencelayer/shared/face.py

      Shucks, seems to be the same.
  14. COMMAND
      python3 /app/intelligencelayer/shared/face.py
      COMMAND
      python3 /app/intelligencelayer/shared/detection.py

      There are 2 processes, I guess.
  15. nvidia-smi just says "python3", so I'm assuming that means it's impossible to track DeepStack?
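      Update: not impossible after all; nvidia-smi can print the PID of each GPU process, and ps will resolve a PID to its full command line (which is presumably how you get output like the COMMAND lines in posts 13 and 14 above; the PID 12345 here is just a stand-in):

        # list GPU compute processes with PID and memory use
        nvidia-smi --query-compute-apps=pid,used_memory --format=csv
        # resolve a PID to its full command line
        ps -o command -p 12345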
  16. I've decided to work around this issue. I briefly considered throwing a faster M.2 SSD in there, plus a discrete ethernet adapter to skirt any driver/bridging issues, but that turned out to cost close to a used SFF 8th-gen Intel machine (16GB RAM, 128GB SSD), so I'm solving this by giving up on the VM and running my NVR on bare metal. And I get to use Quick Sync, so my power usage will be way down. The VM was using 50-70% of my 6-core Ryzen 3600.
  17. Love the plugin, it's working great! The only thing I'd like to request is the DeepStack icon showing up for the DeepStack GPU Docker. The Plex icon works.
  18. Good point. I just checked, and my VM vdisk is in the "domains" share, which is set to "prefer" cache, and it just always stays on the cache as long as the cache isn't full, correct? And my Windows 10 VM and its NVR software only use the vdisk in domains, except for once a week when the NVR offloads some files to the array. And the usenet client uses a share I've made that is set to cache "yes", so it uses the cache. Thanks for your help, these are important details that I missed.
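      Side note for anyone checking the same thing: the per-share cache mode is also visible from the command line, since each share's settings live in a config file on the flash drive (assuming the stock location):

        # show the cache mode (yes/no/only/prefer) for every share
        grep shareUseCache /boot/config/shares/*.cfg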
  19. Hello everyone, I have a Windows 10 VM running network video recorder software (Blue Iris), and it's receiving 450 mbit/sec at all times. I have a usenet client in a Docker on the same Unraid system, and its speeds are awful (72 mbit/sec), but when I pause the VM, speeds go back to normal (430 mbit/sec). At first glance this seems to be a network issue, but I have a bare-metal Windows computer hooked up to the same switch, and when I run the usenet client on it, I get full speed (430 mbit/sec) on usenet as well as 450 mbit/sec on the VM with the NVR. The Unraid box has gigabit ethernet; it's a Ryzen 3600 on an ASUS X570 motherboard with 32GB of RAM. I thought it might be the cache drive not being fast enough, but I ran a benchmark and its random write was over a gigabit/sec, so that can't be it. Is this some sort of wacky undetectable overhead in the VM's network stack? Would it help to assign the VM its own physical PCIe ethernet adapter? Is there some driver issue that could cause this? I'm using the Red Hat VirtIO driver.
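      For anyone debugging something similar: an iperf3 run between the VM and another machine separates raw network throughput from application behaviour. A sketch; 192.168.1.50 stands in for the other machine's address:

        # on the bare-metal box: start a listener
        iperf3 -s
        # inside the VM: push traffic at it for 30 seconds
        iperf3 -c 192.168.1.50 -t 30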