Everything posted by evoactivity

  1. Oh, that's annoying lol. I searched for macvlan in the output after the first crash at around 2:30am; I didn't notice it logged that after the reboot. I've changed that. I've run xfs_repair on all the XFS disks now and this is the output:

     root@Highrise:~# xfs_repair /dev/sdf1
     Phase 1 - find and verify superblock...
     Phase 2 - using internal log
         - zero log...
         - scan filesystem freespace and inode maps...
         - found root inode chunk
     Phase 3 - for each AG...
         - scan and clear agi unlinked lists...
         - process known inodes and perform inode discovery...
         - agno = 0
         - agno = 1
         - agno = 2
         - agno = 3
         - process newly discovered inodes...
     Phase 4 - check for duplicate blocks...
         - setting up duplicate extent list...
         - check for inodes claiming duplicate blocks...
         - agno = 0
         - agno = 1
         - agno = 2
         - agno = 3
     Phase 5 - rebuild AG headers and trees...
         - reset superblock...
     Phase 6 - check inode connectivity...
         - resetting contents of realtime bitmap and summary inodes
         - traversing filesystem ...
         - traversal finished ...
         - moving disconnected inodes to lost+found ...
     Phase 7 - verify and correct link counts...
     done

     root@Highrise:~# xfs_repair /dev/sdd1
     Phase 1 - find and verify superblock...
     Phase 2 - using internal log
         - zero log...
         - scan filesystem freespace and inode maps...
         - found root inode chunk
     Phase 3 - for each AG...
         - scan and clear agi unlinked lists...
         - process known inodes and perform inode discovery...
         - agno = 0
         - agno = 1
         - agno = 2
         - agno = 3
         - process newly discovered inodes...
     Phase 4 - check for duplicate blocks...
         - setting up duplicate extent list...
         - check for inodes claiming duplicate blocks...
         - agno = 0
         - agno = 1
         - agno = 3
         - agno = 2
     Phase 5 - rebuild AG headers and trees...
         - reset superblock...
     Phase 6 - check inode connectivity...
         - resetting contents of realtime bitmap and summary inodes
         - traversing filesystem ...
         - traversal finished ...
         - moving disconnected inodes to lost+found ...
     Phase 7 - verify and correct link counts...
     done

     root@Highrise:~# xfs_repair /dev/sde1
     Phase 1 - find and verify superblock...
     Phase 2 - using internal log
         - zero log...
         - scan filesystem freespace and inode maps...
         - found root inode chunk
     Phase 3 - for each AG...
         - scan and clear agi unlinked lists...
         - process known inodes and perform inode discovery...
         - agno = 0
         - agno = 1
         - agno = 2
         - agno = 3
         - process newly discovered inodes...
     Phase 4 - check for duplicate blocks...
         - setting up duplicate extent list...
         - check for inodes claiming duplicate blocks...
         - agno = 0
         - agno = 1
         - agno = 3
         - agno = 2
     Phase 5 - rebuild AG headers and trees...
         - reset superblock...
     Phase 6 - check inode connectivity...
         - resetting contents of realtime bitmap and summary inodes
         - traversing filesystem ...
         - traversal finished ...
         - moving disconnected inodes to lost+found ...
     Phase 7 - verify and correct link counts...
     done

     root@Highrise:~# xfs_repair /dev/sdb1
     Phase 1 - find and verify superblock...
     Phase 2 - using internal log
         - zero log...
         - scan filesystem freespace and inode maps...
         - found root inode chunk
     Phase 3 - for each AG...
         - scan and clear agi unlinked lists...
         - process known inodes and perform inode discovery...
         - agno = 0
         - agno = 1
         - agno = 2
         - agno = 3
         - process newly discovered inodes...
     Phase 4 - check for duplicate blocks...
         - setting up duplicate extent list...
         - check for inodes claiming duplicate blocks...
         - agno = 1
         - agno = 2
         - agno = 3
         - agno = 0
     Phase 5 - rebuild AG headers and trees...
         - reset superblock...
     Phase 6 - check inode connectivity...
         - resetting contents of realtime bitmap and summary inodes
         - traversing filesystem ...
         - traversal finished ...
         - moving disconnected inodes to lost+found ...
     Phase 7 - verify and correct link counts...
     done

     root@Highrise:~# xfs_repair /dev/sdg1
     Phase 1 - find and verify superblock...
     Phase 2 - using internal log
         - zero log...
         - scan filesystem freespace and inode maps...
         - found root inode chunk
     Phase 3 - for each AG...
         - scan and clear agi unlinked lists...
         - process known inodes and perform inode discovery...
         - agno = 0
         - agno = 1
         - agno = 2
         - agno = 3
         - process newly discovered inodes...
     Phase 4 - check for duplicate blocks...
         - setting up duplicate extent list...
         - check for inodes claiming duplicate blocks...
         - agno = 0
         - agno = 2
         - agno = 3
         - agno = 1
     Phase 5 - rebuild AG headers and trees...
         - reset superblock...
     Phase 6 - check inode connectivity...
         - resetting contents of realtime bitmap and summary inodes
         - traversing filesystem ...
         - traversal finished ...
         - moving disconnected inodes to lost+found ...
     Phase 7 - verify and correct link counts...
     done

     root@Highrise:~# xfs_repair /dev/sdi1
     Phase 1 - find and verify superblock...
     Phase 2 - using internal log
         - zero log...
         - scan filesystem freespace and inode maps...
         - found root inode chunk
     Phase 3 - for each AG...
         - scan and clear agi unlinked lists...
         - process known inodes and perform inode discovery...
         - agno = 0
         - agno = 1
         - agno = 2
         - agno = 3
         - agno = 4
         - agno = 5
         - agno = 6
         - agno = 7
         - agno = 8
         - agno = 9
         - agno = 10
         - agno = 11
         - agno = 12
         - agno = 13
         - agno = 14
         - process newly discovered inodes...
     Phase 4 - check for duplicate blocks...
         - setting up duplicate extent list...
         - check for inodes claiming duplicate blocks...
         - agno = 0
         - agno = 3
         - agno = 7
         - agno = 4
         - agno = 1
         - agno = 5
         - agno = 6
         - agno = 2
         - agno = 8
         - agno = 9
         - agno = 10
         - agno = 11
         - agno = 12
         - agno = 13
         - agno = 14
     Phase 5 - rebuild AG headers and trees...
         - reset superblock...
     Phase 6 - check inode connectivity...
         - resetting contents of realtime bitmap and summary inodes
         - traversing filesystem ...
         - traversal finished ...
         - moving disconnected inodes to lost+found ...
     Phase 7 - verify and correct link counts...
     done

     root@Highrise:~# xfs_repair /dev/sdc1
     Phase 1 - find and verify superblock...
     Phase 2 - using internal log
         - zero log...
         - scan filesystem freespace and inode maps...
         - found root inode chunk
     Phase 3 - for each AG...
         - scan and clear agi unlinked lists...
         - process known inodes and perform inode discovery...
         - agno = 0
         - agno = 1
         - agno = 2
         - agno = 3
         - agno = 4
         - agno = 5
         - agno = 6
         - agno = 7
         - agno = 8
         - agno = 9
         - agno = 10
         - agno = 11
         - agno = 12
         - agno = 13
         - agno = 14
         - process newly discovered inodes...
     Phase 4 - check for duplicate blocks...
         - setting up duplicate extent list...
         - check for inodes claiming duplicate blocks...
         - agno = 0
         - agno = 3
         - agno = 6
         - agno = 2
         - agno = 5
         - agno = 1
         - agno = 8
         - agno = 4
         - agno = 9
         - agno = 10
         - agno = 11
         - agno = 12
         - agno = 13
         - agno = 14
         - agno = 7
     Phase 5 - rebuild AG headers and trees...
         - reset superblock...
     Phase 6 - check inode connectivity...
         - resetting contents of realtime bitmap and summary inodes
         - traversing filesystem ...
         - traversal finished ...
         - moving disconnected inodes to lost+found ...
     Phase 7 - verify and correct link counts...
     done

     I'm not familiar with this command, so I'm not sure whether the output shows it found issues or whether that output is normal. I'll see how long I'm able to stay online now; hopefully this has fixed things.
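
     If it helps later, a check-only pass should show whether anything is still wrong without touching the disks: xfs_repair's -n flag only reports problems and makes no changes. A rough sketch, assuming the same device names as above and that the filesystems are not mounted (e.g. the array is in Maintenance mode):

     # Report-only pass over the same disks; -n never writes to the device.
     # Device list taken from the runs above.
     for dev in /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdi1; do
         echo "=== $dev ==="
         xfs_repair -n "$dev"
     done

     With -n, a clean filesystem should just print the phase banner again, while any remaining corruption shows up as explicit warnings in the output.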
  2. I have a feeling this is going to be hardware related, but I'm posting in case there is something in here I'm overlooking. The system completely locks up: unresponsive to SSH, shares go offline, the web UI doesn't load. It's just completely dead and I have to turn it off by holding the power button. I usually get about 2 weeks of uptime before running into this. My BIOS is as up to date as it can be. I'm going to run memtest on this system asap, but it's 3am so not right now lol. shfs is being marked as tainted a lot.

     syslog-192.168.1.29 - crashes.log
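
     For anyone digging through the attached log, a quick grep for the usual lockup suspects (kernel call traces, macvlan, the shfs taint messages, machine-check errors) is probably the fastest way in; a sketch, assuming the attachment keeps the file name above:

     # Pull out the lines most likely to explain a hard lockup.
     grep -niE "call trace|macvlan|shfs|tainted|mce:" "syslog-192.168.1.29 - crashes.log"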
  3. I've been running my Plex container on the same hardware for years now. I have 8GB of RAM installed, this has never been an issue, and transcodes have always worked. I recently added new drives and went from a 16TB array to a 36TB array. My memory usage never seems to go above 50%, and when I look in htop after an OOM kill the process is still in the list. My transcode directory is mapped to my cache SSD, and I have added --memory=8G to the extra parameters for the Plex container.

     This is all I have in the syslog:

     Jan 10 06:21:09 Highrise kernel: [ 15581] 0 15581 1127 144 45056 0 0 prerunget.sh
     Jan 10 06:21:09 Highrise kernel: [ 16323] 0 16323 51 1 28672 0 0 s6-supervise
     Jan 10 06:21:09 Highrise kernel: [ 16324] 0 16324 51 5 32768 0 0 s6-supervise
     Jan 10 06:21:09 Highrise kernel: [ 16325] 0 16325 51 1 28672 0 0 s6-supervise
     Jan 10 06:21:09 Highrise kernel: [ 16327] 0 16327 51 1 28672 0 0 s6-supervise
     Jan 10 06:21:09 Highrise kernel: [ 16329] 0 16329 396 13 40960 0 0 crond
     Jan 10 06:21:09 Highrise kernel: [ 16331] 0 16331 6709 2580 94208 0 0 fail2ban-server
     Jan 10 06:21:09 Highrise kernel: [ 16332] 0 16332 58669 2566 143360 0 0 php-fpm7
     Jan 10 06:21:09 Highrise kernel: [ 16360] 99 16360 58674 1903 122880 0 0 php-fpm7
     Jan 10 06:21:09 Highrise kernel: [ 16361] 99 16361 58674 1903 122880 0 0 php-fpm7
     Jan 10 06:21:09 Highrise kernel: [ 17682] 0 17682 1256 277 49152 0 0 prerunget.sh
     Jan 10 06:21:09 Highrise kernel: [ 17686] 0 17686 596 21 40960 0 0 sleep
     Jan 10 06:21:09 Highrise kernel: [ 18065] 99 18065 137693 16758 307200 0 0 deluged
     Jan 10 06:21:09 Highrise kernel: [ 18523] 99 18523 305818 503 192512 0 0 privoxy
     Jan 10 06:21:09 Highrise kernel: [ 18533] 99 18533 20396 10891 192512 0 0 deluge-web
     Jan 10 06:21:09 Highrise kernel: [ 16869] 99 16869 1209887 1197534 9723904 0 0 Plex Transcoder
     Jan 10 06:21:09 Highrise kernel: [ 18826] 0 18826 596 22 40960 0 0 sleep
     Jan 10 06:21:09 Highrise kernel: [ 24282] 0 24282 543 11 36864 0 0 run
     Jan 10 06:21:09 Highrise kernel: [ 25139] 0 25139 1053 677 40960 0 0 cache_dirs
     Jan 10 06:21:09 Highrise kernel: [ 25141] 0 25141 1053 677 40960 0 0 cache_dirs
     Jan 10 06:21:09 Highrise kernel: [ 25143] 0 25143 1053 677 40960 0 0 cache_dirs
     Jan 10 06:21:09 Highrise kernel: [ 25161] 0 25161 662 187 40960 0 0 timeout
     Jan 10 06:21:09 Highrise kernel: [ 25162] 0 25162 1044 578 45056 0 0 find
     Jan 10 06:21:09 Highrise kernel: [ 25176] 0 25176 662 187 40960 0 0 timeout
     Jan 10 06:21:09 Highrise kernel: [ 25177] 0 25177 1042 596 40960 0 0 find
     Jan 10 06:21:09 Highrise kernel: [ 25181] 0 25181 662 188 40960 0 0 timeout
     Jan 10 06:21:09 Highrise kernel: [ 25182] 0 25182 1011 273 40960 0 0 find
     Jan 10 06:21:09 Highrise kernel: [ 25267] 0 25267 940 741 45056 0 0 sh
     Jan 10 06:21:09 Highrise kernel: [ 25268] 0 25268 940 63 45056 0 0 sh
     Jan 10 06:21:09 Highrise kernel: oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/docker/4749f7790f7ed85cedcaa8f6f91a1f36a8267a138dbf248bf41425810255f8a3,task=Plex Transcoder,pid=16869,uid=99
     Jan 10 06:21:09 Highrise kernel: Out of memory: Killed process 16869 (Plex Transcoder) total-vm:4839548kB, anon-rss:4790136kB, file-rss:0kB, shmem-rss:0kB, UID:99 pgtables:9496kB oom_score_adj:0
     Jan 10 06:21:09 Highrise kernel: oom_reaper: reaped process 16869 (Plex Transcoder), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
     Jan 10 08:00:16 Highrise crond[1734]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
     Jan 10 10:00:16 Highrise crond[1734]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
     Jan 10 12:00:16 Highrise crond[1734]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
     Jan 10 13:15:37 Highrise kernel: kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to [email protected] if you depend on this functionality.

     It looks like the OOM report's total-vm is only around 4.8GB instead of 8GB, but I don't really know why that would be.

     EDIT: Now that I'm looking into this more, it might not be OOM. I can see the OOM kill in the syslog, but the time doesn't match. It seems like the Plex transcoder just stops transcoding after a few minutes, and I have to stop and restart the stream for it to start a new transcode session. It sounded similar to this https://forums.unraid.net/topic/110169-plex-suddenly-having-problems-with-audio-transcoding-i-think/ but I am able to play the file for a while before it stops. I've deleted the codecs folder in my Plex data as suggested by that thread, but my transcodes still just stop after a while. The process isn't killed; it just seems to hang.
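
     For what it's worth, the log above says constraint=CONSTRAINT_NONE and global_oom, which reads like the host itself ran out of memory rather than the container hitting its --memory=8G cgroup limit, and total-vm is only that one process's virtual size. A quick way to confirm the limit is actually applied, and to watch the transcoder while a stream stalls, is to ask Docker directly; a sketch, assuming the container is actually named plex (adjust to the real container name):

     # Configured memory limit in bytes; --memory=8G should show 8589934592.
     docker inspect --format '{{.HostConfig.Memory}}' plex

     # One-shot snapshot of the container's memory/CPU; run it while a transcode is stalling.
     docker stats --no-stream plex

     # Check whether the transcoder process is still alive inside the container or has gone idle.
     docker top plex | grep -i "Plex Transcoder"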