
pyrater

Members
  • Content Count

    527
  • Joined

  • Last visited

Community Reputation

7 Neutral

About pyrater

  • Rank
    Advanced Member

Converted

  • Gender
    Undisclosed


  1. So I had an HDD die (8 TB). I installed a new 3 TB drive and copied all the data off the emulated drive to the new 3 TB drive, effectively making the dead drive empty. However, the only way to remove the drive without replacing it is to do a New Config. If I do this, will it corrupt parity and force a new check? I.e., disk 3 is no longer disk 3; disk 3 died, and the new disk 3 is now what disk 4 was...
  2. Thanks Johnnie. For now I have manually edited Radarr's database file so it never automatically scans my stuff. Seems to be a temporary fix. Link for others: https://github.com/Radarr/Radarr/issues/1826
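For anyone wanting to script the same workaround, here is a rough sketch. The table name, key name, and value are from my install (verify yours against the linked issue), and the demo runs against a throwaway database so nothing real is touched; the real file is nzbdrone.db in the container's /config, and you should stop Radarr and back it up before editing.

```shell
# Demo against a throwaway SQLite DB; the real Radarr DB is nzbdrone.db in
# the container's /config. Table/key/value names below are assumptions from
# my setup -- stop Radarr and back up the file before touching the real one.
DB=$(mktemp --suffix=.db)
sqlite3 "$DB" "CREATE TABLE Config (Key TEXT, Value TEXT);"
sqlite3 "$DB" "INSERT INTO Config VALUES ('rescanafterrefresh','always');"
# The actual edit: tell Radarr to never rescan after a refresh
sqlite3 "$DB" "UPDATE Config SET Value='never' WHERE Key='rescanafterrefresh';"
sqlite3 "$DB" "SELECT Value FROM Config WHERE Key='rescanafterrefresh';"
```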
  3. Almost positive it's due to Radarr. It's not just starting the app; Radarr scanning my monitored movie files spikes my CPU from 30% to 60% and shfs from around 90% CPU to 166%. Not sure there is anything "wrong" per se, I guess.
  4. Still searching; wasn't Deluge after all. (edited)
  5. So I moved all my drives to XFS and I am still seeing high CPU use with SHFS. Please see attached. Is this just normal, or can I further refine/tweak settings to reduce CPU usage? I assume since this is now all XFS, it must be related to a Docker mount point? icarus-diagnostics-20200414-1015.zip
  6. Yes, it will make all writes to the array faster. Read here: The best use case is to just use a cache drive, but YMMV.
  7. Unzip to the cache; otherwise, try turbo mode. I assume you're being limited not only by the unzip but also by write speed. Turbo mode is under Settings - Disk Settings - Tunable (md_write_method): set it to reconstruct write. See if that helps.
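If you'd rather flip it from the command line (e.g. to script it around a big unzip), the same tunable can be set with mdcmd. A sketch, assuming stock Unraid paths and the usual 0/1 values; double-check against your GUI setting:

```shell
# Sketch: toggle "turbo" (reconstruct) write from the CLI instead of the GUI.
# mdcmd lives at /usr/local/sbin/mdcmd on stock Unraid (path assumed).
mdcmd set md_write_method 1   # 1 = reconstruct write (turbo)
# ...run the big copy/unzip here...
mdcmd set md_write_method 0   # 0 = back to the default read/modify/write
```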
  8. So I am getting a lot of SHFS high CPU usage when scanning my media with Sonarr/Radarr, etc. Based on my research, it appears this can happen if you're using an old ReiserFS drive (I have one drive left with that; the rest are XFS). I am currently moving my data to another XFS drive. Then I will stop the array, change the old, now-empty drive to XFS, mount, format, and boom. The question is: what happens to the parity? Will Unraid force a new parity check, or am I good to go? The reason I ask is that this will be a much faster copy if I disable parity (if Unraid is just going to force a parity check anyway), but I also want to avoid another 18-hour parity check. Hopefully that question makes sense.
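The copy step itself is just an rsync between disk shares. A sketch with throwaway directories standing in for the real /mnt/diskN mounts, so it runs safely anywhere (on the array you'd point it at the actual disk paths):

```shell
# Sketch of the disk-to-disk move; on the array the paths would be the real
# disk shares (e.g. /mnt/disk3 -> /mnt/disk5). Temp dirs here stand in.
SRC=$(mktemp -d)   # stands in for the old ReiserFS disk
DST=$(mktemp -d)   # stands in for the new XFS disk
echo "movie data" > "$SRC/film.mkv"
# -a preserves permissions/ownership/times, -X keeps extended attributes
rsync -aX "$SRC/" "$DST/"
ls "$DST"
```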
  9. .....Frank, that may very well be it.... I am caching 83 TB on 16 GB of memory, with 8 GB of that dedicated to a Plex transcode ramdisk. Is it possible to check this?

     top - 12:01:28 up 2 days, 22:15, 3 users, load average: 6.91, 6.34, 4.82
     Tasks: 388 total, 2 running, 386 sleeping, 0 stopped, 0 zombie
     %Cpu(s): 90.3 us, 2.1 sy, 4.2 ni, 3.4 id, 0.0 wa, 0.0 hi, 0.1 si, 0.0 st
     MiB Mem : 15732.1 total, 156.5 free, 5446.3 used, 10129.3 buff/cache
     MiB Swap: 0.0 total, 0.0 free, 0.0 used. 6932.5 avail Mem

       PID USER    PR  NI    VIRT    RES   SHR S  %CPU %MEM    TIME+ COMMAND
     11058 nobody  20   0 1195292 359328 18088 R 370.0  2.2 23:59.63 Plex Transcoder
      6044 nobody  20   0   17932   1444   672 S   2.3  0.0  1:08.15 EasyAudioEncode
      5440 nobody  20   0  753832  50128  9584 S   1.7  0.3  4:13.90 Plex Transcoder
      4001 nobody  20   0 2779608   1.4g 14600 S   1.3  9.1 37:09.63 Plex Media Serv
     32029 root    20   0  839180  31592 17996 S   1.3  0.2 54:03.57 containerd
     24305 root    20   0 1596456 348108   992 S   1.0  2.2  2370:41 shfs
     11340 root    20   0  152380  33148 27360 S   0.7  0.2  0:00.02 docker
     32013 root    20   0 1260828  73996 43840 S   0.7  0.5 34:43.80 dockerd
        10 root    20   0       0      0     0 I   0.3  0.0  2:21.84 rcu_sched
      2465 root    20   0  109104   7948  5184 S   0.3  0.0  5:00.19 containerd-shim
      2610 root    20   0  109104   9960  4864 S   0.3  0.1  0:56.35 containerd-shim
      3305 root    20   0  109104   9960  4800 S   0.3  0.1  1:01.88 containerd-shim
      8343 root    20   0  105276  13532  7504 S   0.3  0.1  0:00.03 php-fpm
     11292 root    20   0    6724   3204  2432 R   0.3  0.0  0:00.01 top
     23070 root    20   0  349176   4260  3404 S   0.3  0.0  8:11.76 emhttpd
     23836 root    20   0       0      0     0 S   0.3  0.0  2:05.27 unraidd1
     23951 root    20   0    3792   2736  2464 S   0.3  0.0  0:47.52 diskload
     31975 root     0 -20       0      0     0 S   0.3  0.0  8:15.51 loop2
     32727 root    20   0   30276  17024  1988 S   0.3  0.1  0:24.80 supervisord
         1 root    20   0    2468   1732  1620 S   0.0  0.0  0:24.17 init
         2 root    20   0       0      0     0 S   0.0  0.0  0:00.04 kthreadd
         3 root     0 -20       0      0     0 I   0.0  0.0  0:00.00 rcu_gp
         4 root     0 -20       0      0     0 I   0.0  0.0  0:00.00 rcu_par_gp
         6 root     0 -20       0      0     0 I   0.0  0.0  0:00.00 kworker/0:0H-kblockd
         8 root     0 -20       0      0     0 I   0.0  0.0  0:00.00 mm_percpu_wq
         9 root    20   0       0      0     0 S   0.0  0.0  0:31.88 ksoftirqd/0
        11 root    20   0       0      0     0 I   0.0  0.0  0:00.00 rcu_bh
        12 root    rt   0       0      0     0 S   0.0  0.0  0:01.06 migration/0
        13 root    20   0       0      0     0 S   0.0  0.0  0:00.00 cpuhp/0
        14 root    20   0       0      0     0 S   0.0  0.0  0:00.00 cpuhp/1
        15 root    rt   0       0      0     0 S   0.0  0.0  0:01.17 migration/1
        16 root    20   0       0      0     0 S   0.0  0.0  0:31.58 ksoftirqd/1
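On the "is it possible to check this" question: the buff/cache column of free (or /proc/meminfo directly) shows how much RAM is going to the page cache and tmpfs, versus applications. A quick sketch:

```shell
# Quick check of where the RAM is going: `free` splits memory into used vs.
# buff/cache (page cache plus tmpfs, e.g. a transcode ramdisk).
free -m
# Same numbers straight from the kernel; Shmem includes tmpfs usage:
grep -E 'MemTotal|MemFree|^Cached|Shmem' /proc/meminfo
```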
  10. Is there any way to troubleshoot this plugin? I have it running, but none of my drives spun down after 3 hours. The only thing being accessed is my SSD/cache pool for Plex.
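One way to troubleshoot is to find which processes are holding files open on each array disk, keeping it awake (the same idea as `fuser -vm /mnt/diskN` or `lsof +D /mnt/diskN`). A sketch that scans /proc directly, with a temp dir standing in for a real disk mount so it runs anywhere:

```shell
# Find which processes have files open under a path; on the array you would
# point this at /mnt/diskN. A temp dir stands in here so it runs safely.
DIR=$(mktemp -d)
sleep 30 > "$DIR/held.log" &   # background process holding a file open
HOLDER=$!
# Every open fd is a symlink under /proc/<pid>/fd; list the ones pointing
# into our path (roughly what fuser/lsof do under the hood):
OPENFILES=$(ls -l /proc/[0-9]*/fd 2>/dev/null | grep -F "$DIR" || true)
echo "$OPENFILES"
kill "$HOLDER"
rm -rf "$DIR"
```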
  11. ty johnnie! For many years you have been a heavy hitter here, thank you as always!
  12. Is this bad or normal? icarus-diagnostics-20200407-0908.zip icarus-smart-20200407-0848.zip