OliverRC

Members
  • Posts

    13
  • Joined

  • Last visited


  1. Thanks, I have updated my reply with the steps required. Appreciate the swift response. I will outline the steps to make it clear for the next person.
  2. Okay, that sounds good. Now I am just a little hesitant after destroying my cache the first time.
     Updated: instructions to achieve this, thanks to the help of @JorgeB:
     1. Stop the array
     2. Change the "Cache" / pool name pool to 2 slots
     3. Add the drive to the 2nd slot
     4. Format
     5. Let the BTRFS operation run to completion (RAID 1, the default)
     6. Click "Cache" / pool name under "Pool devices"
     7. Under "Balance Status", open the "Perform full balance" dropdown and select "Convert to single mode"
     8. Let the balance complete
     9. Success, live long and prosper
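For anyone who prefers the command line, the balance in step 7 corresponds roughly to a `btrfs balance` with a convert filter. A dry-run sketch (the pool path `/mnt/cache` is an assumption; set `DRY_RUN=0` to actually execute):

```shell
#!/bin/sh
# Sketch of the CLI equivalent of the GUI "Convert to single mode" balance.
# DRY_RUN=1 (the default) only prints the commands instead of running them.
DRY_RUN=${DRY_RUN:-1}
POOL=${POOL:-/mnt/cache}   # assumed mount point of the pool

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

# Convert data chunks to the single profile (no redundancy) while
# keeping metadata mirrored (raid1) for safety.
run btrfs balance start -dconvert=single -mconvert=raid1 "$POOL"

# Check the resulting profiles afterwards.
run btrfs filesystem df "$POOL"
```

With `DRY_RUN=1` this just prints the two commands, so you can review them before touching a live pool.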
  3. Objective:
     * Create a pool with a 256 GB SSD and a 1 TB NVMe
     * Have both drives in a single pool addressable by shares
     * No redundancy required
     * Maximize space (e.g. 1256 GB being the theoretical max, ignoring FS and RAID overhead)
     How do I achieve this? I have tried and failed miserably once (thankfully I had a backup of `/mnt/cache`). When I added both to a pool, the UI reported the same drive space as the 256 GB SSD. Trying to go back to single-device BTRFS, I managed to break everything. This BTRFS calculator indicates that the RAID 1 configuration means I cannot use ~744 GB of space. Now I am back with two single-device pools, but that is "meh", as a share can only be allocated to a single pool. I'd appreciate some guidance before proceeding and having to spend 2 hrs restoring my server.
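The numbers line up with how btrfs profiles allocate space: raid1 mirrors every chunk across two devices, so usable capacity is capped by the smaller device, while single mode simply adds the sizes. A quick sanity check of the figures above (sizes in GB, ignoring filesystem overhead):

```shell
#!/bin/sh
# Back-of-the-envelope btrfs capacity check for a 2-device pool.
SSD=256      # GB, the SATA SSD
NVME=1000    # GB, the 1 TB NVMe

# raid1 with two devices: every chunk lives on both, so usable
# space equals the smaller device; the rest is unallocatable.
if [ "$SSD" -lt "$NVME" ]; then
    raid1_usable=$SSD
else
    raid1_usable=$NVME
fi
wasted=$(( SSD + NVME - 2 * raid1_usable ))
single_usable=$(( SSD + NVME ))

echo "raid1 usable:  ${raid1_usable} GB"
echo "raid1 wasted:  ${wasted} GB"
echo "single usable: ${single_usable} GB"
```

That wasted figure matches the ~744 GB the calculator reports for raid1, and the single figure is the 1256 GB theoretical max from the objective.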
  4. Amazing, man. I suspected something had changed in the update and my config was old. This saved me a ton of debugging!
  5. +1, and an API for remotely managed servers seems like it should be a core feature!
  6. Cool, I did a spot check comparing the files in /mnt/disk{n} vs /mnt/cache vs /mnt/user, and it seems that the user directory is only looking at the cache, so whatever is on the disks must be old. Thanks for the advice/response @trurl
  7. It would seem as though the files on the disks are old. Perhaps this is why the mover doesn't want to move them? What course of action would be recommended? Go in and remove the old files?
  8. I have a number of files I am trying to consolidate on the cache drive as a performance optimization for Plex (trying to follow these steps). The files are currently distributed across the cache and the disks. I have stopped all containers, disabled Docker, and given ample time for shutdown. Furthermore, when this didn't work, I rebooted the system and made sure no container processes or VMs (I have none) were running. Unfortunately, no matter what I do, it seems the mover is not doing anything. I've run it from the GUI, run it with logging enabled, and tried running it from the terminal. The cache drive has ample free space, and I've tried everything to ensure the files are not being accessed and therefore locked. The only thing I can see is that, when drilling down through the share, some files are marked as "yellow", which I can only presume indicates some permissions issue. Could this be preventing them from moving? Does the disk indicator in the above screenshot imply the file is duplicated across both cache and disk? The mover completes its execution and does not output anything that I can see, yet the files remain unchanged in my "/appdata" share.
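One thing worth checking is whether files already exist on both the cache and an array disk, since the mover won't overwrite a file that is already present at the destination, which can make it look as if nothing is happening. A small sketch to list such duplicates (the share name and mount points are assumptions; adjust for your system):

```shell
#!/bin/sh
# List files present on the cache AND on any array disk for a share.
# Usage: list_duplicates SHARE [CACHE_MOUNT] [DISK_GLOB]
list_duplicates() {
    share=$1                      # e.g. appdata (assumed share name)
    cache=${2:-/mnt/cache}        # assumed cache mount point
    disks=${3:-/mnt/disk*}        # assumed array disk mounts
    (cd "$cache/$share" 2>/dev/null && find . -type f) | while read -r f; do
        for disk in $disks; do    # unquoted on purpose: expand the glob
            [ -f "$disk/$share/$f" ] && echo "duplicate: $share/${f#./}"
        done
    done
}

# Example (prints nothing if the share has no cache/disk duplicates):
# list_duplicates appdata
```

Any path it prints exists in both places; those are the files the mover will refuse to touch until one copy is removed.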
  9. @chris_netsmart I have been running for a while now since this post, after disabling the DLNA server, with no issues. It looks like there might have been an issue with it. I've not needed to re-enable it, so I cannot confirm whether updates have resolved it.
  10. I upgraded to 8 GB of RAM and no dice. It looks like the DLNA Server in Plex is eating up a huge amount of RAM. Then at some point the system freaks out, goes into page-swapping madness, and kills the CPU and RAM, maxing them out at 100%. I understand what is going on here. top command
  11. Thanks for the response. During this time I was watching something on Plex (1080p). I am quite new to Unraid; what do you think the bottleneck/root cause is? If it is waiting for I/O, I assume this could be a case of the array being too slow to serve the video? That would be odd, as I've had this hardware in a Windows setup and been able to run Ultra HD playback with no issue. It shouldn't be that demanding on disk reads to play back a 1080p movie.
  12. I have been running unRaid for a short time and recently did the update to 6.7.0. Unfortunately, almost once a day the system seems to freak out and go to 100% CPU and 100% RAM. Only a reboot seems to sort it out. It will then run fine for a few hours before starting again.

      Specs (from System Profiler):
      unRAID system: unRAID Server Plus, version 6.7.0
      Model: Custom
      Motherboard: Gigabyte Technology Co., Ltd. - H67M-D2-B3
      Processor: Intel® Core™ i3-2100 CPU @ 3.1 GHz
      HVM: Enabled
      IOMMU: Disabled
      Cache: Internal Cache = 64 kB (max. capacity 64 kB); External Cache = 3072 kB (max. capacity 2048 kB)
      Memory: 4 GB (max. installable capacity 32 GB); A0 = 2048 MB, 1333 MT/s; A2 = 2048 MB, 1333 MT/s
      Network: bond0: fault-tolerance (active-backup), mtu 1500; eth0: 1000Mb/s, full duplex, mtu 1500
      Kernel: Linux 4.19.41-Unraid x86_64
      OpenSSL: 1.1.1b
      P + Q algorithm: 8949 MB/s + 13062 MB/s
      Uptime: 0 days, 19 hours, 27 minutes, 53 seconds

      The dashboard shows 100% CPU across all cores and 100% (99%) RAM. Oddly, the Dynamix System Stats plugin only shows 15% CPU usage. I ran "top" in the terminal and this is the output:

      %Cpu(s):  0.3 us, 12.1 sy,  0.0 ni,  0.1 id, 85.1 wa,  0.0 hi,  2.4 si,  0.0 st
      MiB Mem :   3866.2 total,    202.4 free,   3007.4 used,    656.4 buff/cache
      MiB Swap:      0.0 total,      0.0 free,      0.0 used.     25.7 avail Mem

        PID USER     PR  NI    VIRT    RES   SHR S  %CPU %MEM    TIME+ COMMAND
       3614 root     20   0       0      0     0 R  16.9  0.0 24:46.13 unraidd
       3894 root      0 -20       0      0     0 D  10.3  0.0  3:29.14 loop2
        586 root     20   0       0      0     0 S   5.6  0.0  6:37.58 kswapd0
       2732 root     20   0       0      0     0 I   5.0  0.0  0:17.00 kworker/u16:0-btrfs-endio
      13752 root     20   0       0      0     0 I   2.3  0.0  0:37.84 kworker/u16:7-btrfs-endio
       5321 nobody   20   0 2527920 427988     4 S   2.0 10.8  7:40.29 Plex Media Serv
       4410 root     20   0       0      0     0 I   1.7  0.0  0:25.73 kworker/u16:11-btrfs-endio
      17537 root     20   0       0      0     0 I   1.7  0.0  0:32.31 kworker/u16:1-btrfs-endio
      23464 root     20   0       0      0     0 I   1.7  0.0  0:32.55 kworker/u16:3-btrfs-endio
       5469 nobody   20   0 1884424 121616     4 S   1.0  3.1  2:30.94 mono
       5990 nobody   20   0 2370020 192732     4 S   1.0  4.9  2:34.85 mono
       6119 nobody   20   0 4049524   1.8g     0 S   0.7 48.5 10:53.89 Plex DLNA Serve
      29387 root     20   0       0      0     0 I   0.7  0.0  0:39.92 kworker/u16:10-btrfs-endio
       1206 root      0 -20       0      0     0 I   0.3  0.0  0:02.14 kworker/3:1H-kblockd
       3447 root     20   0  281516   3580  2892 S   0.3  0.1  3:12.59 emhttpd
       3798 root     20   0 1024204  22248   812 S   0.3  0.6 31:27.85 shfs
       5775 nobody   20   0 1760460  46560     0 S   0.3  1.2  1:54.39 sabnzbdplus
       6021 nobody   35  15 1710952  43756     0 S   0.3  1.1  1:21.96 Plex Script Hos
       6610 nobody   20   0  301592  63836     0 D   0.3  1.6  0:11.73 Plex Media Scan
      10845 root     20   0    8784   3240  2348 R   0.3  0.1  0:00.03 top
          1 root     20   0    2460   1600  1500 S   0.0  0.0  0:10.33 init
          2 root     20   0       0      0     0 S   0.0  0.0  0:00.01 kthreadd
          3 root      0 -20       0      0     0 I   0.0  0.0  0:00.00 rcu_gp
          4 root      0 -20       0      0     0 I   0.0  0.0  0:00.00 rcu_par_gp
          6 root      0 -20       0      0     0 I   0.0  0.0  0:00.00 kworker/0:0H-kblockd
          8 root      0 -20       0      0     0 I   0.0  0.0  0:00.00 mm_percpu_wq
          9 root     20   0       0      0     0 S   0.0  0.0  0:01.44 ksoftirqd/0
         10 root     20   0       0      0     0 I   0.0  0.0  0:24.20 rcu_sched
         11 root     20   0       0      0     0 I   0.0  0.0  0:00.00 rcu_bh
         12 root     rt   0       0      0     0 S   0.0  0.0  0:00.04 migration/0
         13 root     20   0       0      0     0 S   0.0  0.0  0:00.00 cpuhp/0
         14 root     20   0       0      0     0 S   0.0  0.0  0:00.00 cpuhp/1
         15 root     rt   0       0      0     0 S   0.0  0.0  0:00.03 migration/1
         16 root     20   0       0      0     0 S   0.0  0.0  0:01.78 ksoftirqd/1

      I use the server for files plus Sonarr, SABnzbd, Radarr, and Plex. I know 4 GB of RAM is on the low side, but I ran this setup on Windows 10 before and it was fine; I can't imagine unRaid needing more RAM than that, although I plan on upping to 8 GB. I've uploaded a diagnostics dump of when this happened. oliver-server-diagnostics-20190519-1207.zip
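A couple of things stand out in that top output: 85.1 wa means the CPUs are mostly waiting on I/O, swap is 0 total, and Plex DLNA Serve alone holds 48.5 %MEM, so with no swap to spill into, kswapd0 ends up thrashing the page cache. To confirm which processes are holding the memory at any moment, a snapshot sorted by resident set size helps (`--sort` is procps/GNU ps syntax):

```shell
#!/bin/sh
# Show the five biggest memory consumers by resident set size (RSS).
# On the server above this would put "Plex DLNA Serve" near the top.
ps aux --sort=-rss | head -n 6
```

Running this while the symptoms occur (rather than after a reboot) shows whether the DLNA server is the one ballooning.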