AeonLucid

Everything posted by AeonLucid

  1. Issue is still occurring on 6.11.0-rc3.
  2. I understand, but I can easily comment them out, reboot, and go back to stock. There is no intent to expose this to the internet or to use the Unraid remote access solution. I just want to use SSL with a local IP address, which is properly supported by SSL certificates. Yes, I know that, which is why I am forcing the nginx configuration to use the IP address as the server_name that I have configured inside my SNI certificate:

        authorityKeyIdentifier=keyid,issuer
        basicConstraints=CA:FALSE
        keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
        subjectAltName = @alt_names

        [alt_names]
        IP.1 = 192.168.1.2

     There is no option to "provide your own self-signed" certificate, which is why I had to patch the nginx script. It does, now. I want to use a local IP address.
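     A minimal sketch of how a self-signed certificate with an IP SAN like the one above could be generated, assuming OpenSSL 1.1.1+; the file names (unraid.key, unraid.crt) are placeholders, and treating the bundle as "key followed by certificate" is an assumption, not something confirmed by this post:

        # Create a self-signed certificate whose SAN is the LAN IP address.
        # -addext requires OpenSSL 1.1.1 or newer; replace the IP with yours.
        openssl req -x509 -newkey rsa:4096 -nodes -days 825 \
          -subj "/CN=192.168.1.2" \
          -addext "subjectAltName = IP:192.168.1.2" \
          -keyout unraid.key -out unraid.crt

        # Assumed bundle layout: private key first, then the certificate.
        cat unraid.key unraid.crt > Hostname_unraid_bundle.pem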
  3. Progress

     I have made some progress on my issue. What helped was understanding what iowait means and how it occurs: it happens when something needs I/O on a disk that is already under full load from another process, forcing it to wait. This gave me the idea to split my cache pool into two:

     - Cache pool (a) is used for appdata / system / domains (VMs).
     - Cache pool (b) is used for all shares that will eventually get moved to the array, such as Plex media / downloads.

     Doing this has fixed my issue of Plex (or other containers) becoming unresponsive when media is downloaded, because it is now a completely separate disk.

     Remaining issue

     While this has solved some issues, I still have problems with the mover. When files are moved from cache pool (b) to my array, reading anything from the array disk that data is being moved to simply does not work until the mover is finished. I tried to fix this by changing the mover CPU & IO priority with my own plugin (a CA Mover Tuning fork); a sketch of that kind of priority change follows below. This seemed to help a little, but after the mover finishes I notice high usage across my entire array (updating parity?) which still causes lock-ups (CPU iowait) until that is finished, and this puts load on all disks. I have already tried both available md_write_method settings; reconstruct write (aka turbo write) finishes the mover faster, but the parity activity afterwards still causes issues.
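     To illustrate the idea of lowering the mover's priority (not the plugin's actual code; the mover path /usr/local/sbin/mover and the priority values are assumptions):

        # Run the mover with the lowest CPU priority (nice 19) and the lowest
        # best-effort I/O priority (ionice class 2, level 7), so reads from the
        # array keep getting serviced while files are being moved.
        nice -n 19 ionice -c 2 -n 7 /usr/local/sbin/mover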
  4. Just found this plugin and it looks great. Has anyone tried it on 6.10.0-rc2? Will it work? Also, am I seeing it right that this is not "tuning" the original mover but rather a replacement for it?
  5. Updating to 6.10.0-rc2 from 6.9.2 broke my SSL and SSH setup.

     My self-signed certificate (for local SSL access) was overwritten by the new automatically generated certificate, and I am now forced to use the hostname + (optional) local TLD, while I want to use IP address access. I am not going to turn off SSL just to be able to reach my server by IP address. My first attempt to replace /boot/config/ssl/certs/Hostname_unraid_bundle.pem with my own bundle (signed by my own root CA, so it is trusted) failed and it got overwritten again. Can I please just use my own stuff?

     Regarding SSH: prior to updating I did migrate to the new way of providing authorized_keys. The file /boot/config/ssh/root/authorized_keys does contain my public keys, and I confirmed ~/.ssh/authorized_keys does as well. However, when I try to connect as before I get "Server refused our key".

     Edit: Using this comment by @maxstevens2 I created and put the following into my /boot/config/go file to disable the SSL certificate bundle overwrite, and I also added something to get IP address access with SSL back. If you copy this, don't forget to replace the IP with yours.

        # Patch certificate bundle overwrite.
        sed -i 's/\[\[ \$SUBJECT != \$LANFQDN ]]/# Patched out by go script/g' /etc/rc.d/rc.nginx

        # Patch hostname redirect.
        sed -i 's/server_name \$LANFQDN;/server_name \$LANFQDN 192.168.1.2;/g' /etc/rc.d/rc.nginx

     Edit 2: Updating my SSH client fixed the SSH issues.
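     A quick way to check which certificate nginx is actually serving on the LAN IP after the patch (a generic OpenSSL check, assuming OpenSSL 1.1.1+ for the -ext option; replace the IP with yours):

        # Fetch the served certificate and print its subject and SAN entries.
        openssl s_client -connect 192.168.1.2:443 -servername 192.168.1.2 </dev/null 2>/dev/null \
          | openssl x509 -noout -subject -ext subjectAltName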
  6. I'm having some trouble with Unraid and I'm hoping that someone can assist me; it is related to high cpu_iowait on the cache pool.

     I have a share that is used by sabnzbd / sonarr / radarr / plex, containing all my media files. This share has "cache pool" set to "Yes", so the mover will later put the files on the array. The appdata and system shares are both on the same cache pool (as the media), set to "Prefer". My array consists of all 8TB WD Red drives and my cache pool is a Samsung 860 EVO 2TB. Both are XFS encrypted. I first noticed this issue when I had 2x 2TB SSDs with btrfs encrypted, but in other threads people with a similar issue suggested trying XFS, which did not fix it. At this time I am still on XFS encrypted for both my array and the cache pool.

     Sabnzbd has a volume mapping /mnt/user/plex/usenet to /storage/usenet:

     - /storage/usenet/incomplete
     - /storage/usenet/completed

     Sonarr has a volume mapping /mnt/user/plex/ to /storage:

     - /storage/media/TVShows/Someshow

     So an import moves downloaded media from /storage/usenet/completed to /storage/media/TVShows/Someshow, which on the host translates to /mnt/user/plex/usenet/completed to /mnt/user/plex/media/TVShows/Someshow. A sketch of these mappings follows after this post.

     When I download a season with Sonarr, I notice that after a download completes in sabnzbd and gets imported, the cpu_iowait goes up to 30% or higher and Docker containers start to freeze. Also, when the mover runs, moving all the episodes from the cache to the array, it freezes everything and Plex direct play stops working as well. High cpu_iowait in both cases. This is not the only situation in which the cpu_iowait goes up: for example, when I write a 13GB file to the same share using Samba, the same thing happens. If you are playing a movie on Plex that is located on the array, playback stops and starts buffering. Every container becomes really unresponsive.
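     For clarity, a sketch of the two volume mappings described above as plain docker run commands (the image names and everything except the -v mappings are placeholders, not the actual container setup):

        # Placeholder images; only the volume mappings mirror the setup above.
        docker run -d --name sabnzbd \
          -v /mnt/user/plex/usenet:/storage/usenet \
          sabnzbd-image

        docker run -d --name sonarr \
          -v /mnt/user/plex/:/storage \
          sonarr-image

        # With these mappings, Sonarr sees both the completed downloads
        # (/storage/usenet/completed) and the media library (/storage/media/...)
        # under a single mount backed by the same /mnt/user/plex share.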
  7. I have a very weird issue with this container and the binhex-delugevpn container. I have always downloaded at full speed using NordVPN config files, but for the past few days I have noticed that the speed randomly stays at exactly 1MB/s. That by itself is not a big deal, but when it happens, my entire network has a ping of 1000ms. I have never had that issue when downloading at full speed. When I download a large file (10GB) to my cache disk using SSH, the speed reaches my maximum and the ping stays normal. Does anyone know what could be causing this?