Leaderboard
Popular Content
Showing content with the highest reputation on 04/16/21 in all areas
-
Original comment thread where the idea was suggested by reddit user /u/neoKushan: https://old.reddit.com/r/unRAID/comments/mlcbk5/would_anyone_be_interested_in_a_detailed_guide_on/gtl8cbl/

The ultimate goal of this feature would be to create a 1:1 map between Unraid docker templates and docker-compose files. This would allow users to edit a container as either a compose file or a template, and backing up and keeping revision control of the template would be simpler, since it would simply be a docker-compose file. I believe the first step is changing the Unraid template structure to use docker-compose labels for all the template metadata that doesn't already have a 1:1 mapping to docker-compose: items such as WebUI, Icon URL, Support Thread, Project Page, CPU Pinning, etc. Most of the meat of these templates is a more or less direct transcription of docker-compose put into a GUI format, so I don't see why we couldn't take advantage of that by allowing users to edit and back up the compose file directly.
4 points
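To make the goal concrete, here is a hypothetical sketch of a template expressed as a compose file. The webui and icon label names follow the net.unraid.docker.* convention Unraid already uses for container labels; the support label, the service and all URLs are made up for illustration:

```yaml
# Hypothetical sketch only: Unraid template metadata carried as
# docker-compose labels. The webui/icon label names already exist
# as Unraid container labels; "support" and the service itself are
# placeholders invented for this example.
services:
  myapp:
    image: example/myapp:latest
    ports:
      - "8080:8080"
    volumes:
      - /mnt/user/appdata/myapp:/config
    labels:
      net.unraid.docker.webui: "http://[IP]:[PORT:8080]/"
      net.unraid.docker.icon: "https://example.com/icon.png"
      net.unraid.docker.support: "https://forums.unraid.net/..."  # hypothetical
```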
-
Yeah, I too have been playing with the alternative way of generating a token, and it does seem more stable, so I have made the change for both port assignment and WireGuard, as both require tokens and both are failing: one when connecting to 10.0.0.1 and the other when connecting to the metadata server on certain endpoints. So it definitely still looks like a PIA outage of some type, but the alternative method of hitting https://privateinternetaccess.com/gtoken/generateToken seems to be working OK, so I have switched over to that. Just generating a test image now...

Edit: the test went well and a new prod image is built; please pull it down and try it at your convenience.
3 points
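For reference, the switch amounts to requesting the token from the generateToken endpoint instead of 10.0.0.1. A minimal sketch, not the container's actual code: PIA_USER/PIA_PASS stand in for your own credentials, and the JSON shape of the reply is an assumption.

```shell
#!/bin/sh
# Hedged sketch of the alternative token method. Credentials and the
# exact JSON reply shape are assumptions; the real image handles
# retries and error cases beyond this.
extract_token() {
  # pull the "token" field out of a JSON reply without needing jq
  sed -n 's/.*"token"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p'
}
# resp=$(curl -s -u "$PIA_USER:$PIA_PASS" \
#        "https://privateinternetaccess.com/gtoken/generateToken")
# token=$(printf '%s' "$resp" | extract_token)
```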
-
@bastl ...all good, it wasn't meant as criticism either. I have my own problems with it, especially with FreeBSD/*Sense... I just can't get that way of thinking into my head. 😇
2 points
-
It's literally in the post before yours. WITH PICTURES. Please, please, please people: if you need FREE support for a FREE product, at least TRY to solve it yourself or read/search this forum. C'mon.
2 points
-
New Update 2021.04.16

Added a "Script to run before mover" text field. Enter the path to a custom script to run before mover starts.
Added a "Script to run after mover" text field. Enter the path to a custom script to run after mover finishes.

These will ALWAYS run, even if the filters remove all possible files (i.e. "Mover not needed"). I plan to change this in the future to only run if something is found, but that is months away at the earliest. You should probably stay away from spaces in the path/filename; I only tested with a simple "Hello World" script.

Next up: adding a Ctime/Mtime option.
2 points
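A minimal pre-mover script, just to see the hook fire, might look like this. The log path and the commented docker command are examples, not part of the plugin:

```shell
#!/bin/sh
# Example "Script to run before mover"; everything here is illustrative.
# docker stop some-container   # e.g. pause a container that writes heavily
echo "mover starting: $(date '+%F %T')" >> /tmp/premover.log
```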
-
In principle it will work, but how far the performance goes depends heavily on your usage. Keep in mind that this hardware will be pushed hard for throughput in the 1 Gbps range. This box has only 2x LAN and no slot for more, so throughput on the LAN side is already limited to one interface... VLAN routing has to pass through it too, in addition to internet access from clients on the LAN. With a 1 Gbps internet connection the system will therefore already have a bottleneck. I would recommend one with a minimum of 3x LAN/Ethernet, better 4+... for gigabit throughput on all channels, at least one real 2 GHz core per port, plus reserves for the firewall. The LAN chipset here is a Realtek... I would always go with Intel; performance/CPU load is usually better there too. I haven't checked whether the CPU supports AES... for VPN and the like I would pay attention to that. Other platforms are a dime a dozen: on eBay, Amazon, the forty thieves (Alibaba), or here: https://www.ipu-system.de/ I have this one https://www.ipu-system.de/produkte/ipu662.html in my holiday flat, even running unRaid with OPNsense in a VM.

Edit: by the way, I have moved away from my OPNsense setup at home. I don't need IDS/IPS, and the BSD firewall rules just won't fit into my head... I switched to MikroTik (Linux) and now use this device: https://geizhals.de/mikrotik-routerboard-rb4011-router-rb4011igs-rm-a1923183.html ... with the 10G uplink into my central switch I have no problems, and the thing runs circles around my DIY build based on an i3-8100 (which ran both OPNsense and RouterOS, with 8 GB RAM and an I350 quad NIC... on the 1 Gbps internet connection, access through a VLAN kept two cores about 70% busy; on the RB4011 one core goes up by 2-5%). Power draw is 7 W max... the i3 never ran below 12 W.

The i3 system is now my 24/7 unRaid server... it also runs the Dockers, like Pi-hole, ZeroTier, OVPN, ...
2 points
-
Turns out this is pretty simple to implement. It requires editing core files, which means it'll probably be lost one day on upgrades. I have no idea if this can be done with a community app, but it is at least a beginning for those of us who are used to docker-compose but still want to use the dynamix docker manager web interface. Tested in version 6.9.2. I take no responsibility if you break something; make sure you have a backup of this file before you begin. It would be great to see this included in the core of Unraid since it's such a simple addition.

Edit /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php

Look for:

public function getAllInfo($reload=false) {

and at the very end of the foreach add:

if ($ct['Icon']) $tmp['icon'] = $ct['Icon'];
if ($ct['url']) $tmp['url'] = $ct['url'];

Look for:

public function getDockerContainers() {

and inside the foreach, beneath the line containing $c['BaseImage'], add:

$c['Icon'] = $info['Config']['Labels']['net.unraid.docker.icon'] ?? false;
$c['url'] = $info['Config']['Labels']['net.unraid.docker.webui'] ?? false;

Clear your browser cache and reload the Unraid web UI. Icons and WebUI links from Unraid templates still work, and those from docker container labels now work as well.
2 points
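The same two edits, written as a rough unified diff (hunk positions are approximate and context lines abbreviated; check them against your own 6.9.2 copy of the file):

```diff
--- a/usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php
+++ b/usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php
@@ public function getAllInfo($reload=false) {
         // ... at the very end of the foreach:
+        if ($ct['Icon']) $tmp['icon'] = $ct['Icon'];
+        if ($ct['url'])  $tmp['url']  = $ct['url'];
@@ public function getDockerContainers() {
         // ... beneath the line containing $c['BaseImage']:
+        $c['Icon'] = $info['Config']['Labels']['net.unraid.docker.icon'] ?? false;
+        $c['url']  = $info['Config']['Labels']['net.unraid.docker.webui'] ?? false;
```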
-
The following script creates incremental backups using rsync. Check the settings to define your own paths. Donate? 🤗

https://codeberg.org/mgutt/rsync-incremental-backup > incbackup.sh

Explanations

- All created backups are full backups with hardlinks to already existing files (~ incremental backup).
- All backups use the most recent backup to create hardlinks or new files. Deleted files are not copied (1:1 backup).
- There are no dependencies between the most recent backup and the previous backups. You can delete as many backups as you like; all backups that are left are still full backups. This can be confusing, as most incremental backup software needs the previous backups for restoring the data, but that is not the case for rsync and hardlinks. Read here if you need more information about links, inodes and files.
- After a backup has been created, the script purges the backup dir and keeps only the backups of the last 14 days, 12 months and 3 years, which can be defined through the settings.
- Logs can be found inside each backup folder.
- Sends notifications after job execution.
- Unraid exclusive: stops docker containers if the source path is the appdata path, to create consistent backups.
- Unraid exclusive: creates a snapshot of the docker container source path before creating a backup of it. This allows an extremely short downtime of the containers (usually only seconds).

How to execute this script?

- Use the User Scripts plugin (Unraid Apps) to execute it on a schedule.
- Use the Unassigned Devices plugin (Unraid Apps) to execute it after mounting a USB drive.
- Call the script manually (example: /usr/local/bin/incbackup /mnt/cache/appdata /mnt/disk6/Backups/Shares/appdata)

What does a backup look like?

This is how the backup dir looks after several months (it kept the backups of 2020-07-01, 2020-08-01 ... and all backups of the last 14 days). And as it's an incremental backup, the storage usage is low (as you can see, I bought new music before "2020-08-01" and before "2020-10-01"):

du -d1 -h /mnt/user/Backup/Shares/Music | sort -k2
168G    /mnt/user/Backup/Shares/Music/20200701_044011
4.2G    /mnt/user/Backup/Shares/Music/20200801_044013
3.8M    /mnt/user/Backup/Shares/Music/20200901_044013
497M    /mnt/user/Backup/Shares/Music/20201001_044014
4.5M    /mnt/user/Backup/Shares/Music/20201007_044016
4.5M    /mnt/user/Backup/Shares/Music/20201008_044015
4.5M    /mnt/user/Backup/Shares/Music/20201009_044001
4.5M    /mnt/user/Backup/Shares/Music/20201010_044010
4.5M    /mnt/user/Backup/Shares/Music/20201011_044016
4.5M    /mnt/user/Backup/Shares/Music/20201012_044020
4.5M    /mnt/user/Backup/Shares/Music/20201013_044014
4.5M    /mnt/user/Backup/Shares/Music/20201014_044015
4.5M    /mnt/user/Backup/Shares/Music/20201015_044015
4.5M    /mnt/user/Backup/Shares/Music/20201016_044017
4.5M    /mnt/user/Backup/Shares/Music/20201017_044016
4.5M    /mnt/user/Backup/Shares/Music/20201018_044008
4.5M    /mnt/user/Backup/Shares/Music/20201018_151120
4.5M    /mnt/user/Backup/Shares/Music/20201019_044002
172G    /mnt/user/Backup/Shares/Music

Warnings

- It's not the best idea to back up huge files that change often, like disk images, as the whole file will be copied. A file change while rsync is copying it will produce a corrupted copy, as rsync does not lock files. If you want to back up a VM image file, for example, stop the VM first (to avoid further writes) before executing this script!
- Never change a file which is inside a backup directory. This changes that file in all backups (this is how hardlinks work)!
- Do not use NTFS or other partition formats which do not support hardlinks and/or Linux permissions. Format external USB drives with BTRFS and install WinBTRFS if you want to access your backups through Windows.
- Do NOT use the docker safe perms tool if you back up the appdata share to the array. It would change all file permissions so they can no longer be used by your docker containers. Docker safe perms skips only /mnt/*/appdata, not, for example, /mnt/disk5/Backups/appdata!
1 point
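The hardlink mechanism the script relies on can be sketched with plain rsync. This is a simplified illustration, not the script itself; the real script adds purging, logging, notifications and the Unraid specifics, and the paths below are examples:

```shell
#!/bin/sh
# Sketch of incremental backups via rsync hardlinks: --link-dest makes
# unchanged files hardlinks into the previous snapshot, so every
# snapshot is browsable as a full backup. Paths are examples only.
make_snapshot() {
  src="$1"; prev="$2"; dest="$3"
  if [ -d "$prev" ]; then
    rsync -a --delete --link-dest="$prev" "$src" "$dest"
  else
    rsync -a --delete "$src" "$dest"   # first run: plain full copy
  fi
}
# Example (hypothetical paths):
# make_snapshot /mnt/cache/appdata/ \
#   /mnt/disk6/Backups/appdata/20210415_044000 \
#   "/mnt/disk6/Backups/appdata/$(date +%Y%m%d_%H%M%S)"
```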
-
I managed to get mine working again by commenting out those values from your log, replacing the first one with the tls.skip_verify value below, and adding true or false depending on your setup:

#trusted_cert: ""
#disable_verify_cert:
tls.skip_verify:

Hope this helps.
1 point
-
Good point. "VPN Tunneled Access" is one of the options in the "Peer type of access" dropdown, so it made sense at the time; I renamed the thread to make it clearer.
1 point
-
Hopefully clocking the RAM slower will help. Many people do not realise that the CPU/motherboard combination can impose a lower safe clock speed than the RAM is rated for, and that the number of RAM slots in use can also impose limits.
1 point
-
Okay, so I upgraded again through the WebUI so as to provide new diagnostic configs. I updated, rebooted, and now everything is working okay. I'm not sure what happened the first time around, but thank you all for your help.1 point
-
According to your screenshot, Cache is showing as sdd. The other one is showing as sdc.
1 point
-
The cause is corruption on the cache drive. Wait for the BTRFS god ( @JorgeB) to pipe in.1 point
-
It will error for USB drives not in drivedb.h. Your drive is fine as it's in the db, but if others use this they should test first. It's simple to get devices added, but a request needs to be submitted to the smartmontools team. There is a process to update drivedb.h, but I don't think it's available in Unraid; I will need to check.
1 point
-
To clarify, you posted in the thread that explains how to connect to commercial VPN providers. Is that what you are trying to do? If you are trying to connect from outside your home into your Unraid system please post this question here: https://forums.unraid.net/topic/84229-dynamix-wireguard-vpn/ I don't know if the plugin supports your use case or not1 point
-
It sounds like you enabled Local SSL access. This is a requirement if you want to use the Remote Access feature of My Servers, but if you don't need Remote Access then Local SSL access is not required. To disable Local SSL Access, the first step would be to go to Settings -> Management Access -> Unraid.net and set Remote Access to No. Then on Settings -> Management Access set "Use SSL/TLS" to "No"1 point
-
Doh! I spent 2 hours looking for how to do that...got it and it works! Thanks!1 point
-
I have an old SSD in there and the CPU mentioned above.
SSD approx. 1 W
CPU approx. 15-20 W
idle approx. 20 W
Purchase cost, with fans at 15 euros each: 120 euros
1 point
-
I would now rerun it removing the -n (no modify) flag to see if that has helped. You might want to also check if a lost+found folder has been created on the drive from files whose name could not be resolved.1 point
-
I don't see how this could work, just run another check using the GUI to make sure all is well.1 point
-
The fact that the drive is empty should be irrelevant to a parity check/rebuild - it simply processes every sector on the drive in turn regardless of its contents.1 point
-
Did you run xfs_repair from the GUI or the command line? If the command line, exactly what device name did you use? Was the parity check you ran correcting or non-correcting? If correcting, then the next one should show 0 errors. If it was non-correcting, then you need to run a correcting check to get rid of the errors; be aware that the correcting check will then show the same number of errors, as unRaid misleadingly reports each correction as if it were an error in the summary (but the syslog shows them being corrected).
1 point
-
There is: https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=480421
1 point
-
That's normal with different-size devices in RAID1: used space is wrong, but unlike before, free space is correct, and that's much more important.
1 point
-
That is far too little. Could it be that one of the HDDs uses SMR? If so: straight into the bin with it ^^

If you put them into the array, that result is also completely logical. Besides the cache pool you need to add another pool. Call it e.g. "vm", build it as a BTRFS RAID1, create a share on it with the cache option "Only", and put your VM disks there. Now you have a RAID independent of the array with no bottleneck (apart from the virtual disk itself). So you don't need ZFS at all.

Alternative: leave both NVMes out of the array and pool and assign one physically to each VM. That also removes the bottleneck of the virtual disk driver, but has the disadvantage that you can only use one VM per NVMe. Backups would then have to be made inside the guest operating system itself, just like on a normal client. There is even a trick to open such an NVMe in Unraid while the VM is off:
1 point
-
Mellanox ConnectX-3 cards don't run very hot, but they still need some cooling over them, or at least a case with good airflow.
1 point
-
Thanks, I did some googling and found someone with similar issues; their conclusion was power as well. I do have a 2-way Molex power splitter splitting the SATA fan-out power with a fan controller from AliExpress, so it could be either of those causing issues. I've disconnected the fan controller for now and will monitor for further problems.
1 point
-
In the GUI go to the Settings -> Management Access page and set the Use SSL/TLS option to "No".1 point
-
If you're looking for a good pool that has lower payout fees, a slightly lower pool fee and is just awesome, try https://monero.hashvault.pro/ (I don't have any affiliation with them; I would if it were possible 😋). They have a 0.9% pool fee instead of 1% for Nanopool. The minimum payout is 0.001 XMR, and you don't need to wait ages before changing it if you made a mistake. Lots of cool graphics. You know when a block is found and how much your reward is... and many, many more!

Some changes from Nanopool for this CA app template. If you need a worker name to differentiate your machines, add this in the extra parameters; the actual "Worker Name" field won't work:

--pass WORKER_NAME

Since all pools use TLS encryption, also add this to the extra parameters:

--tls --tls-fingerprint 420c7850e09b7c0bdcf748a7da9eb3647daf8515718f36d9ccfdd6b9ff834b14

Pool addresses: higher ports get higher difficulties, so choose the right one according to your hashrate. If you choose a lower difficulty, you will get paid less per share found, so pick the one that suits your rig. More info about difficulty:

pool.hashvault.pro:80 (~1 kH/s)
pool.hashvault.pro:443 (~1 kH/s)
pool.hashvault.pro:3333 (~2 kH/s)
pool.hashvault.pro:5555 (~6 kH/s)
pool.hashvault.pro:7777 (~12 kH/s)
pool.hashvault.pro:8888 (~60 kH/s)

* 1 kH/s = 1000 H/s
* You can check the actual number of miners under the "Ports" tab of the website. The first two pools recommend the same hash rate, but port 80 has almost double the miners of port 443.

I found the following on Investopedia, so do what you think is best for you. Happy mining 😁
1 point
-
Hello mgutt. Many thanks for the explanations, that helped a lot! Just for your information: until now I edited externally and pasted the finished text in here, which is why I never saw a selection box at @mgutt either. Good night.
1 point
-
I'd recommend starting your own thread here: https://forums.unraid.net/forum/51-vm-engine-kvm/ Be sure to upload your diagnostics (from Tools -> Diagnostics)1 point
-
There are issues with certain PIA endpoints; switch to another. Sweden works, but I'm sure there are others.
1 point
-
I think some of the PIA servers are having issues with Wireguard? I usually use Toronto CA, but had to change to Ontario CA to get it to work.1 point
-
Hey @horphi, this is quite difficult to set up, but possible. Ravencoin uses the KawPow algorithm, which is a GPU algorithm. To use XMRig you will need to go on the latest-root tag and re-install it from CA to get the new GPU options, and configure those. Then follow the instructions I'm giving @tsakodim below, but set the Coin variable to an unsupported option like x instead. Then add --algo kawpow to additional arguments and update the pool & wallet details per the Nanopool website.

PS: there is a typo in your Additional Arguments: --random-1gb-pages should be --randomx-1gb-pages

Hey @tsakodim, at the moment there is a hidden variable for COIN. The --algo option wasn't working with the container in 6.10.1, so I hardcoded COIN instead. If you set COIN it will default to the most optimised algorithm for mining that coin. It supports monero, arqma and dero. Like I said to @horphi above, if you set it to an unsupported option (like x) you can effectively disable it and use --algo in additional options instead. Example: edit the container > click Add another Path, Port, Variable, Label or Device > set Config Type to Variable > enter the following and press Add.

Here are my logs starting to mine Ravencoin:

Driver installation finished.
Project: xmrig
Author: lnxd
Base: Ubuntu 20.04
Target: Unraid 6.9.0 - 6.9.2
Donation: lnxd-fee 1%
Driver: 20.20

Running xmrig with the following flags: --url=rvn-au1.nanopool.org:12433 --coin=x --user=84e8UJvXHDGVfE5HZDQfhn3Kh3RGJKebz31G7D4H24TLPMe9x7bQLBw8iyBhNx9USXB8MhvhBe3DyVW1LcuVAf4jBiADNLw.Unraid --randomx-wrmsr=-1 --randomx-no-rdmsr --no-color --algo kawpow --tls --keepalive --opencl

 * ABOUT        XMRig/6.10.0 gcc/9.3.0
 * LIBS         libuv/1.41.0 OpenSSL/1.1.1j hwloc/2.4.1
 * HUGE PAGES   supported
 * 1GB PAGES    disabled
 * CPU          Intel(R) Core(TM) i5-10500 CPU @ 3.10GHz (1) 64-bit AES L2:1.5 MB L3:12.0 MB 6C/12T NUMA:1
 * MEMORY       29.8/31.1 GB (96%)
                DIMM_A1: 8 GB DDR4 @ 2400 MHz KHX3200C16D4/8GX
                DIMM_A2: 8 GB DDR4 @ 2400 MHz KHX3200C16D4/8GX
                DIMM_B1: 8 GB DDR4 @ 2400 MHz KHX3200C16D4/8GX
                DIMM_B2: 8 GB DDR4 @ 2400 MHz KHX3200C16D4/8GX
 * MOTHERBOARD  ASUSTeK COMPUTER INC. - PRIME Z490-P
 * DONATE       1%
 * ASSEMBLY     auto:intel
 * POOL #1      rvn-au1.nanopool.org:12433 algo kawpow
 * COMMANDS     'h' hashrate, 'p' pause, 'r' resume, 's' results, 'c' connection
 * ADL          press e for health report
 * OPENCL       #0 AMD Accelerated Parallel Processing/OpenCL 2.1 AMD-APP (3110.6)
 * OPENCL GPU   #0 05:00.0 Radeon RX 580 Series (Ellesmere) 1200 MHz cu:36 mem:4048/8186 MB
 * CUDA         disabled
[2021-04-12 09:27:58.454] net     use pool rvn-au1.nanopool.org:12433 TLSv1.2 139.99.156.30
[2021-04-12 09:27:58.454] net     fingerprint (SHA-256): "c38886efdee542ebd99801b75c75d3498d97978bbcdec07c7271cb19729e014f"
[2021-04-12 09:27:58.454] net     new job from rvn-au1.nanopool.org:12433 diff 600M algo kawpow height 1707112
[2021-04-12 09:27:58.454] opencl  use profile kawpow (1 thread) scratchpad 32 KB
|  # | GPU |  BUS ID | INTENSITY | WSIZE | MEMORY | NAME
|  0 |   0 | 05:00.0 |   9437184 |   256 |   2884 | Radeon RX 580 Series (Ellesmere)
[2021-04-12 09:27:58.540] opencl  GPU #0 compiling...
[2021-04-12 09:27:58.676] opencl  GPU #0 compilation completed (135 ms)
[2021-04-12 09:27:58.676] opencl  READY threads 1/1 (222 ms)
[2021-04-12 09:27:58.958] opencl  KawPow program for period 569037 compiled (283ms)
[2021-04-12 09:27:59.257] opencl  KawPow program for period 569038 compiled (298ms)
[2021-04-12 09:28:02.113] miner   KawPow light cache for epoch 227 calculated (3149ms)
[2021-04-12 09:28:12.723] opencl  KawPow DAG for epoch 227 calculated (10594ms)
[2021-04-12 09:28:21.413] opencl  accepted (1/0) diff 600M (297 ms)
[2021-04-12 09:28:23.914] net     new job from rvn-au1.nanopool.org:12433 diff 600M algo kawpow height 1707112
[2021-04-12 09:28:32.938] net     new job from rvn-au1.nanopool.org:12433 diff 600M algo kawpow height 1707113
1 point
-
We're working on a design that lets driver plugins be automatically updated when we issue a release.1 point
-
@ich777 will update them when he awakes. He is on the other side of the world1 point
-
Problem with disk8 looks more like a power/connection issue, replace/swap both cables and try again.1 point
-
I replaced these lines in the '/mnt/cache/appdata/nextcloud/nginx/site-confs/default' file (adjust path to your appdata path, if it's different):

location = /.well-known/carddav { return 301 $scheme://$host:$server_port/remote.php/dav; }
location = /.well-known/caldav { return 301 $scheme://$host:$server_port/remote.php/dav; }
location = /.well-known/webfinger { return 301 $scheme://$host:$server_port/public.php?service=webfinger; }
location = /.well-known/host-meta { return 301 $scheme://$host:$server_port/public.php?service=host-meta; }
location = /.well-known/host-meta.json { return 301 $scheme://$host:$server_port/public.php?service=host-meta-json; }

with these lines:

# Make a regex exception for `/.well-known` so that clients can still
# access it despite the existence of the regex rule
# `location ~ /(\.|autotest|...)` which would otherwise handle requests
# for `/.well-known`.
location ^~ /.well-known {
    # The following 6 rules are borrowed from `.htaccess`
    location = /.well-known/carddav { return 301 /remote.php/dav/; }
    location = /.well-known/caldav  { return 301 /remote.php/dav/; }
    # Anything else is dynamically handled by Nextcloud
    location ^~ /.well-known { return 301 /index.php$uri; }
    try_files $uri $uri/ =404;
}

Then I restarted the Nextcloud docker and the error was gone.
1 point
-
That's the same thing, when you run a filesystem check on the GUI it runs xfs_repair (for xfs formatted disks).1 point
-
I found another tweak by accident: direct disk access (bypassing Unraid SHFS).

Usually you set your Plex docker paths as follows: /mnt/user/Sharename. For example, this path for your movies: /mnt/user/Movies, and this path for your AppData config path (which contains the thumbnails, the frequently updated database file, etc.): /mnt/user/appdata/Plex-Media-Server

But instead, you should use this as your config path: /mnt/cache/appdata/Plex-Media-Server

By that you bypass Unraid's overhead (SHFS) and write directly to the cache disk.

Requirements

1.) Create a backup of your appdata folder! You use this tweak at your own risk!

2.) Before changing a path to direct disk access you need to stop the container and wait at least 1 minute, or even better, execute this command to be sure that all data has been written from RAM to the drives:

sync; echo 1 > /proc/sys/vm/drop_caches

If you are changing the path of multiple containers, do this every time after you stop a container, before changing its path!

3.) This works only if appdata is already located on your SSD, which happens only if you used the cache modes "Prefer" or "Only".

4.) To be sure that your Plex files are only on your SSD, open "Shares" and press "Compute" for your appdata share. It shows whether your data is located only on the SSD, or on SSD and disk. If it's on the disk too, you must stop the docker engine, execute the mover, and recheck through "Compute" after the mover has finished its work. You cannot change the path to direct SSD access as long as files are scattered, or you will probably lose data!

5.) You should also set a minimum free space in your Global Share Settings for your SSD cache. This setting is only valid for shared access paths and is ignored by the new direct access path. This means it reserves up to 100GB for your Plex container, no matter how many other processes are writing files to your SSD.

What's the benefit?

After setting the appdata config path to direct access, I had a tremendous speed gain while loading covers, using the search function, updating metadata, etc. And it's even higher if you have a low-power CPU, as SHFS produces a high load on single cores.

Shouldn't I update all paths to direct access?

Maybe you are now thinking about changing your movies path as well to allow direct disk access. I don't recommend that, because you would need to add multiple paths for your movies, TV shows, etc., as they are usually spread across multiple disks, like:

/mnt/disk1/Movies
/mnt/disk2/Movies
/mnt/disk3/Movies
...

And if you move movies from one disk to another, or add new disks, this will probably cause errors inside Plex. Furthermore, it complicates moving to a different server if that server uses a different disk order or a smaller number of bigger disks. In short: leave the other shared access paths as they are.

Does this tweak work for other containers?

Yes. It even works for VM and docker.img paths. But pay attention to the requirements (create a backup, flush the Linux write cache, check your file locations, etc.) before applying the direct access path. And consider whether it could be more useful to stay with the shared access path. The general rule is: if a share uses multiple disks, do not change its path to direct access.
1 point
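Requirement 4 can also be spot-checked from the shell. A hedged helper sketch; the mount points passed in are examples, and "Compute" in the GUI remains the authoritative check:

```shell
#!/bin/sh
# List every mount point that holds a copy of the given share.
# If more than one line comes back, run the mover before switching
# the container to a direct /mnt/cache path. Arguments: share name,
# then the parent mount points to scan (example mount points below).
share_locations() {
  name="$1"; shift
  for base in "$@"; do
    [ -d "$base/$name" ] && echo "$base/$name"
  done
}
# Example usage (Unraid paths are assumptions):
# share_locations appdata /mnt/cache /mnt/disk1 /mnt/disk2
```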
-
Hey folks, Sorry to revive this thread, but after a few hours I've finaly succeed to mount my HDDs on two HP H240 HBA (unRAID 6.8.3). Don't know if anyone already gave this tips but I leave this post here so that I can help someone who, like me, following the purchase of an H240 card will not be able to operate it directly because you have to reset and turn on the HBA mode. I came across this topic while looking for a solution to my problem and it gave me a fairly fruitful starting point in my research. So, you, little unRAIDnewbie who's looking to save some bucks and have plenty of SATA ports on your rig, be advised: There's really no big deal when you've got the right tool. I've seen a few people talking about the HP-SSA but this is not the one. You've must try the Service Pack for Proliant (SPP). For those who struggling to find the software to be used, I strongly recommend you googling this version .. "SPP2017101" (you'll know where to find it;) - Mount it with the HP USB creator included in ISO - Boot on the USB key - Wait a bit .. - Agree to the terms and select next, - At this point, you can choose to update the firmwire on the left, and manage your array on the right. - When you go the management tab, you'll see your card on the left side, click on it (don't mind the warning, it's only if an array is mounted). - Normaly there's only two option: Power Management / Switch to HBA mode - Select the obvious one, I've not tried the firmware update, feel free to do it. - Try to reboot the soft but if you got stuck after quite a time, hard reset it .. Et voilà ! Hope this can help a few 😃 Keko1 point