chris_netsmart Posted August 15, 2019 I have a WD Black 250 GB High-Performance NVMe Internal SSD which is about 6 months old, and I have a few questions. When I had a look at it this morning I discovered that I have used about 15% of its rated endurance:

- Data units read: 180,339,340 [92.3 TB]
- Data units written: 61,966,434 [31.7 TB]

which is shocking, as I only use it for my Unraid storage, Plex, Deluge etc. My thinking is that some of my films / TV shows are in MP4 or AVI format rather than MKV, and need converting over to prevent transcoding within Plex. So I have modified my HandBrake docker to point to two folders, "Source" and "Handbrake_out", and set both shares to "Disk 1 Only with no Cache". If I do this, would it reduce the writes and make my SSD live longer?
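For context on those counters: NVMe SMART "Data Units" are counted in blocks of 1000 x 512 bytes = 512,000 bytes, so the raw numbers above convert to terabytes as a quick shell check (this is just arithmetic on the reported values, not part of any Unraid tooling):

```shell
# Each NVMe "Data Unit" = 1000 * 512 bytes = 512,000 bytes.
# Convert the Data Units Written counter reported above into TB.
units_written=61966434
tb_written=$(awk -v u="$units_written" 'BEGIN { printf "%.1f", u * 512000 / 1e12 }')
echo "Written: ${tb_written} TB"   # matches the 31.7 TB shown by SMART
```

The same formula applied to the read counter (180,339,340 units) gives the 92.3 TB figure.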
testdasi Posted August 15, 2019 56 minutes ago, chris_netsmart said: When I had a look at it this morning I discovered that I have used about 15% ... if I do this, would it reduce the writes and make my SSD live longer?

First and foremost, don't panic. 15% over 6 months means you still have about 3 years to reach 100%. Even at 100%, your SSD is unlikely to die catastrophically. As far as I know, Intel is the only company that locks their SSDs into read-only mode when all reserve is used up. SSDs tend to fail gracefully, so you essentially only lose capacity as more cells die. Note that only writes are relevant - reads don't wear out your SSD.

In terms of usage, what do you mean by "Unraid storage"? If you are using your 250GB SSD as a write cache then you really need to think carefully about which shares need Cache = Yes. Most of the time, you can bypass the cache.

Depending on how much RAM you have, you can move transcoding to RAM. To prevent Plex transcodes from spamming your RAM and causing OoM issues, create this script to run at first array start:

#!/bin/bash
mkdir /tmp/PlexRamScratch
mount -t tmpfs -o size=4g tmpfs /tmp/PlexRamScratch

Then create a new mapping in your Plex docker to map /transcode to /tmp/PlexRamScratch, and change the Plex settings to use /transcode for transcode files. (Hint: change the "size=4g" in the script to more or less depending on how many streams you need. I have found 4g to be sufficient for me, up to five 1080p streams.)

For the HandBrake settings, you only need the output share set to no cache to reduce writes (see above - reads do not wear out your SSD).

Finally, I suggest you have a quick look through this topic for a bit more info on tips to make your SSD last longer (e.g. reducing write amplification).
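If the script above ever runs twice (say, re-running it manually after the array has started), the plain mkdir will error because the directory already exists. A slightly hardened sketch of the same script - the mkdir -p and mountpoint check are my additions, not from the post:

```shell
#!/bin/bash
# Create the scratch dir; -p makes this a no-op if it already exists.
mkdir -p /tmp/PlexRamScratch
# Only mount the tmpfs if it is not already mounted, so a second run
# does not stack a fresh (empty) tmpfs on top of the first one.
if ! mountpoint -q /tmp/PlexRamScratch; then
    mount -t tmpfs -o size=4g tmpfs /tmp/PlexRamScratch
fi
```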
chris_netsmart Posted August 15, 2019 Thanks @testdasi for the reply, but I have one question first about the Plex transcoding. I have just looked within my Plex settings, and I see that Host Path 2 is pointing to /tmp. Is this the same type of thing as your script? If not, can you point me to how to add the bash script at boot-up and within Plex?
testdasi Posted August 15, 2019 Share Posted August 15, 2019 (edited) 26 minutes ago, chris_netsmart said: thanks @testdasi for the reply. but I have one question first with the Plex Trancoding I have just looked within my Plex settings, and I see for the Host Path 2, that it is pointing to /tmp is this the same as type of thing as your script ? if not then can you point me to how to add the Bash script at boot up/ and with in Plex You need to install CA User Script plugin to create a script and schedule it to run at first array start. You can do it the command line way, the plugin just make it more user friendly. The CA User Script GUI is rather self-explanatory but if you need more help, you can ask in the dedicated support topic (just search the forum for Unraid CA User Script support). Once you get the script, schedule it and run it, when you edit your Plex docker template host path 2, you should be able to see PlexRamScratch in a drop-down when you click on /tmp. Just select it, which should update the path in the box and save the template. Your within-Plex setting looks correct. PS: you do NOT need the script to config Plex to use RAM for transcode temp (i.e. map /transcode to /tmp). It's just that Plex will have access to all your RAM and depending on load may cause you to run out of memory. What the script does is to create a limit to how much RAM Plex can use. Edited August 15, 2019 by testdasi Quote Link to comment
chris_netsmart Posted August 15, 2019 Thanks, I have done the first part (the script), but at the moment I can't reboot as my Unraid is being used by someone else. I will do the second part later and report back. Thanks for your help @testdasi
testdasi Posted August 15, 2019 Share Posted August 15, 2019 (edited) 9 minutes ago, chris_netsmart said: thanks I have done the first part "The Script" but at the moment I can't reboot as my Unraid is being used but someone else. but I will do the second part later, and report back. Don't need to reboot Unraid. You just need to run the script. The schedule at first array start is for the next time you boot/reboot Unraid (i.e. so you don't have to rerun the script manually). Edited August 15, 2019 by testdasi Quote Link to comment
chris_netsmart Posted August 15, 2019 Silly me - I have done that now and will see how it goes. Thanks for your help.
brainbone Posted August 15, 2019 I just use a second NVMe for transcoding and other stuff that makes more sense on a scratch disk. This saves some endurance and bandwidth on the main cache SSD, where I'd rather not have its appdata and domains mounts go down, and leaves RAM open for more useful endeavours like VMs, dockers, read cache, etc. If you're confident you don't need a huge amount of space for transcodes / scratch disk, a 64GB (or even 32GB, if you don't have that many Plex clients) Intel Optane will have much higher endurance than a typical NVMe. However, I just use a pair of 500GB 970 EVOs -- one for a standard cache drive + docker/VM storage, the other for transcode/scratch. I'll likely need to replace them every 3 to 5 years.
chris_netsmart Posted August 18, 2019 Quick update. I have gone through and moved all the Host Path 2 mappings for Deluge off the SSD and onto a different HDD. I have also modified Plex to use the new RAM folder. And lastly, I have disabled my HandBrake docker for the time being. This hasn't stopped the data usage from climbing, but I feel it has reduced the rate at which the SSD is being written.
DZMM Posted August 19, 2019 On 8/15/2019 at 11:13 AM, testdasi said: Depending on how much RAM you have, you can move transcoding to RAM. To prevent Plex transcodes from spamming your RAM and causing OoM issues, create this script to run at first array start:

#!/bin/bash
mkdir /tmp/PlexRamScratch
mount -t tmpfs -o size=4g tmpfs /tmp/PlexRamScratch

@testdasi I just tried this and got:

root@Highlander:~# mount -t tmpfs -o size=4g tmpfs /tmp/PlexRamScratch
mount: /tmp/PlexRamScratch: mount point does not exist.

Is this right?
chris_netsmart Posted August 19, 2019 3 hours ago, DZMM said: @testdasi I just tried this and got: mount: /tmp/PlexRamScratch: mount point does not exist. Is this right?

I know this works, as I have it working on mine. Did you follow all the steps and install the CA User Scripts plugin? Within that you create the bash script.
chris_netsmart Posted August 19, 2019 Author Share Posted August 19, 2019 (edited) I have a ideal that after a bit of playing around I have discovered that my AppData is eating through my SSD. so I am thinking about is moving my Appdata from the Cache and play it onto a Unassigned Device, and only have my personal shares on the cache, as I am not to concern about data transfer speed so what do you all think about this, and would it be as easy as stop all dockers / vm etc and copy data to third location Turn off Array copy data from third location to new location " Unassigned Device " re-point the links in all my dockers restart Update. I think i found a solution https://forums.sudo.fail/t/unraid-guide-advanced-unassigned-appdata/67 Edited August 20, 2019 by chris_netsmart Quote Link to comment
DZMM Posted August 19, 2019 5 hours ago, chris_netsmart said: I know this works, as I have it working on mine. Did you follow all the steps and install the CA User Scripts plugin?

I tried initially just running it at the command prompt, but I have now tried it in a script with the same result:

#!/bin/bash
mkdir /tmp/PlexRamScratch
mount -t tmpfs -o size=8g tmpfs /tmp/PlexRamScratch
exit

Script Starting Mon, 19 Aug 2019 18:04:19 +0100
Full logs for this script are available at /tmp/user.scripts/tmpScripts/zzzz test873821079/log.txt
mount: /tmp/PlexRamScratch: mount point does not exist.
Script Finished Mon, 19 Aug 2019 18:04:19 +0100

I have a lot of RAM but occasionally I get OOM errors, so a bit more protection would be nice.
Squid Posted August 19, 2019 Did you copy / paste from the forum? That occasionally adds in extra characters. With User Scripts, make sure you're running yesterday's update, then edit the script and save it right away.
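One way to check for those extra characters is `cat -A`, which makes non-printing bytes visible (the script path below is an example of where CA User Scripts keeps scripts on the flash drive - adjust it to your setup):

```shell
# Show non-printing characters; a line ending in "^M$" has a Windows
# CRLF line ending, which makes the shell try to create a directory
# literally named "PlexRamScratch\r" - hence "mount point does not exist".
cat -A /boot/config/plugins/user.scripts/scripts/plexram/script
# If any ^M show up, strip the carriage returns:
# tr -d '\r' < script > script.fixed && mv script.fixed script
```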
DZMM Posted August 19, 2019 3 hours ago, Squid said: Did you copy / paste from the forum? That occasionally adds in extra characters.

I double-checked in Notepad++ and then pasted again after updating the plugin - it worked OK. Thanks
Squid Posted August 19, 2019 1 hour ago, DZMM said: Notepad++

There's a setting you have to enable in Notepad++ in order to see those hidden characters (View > Show Symbol > Show All Characters).