Everything posted by testdasi

  1. 2 things that come to mind:
     • An Unassigned Devices drive should be mapped in docker as RW/Slave, not Read/Write. If you install the Fix Common Problems plugin, it will flag all dockers with the wrong mapping type for you so you know which ones to change (a rough example of what RW/Slave looks like on the command line is below).
     • NTFS doesn't really work that well with Linux in general. If you are going to use a drive as a temp drive for Unraid, format it as XFS.
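     To illustrate the RW/Slave bit (the container name, image and paths here are made-up placeholders, not from any particular template):

        # hypothetical docker run showing an Unassigned Devices mount passed in as RW/Slave
        # ":rw,slave" = read-write with slave mount propagation, which is what the
        # Unraid GUI's "RW/Slave" access mode corresponds to
        docker run -d --name example-container \
          -v /mnt/disks/MyUSBDrive:/data:rw,slave \
          example/image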
  2. So how did you do it? Would be very useful for me to be able to interact with the GUI but I don't know how to do it with Python.
  3. You can have a short list of a few models and search the forum for build reports, issues etc. That's the best way to do it due to the number of models out there.
  4. Assuming you have a realistic expectation of claiming warranty with Samsung for an enterprise SSD (that assumption cannot be assumed!), it comes down to how much write you expect to do. The PM983 is rated for 1.3 DWPD but only for 3 years = 1.3 * 0.96TB * 365 * 3 ≈ 1366TB TBW. The 970 Evo has a 5-year warranty but only for 600TB TBW (= 600 / 5 / 365 ≈ 0.329 DWPD). In other words, do you expect to write 600TB over 5 years or nearly 1400TB over 3 years?
     Also note that the PM983 is 22110 and the 970 Evo is 2280. Not all motherboards support 22110.
     I have both the PM983 and 970 Evo. They have the exact same ID under the devices tab, suggesting they have the exact same controller. I can't say for sure that they are fundamentally the same; however, given the same controller and similar price point, my hunch is the PM983 is simply the 970 Evo with a hard 4% over-provisioning (which is why 960GB vs 1TB) and a different form factor.
     With regards to WL, you have no control over that. It's the manufacturer's algorithm. The best you can do is, where possible, split write-intensive and read-intensive workloads onto different drives.
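     For reference, the DWPD <-> TBW conversion is easy enough to sanity-check yourself. A quick sketch using the numbers above (capacities and ratings are just the examples from this post):

        # TBW ≈ capacity (TB) x DWPD x 365 x warranty years
        awk 'BEGIN { printf "PM983:   %.1f TB over warranty\n", 0.96 * 1.3 * 365 * 3 }'
        # and the reverse: DWPD = TBW / (capacity x 365 x warranty years)
        awk 'BEGIN { printf "970 Evo: %.3f DWPD\n", 600 / (1 * 365 * 5) }'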
  5. Write amplification can theoretically explain the 400 TBW (or at least a major part of it). Plex transcode and download writes are mostly random, which increases write amplification (all other things being equal). Essentially, when a random write is required, the controller must pick a block to write within a reasonable amount of time (remember SSD is all about response time, even more so with NVMe). If it can't figure out a good block to pick, it has to erase some blocks, move data around, rewrite data that has already been written etc.
     So to reduce WA, you need another factor (i.e. no longer "all other things being equal") to come into play. The 2 most common factors are over-provisioning and trim. The former relies on the controller being aware of the over-provisioned space to write to and the latter makes the OS aware of "good" areas to write. Both come down to the same thing, which is available free space. Over-provisioning forces an area of the chip to always be free and trim basically marks free areas as "good".
     Wear leveling also increases WA. WL essentially moves data out of a less-used area into a more-frequently-used area to free up the less-used areas for writes i.e. so blocks are worn more evenly. That's why static data on the SSD inadvertently increases WA, but the degree depends very much on the WL algorithm.
     So when you put 2 and 2 together, it's obvious that the more free space and the less static data, the less WA there will be. As I mentioned, a 512GB SSD is not at all that large. If you use it for temp data, it will eventually run out of ideas as to which blocks are "good" to write to, which requires trim to reclaim. People typically run trim weekly, which isn't often enough if, for example, one writes 1TB of data daily. You can also over-provision the SSD yourself (which I call soft over-provisioning) by simply leaving as much free space available as possible. Some controllers are smart enough to detect such free space and use it automatically as over-provisioned space. Even assuming a dumb controller, probability alone can make the difference.
     Your last sentence is the reason why I said I think you approached the issue from the wrong angle. Without paying an arm and a leg for enterprise-level solutions (e.g. SLC SSDs - do they even sell these any more?), you really can't do better than Samsung SSDs in terms of endurance rating.
     By the way, it sounds alarming but certainly don't be alarmed. Your 960 is more likely than not to still last a while. I have tried to purposely run an SSD into the ground and it is way harder than people think, as long as it's not an Intel SSD.
  6. docker stop can be used to stop a specific docker:

        docker stop docker_name
  7. I think you are approaching the issue from the wrong angle. If you look at those "write-intensive" enterprise SSDs on the market, they all do the same thing - have hard over-provisioning. That's why a "read-intensive" model will have 4TB but the "write-intensive" one will have 3.84TB or something like that.
     So a better approach would be to rethink your config. 512GB is not a lot of space and once you add static data (e.g. your VM vdisk, docker img, docker appdata etc.), you don't have much left to over-provision for write activities. For example, if you get a new 512GB SSD, you can consider mounting your current 960 as UD and using it for temp data such as:
     • Plex transcode
     • Download temp files
     Such temp data does not need RAID1 protection. Moving it out to the 960 will reduce write activity on your new SSD and give it more soft over-provisioning space for any inevitable writes that must be done i.e. prolonging its lifespan.
     Also remember to trim your SSDs (quick example below). Trim is good.
     While I can't recommend a "good" SSD for write-intensive workloads, I can dis-recommend (is there such a word?) QLC SSDs. They are absolutely terrible for write-intensive workloads.
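     A manual trim is a one-liner if you ever want to run it on demand (the mount point is an assumption - adjust to wherever your pool is mounted, or schedule it weekly via the User Scripts plugin):

        # trim the cache pool; -v reports how much space was discarded
        fstrim -v /mnt/cache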
  8. Have you included the cost of a Plex Pass? Hardware transcoding needs a Plex Pass IIRC. Also note that if you use Unraid Nvidia, you MUST only update using that build i.e. not using the official version. I thought the warning was obvious on the support topic but apparently some people never read the instructions and shout loudly when things stop working.
  9. You can have the eths on different bridges and they should show up as options in the docker settings.
  10. CA User Scripts plugin + a bash script with docker stop, sleep and docker start. You can set the sleep time to any arbitrary length. Something like the sketch below.
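      A minimal version (the container name and duration are placeholders):

        #!/bin/bash
        # stop the container, wait a while, then bring it back up
        docker stop my_container
        sleep 300   # seconds - set to whatever gap you need
        docker start my_container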
  11. You mean changing the "Cache" box to Yes in the share settings in the Unraid GUI? That should be safe as long as your mount scripts are all /mnt/user.
  12. A few more notes:
      • To run the script from your desktop, you can use Putty and a bat file to automatically SSH into the server and execute the script. Normally the SSH password is in plain text in the bat file, which is a security risk, so you might want to set up public key login instead (can't remember how I did it but there was a post on the forum, so you will need a bit of searching - see also the sketch after this list).
      • A Win10 VM generally co-operates with virsh shutdown, but I have found OSX VMs to be less co-operative. So what I have done in the past is to lengthen the wait time (i.e. longer than a typical shutdown) and then use virsh destroy to guarantee the OSX VM is off (virsh destroy is the equivalent of pulling the power).
      • If the VM name has a space, use double quotes. You can also use the uuid.
      So assuming the typical shutdown time is 10s, I'll add 5s to the wait time just to be safe. The script will be something like this:

        #!/bin/bash
        virsh shutdown "VM1 name"
        sleep 15s
        virsh destroy "VM1 name"
        virsh start "VM2 name"
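      On the public key login bit: with an OpenSSH-style client the general idea is roughly this (the IP and script path are placeholders; Putty users would do the equivalent with puttygen / pageant):

        # on the desktop/client: generate a key pair, then push the public key to the server
        ssh-keygen -t ed25519
        ssh-copy-id root@192.168.1.100
        # if ssh-copy-id isn't available, append the .pub file to the server's authorized_keys manually
        # after that, something like "ssh root@192.168.1.100 /boot/myscript.sh" runs without a password prompt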
  13. Sorry but your question is very ambiguous. What kind of issue? What kind of "use"?
  14. What are your split folder settings for the share? Krusader may very well be moving stuff under the same folder, which, based on the split folder settings, should stay on the same disk; when you did things manually, you happened to pick stuff that can be split.
      Btw, don't use Krusader for large data transfers. I have found Krusader (and MC) to cause fragmentation on my drives, so I have switched to command line cp / mv instead (rough example below).
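      By command line cp / mv I just mean plain coreutils, e.g. (paths are made up):

        # -a preserves permissions/timestamps; copy first, verify, then remove the source
        cp -a "/mnt/user/Media/Some Film" /mnt/user/Archive/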
  15. Start a new template with Hyper-V off. I have found that editing an existing template somehow doesn't turn off Hyper-V properly. I'm sure there's a tag I missed somewhere but starting a new template is way faster.
  16. Optimal largely depends on your personal preferences. For example, 2x10TB is fine, but parity in a 2-disk array is a mirror (because of the math), so some may prefer to go 2x5TB data + 1x10TB parity, or 3x5TB etc. What we can say is your 2x10TB is not wrong.
      1x500GB SSD is generally sufficient as long as you don't do a lot of downloads - temp storage can eat up a lot of cache space very quickly (i.e. it doesn't get cycled quickly enough) and it gets very annoying when the disk is full.
      You can add any disk to the array at any time as long as the new disk is NOT larger than the parity. And you can spin down the NAS disks that are not in use to save power (note though that repeated spinning up/down is just as detrimental, so you will need to find a middle ground).
      +1 to what itimpi said. You need the right hardware to support what you are trying to do. But don't go get the latest-generation high-end stuff either (e.g. Ryzen took a year to get its teething issues bedded down).
  17. Using vfio-pci.ids. I think SIO showed it in one of his videos. I basically stub all my devices that need to be passed through. Found a post in which he showed an example of what it should look like - the gist is also sketched below.
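      The gist: the IDs go on the append line in syslinux.cfg on the flash drive (the vendor:device IDs below are placeholders - use your own from Tools -> System Devices):

        label Unraid OS
          menu default
          kernel /bzimage
          append vfio-pci.ids=10de:1b81,10de:10f0 initrd=/bzroot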
  18. LSIO cuz those guys are nice. Binhex cuz he's nice too (but I had some weird "why-only-me?" problems with his Plex docker a long time ago, hence I picked LSIO). I avoided the Limetech and Plex official ones because of the lack of "specialisation", for lack of a better term. The Plex guys won't fix problems raised here. The LT guys should be focusing on fixing Unraid and not Plex stuff (and I believe the LT Plex docker has been discontinued for quite a while now).
  19. Some potential fixes:
      • Disable Hyper-V (start a new template, don't edit your current template).
      • Is the 2080 in the primary slot (i.e. does Unraid stuff show up at boot via the 2080 display)? If so, swap it with the 970 (i.e. have the 970 as primary for Unraid to boot with) and try again - make sure you pass through the right card after the swap.
      • vfio-pci stub the 2080.
      • Boot Unraid in legacy mode (and your VM in OVMF mode).
  20. Paragraphing is your bestie.
      How good are you with command lines? Do you have a good, fast, reliable Internet connection? If so, have you considered using a 3rd party cloud storage (hint: starts with G and ends with e) for your purposes i.e. instead of a ton of HDDs? There is a dedicated topic on the forum on how to set things up with rclone + Unraid + bash scripts to almost seamlessly combine cloud and local (a bare-bones sketch of the rclone side is below). Reason for the suggestion: it sounds like you probably need at least 8 HDDs, which will run you into the thousands, which will buy you years of cloud storage, which is great for "save once, read once and then almost never ever again" kind of data, which sounds like most of your storage needs.
      Now if you need the data local then there's nothing really wrong with what you are considering. I would recommend avoiding ITX motherboards because of the limited number of SATA ports and PCIe slots. Also note that the Node 804 airflow isn't that great when filled up with HDDs.
      As for RAM, 32GB is usually good enough but it has nothing to do with 64TB of storage. You can run 16GB RAM (or heck, 8GB) with 64TB of HDDs. The reason is more because you want VMs, each of which will need its own reserve of RAM.
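      The rclone side boils down to something like this (the remote name and mount point are placeholders; the dedicated forum topic has the full scripts with upload schedules, caching etc.):

        # mount a cloud remote so it behaves like local storage
        mkdir -p /mnt/disks/cloud
        rclone mount gdrive: /mnt/disks/cloud --allow-other --vfs-cache-mode writes &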
  21. What do you mean by "cannot get the drivers to stick"? Error code 43?
  22. Do you know you can create a copy of your container? Just change the container name (e.g. if Plex is your main working one then just use Plex-test for your test one) and you can do whatever you need to test (and certainly compare test version vs working version) and once done, just edit the working one with the new working test config. I have done that many times.
  23. I don't think the OP's posts in any way "reassure" that it can be done. Code 43 is Nvidia's anti-consumer way (which is their MO) of forcing people into buying expensive Quadro cards to use in VMs. Once you understand that, (most of the) potential fixes become rather obvious i.e. just hide any potential clues from the Nvidia driver that it is running in a VM.
      • If Hyper-V is on, it probably is a VM ==> disable Hyper-V. Note: if you are going to disable / enable Hyper-V, start a new template! Editing an existing template sometimes doesn't disable Hyper-V correctly - been there, done that.
      • If a card is loaded on one machine and then switched to a different machine without a proper reset, then it's probably in a VM ==> boot the host in legacy mode (so no option rom is loaded), vfio-pci stub / use a dedicated primary GPU for the host (so the passed-through GPU isn't loaded on the host), dump the vbios (so the card is reset properly).
      • Nvidia will update their drivers to wise up to workarounds ==> use an older driver.
      • Some cards just don't like some inexplicable things about the VM ==> boot the VM in OVMF, use the Q35 machine type (which has better PCIe support - note it will need some additional manual xml lines to run at full PCIe x16 until the new qemu version is included in Unraid).
      Those are the "TL;DR". For the longer read:
  24. There are too many variables that can go wrong. First and foremost, that path looks fishy. Why did you change it to /User/Appdata/Tmp? What local path is that folder mapped to? And if you decide to give up on Duplicati, welcome to the club. I gave up on it years ago.