JonathanM

Everything posted by JonathanM

  1. Not universally. In Unraid, the root folders on each disk or pool can be exported as user shares. I recommend watching Spaceinvader One's Unraid videos on YouTube; he has a load of very informative content.
  2. Short answer, no; each array drive is a single volume using all available space. Long answer, Unraid is Linux based, and the paths are going to be completely different between a Windows and a Linux install of Plex. Disclaimer, I am an Emby user, so I don't have first-hand experience with this, but this is supposed to be a guide of sorts: https://support.plex.tv/articles/201370363-move-an-install-to-another-system/
  3. Yes, but I had a hard time understanding the OP question, it was clear on one part, but the "whenever I add files parity runs" had me questioning what was really intended or desired. I figured more reading would help the OP get where they wanted to go.
  4. No. This is just a test to see whether the motherboard is resetting properly.
  5. The basic storage functions will transfer fine, as long as both the old and new boards pass the drive IDs through identically. The only issue there is hardware RAID or drive enclosures that can modify the IDs. Any hardware that was passed through directly to VMs or containers will no longer be present and will cause errors.
  6. Turn PSU off, press power button on tower to drain residual power, turn PSU on, wait a beat to allow the motherboard to wake up, then press the power button. Probably not the USB stick, but the motherboard USB controller.
  7. That sounds more like a motherboard issue. If it happens again, try a cold boot: actually remove power from the motherboard with the power switch on the PSU or by unplugging the AC supply.
  8. No, the only changes made to the flash would be in the config folder. Replacing that folder with default files and adding back your specific key file that belongs with that GUID will reset everything to new (a minimal sketch of the key file step follows this list).
  9. Contact support. We can't do anything about keys here on the forum. @SpencerJ
  10. You don't. If it turns back on unattended, how will you know whether it's safe to be running? There may still be events happening where it would be better to stay off until things blow over, maybe literally. It's best practice to monitor the server startup so you can intervene if things aren't going well.
  11. It's a good idea to do a non-correcting parity check after a disk rebuild. Rebuilds don't "check their work" by reading back what was written to the rebuilt drive; it's assumed that if a write completes without error, it wrote correctly.
  12. Depends. There are other things you can tweak with regard to memory, cache pressure and such (see the sketch after this list), and honestly Unraid is tuned for best performance with smaller amounts of RAM and may not make the best use of more than 64GB. I don't have the luxury of owning any systems with more than 32GB right now, so I must leave hands-on research as an exercise for the reader.
  13. It will kill performance if run too often, as caching data is what speeds many things along. As a cleanup tool run when performance isn't a priority, or before starting a task, it should be fine. It does prove to some extent that you are overcommitting the memory you have for optimum performance, so more RAM would help if you really need to reserve that much RAM for VM use. I would try reducing the VM RAM allocations and see whether it hurts or helps VM performance. RAM caching by the host is one of the things that can really speed up a VM, and if you deny the host that RAM it can hurt the VM's speed.
  14. After a minute of googling (as in, no real research) I found this, which may or may not do something in Unraid. I haven't tried it, so use at your own risk; it was billed as a "linux" solution. It apparently (a) clears speculatively cached data and (b) consolidates the in-use memory. If you are game to try this, execute it at a point where the VM would fail to launch (see the sketch after this list). To repeat, I HAVE NO CLUE IF THIS WILL DO BAD THINGS TO UNRAID.
  15. Perhaps out of unfragmented memory. Some operations require contiguous blocks, and over time more and more addresses can be tied up and unable to be reallocated, even if the total amount free is plenty.
  16. If the stock scheduling doesn't give you the flexibility you need, maybe look into the tuning plugin?
  17. Don't do that. You need to leave resources available for the host (Unraid) to emulate the motherboard and other I/O. At the very least, leave CPU 0 available for Unraid. Since you only have 4 threads, I'd only use CPU 2 and CPU 3 for the VM; maybe even try with only the last thread for the VM and leave the other three for the host (see the pinning sketch after this list). That may also be too much, depending on how much RAM the system has. If the physical box has 32GB, 8 for the VM should be fine. If it has 16 or less, reduce the VM to 4096. The more resources you tie to the VM, the slower the host is going to run, which in turn slows the VM way down. Give the VM the absolute minimum and add a little at a time until performance stops improving.
  18. Just the reverse: array -> cache, or cache only. The advantage of having the cache as primary and the array as secondary with a "move to cache" setting is that if you ever accidentally fill the cache, and the minimum free space is set correctly, the excess data will go to the array; then, when the cache has room, the mover will put the data back on the cache. Cache only will give an out-of-space error when the pool gets below the minimum free space set.
  19. Yes, the parity array is great for mass storage but very bad for random I/O, especially random writes. SSD or NVMe is a must for vdisks.
  20. Yep, that would be why the VM is dog slow. vdisks should be on fast pools, not parity protected array disks.
  21. All support questions for specific containers should be posted in their thread, not spread out across the forum. That way people can easily see what others have asked, and the answers they received. Many problems have already been asked and answered.
  22. At any point did you format a drive? If so, you erased all the existing files.
  23. 1. What drive is the VM using for vdisk or passthrough? 2. Try changing the RAM to 8GB
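
For post 8 above: a minimal sketch of the key file step, assuming the flash is mounted at /boot (as it is on a running Unraid server) and that the old config folder was saved to /boot/config.bak before being replaced with stock defaults; the backup path is hypothetical, adjust to your setup.

```bash
# Sketch for post 8: after replacing /boot/config with stock default files,
# copy back the licence .key that belongs with this flash drive's GUID.
# /boot/config.bak is an assumed backup location, not an Unraid standard.
cp /boot/config.bak/*.key /boot/config/
ls -l /boot/config/*.key   # confirm the key file is in place before rebooting
```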
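
For post 12 above: a sketch of the standard Linux memory tunables being alluded to. vm.vfs_cache_pressure and the dirty ratios are stock kernel sysctls, but the value written below is illustrative only, not a recommendation, and its effect on Unraid is untested.

```bash
# Sketch for post 12: inspect and experiment with the usual memory tunables.
sysctl vm.vfs_cache_pressure                     # how aggressively dentry/inode cache is reclaimed
sysctl vm.dirty_ratio vm.dirty_background_ratio  # how much RAM may hold unwritten (dirty) data
sysctl -w vm.vfs_cache_pressure=50               # example value: keep filesystem metadata cached longer
```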
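
For post 14 above: the commonly cited Linux commands that match that description, dropping the page cache and then asking the kernel to compact free memory. Both /proc interfaces are standard Linux; as the post says, whether they do anything useful (or harmful) on Unraid is untested.

```bash
# Sketch for post 14: run as root, at the point where the VM would fail to launch.
sync                                   # flush dirty pages to disk first
echo 3 > /proc/sys/vm/drop_caches      # drop page cache, dentries and inodes
echo 1 > /proc/sys/vm/compact_memory   # ask the kernel to consolidate free memory into larger blocks
```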
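
For post 17 above: a rough sketch of pinning a small VM to the last two threads using virsh, which Unraid's libvirt ships with. The domain name "Windows10" is a placeholder, and on Unraid the same thing is normally done persistently from the VM's CPU pinning settings in the GUI; virsh changes like these apply to the running VM only.

```bash
# Sketch for post 17: pin a 2-vCPU VM to host CPUs 2 and 3, leaving 0 and 1 for Unraid.
virsh vcpupin Windows10 0 2   # vCPU 0 -> host CPU 2
virsh vcpupin Windows10 1 3   # vCPU 1 -> host CPU 3
virsh vcpuinfo Windows10      # verify the pinning took effect
```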