Everything posted by JonathanM

  1. Try sudo apt install speedtest-cli, then just run speedtest-cli.
  2. It's designed to be mounted in an enclosure, with at least a power supply fan helping to move air. Hard drives want consistent temperatures: 40C is fine if they don't vary much from it, but constantly cycling between 25C and 40C is bad. Either leave them spun up so temps stay consistent, or cool them better so they don't see wild temperature swings every time they spin up.
  3. Overclocking (and XMP is overclocking) is not recommended on any computer whose data integrity you care about.
  4. Have you tried youtubedl-material recently? You are describing exactly what I use.
  5. Maybe this? https://www.globalonetechnology.com/bc393a.htm Might be better to purchase a supported controller instead of playing licensing games.
  6. In your use case, setting minimum free space to 3 or 4 (or more) times the size of your largest file would probably be wise, along with making sure the split level doesn't restrict placement. Moving your currently growing files from /mnt/disk1/share to /mnt/disk2/share may be a good idea.
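The relocation step can be sketched like this. It is a minimal sketch using throwaway temporary directories in place of the real /mnt/disk1 and /mnt/disk2 disk shares, and the file name is hypothetical:

```shell
# Throwaway directories stand in for the real disk shares /mnt/disk1 and /mnt/disk2
disk1=$(mktemp -d)
disk2=$(mktemp -d)
mkdir -p "$disk1/share" "$disk2/share"
touch "$disk1/share/big-recording.mkv"   # hypothetical growing file

# Move disk-share to disk-share; on Unraid, never mix /mnt/diskX and
# /mnt/user paths for the same file in one operation
mv "$disk1/share/big-recording.mkv" "$disk2/share/"
ls "$disk2/share"
```

On a real array the destination disk must already have the share directory, and the move should happen while nothing is writing to the file.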
  7. If you are expanding existing files, then you could run out of space. All existing files will be read and modified in place, only new files will be evaluated for placement on other disks, based on share settings. What concerns are you talking about?
  8. Are ALL the other openings in the case either taped off or covered by a fan actively EXHAUSTING air? If any openings besides your drive cages are letting outside air in, you won't get adequate drive cooling.
  9. Yes. Also you can check the GUID with the usb creator and see if it's a random string of multiple different characters, which means it should be fine, at least for licensing.
  10. I reverse proxy everything through SWAG. As long as you can access the service through http://<UNRAIDIP>:<SERVICEPORT>/optionalsubfolder you can reverse proxy through https://service.publicdomain.com/optionalsubfolder
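For the SWAG container specifically, the mapping above ends up in an nginx proxy conf. This is only a sketch; the server name, subfolder, and upstream address are placeholders standing in for the <UNRAIDIP>:<SERVICEPORT> values from the post, and SWAG ships sample files under /config/nginx/proxy-confs/ that are a better starting point:

```nginx
## Sketch of a SWAG subdomain proxy conf (all names and ports are placeholders)
server {
    listen 443 ssl;
    server_name service.*;

    include /config/nginx/ssl.conf;

    location /optionalsubfolder {
        include /config/nginx/proxy.conf;
        # Forward to the service exactly as it answers on the LAN
        proxy_pass http://UNRAIDIP:SERVICEPORT;
    }
}
```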
  11. Unclear if you understand this, so I'll just say it: you can't have 18TB data drives with a 12TB parity. So, here is what is possible with the drives you listed.
      1. Replace parity with one of the new 18TB drives, rebuild parity, and do a subsequent parity check.
      2. Rebuild one of the 6TB drives onto the old 12TB parity drive, then do a non-correcting parity check.
      3. Rebuild the other 6TB drive onto the other new 18TB drive, then do a non-correcting parity check.
      What I've outlined keeps the array data shares and applications available the entire time, albeit at reduced performance, which is unavoidable. If at any point there are errors, resolve them before moving on to the next step.
  12. Move the folders currently in /mnt/user/disk2/Vids into /mnt/user/Vids first.
  13. In the past I've had ReiserFS filesystems that took close to 18 hours to replay the journal and mount after an unclean dismount. One of the many reasons to migrate to a modern filesystem.
  14. That may be some of the issue. When you leave an open slot, all the cooling air will go through it instead of flowing over the other drives.
  15. As discussed in many places, covering the pins with Kapton (polyimide) tape is the cleanest non-permanent solution.
  16. How would you improve this, keeping in mind that the licensing should be able to stay completely offline if desired? The only alternative I see is a rather expensive move to something like sentinel hasp type dongles. I must admit I've never gotten a quote on that type of system, but I've got to believe the per unit cost would be rather significant, since I typically see it protecting $1K and up per seat products.
  17. No. The only restriction is no data drive can be larger than either parity drive. Having a 12TB and a 10TB parity at the same time is not a problem, you are just limited to 10TB data drives until you upgrade the second parity.
  18. That was probably the issue. Memory ballooning isn't well implemented, and it doesn't work the way you'd expect in any case. It's much better to assign the smallest fixed amount of RAM that gives decent performance and let the host use the rest to keep the emulation running as smoothly as possible. Many people assume the best-running VM is achieved by giving it the maximum possible resources, when in reality you want to tie up the LEAST possible resources in the VM and give the host the best possible speed to emulate the VM's running gear. Remember, all the VM's I/O runs through the host, so that's where you want the most speed.
  19. You've got the wrong end of the stick. Unraid must manage array disk partitions itself, so it can keep the novel parity protection scheme it uses in sync. For devices not in the array or pools, you can use the Unassigned Disks plugin with the companion destructive mode plugin.
  20. Since this support thread is specifically for the VPN container version, you may get better help posting in the thread for the container you are actually using. Click on the container icon, select the support option, and go from there.
  21. The only reason to enforce a low TDP is a thermal dissipation limit. Reducing processor TDP will NOT reduce overall power consumption; in fact it often increases total consumption over time, because all the other components that draw power must stay active longer while the processor is artificially throttled. Are you limiting the airflow or surface area of this build for some reason?
  22. No. I suspect you have not completed the container firewall port modifications. I suggest reading through binhex's VPN FAQ on github.
  23. Probably path mapping issues. Make sure the paths in the app are NOT the Unraid host paths, but the container paths. If you have /data mapped to /mnt/user/downloads, you must use /data in the app.
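The translation between the two paths is mechanical. A minimal sketch, where the file path is a hypothetical example and the mapping matches the /data to /mnt/user/downloads example above:

```shell
# Host side of the mapping: -v /mnt/user/downloads:/data
host_path="/mnt/user/downloads/movies/file.mkv"   # hypothetical file on the host

# Inside the container, strip the host prefix and prepend the container mount point
container_path="/data${host_path#/mnt/user/downloads}"
echo "$container_path"   # this is the path the app must be given
```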