Everything posted by JonathanM

  1. Why even bother backing up the VM's vdisk? A clean pfSense reinstall that automatically restores a backup file is a very simple procedure. https://docs.netgate.com/pfsense/en/latest/backup/automatically-restore-during-install.html All you need to do is keep a backup of the VM's XML in Unraid, and take a backup from the pfSense GUI whenever you change something.
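     A minimal sketch of the VM XML half of that, assuming virsh is available on the Unraid host; the VM name "pfsense" and the destination folder below are placeholders, not anything Unraid-specific:

     ```python
     # Dump the VM definition XML so it can be re-imported later if needed.
     # "pfsense" and the destination path are placeholders - adjust both.
     import subprocess
     from datetime import date
     from pathlib import Path

     vm_name = "pfsense"
     dest = Path("/mnt/user/backups/vm-xml")
     dest.mkdir(parents=True, exist_ok=True)

     xml = subprocess.run(
         ["virsh", "dumpxml", vm_name],
         check=True, capture_output=True, text=True,
     ).stdout
     (dest / f"{vm_name}-{date.today()}.xml").write_text(xml)
     ```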
  2. Not all SMART errors are equally problematic, and many don't even correlate with the drive failing. What exactly are the errors? Tools > Diagnostics, then attach the zip file to your next post in this thread if you want someone to look the situation over for you.
  3. Each drive is independent, so only the involved drive(s) are lost. Parity will emulate one drive per parity slot, so if you have the maximum of 2 valid parity drives, you can have 2 drive failures and the array will present the failed drives as normal by mathematically combining the binary content of all the remaining drives. If you then have a third concurrent failure, you will lose the content of the three failed drives, but the remaining data drives will all still be readable after reconfiguring the array to only include the intact data drives. Parity drives don't contain readable content, only the bits needed to complete the parity equation. Or, as a last resort, any array data drive can be pulled and read in any system capable of parsing the file system used, be that XFS or BTRFS.
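     Here's a toy Python illustration of the single-parity math (the first parity slot is a bitwise XOR across the data drives; the second parity slot uses a different equation that isn't modeled here). The drive contents are made up:

     ```python
     # Toy model: parity = XOR of all data drives, so any ONE missing drive can
     # be rebuilt from parity plus the surviving drives. Data bytes are made up.
     from functools import reduce

     disk1 = bytes([0x12, 0x34, 0x56])
     disk2 = bytes([0xAB, 0xCD, 0xEF])
     disk3 = bytes([0x00, 0xFF, 0x77])

     def xor(*blocks):
         return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

     parity = xor(disk1, disk2, disk3)

     # "Lose" disk2, then emulate it from parity and the remaining disks.
     rebuilt = xor(parity, disk1, disk3)
     assert rebuilt == disk2
     ```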
  4. The file paths could be an issue, I'm not familiar with how the plex db keeps track of file locations.
  5. Could you post a screenshot of the entire MAIN GUI tab?
  6. You don't HAVE to get 2 more drives unless you want to. As an example, if you get a new 12TB drive, you could remove the 8TB parity, set the new 12TB up in the parity slot, and put the old parity drive in data slot 2. That would give you 16TB of usable space, and if you later got another 12TB, you could just add it to the array.
  7. Wow, this is turning into quite the valuable resource! If you can, maybe the graphs could have a view option to feel less busy by widening the normal ranges into a light background, with the current upload as a single colored line. The end result would be a greyscale graph, with a colored line either passing through the normal white range or deviating into the grey or black regions. The graph would start as pure black; each additional data point would add a brightness level divided by the total number of lines plotted. So if all the curves happened to pass through the same point, it would be pure white, and if none of the curves hit that point, it would stay black. The first curve would be pure white, 2 curves would be 50% unless they intersected into pure white, et cetera. If you only have a handful of curves it probably wouldn't be very useful, but when you have hundreds of data points it should start to resemble something intuitively useful.
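     A rough Python/matplotlib sketch of that brightness-accumulation idea, with made-up data, bin counts and colors just to show the effect:

     ```python
     # Each historical curve adds 1/N brightness to every (x, y) bin it passes
     # through, so the "normal" band turns light grey/white and rarely-visited
     # regions stay dark. All of the numbers here are invented.
     import numpy as np
     import matplotlib.pyplot as plt

     rng = np.random.default_rng(0)
     n_curves, n_points = 200, 96
     x = np.arange(n_points)
     curves = 50 + 10 * np.sin(x / 10) + rng.normal(0, 4, (n_curves, n_points))

     y_bins = np.linspace(curves.min(), curves.max(), 120)
     density = np.zeros((len(y_bins) - 1, n_points))
     for c in curves:
         rows = np.clip(np.digitize(c, y_bins) - 1, 0, len(y_bins) - 2)
         density[rows, x] += 1.0 / n_curves   # each curve contributes 1/N brightness

     plt.imshow(density, origin="lower", aspect="auto", cmap="gray",
                extent=[x[0], x[-1], y_bins[0], y_bins[-1]])
     today = 50 + 10 * np.sin(x / 10) + rng.normal(0, 4, n_points)
     plt.plot(x, today, color="red", linewidth=1.5)  # current upload as a colored line
     plt.show()
     ```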
  8. The higher data speeds of the larger drives can reduce the parity check times by some margin, but yes, the parity check overall time is largely determined by the size of the parity drive. There is a plugin to pause and restart the parity check on a schedule, so you can run the check over multiple low usage time periods. My main server typically completes a check over a period of 3 days, starting after midnight and pausing at 6am.
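     For a rough sense of scale, a back-of-the-envelope estimate; the 120 MB/s average throughput below is just an assumed figure, real drives vary across the platter:

     ```python
     # Parity check time is roughly parity size / average sequential throughput.
     parity_tb = 12          # assumed parity drive size
     avg_mb_per_s = 120      # assumed average throughput over the whole drive
     hours = parity_tb * 1_000_000 / avg_mb_per_s / 3600
     print(f"~{hours:.1f} hours")   # about 28 hours for these numbers
     ```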
  9. Are you sure the CPU is rated for that? I suspect that while the memory can handle the speed just fine, the motherboard / CPU can't, at least not without stability issues. https://forums.unraid.net/topic/46802-faq-for-unraid-v6/page/2/?tab=comments#comment-819173
  10. Faster, lower overall power consumption, less physical space needed, and typically cheaper once you factor in the cost of the extra slots that lower capacity drives require, since adding SATA ports can get expensive quickly if you need another HBA and/or case. Those are just some of the reasons to go with a smaller number of higher capacity drives.
  11. Wild shot in the dark, is the container running with privileged mode on?
  12. Any chance this will motivate you to provide automatic endpoint failover / rotation in some form? Not a formal request, just wondering. I can envision a rather complex routine involving multiple ovpn files, periodic rate testing of each, and connection to the best one.
  13. Does the CPU go back to mostly idle if you disable the plugin?
  14. Type "diagnostics" at the console, attach the zip file it creates to your next post in this thread.
  15. Are you running the file integrity plugin by any chance?
  16. At the very least, a topology of your network may be helpful. Why is pfSense involved at all in a local transfer? And if it's not local, it should be. There are plenty of VPN options, both built into Unraid (WireGuard) and pfSense, that would bring your PC and Unraid into the same network space.
  17. @ich777 is working on that, there seems to be a working example, but it's very fresh, as in the last few days.
  18. If you use it for a SAS HBA you can add a SATA SSD to the external drive cage with almost no speed penalty, assuming you get a decent SAS card.
  19. Definitely not, it's meant for permanent insulation on electronics, especially where there could be some movement. I see it as the ideal solution, barring the availability and motivation to use a 3D printer. Even with a replacement protective housing, Kapton should still be used on the bare circuit. Kapton and double-sided foam tape are the electronic equivalent of duct tape and baling wire. I was applauding your usage, not mocking it.
  20. That looks fine to me, Kapton tape is a perfectly cromulent encasement. 🤣
  21. I've got bad news for you. Stucco and wifi are a bad combo; the wire lath that the stucco uses and the amount of moisture that stucco absorbs pretty much kill wifi. Your best option is to drill through the floors and walls for ethernet and run wires behind trim pieces.
  22. Did you replace the corrupted database with a backup before starting the container with the new settings?
  23. Try changing the container path mapping from /mnt/user/appdata/PlexMediaServer to /mnt/diskX/appdata/PlexMediaServer, where X is the disk number where appdata actually resides; if it's on the cache, use /mnt/cache/appdata/PlexMediaServer.