Everything posted by JonathanM

  1. Depends on which container you are running. Click on the NC icon in the Unraid GUI and select support. This thread is not the support area for any specific container, and really shouldn't have been in general support to begin with.
  2. Nah, it's an enhancement to an already completed feature request. 😀
  3. I agree, that's why I moved it. The situation could be improved for single parity though. Adding dual parity just to replace a drive without losing single drive failure protection seems overkill, but that's where we are.
  4. Technically this isn't complete, depending on how you look at it. Having dual parity allows you to replace a single data disk while keeping single parity protection, but there could be an option to clone a still-functioning disk onto its replacement with just normal single parity in place, then do the physical swap in software at any time while the clone is valid. The size of the replacement drive would still be capped by parity, and any space beyond the current partition would need to be zeroed and kept that way until the smaller drive was removed, at which point the file system could be expanded into the new space. Functionally this would be the equivalent of allowing a modified RAID1 volume as a single array disk member, where each individual data disk could be protected as a RAID1 as well as being a member of the traditional array. @limetech, any chance this could be considered? It would give those folks with limited or no backups and single parity a safer way to replace array disks, as well as offering extra hardware redundancy for more critical single volumes in huge traditional Unraid arrays. The only downside I can think of, besides the amount of work needed in the md driver, is the amount of explaining needed to get the concept across.
  5. Considering that Community Apps was an optional plugin until just recently, I'd say a maintained plugin fulfills the feature request. If the plugin isn't kept compatible with new versions of Unraid, then the feature request would be unfulfilled. Yeah, I know I didn't answer the question. Requests that already existed should definitely be moved to the completed section, that way an intelligent reader should be able to infer that the listed feature is already available. Newly posted feature requests for the same item can just be merged into the existing completed thread.
  6. LOL, I can never remember which way to merge either. I'm assuming you meant to merge the other way, retaining the original title, not your test title.
  7. Awesome, thanks! @primeval_god, queue up some more!
  8. Make sure you have the proper credentials stored in Windows Credential Manager. If you have no stored credentials, or incorrect ones, Windows will try to use your current Windows user name and password; once that fails, any further login attempts will fail as well. (A minimal cmdkey sketch for storing credentials follows this list.)
  9. Can you confirm we are allowed to merge threads in both feature requests and completed?
  10. No. What you are describing would require the parity drive to be larger than any single data drive by a huge margin. Text is stored as ASCII codes, which take up one byte (8 bits) per character, so a 1 in a text file takes up 8 bits to describe what currently takes up 1 bit. Same with a single 0. Plus all the bytes required to maintain the file system that contains said text file. (A quick worked example follows this list.)
  11. Zip it. Text compresses extremely well. (See the compression sketch after this list.)
  12. Currently moderators are not able to move threads to subforums. I didn't realize that when I made the request. Could you please investigate? Thanks!
  13. Use the forum report feature: top right of the first post, click the three dots, then "Report". For the reason, put "Please move to completed" or "Please merge with http://forums.unraid.net/topic/(blahblahblah)".
  14. For ease of administration I recommend naming and setting up a unique instance of a database container for each site. Unless you are a database wizard and feel more comfortable managing a single db container with multiple sites using it, the advantages of multiple containers far outweigh any downsides. The additional storage required for each identical container is practically nothing, since they share image layers; the only extra storage is the database and config files, which can live in a uniquely named folder in appdata. The ability to blow away an entire database that's misbehaving without affecting other sites is very handy. (A minimal docker run sketch follows this list.)
  15. Thanks! @primeval_god, when you have the time and inclination, feel free to report posts that should be moved or merged, and we moderators will work through your reports whenever we can.
  16. Why is that surprising? That power supply is probably wasting at most 8 to 10 watts at those power draws, maybe a good bit less. For example, a 90% efficient supply delivering 90 watts pulls about 100 watts from the wall, wasting roughly 10 watts as heat.
  17. If you allow drives to spin down in extreme cold, there is a risk of failure when they spin back up at those extremes. You need to insulate the server so that the case and drive internal temperatures stay moderate, or risk multi-drive failure. Drives don't like extremes; too cold is just as bad as too hot.
  18. Try asking in the dedicated support thread for the container; you can access it by clicking on the container's icon in the webgui and selecting the support item.
  19. Feel free to report topic posts that are complete, or reference duplicate requests where the threads should be merged. @SpencerJ, if you could please create the "Completed" subforum along with Unscheduled and Boneyard, it would provide a good destination.
  20. This question really should be in the dedicated support thread for the binhex-delugevpn container, so please post follow-up questions there after reading binhex's helpful FAQ that he references in his sig: https://github.com/binhex/documentation/blob/master/docker/faq/vpn.md
  21. Make sure you are using SAS connections between the disk enclosure and your server. USB and eSATA aren't ideal, and may cause other issues.
  22. Use the "Stop" button first, to make sure the array stops cleanly before clicking the shutdown button. Unattended shutdowns always risk being unclean; if there are processes keeping disks mounted, there is only so much the shutdown process can do to force the matter.
  23. Done. I don't have a huge volume running through, so it may take a while to get results, especially since my instance really doesn't hang all that often, maybe 1 in 20 if that.
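
A minimal sketch for post 8, assuming a server named TOWER and placeholder credentials (swap in your own values). cmdkey writes an entry into Windows Credential Manager, so SMB connections stop falling back to your current Windows login:

      :: Run in cmd.exe; TOWER, myuser, and mypassword are placeholders
      cmdkey /add:TOWER /user:myuser /pass:mypassword

      :: Verify the stored entry
      cmdkey /list:TOWER

      :: Remove it again if you need to start over
      cmdkey /delete:TOWER

If a bad credential has already been tried, delete the stored entry and reconnect, since further attempts in the same session will keep failing otherwise.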
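To put numbers on post 10, a quick Python check of the eightfold inflation from storing bits as text characters (file system overhead not included):

      # Each text character costs one byte (8 bits), even though a single
      # parity bit carries only 1 bit of information.
      bits = "10110001"               # 8 bits of actual parity data
      as_text = bits.encode("ascii")  # the same bits written out as characters
      print(len(as_text))                   # 8 bytes = 64 bits on disk
      print(len(as_text) * 8 // len(bits))  # 8x inflation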
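For post 11, an illustration of how well repetitive text compresses, using Python's standard zlib module (the same DEFLATE algorithm zip files use); the sample log text is made up:

      import zlib

      # Highly repetitive text, typical of logs and diagnostics dumps
      text = ("2024-01-01 12:00:00 INFO array started, all disks present\n" * 10000).encode()
      packed = zlib.compress(text, level=9)
      print(f"{len(text)} bytes -> {len(packed)} bytes "
            f"({len(packed) / len(text):.1%} of original)")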
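And a minimal sketch of the per-site database layout from post 14, assuming MariaDB; the container names, passwords, and appdata paths here are placeholders, not a specific Unraid template:

      # One uniquely named database container per site, each with its own
      # appdata folder, so one can be wiped without touching the others.
      docker run -d --name mariadb-site1 \
        -e MYSQL_ROOT_PASSWORD=changeme1 \
        -v /mnt/user/appdata/mariadb-site1:/var/lib/mysql \
        mariadb

      docker run -d --name mariadb-site2 \
        -e MYSQL_ROOT_PASSWORD=changeme2 \
        -v /mnt/user/appdata/mariadb-site2:/var/lib/mysql \
        mariadb

Both containers share the same image layers on disk; only the data under each appdata folder is unique.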