Everything posted by JonathanM

  1. Looks like it's not a single container, but a whole suite of containers. That complicates things quite a bit at the moment. If you want to learn, a good starting point would be docker-compose, which is a command line tool that gets installed manually on unraid.
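     A minimal sketch of the docker-compose workflow, assuming docker-compose is already installed; the image names, ports, and appdata paths below are made up purely for illustration, not taken from the suite being discussed.

# Write a compose file describing the whole suite (an app plus its database here).
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  app:
    image: example/app:latest
    ports:
      - "8080:8080"
    volumes:
      - /mnt/user/appdata/mystack/app:/config
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: changeme
    volumes:
      - /mnt/user/appdata/mystack/db:/var/lib/postgresql/data
EOF

# The whole suite then starts and stops as one unit.
docker-compose up -d
docker-compose down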
  2. Genuinely not trying to pick on you, but the way you're loosely tossing about two different paths makes me think you may not understand how important case sensitivity is in linux. What you posted in those two posts are different paths as far as linux is concerned, but windows would see them as the same, causing much confusion. Be very careful to keep your upper and lower case letters consistent in linux, else you will find yourself wondering where your data went.
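     A quick illustration from the console, using a throwaway folder; to linux these are two separate directories, while windows over SMB would treat the names as the same.

# Two folders whose names differ only in case happily coexist in linux.
mkdir -p /tmp/case-demo/Media /tmp/case-demo/media
ls /tmp/case-demo
# Media  media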
  3. A noble thought, but I don't believe @Djoss has any control over the program, only the packaging into an unraid-friendly container. Bugs in the program should be reported here. https://github.com/jc21/nginx-proxy-manager/issues
  4. I have something similar set up on purpose / accidentally. Think of it as extra security through obscurity. I block the admin page from WAN access, so the only way I can open it is locally with the IP directly, but when I access the domain it redirects. So I do the same thing, hit submit, retype, go.
  5. After you press submit and get the page error, change the address in the browser tab to http://<your IP address>/admin and hit enter. Don't open a new tab or anything, use the tab with the page error.
  6. I recommend keeping a hardware pfsense box or even just a plain old router preconfigured to power up and put in place to keep the internet live. Unifi doesn't need to be running for normal wifi, so all you need is a router; pretty much any old PC with a PCIe ethernet card plus onboard ethernet can be a backup pfsense box. I don't. I use the CA appdata backup plugin for container appdata, and VM's are backed up with guest native backup software. The destination for those backups is an unraid box at another physical location. Definitely. How that is accomplished can be addressed so many different ways, discussed in multiple locations on this forum. Yes and no. The config folder holds everything needed, but you have to keep the license key file with the physical stick. Backing it up is semi-manual for now, hopefully there will be an automatic solution soon. For now, you click on the word flash on the main tab of the GUI, and click "flash backup".
  7. Midnight Commander; you type mc into the console. Windows explorer may get hung up on the required appdata permissions, depending on what you are moving.
  8. Technically, there is no "correct" location, it can be placed anywhere you have permanent storage mounted. However, if you set up unraid fresh with no existing folders, it will be defined as /mnt/user/appdata with a cache setting of Prefer, which means it lands on disk1 if you have no cache pool defined, or on the cache if you do. Some containers don't play well with the /mnt/user path, so explicitly defining it as /mnt/cache/appdata works well if you have a cache pool. You don't have to move things, you can leave them where they are. If you want to move the pihole appdata folder to live with your other containers you certainly could: just stop the container, move the data using MC, edit the container to point to the new path, done. If you choose to move all your other containers instead, you would be moving a whole lot more stuff around. If it were me, I'd just move pihole in with the others.
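     A rough sketch of that move from the console, assuming the container is literally named pihole and using example paths; substitute your real old and new locations (mc works just as well as mv for the move step).

# Stop the container so nothing writes to the folder mid-move.
docker stop pihole

# Move the appdata folder in with the rest (paths are examples only).
mv /mnt/user/pihole-appdata /mnt/cache/appdata/pihole

# Edit the container template in the GUI so its host path points at the new
# location, then start it back up.
docker start pihole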
  9. http://amzn.com/dp/B00QVTVB84/ Tape that to the inside of your case. No chance of bumping or accidental removal. http://amzn.com/dp/B00B4G2YXA/ That drive should be very reliable. It ticks all the boxes: sturdy, good heat dissipation, USB2. I'm running the 16GB version in a couple of my servers.
  10. Have the new parity drives been tested over their full capacity by either the manufacturer's utility or some other method? If not, I'd suggest doing a parity check. If they have already passed a full scan while in their current positions, then just wait until your normal monthly parity check.
  11. All references to /mnt/user/Downloads must use the same container-side mapping for every container that passes information between them. If container A requests a file be downloaded to /data, container B must also reference /data, not /downloads, to be able to find the file seamlessly. Change the mapping and the application's internal references for any container that doesn't match the others. It doesn't matter which container(s) you change, but every container that uses /mnt/user/Downloads must internally see the same path; whether that's /data or /downloads doesn't matter as long as all of them are the same.
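     In plain docker run terms, a hedged example of what "matching" means here; the container names and image names are placeholders, only the volume mappings matter.

# Both containers map the same host folder to the SAME container-side path,
# so a file nzbget reports at /data/album also exists at /data/album for lidarr.
docker run -d --name nzbget-example -v /mnt/user/Downloads:/data example/nzbget
docker run -d --name lidarr-example -v /mnt/user/Downloads:/data example/lidarr

# This mismatch is what breaks the handoff (/downloads vs /data):
# docker run -d --name lidarr-example -v /mnt/user/Downloads:/downloads example/lidarr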
  12. Near the top right of the GUI there is a question mark inside a circle. That toggles the help text on and off in the GUI.
  13. Your mapped paths to /mnt/user/Downloads are different. You told nzbget to access it at /data, but you told lidarr to access it at /downloads. The container paths in both mappings must match.
  14. To properly size a UPS, you need two pieces of data: power draw and desired runtime. You haven't provided either. To find power draw, you need to measure the actual usage during a parity check, using something like a kill-a-watt. Be sure to include all the equipment needed to support the server; many people also plug their modem / router / switch into the same UPS. To find desired runtime, time how long it takes for your server to shut down from a typical state with all services and VM's running, then roughly double that time, so you can be sure the server finishes shutting down before the UPS gets below 50% power remaining. Once you know draw and runtime, it's simply a matter of looking at the data sheets for the UPS models in question to find the one that provides the time you need at your measured draw.
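     A worked example with made-up numbers, just to show the arithmetic; substitute your own kill-a-watt reading and timed shutdown.

# Hypothetical measurements: 250 W drawn during a parity check (server plus
# modem/router/switch on the same UPS), and a 5 minute shutdown from a
# typical running state.
DRAW_W=250
SHUTDOWN_MIN=5

# Roughly double the shutdown time so the UPS stays above ~50% charge.
NEEDED_MIN=$((SHUTDOWN_MIN * 2))

echo "Look for a UPS whose runtime chart shows at least ${NEEDED_MIN} min at ${DRAW_W} W"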
  15. Read the help text on the cache options. That may clear things up for you.
  16. Leave it empty. Having a spare slot makes troubleshooting and recovery SO much easier.
  17. Some containers have routines built in that can reset permissions to their needed state. Some don't care as much as others whether their permissions are mucked with. Long story short, no single answer. You'll just have to experimentally see what's working and how to fix the ones that broke.
  18. I'm far from an expert on this, but my understanding is that the drive initially couldn't read the contents of those sectors, but after a fresh write, it was then able to read them. I would be extra vigilant with that drive, since you don't have a definite reason WHY this happened. My guess is that there are large regions that just aren't good at keeping bits intact long term. With frequent rewrites of those areas, it may be ok, but stagnant storage may be risky. It could also be marginal power, where the write cycles to those zones previously weren't "forceful" enough. I would think that would have other effects as well, but who knows. Hard drives are analog devices that return binary data. When the analog voltage levels get too close to the margins that define 1 vs. 0, strange things happen.
  19. Unraid can handle it, provided the hardware can. VERY few motherboards can provide enough separable resources to set up 4 independent passthrough VM's. I think perhaps the linus tech tips youtube channel may have covered something similar. Above and beyond all that, my gut feeling is that you would be better off with low power thin clients or media players at the 3 auxiliary locations and only do passthrough on a single VM. My reasoning is that the power requirements for the hardware capable of 4 passthrough VM's is likely to be much higher. I think your power / cooling / physical isolation / vibration resistance is going to be the major challenge.
  20. When troubleshooting, make sure ALL of your shares that are exported (Export: Yes) are set to Private, not just the ones you are trying to make permanently private. Windows has a nasty feature of only allowing one set of credentials per server, so if any of your shares allow access without correct credentials, it won't even try any other credentials, even when manually entered.
  21. BTW, there is a much longer answer if you already have valid parity and want to rearrange things. The only gotcha for you at the moment is that share definitions using includes or excludes that reference disk numbers will need to be updated, as will anything that references specific disks instead of /mnt/user locations.
  22. The question not asked is seldom answered. When you did the new config to build parity, you could have put the data disks in any logical slot you wanted. Stop the current parity build, do another new config, this time put the data disks where you want them.
  23. That's been the normal thing to do for years now. If all your drives are active a large portion of the time, monthly may not be strictly needed, but it shouldn't hurt. The issue we are trying to avoid with monthly checks is that drives can silently fail, and you want to keep tabs on their health. If a drive in unraid is never read, it may not spin up for months at a time, assuming no reboots. It would really be bad for multiple little-used drives to fail to perform flawlessly when one of your heavy-use drives unexpectedly fails. It's also a good general server health thing: if a parity check ever comes up with any errors, it's a huge red flag that something went terribly wrong, and if a drive had failed while those errors were present, the rebuild would be corrupt. The only time it's "normal" to get parity errors is after an unclean shutdown, but that, in my opinion, is something that went terribly wrong. A server should never be shut down improperly if things are working as they should.
  24. When you are bulk copying data, caching slows down the process as a whole, because the data is transferred twice: once to the cache drive at line speed or close to it, and a second time from the cache drive to the main array. When you are dumping mass amounts of data, more than will fit on the cache drive, it's faster to just disable caching, turn on turbo write, and fill the disks as you see fit. Also, if you are spanning multiple disks with your shares, you have to be very careful when copying mass amounts at once: if your share settings (allocation, min free, split, inc / exc) are wrong, it's easy to overrun one of the disks, because some copy methods write the directory tree first before filling in the files, and if the folder already exists on a specific drive, unraid will try to use that drive.