JonathanM

Moderators
Everything posted by JonathanM

  1. If you already have the folders with data copied to the same disk slot numbers, and the same users set up, it should work AFAIK.
  2. Because nothing is mapped there. Think of containers like tiny virtual machines: they can only access their own folder structure, which is NOT shared with the host unless you map the location. Whatever you put in the "Backup Location" line will show up in the /media folder inside the container. If you want to point the container's /media folder at /mnt/remote/Computer_Images, that's what you need to put there. Right now you have it pointing at /media on the Unraid host, which exists only in RAM and will be gone when you reboot.
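As a sketch of what that mapping means under the hood, here is the equivalent plain `docker run` volume flag (the container and image names below are placeholders, not the actual container from this thread):

```shell
# Host path on the left of the colon, container path on the right.
# Anything the container writes to /media lands in
# /mnt/remote/Computer_Images on the Unraid host.
docker run -d \
  --name backup-container \
  -v /mnt/remote/Computer_Images:/media \
  yourrepo/backup-image
```

Unraid's template editor generates exactly this kind of `-v host:container` pair from the path fields you fill in.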
  3. Put the two 256GB drives in one cache pool, put the 1TB in another cache pool, and assign the system and appdata shares to the 1TB pool.
  4. If you are correctly using a VPN tunnel, the WAN IP and ports on your router don't apply. They simply aren't used. At all. If it's working properly, this container does all the port forwarding and updating that's needed.
  5. No. Instead of putting the SSD in the parity array, just add it as another cache pool.
  6. I think you will be pleasantly surprised by how well pfsense runs with only 1 core and 2 or 3GB RAM. I'm not familiar enough with hass to recommend resource allocation, but I doubt you will run into issues giving it a single core as well. I'm not saying you won't appreciate more power, especially as you start loading down the server with as-yet-unanticipated functionality, but I think for your core use what you have will work just fine.
  7. I have exactly this setup: a pfsense VM and a Windows VM with Homeseer. My advice? Keep a relatively current hardware-based pfsense box available, and figure out what's needed to migrate your pfsense XML backups from the VM to the hardware. Not because you will need it often (I've not booted my pfsense hardware in over 2 years), but because when you do need it, you don't want to be experimenting under pressure. Until everything is set up and smooth you may want to boot back and forth between the two, hardware and VM, and it's nice to have continuity of network availability. The trial version of Unraid CAN NOT BE SUCCESSFULLY USED AS A HOST FOR YOUR MAIN ROUTER. The licensed version removes the need to have a network connection to start, so there are no issues with a paid-for license, but you can't start the array with a trial unless you have active internet.
  8. Looks decent. You don't need to add a disk controller until you run out of onboard ports (7 or 8, depending on whether you populate the M.2). Since that case doesn't appear to easily support that many drives, I doubt you will need a disk controller unless you move to a motherboard with fewer onboard SATA ports. That motherboard appears to advertise how overclock-friendly it is, but DON'T overclock a server.
  9. If you are willing to wait for a drive recovery service... Send them all three drives and tell them they are members of a larger RAID pool. You only need one of the three, any one will do, but it must be bit-perfect end to end; you can't use just the files if you only have one drive recovered. If they can't fully revive one drive, or you don't want to wait and would rather use the array to recover, then you can have them pull the files off all three and put the data back by copying it. Once you do a new config and start using the array, you lose any chance of attempting recovery with any one of the three drives; at that point you must have the file content of all three to get all your data back. This would have been less painful if 2 of the 3 were your 2 parity drives, as they don't contain any files, so you would just need to recover the 1 data drive.
  10. Did you have the controller's "Override inform host" setting set properly in your previously working version? You MUST keep that set properly to use a containerized UniFi controller.
  11. Unless you are a masochist, I wouldn't do that. Only use fixed tags with this container unless you like randomly losing functionality.
  12. With only 8GB to work with, running 2 VMs is going to be a delicate balancing act. You can try increasing the RAM dedicated to the VMs in small increments, benchmarking real performance as you go, and stop as soon as an increment no longer produces a meaningful gain. Synthetic benchmarks won't mean much; you need to test real-world loads. You may find one VM needs more than the other to perform its duties well, but only give the VMs what they absolutely need to do their job. The host will use all the RAM you can spare to keep the VMs' emulation running as fast as it can. Think of the host as creating a motherboard, controllers, and disks out of thin air for each VM: the more resources you give the host, the faster those emulated computers can run. Personally I'd not try running 2 Windows Server VMs with less than 32GB RAM total in the host, giving each VM somewhere between 4 and 8GB depending on what it was running.
  13. Try reducing that to 2 each. The host needs resources.
  14. Your call, but I prefer the granular approach, one named script per function; that way it's easy to change schedules or whatever. Or just use # to disable or enable script lines. It's not like it takes much space either way.
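To illustrate the second approach, here is a sketch of one combined script where # toggles individual tasks (all script names and paths below are made up for the example):

```shell
#!/bin/bash
# Combined-script approach: comment a line out to disable that task,
# uncomment it to re-enable. Paths are illustrative only.
/boot/config/scripts/backup_appdata.sh
#/boot/config/scripts/prune_old_backups.sh   # disabled for now
/boot/config/scripts/sync_photos.sh
```

With the granular approach, each of those lines would instead be its own named script in the User Scripts plugin, each with its own schedule.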
  15. Is the drive slot only disabled, or is it also not mounting? Go to Tools > Diagnostics and attach the zip file to your next post in this thread.
  16. A disabled drive should be rebuilt. A non-mountable drive should have the check-filesystem directions followed. The two conditions are distinctly different, but they can happen at the same time if parity was not valid when the slot was disabled.
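For an XFS data disk, the check-filesystem procedure looks roughly like this (a sketch assuming disk1 is the affected slot and the array is started in maintenance mode; adjust the md device number to match your slot):

```shell
# Read-only check first: -n reports problems without writing anything.
xfs_repair -n /dev/md1
# If the report looks sane, run again without -n to actually repair.
xfs_repair /dev/md1
```

Always run against the /dev/mdX device, not /dev/sdX, so parity stays in sync with any repairs.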
  17. If you copy, you don't technically need to do the CA backup, but it would be a good idea to have it set up anyway. You can use mc at the console for copying; it's almost as easy as Krusader, a dual-pane file explorer, just text based. I haven't played with the multi-pool feature yet, but I suppose you could remove all the pool drives after you made the copy, then add back just the 2 new drives to the original pool, and it should work. IIRC, as soon as you start the array with no pool devices, you can then add previously configured devices into the pool and they will retain their data. As long as you don't see the ominous warning you should be ok.
  18. CA Backup / Restore? I suppose you could set up another pool since you are on 6.9.2, if everything is using /mnt/user paths you wouldn't need to change anything, just copy from one pool to the other.
  19. Follow the directions for completely replacing the cache: use the mover to temporarily relocate the data to the parity array.
  20. That, or put a line in the go file to copy the file from /boot/powerout to /etc/apcupsd and set its permissions to executable. However you are comfortable handling it.
  21. Unraid extracts the OS from archives on the USB into RAM on every boot, so if you don't store the script on a drive it won't survive a reboot. /boot is the USB flash drive; however, since it's FAT32, normal Linux permissions don't apply, so you will need to copy the script elsewhere to set execute permissions. The User Scripts plugin handles this for you if you wish.
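Combining the two posts above, the go-file lines could look like this sketch (the powerout filename comes from the thread; the /etc/apcupsd destination is an assumption based on the standard apcupsd config directory):

```shell
# Appended to /boot/config/go, which runs at every boot.
# Copy the script off the FAT32 flash drive so execute
# permissions can actually be set and stick.
cp /boot/powerout /etc/apcupsd/powerout
chmod +x /etc/apcupsd/powerout
```

Since /etc lives in RAM, this runs fresh on every boot, which is exactly what you want here.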
  22. Try https://forums.unraid.net/topic/60143-support-clowryms-docker-repository/ or https://github.com/haugene/docker-transmission-openvpn/issues
  23. Or just scroll to the top of the page and click on the recommended post.
  24. Which transmission container are you talking about? There are individual support threads for each version; since you are doing things in a non-standard way, you will need to figure out for yourself which thread to post in.