Everything posted by JonathanM

  1. I believe it's a security feature, so that the VERY vulnerable IPMI port can be physically separated from the general LAN. I can definitely see a use case for a different switch and wiring infrastructure for all the IPMI ports in a rack of servers.
  2. As long as you disable the Docker and VM services, then yes. It's not enough to just stop the containers and VMs; the Docker and VMs tabs should be gone from the GUI during the move.
  3. Depends on the specific motherboard. Some models share ports, others are completely separate. From your description I'm guessing your model doesn't share duties, so you will need 2 connections. I have an X10SL7-F in one of my servers; I use the 2 LAN ports in a pfSense VM, a 10Gb card for Unraid, and the IPMI port is also connected. So I have 3 copper and 1 fiber connection on that specific machine.
  4. I think the issue is security, and verifying that everything is secure almost requires an outside authority. Yes, you can do self-signed certificates, but I'm not sure how Bitwarden would react to that. My Bitwarden is currently used in my home, on our phones (WAN), at my office (WAN), and by a few tech-savvy relatives. So, not so much the other side of the world, but definitely not all inside the LAN or VPN. I'm pretty sure most people use Bitwarden on mobile devices, and password security is especially important for those.
  5. You need to do that same du investigation in /mnt/user/appdata, since the share isn't just on the cache drive; it's scattered across the entire array.
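     A minimal sketch of that investigation, assuming the share is at the standard /mnt/user/appdata path (list the first-level subfolders by size, largest first):

         du -h -d 1 /mnt/user/appdata | sort -rh | head -20
         du -sh /mnt/disk*/appdata 2>/dev/null    # shows which array disks it has spilled onto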
  6. https://support.plex.tv/articles/202529153-why-is-my-plex-media-server-directory-so-large/
  7. One or more of your containers is keeping working data in the appdata share that should be in other shares. You need to determine which subfolders in appdata are taking up so much space; then we can look at the configuration for those apps.
  8. What I do is have the cAdvisor container starting first, with a 300 second wait. cAdvisor is a handy dashboard for container stats; it consumes hardly any resources and is handy for troubleshooting. If you absolutely can't install cAdvisor for some reason, then set your Plex container not to auto start, and manually fire it off after the array comes up.
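     For reference, the docker-CLI equivalent of a typical cAdvisor install looks roughly like this; on Unraid you would normally use a Community Applications template instead, and the port mapping and image tag here are assumptions:

         docker run -d --name=cadvisor \
           -p 8080:8080 \
           -v /:/rootfs:ro \
           -v /var/run:/var/run:ro \
           -v /sys:/sys:ro \
           -v /var/lib/docker/:/var/lib/docker:ro \
           gcr.io/cadvisor/cadvisor:latest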
  9. Which version was the pool created on?
  10. Apparently they were posted while I was typing my response. You are right, the only way forward that I can think of is to put the drives back where they will mount and go from there.
  11. Without diagnostics, my best guess is that the partition starting location was mangled by the enclosure. Try disconnecting one of the unmountable drives and see if the emulated drive mounts OK. If so, rebuilding onto the same drive should fix it, and if that works you would need to rebuild one drive at a time. It should go without saying that your data is at risk here, so I hope you have good backups.
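     If you want to check before rebuilding, something like this shows the partition table the enclosure left behind (/dev/sdX is a placeholder for the affected drive):

         fdisk -l /dev/sdX

     As far as I know, Unraid normally creates a single partition starting at sector 64 (sector 63 on older, unaligned disks), so a different start sector here would explain the unmountable drives.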
  12. If the VM has a proper address from the router, and browses the internet perfectly fine, then the solution has to be in the Ubuntu firewall. Nothing to do with Unraid.
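     Assuming Ubuntu's default ufw frontend, a quick check from inside the VM would be something like this (the subnet is a placeholder for your LAN):

         sudo ufw status verbose
         sudo ufw allow from 192.168.1.0/24    # placeholder LAN subnet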
  13. Keep in mind the container time delay you set is how long the system waits to start the NEXT container on the docker page. So to delay the plex container, set a time delay on the container immediately above plex. You can change the start order by dragging the containers up and down.
  14. Depends. There was a bug in some versions of Unraid that didn't set the RAID level properly, causing the drives not to be fully redundant. If you search the forum, johnnie.black has some posts on verifying and fixing it. The safest option would be to back things up before attempting a drive replacement.
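     Assuming a btrfs pool mounted at the usual /mnt/cache, the general shape of the check and fix is below; verify against johnnie.black's posts before running the balance:

         btrfs filesystem df /mnt/cache    # Data and Metadata should both say RAID1
         btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache    # convert single to raid1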
  15. Make sure the VM is bridged to your network and not just bridged to Unraid's internal NAT network. Don't use virbr0. Your VM should be able to get an IP from your router's DHCP and act just like a physical machine plugged into your switch.
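     One way to confirm which bridge the VM is actually using, with MyVM as a placeholder for the VM name:

         virsh dumpxml MyVM | grep -A 3 "<interface"
         # bridged to the LAN:     <source bridge='br0'/>
         # Unraid's internal NAT:  <source bridge='virbr0'/>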
  16. This exact question has been asked and answered on almost every other page of this thread. It gets old having people continue to ask before reading through the thread. It was not my intention to be annoying, I was just answering the question.
  17. Not practical to shrink it. As long as it passes a long self-test and the SMART numbers look good, I'd use it as the parity drive. All drives in the parity array are needed to rebuild a failed disk; the only thing special about the parity drive is that it contains no data, so if it fails, nothing is lost. If a data drive fails, all the other data drives must be read perfectly along with the parity drive to recreate the failed drive. So I'd much rather have a failed parity drive than a failed data drive: less risk. You can't put a data drive larger than either of the parity drives in the parity protected array.
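     For single parity the math is just XOR across every data disk, which is why every surviving disk has to read perfectly during a rebuild:

         parity   = d1 XOR d2 XOR ... XOR dn
         d_failed = parity XOR (XOR of all the surviving data disks)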
  18. Switch steps 2 and 3 and you should be fine. You can't fully unassign a disk slot until you do the new config, and if you complete the steps as written, parity will rebuild with the parity disks assigned at that point. Since you would rather build parity with the new drive, you save a bunch of time by swapping the parity drive out at the same time; parity has to be rebuilt anyway when you remove a drive, so why do it twice? In order:
     • Stop the array and set it NOT to auto start.
     • Power down, physically remove drive 7 and the old parity drive, and install the 10TB parity drive.
     • Power up and set a new config, keeping current assignments.
     • Go back to the Main page, assign all the changed drives to their final slots, and build parity.
     • After parity is built, replace the 2TB and 3TB drives the normal way.
  19. From the first page of this thread. https://forums.unraid.net/topic/84601-support-spaceinvaderone-macinabox/?do=findComment&comment=784480
  20. All the remaining drives must be accurately read to recreate the missing disk, not just the parity drive. The rebuild will be limited by the read speed of the slowest drive / interface in the parity protected array.