Everything posted by JonathanM

  1. Theoretically a larger parity drive would be fine, especially if you precleared it before the clone operation to ensure the remaining space was definitely zeroed. However, this isn't something I've even come close to experimenting with; hopefully JorgeB, with all his test arrays, can say for sure whether it will work.
  2. I don't know. I've seen some people attempt it before, and they may have been successful. Searching the forums may turn up results, but I'm personally very uncomfortable putting my server at risk of being directly connected to the WAN, so I've always kept a physical link. That's why I asked whether you were passing multiple ethernet ports directly through to the VM. I know for sure that way works, and works well for me with some caveats. It's not officially supported, as Unraid expects to have WAN access during the boot process, so some plugins and services may not work or may need to be tweaked to function.
  3. I'm not familiar with using an interface shared with Unraid. I passed through 2 ethernet ports entirely to the VM, so Unraid has no access to those two ports; one is connected to the WAN, the other to the same switch as my Unraid ethernet port. I wanted as much isolation as possible so a misconfiguration or other issue couldn't accidentally expose my server directly to the internet. Plus, if the VM is down, it's easy to spin up my hardware pfSense box, and since it uses the same config, there's no change as far as Unraid is concerned; it still gets internet through the switch.
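     As a rough, untested sketch (the device name and offset below are placeholders, so double-check everything before touching a real disk), you could verify that the tail end of the larger drive really is zeroed after the clone with something like:

         # skip past the cloned portion (example: 8TiB worth of 1MiB blocks) and compare
         # the rest of the disk against /dev/zero; if cmp only reports EOF on "-" and no
         # "differ" message, the remaining space is all zeros
         dd if=/dev/sdX bs=1M skip=8388608 | cmp - /dev/zero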
  4. Are you passing through a supported network card with multiple ports?
  5. I wasn't aware there was an installer for the plus version. Link?
  6. Probably. Is it a good idea? Probably not. I currently run pfSense CE in a VM on one of my Unraid boxes. I keep current backups and can quickly migrate to a bare metal install if needed. I would never consider NOT having a bare metal router/firewall available on demand.
  7. Probably a Chinese counterfeit; whether that bothers you or not is up to you. You won't get support directly from LSI. The listing says they are for connecting a SAS controller to SATA drives, which I think is probably what you wanted.
  8. Be sure to compare the results and see if the quality matches your expectations.
  9. Try using mc. The text editor built in there is more friendly than nano or vi.
  10. Try 500GB, see if available space reduces by .5TB. 5GB is too small to see a difference.
  11. Try creating a very large empty file, and see if the actual used space increases. Maybe make the file half the total size of the remaining space on the drive. https://www.windows-commandline.com/how-to-create-large-dummy-file/
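     If you'd rather do it from the Unraid console than from Windows, something along these lines should work (the path and size are just examples, adjust them to your drive):

         # write a 500GiB file full of zeros so the space is genuinely consumed
         # (truncate -s would make a sparse file that doesn't actually use space)
         dd if=/dev/zero of=/mnt/disk1/dummy.bin bs=1M count=512000 status=progress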
  12. New Config removes the ability to recover a failed drive until parity has been successfully rebuilt.
  13. Technically either method is OK, since you are keeping the same number of drives. However... since you already went through the trouble of emptying two of the drives, there is no need to rebuild them from parity, so the only concern I see is if one of the three 4TB data drives decides it's time to die. Just remember to rejigger your drive exclusions after everything is in and formatted. Also, the format command is all-inclusive, so make sure before you press the button that ONLY the empty drives are listed as unmountable. A clean format is probably better than rebuilding the empty filesystem from the old drives anyway. Yes, there is a significant difference. The only question I have is: are you absolutely positive that all the drives except the 1TB and 3TB to be removed are in perfect shape?
  14. Attach diagnostics to your next post in this thread.
  15. https://forums.unraid.net/forum/76-deutsch/
  16. If the container mapping is done properly, you don't need remote path mapping.
  17. Post the docker run for radarr, sonarr, and qb.
  18. There's your problem. It's got to be /data in both containers; the mapping from /mnt/user/... to /data has to be identical in both.
  19. This appears correct; is it still identical? Problem is, I don't use this app, so I'm not familiar with all the possible locations for paths in the config. Your nested paths are what you'd get if the app appended data to the end of the config path instead of referencing it literally as /data.
  20. Sounds like you missed the opening / when you changed the path to /data.
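     As an illustration only (the host path and images here are placeholders for whatever you actually run), both docker run commands should carry the exact same volume mapping, e.g.:

         # same host share mapped to /data in both containers
         docker run -d --name radarr -v /mnt/user/data:/data lscr.io/linuxserver/radarr
         docker run -d --name qbittorrent -v /mnt/user/data:/data lscr.io/linuxserver/qbittorrent

     Inside both containers the files then show up under /data at identical paths, so radarr can use the path qb reports without any remote path mapping.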
  21. Cloning disks with poor health is better handled by ddrescue instead of dd; it's purpose-built to get every last bit of good data transferred to a new drive. Problem is, this is going to be a lengthy process just to get to the point where we even know IF we can get anything usable with parity emulation. Earlier in the thread there was a sense of urgency about getting the server up and running regardless of data loss on some of the drives. Where are we on the balance of time vs. the value of all the array drive data? Is there another PC, other than the server, available to work on data rescue? Maybe pull all the current parity array drives out, add the 14TB as disk1, put the cache pool back, and work from there?
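     If we do go the ddrescue route, the usual pattern looks roughly like this (device names and the mapfile location are placeholders only, and the mapfile needs to live on a disk other than the two involved):

         # first pass: grab everything that reads cleanly, skip the bad areas (-n),
         # and keep a mapfile so the run can be resumed and retried later
         ddrescue -f -n /dev/sdX /dev/sdY /boot/rescue-sdX.map
         # second pass: go back and retry just the bad areas a few times (-r3)
         ddrescue -f -r3 /dev/sdX /dev/sdY /boot/rescue-sdX.map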