Everything posted by JonathanM

  1. I agree this is needed, but I suggest not relying on the host VNC for daily use. Instead, set up something like NoMachine in the client, and relegate the host VNC to situations where you need bare-metal style management.
  2. ECC memory should issue a code that shows up in the logs, or if it's bad enough, it should hard lock the machine to keep it from corrupting more data.
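     A quick way to check for ECC error reports in the log, assuming your platform surfaces them through the kernel's EDAC/MCE machinery:
     ```bash
     # Search the syslog for EDAC / machine-check / ECC error reports
     grep -iE 'edac|mce|ecc' /var/log/syslog
     ```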
  3. https://www.linuxfordevices.com/tutorials/linux/screen-command#4-Listing-active-sessions
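     For reference, the listing described at that link boils down to:
     ```bash
     # List active screen sessions for the current user
     screen -ls

     # Reattach to one of them (replace <session> with a name or PID from the list)
     screen -r <session>
     ```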
  4. Where? Yeah, it's a decent concept, and could be workable with enough thought and effort, but... it's easy enough to avoid, and I'm guessing you won't do it again. The effort to implement is way down on the time / cost / benefit list of improvements to Unraid. I'd rather see them work on documentation.
  5. NO. Only enable it on often-used problem shares. Enabling it on everything will likely result in worse performance, and drives that never spin down. It works by reading the directory listing at frequent intervals, so that when a listing is asked for, it will likely already be in RAM. If you ask it to read too many items, they won't all stay in RAM, and re-reading them over and over will keep the drives spun up.
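     To illustrate the mechanism (this is not the plugin itself, just the idea), the caching amounts to walking the directories you care about on an interval so their metadata stays in RAM; the share name is only an example:
     ```bash
     # Walk one share's directory tree so its metadata stays warm in the page cache.
     # The plugin repeats something like this on a timer; /mnt/user/Movies is an example share.
     find /mnt/user/Movies -noleaf > /dev/null 2>&1
     ```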
  6. Well, that's a noble sentiment, and it could be true, but it's more likely false. Unraid's realtime parity means that when a drive is missing, it's immediately emulated. So any writes, even just to metadata, or closing an open file, will happen on the emulated drive and be out of sync with what's on the physical drive that was removed without being unmounted. Blessing and a curse. Be careful with array drives. Don't accidentally remove drives without stopping the array. 🤣
  7. https://wiki.unraid.net/Manual/Storage_Management#Rebuilding_a_drive_onto_itself Yes, the documentation is a work in progress. Unfortunately that's probably not going to be fixed quickly. However, you can always ask questions; we seldom bite. 🙂 Theoretically the most up-to-date docs should be linked in the Unraid GUI itself, via the blue Manual link in the bottom right corner.
  8. Why stay behind? 6.10.2 stable is the newest right now.
  9. Thank you for this comprehensive guide. I have a couple of thoughts and questions. First, the Docker internal DHCP addressing scheme sits somewhere in the 172.x private range by default. That may be confusing to some, and may need some care to avoid conflicts. Second, if things don't work for some reason and the configuration is reset to defaults, there is a very high chance of Unraid making the WAN-connected port eth0, exposing your server naked to the outside world. Are there precautions you could implement, perhaps trying to ensure the passed-through WAN port is NOT eth0 when Unraid is booted with a completely wiped VFIO and network config? I'm not even sure that would help, with the defaults being so permissive for ease of setup. I personally pass 2 motherboard ports through to my pfSense VM, one connected to the WAN and one to a managed switch for the LAN, and have my Unraid box talk to that same switch through a 10Gb PCIe card. The only reason I'm semi-secure doing that is that my internet requires a full static setup to talk on the WAN; there is no DHCP offered on the modem, so even if I accidentally connect directly to it, nothing can connect. I guess what I'm saying is, here be dragons, don't attempt it if you don't understand it.
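     A quick sanity check of which NIC Unraid has claimed as eth0 before passing anything through; the network-rules.cfg path is my assumption of where Unraid pins MAC-to-interface assignments, so verify it on your own box:
     ```bash
     # Show every interface, its MAC address, and link state at a glance
     ip -br link

     # Where Unraid pins interface names to MAC addresses (path assumed; check your system)
     cat /boot/config/network-rules.cfg 2>/dev/null
     ```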
  10. Depends on your specific use case. I'd set up a second pool with only the new NVMe, and move whatever I used the most to it. If you daily drive your VM(s), put your most-used one there. If you mostly serve media using a container, move that container's appdata. I'd recommend analyzing your everyday use, and seeing whether the extra speed is worth applying to whatever you do the most.
  11. USB connections to parity array drives are not recommended for multiple reasons, mainly the lack of stability. USB tends to drop and reconnect for no apparent reason, and normally that's handled relatively seamlessly, but for Unraid all the drives in the parity array must maintain a rock-solid connection; if a drive drops even temporarily, it will be disabled. I don't think a NUC is a good candidate for what you are trying to do, unless you can figure out a way other than USB to connect the array drives.
  12. I'm not familiar with ZFS, but the only issue I see with your process is making sure none of your containers or configs reference /mnt/disk1 (a quick way to check is sketched right after this post). Verify that before you proceed. If you set up with all stock /mnt/user references, then you are good to...
      1. Assign one of your new 18TB drives as parity. Let it build, then do a correcting check. Zero errors is the only acceptable result. Assuming no errors...
      2. Assign a second new 18TB drive as Disk 1. Let it rebuild, then do a non-correcting check. Zero errors, etc...
      3. Create a new pool, call it cache. Assign the two original 256GB array SSDs to it and format it. Verify the resulting BTRFS RAID1 is healthy.
      4. Disable the docker and VM services, not the containers themselves; make sure there are no DOCKER or VMS items listed in the menu, the words should be gone entirely.
      5. Run the mover. If everything was left stock, all the appropriate shares should transfer themselves to the new pool named cache. You can check by going to the Shares tab and using Compute All; it will tell you what lives where.
      6. Assuming the system, domains, and appdata shares are all on the "cache" pool now, re-enable the docker and VM services. At this point all your stuff should be working exactly as it was when we started, except that you have a new pool named cache, and Disk 1 is 18TB with a bunch of free space.
      7. Add another new 18TB as Disk 2. Let it clear, then format it.
      8. Create shares for the data you want to migrate from the ZFS pool.
      9. Copy as much as you can fit into the 36TB of free space on the main array.
      At this point you have a choice: purchase another 18TB drive to keep both the Unraid main array parity protected and the ZFS pool intact while you copy the rest of the data, or degrade one of them by dropping either the parity drive or one of the ZFS array drives. I am not familiar enough with ZFS to tell you which is the safer choice if you degrade one; my advice is to have enough drives to keep both redundant. Once you have decided how to proceed, purchasing another drive or degrading one of the arrays by dropping a drive, add that drive as Disk 3 and complete the data copy.
      10. Create another pool with 2 members; call it whatever you want, "transfer", "scratch", "cache2", pretty much anything besides cache. Assign it to whichever shares you want it to work with.
      11. After everything is working as desired, do whatever is necessary to dissolve the ZFS array, and keep those disks unused until needed, either to replace failed drives or as additional space when the main array drops below 18TB free. I don't recommend adding more drives than you are actively using; it's a waste of drive hours and electricity.
      What I have outlined is by far not the quickest method; I tried to keep everything as safe and simple as possible, with verifiable progress and steps that can be undone or redone if things aren't working as planned.
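      A minimal sketch of the /mnt/disk1 reference check mentioned above; the template and libvirt paths are assumptions about where Unraid keeps docker templates and VM definitions, so adjust them to your own setup:
      ```bash
      # List any docker templates or VM definitions that hard-code /mnt/disk1
      # (paths are assumptions -- verify where your configs actually live)
      grep -rl '/mnt/disk1' /boot/config/plugins/dockerMan/templates-user/ 2>/dev/null
      grep -rl '/mnt/disk1' /etc/libvirt/qemu/ 2>/dev/null
      ```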
  13. Does it act the same on the release version 6.10.2? Collect diagnostics after the shares are gone and post in the general support area; be sure to attach the diagnostics zip file. This is unlikely to be a bug.
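      If the webGUI is misbehaving, diagnostics can also be collected from a terminal; as far as I know the command below writes the zip under /boot/logs, but confirm on your version:
      ```bash
      # Collect an anonymized diagnostics zip from the command line
      diagnostics
      ```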
  14. Still needs explanation. I'm running several different Unraid boxes with VMs, some with hardware passthrough, and none are using UEFI boot.
  15. Are you using GPU transcoding? Did you update and make sure the GPU drivers were installed after the update?
  16. Every bit of every drive in the parity array, whether filled with data or empty, is part of the parity equation (see the sketch below). The upshot is that if an empty drive fails, a full drive that fails before the empty one has been rebuilt cannot be rebuilt either, because the bits on that empty drive are still part of the rebuild calculation. So, best practice to reduce points of failure is to only add drives that you need to actually hold data, plus a margin of empty space on each drive. Personally, I keep as much array space empty as my largest data drive; for example, if I was using 6TB drives I wouldn't add array drives until I had less than 6TB free across all the drives. That way, if you needed to, you can empty your largest data drive by moving the data to the others, at least temporarily for organizational reasons, and file systems need some free space for checks and housekeeping anyway. So, yes, all available drives CAN be installed right away, but I would strongly recommend NOT doing that. All drives will fail; predicting when is more art and mentalism than science. Always be sure ALL your array drives are healthy, replace promptly when needed, and keep good backups disconnected from the array to allow recovery from accidental deletion or corruption.
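      The sketch mentioned above: a toy illustration of single parity, where the parity value is the XOR of every data drive, so even an all-zero "empty" drive participates in any rebuild:
      ```bash
      # Toy single-parity example: parity is the XOR of every data drive's bits.
      d1=$((0xA5)); d2=$((0x3C)); d3=$((0x00))   # d3 is an "empty" drive; its zeros still count
      parity=$(( d1 ^ d2 ^ d3 ))
      printf 'parity  = %#04x\n' "$parity"

      # If d2 fails, it is rebuilt from parity XOR the surviving drives,
      # which is why every surviving drive, including the empty one, must read correctly.
      rebuilt=$(( parity ^ d1 ^ d3 ))
      printf 'rebuilt = %#04x\n' "$rebuilt"      # prints 0x3c, the lost contents of d2
      ```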
  17. ZFS is not a part of Unraid yet, so I'd advise asking in the thread on this forum for the third party plugin that enables ZFS.
  18. Enable the built-in server. Open the syslog and find the command that enabled it, corresponding to the time you enabled the server. Put that line in a script file and run it after the array starts; the User Scripts plugin works fine for this (rough skeleton below).
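      A rough skeleton of that script, purely illustrative; the command itself is whatever you find in your own syslog, and the placeholder path below is not a real Unraid binary:
      ```bash
      #!/bin/bash
      # User Scripts plugin: schedule this to run "At Startup of Array".
      # Replace the placeholder with the exact command you copied from /var/log/syslog
      # at the timestamp when you enabled the server in the GUI.
      /usr/local/sbin/example-builtin-server --flags-from-syslog   # hypothetical placeholder
      ```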