Everything posted by JonathanM

  1. No harm in doing both at the same time.
  2. Shut down the server, replace the drive, boot the server, select the replacement drive in the parity slot, and start the array to rebuild parity.
  3. The screenshot you posted says "Additional Requirements: MySQL"
  4. Mounting the drives normally will write some data. If you mount the drives manually as read-only, parity should still stay 100% valid.
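     For example, if a drive shows up as /dev/sdX (substitute the real device and your own scratch mount point), a manual read-only mount looks roughly like this:

         # mount the partition read-only so nothing gets written to it
         mkdir -p /mnt/tmpro
         mount -o ro /dev/sdX1 /mnt/tmpro
         # ...copy off whatever you need, then...
         umount /mnt/tmpro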
  5. If you do the upgrade in maintenance mode, so the data drives are not mounted, the old parity drive would stay valid.
  6. You must change the listening port in the application itself, not in the container config.
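     To illustrate the difference with a generic Docker example (image and ports are placeholders): the right-hand side of a port mapping only forwards traffic to whatever port the application is already listening on; it does not change the application's own setting.

         # host port 8080 -> container port 80; editing "80" here does NOT
         # make the app inside listen on a different port, that has to be
         # changed in the application's own config file or web UI
         docker run -d --name example-app -p 8080:80 example/image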
  7. Are you sure your config location is mapped correctly?
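     If you want to verify from the command line (the container name here is just an example), docker inspect shows the host path each container path is actually mapped to:

         # check the "Mounts" section for where /config really points on the host
         docker inspect example-app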
  8. You should post in the support thread for the mover tuning plugin.
  9. You are supposed to post in the existing topic for the specific container, typically reached from the support link on the dropdown in the GUI. The support thread will usually have troubleshooting steps as well as others who may be experiencing the same symptoms and have already figured out the issue, so you may not even need to post; just read the first and last several posts in the thread.
  10. You can cancel that check and do your tests to hopefully fix the issue. Just make sure to do a correcting parity check after things have been sorted out.
  11. If you have any power splitters in use, those can definitely cause these kinds of strange issues. So can a weak PSU that is on the edge of just enough power, but not fully failed yet.
  12. Depends on what you intend. It will trigger a rebuild of parity based on the current content of the data disks. It will permanently remove the possibility of rebuilding a disabled emulated data disk. If you want to rebuild a data disk do NOT do a new config.
  13. You could set up your new array with your current single-drive cache pool reassigned as a single array disk1; that would take care of the array disk requirement. Now that you have more fully described your setup, here is a 10,000 ft overview of what I would recommend:
      1. Move your current Unraid setup intact (boot USB license stick, array drives, and cache SSD) to the new board. Make sure it's functional; as you said in #5 in the OP, you would need to redo any CPU and board hardware specific stuff.
      2. Add your new SSDs into a newly defined pool and name it as the final primary storage. The name doesn't particularly matter, but you will be referencing it pretty much from now on, so something descriptive like mainzfs would work.
      3. Set the new ZFS pool as primary for all desired shares that are currently on the array, set the array as secondary storage, and set the mover action from array to ZFS pool.
      4. For shares that currently live on the SSD cache pool, set the array as primary storage and the cache pool as secondary, with mover action from cache pool to array.
      5. Disable the VM and Docker services in Settings. NOT JUST STOP THE CONTAINERS AND VMS, you must disable the services. There should be no VM or Docker tabs in the GUI when done.
      6. Run mover (a command-line sketch follows this post). Wait. For many hours. Check if mover is still running. Wait more hours.
      7. After mover is done, for any shares that were on the cache pool and should now be on the array, set the ZFS pool to primary, the array as secondary, and the mover action from array to ZFS pool.
      8. Run mover. Wait. Hopefully not nearly as long this time.
      9. Set all shares to the ZFS pool as primary, secondary to none. Verify all files are now on the ZFS pool.
      10. Execute a new config, preserve all, apply. Go to the Main page, remove all the current array assignments, remove the cache pool, and assign the cache pool SSD as disk1 in the array. Leave the ZFS pool alone.
      11. Start the array and format array disk1 if needed.
      12. Verify the Docker and VM services are pointing to the ZFS pool correctly, then start the services.
      There may be nuances and dragons in each of these steps. Don't assume, ask questions. All my instructions ASSumed you are using user shares as intended; if you were actually pointing to specific disk paths, those will need to be amended to the corresponding user shares at some point in the process. Exclusive mode should also be enabled after everything is on the new ZFS pool.
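     For step 6, a rough command-line sketch, assuming a stock Unraid install where the mover script is on the default path (verify against your version before relying on it):

         # start mover manually (same effect as the Move button on the Main page)
         mover
         # later, check whether mover is still running
         pgrep -f -a mover

     If pgrep prints nothing, mover is no longer running.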
  14. Normally the docker system and container appdata as well as VM files all reside there. Where are yours?
  15. Each pool is a single destination, no matter the number of individual drives in the pool. You will still need a drive assigned to a disk slot in the parity array for now; this can be a spare USB stick if you want, but it's required to have a drive assigned to the main array. This requirement may (will?) change in the next major version of Unraid. Do you currently have any pools defined?
  16. Have you tried all the different connections on the cards? Sometimes only one output will go live.
  17. htop doesn't show iowait by default; the dashboard includes it.
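     If you want to watch iowait from a shell anyway, the standard tools expose it; for example:

         # the "wa" value in the %Cpu(s) line is iowait
         top -b -n 1 | head -5
         # or, if iostat (from sysstat) is available, a rolling CPU report with %iowait
         iostat -c 2 5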
  18. I got the impression you weren't using the array for your SSDs, except for the one you just mentioned, the 870 EVO.
  19. How did you adapt those instructions to apply to Unraid?
  20. Did you copy your license key file back into the config folder?
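     Assuming the flash drive is mounted at /boot as usual, a quick way to confirm the key file is back where it belongs:

         # the license key should be a .key file inside the flash config folder
         ls -l /boot/config/*.key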
  21. On that Main GUI page, click each blue disk link and set the disk format type. BTW, formatting in UD before adding drives to the array is not recommended; it's much better to let them be formatted after they are assigned to the array.
  22. Realtime parity calculation and updates for all writes.