JonathanM

Moderators
  • Posts: 16094
  • Days Won: 65

Everything posted by JonathanM

  1. See here. https://forums.unraid.net/topic/48383-support-linuxserverio-nextcloud/
  2. Why are you still on the RC when the stable release of 6.12 is out?
  3. The shutdown status is recorded on the flash drive, so if that's not connected, Unraid has to assume the shutdown was unclean.
  4. How is it urgent? There is a workaround. Not a showstopper, data loss, or server crash.
  5. Keep in mind that IP range is not meant for internal private use; those addresses are owned by Microsoft, and you may run into issues if you use that range internally (see the address-check sketch after this list).
  6. https://forums.unraid.net/topic/46802-faq-for-unraid-v6/#comment-511923
  7. You can use the current version of Unraid with a trial license and assign all the drives as data drives; that should get you back in. If you know the email address you used to obtain your original license, you should be able to contact support with your info and get your license back. In any case, your drives are probably using ReiserFS as itimpi said, and you need to copy that data to different drives with a newer format, as ReiserFS is no longer being updated and will be discontinued in the near future.
  8. That worked, thanks! I added a few steps, but the end result was the same. I first selected auto as the file system type, and as expected, it imported the existing BTRFS pool. Then I did the erase and reset. BTW, the three-way default after I did the erase and selected ZFS was RAID0; I don't know if that's intended.
  9. Also, do you want to leave this report open until the GUI is fixed?
  10. So, what would be the best way to reset? 🤣 Well, at least I uncovered it early. Too bad I didn't try it during the RC cycle.
  11. Parity is NOT valid unless you successfully completed the dd to zero the drive (a zeroing sketch appears after this list). I was under the impression you gave up before it completed.
  12. Maybe order matters? I already had a 3-device pool that originally held the 3 SSDs in a BTRFS 3-way mirror, but the slots were empty. I clicked on the first device, changed the format to ZFS (1 group, 3 members), and applied the changes; assigned the 3 devices to the 3 empty slots and confirmed ZFS was still selected; then started the array and formatted the pool. I guess to recreate it, first set up a 3-way BTRFS pool, format it, stop the array, unassign the pool slots, start/stop the array to commit the change, change the first pool slot to ZFS, and assign the disks.
  13. Attempted to change a formerly BTRFS 3-way mirror to ZFS; "format" fails, and the error in the log is:
      emhttpd: cache: invalid profile: 'mirror' 3 0
      I have NOT wiped the devices and tried again, nor restarted and tried again; I wanted to leave things as is in case more info was needed to help deal with the original issue.
  14. You let parity rebuild, correct?
  15. Did you kill the original script? You may need to manually kill it using htop at the console.
  16. What items from that post did you implement? Perhaps you should set up the syslog server and post the crash log from that along with the diagnostics zip file.
  17. Looks good, assuming after the array is stopped and started again disk2 shows unmountable as predicted.
  18. I would make a current backup in case something goes wrong, but yes, theoretically it should keep the data.
  19. Soon™ is feeling more and more imminent. Possibly in the next few days. Perhaps it may even be time to start up the Soon™ 6.13 series thread with whatever speculations and rumours you have heard.
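A quick way to sanity-check whether an address range is actually reserved for private LAN use (item 5 above) is Python's standard ipaddress module. This is only an illustrative sketch; the sample addresses are hypothetical placeholders, not the range discussed in the original thread.

```python
import ipaddress

# Hypothetical sample addresses -- substitute the range you plan to use on your LAN.
candidates = ["192.168.1.10", "10.0.0.5", "8.8.8.8"]

for addr in candidates:
    ip = ipaddress.ip_address(addr)
    # is_private is True for the RFC 1918 blocks (10/8, 172.16/12, 192.168/16)
    # and other reserved ranges; publicly routed space returns False.
    status = "private, fine for LAN use" if ip.is_private else "publicly routed, avoid for LAN use"
    print(f"{addr}: {status}")
```

If an address you intended to use internally reports as publicly routed, it belongs to someone else on the internet, and using it on your LAN can cause routing and name-resolution oddities.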
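Item 11 refers to zeroing a drive with dd so that parity remains valid. As a rough equivalent, here is a minimal Python sketch of streaming zeros across an entire block device; the device path is a hypothetical placeholder, this is not the exact command from the thread, and the operation destroys everything on that disk.

```python
import errno

DEVICE = "/dev/sdX"      # hypothetical placeholder -- triple-check before running
CHUNK = 1024 * 1024      # write 1 MiB of zeros per call

zeros = bytes(CHUNK)
written = 0

# Unbuffered writes straight to the block device until it reports it is full.
with open(DEVICE, "wb", buffering=0) as dev:
    while True:
        try:
            n = dev.write(zeros)
        except OSError as exc:
            if exc.errno == errno.ENOSPC:
                break    # reached the end of the device
            raise
        if not n:
            break        # a zero-length write also signals the end
        written += n

print(f"Wrote roughly {written // (1024 ** 2)} MiB of zeros to {DEVICE}")
```

Whichever tool is used, the point from the post stands: unless the zeroing run actually completes across the whole drive, parity is not valid.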