JustOverride

Everything posted by JustOverride

  1. 🙃 I thought this was implemented but when I went to actually use it today I see that it isn't. I guess we wait now.
  2. This seems like a great option for Unraid. It should be added to Unraid by default, IMO. I haven't added it yet just because I can't be bothered to stay on top of it to make sure it keeps working well. For example, Nextcloud is already pushing its luck with the random issues it has that require manual updates, etc.
  3. I updated without backing up the USB, restarted without stopping the array... lolyolo. Updated without issues, no parity check (everything went as expected). Thank you team!
  4. Not sure what kind of black magic you guys did, but my Unraid server runs about 50 W lower at idle. I read somewhere that someone mentioned they were seeing lower power usage from this update too. I'm still on 6.12.3 and my system is running pretty solid (I will upgrade once I'm ready to mess with ZFS; I need a few new hard drives too). Anyway, great job Unraid team!
  5. I can't find this location and/or file: "Upon boot, if all PCI devices specified in 'config/vfio-pci.cfg' do not properly bind, VM Autostart is prevented. You may still start individual VMs. This is to prevent Unraid host crash if hardware PCI IDs changed because of a kernel update or physical hardware change. To restore VM autostart, examine '/var/log/vfio-pci-errors' and remove offending PCI IDs from 'config/vfio-pci.cfg' file and reboot." I found the second one, the config file, but where can I find the first? (There's no var folder on the USB, and the 'logs' folder has other types of logs.) So I went ahead and deleted the 'config/vfio-pci.cfg' file. Hopefully that fixes it; if it doesn't, I'll edit and update. 1. If the system knows about the problem (it throws the error), why doesn't it just fix it? 2. Please see point one again.
  6. Getting this email when I start the server:
     Event: VM Autostart disabled
     Subject: vfio-pci-errors
     Description: VM Autostart disabled due to vfio-bind error
     Importance: alert
     Please review /var/log/vfio-pci-errors
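For anyone else hitting the vfio-pci warning from the two posts above, a minimal sketch of where those files live on a running server. The `BIND=` line format is my assumption based on how the vfio-pci config is commonly written; check your own file before editing.

```shell
# /var/log is a RAM filesystem on the running server (open a terminal/SSH
# session to read it); it is not on the USB stick, which is why there is no
# 'var' folder on the flash drive.
#   cat /var/log/vfio-pci-errors      <- lists the PCI IDs that failed to bind
# The config file is on the flash drive, mounted at /boot while the server runs:
#   cat /boot/config/vfio-pci.cfg

# Assumed line format ('BIND=<pci-address>|<vendor:device>'); to restore VM
# autostart, remove the offending address from this file and reboot.
line='BIND=0000:03:00.0|10de:1b81'   # hypothetical example entry
addr=${line#BIND=}
addr=${addr%%|*}                     # extract just the PCI address
echo "$addr"                         # prints 0000:03:00.0
```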
  7. Upgraded from 6.11.5 to 6.12 without problems. Everything seems to be working fine... but how do I reset the dashboard layout? I know I read it somewhere before, but I forgot, and it doesn't seem to be clearly visible. Found it... it's the blue wrench on the main panel.
  8. Thanks for the additional info. Now I know for sure I won't be touching that until it is fully released. 🙃 IMO, this update should be pushed back until it is ready for full ZFS support.
  9. 1. I think so, but it also depends on your use case. It is an RC, not a stable release, so I wouldn't put production or otherwise important data on it. 2. I'd actually like to know this too, but I assume not, as there has been no mention of it from what I've read.
  10. Awesome! Thank you for the update, can't wait to start using this. Looks like I may need to upgrade my server now.
  11. Just checking back to report on the issue. Seems like the slight OC I had on the CPU finally caught up with it. Plex's scheduled tasks, which are fairly CPU intensive, caused the CPU to thermal lock. With the OC turned off it now reaches 83°C (which is still a lot) but continues to run without issue. Looking to upgrade the cooler now, or maybe just replace the whole system, as it was an old gaming computer that became the server. Any recommendations?
  12. I have copied the syslog from the USB's log folder.
  13. Ok, turned that option to Yes. Meanwhile, here are the logs for this morning's crash using the previous settings I posted. Let me know if these help at all. Also, curious to know where in the logs you look? I was checking out \logs\syslog.txt
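A rough sketch of how one might grep a mirrored syslog for the moments before a hang. The `/boot/logs/syslog` path is my assumption for where the flash mirror lands; the filter pattern below is just a starting point, demonstrated on a simulated excerpt.

```shell
# When syslog mirroring to flash is enabled, the log survives a crash on the
# USB stick; on a running server you'd read the mirrored copy, e.g.:
#   tail -n 50 /boot/logs/syslog
#   grep -iE 'error|panic|oops|call trace' /boot/logs/syslog

# Demo of the same filter on a simulated two-line excerpt:
printf 'Jan 1 kernel: BUG: unable to handle page fault\nJan 1 sshd[210]: session opened\n' > /tmp/syslog.sample
grep -ciE 'bug|panic|oops|call trace' /tmp/syslog.sample   # prints 1 (one matching line)
```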
  14. Is that it? It doesn't let me select the local syslog folder, the only option is '<custom>'.
  15. Randomly, Unraid becomes unresponsive. VMs weren't running, just Dockers, and nothing new recently. Unraid cannot be reached from the web, and I can't check physically as it runs headless. Connecting a keyboard to the system to attempt a blind shutdown doesn't work either (the keyboard doesn't get power), so I have to force a reset/shutdown.
  16. Could you provide me with directions on how I would go about using rsync?
  17. So, asking for a friend... What is the best way (verifying data integrity) to move the data out of Unraid into another storage system like TrueNAS, or just back onto a regular disk? What is the actual safe process? Because technically there are a bunch of ways of achieving this. My assumption is: 1. Buy a new hard drive as large as the array. 2. Run preclear to make sure it is clean and to pre-test it. 3. Turn off Dockers/VMs so nothing is using the array. Then either... 4a. Add the hard drive as a UD and copy everything from the array ('/mnt/user/') using Krusader, or 4b. Add the hard drive to another system and copy everything over using robocopy (with a 10G NIC). 5. Wipe the USB and array drives and set up the new system. 6. Move the data back into the new system.
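One hedged way to do the "verifying data integrity" part of the plan above, regardless of which copy tool is used: build a checksum manifest before the move and re-check it at the destination. The demo uses temp folders; on a real server the source would be something like `/mnt/user/` (hypothetical).

```shell
# Demo with temporary folders standing in for the array and the new disk.
SRC=$(mktemp -d); DEST=$(mktemp -d)
echo 'payload' > "$SRC/file.bin"

# 1) Build a manifest of MD5 sums with paths relative to the source root.
( cd "$SRC" && find . -type f -exec md5sum {} + > /tmp/manifest.md5 )

# 2) Copy the data by whatever means you prefer (Krusader, robocopy, rsync...).
cp -a "$SRC/." "$DEST/"

# 3) Re-check the manifest against the destination; an 'OK' per file means the
#    bytes survived the move intact.
( cd "$DEST" && md5sum -c /tmp/manifest.md5 )
```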
  18. On the report after the parity check (the green notification), it says the check took longer than it actually did. This has happened twice. The schedule is to start at 8pm.
  19. I was having this issue for the longest time (so I had given up on opening the page at all). Removing all the pings, save, reboot, reassign, fixed it. Thank you.
  20. In THAT case, whatever uses the array would obviously go offline and stop working in your particular scenario. But what WE'RE asking about is the VMs/dockers that live on a standalone or cache drive. There could also be a way to mark a VM/docker as 'keep in cache/alive' or whatever, so that it is not moved to the array. Plenty of ways to implement it, really. (Edit) When you really look at it, what we want is to be able to keep our internet going while doing whatever requires the array to go down. pfSense, Ubiquiti, Home Assistant, etc. That's what we're really asking for here. These things run mainly in RAM too, so I'm not seeing what the big problem is.
  21. Same exact issue with me. Cannot get it going again.
  22. Upgraded to 6.11.1 from 6.11 without issues. Thanks LT team!
  23. (User tab) Group settings for users when selecting shares. Basically, each created group has pre-set user access for the shares, which will make it easier to assign access to users. (Share tab) Groups for selected disks (included, excluded). Perhaps have these be the same groups as the 'Spinup group(s):'. This would just make things easier to manage: make changes in one place (groups) and have them apply in their respective places automatically.
  24. When updating dockers there is a 'please wait' message even though everything is done. Not an actual issue, it can just be a little confusing. PS: It would be nice to add some colored text for completed/errors.