itimpi

Moderators
  • Posts: 20,126
  • Joined
  • Last visited
  • Days Won: 55

Everything posted by itimpi

  1. We would need the diagnostics taken while the drives cannot be seen for them to be of any use in diagnosing the issue.
  2. You are likely to get better-informed feedback if you post your system’s diagnostics zip file taken while running the 6.10.x release. If you are running VMs and passing through hardware, have you checked whether the device IDs have changed (there is a sketch after these posts showing one way to list them)? Your symptoms suggest that the disk controller is not available to Unraid.
  3. Probably because you have not updated to the 6.10.1 stable release, so your Unraid version is too old.
  4. The drive order is irrelevant to Unraid, so something else is going on. Do you have a locally attached keyboard/monitor so you can see what is happening? If so, what is showing there? One possibility is that you have disturbed the network cable.
  5. There is a manual backup option available if you click on the drive on the ‘Main’ tab. Automated backups can be done via the My Servers or CA Backup plugins.
  6. It is up to you when you add the second parity disk. I would definitely recommend rebuilding the failed disk BEFORE upgrading the OS. You want the system to be as stable as possible before attempting anything as major as the OS upgrade. Normally it is fine, but you want to minimise the number of variables in case any problems ensue.
  7. Under Settings->Docker with advanced view enabled (Docker Custom Network Type). I think the array needs to be stopped to change it.
  8. How are the shares in question being mounted? If done via the Unassigned Devices plugin then it should handle unmounting automatically during a shutdown. Are you sure you do not have a telnet/console session active with its current directory on the array, as this will also stop the array from shutting down correctly?
  9. Your syslog is full of crashes on macvlan. This can normally be fixed by upgrading to 6.10.1 and using ipvlan instead under Docker. Later you start getting btrfs errors on the loop2 device, which is the one holding the docker image file, so this is corrupt and will need recreating. Not sure if there are also problems at the btrfs level on the cache pool.
  10. It is normally an order of magnitude faster than accessing the main parity protected array, so not sure what performance you were getting to give this impression.
  11. Doing this is almost certain to result in your server being hacked, so it is strongly discouraged. If the Unraid server is in the DMZ then ALL ports can be accessed from the internet. What release of Unraid are you trying to use? What URL are you trying to use to access the server? What setting do you have for SSL (you might need to look in the ident.cfg file on the flash drive; there is a sketch after these posts showing one way to check it)?
  12. You should enable the syslog server to get some log information that survives a reboot in case it happens again.
  13. You want the Split Level to be set to allow unlimited Split Levels.
  14. Yes, although you should ideally also do the same with the changes.txt file.
  15. The easiest thing is to download the zip for the 6.10.1 release from the Unraid download page and then extract all the bz* type files, overwriting the ones on the flash (a small sketch of the copy step follows after these posts).
  16. You click on the pool on the Main tab; scroll down to the Balance section; select the Single option from the dropdown and then start the Balance, as described in the online documentation accessible via the ‘Manual’ link at the bottom of the GUI (a rough command-line equivalent is sketched after these posts).
  17. Are you using the 6.10.0 or the 6.10.1 release? The latter has a fix for the UNRAID partition on the flash not being partition 1, which looks like it could be your problem.
  18. Your syslog is being spammed with messages of the form:
      May 20 11:59:45 AJRFCOUNRAID emhttpd: *** bug: 5366 /mnt/user/appdata ssdcache
      May 20 11:59:45 AJRFCOUNRAID emhttpd: *** bug: 5366 /mnt/user/virtualmachines ssdcache
      According to your diagnostics you have shares configured to use a pool called ‘ssdcache’, but this pool does not exist. That in itself is not necessarily a problem, as Unraid would then simply write to the array instead, but I think you have something (probably a docker) configured to use /mnt/ssdcache, so this location is getting created in RAM, which starts getting used, and anything written to it will be lost on reboot (there is a small sketch after these posts showing how to check whether such a path is a real mount point). The only SSD device I can see seems to be part of the array as disk5, which is not what you want, as having the SSD as part of the array will significantly impact its performance due to the overhead of updating parity for all writes.
  19. Have you set the BTRFS profile to Single and then run the required Balance?
  20. If you have a single-device pool then XFS seems to be more stable. If you have a multi-device pool then btrfs is your only option.
  21. You could simply try removing all the current vfio bindings and then redoing them?
  22. I would suggest you post your system's diagnostics so we can see if there is anything obvious that might be causing the excessive time estimate.
  23. Are you passing any hardware through to the VM? If so, it is worth checking to see whether the IDs for it changed on 6.10.0, as it has a much newer Linux kernel.
  24. You are likely to get better-informed feedback if you post your system’s diagnostics zip file. If a disk shows a red ‘x’ then this means that a write to it failed and Unraid has stopped using it (put it into a ‘disabled’ state). If you have sufficient parity drives then Unraid will be emulating the drive, showing its contents by using all the other array drives plus the parity drive to work out what should be on it. The normal way to clear the disabled state is to rebuild the physical drive to match the emulated one. Do the emulated drive(s) show the expected content, and do they have a lost+found folder? Your answers plus the diagnostics would help with deciding if this is the best next step.
  25. Yes. I have always wondered why a dummy file that does nothing is not included as standard.
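
A minimal sketch related to posts 2 and 23: one way to list PCI addresses with their vendor/device IDs so the output can be compared before and after an upgrade. It assumes a standard Linux sysfs layout under /sys/bus/pci/devices and is only an illustration, not an official Unraid tool.

    from pathlib import Path

    # List every PCI device address together with its vendor:device ID pair.
    # Save the output before and after the upgrade and diff the two files.
    for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
        vendor = (dev / "vendor").read_text().strip()   # e.g. 0x8086
        device = (dev / "device").read_text().strip()   # e.g. 0x10d3
        print(f"{dev.name}  {vendor}:{device}")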
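
A minimal sketch related to post 11: printing the SSL-related lines from ident.cfg. It assumes the flash drive is mounted at /boot with the file at /boot/config/ident.cfg and that the file uses simple KEY="value" lines; the exact key names can differ between releases.

    from pathlib import Path

    # Print any SSL-related settings from the flash drive's ident.cfg
    # (assumed location: /boot/config/ident.cfg).
    for line in Path("/boot/config/ident.cfg").read_text().splitlines():
        if "SSL" in line.upper():
            print(line.strip())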
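
A minimal sketch of the copy step from post 15, assuming the release zip has already been extracted to a local folder (the folder name below is hypothetical) and the flash drive is mounted at /boot.

    import shutil
    from pathlib import Path

    extracted = Path("/tmp/unraid-6.10.1")   # hypothetical extraction folder
    flash = Path("/boot")                    # flash drive mount point on Unraid

    # Copy every bz* file from the extracted release over the one on the flash.
    for bzfile in sorted(extracted.glob("bz*")):
        print(f"copying {bzfile.name} -> {flash / bzfile.name}")
        shutil.copy2(bzfile, flash / bzfile.name)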
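
For post 16, the GUI route is the recommended way, but for reference a Balance with the Single option is essentially a btrfs profile conversion. A hedged sketch of invoking it from Python, assuming the pool is mounted at /mnt/cache; the exact options the GUI uses (in particular for metadata) may differ.

    import subprocess

    POOL = "/mnt/cache"   # assumed mount point of the pool

    # Convert the data profile to single; a metadata conversion (-mconvert=...)
    # may also be wanted depending on the profile you are aiming for.
    subprocess.run(
        ["btrfs", "balance", "start", "-dconvert=single", POOL],
        check=True,
    )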
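
A minimal sketch related to post 18: checking whether a path such as /mnt/ssdcache is a real mount point (a pool or disk) or just a folder created in RAM on the root filesystem. The path is the one mentioned in that post and would need adjusting for other setups.

    import os

    path = "/mnt/ssdcache"   # the pool path referenced in post 18

    if not os.path.exists(path):
        print(f"{path} does not exist")
    elif os.path.ismount(path):
        print(f"{path} is a real mount point")
    else:
        print(f"{path} exists but is NOT a mount point; anything written "
              "here lives in RAM and will be lost on reboot")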