Everything posted by itimpi

  1. What Manual edit? Is this something available in the GUI that I am missing?
  2. I use an SSD that is not part of any pool so it remains mounted even when the array is stopped. Mounting/unmounting of that drive is handled via the go/stop files. On the basis that the GUI is not going to change, do you think it would be wrong to add information to the online documentation on how to manually set up a syslog path to a disk that UnRaid is not controlling (with a warning that getting it wrong might have dire consequences)?
  3. The “prefer” setting means move files from array to cache as long as there is room on the cache. The “yes” setting means new files are initially created on the cache and then later moved to the array (as long as they are not currently in use).
  4. Have you checked that none of the ‘appdata’ share is on the main array? If it is that would explain such a huge difference.
  5. That suggests that the flash drive is not mounted at /boot as it should be, and as a result the /boot location is just in RAM. If you go into the console you can try the ‘df’ command to see what (if anything) is mounted at /boot.
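     As a quick illustration (the output described here is typical, not taken from your system):

       df -h /boot
       # healthy system: a vfat device such as /dev/sda1 shown mounted at /boot
       # if the Filesystem column shows rootfs or tmpfs instead, /boot is only in RAM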
  6. At the moment you cannot have more than 2 parity drives (although I wonder if that will increase in the future?). At least with UnRaid each drive is a self-contained file system so even if you have more drives fail than you have parity drives all other drives will still be readable.
  7. I already log off the array, but I had to manually edit the config/rsyslog.cfg file on the flash drive as the web GUI does not allow entering the path manually. The GUI should probably allow this to avoid the manual edit. I have a small ‘spare’ SSD in the system that is set up so I can boot Windows on the Unraid server if needed. I mount this drive on system startup via an entry added to config/go on the flash drive and unmount it on system shutdown via an entry in the config/stop file, and I write the syslog to this drive via the syslog server. It could just as easily be an unassigned device plugged in via USB that can then be read on another system if needed.
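     For anyone wanting to set up something similar, the entries look roughly like this (the device path and mount point are made-up examples, so substitute your own):

       # fragment added to config/go on the flash drive (runs at startup)
       mkdir -p /mnt/syslogdisk
       mount /dev/disk/by-id/ata-EXAMPLE_SSD-part1 /mnt/syslogdisk

       # fragment added to config/stop on the flash drive (runs at shutdown)
       umount /mnt/syslogdisk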
  8. Once you have a VPN in place it can be used to access anything on the Unraid server. If configured correctly then anything on the home LAN can be accessed. It is normally mentioned in the context of accessing the GUI as that is not hardened against attack from the internet. WireGuard is an alternative to OpenVPN and is built into Unraid. It has the advantage that it runs even when the array is stopped.
  9. It appears that the 'Domains' share (presumably holding VM vdisk files) is taking up most of the space on the cache. I notice that the 'Unraid Data' share has a Split Level setting of 1 - is that intentional, as it is very restrictive? It is not clear why mover, which you say is running, is not moving the contents of that share from cache to array. Turning on mover logging, trying to run mover manually from the Main tab, and then getting new diagnostics might help with working out why.
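     If the console is more convenient, mover can also be invoked directly; a minimal sketch, assuming the standard script location:

       /usr/local/sbin/mover
       # with mover logging enabled, its actions are then written to the syslog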
  10. Was there a reason you were not using a VPN link (via WireGuard on Unraid) which would be much more secure?
  11. Assuming you mean a parity drive then you can add (or remove) parity drives at any time.
  12. You can already do this using the syslog functionality as described here in the online documentation that can be accessed via the Manual link at the bottom of the Unraid GUI.
  13. If it got disabled then a write to it failed for some reason. If you are sure the drive is healthy you can clear the disabled state by rebuilding onto the drive as described here in the online documentation accessible via the ‘Manual’ link at the bottom of the Unraid GUI.
  14. As long as the controller is an HBA (such as the recommended LSI ones) and not a RAID controller there should be no issue. Unraid recognises disks by their serial number and does not care where they are connected as long as the serial number is passed through on the connection.
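      A quick way to confirm from the console that serial numbers are being passed through (output will vary by system):

        lsblk -o NAME,MODEL,SERIAL
        # every drive should show its real serial number; a blank or duplicated
        # SERIAL entry means the controller or enclosure is not passing it through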
  15. Recreating the Docker.img and its contents with previous settings is a trivial and quick operation. It is the appdata contents used by each docker container that really matter.
  16. Was there already a partition present which did not fill the drive? I think that can also cause this sort of symptom.
  17. This should just mean that the VM service is set as disabled. Going into Settings to re-enable it should get the tab back.
  18. This step was wrong. When you tell Unraid to format a drive it creates an empty file system on the drive (which wipes its contents) and updates parity to reflect the fact that you have done this. When you tried to format you would have got a big pop-up explaining this, but it sounds as though you told Unraid to proceed anyway. It is possible some disk recovery software (such as UFS Explorer on Windows) might be able to recover most of the drive contents, but this is by no means certain.
  19. Have you checked to see if there are any other partitions on the drive?
  20. The problem is that both drives are being reported with the same drive identification string whereas for correct operation UD needs them to be different (typically including the drive serial number). This is not uncommon with some USB docks that do not pass the drive identification through to the host and it is best to avoid using such docks with Unraid if at all possible.
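      You can see exactly what identification string the dock presents by listing the by-id entries (a sketch; your entries will differ):

        ls -l /dev/disk/by-id/ | grep -v part
        # two dock slots showing the same usb-... identifier confirms the dock
        # is not passing the individual drive serial numbers through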
  21. Just a few points of clarification: if you are using the default High Water allocation method and reasonable values for Minimum Free Space and Split Level in the share settings then there is no reason you should be running into disk-full issues on one drive while other drives are relatively empty. If a drive fails it will take the same amount of time to rebuild that drive regardless of whether it is 1% or 99% full.
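      As a worked illustration of High Water (the drive sizes here are made up): with 8TB data drives the initial high-water mark is half the largest drive, i.e. 4TB, so new files go to disk1 until its free space falls below 4TB, then to disk2, and so on; once every drive is below the mark it halves to 2TB and the cycle repeats, which is why the drives fill in broad steps rather than one drive filling completely while the others stay empty.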
  22. No idea, unless it was not really new. I notice that even running that test made the number of Pending Sectors go up significantly.
  23. The diagnostics you posted earlier had this in the SMART report for disk6:

        197 Current_Pending_Sector   -O---K   100   100   000   -   12600

      so I would say that disk is very sick and needs to be replaced. The syslog also backs this up with lots of errors being reported for that drive.
  24. A parity check is expected if the server froze and you had to use the power button to restart it, as you did not do a tidy shutdown. If you are getting it when you use the reboot button from the GUI then that indicates something is stopping the array from being stopped cleanly. To get help you should set up your syslog to be persistent across a reboot so that you can provide the log covering a crash, and also your system's diagnostics zip file (obtained via Tools->Diagnostics).
  25. Not sure why they are reported as different sizes, but in the syslog I see:

        Jul 31 18:46:46 Tower kernel: mdcmd (1): import 0 sdc 2048 1953513560 0 Samsung_SSD_860_EVO_2TB_S597NJ0NB20380F
        Jul 31 18:46:46 Tower kernel: md: import disk0: (sdc) Samsung_SSD_860_EVO_2TB_S597NJ0NB20380F size: 1953513560
        Jul 31 18:46:46 Tower kernel: md: disk0 new disk
        Jul 31 18:46:46 Tower kernel: mdcmd (2): import 1 sdb 64 1953514552 0 Samsung_SSD_860_EVO_2TB_S597NJ0NB18397B
        Jul 31 18:46:46 Tower kernel: md: import disk1: (sdb) Samsung_SSD_860_EVO_2TB_S597NJ0NB18397B size: 1953514552
        Jul 31 18:46:46 Tower kernel: md: disk1 new disk

      which suggests they are not exactly the same size. You could try swapping them over and see if UnRaid then lets you start the array.
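      If you want to compare the two directly, something like this from the console shows the partition layouts (device names taken from the log above, so check they still apply):

        fdisk -l /dev/sdb /dev/sdc
        # compare the reported sizes and partition start sectors (64 vs 2048 in
        # the import lines above)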