Everything posted by itimpi

  1. Your approach would work, but at the end you would have a parity2 drive and no parity1 drive, and some people prefer not to be in that situation. Another option is to replace the 8TB parity drive with the 16TB one and let Unraid build parity on that, keeping the old parity drive intact until the build finishes just in case something goes wrong. Once parity has been rebuilt on the 16TB drive, follow the standard process for adding a new drive to the array to add the old 8TB drive as a data drive.
  2. The diagnostics seem to show that parity, disk2 and disk3 are all offline and that there is no SMART information for any of them. Is there anything these drives have in common? At the very least you should carefully check the power and SATA cabling to those drives. Could you have a PSU issue?
  3. What version of Unraid are you running? Nerdpack is incompatible with recent Unraid versions.
  4. With your settings new files go to the primary storage location but existing files are left where they are. To fix this you need to temporarily set the array as secondary storage with the mover direction set to array->cache, disable the docker service, and then run mover. On completion you can remove the secondary storage setting and re-enable the docker service. You can also set Settings->Global Share Settings to allow Exclusive Shares; now that all the appdata files are on the cache drive, the appdata share will get the performance benefit of running as an Exclusive Share.
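     If you prefer to run the stop/move/start steps from a terminal session rather than the GUI buttons, something like the following should work. This is only a rough sketch that assumes the stock rc script for docker and the standard mover script being on the PATH; the share and mover-direction changes themselves still need to be made in the GUI first.

       # stop the docker service so nothing holds appdata files open
       /etc/rc.d/rc.docker stop
       # run mover and wait for it to finish transferring files to the cache
       mover
       # start the docker service again once mover has completed
       /etc/rc.d/rc.docker start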
  5. Not necessarily - there can be lots of reasons a drive gets disabled other than the drive itself being faulty, with the most common being power or SATA cabling related. When you say the drive passed SMART, was this the extended SMART test, which is normally a good test of the drive? You should post your system's diagnostics zip file in your next post in this thread to get more informed feedback. It is always a good idea to post this if your question might involve us seeing how you have things set up or looking at recent logs.
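     The diagnostics zip can be generated from Tools->Diagnostics in the GUI, or from a terminal session if that is easier; a minimal sketch (I believe the zip is written to the logs folder on the flash drive, but check the command output for the exact location):

       # generate the diagnostics zip from the command line
       diagnostics
       # the resulting zip should appear under the logs folder on the flash drive
       ls /boot/logs/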
  6. Since you have dual parity the system will be able to successfully handle 2 'failed/disabled' drives - the one being rebuilt and the one with a new red 'x' - although you are not protected against another drive having problems until the current rebuild completes.
  7. If you can boot in GUI mode then you can check whether you can access an internet address from the browser on the console.
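     A quick connectivity check from a terminal window works just as well; a minimal sketch (the addresses are only examples):

       # check basic IP connectivity (example address)
       ping -c 4 8.8.8.8
       # check that DNS resolution is also working
       ping -c 4 unraid.net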
  8. I could not see any sign of the Crucial drive in the diagnostics. Make sure nothing has worked loose.
  9. Did you actively restart the server or did it do it by itself? Just asking as the delay between the syslog-previous and the new syslog is only a minute or so, and automatic reboots are almost invariably hardware related.
  10. Have you tried following the procedure documented here in the online documentation, accessible via the ‘Manual’ link at the bottom of the GUI or the DOCS link at the top of each forum page? The Unraid OS->Manual section in particular covers most features of the current Unraid release.
  11. The syslog in the diagnostics is the RAM copy and only shows what has happened since the reboot. It could be worth enabling the syslog server to get a syslog that survives a reboot so we can see what happened leading up to the crash.
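     The syslog server is enabled under Settings->Syslog Server. If you use the option to mirror the syslog to the flash drive, the persistent copy should end up in the logs folder there; a minimal sketch of where to look after the next crash (path assumed from the default setting):

       # persistent copy of the syslog when mirroring to flash is enabled
       ls -l /boot/logs/
       # review the tail end for what happened just before the crash
       tail -n 100 /boot/logs/syslog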
  12. You can change the global utilisation warning and critical levels under Settings->Disk Settings, and can set custom values to override the defaults for any drive by clicking on it on the Main tab.
  13. Just to check the obvious, have you made sure that the SMB setting for the User Shares you want visible on the network is not set to Export=No (the default)? You need one of the other settings for them to be visible on the network.
  14. It is simply saying that disk1 is over the utilisation warning threshold currently set for that drive and is nothing to do with the shares.
  15. If you do not have a lost+found folder on the drive then that normally means all data has been recovered with no data loss.
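     A quick look from a terminal is enough to confirm; a minimal sketch (disk1 is just an example, substitute the disk that was repaired):

       # an empty or missing lost+found folder after a file system repair is a good sign
       ls /mnt/disk1/lost+found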
  16. Something is not quite right in your explanation, as the mover direction can only be to/from the array.
  17. No idea why that change should have any effect, as they are 2 different views of the same file - one going via the User Share view and the other via the physical device view.
  18. The syslog in the diagnostics is the one from RAM and starts from 21:42, which is when I assume you booted the server. To get a syslog that survives a reboot so we can see what leads up to a crash, you need to enable the syslog server.
  19. Never a good sign, but it is sometimes explained by the power connections to the drive not providing sufficient current.
  20. If you physically removed the disk then Unraid should be emulating it using the parity drive. Since you have single parity, it should be running a read-check rather than a proper parity check, to make sure all the other drives can be read correctly. However, that means you are currently unprotected against another drive having issues. Assuming the contents of the emulated disk1 look OK, then when it is replaced that content is what will be rebuilt onto the replacement.
  21. According to your diagnostics disk1 is the problem, as it appears to have dropped offline. The parity disk has 1 reallocated sector, which is not a problem in itself but is unusual in a new disk, so it is worth keeping an eye on that.
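     The easiest way to keep an eye on it is via the drive's SMART report by clicking on the drive on the Main tab, but it can also be checked from a terminal; a minimal sketch (replace sdX with the parity drive's actual device name):

       # show the SMART attribute table; watch the reallocated sector count over time
       smartctl -A /dev/sdX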
  22. That looks like it might be macvlan related, which is known to cause a server to eventually crash on the 6.12.x releases. You should either switch your docker containers to using ipvlan networking, or follow the instructions in the 6.12.5 release notes to continue using macvlan.
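     One quick way to check is to look for macvlan call traces in the log; a minimal sketch (this assumes the standard RAM syslog location):

       # macvlan-related crashes normally leave call traces mentioning macvlan
       grep -i macvlan /var/log/syslog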
  23. The previous settings were stored on the flash drive which is why you cannot edit the container settings. You can create a new docker.img file and then reinstall your docker containers with the same settings as used previously so they can pick up their existing appdata files.
  24. Have you checked that the share settings do not have Export=No in their SMB settings (which is the default)? That will mean they are not visible on the network.