itimpi

Moderators
  • Posts

    20,696
  • Joined

  • Last visited

  • Days Won

    56

Everything posted by itimpi

  1. Without diagnostics taken after this has happened (and before rebooting) it will not be possible to do anything other than guess at the cause.
  2. Do any of your docker containers refer to disk1 explicitly in their mappings? It should always be to /mnt/user/appdata/
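      A quick way to audit this is a sketch like the following (it assumes the `docker` CLI is available on the host, as it is on a stock Unraid server with Docker enabled): list each container's host-side mount sources and flag any that point at a specific disk rather than the user share.

      ```shell
      # For every running container, print the host paths mounted into it and
      # flag any that reference /mnt/diskN directly instead of /mnt/user.
      for name in $(docker ps --format '{{.Names}}'); do
        docker inspect --format '{{range .Mounts}}{{.Source}}{{"\n"}}{{end}}' "$name" |
          grep -E '^/mnt/disk[0-9]+' |
          sed "s|^|WARNING: $name maps |"
      done
      ```

      Any `WARNING:` line points at a mapping worth changing to the `/mnt/user/` equivalent.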
  3. You should have been able to get a trial key for that release. Perhaps a screenshot of exactly what happened at that point might give a clue as to why.
  4. At this point I would suggest you use the New Config tool to set up the array without disk1 or disk3, and build new parity based on those drives. You could also take the opportunity to add another drive at this point if you so desire (perhaps to hold any data you can recover from disk1). At that point you would have the array in a protected state, albeit without the data from the ‘failed’ disk1. In terms of recovering data off disk1, I suggest trying disk recovery software such as UFS Explorer on Windows to see if it can recognise any data on the old disk1. UFS Explorer is not free, but you only have to pay if you want to actually recover data - the free scan will tell you whether the paid-for version would be able to recover anything. There may be other free tools that can also do this, but UFS Explorer is the only one I have personal experience of (and have paid for). However, if the drive really has physically failed then such software will not be able to do anything with it. You may want to wait a while to see if anyone else chimes in with some way forward that I have not thought of.
  5. A disk must be all zeroes before it can be added to the array without affecting parity. The ‘clearing’ process is Unraid doing this by writing zeroes to every sector on the drive at the time you add the drive. The alternative is to write zeroes to every sector on the drive BEFORE attempting to add it to the array. This is known as Pre-Clear (currently best supported by the Unassigned Devices Preclear plugin) and can be done while the array is being used for other purposes. If you have done this, the pre-clear writes a special signature to the drive so that when you add the disk to the array Unraid will realise that the disk is all zeroes and make it immediately available. The pre-clear process can also be used to carry out a confidence check on the drive before using it in Unraid.
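      To make the "all zeroes" idea concrete, here is a minimal sketch that uses a 1 MiB scratch file as a stand-in for a drive (on a real disk the target would be a /dev/sdX device and `dd` would destroy its contents - which is exactly what clearing does, deliberately):

      ```shell
      # Write zeroes to a scratch file, then verify every byte really is zero
      # by comparing against /dev/zero for the same length (GNU cmp's -n limit).
      img=$(mktemp)
      dd if=/dev/zero of="$img" bs=1M count=1 status=none
      cmp -n 1048576 "$img" /dev/zero && echo "all zeroes"
      rm -f "$img"
      ```

      A pre-cleared drive passes exactly this kind of check across its whole capacity, which is why Unraid can trust it without re-clearing.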
  6. I know, but Unraid will not be able to do any recovery because as far as it is concerned you have 2 disks failed (it is treating the unassigned disk as failed). As I said your only chance is whether you can do any sort of recovery on disk1 outside Unraid.
  7. That was not sufficient to stop Unraid acting as if it was present. Unraid would have been emulating the disk it thought should be there, and you would be running with the array unprotected since you only have single parity. At this point your only chance of recovering your data is if the ‘failed’ drive has not physically failed but was marked as disabled for some other reason. If that is the case then you can probably get data off it. If it HAS failed then you will have lost its contents unless you can get them back from your backups.
  8. If it happens just make sure you get diagnostics before rebooting. You can then post them to get some informed feedback.
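      If the GUI has become unresponsive, the diagnostics can also be generated from the local console or an SSH session. A sketch (the `diagnostics` command and its /boot/logs output location are how I remember recent Unraid releases behaving, so treat both as assumptions to verify on your release):

      ```shell
      # Generate the diagnostics zip from the command line, then show the
      # newest file in /boot/logs - that zip is the one to attach to your post.
      diagnostics
      ls -t /boot/logs | head -n 1
      ```

      Because /boot is the flash drive, the zip survives the reboot even though the in-RAM logs do not.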
  9. Without a log any reply is just a guess. One reason that seems to cause these symptoms is the shfs process (which supports user shares) crashing. It can be a good idea to run memtest if it occurs with any frequency.
  10. Yes (unless you have activated the syslog server).
  11. I must admit I had not looked at the dates. When I look now, the latest log entry seems to be Feb 18th, which is a bit suspicious. I would have expected to see some recent entries from your attempts to get the new drive recognised. Maybe it is worth rebooting the server and then getting new diagnostics that are up-to-date?
  12. I do not think this is the case, but I am not sure. There are thousands of attempts, so it seems unlikely, but I could be wrong. If you look at the syslog files in the 'logs' folder within the diagnostics you will see what I mean.
  13. That seems to be a not uncommon problem. It is as though on some systems the bz* type files are not written out correctly.
  14. It appears that you do not have the option to run manual checks in increments set. In that case the plugin is meant to resume the check once the disks have cooled off, but it did not. If that option WAS set then it would be correct to pause until the next increment window. Hopefully the diagnostics with the Testing logging mode set will allow me to pin down more exactly what happened. If not, I will have to build in some additional logging to get to the root cause.
  15. The diagnostics show multiple attempts to log into your server from a variety of internet addresses. Do you have your server exposed to the internet (or in your router's DMZ)? Unraid is not hardened enough to be directly exposed to the internet. If you want to access Unraid from the internet then you should be using either the My Servers plugin or securing the link with a VPN (such as the built-in WireGuard) or an equivalent. Because of the numerous ongoing messages showing attempts to log into your server from the internet, I find it is not practical to try and find any relating to your drives.
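      As an aside, you can gauge the scale of the problem yourself with a sketch like this (it assumes the typical Linux PAM `authentication failure ... rhost=` message format, and `syslog.txt` stands in for whichever syslog file you extract from the diagnostics zip):

      ```shell
      # Tally failed login attempts per remote IP address from a syslog file,
      # most prolific sources first.
      grep -oE 'authentication failure.*rhost=[0-9.]+' syslog.txt |
        grep -oE '[0-9]+(\.[0-9]+){3}' |
        sort | uniq -c | sort -rn | head
      ```

      A handful of addresses each with hundreds of attempts is the classic signature of an exposed server being probed.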
  16. I do not think it is related to the spindown (at least not directly). In the diagnostics posted I can see the parity disk being detected as too hot (at 57C), a pause being issued, and the drives subsequently starting to cool down. When the drives are spun down they are treated as cooled down, so at that point a resume should be issued. This is designed behaviour, as the plugin assumes that drives do not spin down during a normal parity check (maybe this assumption will need revisiting) unless the check has gotten beyond the drive size. The problem appears to be that the plugin has lost track of the fact that the pause happened due to drives overheating, so it does not issue a resume even after it thinks they have cooled down. I suspect there must be a bug somewhere for this to happen. Having said that, it may simply be due to the fact that you are currently outside the time slot allocated for running increments, so that the observed behaviour is actually correct. Is there any chance of repeating what you did, but this time with the plugin's Testing mode of logging active? That should allow me to pin down exactly why the plugin is not issuing a Resume.
  17. Pre-clear is never required (it is an optional step that some users like to do to act as a stress test on the drives), so something else is going on. You are likely to get better informed feedback if you post your system’s diagnostics zip file.
  18. The diagnostics show continual resets on disk1 and disk4 which explains the excessive time. I would carefully check cabling to these drives.
  19. That screenshot shows 2 drives missing or failed but only single parity so Unraid cannot do the parity swap.
  20. In reality 3 Gb/s is more than any HDD can transfer regardless of how it is connected, so it does not matter. It is only when you get into NVMe SSDs that a single device can transfer at this sort of speed.
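      The arithmetic behind that claim can be sketched as follows (the 8b/10b encoding overhead of 3 Gb/s SATA and the ~250 MB/s ballpark for a fast HDD's sustained rate are assumed figures, not measurements):

      ```shell
      # A 3 Gb/s link carries 10 bits on the wire per data byte (8b/10b
      # encoding), so usable bandwidth in MB/s = 3,000,000,000 / 10 / 1,000,000.
      echo $(( 3000000000 / 10 / 1000000 ))   # ~300 MB/s of usable bandwidth
      ```

      Roughly 300 MB/s of usable bandwidth comfortably exceeds what any spinning disk sustains, which is why a faster link does not speed up an HDD.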
  21. Sounds more sensible. The time is not related to the number of disks (unless your disk controller cannot handle the throughput), but to the size of the largest parity drive.
  22. The My Servers plugin is good for getting remote access to the Unraid GUI with minimum effort. If you want remote access to anything else (e.g. Docker containers, other devices on your home LAN) then WireGuard can support this securely.
  23. If you want to access the server remotely you need to use something like the built-in WireGuard VPN server to secure the link, or use the Remote Access feature of the My Servers plugin.
  24. Have you tried booting a fresh install on another USB drive to check it at least boots? If that works that would point to either a problem with your current flash drive or a setting somewhere in the ‘config’ folder on that drive. If even a fresh install will not boot that suggests some other hardware error.
  25. This would work. However, another approach might be to use Tools->New Config to set up the array with the drives you intend to keep, then start the array to build parity based on the new drives and format them. You could then mount the old drives as Unassigned Devices and copy their content to the new array.
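      The final copy step could look like this sketch (the mount point under /mnt/disks/ is where Unassigned Devices usually mounts drives, and both `olddisk` and the `restored` share name are just example names):

      ```shell
      # Copy everything from the old drive (mounted by Unassigned Devices)
      # into a share on the newly built array, preserving attributes.
      rsync -avh --progress /mnt/disks/olddisk/ /mnt/user/restored/
      ```

      Using rsync rather than a plain copy means an interrupted transfer can simply be re-run and will pick up where it left off.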