itimpi

Moderators
  • Posts: 20,186
  • Days Won: 55
Everything posted by itimpi

  1. You should provide your system’s diagnostics zip file so we can see what is going on (there is a sketch of one way to generate it after this list).
  2. You must have changed the default file system for array disks under Settings->Disk Settings, as normally it is set to XFS.
  3. That means a write to it failed (as described here in the online documentation), not that the disk itself is necessarily faulty; in fact, looking at the diagnostics, it appears it is probably fine. You would have been fine if you had simply assigned the new disk as disk5 in place of the 'failed' disk5, but you made a mistake when you added it as disk6, as Unraid now thinks it should always have a disk6. I notice that you also have a disabled parity2 disk which, from its SMART information, also looks OK. Do you know whether disk5 was disabled when you did this? This is relevant because, if it was, you were only formatting the emulated disk, so the contents of the physical disk5 may still be intact. If you formatted the physical disk then its contents are probably lost unless you run specialist disk recovery software such as UFS Explorer on Windows. Do you have backups? I am not sure of the best way forward at this point - hopefully someone like @JorgeB will have an idea.
  4. It should just be a case of stopping the array and restarting in Normal mode. At that point everything should be OK, but if not, ask for advice.
  5. If you carried out the earlier step to check that the contents of the emulated disk looked OK, then I would expect the final result to be fine. The advantage of doing this in Maintenance mode is that nothing else can write to the array while the rebuild is running, which maximises speed. The disadvantage is that you cannot use the array in the meantime and, until you return to Normal mode, cannot see what the contents of the disk being rebuilt will look like. I will edit the instructions to make this clearer.
  6. As was mentioned, there are millions of RAM-related errors in the syslog. I would not be happy until the cause of those is tracked down and eliminated.
  7. Handling of disabled disks is covered here in the online documentation accessible via the ‘Manual’ link at the bottom of the GUI.
  8. Have you checked Settings->Unassigned Devices for the SMB Security Settings? As a result of a recent change in UD you may find that it is now set to No (i.e. not visible on the network).
  9. I do not think that the 6.10.3 release GUI should be any slower than the 6.10.2 one. However, you do seem to be getting a lot of messages along the following lines in the syslog, which is not normal:
       Jun 22 15:44:17 SERVER nginx: 2022/06/22 15:44:17 [error] 7227#7227: *10219 limiting requests, excess: 20.134 by zone "authlimit", client: 172.17.0.6, server: , request: "GET /login HTTP/1.1", host: "192.168.7.11"
       Jun 22 15:44:17 SERVER nginx: 2022/06/22 15:44:17 [error] 7227#7227: *10220 limiting requests, excess: 20.134 by zone "authlimit", client: 172.17.0.6, server: , request: "GET /login HTTP/1.1", host: "192.168.7.11"
       Jun 22 15:44:17 SERVER nginx: 2022/06/22 15:44:17 [error] 7227#7227: *10221 limiting requests, excess: 20.134 by zone "authlimit", client: 172.17.0.6, server: , request: "GET /login HTTP/1.1", host: "192.168.7.11"
       Jun 22 15:44:18 SERVER nginx: 2022/06/22 15:44:18 [error] 7227#7227: *10230 limiting requests, excess: 20.091 by zone "authlimit", client: 172.17.0.6, server: , request: "POST /login HTTP/1.1", host: "192.168.7.11"
     It might be worth trying the upgrade again, but first ensure you close all browser windows/tabs you currently have open on the GUI.
  10. If you are using an Unraid 6.10.x release then the easiest thing to do is to install the Dynamix File Manager and use that instead of Krusader, as it can be run from the Unraid GUI. All Unraid releases also include Midnight Commander (the ‘mc’ command), which provides a simple file manager you can run from the Unraid command line (there is a small usage sketch after this list).
  11. Yes! It is covered here in the online documentation accessible via the ‘Manual’ link at the bottom of the GUI.
  12. I cannot see any attempt to format disk6 in those diagnostics. All I can see is it being partitioned.
  13. The diagnostics showed you successfully formatting disk5. If this is a different disk then post new diagnostics.
  14. You are likely to get better informed feedback if you post your system’s diagnostics zip file.
  15. Mover does a lot of checks, so it is never going to be as fast as moving the files with a traditional file manager. The Unraid 6.10.x series now has the option to have the Dynamix File Manager installed so that you can manipulate files from the GUI, and this is what I would recommend for moving files between pools as it would be much faster than using mover (even if mover supported pool->pool moves). A command-line alternative is sketched after this list.
  16. I thought from your statement that it had not yet completed. As was mentioned, an automatic parity check after a reboot suggests that you did not have a clean shutdown of the array. The plugin would have informed you if it thought that an unclean shutdown had been detected and that this was triggering an automatic check. You might want to read this section of the online documentation accessible via the ‘Manual’ link at the bottom of the GUI.
  17. Have you made sure that you have a sensible Minimum Free Space setting for the cache to avoid it getting completely filled, which can cause problems?
  18. This is standard Unraid behaviour. If you want the ability to resume parity checks from the point they had reached, then install the Parity Check Tuning plugin and enable this option in its settings.
  19. Unassigned Devices shares never show up under User Share settings. You might want to check the SMB security setting under Settings->Unassigned Devices; a recent change means they may now default to not being visible on the network.
  20. Glad you solved it. A bit annoyed I did not spot that myself when checking the share settings in the diagnostics (sometimes one misses what should be obvious).
  21. It looks like disk1 dropped offline for some reason. The syslog also shows errors occurring on disk2. In both cases it looks like it could be cabling related. I would suggest powering down the server, carefully checking the power and SATA cabling to the drives, rebooting the server, and then getting and posting new diagnostics (a quick SMART check from the command line is sketched after this list). BTW: the diagnostics also include SMART reports as standard, so there is normally no reason to provide them separately.
  22. The web GUI is always accessed via eth0, so as long as that only has one IP address associated with it you should be OK.
  23. Yes. In Unraid each array disk is a free-standing file system independent of any other drives in the array.
  24. There is nothing relating to parity on the data drives, so removing the parity drive will have no effect on them. It is much more likely that the difference is due to the version of Unraid used to format the drives: newer Unraid releases use a later XFS version which has a larger overhead when formatting a drive (a way to compare the settings is sketched after this list).
  25. If you have set a share as Secure (as you have for the share anonymised as B———e) then you MUST have an Unraid user set up with rights to that share. You must then use the credentials set up for that Unraid user to access the share. The alternative is to have the security on the Unraid share set to Public so that any user can access the share (using ‘guest’ mode access). There is a sketch of checking access from another machine after this list.
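
For post 1: a minimal sketch of generating the diagnostics zip from the command line, assuming console or SSH access. The GUI equivalent is Tools -> Diagnostics, and the exact file name and location may differ between releases.

    # Generate the diagnostics zip from the Unraid console or an SSH session.
    diagnostics
    # The zip is normally written to the logs folder on the flash drive,
    # e.g. /boot/logs/tower-diagnostics-YYYYMMDD-HHMM.zip
    ls -lh /boot/logs/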
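
For post 10: a small Midnight Commander sketch; the share paths are made-up examples, not anything taken from the poster’s system.

    # Launch Midnight Commander at the Unraid console or over SSH.
    # Giving it two paths pre-loads the left and right panels, which makes
    # copying or moving files between disks and shares straightforward.
    mc /mnt/user/Media /mnt/disk1/Media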
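
For post 15: a hedged command-line alternative for a pool-to-pool move using rsync. The pool and share names are placeholders, and anything writing to these files (containers, VMs) should be stopped first.

    # Copy everything, preserving attributes, then delete the source copies.
    rsync -avh --progress --remove-source-files /mnt/cache/appdata/ /mnt/nvme/appdata/
    # Clean up any now-empty directories left behind on the source pool.
    find /mnt/cache/appdata/ -type d -empty -delete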
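
For post 21: a quick way to sanity-check a drive’s SMART data from the command line after reseating the cables; /dev/sdX is a placeholder for the device in question.

    # Overall health verdict only:
    smartctl -H /dev/sdX
    # Full SMART report (essentially what the diagnostics zip already includes):
    smartctl -a /dev/sdX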
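
For post 24: one way to compare the XFS parameters of two array disks formatted under different Unraid releases; the disk numbers are examples.

    # Show the XFS geometry and feature flags for each mounted array disk.
    xfs_info /mnt/disk1
    xfs_info /mnt/disk2
    # Differences such as crc=1, reflink=1 or a larger internal log on the
    # newer format can account for slightly more space being used when empty.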
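
For post 25: a hedged sketch of verifying access to a Secure share from a Linux client with smbclient; the server name, share name, and user are placeholders.

    # List the shares the server exports (guest browsing, no credentials):
    smbclient -N -L //tower
    # Connect to a Secure share using the credentials of the Unraid user:
    smbclient //tower/Documents -U someuser
    # A Public share can instead be reached as guest:
    smbclient -N //tower/Public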