Everything posted by itimpi

  1. I think so. I believe that 6.12 will have a more sensible default value.
  2. Do you know what subnet the new location is using? If it is different from the old location then you might want to delete (or rename) the config/network.cfg file on the flash drive and reboot; the server will then come up using DHCP.
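A minimal sketch of that reset, assuming the flash drive is mounted at /boot (the standard Unraid mount point) and you have console access:

```shell
# Keep a backup copy rather than deleting outright, so the old static
# settings can be restored if needed.
mv /boot/config/network.cfg /boot/config/network.cfg.bak

# On the next boot the server will request an address via DHCP.
reboot
```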
  3. The licence is tied to the flash drive, not the server, so if you have the licence file for that flash drive it will remain valid on the new server.
  4. This normally means the flash drive has either previously been used for Unraid or it does not have a unique GUID.
  5. I think it is quite likely that the flash drive is only updated after all drives (including pools) have been unmounted successfully.
  6. No. Just make sure you have a sensible Minimum Free Space setting for the cache pool to protect it against overfilling.
  7. Good point - I am not sure. I think the cache drives staying busy may stop them being unmounted and cause issues. If that is happening I guess the problem then becomes why that is happening.
  8. If you can successfully stop the array before the timeout kicks in you should not get an unclean shutdown unless Unraid for some reason is unable to record the array stop on the flash drive.
  9. I assume that since you could do an ifconfig you were able to log in at the console? If that is so you can use the ‘diagnostics’ command and the results will be written to the ‘logs’ folder on the flash drive. Posting the resulting zip file gives us some idea of what is going on.
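If you can log in at the console, the capture is a single command (the zip filename embeds the server name and a timestamp, so the exact name will vary):

```shell
# Gather system diagnostics; the resulting zip is written to the
# logs folder on the flash drive.
diagnostics

# Confirm the zip is there before copying it off the server.
ls /boot/logs/
```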
  10. You should also simply try stopping the array to see how long that takes in case the timeout for that is not long enough.
  11. File system corruption can cause User Shares to misbehave, and btrfs file systems (as used on the cache pool) seem to be prone to corruption if the drive gets too full.
  12. According to the diagnostics the system was rebooted just after 7pm, and the log then contains: May 21 07:13:37 NAS emhttpd: unclean shutdown detected which is why a parity check was started. This suggests that the system did not successfully go into S3 sleep and was instead forcibly shut down. I suspect that the array is not stopping successfully when you thought you were doing a tidy shutdown, so the timeouts kick in and Unraid does a forced shutdown. Then when you restart it is an unclean shutdown and thus the parity check is initiated. You might find this section of the online documentation on troubleshooting unclean shutdowns to be relevant. As is mentioned there, setting up the syslog server might also help by capturing logs that survive the reboot.
  13. No - converting to a new file system requires reformatting the drive, which wipes any existing data.
  14. This will initiate steps 14 and 15 as documented in the online documentation for the parity swap procedure.
  15. Why would you think this? Step 2) assigns the new parity2 drive, and step 3) assigns the old parity2 drive as disk5. These are both before you start the array.
  16. You do steps 1) to 3) and then start the array for Unraid to do 4) and 5).
  17. You have got the actions that will happen in the wrong order! What happens is: first, the current contents of the 10 TB drive that was parity2 are copied to the new 18 TB parity2 drive; when that completes, the old 10 TB drive that is now in the disk5 slot is rebuilt with the contents of the emulated disk5 drive.
  18. I find it easy to spot the orange dot at the start of each entry with unread items in it, so the font does not really matter.
  19. You have not shown the top level where all the bz* type files live. If you download the zip file for the release from the Unraid site you can see exactly which files should be on the flash drive. The only thing that should vary is whether the EFI folder has a trailing ~ character; if it does not, then UEFI boot is activated. If you have trouble finding a USB 2 drive then a USB 3 one should work fine. There is no point in getting a large capacity one as Unraid needs about 4 GB. Ideally get one with a metallic casing, as USB 3 drives often run hot and good cooling helps with lifetime (heat is bad for electronics). If you can, it is a good idea to use a USB 2 port (many motherboards no longer have external USB 2 ports but DO have them available via an internal header).
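One way to compare the flash drive against a release, assuming the release zip has been extracted to a hypothetical ~/unraid-release folder (the path is an example only):

```shell
# List the boot files on the flash drive and in the extracted release;
# the bz* files (bzimage, bzroot, etc.) should match the release.
ls -l /boot/bz*
ls -l ~/unraid-release/bz*
```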
  20. The permissions on the appdata folder are determined by the Docker containers and not by Unraid itself, so it is not a good example to use for typical Unraid permissions. Changing these permissions can have adverse effects on the containers. The permissions containers set may also preclude the files being editable at the SMB level. If you have the Dynamix File Manager plugin installed then it can edit text files anywhere within the Unraid file system, including within the appdata share.
  21. It has been known for some time that going through the fuse system used to support User Shares imposes an overhead that can limit maximum performance. The 6.12 release has an optimisation for what are currently known as "Cache Only" User Shares that bypasses the fuse system if a share only exists on a pool, thus achieving the equivalent of the above without the user having to actually change their settings.
  22. This could simply mean that the card was not properly seated in the expansion slot rather than the card being faulty.
  23. Those commands are wrong and will result in the error of the superblock not being found. You need to add the partition number on the end when using the ‘sd’ devices (e.g. /dev/sde1), and using the ‘sd’ devices will also invalidate parity. If doing it from the command line it is better to use the /dev/md? type devices (where ? is the disk slot number), as that both maintains parity and means the partition is automatically selected. Better still, run the command via the GUI by clicking on the drive on the Main tab and running it from there, as that automatically uses the correct device name and maintains parity.
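As a sketch, assuming the affected XFS drive is in slot 3 and the array is started in Maintenance mode (the slot number is purely illustrative):

```shell
# Wrong: a raw 'sd' device without the partition number makes xfs_repair
# report that no superblock was found, and writing to an 'sd' device
# would invalidate parity anyway.
# xfs_repair -v /dev/sde

# Better from the command line: the md device for the slot, which selects
# the partition automatically and keeps parity in sync.
xfs_repair -v /dev/md3
```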
  24. You need to stop running in Maintenance mode and restart the array in Normal mode, and the disk should now mount fine.
  25. You need to add -L to the options and remove the -n option.
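For example, if a check run with -n reported that the log needs to be replayed, the repair run (again using an md device, with the slot number purely illustrative) might be:

```shell
# Drop -n (no-modify mode, which only reports problems) and add -L to
# zero the metadata log so the repair can proceed.  Note that -L
# discards any uncommitted metadata changes.
xfs_repair -L /dev/md3
```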