itimpi (Moderator, 20,703 posts, 56 days won)
Everything posted by itimpi

  1. Much simpler (and faster) would be to follow the procedure for Reformatting the drive, although you may then want to run a parity check in case at some point you have managed to get parity out of sync with your current drives.
  2. This is an inherent quirk of the underlying Linux system: if it thinks the source and target are on the same mount point it implements a move by first attempting a simple rename (for speed), and only if that fails does a copy/delete get run. Since this is fundamental Linux behaviour I am not sure that there is anything Unraid can do about it.

     Mover is specifically written to leave files for a share with Use Cache = No alone even if they exist on the cache, as there is a use case for that behaviour. A possible solution might be to introduce yet another option for the Use Cache setting, but users already have problems with the number of options already allowed for.

     The ‘workaround’ is either to use a copy/delete operation yourself or to give the target folder a Use Cache = Yes setting so that mover will later transfer the file to the array.

     You often see this behaviour when using a docker container to automate downloads. If you set the drive mappings for the container to have different internal mount points then the version of Linux inside the container will implement its own ‘move’ as a copy/delete, so the issue does not arise at the Unraid level. However it is still often more convenient to simply set the target to a location with Use Cache = Yes.
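The rename-versus-copy distinction can be sketched with plain `mv`. This is only an illustration: the temp directories stand in for Unraid's /mnt paths, which are not assumed to exist here, and the /mnt/cache and /mnt/disk1 paths in the comments are hypothetical examples.

```shell
# Both temp dirs are (almost certainly) on the same file system, so
# mv completes as an instant rename(2) - no data is copied.
src=$(mktemp -d); dst=$(mktemp -d)
echo hello > "$src/file"
mv "$src/file" "$dst/file"        # same mount point: rename, not copy
cat "$dst/file"                   # prints: hello

# Across different file systems rename(2) fails with EXDEV and mv
# silently falls back to copy+delete. To force copy+delete yourself
# (the workaround mentioned above), something like:
#   cp /mnt/cache/share/file /mnt/disk1/share/file && rm /mnt/cache/share/file
rm -rf "$src" "$dst"              # clean up
```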
  3. That should have made no difference, as lines starting with # are comments. Still, if it worked for you ....
  4. Actually it is nothing to do with fragmentation. In Unraid each array disk is a discrete file system, and a file must fit onto a single disk as it is never split across disks. Once Unraid selects a disk for a new file it will not change its mind, and it will give an ‘out-of-space’ error if the file does not fit on that drive. By setting the Minimum Free Space to be larger than the largest file you intend to write you ensure that you do not get this out-of-space error.
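A minimal sketch of the rule being described (the sizes are made-up figures, and Unraid's real allocator also weighs the allocation method and split levels, which this ignores):

```shell
# Hypothetical figures, in GB
min_free=60          # Minimum Free Space: set larger than your biggest file

for free in 55 900; do            # free space on two candidate disks
  if [ "$free" -ge "$min_free" ]; then
    echo "eligible disk with ${free}GB free"
  else
    echo "skipped: only ${free}GB free (below ${min_free}GB minimum)"
  fi
done
```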
  5. The default port is 80, but with v5 you could change it to something else by providing a parameter when starting up emhttp in the config/go file on the flash drive.
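Something along these lines in the go file; the -p flag is from memory, so verify it against the documentation for your release before relying on it:

```shell
# /boot/config/go on the flash drive (v5-era example):
# start the web GUI on port 8080 instead of the default 80
/usr/local/sbin/emhttp -p 8080 &
```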
  6. Even with v5 the web GUI is built in even though it is not as feature rich as the v6 GUI. If for some reason you cannot get to the GUI at the moment a reboot should fix that as long as your flash drive is not corrupt. Unraid v6 is 64-bit only so you need a 64-bit capable processor. No idea off-hand if your CPU is 64-bit but that should be easy enough to check.
  7. You can also download a UEFI-compatible version from the memtest86 web site.
  8. If you are using an Nvidia GPU with the Nvidia Unraid build then it does appear on the Dashboard if you have the GPU Statistics plugin installed. I do not know if the plugin can handle any other type of GPU.
  9. That suggests some sort of problem with the flash drive dropping offline or not being recognised during the boot process. You should provide your system diagnostics zip file (obtained via Tools >> Diagnostics) taken when the problem has occurred to give us a chance of doing more than guessing at the reason.
  10. You might want to check what value you have set for Minimum Free Space for the shares in question. It is not obvious, but the larger of the global share setting and the individual share setting is used to decide if the file should go to the cache.

      It is also a good idea to use the suffixes (e.g. GB) rather than entering just a numeric value, especially as a bare value is not an absolute number of bytes but a number of KB, so it is easy to get the number of zeroes wrong.

      There has been a feature request that the User Share value for Minimum Free Space should not apply to the cache, and that the cache should only use the value set under Settings >> Global Share Settings, but I have no idea if that is going to happen.
  11. The CRC error counts are stored internally on the drive and never reset. You basically want them to stop increasing.
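You can read the current count with smartctl (the device name /dev/sdX below is a placeholder for your actual drive). The runnable part of this sketch just parses a sample attribute line of the kind that command prints:

```shell
# Real command (needs a physical drive):
#   smartctl -a /dev/sdX | grep -i crc
# Sample attribute line; the final column is the lifetime raw count:
line="199 UDMA_CRC_Error_Count 0x003e 200 200 000 Old_age Always - 3"
echo "$line" | awk '{print $NF}'    # prints: 3
```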
  12. user includes files/folders for a User Share whether they are on the array drives or the cache, whereas user0 only includes files/folders for a User Share that are on the main array drives (omitting any that are on the cache). Limetech have announced that user0 is deprecated and may be removed in a future Unraid release.
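In path terms (illustrative share name; these paths only exist on an Unraid system, so the commands are shown commented out):

```shell
# ls /mnt/user/Media     # files for the share on the array disks AND the cache
# ls /mnt/user0/Media    # only the files already on the array disks
# ls /mnt/cache/Media    # only the files still on the cache
```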
  13. I have had no trouble using USB Creator with Sandisk Cruzer Fit on my Windows 10 system.
  14. I do not know of any standard Linux capability for handling an undelete.
  15. Edit the title on the first post to add (SOLVED).
  16. Repeating the copy will not create duplicates. You mention that your first attempt at a rebuild failed (I assume that is what you meant by reactivate?) so why do you expect a second one to work?
  17. Unraid never moves files automatically between drives. In your scenario writes will start failing with ‘out-of-space’ errors.
  18. No. As was said, parity has no idea of files or file systems, just the bits that should be on the drive. This is one of the reasons that parity does not care if you have a mix of file system types and encrypted/unencrypted drives in the array - they are all just bit patterns as far as parity is concerned. Parity treats formatting just like it does writing files: it updates sectors on the drive to have new contents (the control structures for the empty file system), and parity is updated to match these new bit patterns.
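A toy sketch of the single-parity idea (made-up bit patterns, nothing Unraid-specific): parity is just the XOR of the corresponding bits on the data drives, so a format is handled exactly like any other write.

```shell
# One 'sector' per data drive, as small integers
d1=10   # binary 1010
d2=6    # binary 0110
d3=1    # binary 0001
echo $(( d1 ^ d2 ^ d3 ))    # prints 13 (binary 1101)

# 'Formatting' drive 3 is just another write: its bits change and
# parity is recomputed to match the new pattern.
d3=15   # binary 1111
echo $(( d1 ^ d2 ^ d3 ))    # prints 3 (binary 0011)
```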
  19. That means that the USB stick does not have a unique GUID, and it needs to have a unique GUID to meet Unraid's licencing requirements.
  20. In the worst case there is a Config file in the Nerdpack plugin’s folder on the flash drive that you could edit to say whether that particular package should be installed.
  21. That looks good as no significant corruption is being reported. You now want to run with the -n option removed so that a repair is done rather than a simple check, and if asked for (which is expected) add the -L option. Despite the scary-sounding warning message this virtually never causes any data loss, and if it does it just relates to the file being written at the point of failure. After the repair run completes, stop the array and restart in normal mode; the drive should now mount and show all your data intact.

      BTW: do you have backups of any critical data? You should never rely on the Unraid protection capabilities against disk failure as your only backup strategy, as you can lose data in many more ways than hardware failure.
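Assuming the file system is XFS (the -n/-L options suggest xfs_repair) and the affected drive is disk1, the sequence looks like this; adjust the device number to match your disk, and note that the exact device naming can vary between Unraid releases:

```shell
# Run from the command line with the array started in Maintenance mode.
# xfs_repair -n /dev/md1   # check only (the run you have just done)
# xfs_repair /dev/md1      # actual repair
# xfs_repair -L /dev/md1   # only if the repair asks you to zero the log
```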
  22. Did you also try rebooting? I suspect that would cure the problem as the package would not be re-installed after the boot.
  23. Put the array into Maintenance mode and run a file system check/repair on the drive. It is worth pointing out that if a drive was showing as being emulated and ‘unmountable’ before starting the rebuild then it will also show that state at the end. It is quite likely that at the point the disk failed some minor file system damage occurred and parity is reflecting this. The rebuild process is not aware of the data on the drive being rebuilt - it just works at the raw disk level, making the physical drive match the emulated drive.
  24. What brand of NVMe drive? Some of them are much slower than you might think. You should attach your system diagnostics zip file (obtained via Tools >> Diagnostics) to your next post if you want any sort of informed feedback.
  25. 1) and 2) Most people go for a fixed IP address and set it up at the router end so that the Unraid network settings can be left alone. However you need to go with whatever is best for your particular network. 3) It makes it easier for new users to get going, I assume.