remotevisitor

Everything posted by remotevisitor

  1. When a GUID is blacklisted, it is added to the blacklist built into every subsequent release of unRaid, so it will no longer work on any future release of unRaid. So yes, it would still work in an old machine (until you find you need to upgrade to a newer version of unRaid for some reason).
  2. You want the "parity swap" procedure ... https://lime-technology.com/wiki/The_parity_swap_procedure This procedure copies your existing parity to the new (larger) disk. When that is done, it rebuilds the failed data drive onto the disk that was the old parity disk. Make sure you understand what this process involves, as you must perform all the necessary steps .... ask for help from the experts here if you are unsure or need further clarification on the steps involved.
  3. I think the ‘homes’ share name is something specific to the samba implementation .... https://forums.linuxmint.com/viewtopic.php?t=77063
  4. Users on unRaid do not have a home directory ..... in your case there is one share called ‘homes’ which is ‘/mnt/user/homes’. Users are used to control access to shares and/or files within a share depending on the permissions set on the share, directories and files.
  5. When you perform your next test, if/when you get the "Unmountable. No file system" message and are then prompted with the Format button, look and see if there is a warning icon next to the Format button and click on it. You should get a warning that formatting a disk will lose all the data on it, and is not something you should do when attempting to recover a disk .... this is all from memory, as it is some time since I last formatted a disk in my unRaid server. If things have changed since then, hopefully someone more knowledgeable about the current state will correct anything I have stated wrong.
  6. That command line option hasn’t been supported for some time. Instead, go to Settings -> Identification -> Management Access and change it there.
  7. The ‘Dynamix File Integrity’ plug-in.
  8. Strange. My help describes them as the very first thing:
  9. Have you tried turning on the help (button near the top right of the page) when on that SMB settings page? The settings have brief descriptions in the help.
  10. The -fprintf option prints each directory to the file specified with /boot/backup/dirlist_$(date +"%Y%m%d".txt), so piping the standard output into sort will not do anything. The "%h/%f\r\n" is the 2nd parameter to -fprintf and controls what is printed: %h is the containing directory of the item being output, %f is the final filename (or directory name, if printing directories), \r is the <carriage-return> character, and \n is the <new-line> character. The default output of find is the -print action, which effectively does the 1st, 2nd and 4th of those ("%h/%f\n"). I assume the output file is intended to be read on Windows, where text lines are expected to end with <carriage-return><new-line> characters, while on Linux they end with just <new-line>; hence the need to include the <carriage-return> character in the output. I think the form of command you are looking for is: find /mnt/disk* -type d -printf "%h/%f\r\n" | sort -o /boot/backup/dirlist_$(date +"%Y%m%d".txt)
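     Spelled out side by side (with a simplified output filename standing in for the dated one):

       # broken: -fprintf writes straight to the named file, so nothing reaches the pipe for sort
       find /mnt/disk* -type d -fprintf /boot/backup/dirlist.txt "%h/%f\r\n" | sort

       # working: -printf writes to standard output, and sort -o writes the sorted result to the file
       find /mnt/disk* -type d -printf "%h/%f\r\n" | sort -o /boot/backup/dirlist.txt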
  11. You are lucky .... I have a mixture of 4TB, 6TB and 8TB disks (admittedly still on Supermicro SASLP-MV8 cards) which currently take around 27 hours for a parity check. I know my brother, who has a similar setup, has improved his times a bit by moving to an LSI card. I am just in the process of upgrading a 4TB data disk to my first 10TB data disk (+ 10TB parity), so I expect my times to increase a bit more; which might finally make me decide to make the move to an LSI card as well. I have previously had issues with my 6TB disks and the SASLP-MV8 cards, with the disks dropping offline, which I found I could work around by setting the 6TB disks to not spin down; so a move to an LSI card should remove the need to keep them spinning. This matches an observation some time ago, by Squid (if I remember correctly), that some of the Marvell controller issues appear to be related to specific disk firmware.
  12. One issue with these growing disk sizes is that the parity check time is moving into the 2-day timescale. I keep hoping that the ability to break the check into partial runs, so that it could be performed overnight over multiple days, would become available.
  13. How big is the file you are trying to copy to the share? Is it bigger than 512MB?
  14. There is one case where this will not show the problem ..... if a program has a log file open and continues to write to it, but another program (or user) has deleted the file, then the directory entry for the file is removed (so it doesn’t show up in the ls or du output) but the file continues to exist (and possibly grow) until the program closes it or the program is terminated.
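     If lsof is available on your system, it can list these deleted-but-still-open files directly; a quick sketch:

       # list open files whose on-disk link count is below 1, i.e. deleted but still held open
       lsof +L1
       # the SIZE/OFF and NAME columns show how much space each (deleted) file is still consuming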
  15. I am away from my Unraid system so I cannot check, but I think maybe the solution to your problem is relate to the slave option, see
  16. I think the problem is related to the way docker handles volumes. From the docker documentation (https://docs.docker.com/storage/volumes/) it states: "Volumes use rprivate bind propagation, and bind propagation is not configurable for volumes." And rprivate bind propagation is defined as: "The default. The same as private, meaning that no mount points anywhere within the original or replica mount points propagate in either direction."
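     By contrast, if the path is mapped as a bind mount rather than a volume, the propagation can be chosen; a minimal sketch (image name and paths are just placeholders):

       # bind mounts, unlike volumes, accept a bind-propagation setting
       docker run --mount type=bind,source=/mnt/user/media,target=/media,bind-propagation=rslave some-image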
  17. The parity disk must be as large as (or larger than) the largest data disk. It doesn’t matter how many data drives you have (other than Unraid license limitations), just what size the largest data disk is.
  18. I guess one question is: when a shutdown condition of 20% is set, is the monitoring that initiates the shutdown performed by the UPS itself or by the software on Unraid? The original poster also hasn’t mentioned what make/model of UPS is being used, which might be relevant if the problem is actually the UPS firmware not correctly supporting the % charge left. Suggestion .... do you have another system on which you can install the UPS manufacturer's official software? Plug the UPS into that, set a % condition and see if it works as expected ... this would help to eliminate the UPS itself as the problem.
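     For what it's worth, assuming the UPS support is the usual apcupsd-based one, the 20% condition is monitored by the daemon on the Unraid side, not by the UPS itself; the relevant apcupsd.conf directives look roughly like this (values are examples):

       BATTERYLEVEL 20   # shut down when the reported charge drops to 20%
       MINUTES 5         # ... or when the estimated runtime drops to 5 minutes
       TIMEOUT 0         # 0 disables the fixed seconds-on-battery trigger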
  19. The fact that there is a limit with what appears to be such an arbitrary number suggests to me that it is possibly an implementation limit: a bit mask detailing which drives are used by a share may have to fit into a 32-bit value in an existing field in a file system data structure, hence the size of the field cannot be changed, with the extra 4 bits used for something else like an error value. This is pure speculation on my part about a possible reason for the limit.
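     To illustrate the speculation: 28 drive bits would leave exactly 4 spare bits in a 32-bit field:

       printf '0x%08X\n' $(( (1 << 28) - 1 ))   # prints 0x0FFFFFFF: 28 drive bits set, top 4 bits free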
  20. Your current suggestion also requires the replacement disk to be the same size as the disk it is replacing. If the replacement disk is larger, you would also have to arrange for the additional space to be zeroed, otherwise parity would be invalidated.
  21. dd if=/dev/zero of=/dev/sdX bs=1G where X is the drive letter of the drive to clear. This of course assumes you have removed it from the array.
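     With a reasonably recent GNU dd you can also watch the clear run; the same command with progress reporting:

       # triple-check the X: dd will silently overwrite whatever disk it is pointed at
       dd if=/dev/zero of=/dev/sdX bs=1G status=progress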
  22. I wonder if it is related to the Spectre/Meltdown mitigation changes in Windows. I remember a link to a short presentation was posted recently on one of the unRaid forum threads about Spectre, explaining why the mitigations can show up as an increase in CPU usage even though they are not actually an increase in CPU usage, but rather stalls accessing memory due to the more aggressive flushing of the memory caches. Update: this is the item I mean:
  23. Probably because the description is used in the standard Linux /etc/passwd file, which uses a colon as its field separator.
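     A typical entry shows why; the fields are colon-separated, and the description lands in the 5th (GECOS) field (description value invented for illustration):

       root:x:0:0:System Administrator:/root:/bin/bash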
  24. Ah. The dangers of scripting when you are not at a machine where you can check it out. Add the sort command to the command lines doing the find, e.g. find . -type f -print | sort >list1 This will ensure the contents of the files being compared by the comm command are sorted.
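     Putting it together (the disk paths are just examples):

       find /mnt/disk1 -type f -print | sort > list1
       find /mnt/disk2 -type f -print | sort > list2
       comm -3 list1 list2   # show only the lines unique to one list or the other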