JoeUnraidUser

Everything posted by JoeUnraidUser

  1. Is there a documented process of converting the main cache from BTRFS to a ZFS cache? Thanks for any help.
  2. The script trims the Docker logs in the "/var/lib/docker/containers" directory. The script doesn't trim any log files in the "/var/log" directory. You could easily adapt the script to trim the logs in the "/var/log" directory. I am not sure why you are seeing different usage numbers.
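      If you did want it to cover "/var/log" as well, a minimal sketch of the change (assuming the trimLog function and $size variable defined in the script; note that many files under /var/log, such as syslog, are not named "*.log", so the -name filter may need adjusting) would be:

          find "/var/log" -name "*.log" -size +$size 2>/dev/null | sort -f |\
              while read file; do trimLog "$file"; done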
  3. ".ISdr1c" is a dot file, so the script would not see it. Try substituting the following line, with the "." in front of the "*":

          for dir in "$source"/.*/
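      If you want to cover both regular and hidden directories in one pass, a rough sketch (using the same "$source" variable and skipping the "." and ".." entries that the ".*" glob also matches) might be:

          shopt -s nullglob   # unmatched globs expand to nothing instead of themselves
          for dir in "$source"/*/ "$source"/.*/
          do
              case "$(basename "$dir")" in
                  .|..) continue ;;
              esac
              echo "Processing: $dir"
          done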
  4. Glad you got it fixed. I know some other people had problems with PUID and PGID being set to 0, which is "root". A PUID of 99 is "nobody" and a PGID of 100 is "users", which is the correct way to do it. I also like to set UMASK to 000. If you have any permission problems in the future and need to run the commands to fix them, you don't need to stop your dockers. Just let the commands run and use your dockers as normal.
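      For reference, a hypothetical example of how those variables look on a linuxserver.io-style container started from the command line (the container name, image, and paths are placeholders; on Unraid you would normally set these in the Docker template instead):

          docker run -d \
              --name=mycontainer \
              -e PUID=99 \
              -e PGID=100 \
              -e UMASK=000 \
              -v /mnt/user/appdata/mycontainer:/config \
              lscr.io/linuxserver/someimage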
  5. Is there going to be an automated way to convert the encrypted XFS drive to an encrypted ZFS drive or will we have to move everything off that drive and then move it back again?
  6. To fix the file ownership and permissions for my Dockers I do the following:

          chown -cR nobody:users /mnt/user/appdata
          chmod -cR ug+rwX,o-rwx /mnt/user/appdata

      I set the following settings in each of my dockers to fix ownership and permission problems:

          PUID = 99
          PGID = 100
          UMASK = 000

      PUID of 99 equates to "nobody", PGID of 100 equates to "users", and UMASK of 000 allows for full access. To fix the ownership and permissions of my array I do the following:

          chown -cR nobody:users /mnt/disk[0-9]*
          chmod -cR ug+rwX,o-rwx /mnt/disk[0-9]*

      To fix the file ownership and permissions on my entire server I do the following:

          chown -cR nobody:users /mnt/user/appdata /mnt/disk[0-9]*
          chmod -cR ug+rwX,o-rwx /mnt/user/appdata /mnt/disk[0-9]*

      You do not have to stop your Dockers while fixing the ownership and permissions of the array or the entire server.
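      As an illustration of what "ug+rwX,o-rwx" does (hypothetical paths; the capital X adds execute to directories, and to files that already have an execute bit, but not to plain files):

          mkdir -p /tmp/permdemo/dir
          touch /tmp/permdemo/file
          chmod -R ug+rwX,o-rwx /tmp/permdemo
          ls -ld /tmp/permdemo/dir    # drwxrwx--- : directories get rwx for user/group
          ls -l  /tmp/permdemo/file   # -rw-rw---- : files get rw but no execute bit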
  7. I'm not sure why the move didn't work. I used the plugin back when it first came out, and I know I did a move and the system didn't crash; I sat there and waited for the move to happen. Maybe something happened to the process and it died in the background during the move. Some files were moved to the new drive, some files were duplicates, and some files were not moved. I just tried the plugin again and verified that it did move the files correctly. So, whatever problem I had back then did not occur again.
  8. It could be a permission setting in your docker configuration. If there is a setting for UMASK, set it to 007. The UMASK would only affect the files created by the docker application. If you created the folders from Windows, I am not sure why the permissions were set that way. Make sure in the share settings that SMB User Access is set to Read/Write. I have not used Krusader before; however, I was looking at the documentation and it does have the ability to set permissions of files and folders that you create, so maybe you accidentally created them with those permissions.
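      For reference, a quick illustration (hypothetical paths) of what a UMASK of 007 does to newly created files and directories:

          ( umask 007
            mkdir /tmp/umaskdemo            # drwxrwx--- (770)
            touch /tmp/umaskdemo/file       # -rw-rw---- (660)
            ls -ld /tmp/umaskdemo /tmp/umaskdemo/file )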
  9. I have actually had the problem of Dynamix File Manager producing duplicate files on disks in the past. So, I stopped trusting it a while ago. Does it work correctly now?
  10. I am assuming your share is "/mnt/user/Media". If it is not, substitute the name of your share in the command. Run the following from a terminal to fix your file ownership and permissions:

          chown -cR nobody:users /mnt/user/Media
          chmod -cR ug+rw,ug+X,o-rwx /mnt/user/Media
  11. I'm not sure about the renaming of files to "[conflicted]"; however, never use the "mv" command unless you are really sure what you are doing, as it just causes problems. In the future, use the program "mc" from the command line to move files between disks; it does not cause these problems. You should check it out. Just type "mc" from the command line and it will bring up a very intuitive interface to copy, move, or delete files. Hit tab to go from side to side. You can also right-click on files or directories individually to select them instead of moving, copying, or deleting all of them. If you do have problems with duplicate files on disks in the future, use the following script to check for them; however, I don't think that is your problem at this time since the files do not have the exact same names.

      UNRAIDFINDDUPLICATES.SH

      At this point I would do the move again with "mc", and after that is completed, I would get rid of the "[conflicted]" files or save them off to another directory until you feel comfortable that all your files have been moved.
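      For a quick manual check, a rough sketch along the same lines (the attached script may work differently) is to list relative paths that exist on more than one array disk:

          cd /mnt
          find disk[0-9]* -type f 2>/dev/null | sed 's|^disk[0-9]*/||' | sort | uniq -d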
  12. Is there any advantage to converting the single array drives from XFS to ZFS? If we do convert them to ZFS, would they be faster, slower, or the same speed?
  13. I can't remember exactly, but I must have done a New Config to remove the bad drive from that disk assignment and assign the new disk to it, and I do remember doing a parity check after adding the disk.
  14. I just replaced the bad drive with the new one and there was no need to rebuild since I had moved all the data off the emulated drive. I just ran a parity check and moved the data back over from the other drives.
  15. The problem is that I tried the method of rebuilding drives from parity twice over the years, and both times, after hours of waiting, it got all the way near the end and failed. The only luck I have had with parity is that when hard drives have failed, I was still able to copy the emulated data off to other drives.
  16. I would like to rearrange some of my disk assignments. If I were to do a New Config and assign disks to different disk numbers, will it leave the data intact on those disks or will it clear them when I assign them?
  17. I guess I could leave the 6TB as parity, dump the data of 1 of the 3TB drives to a USB drive, replace that 3TB drive with a 14TB drive, and run preclear on it. Then dump the data from the other 3 3TB drives to the 14TB, replace those drives with the remaining 3 14TB drives, and run preclear on them. Then as a final step, unassign the 6TB drive from parity and assign 1 of the new 14TB drives to parity; that way I will have parity the whole time. Do you know if it will try to do a parity check each time I remove and add hard drives?
  18. It already took me about a week and a half to do the SMART tests on the 4 drives one at a time. But from what you are saying, I guess it is worth it to suck it up and take a couple more days to finish it off properly. Thanks for your advice. It just seems like it has been forever since I got the drives and I haven't even been able to use them yet. The next step after that is going to be migrating all the terabytes of data, which is going to take forever. It's going to be a juggling act since I'm already maxed out in my case with 12 hard drives, so I am going to have to remove and add drives back and forth to migrate the data. I was hoping to just add each drive one at a time, but then I would have to do the preclear on each drive one at a time.
  19. Do you have to do a preclear on a brand new hard drive that you add to the server? I just bought some new drives and did the extended SMART test, and they passed. It took almost 3 days to run the test on each drive; I would hate to have to wait for a preclear on each drive as well.
  20. Script to trim white space from the end of each line of text files.

          #!/bin/bash
          # Trim white space from the end of each line of text files.

          if [ $# -eq 0 ] || [ "$1" == "--help" ]
          then
              printf "Usage: trimWhite <files>...\n"
              exit 0
          fi

          for file in "${@}"
          do
              printf "$file\n"
              perl -pi -e 's/\s+$/\n/' "$file"
          done

      trimWhite
  21. Script to trim Docker logs to 1 Megabyte.

          #!/bin/bash

          size=1M

          function trimLog
          {
              file=$1
              temp="$file.$(date +%s%N).tmp"
              time=$(date --rfc-3339='seconds')

              before=$(du -sh "$file" | cut -f1)
              echo -n "$time: $file: $before=>"

              tail --bytes=$size "$file" > "$temp"
              chown $(stat -c '%U' "$file"):$(stat -c '%G' "$file") "$temp"
              chmod $(stat -c "%a" "$file") "$temp"
              mv "$temp" "$file"

              after=$(du -sh "$file" | cut -f1)
              echo "$after"
          }

          find "/var/lib/docker/containers" -name "*.log" -size +$size 2>/dev/null | sort -f |\
              while read file; do trimLog "$file"; done

      trimDockerLogs
  22. I use this script daily to back up my flash drive to a zip file; it also deletes the backups that are over 30 days old.

          #!/bin/bash
          source /root/.bash_profile

          backup="/mnt/user/Backup/Flash"
          mkdir -p "$backup"

          date=$(date +"%Y-%m-%d-%H-%M-%S-%Z")
          filename="flash.$date.zip"

          echo Compressing flash backup \"$filename\"
          cd /boot
          zip -r "$backup/$filename" .* *

          chown -R nobody:users "$backup"
          chmod -R ug+rw,ug+X,o-rwx "$backup"

          echo Removing backups over 30 days old
          find "$backup" -mtime +30 -type f -delete -print

      backupFlash.sh

      Edit: Added quotes around everything in case people want to add spaces in the backup directory name and/or the backup file name.
  23. PC running Windows 10 Pro 22H2. Here is an example of a 7.02 GB transfer of 3 files:
  24. I have also noticed that throughout the v6.11.x releases Samba has gotten slower. Before, I used to get a steady 100 MB/s. Now I get a roller coaster between 70 MB/s and 20 MB/s. Also, when I transfer over 5 GB of files, it will stall multiple times to 0 MB/s.
  25. I'm not sure what you are trying to do. "./backupDockers.sh -l" only lists the dockers you have installed on the system, not the backups of the dockers. You should never use /mnt/cache as your backup directory; the backup will fail because it syncs the files to a folder called appdata in the backup directory, and since appdata is usually located at /mnt/cache/appdata, it would be trying to overwrite itself.