Everything posted by je82

  1. Can you elaborate on how to do this properly?
  2. I have a problem whenever an rsync script is run via User Scripts. When I test the same command in the CLI it works fine: it sets both the directories and the files being backed up to the correct permissions for Unraid. But whenever the same command runs via User Scripts, it for whatever reason sets the directory permissions to drwxrwx--- instead of drwxrwxrwx+. Please advise?
  3. Found the issue: I was running one of the backup scripts as a different user, and by default rsync tries to clone the source permissions.
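     For reference, a minimal sketch of the fix (paths and host are placeholders, not my real ones; the --no-perms plus --chmod=ugo=rwX combination is the documented rsync recipe for giving new files the destination-default permissions):

         umask 000
         rsync -rltv --no-perms --no-owner --no-group --chmod=ugo=rwX \
               /mnt/user/Backup/ root@backupserver:/mnt/user/Backup/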
  4. The rsync script is invoked as root; it then sends the files to another Unraid server. The permissions in /mnt/user/Backup looked like the permissions in /mnt/user0/Backup before the rsync script ran. I have multiple different scripts backing up to the /Backup folder; there are Windows machines running FreeFileSync daily to send files to /Backup/Files\ Backup/, but whenever the rsync script has run, the Windows machine credentials no longer have read access to the /Backup folder. I've run the "Fix Permissions" tool on the Backup share multiple times, and I've narrowed it down to the rsync script doing something with the permissions every time. Cache permissions seem fine.
  5. Hi, I've noticed that my file/folder permissions keep changing whenever my cron rsync scripts run. Compare ls -l on /mnt/user/Backup with ls -l on /mnt/user0/Backup. Sometimes folders inside "Files\ Backup/" are now owned by user "sshd". Any idea what I am doing wrong or why this is occurring? The backup jobs are simple rsync jobs; any idea what I need to change so the permissions don't change? Thanks.
  6. Follow-up questions: 1. When I first accidentally ran the command on everything, I exited out before it had completed; could issues occur, yes/no? 2. What exactly is it doing when it says "sync"? Since I exited out before it had completed, it obviously missed the sync step. I have no idea what syncing means, so I need to know what it does in order to figure out whether I will have issues.
  7. So from what I read, this will only be a problem with particular docker containers that may run as a special user. I don't believe I have any of those; everything seems to use UID 99, GID 100, UMASK 000, which is pretty much already nobody:users?
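     A quick way to verify what a given container actually runs as (the container name is just an example):

         docker exec my-container id
         # expecting something like: uid=99(nobody) gid=100(users) groups=100(users)
         docker inspect --format '{{.Config.User}}' my-container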
  8. Yeah, it seems like shares are working like normal, but I am worried about my docker installation: "Note that this tool may negatively affect any docker containers if you allow your appdata share to be included." As I said, I thought that by selecting the share and then selecting all drives it would only target that particular share, which may be spread over _all drives_, but it ran on everything. I cancelled the command pretty quickly, but it ran through the entire /mnt/cache, so everything is nobody:users now. Is this bad? I've not noticed any issues, but I really want to know what kind of issues this can create and whether there's something I can do to restore it to whatever it was before I made this mistake?
  9. So I saw that my /Backup/ share had weird permissions... 2 folders in the root folder were only accessible by one user, yet in Unraid the share is permitted for multiple users. Stupidly, I automatically figured that the "New Permissions" tool would account for the user accounts I've created and apply those to the share when run. Turns out it applies nobody:users to everything! How do I make Unraid apply permissions using the settings I have in the share configuration? Also, it's pretty dumb that Select Disks and Select Shares are on the same menu, because a share can be on multiple disks. I figured that if I selected the particular share I want to run permissions on and then selected ALL disks (because, like I said, a share can be spread out over all disks), it would only run on that particular share, but it ran on everything in the entire Unraid array. Thanks for any help.
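     For reference, a rough sketch of what the tool seems to apply, but scoped to a single share instead of every disk (hedged; the real tool's exact mode bits may differ slightly):

         chown -R nobody:users "/mnt/user/Backup"
         chmod -R u=rwX,g=rwX,o=rwX "/mnt/user/Backup"
         sync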
  10. I just realized I have 3 fans mounted to my extremely warm disk controllers, and if they stopped spinning for whatever reason, the controller card would most likely fail eventually, and I would only find out about the fan stopping by the time it's too late. I was wondering if there's any way to get fan sensor data from the motherboard (Supermicro X10SLR-F)? What plugins should I look for? Edit: looks like I found something that's giving me the information I need:
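     In case it helps someone else with an IPMI board: a hedged sketch of reading the fan sensors from the console, assuming ipmitool is available (e.g. installed via a plugin):

         ipmitool sdr type Fan         # lists fan RPM readings from the BMC
         ipmitool sensor | grep -i fan # alternative view of the same sensors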
  11. I am interested in starting to use this plugin, but I have some questions: 1. If the hashing results are NOT saved to flash, where are they stored? 2. Is it possible to store the hash results log to another path than flash? I have, for example, used fstab to statically mount an SMB share on a remote logserver which holds all my logs. I would want the logs to go directly there instead of moving them from the flash each day with a script (avoiding writing to flash as much as possible). Is this possible, or do I need to modify the plugin to do this?
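     For context, the kind of static SMB mount I mean; server, share and credentials file are placeholders, and since Unraid runs from RAM the entry has to be re-applied at boot (e.g. from the go file) rather than living in a persistent /etc/fstab:

         //logserver/logs  /mnt/remote-logs  cifs  credentials=/boot/config/.smb-logs,uid=99,gid=100,_netdev  0  0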
  12. Sorry for hijacking the thread, but I am interested in this. Is there any system or script that does this work for you, which you can then later use to find out which files may be affected if parity errors occur? It would be a great addition to the Unraid universe to be able to know exactly which data is affected when a parity error is found and corrected.
  13. Just confirmed, the error above has nothing to do with Unraid; it was the latest build of RDM having some type of bug, which apparently doesn't happen on all systems, because I run the same version on another machine and there's no problem there. Issue solved.
  14. This might just be a bug in a new version of Remote Desktop Manager that I updated yesterday, which just wanted to rear its ugly head at the exact same moment my Unraid got borked. I've opened a ticket with them.
  15. Hello. I accidentally filled my cache today and docker got mad about it. After a reboot most containers seem to work, but containers that use a pid file to know whether they are running or not are problematic. My question is: how can I SSH into my Unraid installation and access the data inside the docker.img file? In this particular case, deleting the pid file should be enough to get the container back up and running. Ideas? Edit: solved it; via the GUI, simply click the container and hit Console:
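     The CLI equivalent over SSH, for anyone who prefers it to the GUI console (container name and pid path are just examples):

         docker exec -it my-container /bin/bash   # open a shell inside the container (use sh if bash is missing)
         rm /config/app.pid
         # or in one shot, without an interactive shell:
         docker exec my-container rm /config/app.pid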
  16. Today I accidentally filled my cache and docker crashed, but it appears Unraid is acting strangely now when I attempt to connect to it via SSH, but ONLY when I use a particular SSH client; regular PuTTY works fine. My first attempt to remove the blockage was restarting sshd. Still blocked. My second attempt was completely re-installing the SSH client and removing the previously cached data and certificates. Still blocked. Third attempt: checking iptables for any entry that blocks me? Looks empty. So what exactly is blocking me from accessing the Unraid installation? I am confused about whether the issue is client side, server side, or a mixture of both. Installing the particular SSH client with the same cached keys and config, but on another computer with a different IP, lets me access Unraid via SSH, so my idea is still that there's something on the Unraid side blocking me. It cannot be a specific IP block, though, because my IP can access the installation via another SSH client. I'm thinking a specific fingerprint from the particular SSH client is somehow blocked in Unraid, but where? Any ideas are welcome.
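     Next things I plan to try to narrow down client vs server side (a hedged sketch, nothing Unraid-specific; hostname is a placeholder):

         ssh -vvv root@nas                         # verbose client output shows where the handshake stops
         tail -f /var/log/syslog | grep -i sshd    # watch the server side while the failing client connects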
  17. Yeah, I hear you, but the question is: once I cleared up space on the cache, I could continue to write to it, yet docker kept returning 403, which was strange as the cache was no longer full. Restarting the docker service wasn't helping either. Anyway, I have to just move past it and make sure I don't fill the cache.
  18. From what I could see in the GUI, none of the systems reported anything being out of space after the initial fill-up. The docker.img container was never above 66% usage; the cache was 100% full at some point, though, when everything came crumbling down. It's just strange that things wouldn't get back to normal after I cleared up space on the cache via the CLI... I still had to reboot, which annoys me; I hate rebooting! Since I kept getting 403 errors, my guess is that the BTRFS "forced readonly" state was still in effect for some files on the cache drive, particularly docker.img?
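     A hedged way to check whether anything was actually still mounted read-only at that point:

         grep -E 'btrfs|loop2' /proc/mounts    # look for "ro" in the mount options of the cache and the docker loop device
         dmesg | grep -i 'forced readonly'     # shows whether btrfs flipped a volume to read-only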
  19. Thanks for the info. Strangely enough, docker appears to work after I rebooted the entire Unraid box. I tried to simply shut down the docker service and start it again without rebooting Unraid, but it would not start. For whatever reason, restarting Unraid solved it? What could be the cause? Here's what happened: I was filling the cache to trigger the mover to transfer to the array... I forgot that on Wednesdays I have a backup that runs on a server in the morning. The backup ran while I was transferring files, and it ended up filling the cache completely; I even lost access via the web GUI for whatever reason. I realized that the backup was running, so I cut the network to Unraid, went to the console, manually went to /mnt/cache/*path where my backup file was being built*/ and simply ran rm on the backup file. That cleared up 150gb. I connected the network again, the web GUI came back online and the cache looked like it had space left, but docker for whatever reason was returning a 403 error when I tried to start a container after stopping it. I shut down the entire docker service, and attempting to start it again resulted in an error. Should I be worried there may be issues somewhere? I see the logserver logged a lot of IO errors during the time when SMB transfers were being made but the cache was full:
     Aug 26 09:18:24 NAS kernel: BTRFS: error (device loop2) in cleanup_transaction:1846: errno=-5 IO failure
     Aug 26 09:18:24 NAS kernel: BTRFS warning (device loop2): Skipping commit of aborted transaction.
     Aug 26 09:18:24 NAS kernel: BTRFS info (device loop2): forced readonly
     Aug 26 09:18:24 NAS kernel: BTRFS: error (device loop2) in btrfs_commit_transaction:2236: errno=-5 IO failure (Error while writing out transaction)
     Aug 26 09:18:24 NAS kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 4, rd 0, flush 0, corrupt 0, gen 0
     Aug 26 09:18:24 NAS kernel: print_req_error: I/O error, dev loop2, sector 645504
     etc... but right now all systems seem to work fine.
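     As a follow-up check for the cache pool after the full-disk incident, this is what I'm considering running (a hedged sketch, not Unraid-specific):

         btrfs scrub start -B /mnt/cache    # -B blocks until done and prints a summary of checksum errors
         btrfs device stats /mnt/cache      # per-device error counters; should show whether new errors accumulate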
  20. My cache volume became full and docker crashed. I've disabled docker now, and when I look into the appdata folder I see all the cached data of the applications is available there. That's good and all, but my worry is... where is all the docker configuration data stored? Like the docker apps' names, IP addresses, manually configured paths etc.? If this is stored in docker.img, am I essentially screwed and do I have to manually figure out all my settings again, because I never backed up my docker.img? When I try to mount docker.img or extract its data, it tells me it is corrupt; I never tried this before, so I don't know whether it was possible or not. EDIT: my docker.img is 20GB, my cache volume became full, and docker never used more than 16GB, yet it became scrambled. What happened to the remaining 4GB is a question that remains.
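     One thing I'm going to check: whether the GUI container settings are kept as xml templates on the flash drive (which would mean they survive a lost docker.img); the path below is what I believe it is on my box, but it needs verifying:

         ls /boot/config/plugins/dockerMan/templates-user/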
  21. Thanks for the input, and you're right, setting up a dedicated share for these kinds of things is a good workaround. I was just very surprised I had not run into any issues at all until a few days ago.
  22. Thanks for explaining. Is it possible to somehow have the mover check the file size against the space currently available at the destination path before each copy, or would that create far too much overhead? I feel like there must be a smarter way of doing this. I sometimes have 500GB files (backup images), and having to set the minimum free space > 500GB seems like a crazy waste of space! I am surprised I have not run into any issues of this kind until now.
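     The kind of per-file check I mean, as a hedged sketch (paths are just examples):

         need=$(stat -c %s "/mnt/cache/Backups/image.img")      # file size in bytes
         free=$(df --output=avail -B1 /mnt/disk3 | tail -1)     # free bytes on the target disk
         if [ "$need" -lt "$free" ]; then
             echo "fits on disk3"
         else
             echo "would overfill disk3, pick another target"
         fi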
  23. I was wondering why my cache is filled up even though the mover has been running and completing. It turns out my mover is looping: it fills a drive down to 0 bytes free, fails, and then tries again. I have no idea why the mover keeps trying to add files larger than the remaining space on the drive. Share configuration: Allocation Method: Fill Up, Minimum Free Space: 10GB, 4 drives assigned to the share. The mover is trying to add 60GB blu-ray files to a drive that has 14GB free (with 10GB being the limit), so the mover should only see 4GB free? Why? EDIT: I've now set the minimum free space to 30GB to make the mover bypass the drive with 14GB free, and it allocates the new files to a drive that has space. I was under the assumption that the mover is smart enough to stat each file before it moves it, to see whether the allocated space is actually available, but I guess not? That seems like something that should be fixed (if it is not already fixed; I am still running Version 6.7.2).
  24. Found the "problem": apparently Unraid does not count the cache toward a share's size if that share has no files on the actual cache at the point of the calculation. Incoming has files on the cache right now, Incoming FTP does not; I added a file to the share (which goes to the cache) and the size calculations became the same.
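     A quick way to see where the difference comes from (share names are from my setup; the du on the cache path just errors harmlessly if the share has nothing on the cache yet):

         du -sh "/mnt/cache/Incoming FTP" "/mnt/user0/Incoming FTP" "/mnt/user/Incoming FTP" 2>/dev/null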
  25. Hi, I am having the strangest time with a share I've created. I cannot for the life of me understand why it is being calculated at a different size than another share with the exact same specifications. Can anyone help me understand? These 2 shares have the exact same configuration, yet they display different sizes. Why is Incoming FTP being calculated differently? I've tried to change settings and re-apply them, such as "cache only" etc., but it remains the same: different sizes for the exact same configuration. Ideas?