
Squid

Community Developer
  • Posts: 28,769
  • Joined
  • Last visited
  • Days Won: 314

Everything posted by Squid

  1. This is what lsio et al. are going to need from you: http://lime-technology.com/forum/index.php?topic=40937.msg481150#msg481150
  2. CA supports any mounted destination (i.e., anything mounted via Unassigned Devices, any array device, rclone destinations, etc.). Whether it works or not depends upon the destination filesystem's capabilities. CA will return an error on a backup if errors occurred.
  3. I have an example in the docker FAQ about CPU shares. For even more examples, and further options to prioritize docker apps over unRaid / VMs / etc., Google "docker run reference" for the parameters to pop into the Extra Parameters section.
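As a sketch of the kind of parameter the docker run reference describes, here is a hedged example of composing a --cpu-shares flag for the Extra Parameters field. The 512 weight and the range check are assumptions drawn from my reading of the docker run reference; verify against your Docker version.

```shell
#!/bin/bash
# Sketch: compose a --cpu-shares value for dockerMan's "Extra Parameters" box.
# The default weight is 1024; a container at 512 gets roughly half the CPU
# time of a default container when the CPU is contended (range check per my
# reading of the docker run reference - verify for your Docker version).
cpu_shares_param() {
  local shares="$1"
  if [ "$shares" -lt 2 ] || [ "$shares" -gt 262144 ]; then
    echo "cpu-shares out of range: $shares" >&2
    return 1
  fi
  echo "--cpu-shares=$shares"
}

# Example: paste the output into the Extra Parameters field of the template
cpu_shares_param 512
```

Note that the flag only arbitrates when containers actually compete for CPU; an uncontended container is unaffected.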
  4. The "disks" folder is normally created by the Unassigned Devices plugin. Setting the mount point to be within /mnt/disks is really only required if you need the mounts created by this plugin to be utilized by a docker container. If you don't require that, then it doesn't matter where the mount is located. Perhaps it would be a good idea for this plugin to automatically create the disks folder upon installation to eliminate any confusion.

     I went ahead and installed the Unassigned Devices plugin, and I now have a Disks folder. However, it's completely empty and I see no way to get to it from SMB etc., so I'm not sure what good it would do for me. Assigning mount points under an existing share works well for being able to see the cleartext progress of what's being loaded to ACD. One interesting issue I'm puzzled about, though, which I somehow hoped the Disks share might solve, is that I appear to have two main subdirectories that I need to back up: user and user0. At first glance the data in them appeared to be the same, but then I began noticing that there were files in each of them that didn't exist in the other (or the sorting was wacky). I've tried specifying two different targets for rclone, but this appears to be a no-go. I'm not certain what to do. Two different jobs run back to back, maybe? Anyone else gotten around this? I could jigger mount symlinks or something, but I don't want to accidentally recurse something or end up moving data 2x. I've got enough data that doing just user will take a month or so; hopefully a solution presents itself, or I'm stupid and missed something, lol.

     /mnt/user is the contents of all the shares including the cache drive. /mnt/user0 is the contents of all the shares excluding the cache drive. You would only need to back up user, not user0.
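The user/user0 distinction above can be captured in a small sketch. The rclone remote name acd: and the destination path are assumptions for illustration, not something from the posts.

```shell
#!/bin/bash
# /mnt/user  = all shares INCLUDING the cache drive
# /mnt/user0 = all shares EXCLUDING the cache drive
# /mnt/user is therefore a superset: backing it up alone avoids the
# "moving data 2x" worry from the question above.
pick_backup_source() {
  local mnt_root="${1:-/mnt}"
  echo "$mnt_root/user"
}

# Hypothetical rclone invocation built from the chosen source
SRC="$(pick_backup_source)"
echo "rclone sync $SRC acd:backup"
```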
  5. The "disks" folder is normally created by the Unassigned Devices plugin. Setting the mount point to be within /mnt/disks is really only required if you need the mounts created by this plugin to be utilized by a docker container. If you don't require that, then it doesn't matter where the mount is located. Perhaps it would be a good idea for this plugin to automatically create the disks folder upon installation to eliminate any confusion.
  6. Just an FYI: user scripts parses the commented lines, but passes the script untouched for execution. The comment lines do not have to be present (and even if they are, they're just comments in bash, PHP, and probably other interpreters). Any issues with user scripts and comments are probably fixed with today's update. But a super simple workaround is to put a dummy command (something like cat /dev/null) prior to any legitimate comments in a script, as user scripts stops processing once it hits a non-comment line.
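A minimal sketch of the workaround described above; the copyright comment is a made-up placeholder.

```shell
#!/bin/bash
cat /dev/null  # dummy command: user.scripts stops parsing for its variables here
# Copyright (c) Example Author - safely ignored by the parser now
# Further legitimate comments won't be mistaken for plugin variables either
marker="script body runs normally"
echo "$marker"
```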
  7. Check for updates. (It was danioj's copyright message messing it up - tightened up the parsing on looking for my own variables within the comments)
  8. That showed up in the UI? Can you show me the line that you're adding to the script?
  9. THAT'S what happened... I was unfortunate enough to be doing a drive rebuild during that, and I woke up the next morning and discovered the syslog was full, which kept me from being able to do much of anything. Rebooted and had XFS corruption, which shortly afterwards caused a drive to red-ball. Huh? That quote is a reply to wgstarks. Your last post was about AD permissions.
  10. Any mount point is possible for destinations. If it doesn't appear in the filetree, simply type it in. How you mount it is up to you. I *believe* there is a plugin that will mount it for you (rclone?)
  11. No, the log is cleared on every boot. Your system has been up and running since Nov 7.
  12. Looks like on the 14th, something weird happened with rsync from CA Backup, where rsync decided to log everything from the backup into the syslog, even though the command given to rsync is correct. The net result is that God knows how many extra lines got logged into the syslog that day. (My initial thought is that there was an issue with the docker.img file on that particular day which you fixed later.) The easy solution to the log starting to get full is to just reboot the server. And IIRC, FCP does a warning at 50%, then at 80%, then an error at 90% to try and get your attention before this becomes an actual issue that affects the operation of your server. Right now there's no problem, and a reboot can be done at your leisure.
  13. Added: Inline variables to scripts. Going forward, the file description is now deprecated on user scripts (but will remain supported). The recommended way of setting the description for a script is via inline variables in the script itself. There are currently only a few variables available (but I'm thinking about others):
     • description: the description of the script, i.e. what will show up in the UI for the plugin
     • foregroundOnly: setting this to true disallows background running (and scheduling) of the script
     • backgroundOnly: setting this to true disallows foreground running of the script
     • arrayStarted: setting this to true will only run the script (foreground, background, or scheduled) if the array is up and running
     How to implement these variables: immediately after the interpreter line (e.g., immediately after the #!/bin/bash line*), add these lines if you choose:
     #description=this is the description of the script
     #foregroundOnly=true
     #backgroundOnly=true
     #arrayStarted=true
     After the first non-comment line within any script, parsing for these variables stops (i.e., they have to be right at the top of the script file). Note that you do NOT have to have any or all of the lines contained within any particular script. (Also, if you do have the description file present, its contents take precedence over the description variable.) *PHP scripters: you can also place these variable lines immediately after the <? or <?php line.
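For instance, a complete user script using these inline variables might look like the following; the description text and the script body are hypothetical. Bash treats the variable lines as ordinary comments, so the script runs unchanged.

```shell
#!/bin/bash
#description=Example nightly tidy-up script (hypothetical description)
#arrayStarted=true
#backgroundOnly=true
# user.scripts reads the three lines above; to bash they are just comments.
status="inline variables parsed, script body executing"
echo "$status"
```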
  14. Due to the number of times that I have to cause trips during development, my wife has now threatened me with divorce if I do any development on this plugin while she's still awake. In the webGUI, go to Shares, Disk Shares, click on each disk in turn, and change the settings to whatever they were (or you think they were). If you don't remember ever changing them in the first place, then they were probably set to Public (RP sets them to Secure if they were previously Public). Also delete the comment line so that it's easy to see when RP changes it.
  15. Would like this as well. Preferably if it could be set to a schedule or triggered via the CA backup app to run after the backup is complete. This is what I was looking for here without response: https://lime-technology.com/forum/index.php?topic=53783.msg515483#msg515483 lol Did you happen to chat with CHBMB about this today? By coincidence, I happened to have looked at this earlier today for probable inclusion in the plugin, and then got a couple of direct questions about it... Here's a very basic script that will update all docker containers that dockerMan is showing an update available for:

```php
#!/usr/bin/php
<?
require_once("/usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php");

$DockerTemplates = new DockerTemplates();
$info = $DockerTemplates->getAllInfo();
$allContainers = array_keys($info);

// Collect every container that dockerMan reports as not up to date
foreach ($allContainers as $container) {
  if ( ! $info[$container]['updated'] || $info[$container]['updated'] == "false" ) {
    $updateList[] = $container;
  }
}

// Hand the list to dockerMan's own update routine
$_GET['updateContainer'] = true;
$_GET['ct'] = $updateList;
include("/usr/local/emhttp/plugins/dynamix.docker.manager/include/CreateDocker.php");
?>
```

You can run this via cron (or the user.scripts plugin). (And it's not unRaid that's not updating the containers with the stops and starts. It's lsio that has decided to not update the apps at startup, but rather pump out weekly updates to the containers themselves.) You'll want to execute it with the output redirected to /dev/null, because the CreateDocker.php routine is expecting to output the status to a webpage, so it'll look kinda strange at a command prompt (but you'll still see what's going on). Like I said, I started the basic planning on this earlier in the day, but some updates to the user.scripts plugin were first on my todo list for the weekend, so the ETA for auto docker updates is next weekend (or, knowing how I do things, tomorrow / a couple hours / 10 minutes). The main real holdup is having to wait for next week's lsio updates so I can actually test against a bunch of stuff.
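If you go the cron route rather than user.scripts, the entry might look like the following; the weekly schedule and the script's path on the flash drive are assumptions, not from the post.

```shell
# Hypothetical crontab entry: run the update script Sundays at 4am, with
# output redirected since CreateDocker.php writes webpage-oriented status
0 4 * * 0 /usr/bin/php /boot/updateDocker.php > /dev/null 2>&1
```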
  16. There is no attack history present. Did you delete the attack log? Did you uninstall and reinstall the plugin hoping that would fix it? Also post a screenshot of the Shares tab. I think I uninstalled it with that hope, then thought it over and decided it might be a plugin that is better to have installed than not, so I reinstalled it and set it to put bait files only in root, which seems to be working fine so far, other than my boo-boo from the initial install of setting it to put squid files in all directories without realizing what I was doing. Thank you very much for your help, by the way!! The uninstall thing is something that I spent some time going over in my head about what to do. Did I want to restore normal permissions, or leave it in the tripped state? I ultimately decided, due to the nature of the plugin, to leave it in the tripped state so that someone wouldn't merely uninstall the plugin in case of a legitimate attack without understanding what was going on. Unfortunately, what that means is that in the case of a reinstall without first fixing those share settings, the plugin assumes that what is set is what it's supposed to be. I'll change the uninstall routine to restore the permissions.
  17. I tried running this via ssh, but it doesn't seem to help. Possible bug. Gotta wait a few hours before I can check it out. Everything's restored itself back to read-write, with the exception of the disk shares. If you access the appdata share via the share instead of first navigating to the cache drive over the network, you should be OK. But you're going to have to manually reset the disk share permissions for each of the disks. No real way around it.
  18. There is no attack history present. Did you delete the attack log? Did you uninstall and reinstall the plugin hoping that would fix it? Also post a screen shot of the shares tab
  19. If the share does not state Read-only mode, then it's probably a permissions issue within the Downloads folder. I've been plagued by it recently on ~50% of my DLs via NZBGet for some reason. I just do newperms /mnt/user/Downloads to fix it up.
  20. There is no real way to distinguish how or what caused the trip. A docker container deleting the bait is the same as doing it from the command line, which is the same as doing it over SMB.

     Prior to v2016.11.11, there was an implementation error (read that as: I didn't consider all possibilities) where once the program tripped, a subsequent trip (which wouldn't happen via SMB, but could from the command line, a docker app, etc.) would then trash the backup copies of the share configs. This double-trip situation basically resulted in the backup copies of the share configs being overwritten by the read-only settings, so attempting to restore normal access would just restore a backup of the read-only settings, and you're back where you started. After 2016.11.11, a check is made to see if the backup copies exist prior to overwriting them, and if they do, the copy is skipped. Since this has been going on for about a week, the time frame is about right for when the issue started vs. when it was fixed.

     The only solution at this point is to click the button to restore normal permissions (which won't do anything obvious, but it will get the program back into a state that you can work with) - from your posted pic it already is in normal mode - and then change the share permissions back manually to what they should be. Ultimately, I don't advise tossing bait files into every folder, as the chances of innocent trips skyrocket. Just use the root of all shares and use the bait shares option. Also, any shares which are manipulated by other apps (e.g., Downloads) should always be excluded, as the programs running have no concept that this plugin is monitoring the files within.
  21. 1. Install the NerdPack plugin and install python
      2. Copy the python script to your flash drive
      3. Install the user scripts plugin
      4. Write a script to execute the python script & place it in the folder told by user scripts
      5. Set to execute weekly
      I beat you this time
  22. Install python via the NerdPack plugin Install the user scripts plugin Set the user scripts plugin to run the python script weekly
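The steps above can be sketched as a tiny wrapper script; the flash-drive path is an assumption, and user.scripts itself supplies the weekly schedule.

```shell
#!/bin/bash
# Wrapper a user script might contain: run a python script stored on the
# flash drive (path is hypothetical). user.scripts handles the scheduling.
run_python_script() {
  local pyscript="${1:-/boot/custom/myscript.py}"
  if [ -f "$pyscript" ]; then
    python "$pyscript"
  else
    echo "missing $pyscript"
    return 1
  fi
}
```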
  23. I've set the appdata folder to prefer the cache and ran the mover script. It copied the entire appdata folder to my cache drive like you said, but the appdata still remains on the array as well. Is this normal behaviour? In the future will it read and write from the cache and not the array? If files / folders are open, then those files / folders will not get moved. But an empty appdata share sitting on the array will cause no ill effects, and you won't even notice it, since most access to the shares is via /mnt/user/....
  24. General rule, with adding any app is that the only things you ever need to change are the host / container volume paths, and port(s) if there is a conflict. Specifically on Dolphin, everything in the template that appears is correct. The only thing you may need to change is the Host Port for it (entry is blank on the template). Set it to something like 8080. After the app is installed, wait a minute or two then go to the webUI, and you'll be in business (Navigate to ROOT, then /mnt).
  25. You have to incorporate a test into your script. If the output of cat /var/local/emhttp/var.ini | grep "mdState" is mdState="STARTED", then your script can continue.
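As a sketch of that test: the var.ini path and the mdState key come from the post; the function wrapper and messages are mine.

```shell
#!/bin/bash
# Guard: succeed only when Unraid's state file reports the array as started.
array_started() {
  local var_ini="${1:-/var/local/emhttp/var.ini}"
  grep -q 'mdState="STARTED"' "$var_ini" 2>/dev/null
}

# Typical use at the top of a script:
# array_started || { echo "array not started - exiting"; exit 0; }
```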