overbyrn

Community Developer
Everything posted by overbyrn

  1. Hi all, I put together a simple docker container as a wrapper around Bulk Downloader for Reddit, the tool for downloading submissions. The container also has options to run several supporting tools: detox to clean filenames, rdfind to delete dupes or convert them to symbolic links, and a cleanup step to tidy messy symlinks & create relative versions. I'm currently using it as a personal docker template and I'd prefer not to support it as a fully fledged CA app. If there's a dev out there that wants to take over, please go for it. You can find it here: Github | Dockerhub Regards, overbyrn
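The optional cleanup steps roughly correspond to a pass like the one below. The tools (detox, rdfind) are the ones mentioned above, but the directory, flags and ordering are illustrative assumptions, not the container's exact invocation:

```shell
#!/bin/sh
# Hypothetical post-download cleanup pass; flags and paths are assumptions.
DL_DIR="$(mktemp -d)"
touch "$DL_DIR/messy  name?.txt"          # sample awkward filename

# 1) Sanitise awkward filenames in place (skipped if detox isn't installed).
if command -v detox >/dev/null 2>&1; then
    detox -r "$DL_DIR"
fi

# 2) Delete duplicate files (or swap the flag to replace dupes with symlinks).
if command -v rdfind >/dev/null 2>&1; then
    rdfind -deleteduplicates true "$DL_DIR" >/dev/null
    # rdfind -makesymlinks true "$DL_DIR"   # symlink variant instead
fi

# 3) Rewrite absolute symlinks as relative ones (GNU ln -r, coreutils >= 8.16).
find "$DL_DIR" -type l | while read -r link; do
    target="$(readlink -f "$link")" && ln -sfr "$target" "$link"
done

echo "cleanup pass finished"
```

The `command -v` guards make the sketch safe to run on a box that has only some of the tools installed.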
  2. Awesome, thanks Dan. That'll be a huge help. Now, if you ever feel the desire to give UD the option to have per-device SMB and NFS share permissions, you can have my first born. Well, no... no you can't, it's a bit late for that. But you'll have my undying gratitude. It'd be a big overhaul to the code there though, so I'd understand if you don't have the time or inclination. Thanks once again, overbyrn
  3. Hi Dan, Is it possible with UD as it stands to export a mount as NFS but with specific parameters, much like can be done with regular shares when exporting as NFS? I'm guessing not, but figured I'd ask. Admittedly it's a bit of an edge-case scenario, but it would be very helpful, to me at least, if it was possible to tweak the NFS export parameters rather than the standard Public with read/write access. I guess that would also mean having to change NFS export from a global setting to per-mount, as I notice that /etc/exports- gets populated with all mounted devices once NFS export is set to enabled. I was going to take a look under the hood at your code and see if I could find where /etc/exports- gets written. Before using UD, I used to have a manually created exports- at /boot/config which got auto-added to /etc/exports when exportfs was called. Is there something similar perhaps, whereby you store a copy of exports-, or do you generate it during the plugin run only? I could go and manually change /etc/exports- myself, but that's messy for all kinds of reasons. Would appreciate your thoughts. Regards, overbyrn
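For reference, the kind of per-export parameters being asked for here use standard exports(5) syntax. The path, network and option list below are made-up examples, not what UD actually writes:

```
# hypothetical /etc/exports- entry with per-mount parameters
"/mnt/disks/usb_backup" 192.168.1.0/24(rw,sync,no_subtree_check)
```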
  4. I compiled it, but zero clue if it's what you need or will help. Was built against Slackware Current, which I think is what 6.2 is based on.

     ./configure --prefix=/usr

     Configuration on Mon Aug 1 20:38:13 EDT 2016:
       Host:                        i686-pc-linux-gnu -- slackware Slackware 14.2
       Apcupsd version:             0.7.5 (13 June 2016)
       Source code location:        .
       Install binaries:            /sbin
       Install config files:        /etc/apcupsd
       Install man files:           ${prefix}/share/man
       Nologin file in:             /etc
       PID directory:               /var/run
       LOG dir (events, status):    /var/log
       LOCK dir (for serial port):  /var/lock
       Power Fail dir:              /etc/apcupsd
       Compiler:                    g++ 5.3.0
       Preprocessor flags:          -I/usr/local/include
       Compiler flags:              -g -O2 -fno-exceptions -fno-rtti -Wall
       Linker:                      gcc
       Linker flags:                -L/usr/local/lib
       Host and version:            slackware Slackware 14.2
       Shutdown Program:            /sbin/shutdown
       Port/Device:                 /dev/ttyS0
       Network Info Port (CGI):     3551
       UPSTYPE                      brazil
       UPSCABLE                     simple
       drivers (no-* are disabled): apcsmart dumb net no-usb snmp no-net-snmp pcnet modbus no-modbus-usb no-test brazil
       enable-nis:                  yes
       with-nisip:                  0.0.0.0
       enable-cgi:                  no
       with-cgi-bin:                /etc/apcupsd
       with-libwrap:
       enable-pthreads:             yes
       enable-dist-install:         yes
       enable-gapcmon:              no
       enable-apcagent:             no

     https://dl.dropboxusercontent.com/u/572553/apcctrl-0.7.5-x86_64-1rj.txz

     Send me a PM if you need to get in touch as I rarely check the forum these days. Regards, overbyrn
  5. It's okay, you can name the original person. He won't mind. I'm glad someone took up the mantle to make the plugins work for later unRAID versions. Good job!
  6. Phaze (and others creating v6/plgman compatible plugins), don't bother creating and downloading png files. Instead, generate them from inline code and save directly to your plugin directory underneath /usr/local/emhttp/plugins. e.g. instead of

     <!-- Download application and web GUI icons -->
     <FILE Name="/boot/config/plugins/&name;/images/information.png">
       <URL>--no-check-certificate https://raw.githubusercontent.com/PhAzE-Variance/AppSupport/master/information.png</URL>
     </FILE>

     do

     <FILE Name="/usr/local/emhttp/plugins/&name;/information.png" Type="base64">
     <INLINE>
     iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAABGdBTUEAAK/INwWK6QAAABl0RVh0U29mdHdhcmUAQWRvYmUgSW1hZ2VSZWFkeXHJZTwAAAKcSURBVDjLpZPLa9RXHMU/d0ysZEwmMQqZiTaP0agoaKGJUiwIxU0hUjtUQaIuXHSVbRVc+R8ICj5WvrCldJquhVqalIbOohuZxjDVxDSP0RgzyST9zdzvvffrQkh8tBs9yy9fPhw45xhV5X1U8+Yhc3U0LcEdVxdOVq20OA0ooQjhpnfhzuDZTx6++m9edfDFlZGMtXKxI6HJnrZGGtauAWAhcgwVnnB/enkGo/25859l3wIcvpzP2EhuHNpWF9/dWs/UnKW4EOGDkqhbQyqxjsKzMgM/P1ymhlO5C4ezK4DeS/c7RdzQoa3x1PaWenJjJZwT9rQ1gSp/js1jYoZdyfX8M1/mp7uFaTR8mrt29FEMQILr62jQ1I5kA8OF59jIItVA78dJertTiBNs1ZKfLNG+MUHX1oaURtIHEAOw3p/Y197MWHEJEUGCxwfHj8MTZIcnsGKxzrIURYzPLnJgbxvG2hMrKdjItjbV11CYKeG8R7ygIdB3sBMFhkem0RAAQ3Fuka7UZtRHrasOqhYNilOwrkrwnhCU/ON5/q04vHV48ThxOCuoAbxnBQB+am65QnO8FqMxNCjBe14mpHhxBBGCWBLxD3iyWMaYMLUKsO7WYH6Stk1xCAGccmR/Ozs/bKJuXS39R/YgIjgROloSDA39Deit1SZWotsjD8pfp5ONqZ6uTfyWn+T7X0f59t5fqDhUA4ry0fYtjJcWeZQvTBu4/VqRuk9/l9Fy5cbnX+6Od26s58HjWWaflwkusKGxjm1bmhkvLXHvh1+WMbWncgPfZN+qcvex6xnUXkzvSiYP7EvTvH4toDxdqDD4+ygT+cKMMbH+3MCZ7H9uAaDnqytpVX8cDScJlRY0YIwpAjcNcuePgXP/P6Z30QuoP4J7WbYhuQAAAABJRU5ErkJggg==
     </INLINE>
     </FILE>
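Producing the base64 payload for such an <INLINE> block is a one-liner with GNU coreutils base64; the demo bytes below stand in for a real png file:

```shell
#!/bin/sh
# Encode an icon for inline embedding; the icon contents here are demo bytes
# only, not a real png.
ICON="$(mktemp)"
printf 'pretend-png-bytes' > "$ICON"

B64="$(base64 -w0 "$ICON")"    # -w0 = single unwrapped line
echo "$B64"

# Round-trip check: decoding must reproduce the original bytes.
echo "$B64" | base64 -d > "$ICON.dec"
cmp -s "$ICON" "$ICON.dec" && echo roundtrip-ok
```

Paste the single-line output between the <INLINE> tags of a Type="base64" FILE element.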
  7. Yes, I get what you meant. What I was saying is that within the 'plugin' script, the general methodology looks to be: the check method compares the local version against the remote version; the update method downloads the later version to tmp and then calls the install method to perform the install (again). So if you build your install method to ensure it tears down the areas you need refreshing, then calling the update method will perform an installation, creating new versions of the files where required. You can also call multiple methods, allowing the same bash script to be used across more than one method:

     <!-- The 'install' script. -->
     <FILE Name="/tmp/plugins/tmp" Run="/bin/bash" Method="install update" Type="text">
     <INLINE>
     if [ ! -d /tmp/plugins/builtin/&name; ]; then
       mkdir -p /tmp/plugins/builtin
       mv /usr/local/emhttp/plugins/&name; /tmp/plugins/builtin
     else
       rm -r /usr/local/emhttp/plugins/&name;
     fi
     tar -xf /boot/config/plugins/&name;/&name;-&version;.tar.gz -C /usr/local/emhttp/plugins
     mv /usr/local/emhttp/plugins/&name;-&version; /usr/local/emhttp/plugins/&name;
     </INLINE>
     </FILE>
  8. I have figured out how to do this, but I wonder if an 'update' method would be appropriate. I used a combination of check, update and install methods from /usr/local/emhttp/plugins/plgMan/plugin.
  9. Tom, thanks for the explanation. I've previously used info gleaned from the code in plgman to help understand the new plugin structure (install/remove/check, anon scripts etc). I appreciate you're working on plugin documentation and I imagine it's low priority, but I'd be interested in anything you can share as I'm currently going through all my plugins (dropbox, ssh, btsync, beets, airvideo, lms, denyhosts, nzbget) and making them v6 compliant. It'd be great to get them right first time. Regards, overbyrn
  10. FYI, back in May I re-wrote my ssh plugin to take advantage of the v6 plugin structure, including the install, remove and check methods. Copy attached here in case it's helpful. It can also be installed by entering https://raw.githubusercontent.com/overbyrn/unraid-v6-plugins/master/ssh.plg into the Install Extension section of the v6 Extensions WebGui. ssh.plg
  11. I've been searching for a script to help with cloning VMs, but I couldn't find much out there that didn't rely on extra tools or applications. I just wanted something simple to run from the ESXi shell, rather than mess about with PowerCLI / PowerShell. I ended up rolling my own. It's probably not great, but it has worked for me and takes away the tedium of working directly with vmkfstools. I attach a copy here in case it's of use to anyone else. Alternatively, if someone has a better script, please let me know. The script clones a virtual machine's settings and disks, renaming and updating the config as it goes. The idea is that you should end up with an exact copy of the source VM, but with the config (VMX) file and all disk (VMDK) volumes renamed to reflect the new VM. You can optionally add the newly created VM into the ESXi host inventory. Usual caveats apply: don't go running it on a production platform without suitable testing. That being said, the script does a few sanity checks so should be fairly safe to run:
      - It attempts to check if the source VM is in the ESXi inventory and, if found, it'll check whether it is powered on. If it is, it won't go any further and will request you power off the VM. However, it isn't necessary for the VM you wish to clone to be in the inventory. This covers the scenario where you may already have a VM created but not in the inventory; if it only allowed cloning from what is known to the ESXi inventory, then you'd not be able to perform such a clone. The script only checks the inventory for the purpose of working out whether the VM you intend to clone is powered on.
      - The script will terminate if the source VM path is not valid.
      - The script will terminate if the target path already exists.

      Run the script without arguments or with "-h" for help. It's pretty self-explanatory. You need to include the whole path to the source and target VMs. The reason for this is that you may, or may not, intend to clone across datastores. By specifying the full path including datastore, the script isn't confined to performing the clone on a single datastore. Oh, and it's hard-coded to perform a "thin" provision.

      Syntax: CloneVM.sh [full.path.to.source.vm] [full.path.to.target.vm] [-r]
      eg. CloneVM.sh /vmfs/volumes/store1/srcvm /vmfs/volumes/store2/tgtvm -r
      where the third parameter -r is optional.

      I believe the script is compatible with ESXi 4.1 and 5.x releases. Regards, overbyrn CloneVM.zip
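The heart of a clone like this boils down to a few vmkfstools / vim-cmd calls. The sketch below uses invented VM names and paths, skips all the sanity checking the full script does, and is guarded so it does nothing when not run on an ESXi host:

```shell
#!/bin/sh
# Minimal thin-provision clone sketch (paths and VM names hypothetical; no
# sanity checks beyond the ESXi guard -- use the full CloneVM.sh for real work).
SRC=/vmfs/volumes/store1/srcvm
TGT=/vmfs/volumes/store2/tgtvm

if command -v vmkfstools >/dev/null 2>&1; then
    mkdir -p "$TGT"
    cp "$SRC/srcvm.vmx" "$TGT/tgtvm.vmx"
    # Clone the disk with thin provisioning (-d thin), as the script hard-codes.
    vmkfstools -i "$SRC/srcvm.vmdk" -d thin "$TGT/tgtvm.vmdk"
    # Point the new VMX at the renamed disk and display name.
    sed -i 's/srcvm/tgtvm/g' "$TGT/tgtvm.vmx"
    # Optionally register the clone in the host inventory (the -r behaviour).
    vim-cmd solo/registervm "$TGT/tgtvm.vmx"
    status="cloned"
else
    status="skipped: vmkfstools not found (not an ESXi shell)"
fi
echo "$status"
```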
  12. Just chipping in on an old message... Re: OpenSSH installing openssl-solibs. I can assure you openssh_overbyrn.plg does NOT install solibs or any SSL package. Just OpenSSH. Can't speak for alternative methods (unmenu?) of installing SSH.
  13. Will drop you a PM as I don't want to pollute this thread any further.
  14. This has been fixed. http://lime-technology.com/forum/index.php?topic=18310.msg256480#msg256480 Get the latest version from my github page.
  15. If you have time mrow, a write up of the steps involved including links to the dev kit, command line args for vmware-mount.exe etc, would probably benefit a ton of people. Maybe BetaQuasi could include it on the first post or link it to a wiki location? I guess the truth is, if you're running this type of config then the implication is you'd know how to do it, but well you know...
  16. theone, thanks for the plugin. I've been meaning for a long time to tinker with running VirtualBox on unRAID and this might make the process a lot more straightforward. I've just installed the plugin onto a dev box (no other plugins installed at this time). Noticed upon first entry into the VirtualBox settings page, there is an error regarding vms_session.cfg not found. I've only briefly skimmed your code, but I'm assuming that's because there are no running VMs as yet? I think there may be a mismatch between where this file gets created vs where the php var looks for it. From the php and rc scripts:

      rc.virtualbox:
        VBOX_PLUGIN_PATH="/boot/config/plugins/virtualbox"
        vboxmanage list runningvms > ${VBOX_PLUGIN_PATH}/vms_session.cfg

      virtualbox.php:
        $vm_session_cnt = count(file("/boot/config/plugins/vms_session.cfg", FILE_IGNORE_NEW_LINES));

      May have misread the code, so apologies if this is by design, but I think otherwise it may be looking in the wrong place. That being said, on first load of the GUI page there isn't a vms_session.cfg in either location, likely due to the reason mentioned above. Regards, overbyrn.
  17. How can I check the validity of a downloaded file after download? If it is possible I will add it to the code. Calculate an MD5 sum. Use /usr/bin/md5sum to compare against a known good hash.
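A minimal verification along those lines, with a generated file and hash standing in for the real download and its published checksum:

```shell
#!/bin/sh
# Verify a download against a known-good hash with /usr/bin/md5sum.
# The file and hash here are generated for the demo; substitute the real ones.
FILE="$(mktemp)"
printf 'pretend this is the downloaded payload' > "$FILE"

GOOD_HASH="$(md5sum "$FILE" | awk '{print $1}')"   # stands in for the published hash

# md5sum -c reads "<hash>  <filename>" lines (two spaces) and reports OK/FAILED.
echo "$GOOD_HASH  $FILE" | md5sum -c -
```

A mismatch makes `md5sum -c` print FAILED and exit non-zero, which is easy to branch on in an install script.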
  18. Looks good. Welcome to plugin writing. Not installed it yet, just skimmed through the code. One very minor point: device_status.png, new_config.png & information.png are never downloaded. If this were the only plugin on a system, they'd not exist in /boot/config/plugins/images. Regards, overbyrn
  19. Glad to hear it's working for you now. I don't think it's possible to make a one-size-fits-all rsync script, so please feel free to modify mine for how you need things. I really only ever made this thread to semi-document my own rsync experience in the hope it might help others. As for swapping include with exclude, don't forget you can also include and exclude from within the file you specify in either the include-from= or exclude-from= parameters. Precede each line with plus or minus and this'll tell rsync what you want included/excluded. For instance, in my include-from file, I don't want a subdir of sabnzbd, so my "rsync_include_list" looks like this:

      Services/
      Services/airvideo/***
      Services/couchpotatoserver/***
      Services/denyhosts/***
      Services/dropbox/***
      Services/iTunes/***
      Services/lms/***
      Services/MediaMonkey/***
      Services/playlists/***
      - Services/sabnzbd/Downloads/incomplete
      Services/sabnzbd/***
      Services/sickbeard/***
      Apps/***
      Dropbox/***
      Music/***
      Music\ Videos/***
      Music2/***
      Pictures/***
      Stuff/***

      The lines without + or - could just as easily have + at the start, but it's kind of implied, in that it's the include-from parameter which is reading this file and not exclude-from. Regards, overbyrn
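A tiny self-contained demo of that +/- behaviour, using invented paths (and skipped gracefully if rsync isn't installed):

```shell
#!/bin/sh
# Demo of a "- " exclusion line inside an --include-from file (paths invented).
SRC="$(mktemp -d)"; DST="$(mktemp -d)"; FILTER="$(mktemp)"
mkdir -p "$SRC/Services/sabnzbd/Downloads/incomplete"
touch "$SRC/Services/sabnzbd/config.ini" \
      "$SRC/Services/sabnzbd/Downloads/incomplete/partial.rar"

# Lines default to "include"; the "- " prefix forces an exclude.
cat > "$FILTER" <<'EOF'
Services/
- Services/sabnzbd/Downloads/incomplete
Services/sabnzbd/***
EOF

if command -v rsync >/dev/null 2>&1; then
    # Trailing --exclude='*' makes the filter file a whitelist.
    rsync -a --include-from="$FILTER" --exclude='*' "$SRC/" "$DST/"
    rsync_ran=yes
else
    rsync_ran="no (rsync not installed)"
fi
echo "$rsync_ran"
```

After the run, DST contains Services/sabnzbd/config.ini but not the incomplete directory.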
  20. dalben, I've rarely used rsync across remote systems so I'm a bit rusty on what to do. How does your example above connect to the remote rsyncd process? I seem to recall when I last set it up, it was between two unRAID systems, where one of these was running the rsync daemon. Mostly set up as WeeboTech outlines in post 3 of this thread: http://lime-technology.com/forum/index.php?topic=3159 But the syntax of that rsync command looks nothing like yours, so I'm assuming perhaps you're doing it over ssh? Happy to help if you can bring me up to speed on how your setup is configured. For what it's worth, I did some thinking and realised my script is going to fail at the first point, where it tests for target directory presence. No good when the location is remote! But that got me thinking that in fact I don't need the test in place at all anymore, not since I rejigged the script: regardless of the reason it fails, a log will be produced and errors reported if configured. I set up an rsync daemon on a test server and, using the script, have been able to remotely back up to this location. Again, this is basically configured the way WeeboTech outlined. So for the moment in my script for target I have target=rsync://tower/mnt/disk1/testbackup Here's the copy of the script I've used: https://dl.dropbox.com/u/572553/UnRAID/v3.sh Regards, overbyrn
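For context, a WeeboTech-style daemon setup defines a module in rsyncd.conf on the target box; the module name and options below are assumptions for illustration. An rsync:// target then addresses whatever path the named module exposes:

```
# /etc/rsyncd.conf on the backup server (hypothetical module)
[backup]
    path = /mnt/disk1/testbackup
    read only = false
    uid = root
    gid = root
```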
  21. trurl, I've had another hack at the script and made some more changes (for the better, I hope): a temp rsync log and final log file overhaul. It wasn't great, was it? I wanted a way to run the script multiple times during the same day and not have a ton of logs to show for it. So what I've done is have rsync create its log in /tmp using a combination of the script name + script PID. That gives me a unique log per script invocation. It then creates the final log as a compressed file (.tar.gz) in the logdir and with the logname of your choosing. This is pretty much the same as before, just that I've hardcoded the fact it's going to be a .tar.gz file (makes the next step easier). Each subsequent time the script is run, it'll add that rsync log to the final compressed log archive file. The assumption is that the log name will always be a composite of static text + date, so with luck you should only end up with one .tar.gz log file in any given day, which contains one or more rsync logs. I've tested it here and it seems to work ok. So far, every time I run the script, the temp rsync logs created in /tmp are being deleted once the script completes. I'm really hoping I've squashed that issue where files in /tmp were getting left behind. Sounds like you've got some real funnies where unicode files are concerned; not sure I can say much more on that side of things, I'm afraid. The previous dropbox link contains the latest script amendments. Regards, overbyrn
  22. rsync exit codes: http://bluebones.net/2007/06/rsync-exit-codes/
      23 Partial transfer due to error
      Sounds right given the problem you're seeing with unicode utf-8 characters. This seems less about rsync and more about how unicode characters are being handled in general. I did a couple of tests:
      1. created test file "12 ~ Bourée.mp3" via a Windows PC on an unRAID array disk. Set putty to utf-8. From the command line, listed the file:
      2. rsync'd the above file to an external ntfs-3g mounted drive using the same putty command line:
      What version of unRAID are you using? What is your unRAID locale set to? (issue "locale" from the command line) What is the method/command you're using to mount your Offsite disk to /mnt/user/Backups/Offsite?
      Update: trurl, grab a fresh copy of the script from the dropbox location. I noticed I had an error where dryrun would always run even if set to blank. I've also specifically set "LANG=en_US.utf8" right before the rsync takes place. Perhaps that'll help.
  23. trurl, thanks for giving it a try and letting me know how you got on. Not immediately sure why rsync exited with a non-zero exit code. I must admit I was fairly lazy and simply made the assumption that if rsync didn't exit with return code 0, then "something" went wrong. I don't really care what the something is, other than to get an email saying it failed so I know to follow it up. We need to know what the return code was before being able to troubleshoot it further. See below... Currently, the script only creates a log if rsync completes successfully (eg. return code = 0). That's something to consider, as I suppose there's value in having the rsync log even if it fails. I've made some modifications to the script. Try the copy at this link: http://dl.dropbox.com/u/572553/UnRAID/rsync_backup_to_external.sh I've simplified the temp log creation as I was trying to be too clever for my own good and it was leaving files behind in /tmp. I've also changed the error logging to include the rsync return code, so we'll see what is happening there. I've added three config options:
      1) verbose - when set to true, it'll log to syslog when run as cron, or alternatively you get the same log messages on the screen if running from the command line.
      2) dryrun - when set to true, it'll do everything but perform file transfers. Useful for testing.
      3) email - turns on / off the emailing if there's an error.
      IMPORTANT CHANGE: I added the --delete rsync parameter. I had wrongly left this out as I used to have it enabled my end. Be aware, this means any files at the target location no longer at the source location will be DELETED. You may want to review this in case it doesn't suit your needs. There's also a subtle change to the tar command used to create the final log file; it means the file extension (.tgz, .bz2) on the logname will dictate what compression is used when creating the tar. In the script at the link above, dryrun is enabled, as is verbose logging & email. You just need to re-enter the config data for your setup. I suggest you run the script from the command line as that'll help understand what's happening. Give it a go and let me know how you get on. Regards, overbyrn
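A sketch of how the three toggles might drive the rsync invocation. The variable names mirror the options described above, but the internals are an assumption, not the actual script:

```shell
#!/bin/bash
# Sketch of the verbose/dryrun toggles driving the rsync option list.
verbose=true
dryrun=true
email=true   # (the email toggle gates the failure mail; not exercised here)

rsync_opts=(-a --delete)             # NOTE: --delete removes target-only files
$dryrun && rsync_opts+=(--dry-run)   # do everything except transfer files

log() {
    # syslog under cron, stdout when interactive (or when logger is absent)
    if $verbose; then
        if [ -t 1 ] || ! command -v logger >/dev/null 2>&1; then
            echo "$*"
        else
            logger -t rsync_backup "$*"
        fi
    fi
}

log "would run: rsync ${rsync_opts[*]} SRC/ DST/"
echo "${rsync_opts[*]}"
```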
  24. trurl, I was interested to see if I could replicate your issue, but it seems to be working ok for me. As a test I created a subdir on my cache disk [/mnt/cache/test]. From a Windows PC, I browsed to this directory, right clicked, and created a new file using ALT+146 on the keypad to produce the character "Æ". So now I have a file called "Æ.txt". From the command line I performed an rsync from this location to my target, which is an NTFS formatted USB external disk, mounted via SNAP. Checked the target and it definitely copied the file across. The log file says the same. For what it's worth, I'm using ntfs-3g-2010.3.6-i486-1.txz which I think originally came from Unmenu.