master.h

Posts posted by master.h

  1. This is a bit of an older topic, but I'm having the same problem as Ivegottheskill and Hugh Jazz, except I can't figure it out like they did. I've got Data Storage mapped to /mnt/user/appdata/.airdcpp (container path is the default of /airdcpp). If I reboot the container, all seems to be fine, but if I modify it by adding a new data path or removing one, the container is rebuilt and all my changes are lost. I've copied the default config files into /mnt/user/appdata/.airdcpp and I've added -e PUID=1000 -e PGID=1000 in Extra Parameters (the container wouldn't even start for me without specifying PUID/PGID). Any help would be appreciated.
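For reference, the moving parts described above amount to roughly the following as a plain docker run. This is only a sketch: the image name is a placeholder (the Unraid template supplies the real one), and the PUID/PGID values should match the owner of the appdata folder.

```shell
# Sketch of the container settings described above; <airdcpp-image> is hypothetical.
# -e PUID/PGID sets the user the container runs as, so it can write to appdata.
# -v maps the host appdata folder to the container's default data path.
docker run -d --name airdcpp \
  -e PUID=1000 -e PGID=1000 \
  -v /mnt/user/appdata/.airdcpp:/airdcpp \
  <airdcpp-image>
```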

  2.  

    On 4/2/2021 at 1:00 PM, Ademar said:

    It took some tinkering, but I've figured out how to do it with python.

     

    import sys
    import urllib.parse
    import http.client  # Python 3
    
    name = sys.argv[1]
    
    user_key = "user key goes here"
    app_token = "application token goes here"
    
    def pushover():
        conn = http.client.HTTPSConnection("api.pushover.net:443")
        conn.request("POST", "/1/messages.json",
                     urllib.parse.urlencode({
                         "token": app_token,
                         "user": user_key,
                         "message": name + " has finished downloading",
                     }),
                     {"Content-type": "application/x-www-form-urlencoded"})
        conn.getresponse()
    
    pushover()

    I had to tweak this just a bit for my purposes. For some reason the script would fail to execute if I passed through the tags and size (even though I had %G and %Z specified in the "run external program" section of qBittorrent). I just deleted those variables in the script and stopped passing them through the "run external program" section. I also renamed the "token" and "user" variables in the script to make it easier for me to follow. At this point I'm successfully receiving notifications on torrent completion, TYVM @Ademar!
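As a side note for anyone debugging their own tweaks: the POST body the script sends is just form-encoded key/value pairs, so you can sanity-check exactly what goes over the wire without hitting the Pushover API at all. The token, key, and torrent name below are obviously placeholders.

```python
import urllib.parse

# Placeholder credentials, same shape as in the script above
app_token = "app-token"
user_key = "user-key"
name = "Some.Torrent.Name"  # what qBittorrent passes in as %N

body = urllib.parse.urlencode({
    "token": app_token,
    "user": user_key,
    "message": name + " has finished downloading",
})
print(body)
# token=app-token&user=user-key&message=Some.Torrent.Name+has+finished+downloading
```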

  3. On 4/2/2021 at 1:00 PM, Ademar said:

    pushover_user = "USER"

    pushover_token = "TOKEN"

    Well, so far I've not received any alerts. I'm fairly certain I've got the path and the "run external program" entry sorted out properly, but I think I might have this section wrong. What should the values for USER and TOKEN be (inside the quotes)? Right now I have USER set to my Pushover user key and TOKEN set to the application API token I created.

  4. 8 hours ago, Ademar said:

    It took some tinkering, but I've figured out how to do it with python. The docker image includes python3, so this is what I did:

    • Mount a folder, "/mnt/user/path", to "/script"
    • In qBittorrent, put this under "run external program": python3 /script/notify.py "%N" "%G" "%Z"

    Thank you very much! I'm not much of a scripter at all and know nothing about python, this is hugely helpful.

  5. Been having some issues with my server this past week. Got some UDMA CRC error messages on disk5, and it was disabled shortly after that. I was able to successfully rebuild the data to that drive, then the same thing happened with disk2: UDMA CRC errors, then disk2 disabled. Only this time, the disk goes offline again as soon as I start the data rebuild process. The server has completed one data-read check, so I tried to rebuild the existing disk2 again, but once more disk2 was immediately disabled and the read-check paused. My assumption was that the data cables are bad (they're pretty old at this point), but now I'm concerned my drive may actually be bad. I've attached logs; I would appreciate any advice or suggestions.

    saidin-diagnostics-20210307-1646.zip

  6. On 12/7/2019 at 5:16 PM, EgyptianSnakeLegs said:

    Thanks so much!

    EgyptianSnakeLegs

    I had that same issue, except after startup my calibre log was full of the timeout error messages. I resolved it by editing the calibre docker and removing the fields for GUAC_USER and GUAC_PASSWORD. Started up like a charm and I can get to the webui without issue.

  7. Some time ago I had created a cron job to run a simple file copy script to copy out material periodically to deliver to my parents. I've since removed the file, and I can't find any entries for it when editing my cron jobs with "crontab -e" but every minute or so I get these same three entries in the system log:

    Sep 25 23:11:01 Saidin crond[1730]: exit status 127 from user root /boot/custom/ToParents.sh
    Sep 25 23:11:02 Saidin sSMTP[29924]: Unable to locate
    Sep 25 23:11:02 Saidin sSMTP[29924]: Cannot open :0

     

    It's not a big deal at all, my server functions just fine, but it's certainly cluttering up the log. Any suggestions for a fix?
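For anyone searching later: a quick way to hunt down where a leftover entry like this lives is to grep the usual cron locations for the script's name. The paths below are typical Linux/Unraid locations, not a definitive list; on Unraid it's also worth checking /boot/config for anything that re-adds the job at boot.

```shell
# Search common cron locations for any reference to the removed script
grep -r "ToParents.sh" /etc/cron* /var/spool/cron /boot/config 2>/dev/null
```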

  8. I manage an unraid server for a friend of mine, recently rebuilt from scratch on some new-to-him hardware. Everything's been fine for a few weeks, but overnight it had an unexpected reboot. I got a notification that the server started an unscheduled parity check at approximately 1:30am CST, and when I checked on it around 6am, it had only been up for something like 5.5 hours, so it definitely rebooted. Normally I'd just attribute it to a small power blip, as he doesn't run it on a battery backup (his whole house is on a backup generator in the event of a true power outage), but I saw several errors in the syslog for "missing csrf_token." I've never seen that error before and have no idea what it means. I've attached diagnostics and would appreciate any advice. Thanks!

    boughserver-diagnostics-20190813-1454.zip

  9. 16 minutes ago, emod said:

    How do I pull linux/resilio-sync image to my unraid?

    I installed the Community Applications plugin on my server. It will add an "Apps" tab to your main Unraid webpage. It's sort of like the iOS App Store or Google Play store: search/browse for the docker/plugin you want, and it installs automatically from there. You can download the plugin here: https://forums.unraid.net/topic/38582-plug-in-community-applications/

    That's how I install all my dockers; I don't know how to add repos manually.

     

  10. I can't remember off the top of my head how to clear logs; I think you can force an update on the docker and it'll wipe all the logs. To prevent it, you can add this to the "Extra Parameters" field when you're editing the docker with the advanced settings toggle turned on. It will limit your max log size to 50 megabytes:

     

    --log-opt max-size=50m
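If you want to confirm the option actually stuck after editing the container, docker inspect can print the logging configuration. The container name below is an example; substitute your own.

```shell
# Print the container's logging driver options; it should show the max-size you set
docker inspect --format '{{.HostConfig.LogConfig.Config}}' my-container
```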

     

  11. It's been a bit since I've installed the docker, but it looks from your screenshot like you're missing some configuration options. Here's what mine looks like. "Host Path 2" is the shares I'm syncing. In my case I passed through /mnt because I'm syncing disk-to-disk between two Unraid systems.

     

    EDIT: I see you're actually using the Resilio from the limetech repository, I'm using Linuxserver's repo. Not sure what settings to use or not use on the Limetech version.

     

    Capture.JPG

  12. Server has been running latest 6.7.0 since it was officially released, no problems at all. Earlier this morning my system rebooted itself, and I'm not 100% sure why. I was running a movie through Handbrake, which I'm guessing is the culprit, but not sure. Fix Common Problems suggested pulling diags and posting here because of the "Machine Check Events" message. It also said I should install mcelog, which I have done, but I don't know how to pull logs from it, or if I even need to. Diags attached, any advice is appreciated.

    saidin-diagnostics-20190514-1427.zip

  13. I've been using Resilio-sync successfully for quite some time now, but recently it seems to have gotten stuck "indexing" one of my read only shares, which in this case, is /mnt/disk1. Indexing ran constantly overnight, I woke up to over 44 million reads on disk1 at the main unraid management page. I know I can view the logs for resilio on the docker page, but how do I grab those out short of a copy/paste of the log window?
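A sketch of one way to grab them, assuming the standard docker CLI; the container name and destination path are examples from my setup.

```shell
# Redirect the container's full log (stdout and stderr) to a file on the array
docker logs resilio-sync > /mnt/user/appdata/resilio-sync.log 2>&1
```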

  14. Fix Common Problems notified me of some call traces that were found on my server; not sure what this means or what I need to do to fix it. I've attached diagnostics from my server for review. I'm also getting warnings about disk6: "offline uncorrectable", and the "current pending sector" count is 168. I'm assuming this means I have a failing disk; I've attached the SMART report for that disk as well.

    saidar-diagnostics-20171215-0952.zip

    saidar-smart-20171215-0956.zip

  15. I actually do this very thing myself. For a long time I had both my main and backup servers in the same house (literally a foot away from each other), so a simple rsync worked just fine. However, now that I've moved my backup server to an offsite location, I use a docker called Resilio. Resilio on the primary server indexes whatever folders I choose to pass through and copies them to my backup server's instance of Resilio, again to whatever folders I choose. It took a bit to set up, and the initial index takes a while (especially if you have tons of files, pictures for example), but now that it's set up it works really well.
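For anyone starting with the simpler same-house setup, the rsync half can be as small as a one-liner. The share name and backup hostname below are placeholders, and --delete makes the backup mirror deletions from the source, so use it with care.

```shell
# One-way mirror of a share to the backup box over SSH
# -a preserves permissions/times, -v is verbose, -h prints human-readable sizes
rsync -avh --delete /mnt/user/Media/ backup-server:/mnt/user/Media/
```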

  16. I upgraded two systems this morning. My second system was running a VM, Plex, and Resilio-sync. I failed to realize it was at about 98% RAM utilization, so when unraid attempted to extract the update after downloading, my system ran out of memory and crashed. I had to hard reboot. On the second try I stopped the dockers ahead of time, and all went well. Maybe an FYI for those with smaller amounts of RAM installed.

  17. I understand what you're saying with /sync, /Sync, or sync all being different paths. I'm not sure that is pertinent to this particular situation, though. When you open the webui, there's basically a button for "choose a folder to sync", and once you navigate to the internal /sync path, all your user shares are presented; you select Audio (for example), click OK, an indexer starts running, and you move on to the next one. I don't mean to derail this thread into some sort of resilio troubleshooting thread, there's already one of those. I can post in there for more details/troubleshooting help.

     

    Is there a command or set of commands I can use to determine the file size of a given docker? Like "du -sh /mnt/user/Audio" would give me the total size of the Audio user share; is there something like that I could use to find the total size of a docker? I figure if there is, I could get the total size of all dockers, wait an hour and run again, then compare results to see which (if any) dockers are growing.
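If docker's own CLI counts, something like this might do it (assuming the standard docker client; the --size flag reports each container's writable-layer usage):

```shell
# Per-container disk usage; the SIZE column is the writable layer
docker ps -a --size

# Snapshot now and again later, then diff to spot which container is growing
docker ps -a --size > /tmp/sizes-before.txt
sleep 3600
docker ps -a --size > /tmp/sizes-after.txt
diff /tmp/sizes-before.txt /tmp/sizes-after.txt
```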