uiuc_josh

Everything posted by uiuc_josh

  1. I'm seeing the same thing. I was traveling for a while, so I don't know exactly when this started. I've tried re-adding the directories, and Plex appears to initiate a directory scan, but when the server library is opened, it gives the 'something went wrong' page.
  2. I'm interested in preventing this, as I have a 2x mismatched SSD cache pool w/ btrfs that holds my Docker volume. Any tips for success for folks in this position? Edit: Disregard--the answer was right near the top of the 6.9.0 release notes. Josh
  3. Haha, thanks Squid. Now what excuse am I supposed to use to be on the computer on the weekend? Thanks again, J
  4. saarg, I think you were correct that this wasn't a calibre issue. I removed the container and had similar issues trying to reload it. I'm not 100% sure, but I think some IPs from the CDN that relays some of the calibre images popped onto a threat list, so just a few of the incoming streams were getting hit by my IDS. Thanks for the help! J
  5. OK--I've force-updated, but will give the manual remove/replace a try. Thanks!
  6. I'm having a similar issue where some dockers (particularly linuxserver calibre, for reasons unknown) haven't completed a docker pull successfully in months. My docker pulls never used to fail, but now it's a regular occurrence. I update every weekend. This started early this year, maybe Jan or Feb. I'm on 6.8.3. I usually get something like the following text, and even when the downloads all progress to 100%, they don't extract. This is very frustrating--I'm not sure what to try. I just pulled 4 updates and not one completed successfully. I'd appreciate any ideas! Josh
        IMAGE ID [1191746980]: Pulling from binhex/arch-jackett.
        IMAGE ID [ae8bad35e085]: Pulling fs layer. Downloading 100% of 263 B. Verifying Checksum. Download complete. Extracting. Pull complete. Pulling fs layer. Downloading 100% of 263 B. Download complete. Extracting. Pull complete.
        IMAGE ID [6bc503260a9b]: Pulling fs layer. Downloading 100% of 3 KB. Download complete. Extracting. Pull complete. Pulling fs layer. Downloading 100% of 3 KB. Download complete. Extracting. Pull complete.
        IMAGE ID [ff3d0022ca4c]: Pulling fs layer. Downloading 100% of 656 KB. Verifying Checksum. Download complete. Extracting. Pull complete. Pulling fs layer. Downloading 1% of 656 KB.
        IMAGE ID [85aa5e1255ae]: Pulling fs layer. Downloading 100% of 5 KB. Verifying Checksum. Download complete. Extracting. Pull complete. Pulling fs layer. Downloading 100% of 5 KB. Verifying Checksum. Download complete.
        IMAGE ID [8bacc57869e2]: Pulling fs layer. Downloading 100% of 200 MB. Verifying Checksum. Download complete. Extracting. Pulling fs layer. Downloading 100% of 200 MB. Verifying Checksum. Download complete.
        IMAGE ID [5d769186571e]: Pulling fs layer. Downloading 100% of 243 B. Verifying Checksum. Download complete.
        IMAGE ID [530a345a26e3]: Pulling fs layer. Downloading 100% of 2 KB. Verifying Checksum. Download complete.
        IMAGE ID [d38befd7d245]: Pulling fs layer. Downloading 100% of 268 B. Verifying Checksum. Download complete.
        IMAGE ID [fca4f06c64a3]: Pulling fs layer. Downloading 97% of 73 MB.
        TOTAL DATA PULLED: 274 MB
        Stopping container: binhex-jackett
        Successfully stopped container 'binhex-jackett'
        Removing container: binhex-jackett
        Successfully removed container 'binhex-jackett'
        Command: root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='binhex-jackett' --net='bridge' -e TZ="UTC" -e HOST_OS="Unraid" -e 'UMASK'='000' -e 'PUID'='99' -e 'PGID'='100' -p '9117:9117/tcp' -v '/mnt/user/Downloads/sickrage/torrent/':'/data':'rw' -v '/mnt/user/appdata/binhex-jackett':'/config':'rw' 'binhex/arch-jackett'
        dadd590554929486c3889695ab42dc916361ed88a6d76627b22f5c75c5cbe5d5
        The command finished successfully!
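     In case it helps anyone else debugging the same thing, the manual remove/replace route amounts to roughly this from the unRAID console (image/container names are from my setup; the prune step just clears half-downloaded layers):
        docker rm -f binhex-jackett       # drop the old container
        docker rmi binhex/arch-jackett    # remove the stale image so the next pull is clean
        docker image prune -f             # clear dangling, partially-pulled layers
        docker pull binhex/arch-jackett   # re-pull everything from Docker Hub
     and then re-creating the container from its template in the web UI.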
  7. Ok, so mine is stuck on 4.13 and 4.20.0 is current. I am on the latest Docker, so I'll try a different forum. The only reason I put it here is that this is the only docker I seem to consistently have this issue with (out of 8).
  8. saarg, Yes--thanks for pointing that out. Sadly, it will run through the process and restart the container, but it will still be on the old version and will be marked as requiring an update. I've seen this occasionally with other containers; however, with this container the downloads haven't successfully completed in several months, despite weekly attempts. unRAID shows the same indications you've pointed out whenever an image within an update doesn't completely download, but I've never had another docker fail consistently like this. So, I think something more than a single bad update/download is at issue here. J
  9. I really like this docker for my book library needs. That said, I've been stuck on 4.13 for several months, as all the docker update attempts within unRAID fail on this docker, with one or two components freezing below 100%. I have this problem intermittently with the linuxserver unifi controller docker, but no others. This (calibre) docker hasn't completed a pull in many months. I've checked my firewall/IDS logs and nothing is being blocked during the update process. There are always one or two files that do not completely download. Any ideas on this one? The update always looks something like the following:
        Pulling image: linuxserver/calibre:latest
        IMAGE ID [1436953874]: Pulling from linuxserver/calibre.
        IMAGE ID [6b59f4d42254]: Pulling fs layer. Downloading 97% of 25 MB.
        IMAGE ID [875de62a65e8]: Pulling fs layer. Downloading 100% of 275 B. Verifying Checksum. Download complete.
        IMAGE ID [7af6c5f56812]: Pulling fs layer. Downloading 100% of 13 MB. Verifying Checksum. Download complete.
        IMAGE ID [c7bd6e473c84]: Pulling fs layer. Downloading 100% of 4 KB. Verifying Checksum. Download complete.
        IMAGE ID [862cf5bf9c91]: Pulling fs layer. Downloading 100% of 94 MB. Verifying Checksum. Download complete.
        IMAGE ID [9ee8bb076a0e]: Pulling fs layer. Downloading 11% of 4 KB.
        IMAGE ID [7ea0e6f90ec9]: Pulling fs layer. Downloading 100% of 616 KB. Verifying Checksum. Download complete.
        IMAGE ID [cdc6542d74b6]: Pulling fs layer. Downloading 100% of 103 MB. Verifying Checksum. Download complete.
        IMAGE ID [8d9f46996599]: Pulling fs layer. Downloading 100% of 2 KB. Verifying Checksum. Download complete.
        IMAGE ID [ecc00f389bd3]: Pulling fs layer. Downloading 100% of 185 MB. Verifying Checksum. Download complete.
        IMAGE ID [85b4fae8406b]: Pulling fs layer. Downloading 100% of 653 B. Verifying Checksum. Download complete.
        TOTAL DATA PULLED: 421 MB
        Stopping container: calibre
        Successfully stopped container 'calibre'
        Removing container: calibre
        Successfully removed container 'calibre'
        Command: root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='calibre' --net='bridge' -e TZ="UTC" -e HOST_OS="Unraid" -e 'GUAC_USER'='XXXXX' -e 'GUAC_PASS'='xxxxxxxxxxxxxxx' -e 'PUID'='99' -e 'PGID'='100' -p '8097:8080/tcp' -p '8098:8081/tcp' -v '/mnt/user/Kindle_Library/':'/books':'rw' -v '/mnt/user/Kindle_Library/0. Import/':'/import':'rw' -v '/mnt/user/appdata/calibre':'/config':'rw' 'linuxserver/calibre'
        9xxxxxxxxxxxxxxxxxxx
        The command finished successfully!
  10. Hello, I have a weird docker-ism where my docker always shows that I'm in need of an update. The docker install is on 5.6.42. I can pull the update, but on the next check it will always say I need to pull a new version. Weird. It's not a big deal, but I've attached my docker settings and the "header" of the docker summary page. Thanks guys--if you have a free minute or know what causes this, I'd appreciate the insight! Josh PS: I have a cert issue similar to the one dorgan's having, except I can manually override the Chrome objections and get to the site. I'm not sure how to get a properly signed cert for the IP address this docker is on, like I have for the main unRAID server site.
  11. Hello all! New to the world (and frustration) of Unifi. I'm experimenting with upgrading off of LTS, and attempted ijuarez's method posted in March. Every time I attempt to restore my backup file from 5.6.42 onto either 5.8 or 5.9, I get an error saying that the uploaded file is "newer" than the current controller version. Any known workarounds? Thanks! J
  12. I'm not seeing any corresponding events logged there. I'm reconfiguring the log rotation in the IPMI to make sure I'm not missing anything from a full log file. After rebooting the server, I've had three recurrences of the error in two weeks. Not sure what I'm dealing with here. Josh
  13. All, I'm getting the following (flagged by FCP) in my syslog:
        May 6 22:37:21 unServer kernel: mce: [Hardware Error]: Machine check events logged
        May 6 22:37:21 unServer kernel: EDAC sbridge MC0: HANDLING MCE MEMORY ERROR
        May 6 22:37:21 unServer kernel: EDAC sbridge MC0: CPU 0: Machine Check Event: 0 Bank 9: 8c000041000800c0
        May 6 22:37:21 unServer kernel: EDAC sbridge MC0: TSC 397c3a0867fb60
        May 6 22:37:21 unServer kernel: EDAC sbridge MC0: ADDR 1d0543000
        May 6 22:37:21 unServer kernel: EDAC sbridge MC0: MISC 90000004000428c
        May 6 22:37:21 unServer kernel: EDAC sbridge MC0: PROCESSOR 0:50662 TIME 1525646241 SOCKET 0 APIC 0
        May 6 22:37:21 unServer kernel: EDAC MC0: 1 CE memory scrubbing error on CPU_SrcID#0_Ha#0_Chan#0_DIMM#0 (channel:0 slot:0 page:0x1d0543 offset:0x0 grain:32 syndrome:0x0 - area:DRAM err_code:0008:00c0 socket:0 ha:0 channel_mask:1 rank:0)
     They have been occurring about every 1-3 days for about the last month. Is this a bad ECC DIMM? I haven't rebooted since this started, so I'll start there. Any ideas how to read this error? I'm running a Xeon D-1520 setup on an AsRock D1520D4I on unRAID 6.4.1. It's been in service about 20 months. Thanks! Josh
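     For anyone following along, the kernel exposes running error counters that should show whether this keeps climbing (standard EDAC sysfs paths; mc0 and the dimm numbering mirror the MC0/DIMM#0 in the log above):
        cat /sys/devices/system/edac/mc/mc0/ce_count    # total correctable errors
        cat /sys/devices/system/edac/mc/mc0/ue_count    # total uncorrectable errors
        grep . /sys/devices/system/edac/mc/mc0/dimm*/dimm_ce_count   # per-DIMM breakdown
     A single DIMM's counter climbing while the others stay at zero would point at that stick.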
  14. Is anyone using this with Jackett? I cannot for the life of me get it to work. I have the latest binhex/arch-medusa running (rebuilt from my Sickrage media, but not the database). This worked fine, and things were up and running. My downloader is deluge-VPN, which I've always gotten to work via the WebUI. Deluge also watches a folder, /downloads/jackett/, for torrents. It seems that Medusa is passing the request just fine to Jackett, but can't deal with what it gets back. Jackett has a "black hole" setting, but it doesn't actually seem to put anything in there on normal requests, just manual downloads that I tell it to put there. Apologies in advance for the jpg spam, but I'd really like to figure out why Jackett isn't pushing the torrents or magnets back to Medusa in a way that can be passed to deluge. Thanks to anyone with enough insomnia to look at this one. As I look over the stuff, I'm wondering if the issue has something to do with Medusa not handling the magnet links well. That reminds me of some SR issues I think I had before, but I can't remember the cause/solution. Edit: Replying to myself here--it was a Medusa-side issue that they've fixed by adding explicit Jackett providers. Sadly you can't treat all of Jackett as one provider, and have to configure each separately, but it's working with Jackett. Thanks! josh
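     For anyone else wiring this up: the per-indexer configuration boils down to pointing Medusa at each indexer's Torznab feed from Jackett, which looks something like this (host, port, and indexer id here are placeholders--grab the real "Copy Torznab Feed" link and API key from Jackett's web UI):
        http://192.168.1.10:9117/api/v2.0/indexers/<indexer-id>/results/torznab/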
  15. Thanks guys, I ended up using rclone successfully with this guide here: http://lime-technology.com/forum/index.php?topic=46663 Many thanks to stignz--it's a little fiddly at first, but works like a champ!
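     The core of it, once `rclone config` has a B2 remote defined, is just a copy job along these lines (remote and bucket names here are placeholders; mine runs from a script that cron kicks off):
        rclone copy /mnt/user/Documents b2:my-backup-bucket/Documents --transfers 4 -v
     Swap in `rclone sync` if you want deletions mirrored too--but be careful, sync removes remote files that are gone locally.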
  16. Just had the same issue as dheg: the Calibre 2.67 update breaks the VNC connection somehow (though I also have a custom library set up). Trying EDGE=0. Update 1: Worked just like dheg's. Probably a critical fault with the 2.67 release itself, vice the VNC setup, now that I think about it. Calibre-ErrLog.txt
  17. trurl, Thanks for that link. I guess my answer was more the academic one of best practices--I got some further info from this link, http://unix.stackexchange.com/questions/196009/location-of-the-crontab-file , if any other newbies are curious. In case anyone other than me is confused, it seems (again, grains of salt here, friends) that the unRAID folks have a plugin cron "injector" running that injects entries from /boot/config/plugins/dynamix (files with .cron endings) into /etc/cron.d/root, cron.hourly, cron.daily, cron.weekly, and cron.monthly. I'm guessing this happens at boot time (rough sketch below). These injected cron jobs (specific to unRAID operations) can be controlled from the GUI under Settings->Scheduler, and the flexibility of this is expanded by the schedules and SSD trim plugins (and those are only the ones I'm using). When I commit a change with the crontab command, it changes the file at /var/spool/cron/crontabs/root, which already has hooks to the periodic cron jobs in /etc/ in it; I'm guessing those get put there by the "injector" code whenever I reboot or make changes. It's pretty academic at this point, but I'm guessing that makes rclone scheduling best done via the recurring crontab command/scripts at the top, so I guess this was a long circular road. At least I learned something! Thanks again, Josh
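     My mental model of the injector, in shell terms (a guess at the mechanics, not the actual unRAID code--paths are the ones above):
        # at boot, merge every plugin-supplied .cron fragment into root's cron setup
        cat /boot/config/plugins/dynamix/*.cron >> /etc/cron.d/root
        # the periodic hooks in /var/spool/cron/crontabs/root then run alongside
        # whatever you've committed yourself with `crontab`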
  18. Hmm. So not all crontabs are created equally. Good point. I tried this and it worked. I guess I'll just need two entries to do the job. Thanks Squid!! For shitzengiggles, what's the difference between 'committing' a change via "crontab [file]" and editing /etc/cron.d/root or cron.[interval]? Is there a better/more secure way of doing this? Josh
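     For anyone landing here later, the two-entry version Squid suggested would look something like this (my schedule; numeric day-of-week, 0=Sun through 6=Sat, since the comma-separated SAT,WED name list appears to be what crond choked on):
        0 2 * * 3 /boot/rclone/scripts/rcloneCopy.sh > /dev/null 2>&1
        0 2 * * 6 /boot/rclone/scripts/rcloneCopy.sh > /dev/null 2>&1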
  19. Hi folks! Working this same project to get B2 going--thanks stignz! I've got the command-line rclone and script working, but cron is making me scratch my head. I followed your input command with
        crontab -l | { cat; echo "0 0 * * SUN,WED /boot/rclone/scripts/rcloneStop.sh > /dev/null 2>&1"; } | crontab -
     but then realized I wanted it to run at 2 am instead... so I dumped the crontab with
        crontab -l >> tmp.cron
     and edited it to
        # If you don't want the output of a cron job mailed to you, you have to direct
        # any output to /dev/null. We'll do this here since these jobs should run
        # properly on a newly installed system. If a script fails, run-parts will
        # mail a notice to root.
        #
        # Run the hourly, daily, weekly, and monthly cron jobs.
        # Jobs that need different timing may be entered into the crontab as before,
        # but most really don't need greater granularity than this. If the exact
        # times of the hourly, daily, weekly, and monthly cron jobs do not suit your
        # needs, feel free to adjust them.
        #
        # Run hourly cron jobs at 47 minutes after the hour:
        47 * * * * /usr/bin/run-parts /etc/cron.hourly 1> /dev/null
        #
        # Run daily cron jobs at 4:40 every day:
        40 4 * * * /usr/bin/run-parts /etc/cron.daily 1> /dev/null
        #
        # Run weekly cron jobs at 4:30 on the first day of the week:
        30 4 * * 0 /usr/bin/run-parts /etc/cron.weekly 1> /dev/null
        #
        # Run monthly cron jobs at 4:20 on the first day of the month:
        20 4 1 * * /usr/bin/run-parts /etc/cron.monthly 1> /dev/null
        #
        # Runs twice a week at 2:00 to execute Backblaze b2 rclone job:
        55 17 * * SAT,WED /boot/rclone/scripts/rcloneCopy.sh > /dev/null 2>&1
     and then ran crontab on that file (I put the 17:55 Saturday time in there to check whether it ran). Unfortunately, I got this (a couple times) in the syslog, with no output in the rclone logs:
        unServer crond[2440]: failed parsing crontab for user root: SAT,WED /boot/rclone/scripts/rcloneCopy.sh > /dev/null 2>&1
     Any idea where I boinked this thing? Thanks guys!! Josh
  20. I'm pretty new to the unRAID game, having just gotten my basic unRAID going with a few dockers for content management. The new Backblaze B2 service looks like a reasonable way to start off-siting my more critical data, but I can't find any Dockers or plugins that are meant for this purpose. Anyone know a good way to get started down this path? Thanks! Josh
  21. Thanks! Just out of curiosity, what's the difference between /mnt/disk1/appdata and /mnt/user/appdata? Does it matter which I move to /mnt/cache/appdata? I'm still not sure what unRAID does from a file-structure standpoint with user shares. What's /mnt/user0? Thanks again!! Josh
  22. I hope nobody minds if I hop into this thread with a newbie question... I've been messing around with getting PMS, sickrage, and delugevpn running, and also got an RDP-calibre docker going with my book library. After tinkering for a month or so, I realized that my parity and disk1 never spin down. I've set the /config directories to /mnt/user/appdata, which, sure enough, I had set to Use cache disk=no, vice only. If I simply change this option back to "only," will it move the data while keeping the /mnt/user/appdata path? I think I initially didn't want to burn up my cache disk space with /config data, but now I see that it's keeping the array from spinning down. Anybody have a suggestion for the best way to proceed? (Rough sketch of the manual move I'm considering below.) Thanks! Josh
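     If flipping the setting alone doesn't relocate anything (my assumption--I gather the mover skips shares set to "only"), the manual move would be roughly this, with Docker stopped first so nothing holds the files open (paths are mine):
        rsync -avh /mnt/disk1/appdata/ /mnt/cache/appdata/
        # verify the copy, then remove the array-side copy so the share resolves to the cache
        rm -r /mnt/disk1/appdata
     The /mnt/user/appdata path should keep working either way, since user shares overlay whichever disks hold the files.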
  23. jonathonm, John_M, and garycase, First, may I just say "Wow!" A reputation for community responsiveness is one of the things that drew me toward unRAID. Validated. John_M, you got what I'd missed here--I misunderstood the high-water mark, and your link clears that up. garycase and jonathonm, thanks for the advice; I will hold the rest of the data dump until parity completes, then keep dumping in the TBs under high-water. Thanks for the help! Josh
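     For any other newbies, here's the worked example that finally made high-water click for me (my understanding now, so grains of salt): with three empty 4 TB data disks, the first water line is half the largest disk, i.e. 2 TB. Writes go to disk1 until it's down to 2 TB free, then to disk2 until it's at 2 TB, then disk3; after that the line halves to 1 TB and the rotation repeats. So ~320 GB landing entirely on disk1 is exactly what high-water is supposed to do.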
  24. Good morning! I'm new at this, having just built my first unRAID server. I've precleared the three 4 TB HGST drives and added them to the array. To migrate my old NAS data over, I SSH'd into my unRAID box and, using screen + mc, am downloading the directories onto my unRAID. I've set up my directories to "automatically split as required" for several collections of unconnected items (documents, etc.), and to auto-split the top two and top three directories for movies and TV shows, respectively. Unfortunately, everything is going only onto disk1, after about 320 GB of copying. So I thought I'd stop before I get too far into this process. Caveats--I'm still on the free license, as I'd like to see parity and balancing work for me before I buy, so I don't have a cache disk set up. My first parity pass is still running while I'm adding the initial data. Also, I've set the cache setting on the user shares to "Yes," since I'd like to add one after everything is working and I buy the full license. My understanding of the setting's help dialogue is that, with no cache available, it will just move the data to the array (which it is doing, but only to disk1). Any ideas why it might be doing this? My global share settings are Enable disk shares: Auto, User shares: Yes, include: All, exclude: None. Thanks! J unserver-syslog-20160705-0803.zip
  25. Mattail, I'm looking at going a similar direction as you are--how's the build working out? Any issues or changes you would make? josh