
Squid

Community Developer
Everything posted by Squid

  1. Here's what's going on (basically this is a rehash of the one docker FAQ link I already gave):

     Sonarr tells SAB to download the file. This works fine.
     SAB downloads it fine, then tells Sonarr that the file is stored in /data/tvCategoryName/filename.
     Sonarr looks in /data and says WTF, there's no file there. (This is logged within Sonarr's logs.)

     You've got to have the mappings match exactly. IE: SAB downloads into /data, but /data doesn't exist within Sonarr. On the Sonarr container, add in a /data container path mapped to /mnt/user/downloads/. BTW, the same thing happens with CouchPotato / Radarr.
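A hedged sketch of why the mappings have to match: each container translates container paths through its own -v HOST:CONTAINER mapping, so both mappings must point at the same host location. All paths below are illustrative assumptions, not from the post.

```shell
# host_path: translate a container path back to its host path,
# given a docker-style HOST:CONTAINER volume mapping.
host_path() {
  mapping="$1"; path="$2"
  host="${mapping%%:*}"; container="${mapping##*:}"
  printf '%s%s\n' "$host" "${path#"$container"}"
}

# Both containers map the same host dir to /data (hypothetical paths):
sab_map="/mnt/user/downloads:/data"
sonarr_map="/mnt/user/downloads:/data"   # must match sab's, or the file "vanishes"

# SAB reports the finished download at a container path:
reported="/data/tvCategoryName/filename"

host_path "$sab_map" "$reported"      # /mnt/user/downloads/tvCategoryName/filename
host_path "$sonarr_map" "$reported"   # same host file, so Sonarr can see it
```

If Sonarr had no /data mapping (or mapped it elsewhere), the second translation would point at a path that doesn't exist in its namespace, which is exactly the failure logged above.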
  2. Edit the containers. Make a change (say, set network to host), then change it back (to bridge) and hit Apply. Then post the command that appears at the end.
  3. Presumably Sonarr already has the TV show in it with its location as /TVShows/Sleepy Hollow/Season 4. If your /download mapping assigned to both SAB and Sonarr matches exactly, then everything works just fine. Post your docker run commands for both SAB and Sonarr.
  4. Geared towards using nzbGet, but what it's talking about is exactly the same for SABnzbd.
  5. Old bug from a while back. Just click in the box where you're seeing it, and delete the contents. The disk won't actually be read-only; the comment just got left in there a while ago.
  6. It's seemingly an issue, but it really isn't. If the flash dies, make a new trial USB, boot it up, and assign the disks. At that point, all of your shares are accessible again. Copy the backup from the array to the new flash, reboot, and you're back in business like nothing happened. Or, as @wgstarks said, set the backup destination for USB to something else accessible (a UD-assigned disk, a UD-mounted SMB share on a different computer, etc.)
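The restore step boils down to a straight copy. A sketch under assumed paths (on the server the source would be your flash backup share and the destination the replacement USB at /boot); the demo below uses scratch directories so it's safe to run anywhere:

```shell
# On the real server this would look like:
#   cp -a /mnt/user/backups/flash/. /boot/     (backup path is an assumption)
# Demo with scratch stand-ins:
backup=$(mktemp -d)    # stands in for the flash backup on the array
usb=$(mktemp -d)       # stands in for the replacement USB mounted at /boot
mkdir -p "$backup/config"
echo 'demo' > "$backup/config/ident.cfg"

cp -a "$backup"/. "$usb"/   # -a preserves the directory layout and attributes
ls "$usb/config"            # ident.cfg
```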
  7. I had thought about adding in a utility to delete them all when the bug was discovered, but decided against it because, for all I know, those files named Squidbait* might actually belong on your array for your own purposes, and the last thing I wanted to do was inadvertently delete your own files. Later versions tell you about creation errors (if you have the bait files set to be recreated), and clicking the link brings up a list of where the stragglers left over from prior versions are... A straggler is a file that, as far as RP is concerned, belongs on the array because it is a pre-existing file named the same. IE: if you do have a file of your own named Squidbait*, then RP will not overwrite it, nor will it monitor it for changes.
  8. The local copy would definitely work on a restore. YMMV on the Google version.
  9. TBQH, on my systems I don't bother using bait files any more; I strictly use the bait shares... The bait files, while offering more protection, were just too prone to being inadvertently deleted / changed in my daily usage of my servers.
  10. Those would have been created on a previous version of the plugin that had a bug in it, and the system didn't keep track of them. Easiest solution is to navigate via the network to those shares, then do a search for squidbait, highlight them all, and delete...
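The same cleanup can be done from the console with find. A sketch demoed against a scratch directory (on the server you'd point it at the share paths under /mnt/user instead; the filenames here are made up):

```shell
root=$(mktemp -d)                      # stand-in for a user share
touch "$root/Squidbait 1.jpg" "$root/keepme.mkv"

find "$root" -name 'Squidbait*' -print    # review the hits first...
find "$root" -name 'Squidbait*' -delete   # ...then delete them
ls "$root"                                # keepme.mkv
```

Printing before deleting matters for the reason given in post 7: a file you named Squidbait* yourself would match too.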
  11. It's basically exactly what bonienl stated:

      #description=Add to hosts file
      echo "192.168.111.10 hostname.mydomain.com" >> /etc/hosts

      Save that as a file named script within /config/plugins/user.scripts/scripts/someFolderName on the flash drive, and set it to run at array start. See
  12. Post your diagnostics when you're having trouble like that, in particular when you wind up with an orphan after an attempted update.
  13. I replied in your other thread. You are currently having hundreds of hack attempts made against you. It doesn't matter how strong your root password is; one of the attempts will eventually guess it and then you're pooched...
  14. The problem with appdata backups, either directly through an rclone script or with this plugin to a folder mounted with the rclone plugin, is symlinks / hardlinks. Most (if not all) docker apps make extensive use of them, and they *may* be incompatible. While CA Appdata Backup does support destinations of folders mounted with the rclone plugin, if it returns errors (and it probably will) then the backup may or may not be of any use to you in the event of a cache drive failure. I only recommend (and support) backups with a destination on the local array (or on an Unassigned Device mounted via the UD plugin). See here for another user's attempt:
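One way to gauge the exposure before trusting such a backup is to count the links under appdata. A sketch demoed on a scratch directory (on the server you'd point find at your appdata path, e.g. /mnt/cache/appdata — an assumption about your layout):

```shell
d=$(mktemp -d)                   # stand-in for an appdata directory
echo data > "$d/original"
ln "$d/original" "$d/hardlink"   # second name for the same inode
ln -s original "$d/symlink"      # symbolic link

find "$d" -type l | wc -l             # symlinks found: 1
find "$d" -type f -links +1 | wc -l   # hard-linked regular files: 2
```

Non-zero counts mean the app relies on exactly the features a cloud-mounted destination may silently mangle.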
  15. The only problem with appending the entries within the go file is that you can't guarantee when in the boot process unRaid creates the file. The OP would be better off setting up a user script (set to run at array start) with the appropriate echo command.
  16. One caveat though: if you're running a Plex docker, then you have to stop / start the container for the local copy of hosts that it uses to re-grab the one that unRaid uses. IE: you're going to have to make a separate hosts file on the flash drive, then edit /boot/config/go and add the line cp /boot/hosts /etc/hosts before the line that has emhttp in it, so that it takes effect with every boot.
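The go-file edit can be sketched like this; the demo works on a scratch copy so it's safe to run anywhere (the real file is /boot/config/go, and /boot/hosts is the assumed name for your custom hosts file):

```shell
go=$(mktemp)    # scratch stand-in for /boot/config/go
printf '%s\n' '#!/bin/bash' '/usr/local/sbin/emhttp &' > "$go"

# Insert the copy command before the emhttp line so it runs on every boot
# (GNU sed's "i" command, as shipped with unRaid's Slackware base):
sed -i '/emhttp/i cp /boot/hosts /etc/hosts' "$go"
cat "$go"
```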
  17. Well, that blew away the only theory I had... Here's what's going on: the rsync started and was carrying on, no problems. Then, right in the middle of it, it just decided to quit, probably because something somewhere killed off CA's backup script. Except all of this happened without any errors or notifications or anything being logged anywhere. There have been intermittent reports of mover crashing (it also uses rsync), but AFAIK no one has ever adequately explained why or how. rsync is a built-in Linux command, not an add-on at all. But the messed-up thing is that if you run it manually, it runs to completion. My suggestion on how to fix it you're probably not going to like: Easter is coming... Take the server with you on Easter Sunday to your local church and get it blessed along with all of your food for the dinner... It's possessed. IE: I have no clue what's going on, nor is there anything I can do about it. The cause is not within CA Appdata Backup. EDIT: Not quite true. I have a feeling that php is getting Bus Errors causing complete abnormal exits (may be a bug in php7) (does anything appear on the locally attached monitor?), but either way it's way out of my purview.
  18. Afterthought: No matter what you do, you should always set the polling time to be significantly less than the spindown time for the disks (ie: my default of 5 minutes vs the default spindown of 15 minutes). This will let you take advantage of when drives do get spun down and avoid spinning them back up again. Keep in mind that even with turbo mode enabled, the disks may not spin down concurrently due to reads taking place. Another thing to keep in mind with relation to all of this is how you set your spin-up groups, should you utilize them.
  19. All your points are valid. unRaid's UI isn't particularly reliable at notifying when drives are up or down, hence the need for separate polling. On my systems at home, I have it set to every 30 seconds.

      Absolutely correct that if constant writes happen to the array, then at some point turbo mode will kick in, and it will not let up until the writes cease long enough for the drives to spin down. You can't manually spin down a drive in order to cancel turbo mode in the middle of writes, because unRaid will wind up just spinning it back up again.

      In my use case, the only setting for number of disks spun down is zero. If they are all up, then I might as well enable turbo. But writes to my system are all handled automatically, and the worst-case scenario is 10-15 gigs written every hour or so in a single batch. Net result is that pretty much any time I look at my system to see how it is, all or most drives are indeed spun down.

      But if you have, say, VMs or Docker applications and no cache drive that they are stored on, then it's a given that once turbo kicks in, it will stay kicked in due to the constant writes by the VM / Docker to the appdata. Cache-enabled shares won't affect this. (I don't have any cache-enabled shares.)

      Even if LT should implement a true Auto mode at the md driver level, every single one of your points is still valid. There is no perfect world, unfortunately. But the plugin does fill a hole in the unRaid eco-system... YMMV
  20. I checked this out yesterday. The template is actually set for a container port of 9022. And to make matters worse, because of how the template is created, you cannot simply change the container port; you actually have to delete the port mapping and create another one.
  21. The flash is only used to boot and store settings. unRaid runs completely in RAM and doesn't access the flash unless you change a setting. FYI, most use cases need around 500MB of actual storage space on the flash. You make another boot flash and copy your .key file (you do have a backup) from the original flash to the replacement. When booted up, the UI will transfer the registration over to the new flash.