
Squid

Community Developer

Everything posted by Squid

  1. Not particularly. Binhex uses /data, and he expects you to tell Sab to download to /data/whatever. LSIO uses /downloads, and they expect you to tell Sab to download to /downloads/whatever.

     There's no real reason under either to specify a second mapping for incompletes and have Sab download to incomplete and then move it to another mapping for completes. You can just use a sub-folder from the main downloads mapping. It's actually a hell of a lot faster to specify both complete and incomplete as subfolders within the application from a single mapping. That way the move from incomplete to complete is instantaneous (a simple rename) instead of a copy/delete operation.

     By and large, for most applications it doesn't make a single bit of difference what you name the container mapping (so long as it matches between the apps that communicate with each other). /config is the exception and shouldn't be changed (which is why it's hidden).

     Binhex Sab can communicate with LSIO Sonarr no problem, so long as the mapping matches between the two. It doesn't matter if you decide to use /data or /downloads or /StuffImHidingFromTheMPAA - it just has to match between the two, and within the app's settings you tell it to put the downloads in the container mapping you chose.
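     A minimal sketch of the single-mapping approach; the image names and host path are illustrative assumptions, not the actual template values (Unraid normally sets these mappings through the docker template rather than the command line):

         # One host path mapped to the same container path in both apps
         docker run -d --name sabnzbd \
           -v /mnt/user/downloads:/data \
           binhex/arch-sabnzbd

         docker run -d --name sonarr \
           -v /mnt/user/downloads:/data \
           binhex/arch-sonarr

         # Inside Sab's own settings, point the temporary folder at /data/incomplete
         # and the completed folder at /data/complete. Both live on the same
         # mapping, so the move from incomplete to complete is just a rename.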
  2. Yeah, I've never been able to do it. Always figured it was "one of those things", and simply entered the appropriate IP address, then manually entered the share name and the appropriate credentials.
  3. It doesn't. It only sends the command to switch when the number-of-drives threshold is crossed in one direction or the other. I'll look at it again, but unRaid does not log any spin-ups caused by drive access, only spin-downs due to inactivity.
  4. This is what I see: you've got it set up so that one data disk is allowed to be spun down with turbo mode still enabled, and everything looks like it works fine. When 2 or more data disks are spun down, turbo mode gets disabled.

     The only thing I'm seeing is that when you manually spin down all drives, a second later I'm not picking up most of the drives as even being installed. This is probably because on every scan the first thing I do is check which drives are available via an unRaid variable. During a mass spin-down that variable is probably not complete in its reporting, so it's possible that if I only see, say, 2 drives being available at that time, and only one of them is spun down, I will switch to turbo mode and spin everything back up when the next write happens. But on the next scan, if no write happened, the drives are still spun down and I would wind up switching back to normal.

     IE: I don't think anything is wrong here, but your log does highlight a flaw where I can't tell 100% what's going on if unRaid is manipulating the spin-downs at the same time that I'm checking. On the next scan everything recovers itself and carries on. Ultimately something like this plugin needs to be incorporated at the driver level by Limetech themselves (and make the turbo mode setting a true "auto").

     Don't forget to disable debug mode in the plugin. Your syslog will fill up very quickly with it enabled.
  5. A cron schedule should only have 5 fields; you've got 7. http://corntab.com/ You want 0 2 * * 1 to run Mondays @ 2am. I'll look at the log later in the day.
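     For reference, the 5 fields line up like this (annotation added here for clarity):

         # minute  hour  day-of-month  month  day-of-week
           0       2     *             *      1
         # -> runs at 02:00 every Monday (day-of-week 1 = Monday)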
  6. You would have to enable debug mode in the plugin, and then see what's happening via the syslog
  7. If unRaid is able to determine that the /config mapping is indeed a /config, then it is supposed to hide it automatically. There are, however, template options to force it to display.
  8. /config only shows up under Show Advanced Settings (or Show More Settings if running 6.4)
  9. This is where we stand: rsync vs rm is not the solution. Managed to replicate this after sitting there doing 5 hours of full backups, renaming all of the sets, and then watching them delete (while watching a syslog tail / htop / iotop - it's been a fun Saturday). The issue is one of the following:

     • XFS et al update the journal constantly as metadata changes. During a deletion there are many, many updates of metadata per file (more than simply creating the file). This constant updating of the journal / log bogs the system right down and eventually it basically just halts. I'm sure that if you sit there long enough it may recover eventually. There are many, many reports of this via Google, along with some workarounds, none of which are particularly applicable to unRaid. Completely out of my control.

     • Running out of memory. During the deletion, memory usage continually climbs in proportion to the number of files. At one point on my system I was running ~90% in use. Completely out of my control.

     I have a plan that will still allow dated backups and not be a problem to delete (namely switching over to a zip or tar archive), but it's definitely not going to happen in the immediate future, and it may also come with its own caveats (performance in creating the archive?).

     In the meantime, an update to the plugin is available. Beyond a simple fix for a display aberration introduced by 6.4.0-rc10b, the change is a red note on the Use Dated Backups option advising that you may have problems with this. If you're stuck with old dated sets, your best option would be to go through with Krusader or something and gradually delete the subfolders from your Plex backup (the main ones to separate from each other would be ...../TV Shows and ../Movies, and whatever ../Music is called).
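     A rough sketch of the archive idea mentioned above; the archive name, paths, and compression choice are assumptions for illustration, not what the plugin will actually do:

         # One tar archive per dated backup instead of millions of loose files
         tar -czf /mnt/user/Backups/plex-appdata-2017-10-28.tar.gz -C /mnt/cache/appdata plex

         # Removing an old dated set then touches a single file, instead of
         # updating XFS metadata once per file
         rm /mnt/user/Backups/plex-appdata-2017-10-28.tar.gz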
  10. No they're not. Your host path for /downloads does not match between SABnzbd and Sonarr. Set it to /mnt/cache/appdata/Downloads in both apps' templates. https://forums.lime-technology.com/topic/57181-real-docker-faq/?page=2#comment-566086
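      In other words (the mismatched host path below is only an example), the problem and the fix look like this:

          # Mismatch:
          #   SABnzbd : /downloads -> /mnt/user/Downloads
          #   Sonarr  : /downloads -> /mnt/cache/appdata/Downloads
          # SABnzbd reports the finished download at a path Sonarr can't see, so imports fail.

          # Fix - the same host path behind /downloads in both templates:
          #   SABnzbd : /downloads -> /mnt/cache/appdata/Downloads
          #   Sonarr  : /downloads -> /mnt/cache/appdata/Downloads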
  11. I'm testing the update right now. It's still going to be slow, as it's an inherent problem with massive deletions on XFS (and Linux does not support actually deleting a directory and all of its contents in one shot), but so long as the system doesn't have a heart attack then we should be good.
  12. I have a problem with UD running on 6.4.0-rc10b. It will not mount any of my SMB mounts that are hosted on my secondary unRaid server (6.3.5):

      Oct 28 08:40:29 Mount SMB/NFS command: mount -t cifs -o rw,nounix,iocharset=utf8,_netdev,file_mode=0777,dir_mode=0777,username=andrew,password=******* '//SERVER_B/Movies' '/mnt/disks/SERVER_B_Movies'
      Oct 28 08:40:29 Mount of '//SERVER_B/Movies' failed. Error message: mount error(95): Operation not supported Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

      I reverted back to 6.4.0-rc9f for testing, and there are zero problems mounting those shares.
  13. Guys, I've been researching, and the problem is a combination of XFS and the rm command (and how it works), combined with the fact that XFS is dog slow at deleting an insane number of files. (Also, I am still unable to actually replicate the problem... probably because my Plex appdata only contains ~500,000 files instead of millions.)

      This is one suggestion I have to get rid of the folder. Note that this is definitely in the danger zone if you mistype the path. I'm going to test an update to the plugin using this system instead, so you might want to hold off until then:

          mkdir /tmp/empty
          rsync -avXH --delete /tmp/empty/ /mnt/user/Backups

      Substitute /mnt/user/Backups with your appropriate appdata backup share. Failure to properly escape any spaces, etc. could potentially result in data loss though. You've been warned. Apparently rsync'ing an empty folder and having it do the deleting is far more efficient than the Linux rm command I've been using.
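      If the backup share path contains spaces, quoting it is the simplest way to avoid the escaping problem mentioned above (the share name below is just an example):

          mkdir /tmp/empty
          rsync -avXH --delete /tmp/empty/ "/mnt/user/Backups/CA Backup"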
  14. I've been busy

      Fixed: Templates created by authors running unRaid 6.4 would not populate the Network type properly when adding the application
      Enhanced: Ability to blacklist a particular application from a specific author instead of the application as a whole *
      Removed: CA Modules from the section. All applications are now treated equally, and ones created by myself will no longer appear separately
      Added: Ability to delete any private docker application from the list of available apps **
      Enhanced: Many CSS and coding improvements, with a net result of the CA application being roughly 30% of the previous release's size
      Changed: Access to CA's settings is now done via the Apps Tab (Settings link) instead of via the Settings Tab
      Fixed: Under certain circumstances, some Stats could be out to lunch
      Changed: On a brand new install of CA, no additional plugins are installed (previously Appdata Backup/Restore, Auto Update, and Cleanup Appdata would be installed). These plugins can still be installed via searching in CA or selecting the Plugins category.

      * I've noticed a couple of applications within the lists that are actually 100% duplicates of each other (ie: multiple templates referring to the same identical dockerHub repositories). This included mongo-db and Gitlab-CE. I've removed / blacklisted the duplicated template based upon some reasonable (albeit subjective) criteria, including who had theirs in CA first and which author is more active / more likely to actually support the application. This step will remove confusion for the user on which of those above-mentioned apps to install when the duplicated template adds no value to the unRaid community. Note that things like Binhex-Plex vs LinuxServer Plex et al are NOT duplicated templates and are unaffected by this change.

      ** If you have dockerHub searches enabled, then you've probably got a growing list of applications (appearing under the Uncategorized section, or whenever you search) that you may (or may not) need any more after adding them, and which are only cluttering up the displays. Previously you could always have deleted those templates from the CA folder on the flash drive. Now you can delete those unused / unneeded private applications directly via CA's GUI. On any display of available applications, a private app/template will now have a red X allowing you to delete the template to clean up your system.
  15. The reason is a fatal error in the template, as described in Apps - statistics - template errors
  16. Something like /moviesA mapped to /mnt/user/BluRay and /moviesB mapped to /mnt/user/DVD. Then add movies within Radarr from both /moviesA and /moviesB.
  17. The cache drive and the caching of writes to the array were introduced back when average write speeds directly to the array were much slower than they are nowadays. Many people (including myself) only use the cache drive for applications (appdata and the downloads share); all writes to user shares go directly to the array. (That, and I find it impossible to justify to the "boss" why I need a larger cache drive when neither she nor I see any real improvement from caching writes to the media shares.)

      You still use the cache drive for downloads, but post-processing / moving by Couch / Radarr / Sonarr will bypass the cache and go to the array. Now, if your cache drive isn't big enough to actually handle the size of the downloads, then try setting the download share to Use cache: Prefer. When an article doesn't fit on the cache drive, it will fall over to the array.
  18. Are they in multiple shares (/mnt/user/BluRay and /mnt/user/DVD) or multiple folders (e.g. /mnt/user/Movies/BluRay and /mnt/user/Movies/DVD)? If the former, add another path mapping to Radarr and import the movies from that path (see the sketch below). If the latter, you may have to specify the subfolder of the share and import the movies from that subfolder, then go on to the next subfolder.
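      A sketch of the multiple-share case; the linuxserver image name and container paths are illustrative assumptions (Unraid would normally set these mappings via the docker template):

          docker run -d --name radarr \
            -v /mnt/user/BluRay:/moviesA \
            -v /mnt/user/DVD:/moviesB \
            linuxserver/radarr

          # Inside Radarr, add /moviesA and /moviesB as root folders and
          # import the existing movies from each path.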
  19. https://forums.lime-technology.com/topic/57181-real-docker-faq/?page=2#comment-566086
  20. Or you can simply run Tools - New Permissions against the TV Shows share (or Docker Safe New Permissions)
  21. You're probably right. Just force of habit from old, old days thinking of it like that.
  22. On your last pic, flip over the setting to "Set Permissions"
  23. No idea. Haven't touched a torrent in years, but it seems to me that if the file is still in use then it can't be moved. Sent from my SM-T560NU using Tapatalk