
Squid

Community Developer
  • Posts: 28,769
  • Joined
  • Last visited
  • Days Won: 314
Everything posted by Squid

  1. The extension on the file referenced is .js, which is JavaScript, so my post is in fact wrong. But that then means that it's client side (ie: your browser)
  2. I *believe* that it's more that the default heap size for a Java program was exceeded. This isn't directly related to the RAM size; the default heap size is something like 64 MB. Not sure if lsio has any control over this or if it's strictly an NZBGet issue, but here's a reference for the powers that be: http://alvinalexander.com/blog/post/java/java-xmx-xms-memory-heap-size-control
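For reference, the JVM heap limits mentioned above are set with the standard `-Xms` / `-Xmx` flags. A minimal sketch (the jar path and sizes below are made up for illustration, not taken from the container):

```shell
# Illustrative only: raise a Java app's maximum heap above the small default.
# -Xms sets the initial heap size, -Xmx sets the maximum heap size.
java -Xms64m -Xmx512m -jar /opt/example/app.jar
```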
  3. Been available within CA for a couple of weeks now.
  4. The container was last updated Dec 20. Plex is a rather unique situation in the unRaid universe. Limetech includes a default app for it, the majority of the users around here run the Plex app via either linuxserver.io's or binhex's repositories, and now the guys over at Plex are officially supporting unRaid with a template for their own docker app. Because of this saturation for this particular app, if I had my way, no more PlexMediaServers would be allowed into Community Applications, as fundamentally they are all identical. Whether or not Limetech is going to continue to update their container now that Plex has an official docker for unRaid I do not know (EDIT: jonp just answered this above), but generally within the forum itself the advice has always been to install either the lsio or binhex flavours. With Plex officially supporting unRaid, I believe that the advice from the forum will gradually shift to installing the official version instead of any of the others.
  5. - Added: Output of mcelog is now logged when an mce error is found
     - Added: Warnings when "irq xx: nobody cared" errors are found. Output of cat /proc/interrupts is logged
     - Added: Check if the maximum number of inotify watches has been exhausted
     When an "irq nobody cared" error is found and a call trace is associated with it, both a warning (irq nobody cared) and an error (call trace) are raised. We're still not at the point where we can determine exactly what caused the call trace and then ignore it on a case-by-case basis, as that's a big revamp of the testing procedure, but in the interim the output of cat /proc/interrupts will at least help to determine whether or not to ignore the call trace for the time being.
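As a quick sketch of the inotify check, the system-wide watch limit lives in procfs on any stock Linux box:

```shell
# Read the per-user cap on inotify watches; if an app (e.g. one watching a
# huge media library) exhausts this, new watches silently fail to register.
limit=$(cat /proc/sys/fs/inotify/max_user_watches)
echo "max_user_watches: $limit"
```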
  6. Was looking at expanding FCP to actually run mcelog if it finds an mce error within the syslog (as I have yet to see a diagnostics that details whether mcelog actually logs the causes of the error in the syslog when it's installed), but ran into this issue on my systems. Not quite sure if you can do anything about it... mcelog: ERROR: AMD Processor family 21: mcelog does not support this processor. Please use the edac_mce_amd module instead. : Success CPU is unsupported
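Per mcelog's own error message, the suggested alternative on that CPU family is the kernel's EDAC decoder. A rough sketch (needs root, and assumes the module is built for the running kernel; the module name comes straight from the error text above):

```shell
# Load the AMD EDAC MCE decoder instead of mcelog, then check the kernel log
# for any decoded machine-check messages.
modprobe edac_mce_amd
dmesg | grep -i mce
```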
  7. There's an entry in the Docker FAQ about that
  8. I was going to add a comment to the app within CA until I noticed this in your MineOS template <Repository>yujiod/minecraft-mineos</Repository>
  9. And like every other test, FCP highlights what tends to trip people up. If you know what you're doing, then there's no reason why you shouldn't put stuff within /mnt (and then ignore the error)
  10. One way is misconfiguring /config to be mapped to /mnt
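A sketch of that misconfiguration versus the usual fix (the image name, container name, and appdata path are illustrative, not from any specific template):

```shell
# WRONG: the container's /config lands directly in the host's /mnt root,
# so the app's files and folders show up at /mnt itself.
docker run -d --name someapp -v /mnt:/config example/someapp

# RIGHT: map /config to a dedicated appdata folder instead.
docker run -d --name someapp -v /mnt/user/appdata/someapp:/config example/someapp
```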
  11. Yeah, I've seen probably a half dozen diagnostics where various apps / whatever were storing folders / files within /mnt. This check is fundamentally a bug fix for the check for disks being able to be written to. I've got no problem making an exception for Recycle Bin (already there for UD)
  12. If it's showing as running when the backup starts, then I stop it and then restart it. If it's not showing as running when the backup starts but the plugin restarts it, then I'd have to give you a script to help me identify what's going on. Sent from my LG-D852 using Tapatalk
  13. You have the option to keep any docker running during the backup. Your mileage may vary; if errors result from doing so, the plugin will report it. The files backed up for the VMs are the xml templates and, by request, the nvram files. Dockers that the plugin stops at the start are the ones which are restarted when it finishes.
  14. Under the advanced options for CA Backup you have the option to specify which apps not to stop during a backup.
  15. It only works if the "keep after X days" setting is set. The delta is from one of the old sets that the backup would otherwise wind up deleting.
  16. - Added in tests to detect files / folders being written to /mnt. Mainly to detect situations like this, but more thorough.
  17. It has never zipped it. IMHO, backing up to a share that is able to use the cache drive is problematic:
      - What happens if the cache drive drops dead before mover moves the files to the array? Now you have no backup.
      - Mover logs everything it does, and appdata, depending upon your apps, could contain hundreds of thousands of files, potentially causing the log to fill up.
      While backing up to a non-cached share is slower than to a cache-enabled one, only the files that have changed (when using non-dated sets) are copied over. When using dated sets, you can also enable "attempt faster rsync", which backs up only the changed files into one of the backup sets older than the cut-off date. IE: I have my appdata set to back up every week, and I keep the backup sets for 2 weeks. My first 2 backups each take about an hour to run; all the subsequent ones take around 10-15 minutes tops.
  18. I don't use Headphones, but you set the exact same volume path mapping (identical on both the host and container sides) on both NZBGet and Headphones.
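For example (the container names, image names, and host path here are illustrative; the point is the identical `-v` mapping on both containers):

```shell
# Both containers see the same host folder at the same container path, so a
# download path that NZBGet hands off is also valid inside Headphones.
docker run -d --name nzbget     -v /mnt/user/downloads:/downloads linuxserver/nzbget
docker run -d --name headphones -v /mnt/user/downloads:/downloads linuxserver/headphones
```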
  19. Doesn't work exactly like that... A maintainer creates a template for the docker app and maintains a support thread for it. You can however add any docker app yourself by turning on dockerHub searches within CA's general settings, searching for the app, and then adding it. CA will attempt to create the template with all the port mappings, volumes, etc. already filled out by scraping the web page. It's not an exact science, so you still have to make sure everything is correct by checking out the page for the container itself (ie: the link you posted)
  20. - Fixed: Don't follow symlinks to directories during an extended test
  21. Wattage doesn't matter. Amps per rail are what counts. The key with that power supply is that it has 2 12-volt rails, each rated at 32A individually, for a combined maximum of 56A. It doesn't look like Athena has a manual online, but you need to check out what 12v1 and 12v2 actually power. If the hard drives are running off of the same rail as the CPUs (ie: both are on 12v1 and 12v2 is dedicated to the PCIe connectors, which is not an unusual situation), then you are really pushing the supply to the limit (and beyond) due to the spin-up current required by the drives. Side note: PSUs for servers running actual RAID systems can generally be rated lower than what is necessary for unRaid. This is because under a normal RAID system the drives are running 24-7, and during boot-up you would generally enable the controller to spin the drives up individually rather than all at once (which happens, say, at the start of a parity check), to keep the current draw at a minimum.
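Rough numbers to show why simultaneous spin-up matters (the ~2 A per drive figure is a typical 12 V spin-up draw used here as an assumption, not from the post; the drive count is also illustrative):

```shell
# 15 drives spinning up at once at roughly 2 A each on the 12 V rail:
drives=15
amps_per_drive=2
echo "spinup draw: $(( drives * amps_per_drive )) A"
# 30 A of momentary draw is already close to a single 32 A rail's limit,
# before the CPU and everything else on that rail is accounted for.
```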