Squid

Community Developer
  • Posts

    28,769
  • Joined

  • Last visited

  • Days Won

    314

Everything posted by Squid

  1. To view the log as it progresses? Logs are available (in DOS format) on the flash drive upon completion.
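     If you want to read one of those DOS-format logs from the unRAID console, stripping the carriage returns first makes them display cleanly; a minimal sketch, with the log path being only a placeholder:
         # remove the DOS carriage returns so the log reads cleanly in less
         tr -d '\r' < /boot/logs/backup.log | less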
  2. When two vegetarians have a rivalry, is it still called a beef?
     Added: configurable notifications on appdata backup (disabled, start and stop, completion only, errors only)
     Added: excluded folders to appdata backup
     Updated: CA manual
     Added: rsync errors now logged to syslog
     Lowered memory footprint of the program (and download size) by ~20%
     Excluded folders are limited to the first level of your appdata folder. If you require more control (excluding subfolders of subfolders), then modify the rsync options accordingly.
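     As a rough illustration of tweaking the rsync options for deeper exclusions (the paths and exclude pattern here are examples only, not the plugin's actual command line):
         # exclude a second-level folder that the first-level GUI exclusion can't reach
         rsync -a --exclude='plex/Library/Caches/' /mnt/cache/appdata/ /mnt/disk3/Backup/appdata/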
  3. How you accomplished it is exactly how you do it with any other app. The only thing is that you generally cannot do this with apps that run as host instead of bridge. All of lsio's apps automatically update the application, etc., upon starting up. Depending upon numerous factors, this could take a while.
  4. No. You never change the container ports. What do the logs state? Maybe they're still updating.
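     Checking the logs is just the standard docker command; the container name below is only an example:
         docker logs --tail 50 mycontainer   # substitute your container's actual name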
  5. Also, you may have to manually go to the appropriate IP address and port (i.e. navigate to http://192.168.x.xxx on whatever port the app uses) in your browser, as there are some bugs with dockerMan and the webUI settings in 6.1.x (mostly fixed in 6.2).
  6. Don't know. So long as they all have different host ports, different host appdata paths, and different names, they won't interfere with each other.
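     As a sketch of what "different host ports, host appdata paths, and names" means in practice (the image and ports below are placeholders; dockerMan's Add Container page handles this for you):
         # first instance
         docker run -d --name app1 -p 8081:80 -v /mnt/cache/appdata/app1:/config some/image
         # second instance: different name, different host port, different host appdata path
         docker run -d --name app2 -p 8082:80 -v /mnt/cache/appdata/app2:/config some/image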
  7. Was just investigating some things, and I saw that both the Plex and Sync apps from LT have their /config folders pre-mapped to /mnt/user/appdata/... Plex (not sure about Sync) writes a ton of symlinks to its appdata share (unless LT has done something within the app to eradicate this problem), and symlinks are not 100% compatible when running through user shares. If you've got a cache drive (and appdata is set to be a cache-only share), you really should change your host volume to be /mnt/cache/appdata/... to minimize issues with this app. (There are many documented examples of weird results with apps not working correctly with /mnt/user/appdata but working correctly with /mnt/cache/appdata.)
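     If you're curious how many symlinks your Plex appdata actually contains, something like this will show you (the path is only an example):
         find /mnt/cache/appdata/plex -type l | wc -l   # count symlinks under the Plex appdata folder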
  8. You should also change the WebUI port to the new one.
     No. The WebUI port pretty much never needs to be changed on ANY app. Its value is always something like http://[IP]:[PORT:80]. The [ ] tell dynamix to use the host-mapped port number (the 80 in the example is the container port number).
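     So, for example, if you mapped host port 8080 to container port 80, it resolves like this (the IP is a placeholder):
         WebUI template: http://[IP]:[PORT:80]
         Rendered link:  http://192.168.1.10:8080   (the host-mapped port, not the container port)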
  9. As Bjonness406 stated, the reason for having to specify both a disk and a share is all because of symlinks. They only work correctly when reading / writing directly to a disk. (This is why some apps do not work if you specify /mnt/user/appdata/appname, but do work if you specify /mnt/cache/appdata/appname.) I've now removed the "/mnt/user/" from the displayed name. Yes, it's confusing, but it is the only way to guarantee that a backup is successful. (Basically, I'm using the share list to present the user with a GUI to specify a folder.)
     Your other problem, with rsync returning a mkdir error, means the disk you specified as a destination is either 100% full, or corrupted and set to read-only mode, etc. Writing to a disk share completely overrides any and all settings that you may have set in share settings. E.g. if you have your backup share constrained to disk1 in share settings, and then set this module to write to disk3, the backup will succeed. Technically, you don't even need to create a share with the GUI. I just typed "/mnt/user/blahblahblah" on my system (which doesn't exist at all) and the backup is happily proceeding. You do have some underlying problem, which your diagnostics should tell the story on.
     On the positive side, after seeing your screenshots, I realized that the rsync error was only going to be displayed within the warning notification that was sent, so I've now added that error to also show up in the syslog.
     Incidentally, the reason the one screenshot shows Disk8 as the destination but rsync tried disk5 is because you ran the backup to disk5 (and it failed), then changed the destination drive (which disabled the backup button until you apply the changes), and then took a screenshot.
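     A couple of quick checks if rsync throws that mkdir error (illustrative only; substitute your actual destination disk):
         df -h /mnt/disk5          # is the destination disk 100% full?
         mount | grep /mnt/disk5   # has it been remounted read-only (ro)?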
  10. Gimme an update or 2 to accomplish this. You really don't know what features people want / need until you throw it out there.
      I already have a .cron running a script for this but intend to convert over to your method. I am currently putting my backup in a subfolder also, so if you plan to implement this I may wait for it, or else I can move my existing backup to a new share. Thinking the "initial backup" should be pretty quick if I already have an "initial backup".
      Should be by tomorrow (if not today) - it's a lot easier to implement than the exclusions, which need many, many test runs (backup and restore), which all depends upon how much TV the wife wants to watch.
      Thanks for the subfolder feature. I wound up keeping my .cron and script because it was also backing up other cache-only shares, but I modified the script's rsync to exclude appdata. I have set up the appdata backup using CA for nightly, and my other script is only running weekly. My "initial backup" only took a little over a minute since most of it was already there. One thing I would prefer is a notification only on completion with success/fail, instead of a notification on start and another on completion. I love getting emails from my server in the middle of the night, but only so I can check them the next morning to see that everything went OK. No point in finding out about the start of the backup several hours after it completed.
      Got no problems with that (already ahead of you, actually), but I've put in a feature request with bonienl for a low-priority notification. On my system, I've actually set all of the normal notifications to be browser only; with CA now capable of handling plugin updates, I don't particularly see much use for normal notifications via email anymore. The excludes are actually easy to implement via a simple text box, but I'm shooting for something a hair more friendly, and HTML is not my specialty. But I'll get it in somehow.
  11. Hit advanced settings when you add the app
  12. It is not a separate docker app; it is part of CA. First update CA to the latest version, then from the unRAID GUI go to Settings, and then "Backup Appdata" under Community Applications; there you can set up a backup for appdata. Should look like this: https://i46.photobucket.com/albums/f109/squidaz/Untitled_zpsgayo2aji.png And it's a simple appdata backup utility, not a general-purpose cache backup per se.
  13. - Added more warnings on the fact that the destination folder is going to be overwritten
      - Added ability to set backup destinations to a subfolder
      - Enhanced script / share selection
      - Added ability to skip docker.img on backups
      - Fixed: autoupdate of applications would not always display only installed plugins
      * Personally, I do not see any need at all to back up the docker.img file if you are storing it within the appdata share. It is very easily recreated in case of a cache drive failure, and the apps are easily reinstalled using CA's previous applications section. In my mind, it is a complete waste of hard drive space to back it up. But it's entirely up to you. Either way, the option defaults to excluded.
      Wife is now watching sitcoms, which is seriously hampering my ability to develop and test excluded folders from the source, so it probably won't be released for another day or two.
  14. If the PSU only states 12V on the side of it or in the specs, then it'll be single rail. If it states 12V1, 12V2, etc., then it's dual rail (or more).
  15. Gimme an update or 2 to accomplish this. You really don't know what features people want / need until you throw it out there.
      I already have a .cron running a script for this but intend to convert over to your method. I am currently putting my backup in a subfolder also, so if you plan to implement this I may wait for it, or else I can move my existing backup to a new share. Thinking the "initial backup" should be pretty quick if I already have an "initial backup".
      Should be by tomorrow (if not today) - it's a lot easier to implement than the exclusions, which need many, many test runs (backup and restore), which all depends upon how much TV the wife wants to watch.
  16. You were warned. Right above the apply button. And it's not a simple matter of fixing the rsync, because if the source and destination do not have the exact same contents then it's not a backup.
      Sent from my SM-T560NU using Tapatalk
      Ugh, yeah, been staring at the screen too long today.
      Wouldn't a better solution be to use btrfs snapshots?
      Sure. Except that nothing says you have to have your cache drive formatted as btrfs, and nothing says you have to have your array formatted as btrfs.
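      For what it's worth, that "exact same contents" behaviour is what rsync's --delete flag provides; a hedged sketch of a mirror-style run, with example paths:
          # --delete removes files from the destination that no longer exist in the source,
          # so the destination stays an exact mirror rather than an accumulating copy
          rsync -a --delete /mnt/cache/appdata/ /mnt/disk3/Backup/appdata/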
  17. Can you show me the contents of your flash drive (config/plugins)?
      EDIT: What this module was doing was basically looking at /config/plugins/*.plg to see what was installed. I've now switched it to look at /var/log/plugins instead, which should only show the plugins which have been successfully installed.
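      In other words, the check now boils down to something like this (a simplified sketch of the idea, not the plugin's actual code):
          # plugins that installed successfully leave a .plg entry behind in /var/log/plugins
          ls /var/log/plugins/*.plg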
  18. You were warned. Right above the apply button. And it's not a simple matter of fixing the rsync, because if the source and destination do not have the exact same contents then it's not a backup.
      Sent from my SM-T560NU using Tapatalk
  19. Initial backup takes me ~1 hour. My Plex has something like 200,000 files in metadata.
      Mine was an hour almost exactly. My subsequent backups are usually ~5 minutes, and the vast majority of that time is merely rsync traversing the Plex folder just to see what changed.
  20. lol. If I came up with a sample script to accomplish this, then the plugin would actually support it natively.
      I really don't envision this ever being a full-featured backup application, but rather a utility to fulfill a need: namely, to restore all the appdata in case of a cache drive failure. Near term, I can see this performing dated backups, and you would be responsible for deleting the old ones. But rotating backups, I'm thinking, would very quickly become a nightmare on my end.
      Priority #1 has to be excluded folders, so that users who (unlike myself) have appdata containing items like VMs can exclude them. (I wouldn't even consider setting up appdata like that; otherwise it would have been in there originally.) Hopefully at some point shortly someone will come up with stop / start scripts for PhAzE plugins and for VMs. I myself won't be, because quite simply I don't run my servers with those apps, and I don't want to take responsibility for support of any user scripts.
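      A dated backup in that sense would amount to little more than stamping the destination folder, e.g. (an assumption about how it might look, not the shipped behaviour; paths are examples):
          # one dated destination folder per run; pruning old ones is left to the user
          DEST="/mnt/disk3/Backup/appdata-$(date +%Y-%m-%d)"
          rsync -a /mnt/cache/appdata/ "$DEST/"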
  21. Initial backup takes me ~ 1 hour. My plex has something like 200,000 files in metadata.
  22. Gimme an update or 2 to accomplish this. You really don't know what features people want / need until you throw it out there.
  23. Danioj brought up a good point to me via PM. Some users utilize their appdata share for VMs. CA does not stop and start VMs, so I would either create a separate share for VMs, or modify his script (http://lime-technology.com/forum/index.php?topic=47986) accordingly to be integrated with CA. I had a very quick look at his scripting, and it doesn't look like too much trouble to separate the stop and starts, but CA itself will not directly support starting and stopping of VMs (or PhAzE plugins for that matter). Hopefully someone who utilizes PhAzE's plugins will bang together a script to properly stop and restart them, but it won't be me as I cannot support options that I am unable to test. I will however investigate incorporating an Excluded folder option to help out those users using appdata for VMs. (But as I said, this is part of the reasoning for the stop / start scripts support)
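      Anyone writing such a stop / start script would presumably drive libvirt directly; a minimal, untested sketch (the VM name is only a placeholder):
          virsh shutdown "MyVM"   # gracefully stop the VM before the backup
          # ... run the appdata backup here ...
          virsh start "MyVM"      # bring it back up afterwards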
  24. I don't see this behaviour. If it continues, can you do this before and after a manual backup and let me know the results: docker ps -a (and also post a screenshot of the dashboard showing the stopped / running containers before and after)?
      All OK this time. Included the info requested anyway. (BEFORE / AFTER screenshots attached.)
      Good. I can't fix what ain't broke. But I was thinking that if it happens again, all I really need is the diagnostics, as CA and dockerMan both log the starts and stops.
      Sent from my LG-D852 using Tapatalk
  25. Problem with stopping the service is that anything set to autostart will come back up whether it was running or not. Hence why I'm stopping individual containers. (And also, at least on my 6.2, if I do an rc.docker stop, then dynamix just re-enables docker and restarts the containers if I do anything on the GUI.)
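      The individual-container approach boils down to remembering which containers were actually running; a rough sketch of the idea (not CA's actual code):
          # remember only the containers that are currently running
          RUNNING=$(docker ps --format '{{.Names}}')
          docker stop $RUNNING
          # ... back up appdata here ...
          docker start $RUNNING   # restart only those, leaving stopped containers stopped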