xamindar

Community Developer
Content Count: 440

Everything posted by xamindar

  1. I'm changing this thread into the support thread for my repository. That will make it easier for the templates to all have the same URL in them. Please feel free to report any issues or ask any questions about my dockers in this thread. NEW: Quassel-core - have an always-connected IRC client that you can connect to from Linux/macOS/Windows/Android. Sort of like an IRC bouncer, but different. Here is a new Syncthing docker that resolves a couple of the issues I had with the existing one: an auto-update of syncthing no longer spawns multiple instances. It updates cleanly, closes, and th
  2. Got mine working using supervisor, and syncthing does the same exact thing when it updates itself. It seems strange that runit and supervisor can't figure out that it is already running before constantly spawning another instance - at least I think that's what is happening.
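     In case it helps anyone else, this is roughly what my supervisor config ended up looking like. It's a minimal sketch rather than the exact file from any existing image, and the binary path and port are assumptions from my own setup. The important bit is syncthing's own -no-restart flag, which disables its internal monitor process so that supervisor is the only thing handling restarts:

       ; /etc/supervisor/conf.d/syncthing.conf (sketch; paths and port assumed)
       [program:syncthing]
       command=/opt/syncthing/syncthing -no-restart -home=/config -gui-address=0.0.0.0:8080
       autorestart=true
       ; send logs to the container's stdout so "docker logs" picks them up
       stdout_logfile=/dev/stdout
       stdout_logfile_maxbytes=0

     With -no-restart, an upgrade makes syncthing simply exit, and supervisord then starts the new binary fresh, so you never end up with two copies running.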
  3. you do know you can change that yourself in the docker template, instead of relying on docker authors to anticipate everyone's usage? Yes, that is what I had to do. I do not expect the authors to anticipate everyone's usage, but I do expect the defaults to be set properly for the application to fully work. I was speaking from a user's point of view, in that they would expect the defaults to allow the application to work properly, not to have to know to change the settings from defaults. Or at the least, an explanation in the description with any gotchas. This is the first time
  4. Yeah, I noticed that through further testing, so it appears to be expected behavior. The multiple-instances errors start happening when syncthing does its auto-update. Checking processes after this happens, when the errors start in the log, shows 4 instances of syncthing running, with one (or maybe two) of them constantly dying and getting re-spawned.
  5. If you are successful, be sure to share with the community. I definitely will! Learning it has been interesting. Perhaps it would be worth posting in gfjardim's support thread. Reckon it would be easier to help fix the problems with the existing container than duplicate a lot of work, plus it may help a few others too. If you do create your own containers then please post them here to benefit the rest of the community. That was where I posted an earlier issue with one of his dockers and was completely ignored (he even posted directly after it). Oh well. Looking at the
  6. Oh well, I think I'm done with the gfjardim dockers. I have tried three of them and they all have problems. -syncthing - will run an auto-update of itself and then constantly complain about multiple instances running. These dockers don't seem to be designed to know when syncthing restarts itself. It will also fail to set upnp routes on my router (something practically required for this type of service to work) using the default bridge mode - it needs to be changed to host. -dropbox - will start throwing errors of multiple dropbox instances running after a week or so. Maybe this is the sa
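     For anyone hitting the upnp problem, switching the network type to "host" in the template is the workaround. The command-line equivalent is roughly the following (the image name and config path are assumptions here, substitute whatever your template uses). In bridge mode the container sits behind docker's NAT, so syncthing's upnp discovery never reaches the router; host mode puts it directly on the LAN:

       # sketch: run syncthing on the host network so upnp can reach the router
       docker run -d --name syncthing \
         --net=host \
         -v /mnt/cache/appdata/syncthing:/config \
         gfjardim/syncthing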
  7. I have an issue with these two containers in that they launch two processes of their application. The Dropbox one will be fine for a while and then after about a week it will suddenly have two processes running. I have to restart the dropbox one to have it go back to one instance. Now I have just tried syncthing and it launches two instances right away:

       ps ax | grep syncthing
       21369 ?  Ss  0:00 runsv syncthing
       21371 ?  Sl  0:00 /opt/syncthing/syncthing -home=/config -gui-address=https://0.0.0.0:8080
       21390 ?  Sl  0:04 /opt/syncthing/syncthing -home=/config -gu
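     From what I can tell, the second syncthing process isn't runit's fault by itself: without -no-restart, syncthing runs as a monitor that forks the actual worker process, so you always see two PIDs and runsv only knows about the one it started. A run script along these lines (just a sketch, with the paths taken from the ps output above) keeps it down to a single process that runsv actually supervises:

       #!/bin/sh
       # sketch of /etc/service/syncthing/run; exec replaces the shell so
       # runsv supervises syncthing directly, and -no-restart stops syncthing
       # from forking its own monitor/upgrade process
       exec /opt/syncthing/syncthing -no-restart -home=/config -gui-address=https://0.0.0.0:8080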
  8. I just updated this docker as I saw your post and now it will not start anymore. Here is what the log says every time I try to start it:

       *** Running /etc/my_init.d/config.sh...
       Continous console status not requested
       *** /etc/my_init.d/config.sh failed with status 1
       *** Killing all processes...

     Does the update resolve the multiple dropbox instances running, and it making me reconnect to my dropbox account on every restart?
  9. I just updated through the plugin page and rebooted. It's working, except for the dashboard page. Any ideas? EDIT: I have the Dynamix web gui plugins installed - maybe those are not supported on this version yet?
  10. Here you go, ask there and you are likely to get a clear answer: https://btrfs.wiki.kernel.org/index.php/Btrfs_mailing_list
  11. The multi-drive cache pool feature has nothing to do with the btrfs bug, so I don't understand your point.
  12. No. Not for beta 15. Also, just a quick update. We found a bug as we were going through final release testing that is holding this up right now. I'll spare all the details, but in short, it has to do with the fact that the btrfs progs for 3.19.1 are not 100% compatible with the Linux 4.0 kernel. We are looking to roll back to the previous kernel, but to do so, we would need to manually apply a patch that is slated for inclusion in the 3.19.5 kernel (which is not yet released). We're looking into all this and testing thoroughly. You guys are joking, right? If you keep changin
  13. Turn the tables around, how frustrated must everyone be at LT after all the hard work and nearly getting everything ready to go, only to find a bug right at the end. I'm sure they feel just as frustrated as us, if not more. Being a bit of a perfectionist myself, I feel I can relate to that pain. But that being said, there are good and bad ways to react to those situations. Drop or postpone the feature that that one last remaining bug is affecting. We should give LT at least some credit for not suffering a huge amount of feature creep. It's just that 'Kernel Creep' is a lot worse
  14. Let's try to narrow this issue down. My docker image is at "/mnt/cache/appdata/docker.img", the "appdata" share is set to cache-only and is where my dockers store their config files. All my disks in the array and the cache drive are formatted as btrfs.
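     If you want to compare setups, these are the quick checks I run (the paths are from my box, adjust to yours):

       # confirm the image file really lives on the cache device
       ls -lh /mnt/cache/appdata/docker.img
       df -h /mnt/cache
       # and show the btrfs filesystem backing the cache
       btrfs filesystem show /mnt/cache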
  15. Curious if your docker.img / appdata files are stored on the array or on the cache drive? All my docker stuff is only on the cache drive.
  16. I just realized the "docker" tab will spin up all my data drives when I go to it. Is this a known issue?
  17. That's mean! We have so many projects that some things just got lost along the way. Just a friendly reminder. You guys are doing a great job.
  18. Thanks. I patched this a while ago, just waiting for bonienl to release the update. I will release an update soon. Soon is not that soon.
  19. Can you explain what you mean? So if I have a Plus key that allows 8 slots, you want it to still show 26 so that I can assign one to slot 25 and one to slot 2? What does it matter, as I will still only be allowed a total of 8 slots? I don't think "slots" apply to any particular port, so it doesn't really matter if I assign a disk to slot 8 or slot 25; might as well show the correct number of slots allowed by the license.
  20. As of beta 14, we upped Plus to 8 drives instead of 7. You'll see far more slots than your license allows, but if you assign more than 8 on Plus, the array won't start. Wouldn't it be better to only list the slots your license allows? Why the regression? Does it at least tell you why the array won't start and the actual number of slots you are allowed?
  21. Yeah I think that's how it worked in previous unRAID versions. But in 6, I don't have the option of manually editing that field. How can I test this in unRAID 6?
  22. Your question does not help my problem in any way, but I'll answer it anyway. Disk4 is for the shares that usually get updated daily (btsync, dropbox, development files, etc.), Disk1 is my "backup" share which is updated rarely and has extra space, and disk2 and disk3 are for all my multimedia files (movies, tv shows, music, photos, etc.). There, now you know my layout. The point is I want the array to only need to spin up disk4 and parity when these disk4 shares are used. The other three disks are used far less. My understanding of the "Fill-up" mode was that it would continue putting new