Waseh

Community Developer

Posts posted by Waseh

  1. Yeah, I tried that as well with the same result - forgot to mention it :)

     

    Edit: However, it seems I missed that a log file has been created in the config folder with this content:

    Traceback (most recent call last):
      File "~/.local/share/letsencrypt/bin/letsencrypt", line 11, in <module>
        sys.exit(main())
      File "/config/~/.local/share/letsencrypt/local/lib/python2.7/site-packages/letsencrypt/cli.py", line 1349, in main
        plugins = plugins_disco.PluginsRegistry.find_all()
      File "/config/~/.local/share/letsencrypt/local/lib/python2.7/site-packages/letsencrypt/plugins/disco.py", line 168, in find_all
        plugin_ep = PluginEntryPoint(entry_point)
      File "/config/~/.local/share/letsencrypt/local/lib/python2.7/site-packages/letsencrypt/plugins/disco.py", line 31, in __init__
        self.plugin_cls = entry_point.load()
      File "/config/~/.local/share/letsencrypt/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2380, in load
        return self.resolve()
      File "/config/~/.local/share/letsencrypt/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2386, in resolve
        module = __import__(self.module_name, fromlist=['__name__'], level=0)
      File "/config/~/.local/share/letsencrypt/local/lib/python2.7/site-packages/letsencrypt_apache/configurator.py", line 22, in <module>
        from letsencrypt_apache import augeas_configurator
      File "/config/~/.local/share/letsencrypt/local/lib/python2.7/site-packages/letsencrypt_apache/augeas_configurator.py", line 4, in <module>
        import augeas
      File "/config/~/.local/share/letsencrypt/local/lib/python2.7/site-packages/augeas.py", line 78, in <module>
        class Augeas(object):
      File "/config/~/.local/share/letsencrypt/local/lib/python2.7/site-packages/augeas.py", line 82, in Augeas
        _libaugeas = _dlopen("augeas")
      File "/config/~/.local/share/letsencrypt/local/lib/python2.7/site-packages/augeas.py", line 75, in _dlopen
        raise ImportError("Unable to import lib%s!" % args[0])
    ImportError: Unable to import libaugeas!
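
    For anyone hitting the same ImportError: python-augeas only wraps the system libaugeas library via ctypes, so this usually just means the shared library is missing from the image. A quick check from inside the container (the package name below is an assumption for a Debian-based image):

    # Does the dynamic linker know about libaugeas?
    ldconfig -p | grep augeas
    # If nothing shows up, installing the system library should fix the import
    # (libaugeas0 is the Debian/Ubuntu package name; other bases differ):
    apt-get update && apt-get install -y libaugeas0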

     

    Edit 2: Deleted the container and tried again, and now it's working! :) Something must have gone wrong the first time :D

    Are you using the container I created? It wasn't designed to be used with multiple domains. Only one works.

     

    Letsencrypt allows creating multiple certificates through the command line, but that doesn't work with how I handle the symlinks and such for the nginx integration.

     

    Ah well, that explains it ;) It does work on the first run but refuses to start on subsequent runs :)

    I'll just keep using the plain old nginx container then :D

  2. Hmm, I'm not sure it's actually working after all. It seems to work if I do the configuration from a clean container on the first run, but if the container is restarted the keys are not recognized, and the container stops at the same point where I was stuck before:

    Requesting root privileges to run with virtualenv: ~/.local/share/letsencrypt/bin/letsencrypt certonly --standalone --standalone-supported-challenges tls-sni-01 --email [email protected] --agree-tos -d example.com, www.example.com, xxx.example.com, xyz.example.com
    Jan 2 23:03:18 07692e42a846 syslog-ng[127]: syslog-ng starting up; version='3.5.3'

     

    So the keys are generated on first run, but the container won't start again if restarted.

    I'm not sure why the cert/keys aren't getting picked up by the script. I copied the key and cert to my working nginx container and started it with the config pointing to them without any problems.
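
    If the container's startup script unconditionally reruns certonly on every start, a guard along these lines would let restarts reuse the keys from the first run (purely a sketch of the idea, not the actual script, assuming letsencrypt's default /etc/letsencrypt/live layout):

    DOMAIN="example.com"
    if [ ! -f "/etc/letsencrypt/live/${DOMAIN}/fullchain.pem" ]; then
        ~/.local/share/letsencrypt/bin/letsencrypt certonly --standalone \
            --standalone-supported-challenges tls-sni-01 \
            --email [email protected] --agree-tos -d "${DOMAIN}"
    fi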

  3. Yeah, I tried that as well with the same result - forgot to mention it :)

     

    Edit: However, it seems I missed that a log file has been created in the config folder with this content:

    Traceback (most recent call last):
      File "~/.local/share/letsencrypt/bin/letsencrypt", line 11, in <module>
        sys.exit(main())
      File "/config/~/.local/share/letsencrypt/local/lib/python2.7/site-packages/letsencrypt/cli.py", line 1349, in main
        plugins = plugins_disco.PluginsRegistry.find_all()
      File "/config/~/.local/share/letsencrypt/local/lib/python2.7/site-packages/letsencrypt/plugins/disco.py", line 168, in find_all
        plugin_ep = PluginEntryPoint(entry_point)
      File "/config/~/.local/share/letsencrypt/local/lib/python2.7/site-packages/letsencrypt/plugins/disco.py", line 31, in __init__
        self.plugin_cls = entry_point.load()
      File "/config/~/.local/share/letsencrypt/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2380, in load
        return self.resolve()
      File "/config/~/.local/share/letsencrypt/local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2386, in resolve
        module = __import__(self.module_name, fromlist=['__name__'], level=0)
      File "/config/~/.local/share/letsencrypt/local/lib/python2.7/site-packages/letsencrypt_apache/configurator.py", line 22, in <module>
        from letsencrypt_apache import augeas_configurator
      File "/config/~/.local/share/letsencrypt/local/lib/python2.7/site-packages/letsencrypt_apache/augeas_configurator.py", line 4, in <module>
        import augeas
      File "/config/~/.local/share/letsencrypt/local/lib/python2.7/site-packages/augeas.py", line 78, in <module>
        class Augeas(object):
      File "/config/~/.local/share/letsencrypt/local/lib/python2.7/site-packages/augeas.py", line 82, in Augeas
        _libaugeas = _dlopen("augeas")
      File "/config/~/.local/share/letsencrypt/local/lib/python2.7/site-packages/augeas.py", line 75, in _dlopen
        raise ImportError("Unable to import lib%s!" % args[0])
    ImportError: Unable to import libaugeas!

     

    Edit 2: Deleted the container and tried again, and now it's working! :) Something must have gone wrong the first time :D

  4. How would I go about creating a certificate that is also valid for xxx.example.com and xyz.example.com?

    Right now, when I try to pass multiple domains in the setting, nothing happens.

     

    The log says:

    Requesting root privileges to run with virtualenv: ~/.local/share/letsencrypt/bin/letsencrypt certonly --standalone --standalone-supported-challenges tls-sni-01 --email [email protected] --agree-tos -d example.com -d www.example.com -d xxx.example.com -d xyz.example.com

    Jan 2 20:00:04 b5ad9d44859e syslog-ng[123]: syslog-ng starting up; version='3.5.3'

     

    I tried both with and without the -d parameter.
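
    For reference, the usual form the letsencrypt client expects is one -d flag per hostname, with no commas - every -d is added as a SAN on a single certificate:

    letsencrypt certonly --standalone \
        -d example.com -d www.example.com \
        -d xxx.example.com -d xyz.example.com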

     

    Cheers :)

  5. The thing is, I'm seeing this both in Edge and Chrome, and both in incognito and normal mode.

    Running Windows 10 in safe mode fixes the problem, but I can't seem to narrow down which program or setting is causing it.

    I guess it's some trial and error from here :)

     

    Edit: Turns out it was Bitdefender. Now to find the setting that's causing it!

  6. Hey Guys

     

    For the last couple of months I have been unable to view logs from the web GUI on my PC, including both the syslog and individual Docker logs.

    The popup windows open but then hang with no information displayed. I have somewhat the same problem with plugins and Dockers when updating them: the window hangs until the operation is done, and only then is the data displayed.

    I just updated to 6.1.6 and the whole web GUI hung until the update was done.

    Any ideas as to why this is happening? I've tried multiple browsers with the same result; however, it does seem to work correctly from my phone, which I find a bit strange.

     

    Cheers

  7. Hey guys

     

    I have a problem getting the Docker container to remember where to store the log file.

    When I set a new path in the interface, the log file path resets to ${MainDir}/nzbget.log after a restart of the container, no matter which path I choose. Changing the nzbget.conf file manually gives the same result.

    The problem is that this keeps some of my array spun up instead of saving the log file on an SSD I have mounted outside the array (the relevant nzbget.conf keys are sketched below).

     

    Any ideas?
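
    For reference, the value in question lives in nzbget.conf; a working setup would look something like this (paths here are just examples - and if the container's startup script rewrites nzbget.conf on each boot, that would explain the reset):

    # nzbget.conf
    MainDir=/downloads
    LogFile=/mnt/ssd/nzbget/nzbget.log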

  8. My printer isn't listed when adding it to CUPS.

    Is there a way to install the necessary driver?

     

    It's a Brother DCP-7065DN.

     

    *Edit*

    Found the proper PPD file.
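
    (For anyone else with this printer: once you have the PPD, the queue can be registered from the command line with lpadmin - the queue name, device URI and PPD path below are just examples:)

    lpadmin -p DCP7065DN -E -v usb://Brother/DCP-7065DN -P /config/ppd/brother-dcp7065dn.ppd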

    Now how do I get Google Cloud Print working? After adding the appropriate variables, CUPS fails to start, and it won't start even if I launch it manually. Any ideas?

     

    I have the same problem

     

    Check your log under Docker.

    I'm getting "Google authentication failed."  Which doesn't make sense because with my RPi it worked fine...

     

    Exactly the same - I also created 2 app-specific passwords, which don't work either.

  9. My printer isn't listed when adding it to CUPS.

    Is there a way to install the necessary driver?

     

    It's a Brother DCP-7065DN.

     

    *Edit*

    Found the proper PPD file.

    Now how do I get Google Cloud Print working? After adding the appropriate variables, CUPS fails to start, and it won't start even if I launch it manually. Any ideas?

     

    I have the same problem

  10. I spent the weekend studying the math behind RAID-6 - learning Galois field algebra: now that's an exciting Saturday night! It turns out there's no reason after all that all data devices have to be spun up to recompute Q; in fact, read/modify/write of the target data disk and the P and Q disks should indeed be possible (even desirable in some cases).

     

    Looking at the Linux md layer again: yes indeed, they do force reconstruct-write. Time to start googling... It turns out there have been efforts in the past to introduce rmw for RAID-6 "small writes", even patch sets which didn't get merged. Then, lo and behold, what got added in the 4.1 kernel release? That's right, rmw operation for RAID-6!

     

    Bottom line: we should be able to incorporate this into the unRAID flavor of the md driver. It won't be a quick-and-dirty change, however, since a lot of other code (such as the user interface) needs to be P+Q aware.
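
    For anyone curious why rmw is possible at all (a sketch of the algebra, not the actual md-driver code): P is the plain XOR parity and Q = Σ gⁱ·dᵢ over GF(2⁸). Updating a single data block dᵢ to dᵢ' gives

      Δ  = dᵢ ⊕ dᵢ'
      P' = P ⊕ Δ
      Q' = Q ⊕ gⁱ·Δ

    so only the target data disk and the two parity disks need to be read and rewritten; the remaining data disks can stay spun down.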

     

    That sounds amazing. Very good news, Tom!

  11. Another point that has not been explicitly stated - it is not as if NetApp or IBM are actively soliciting "customers" like unRAID to use their patented technologies. To go in that direction, Tom would have to get lawyers involved and approach them about a possible agreement with a less-than-interested party. This could be expensive, and the outcome is uncertain.

     

    While I agree that having only 3 drives spin up for a write (2 parity + the data disk being written to) would be desirable, such a solution may be a non-starter. I'd rather see Tom implement the publicly available option. Most of the changes he would make would be the same regardless of the algorithm and, if done properly, would allow the other solution to be added at a future date.

     

    BTW, I do think "knocking on the door" at NetApp and maybe even IBM would be worthwhile, to verify that they do not already have a licensing option available that Tom could use without the legal process. But if the answer comes back "no" - or it is prohibitively expensive - he should (IMO) move forward with the public solution.

     

    Completely agree with this.

  12. Spinning the drives is a TINY price to pay for the HUGE advantage of dual fault tolerance  :)

     

    ^This!

     

    I don't disagree, but if a solution exists where array spin-up is not needed on write, I do think that is worthwhile.

     

    I am not a specialist in this area - but short of implementing some sort of redundant cache (which was discussed earlier) or storing the writes in memory (now I am thinking outside the box), what possible solution is there for not spinning up the drives to write in a stripe scenario? Perhaps if the stripe is implemented so that not ALL drives have to be spun up. Isn't that what NetApp claims? Getting out of my depth here - but I cannot see how there is any solution to spinning up on write if there isn't an intermediate cache. You've got to write to something, right? And to write to it (if it's an HDD), it has to be spinning, right? *shrugs*

     

    I see I might have expressed myself less than clearly there - I simply mean retaining the functionality we have now, where only the drive being written to (as well as the parity disk(s)) is spinning during write operations. Examining the licensing of one of the first two options mentioned in the first post would be very interesting, in my opinion.

  13. Spinning the drives is a TINY price to pay for the HUGE advantage of dual fault tolerance  :)

     

    ^This!

     

    I don't disagree, but if a solution exists where array spin-up is not needed on write, I do think that is worthwhile.

  14. This is definitely very interesting and a feature I have hoped to see for some time. I do, however, appreciate the ability not to spin up the entire array for write operations, and I would be more than willing to pay to preserve this feature with dual parity.

  15. After updating to rc6, all my Docker containers disappeared from the Docker menu.

    The Docker image is still in place in the Docker settings and all my appdata is still there, but no containers are reported under the Docker tab.

    I am trying to recreate them from the templates, which are still available from the container menu; however, I'm getting a ton of errors trying to start the containers.

    Just a heads-up.

     

    Did your shares disappear?

    No, the only difference is that all the Docker containers are gone - shares and everything else (plugins etc.) are working as before the update. Not quite sure what is going on.

     

    Where is your docker.img located? Is it still there, and on a cache-only share?

     

    On a drive outside the array. I have since deleted the Docker image and recreated it, and now I can recreate the containers from the templates without errors. No idea what happened, but it seems to be working again now.

  16. After updating to rc6, all my Docker containers disappeared from the Docker menu.

    The Docker image is still in place in the Docker settings and all my appdata is still there, but no containers are reported under the Docker tab.

    I am trying to recreate them from the templates, which are still available from the container menu; however, I'm getting a ton of errors trying to start the containers.

    Just a heads-up.

     

    Did your shares disappear?

    No, the only difference is that all the Docker containers are gone - shares and everything else (plugins etc.) are working as before the update. Not quite sure what is going on.

  17. After updating to rc6, all my Docker containers disappeared from the Docker menu.

    The Docker image is still in place in the Docker settings and all my appdata is still there, but no containers are reported under the Docker tab.

    I am trying to recreate them from the templates, which are still available from the container menu; however, I'm getting a ton of errors trying to start the containers.

    Just a heads-up.

  18. Not quite true. 

     

    I have (finally) successfully compiled mono 3.2.8 for both unRAID 5 and 6 and will be posting updated plugins in a new thread (likely tomorrow) that will download these mono packages. The plugin has been quite heavily modified and can accommodate setting ssl, sslport and webroot from the config GUI in unRAID. I've also added some error checking to the plugin, so it's less likely that a user can enter invalid information (like blank spaces, or non-numeric values for ports) and mess up the NzbDrone install. It will also create an XML file in the specified config directory if one doesn't exist, so the first launch of the program is set with the plugin settings from the user.

     

    The crashing hasn't occurred for me while running them for 2 days straight on either version, but I'm going to need people to test it out further.

    Thanks to dragonfyre13 for the code on symlinking the config directory.

    That is amazing! Great work. I will be more than happy to test this out on unRAID 5!