
danioj


Posts posted by danioj

  1. Hi Danioj!

     

    I have finally found time to play around with this script for my VMs; I used it on a test VM first just to be safe. It all worked perfectly, however some of the configuration options confused me slightly (not difficult!)

     

    For example the startvm setting, in the config file

     

    1) default is 0 but set this to 1 if you would like to start a vm after it has successfully been backed up

     

    in the log file, while running with this set to 1, it shows

     

    2) information: start_vm_after_backup is 1 vms will not be started following a successfull backup

     

    and the log after the backup shows

     

    3) action: start_vm_after_backup is 1. starting Ubuntu.

     

    so it looks like the first log note is incorrect in stating that, if set to 1, the vm will not be restarted. This also applies to killvm and startvm_afterfailure

     

    other than that it was entirely successful, thanks!

     

    Hi Ashe. I am not surprised there are some kinks. Your outcome though is similar to mine, it works - just needs to be cleaned up.

     

    I think I will collaborate with Andrew and see if I can get it finalised.

     

    I'll spend some time on it this week.
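
    For what it's worth, I think the fix is just a matter of making that informational log line match the flag. A rough sketch of what I mean (variable and message names are my guess at the script internals, not the actual code):

    if [ "$start_vm_after_backup" -eq 1 ]; then
        echo "information: start_vm_after_backup is 1. vms will be started following a successful backup."
    else
        echo "information: start_vm_after_backup is 0. vms will not be started following a successful backup."
    fi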

  2. Hey Andy!!

    I'm going to try and smile at that moniker, but truth be told I hate "Andy" with a passion. Much prefer Andrew or Squid or even a$$hole

    The Linuxserver.io znc container has been moved from the lsiodev repository to the main linuxserver repository. I only spotted today that it doesn't appear in CA any more.

     

    A little bird tells me that there is some xml that needs to be updated to make it appear again!?

     

    :)

    Nothing on my end ever needs to be done regarding updates to XMLs. The application feed (by Kode / hosted by lsio) handles everything and is automatically updated every two hours.

     

    If you happen to browse the categories looking for ZNC and it's already installed, then you will not see it unless you either go to Installed Apps, or alternatively, within the settings, turn off separating installed from not installed apps.

     

    IE: I've checked, ZNC is indeed there and everything is working as is.

     

    Sorry buddy ... no more abbreviations of your name! I should know better, I hate it with a passion when people call me Danny. Daniel or at a push Dan please!!

     

    I've obviously misunderstood how the system works. When it was mentioned that the "xml needs updating", I VERY wrongly assumed that this was a manual process and also that it was on your end. WRONG! Lol!

     

    So I'll take this as an opportunity to learn. Thanks for clarifying for me!!

  3. Hey Andy!!

     

    The Linuxserver.io znc container has been moved from the lsiodev repository to the main linuxserver repository. I only spotted today that it doesn't appear in CA any more.

     

    A little bird tells me that there is some xml that needs to be updated to make it appear again!?

     

    :)

  4. I can't seem to get this to work correctly.

     

    I set up a new user and password via the command line, I added the user in the web gui, I believe I have all the settings right, and I forwarded the port on my router.

     

    Here are my settings.

     

     

    I need a little more. What isn't working?

     

    I can't connect to the server from inside or outside my network.

     

    I sent you a PM. Happy to help.

     

    I noticed an issue with the networking mode of the container when you choose to just open the UDP port and also share port 943 for the Connect and Admin interfaces.

     

    Essentially, when you set it up like this the container doesn't seem to work in Host mode as is recommended. My resolution to this was to switch to Bridge mode and map 1194 and 943 to the Host.

     

    Screen_Shot_2016_09_03_at_4_59_26_PM.png

     

    EDIT: God I can't spell. In the pic, ump is supposed to say udp. Toodles off to correct.
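
    For reference, my Bridge mode setup boils down to something like this on the command line (the unRAID template does the same thing via the GUI; the appdata path and any extra flags the template adds are from my own setup, so treat this as a sketch rather than a copy-paste command):

    docker run -d --name=openvpn-as \
      --net=bridge \
      -p 1194:1194/udp \
      -p 943:943/tcp \
      -v /mnt/cache/appdata/openvpn-as:/config \
      linuxserver/openvpn-as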

    I had an email last night saying there was an update for this docker so I applied it. I can't get any videos or music to play.

     

    I've tried rebooting the server as well as restarting the emby server.

     

    Anyone know if I can roll back to a previous version easily? I have backup/restore installed but don't particularly want to restore all my dockers, as for some reason emby isn't in the list of apps.

     

    I've just updated my container remotely and played a video fine. Tried Kodi too. Played fine.

     

    I need more. What is in your server logs??

     

     

    Sent from my iPhone using Tapatalk

     

    I just read your post again - are you saying that your Emby container is not even showing in the Docker tab?

     

     

    Sent from my iPhone using Tapatalk

    I had an email last night saying there was an update for this docker so I applied it. I can't get any videos or music to play.

     

    I've tried rebooting the server as well as restarting the emby server.

     

    Anyone know if I can roll back to a previous version easily? I have backup/restore installed but don't particularly want to restore all my dockers, as for some reason emby isn't in the list of apps.

     

    I've just updated my container remotely and played a video fine. Tried Kodi too. Played fine.

     

    I need more. What is in your server logs??
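
    If you're not sure where to grab them from, the container log is a good start (use whatever name your Emby container shows on the Docker tab):

    docker logs --tail 200 EmbyServer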

     

     

    Sent from my iPhone using Tapatalk

    While I wholeheartedly agree with the fact that you shouldn't expose your server to the public domain, I respectfully disagree that using OpenVPN is any safer than a reverse-proxy server with a proper TLS certificate handling encryption of authentication and traffic. As I understand it, you have 2 methods of encryption with OpenVPN: a preshared secret or a TLS/SSL-VPN method. The TLS/SSL method is much like the reverse proxy with a proper TLS/SSL certificate. There are specific uses for each of these types and I use both. I plan on having many web servers/services so it would be in my interest to use the latter. I won't get into it more than this as it is beyond the scope of the OP's question. I would not recommend doing either if you are unclear on the security risks or methods for mitigation.

     

     

    Sent from my iPhone using Tapatalk.

     

    Sorry, I have to disagree.

     

    A reverse proxy acts as an intermediary between the client and the web application you're accessing. It adds an additional layer between the client and the web application.

     

    A reverse proxy (in the scenario of using it to expose applications to the Internet) is available on the internet, so generally anyone who knows the url (unless you are going to set some really tight firewall rules) will have access to that application. Even with additional security like SSL (which just encrypts traffic between client and server), it is still directly connected to the web.

     

    In this case you are also going to be relying on the security system of each application for access. Many of the applications we all use (in a home server scenario) have different levels of maturity when it comes to their vulnerability to attack. In addition, many of these applications' authors (unRAID's included) advise not to expose their interfaces directly to the Internet.

     

    In contrast, a VPN has an additional layer of security, as a client has to first authenticate itself with the server before a user has access to the VPN. If you therefore trust the users of your LAN (which most people I know who use unRAID in a home setting would), you can even leave the applications unsecured by user IDs and passwords, as the security is managed by the VPN server.

     

    I would also add that if there are vulnerabilities, I would be more confident of the ongoing security of OpenVPN (being a widely used enterprise technology) and of exploits being patched promptly than of exploits being found and patched in many of these applications we all use.

     

     

    Sent from my iPhone using Tapatalk

    Add an nginx docker container mapping port 443 to 443.

     

    Use openssl to generate certs.

    https://www.digitalocean.com/community/tutorials/how-to-create-a-self-signed-ssl-certificate-for-nginx-in-ubuntu-16-04
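
    The cert generation from that guide is roughly the following (paths adjusted to match the config below):

    openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
        -keyout /certs/unraid.key -out /certs/unraid.crt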

     

    Create a config for reverse proxy:

    site-confs/www.conf

     

    ---

    server {

        listen 443 ssl;
        server_name unraid-ssl;

        ssl_certificate          /certs/unraid.crt;
        ssl_certificate_key      /certs/unraid.key;

        location / {
            proxy_set_header X-Real-IP  $remote_addr;
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_set_header Host $host;
            proxy_pass http://your_unraid_ipv4:80;
        }
    }

    ---

     

    I strongly advise against this if it is then to be exposed to the Internet.

     

    If it is on the LAN then it's fine. I guess there are use cases where HTTPS is required on your LAN too.

  9. Hi,

     

    i cannot find how to set the web gui to use https.

     

    Can someone point me to the right direction please?

     

    regards,

     

    Olivier

     

    This product requires a bit of tinkering. You could just load a linux VM and do a basic reverse proxy w/ SSL offloading to your storage server. Just another thought.

     

    Please don't expose your unRAID GUI to the Internet.

     

    unRAID is not hardened for such a use.

     

    My view is that unRAID 6.1.9 would fail even a basic security audit. It has not been patched since its release. I would say that 6.2 RC4 is better but again it's not intended to be exposed to the Internet. Doing so will expose you to what I consider unacceptable risk.

     

    Those risks are mitigated (somewhat) by the advice to keep it securely behind a firewall and not exposing the system or its GUI to the Internet.

     

    A good source of information on the current state of unRAID security is here:

     

    http://lime-technology.com/forum/index.php?topic=50643.0

     

    I personally do expose one of my unRAID Servers to the Internet for the purpose of facilitating a VPN connection and also serving a basic web site via the Docker platform which is running on unRAID.

     

    While it is not without risk, I feel this risk is mitigated by the fact that both dockers I use for this are updated regularly (or in the case of Apache, daily) and as such I feel that security patches are applied in sufficient time to deal with known threats.

     

    That doesn't mean however that there isn't a risk. As NAS points out deep into the thread I posted above, the risk I am managing through doing this can be described as:

     

    The bad news is what happens if someone exploits a vulnerability in your applications and gets root. At that point you are relying 100% on unRAID to protect you from attack escalation. The chances are you don't have a DMZ in this setup as current unRAID does not lend itself to this. Also this is where unRAID patching is critical. Docker/VM et al need to be up to date to reduce the risk of a known exploit being abused to break out into the host. Currently there is no known exploit, however there could be tomorrow, and this is why it is critical you apply security patches.

     

    The specific post in that above thread about doing what I do is here:

     

    http://lime-technology.com/forum/index.php?topic=50643.msg488115#msg488115

     

    Down to the help. If you're happy to accept the risk of exposing applications (hosted on a VM or the Docker Platform) running on unRAID to the Internet then I have a solution for you.

     

    Before I do, it was previously noted that a reverse proxy could be used. While this is true I would advise against this for your unRAID GUI.

     

    Many people use this method for accessing other applications on their LAN (usually the GUIs of other Docker containers). These applications often have their own security system built in (which may or may not be that hardened themselves), and with the addition of .htaccess password protection this can be reasonably ok. It's important to note though that these applications often don't provide you access to the functionality that the unRAID GUI does.

     

    For that reason I'd say stay away from reverse proxy for unRAID GUI access. In fact, I personally don't use reverse proxy for application access at all. I use a VPN connection.

     

    For completeness' sake here is a link to the linuxserver.io apache docker container:

     

    https://lime-technology.com/forum/index.php?topic=43858.0

     

    The following is an excellent guide we have posted on the website for setting up Apache to work as a reverse proxy:

     

    https://www.linuxserver.io/index.php/overview-reverse-proxy-with-docker/installation-of-apacheweb-docker/

     

     

    However, I will recommend OpenVPN-AS.

     

    OpenVPN Access Server is a full featured secure network tunneling VPN software solution that integrates OpenVPN server capabilities, enterprise management capabilities, simplified OpenVPN Connect UI, and OpenVPN Client software packages that accommodate Windows, MAC, Linux, Android, and iOS environments. OpenVPN Access Server supports a wide range of configurations, including secure and granular remote access to internal network and/ or private cloud network resources and applications with fine-grained access control.

     

    https://openvpn.net/index.php/access-server/overview.html

     

    Over at linuxserver.io we have a docker container running this application that you can use.

     

    https://lime-technology.com/forum/index.php?topic=43317.0

     

    The easiest way to install this (and other apps) is via the Community Applications plugin. Also a linuxserver.io creation.

     

    This plugin will allow you to easily search for and add any of the unRaid docker or plugin applications, along with some related optional utilities (automatic updates of plugins, backup of appdata shares)

     

    http://lime-technology.com/forum/index.php?topic=40262.0

     

    The application as setup in the container almost works "straight out of the box". It's as simple as installing the container, configuring some basic options (well documented in the support thread), changing and creating a password, creating a port forward on your firewall and downloading the .ovpn profile file to your client.

     

    15 mins of tinkering and you can connect to your LAN securely (noting what I have already said above) from a remote location as if you were local.

     

    That's how I would (and do) deal with your access issue.

     

     

     

    Sent from my iPhone using Tapatalk

    Not sure if this is the best place to put this, but it looks like Emby now has a newer public release (3.0.6070) than what I am running in this docker (mine reports Version 3.0.5972.0). Any chance of getting an update of this docker to the latest public release of Emby?

     

    I have been running this container for months now and it rocks!  Please keep up the good work. Thanks.

     

    Essentially I think this container is always going to be behind somewhat as Emby have a VERY regular release schedule.

     

    As containers go, I feel it gets maintained and updated very well.

     

    I keep an eye on this link:

     

    https://github.com/MediaBrowser/Emby/releases

     

    As long as there isn't something that critical in there (with respect to a recent release) - which I don't think there is (for me at least) at this time, I don't worry about it.

  11. This post relates to SSD TRIM Plugin.

     

    I have just upgraded my Main Server from 6.1.9 to 6.2.0 RC3.

     

    This morning I received a notification of the following error:

     

    fstrim: /mnt/cache: the discard operation is not supported
    

     

    This never happened on 6.1.9. I appreciate that no-one else has reported this (based on a forum search of the error), but I can't be the only person running this Plugin on 6.2.0 RC3.

     

    I've checked the Plugin and it is up to date.

     

    I'm running the cache in raid0 configuration:

     

    -dconvert=raid1 -mconvert=raid1

     

    The pool contains 3 SSD drives:

     

    CT250BX100SSD1_1514F00579EC - 250 GB (sdb)
    CT250BX100SSD1_1514F0057C13 - 250 GB (sdc)
    Samsung_SSD_850_EVO_250GB_S21MNXAGA24112P - 250 GB (sdi)

     

    Btrfs Pool: 750GB capacity - 47.5GB used - 703 GB free

     

    Can anyone help please!?
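
    In case it helps with diagnosis, the devices can be checked for whether they report discard support with (device names as per the pool listing above):

    lsblk --discard /dev/sdb /dev/sdc /dev/sdi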

  12. Have been trying to get this to run now for over a month (on and off) but this has now gotten to me and it's either me or the machine  :)

     

    I have gotten the webUI to come up, and I pointed it to the server, however I cannot get it to see any folders at all. The path should be something like '/mnt/user/movies' (since that is the actual path) but it won't accept it at all; it tells me there was an error adding the media path, please ensure the path is valid and the emby server process has access to it.

     

    Don't have much hair left...lol

     

    Ice

     

    You haven't provided me with enough detail to be sure BUT it sounds to me like you're misunderstanding how docker works.

     

    This is how mine is set up - works a treat:

     

    /config          /mnt/cache/appdata/emby/

    /mnt            /mnt/user/media/

     

    The docker path config ice pube has provided is almost typical. However I would make one minor amendment:

     

    /config          /mnt/cache/appdata/emby/
    /movies        /mnt/user/media/movies
    /tv               /mnt/user/media/tv
    

     

    If you mirror this then the path you enter into Emby for mapping your directory is NOT /mnt/user/media/tv or /mnt/user/media/movies; it is just /movies and /tv.

     

    If you want, think of them like shortcuts or symlinks! :)
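
    If it helps to see it outside of the unRAID template, those mappings are just plain docker volume mounts, roughly (image name is illustrative - use whatever your template points at):

    docker run -d --name=emby \
      -v /mnt/cache/appdata/emby:/config \
      -v /mnt/user/media/movies:/movies \
      -v /mnt/user/media/tv:/tv \
      emby/embyserver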

     

    I hope this makes it clearer for you! Emby is awesome!

    Yeah, honestly, at the least what is needed is a way to retain the users and their client certificates between updates. Needing to regenerate and reissue all user certificates to all clients can be a real limitation for some people. Fortunately not me really, since I have one user and two devices. But if I had a lot more I would certainly have to consider skipping some update points just to lighten the maintenance load, and that isn't the best solution for security :(

     

    I am not familiar with your personal setup, so please excuse me if I am appearing ignorant, but I am struggling to see what the maintenance overhead would be. You don't have to regenerate the certificates yourself. You just direct the user to the OpenVPN Connect page. They log in with their userID and password (which you have generated with a few one liners on the command line) and they download their auto connect certificate for the device they are using. Click Click. Done.
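
    For reference, the one-liners in question are the same commands quoted elsewhere in this thread (swap in the relevant username):

    docker exec -it openvpn-as useradd <yourusername>
    docker exec -it openvpn-as passwd <yourusername>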

  14. Oops, I realised that I didn't say once what my issue was!!

     

    I'm just posting to say that my server version hasn't been bumped for some reason :)

     

    I'm still on 2.0.24 instead of the latest 2.12 version

     

    I can't figure out why though, as I've definitely been updated to the latest docker image :P

     

    No worries mate. My excuse is that it's 8:18pm here and I've had a wine or 2  :)

     

    Interesting observation though. Mine doesn't appear to have been updated either but interestingly I am on 2.0.20.

     

    I'll raise it with Sparkly.

     

    Haha, nice!

    I'm supposed to be enjoying a holiday, but just can't resist updating my dockers :D

     

    I do wonder if it's just OpenVPN-as reporting the version incorrectly for some reason :P

    I couldn't find a --version command anywhere inside the docker itself though, so it's just a theory atm :P

     

    I've asked the question, I am sure Sparkly will look into it when he gets a minute.

     

    The very fact that you're reporting a different version to me though suggests that the old Container almost certainly did auto update (answer to a previous question in the thread). The new Alpine Container has certainly had the auto update feature removed BUT as you say, we are not showing 2.12. The common thing between us both is that we upgraded from the old Container - I'll test doing a completely fresh install with fresh appdata and see if that works.

     

    I deleted my app data config folder for this app and did a completely new install and the GUI is now reporting 2.12. I am not sure if this will end up being the formal advice but if you really MUST have the latest version now (assuming it wasn't the latest version before and the GUI was just not reporting it right) then doing what I suggest will work.
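
    Roughly, that amounts to the following (appdata path as per my setup; I deleted the folder, but moving it aside is the safer equivalent):

    docker stop openvpn-as
    docker rm openvpn-as
    mv /mnt/cache/appdata/openvpn-as /mnt/cache/appdata/openvpn-as.bak
    # then reinstall the container from Community Applications and reconfigure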

     

    It's not a huge drama really as it pretty much works out of the box and the setup and config is only 10 mins of your time. If you don't want to do this though, I suggest waiting for formal advice from Sparkly.

  15. Oops, I realised that I didn't say once what my issue was!!

     

    I'm just posting to say that my server version hasn't been bumped for some reason :)

     

    I'm still on 2.0.24 instead of the latest 2.12 version

     

    I can't figure out why though, as I've definitely been updated to the latest docker image :P

     

    No worries mate. My excuse is that it's 8:18pm here and I've had a wine or 2  :)

     

    Interesting observation though. Mine doesn't appear to have been updated either but interestingly I am on 2.0.20.

     

    I'll raise it with Sparkly.

     

    Haha, nice!

    I'm supposed to be enjoying a holiday, but just can't resist updating my dockers :D

     

    I do wonder if it's just OpenVPN-as reporting the version incorrectly for some reason :P

    I couldn't find a --version command anywhere inside the docker itself though, so it's just a theory atm :P

     

    I've asked the question, I am sure Sparkly will look into it when he gets a minute.

     

    The very fact that you're reporting a different version to me though suggests that the old Container almost certainly did auto update (answer to a previous question in the thread). The new Alpine Container has certainly had the auto update feature removed BUT as you say, we are not showing 2.12. The common thing between us both is that we upgraded from the old Container - I'll test doing a completely fresh install with fresh appdata and see if that works.

  16. Oops, I realised that I didn't say once what my issue was!!

     

    I'm just posting to say that my server version hasn't been bumped for some reason :)

     

    I'm still on 2.0.24 instead of the latest 2.12 version

     

    I can't figure out why though, as I've definitely been updated to the latest docker image :P

     

    No worries mate. My excuse is that it's 8:18pm here and I've had a wine or 2  :)

     

    Interesting observation though. Mine doesn't appear to have been updated either but interestingly I am on 2.0.20.

     

    I'll raise it with Sparkly.

  17. Image has been rebased to ubuntu xenial and the app itself updated to 2.12.

     

    More info on changes at linuxserver.io

     

    http://lime-technology.com/forum/index.php?topic=50793

     

    Hi sparkly!

     

    Unless I'm looking in the wrong place ;)

     

     

    Access Server version: 2.0.24

     

    I'm definitely on the xenial version, just based on the logs:

     

    [s6-init] making user provided files available at /var/run/s6/etc...exited 0.
    [s6-init] ensuring user provided files have correct perms...exited 0.
    [fix-attrs.d] applying ownership & permissions fixes...
    [fix-attrs.d] done.
    [cont-init.d] executing container initialization scripts...
    [cont-init.d] 10-adduser: executing...
    
    -------------------------------------
    _ _ _
    | |___| (_) ___
    | / __| | |/ _ \
    | \__ \ | | (_) |
    |_|___/ |_|\___/
    |_|
    
    Brought to you by linuxserver.io
    We do accept donations at:
    https://www.linuxserver.io/donations
    -------------------------------------
    GID/UID
    -------------------------------------
    User uid: 99
    User gid: 100
    -------------------------------------
    
    [cont-init.d] 10-adduser: exited 0.
    [cont-init.d] 20-time: executing...
    
    Current default time zone: 'Europe/London'
    Local time is now: Sat Aug 6 10:40:29 BST 2016.
    Universal Time is now: Sat Aug 6 09:40:29 UTC 2016.
    
    [cont-init.d] 20-time: exited 0.
    [cont-init.d] 30-config: executing...
    [cont-init.d] 30-config: exited 0.
    [cont-init.d] 40-openvpn-init: executing...
    [cont-init.d] 40-openvpn-init: exited 0.
    [cont-init.d] 50-interface: executing...
    MOD Default {} {}
    MOD Default {} {}
    MOD Default {} {}
    MOD Default {} {}
    [cont-init.d] 50-interface: exited 0.
    [cont-init.d] done.
    [services.d] starting services
    [services.d] done.

     

    Any ideas? :)

     

    Have you updated the container?

     

    I thought I'd made that clear in my post? :D

    Yes, I updated it this morning, and restarted it about 5 mins ago to see if that made a difference too, it did not :)

     

    You were very quick off the mark with your reply. I realised that after I had posted and hit delete almost instantly but like I said, you were just too quick for me  ;)

     

    I have just forced my container to update with the following results:

     

    [s6-init] making user provided files available at /var/run/s6/etc...exited 0.
    [s6-init] ensuring user provided files have correct perms...exited 0.
    [fix-attrs.d] applying ownership & permissions fixes...
    [fix-attrs.d] done.
    [cont-init.d] executing container initialization scripts...
    [cont-init.d] 10-adduser: executing...

    -------------------------------------
    _ _ _
    | |___| (_) ___
    | / __| | |/ _ \
    | \__ \ | | (_) |
    |_|___/ |_|\___/
    |_|

    Brought to you by linuxserver.io
    We do accept donations at:
    https://www.linuxserver.io/donations
    -------------------------------------
    GID/UID
    -------------------------------------
    User uid: 99
    User gid: 100
    -------------------------------------

    [cont-init.d] 10-adduser: exited 0.
    [cont-init.d] 20-time: executing...
    [cont-init.d] 20-time: exited 0.
    [cont-init.d] 30-config: executing...
    [cont-init.d] 30-config: exited 0.
    [cont-init.d] 40-openvpn-init: executing...
    [cont-init.d] 40-openvpn-init: exited 0.
    [cont-init.d] 50-interface: executing...
    MOD Default {} {}
    MOD Default {} {}
    MOD Default {} {}
    MOD Default {} {}
    [cont-init.d] 50-interface: exited 0.
    [cont-init.d] done.
    [services.d] starting services
    [services.d] done.

     

    Everything is working fine. I noticed that I had to reset the admin password via this command:

     

    docker exec -it openvpn-as passwd admin

     

    and recreate the users I had via this command:

     

    docker exec -it openvpn-as useradd <yourusername>

     

    But once I did that, everything is working fine. Note that the auto update "feature" has been removed.

  18. I installed this last night and initially it was very slow as all the thumbnails were generated. 

     

    This morning however it's no longer exhibiting the behaviour you're describing.

     

    Sent from my LG-H815 using Tapatalk

     

    I was curious about this and wanted to confirm what / where the "issue" was.

     

    I reset my installation and added my photos again and now I feel it is most certainly the thumbnail generation / adding photos to the library that causes the performance lag. I added a small sub folder first (10 photos) and there was almost no issue at all. This must have been due to the relative speed to add just 10 photos. Then I added a larger subfolder (1000 photos) and then tried to use the app and the thing was laggy. I left it 30 mins and everything was zippy and fine when I came back to it.

     

    It is obviously an issue with the app itself and not the Container implementation etc. It is also clear to me that it is likely to be experienced by users who initially install and want to import a very large library OR add a large number of photos at a later date AND perhaps try to use the app while the files are processing and being imported.

     

    I think the advice with this app has to be: add your photos and give the app time to add them into the library and process the files. Then, when done, performance will be fine.

  19. I love you jbrodriguez! been waiting for an android app forever!

     

    Brad just put me onto this app and I have to say it looks fantastic. I am certainly a big +1 for iOS support as I do not use Android and have reflected this on your feedback forum.

     

    I did notice that you mention your focus is Android as there is already an iOS app? That there may be, but yours seems to be light years ahead! I would certainly pay for your app on iOS.

     

    Congratulations on a great development!

  20. I've had this installed for a while but I hadn't played with it much. When I'm browsing my photos it's extremely slow. like every click takes 15 seconds. Not the first click while a drive spins up. I mean every...single...click takes 15 seconds. Going through folders, clicking on pictures, everything is extremely slow. My system is in my signature, so I don't think that would be an issue. I don't see much in the way of configuring. This is all being done on my LAN, not from an outside connection.

     

    Anyone have the same experience or have any tips?

     

    No, I'm afraid I haven't. I have noticed however that when a folder contains LOTS of photos performance can take a hit, but not to the level you're experiencing.

    Read the readme on github or docker hub; they are linked in the first post of the thread.

     

    Sent from my LG-H815 using Tapatalk

     

     

    Find the command in the docker hub readme.  :)

     

    For those who are lazy and don't want to go looking, drop to the command line on your server (via console, telnet or ssh) and do the following:

     

    docker exec -it openvpn-as passwd admin

     

    Then follow on screen prompts.

  22. Can someone help with a permissions error I keep getting

    Cannot change permissions of /downloads/SAB/incomplete-downloads/examplefilename.rar

     

    Permissions are set to 777 and I did run "Docker Safe New Permissions" but I still keep getting the error. The only thing I have changed was moving from NFS mounts to SMB using the unassigned devices plugin.

     

    I had an issue recently where folders could not be created. I then checked and, for some reason, the owner:group of the destination folder had changed to root:root, so the service did not have the permissions to do anything with that folder (creating files or otherwise), meaning that downloads remained in the incomplete folder indefinitely.

     

    ls -la
    

     

    The above command confirmed this for me. So I changed the owner and the permissions of the destination folder (recursive) and all started working again:

     

    chown -R owner-user:owner-group directory
    
    Example:
    chown -R nobody:users /directory
    

     

    chmod -R ### /directory
    
    Example:
    
    chmod -R 777 /directory
    

     

    I think I have the commands correct. Hope it works for you.

    Anyone know how to downgrade mono in the docker? I see I have 4.4.1 and I am now having issues playing videos. I know there have been reported problems with Emby and 4.4.1. I can connect to the docker and see my version but have no idea how to downgrade.

     

    Thanks

     

    Have you experienced any issues or are you just preparing for possible issues? I have upgraded to the latest version and everything seems to be running just fine.
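
    If you want to double-check what the container is actually running first, from the host (container name as it appears on your Docker tab):

    docker exec -it EmbyServer mono --version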
