Posts posted by ken-ji

  1. 12 hours ago, darthdeus said:

    edit: If I wanted to delete the whole thing and start over, I just need to stop & delete the container, its config folder, and the directory on the share, right?

    Correct.
    This issue of syncing stopping is something I haven't encountered, and I have no idea what's causing it.

    I only have 3.5GB in my Dropbox account and I've never seen it stop syncing. I've lost the linked account data, yes, but the sync has never hung.

  2. On 9/7/2020 at 8:32 PM, jang430 said:

    #!/bin/bash
    # Start the Management Utility
    /usr/local/sbin/emhttp &

    #Setup drivers for hardware transcoding in Plex
    modprobe i915
    chown -R nobody:users /dev/dri
    chmod -R 777 /dev/dri

    @jang430 If you haven't figured it out, your go file is wrong.

    It should be:

    #!/bin/bash
    
    modprobe i915
    chmod -R 0777 /dev/dri
    
    # this must be the last thing in the go file, unless you really know what you're doing
    # Start the Management Utility
    /usr/local/sbin/emhttp &
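
    A quick way to confirm this took effect after a reboot (nothing Unraid-specific, just listing the device permissions):

    # the card/render device nodes should now be world-writable
    ls -l /dev/dri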

     

  3. Because Dropbox is meant to be run and used by a single user, the container must create files as a single user, and any access over Samba/NFS shares must result in the files being owned by that same user; otherwise issues will occur (like being unable to update directories and files modified from other devices).
    There is a setting you can apply (though I'm not sure whether it will mess things up further for you).
    In Settings | Samba | SMB extras, I added this under the [global] section:

    force user = nobody
    force group = users

    This makes all access to my files over Samba happen as the nobody user (and the users group, though the group is usually not important).

     

    Enabling this will probably break any existing share that is Secure, Private, or even Public, if the files and directories were created by the existing users you may have defined. (Running either of the Fix Permissions tools on the affected shares should fix the issue, though.)
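
    If you'd rather do it from the terminal, this is roughly what those tools boil down to; a minimal sketch, assuming the affected share is named Dropbox (substitute your own share name):

    # reset ownership to nobody:users and open up permissions on one share
    chown -R nobody:users /mnt/user/Dropbox
    chmod -R u=rwX,g=rwX,o=rwX /mnt/user/Dropbox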

     

    You can turn this on and run some tests on an existing share to understand what it does; reversing it is as easy as removing the lines.

     

    I have suggested that Unraid enable this out of the box, but AFAIK there's no way to do that without causing issues for users who don't want it enabled.

     

  4. This Docker container will run stuff as the nobody user, hence the error message above regarding permissions. Docker Safe New Perms will work until new files get sync'd or updates are made.

    I'm guessing, but are you accessing the Dropbox share in guest mode (i.e. it's set to Public) while you have a user in Unraid that your Windows PC is set to use/match? (See Windows Credentials in Settings.)

    If so, you probably want to blow away this container, the appdata/dropbox folder, and the Dropbox share/folders (i.e. really start over).

    Then, when you reinstall the container and before linking to your account, make sure that the user ID to run as is set to the user ID of your user (the one Windows is using to connect).

    [screenshot]

    If you don't know your user ID, just open a terminal to Unraid and run this:

    root@MediaStore:~# id username
    uid=1000(username) gid=100(users) groups=100(users)

    which is usually 1000.

  5. It won't bypass the integrity, but I wonder how you plan to run the second instance. As a VM perhaps? Or as a Docker container with its own IP? I ask because you can't run a second Samba instance otherwise, due to a conflict over the service ports (only one process can listen on a given port).
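
    You can see the conflict for yourself from the Unraid terminal; a quick check with the standard iproute2 tools (nothing Unraid-specific):

    # show which process already listens on the SMB/NetBIOS ports
    ss -tlnp | grep -E ':445|:139'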

  6. On 9/27/2020 at 1:36 PM, nerbonne said:

    [Cache]

    I have a Kingston 480GB A400 SATA 3 2.5" Internal SSD SA400S37/480G that I'm using for a cache drive. I've had the drive for about two years. It has no SMART errors and has 48% life left. For the first year or so it worked fine. For the last year, I've had to start managing how much data is being written to the drive. After sustained writes of 20 MB or more per second for over a minute or so, the drive gets super slow and eventually makes the system unusable. I read posts saying this drive is problematic and can be fixed with a firmware update. So I pulled it out and updated the firmware using my Windows PC. Put it back in; it seemed to work better initially but then started having problems again. So, the question: is this just this SSD, or is this a problem with all SSDs? Someone on a different thread recommended some 960 GB SanDisk Ultra II SSDs. Said they weren't the fastest but they just work. I don't care about FAST so much as JUST WORK. Also, would two SSDs work better? What about two 7200 RPM drives? Thoughts? I am at my wits' end with this Kingston SSD. I had a 500 GB 4500 RPM drive in and it was slower but never made the system hang.

     

    [GPU]

    I have a 1050 Ti installed, but I don't think it's doing anything. The server is headless, so I don't use it for a monitor. How do I test whether GPU transcoding is working? Someone mentioned there would be a checkbox in the Plex settings if it were supported. If it's not doing anything, I should take it out to save the power it uses to keep the card on at idle.

    As on any forum, one reason you didn't get a response is that nobody had anything to contribute, or people just missed your post.

    But in any case, about the SSD: just about any SSD should be OK. A possible cause of the slowdown is that your SSD has run out of free/empty blocks, so the drive now has to do a block erase before it can write anything, causing a massive slowdown. Are you TRIMming the SSD? And is the SSD connected to something other than an HBA? I ask because some HBAs do not support running TRIM. There's the Dynamix TRIM plugin for this.
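
    If you want to test before setting up the plugin's schedule, a one-off TRIM from the terminal looks like this; a sketch assuming your cache pool is mounted at /mnt/cache:

    # trim unused blocks on the cache filesystem and report how much was trimmed
    fstrim -v /mnt/cache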

     

    As for the GPU and Docker passthrough, you'll need the NVIDIA drivers installed via the linuxserver.io plugin, or another plugin that lets you rebuild the kernel as needed. In a nutshell, for a Docker container to use host hardware, the host needs to be able to use it via host drivers; the device is then passed to the container so that container apps can interact with it. I believe there's more in the specific thread for the Plex container you have installed.
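
    Once the drivers are in place, a quick way to verify transcoding is actually hitting the GPU (assuming the plugin's nvidia-smi tool is available on the host) is to watch it while you force a transcode in Plex:

    # refresh the GPU status every second; a working transcode shows the
    # Plex transcoder in the process list at the bottom
    watch -n 1 nvidia-smi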

  7. 2 hours ago, CatMilk said:

    That's working for me. It was the letter case of the password. When I was typing in the password, Windows Explorer was not telling me the password was incorrect, yet it was passing it through as lower case and not mounting correctly. Thank you for spotting that. It's been driving me nuts.

    @CatMilk Not sure how the issue was fixed for you. Did you disable the case sensitive settings on the shares?

  8. Hmm. You have case sensitivity turned on for what I think are the affected shares, along with not preserving case (i.e. forcing names to all lowercase, the default). Any particular reason?

    I think this might be what's giving you the unavailable error messages. All your samples that can no longer be accessed are mixed case; the one you said was fine is all lowercase.
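
    For reference, the usual case-insensitive-but-preserving setup in Samba looks like this (standard smb.conf options; a sketch of what I'd expect for those shares):

    case sensitive = no
    preserve case = yes
    short preserve case = yes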

     

  9. I will point out that my preferred setup is for the Unraid NAS + WebUI to be available only on the main network.

    If necessary, assigning an IP (statically or via DHCP) on the VLANs is also OK, but most users seem to get confused when Unraid networking doesn't work properly, and this is usually because the default route to the internet goes via a different VLAN/network interface than expected.

    I also use custom Docker networks, since I need proper IPv6 support, though my MikroTik router doesn't want to support DHCPv6, just SLAAC, and my ISP does fully dynamic /56 prefix allocation, which is a pain for Docker networks on IPv6.
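
    For the curious, creating such a dual-stack network by hand looks roughly like this; a sketch where the parent interface, subnets, and network name are all made up for illustration:

    # macvlan Docker network with both IPv4 and IPv6 subnets and gateways
    docker network create -d macvlan \
      --subnet 192.168.14.0/24 --gateway 192.168.14.1 \
      --ipv6 --subnet fd00:14::/64 --gateway fd00:14::1 \
      -o parent=br0.14 vlan14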

    7 minutes ago, bdydrp said:

    This was the only way it would show up in the drop down box when adding new apps and selecting the interface!

    As long as the Docker network is completely defined (subnet address and gateway), it will be available; your earlier screenshots did not have gateways assigned.

    9 minutes ago, bdydrp said:

    Leaving out the IP assignment, the Unraid webui has still assigned itself a DHCP address. I can see this in my UniFi software. And it's a random IP inside my DHCP pool as well. Somewhere in the middle!!!

    I'll bet it's because, after configuring Unraid not to get IP addresses for the VLANs, the lease is still considered live by your DHCP server. It should not be renewed after the typical one-day lease lifetime. "Somewhere in the middle" may be typical depending on the DHCP server used: some hash the client MAC address and DUID to pick a suitable number in the leasing range, which means the host will likely get the same IP over and over (unless there are conflicts); others just pick the next free address and assign it.

  10. You need to configure the network of the Docker container to the correct one - br0.14 in your case.

    Also, the Docker network needs the correct gateway defined, otherwise Docker will not be able to use it - that's 10.10.14.1 in your case.

    You also might not want to set an IP address and gateway for the VLAN in the Unraid network settings, so that Unraid is not present on that VLAN (and doesn't confuse you as it tries to figure out which interface it's supposed to use to reach the internet).
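
    If you prefer to define that network from the command line, a sketch using your values (the /24 mask and the network name are assumptions):

    # custom Docker network on the VLAN subinterface, with the gateway defined
    docker network create -d macvlan \
      --subnet 10.10.14.0/24 --gateway 10.10.14.1 \
      -o parent=br0.14 vlan14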

  11. If you have VLAN support on your switch, you can set it up like this:

    [screenshot]

    This will create subinterfaces eth0.x/br0.x (leaving bridging on is up to you, as is joining eth1). Then you configure the Docker network on br0.x:

    [screenshot]

    I just happen to have my NICs split off and keep all my Docker containers on br1 (I set this up before the host access option for Docker existed, and I haven't tested whether it works as I'd like).

    So my Docker containers (like my nginx reverse proxy) look like this:

    [screenshot]

  12. Setting it all to automatic would make your pfSense assign Unraid an address via DHCP three times, once for each subnet, along with a default gateway each time. So don't. Just do static assignment and decide which interface (typically eth0/br0/bond0) is the "main" one that will be used to reach the internet.

    2 hours ago, 905jay said:

    Is it just a matter of me simply leaving the gateway blank for the interfaces eth1 & eth2?

    Should I leave it blank for all three interfaces (eth0, eth1 & eth2)?

    Just leave it blank for the non-main interfaces; leaving it blank for all three would make your Unraid unable to reach the internet.

  13. Actually, the GUI won't let you change routes that are already defined by the system, i.e. those that result from Network settings.

    And the reason it's the wrong fix is that it won't stay like that: you restart Unraid or make networking changes, and all these extra gateways will come back and confuse you, particularly if you don't reboot until months down the line.
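
    To see what routes the system has actually installed, you can dump the routing table from the terminal (plain iproute2, nothing Unraid-specific):

    # list IPv4 routes; multiple "default via ..." entries are the extra gateways
    ip -4 route show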

  14. You seem to have default gateways defined for each subnet/NIC interface. Unless it's a router (and Unraid is not that capable), don't specify a default gateway for every interface. Pick the default/main interface only (usually br0/eth0). For VMs on the other interfaces, DHCP can take care of it (or use static settings). Same with Docker: the custom Docker network you define for each interface can have the gateway defined there, not at the Unraid level.

     

    [screenshot]

    [screenshot]

    Just so you don't get confused: I'm running all my Docker containers on a secondary interface to allow Docker and Unraid to communicate. It's been like this since the 6.3 series.

  15. Sorry, I missed something. It should have been:

    scp ~/.ssh/id_rsa.pub root@tower.local:/boot/config/ssh/root.pubkeys

    On my side, I've got both my Mac and Windows machines configured with an ssh config that always logs into my Unraid boxes as the root user, so I don't need to specify it.
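
    For reference, a minimal sketch of that ssh config (~/.ssh/config on the Mac; the alias and hostname here are assumptions):

    Host tower
        HostName tower.local
        User root

    With that in place, plain ssh tower or scp file tower:/path logs in as root without spelling it out.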

    However, the fact that you are getting the usage output means there may have been an unexpected argument on your command line, as scp expects at least two arguments: the source and the destination. And if it's not working for you, you can always do this on your Mac to copy the public key into the clipboard:

    pbcopy < ~/.ssh/id_rsa.pub

    then ssh into Unraid and use vi or nano to create /boot/config/ssh/root.pubkeys, pasting in the clipboard with Cmd+V.

  16. tower.local is the default name of an Unraid server from the macOS point of view, but yes, it can be replaced with the IP.

    id_rsa.pub is the default public key file after generating with ssh-keygen.
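
    If you don't have a key pair yet, generating one with the defaults looks like this (run it on the Mac and accept the default path):

    # creates ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub
    ssh-keygen -t rsa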

     

    This could be made into a plugin, but I'm just a regular Unix guy, so it never occurred to me that a UI for this might be needed.

    That said, an argument could be made for the User Scripts plugin, which allows you to write scripts that run natively in Unraid, either on a schedule, on special events like startup, or on demand from the web UI; then you wouldn't need to mess with SSH key authentication at all.

  17. 11 hours ago, Maddeen said:

    unRAID is not able to handle a passwordless (via ssh key) ssh login without getting a master's degree in development ;-)
    For me it was not possible to solve the riddle of uploading a pub key into folders that are not present and, additionally, doing the necessary scripting to make sure the ssh key is still present after reboots. Without this scripting the ssh key will always be lost when restarting your server.
    Maybe some of the Limetech guys will provide us a simple, user-friendly WebGUI setup to easily upload an SSH public key without all the scripting.

    No need to script anything.

    And you only need to do this once.

    On your Mac:

    scp ~/.ssh/id_rsa.pub tower.local:/boot/config/ssh/root.pubkeys

    then ssh into Unraid and run:

    sed -i~ -e 's#^AuthorizedKeysFile.*#AuthorizedKeysFile /etc/ssh/%u.pubkeys#' /etc/ssh/sshd_config
    cp /etc/ssh/sshd_config /boot/config/ssh/
    /etc/rc.d/rc.sshd restart

    What this does: the scp copies your public key onto the USB drive; the sed then changes the sshd config so it looks for the authorized keys file at /etc/ssh/%u.pubkeys (the %u is expanded to the user trying to log in, so root gets /etc/ssh/root.pubkeys). Whenever the sshd server starts up, all the files in /boot/config/ssh are copied to /etc/ssh and their permissions are set to 600 (read/write by owner only).

     

    So there's no need to add PuTTY (unless you really want to use PuTTY) and no need to keep the root password in plaintext inside your scripts.
