Posts posted by hackersarchangel

  1. Good evening :)

    I have the container installed and so far the logs say everything is working as expected. However I’m attempting to access other containers, and I believe I have followed your guide properly, but it’s not working.

    Edit: I forgot to mention…
    I added a network using the following:

     

    docker network create container:passthroughvpn


    Which then made it a selectable option in the drop down menu.


    I added a port using the “Add another path, port, variable, device” option, and here is where my possible confusion comes in. Your guide says the container port is the exposed port, but that I need to access the service using the host port you specified in the directions on GitHub. I want to confirm I have that right: I set the port the service expects to be reached on as the Container port, and whatever port I want to use externally as the Host port.

    That said, I like using the default ports of each service, so is that a possibility for me to do so?

    Also, I know the container itself is working as it was working with the other VPN container I was using until I decided to switch.

     

    Edit: I resolved the issue. I am accessing the web interfaces from my WireGuard VPN to the network, which reports me as being 172.x.x.x, and setting the LAN_NETWORK to match that resolved my issue. However, I did try setting it to 0.0.0.0/0 and that did not work, and “172.x.x.x/24, 192.x.x.x/24” did not work either. I was still able to access via 172.x.x.x, but not 192.x.x.x. If that could be fixed somehow to allow access from multiple IP ranges, that would be fantastic.

    That said, great work! Glad to have found a “generic” VPN container, and if there is anything I can do to help out, let me know.
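    For anyone landing here later, a minimal sketch of how a comma-separated LAN_NETWORK value is typically split by a container's entrypoint script. The variable name comes from this container's docs; the subnets below are placeholders for your actual LAN and VPN ranges, not values from the thread.

    ```shell
    #!/bin/bash
    # Placeholder subnets; substitute your real LAN and WireGuard ranges.
    # Note: no trailing dot inside an address, and no spaces around the comma.
    LAN_NETWORK="192.168.1.0/24,172.16.0.0/24"

    # Split on commas the way an entrypoint script typically would,
    # handling one route/firewall rule per range:
    IFS=',' read -ra RANGES <<< "$LAN_NETWORK"
    for range in "${RANGES[@]}"; do
      echo "allowing $range"
    done
    ```

    If multiple ranges still fail, a malformed entry (like the stray dot in "192.x.x.x./24" above) is a likely culprit, since one bad CIDR can break the whole list.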

  2. 9 minutes ago, etsjessey said:

    *Sigh* I have another weird share question that doesn't make logic sense to me...lol

     

    The same share I mentioned before will let me copy a file to and from the share, but will not let me open a folder to view its contents...
    I checked permissions, and the same user that has read/write access to that share has read/write access to the folders, marked recursive. I must be missing a line of xml or something that will allow that... any ideas?

     

    Thanks in advance!

    Do you have eXecute permissions? That's what allows you to traverse it.
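    A quick throwaway demonstration of that traversal point, using temp paths (run it as a regular user; root bypasses these permission checks):

    ```shell
    #!/bin/bash
    # Throwaway demo: a directory needs the eXecute bit before you can traverse it.
    dir=$(mktemp -d)
    mkdir "$dir/noexec"
    echo "hello" > "$dir/noexec/file.txt"

    chmod 600 "$dir/noexec"   # read+write on the directory, but no execute
    # Run as a regular (non-root) user, this read fails: traversal needs x.
    cat "$dir/noexec/file.txt" 2>/dev/null || echo "cannot traverse"

    chmod 700 "$dir/noexec"   # put the execute bit back
    result=$(cat "$dir/noexec/file.txt")
    echo "$result"            # hello
    rm -rf "$dir"
    ```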

  3. 5 minutes ago, etsjessey said:

     

     

    That was exactly the problem! I was able to change everything to one user but, it takes away root access.
    If I only have one other user besides root what would I do to accomplish changing permissions back to root but, also giving that one user access? Thank you for the detailed explanation by the way. For someone like me who didn't particularly know what each part meant but actually wants to learn, it's nice to finally know!

    That's interesting, root should always have access, since root = God. But you can't log in to the share as root; that's a big security no-no. *wags finger*

    The changes I described are done to the folder via ssh, so I'm not sure I'm following what you did in that regard. As for the config on the share itself via SMB that looks fine.

    Also, it's late for me so I may not be grokking this well. I'll check again tomorrow and see if I read your post differently LOL
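    One common pattern for the situation above: keep root as the owner and give the one other user access through a shared group, instead of handing the whole tree to that user. The group name, user name, and share path below are placeholders, not values from the thread.

    ```shell
    # Illustrative commands, to be run as root; "mediausers", "jessey", and the
    # share path are placeholder names:
    #
    #   groupadd mediausers
    #   usermod -aG mediausers jessey
    #   chown -R root:mediausers "/mnt/user/share"
    #   chmod -R 770 "/mnt/user/share"
    #
    # The resulting 770 mode (owner+group rwx, everyone else nothing) can be
    # verified on any directory with stat:
    dir=$(mktemp -d)
    chmod 770 "$dir"
    mode=$(stat -c '%a' "$dir")
    echo "$mode"    # 770
    rm -rf "$dir"
    ```

    With that layout, root keeps full access as the owner, and the user gets full access through group membership (a re-login is needed for new group membership to take effect).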

  4. 20 minutes ago, etsjessey said:

    It looks like the version negotiated is 3.02 when I looked in MacOS

    Ok, the other thing it can be is folder permissions on the server side, meaning you gave that user access in the config but the actual folders and files are still restricted. SSH into the server as a user that can gain root/sudo privileges, and run the following on the folders in question:

    ls -la

    This will tell you what permissions are on the folders. If they don't match the user OR a group the user is in then you would see what you are seeing without giving everyone permission to access.
     

    *** All commands below may need sudo or root access, proceed with caution! ***
    If you will be the only user accessing these (or that specific user credential will be the only one) do the following:

    chown -Rv username "/path/to/folder"

    If other users will be using the share with different credentials, do this instead:

    chmod -Rv 770 "/path/to/parentfolder"
    chown -Rv username:sharedgroup "/path/to/folder"

    Explanation of the commands (I'm not assuming you don't know what you are doing; this is for those that come after us and have all the questions!):

     

    ls -la will show the permissions on each entry: who the owner and group attached to the folder are, and what level of access each "ring" has. rwx means Read Write eXecute, and those bits control what you can do in the folders. The permissions are in 3 sets: the first is the user (owner), the second is the group, and the third is everyone else.

     

    chmod is CHange MODe, and that allows you to change the access levels of each category explained above. 7 = rwx, 6 = rw. The eXecute bit (a numeric value of 1) must be set on all directories or you will be unable to enter those folders. Example: chmod 100 "/home/user" sets only the owner's execute bit on your home folder (and clears all the other bits, so in practice you would use something like 700).

    chown is CHange OWNer; -R is recursive, -v is verbose. In username:sharedgroup, the owner is the first half before the colon and the group is the second half.

     

    Hope this helps anyone else that comes along. If you have made it this far, welcome!
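    To make the numeric modes above concrete, a worked example on a scratch file (stat's %A prints the symbolic form of the mode):

    ```shell
    #!/bin/bash
    # Map a few octal modes to their rwx sets on a throwaway file.
    dir=$(mktemp -d)
    touch "$dir/f"

    chmod 770 "$dir/f"; m770=$(stat -c '%A' "$dir/f")
    chmod 640 "$dir/f"; m640=$(stat -c '%A' "$dir/f")
    chmod 100 "$dir/f"; m100=$(stat -c '%A' "$dir/f")

    echo "$m770"   # -rwxrwx---   7 = rwx for owner and group, nothing for others
    echo "$m640"   # -rw-r-----   6 = rw for owner, 4 = r for group
    echo "$m100"   # ---x------   only the owner's execute bit is set
    rm -rf "$dir"
    ```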

  5. 5 hours ago, testdasi said:

    Doable with a caveat. You need at least 1 device in the array. That's a requirement.

    Some were even able to make do with just an additional USB stick (in addition to the Unraid OS stick).

     

    I was able to move everything (docker, libvirt, appdata, vdisk, daily network driver) to my zfs pool of 2 NVMe - half in RAID-0 and half in RAID-1.

     

    My array currently has a single 10TB spinner that is spun down most of the time. I only use it as emergency temp space (e.g. when upgrading my SSD backup btrfs raid-5 pool).

     

    Terrific. I have some old thumb drives I'm about to retire thanks to Grub2 File Manager so I can co-opt one or two of those to satisfy that, assuming it is OK with them.

     

    I'll have to look into what it will take to do that kind of setup, if you have a general direction (RTFM pointers) I'd appreciate it. I'm a terminal junkie so that will work for me just as well as a GUI.

  6. Quick question about the plugin. It is my understanding that unraid has a cache disk, and it asks for a Parity disk and such. I want to use ZFS as much as I can, and wasn't planning on having a cache drive.

    Disk arrangement as planned: NVMe drive as a VM hosting drive for the 2-3 VM's I'll have, preferably using ZFS. Will accept doing the default UnRaid setup for this if necessary.
    Two hard drives already in a mirrored pool setup, moving from FreeNAS to UnRaid using a forced zpool import (zpool import -f)
    Have UnRaid store all data on the pool, not including VM files which will be backed up to the Zpool.

     

    Does this sound doable? I'm going to build a server at some point in the future once I get all the parts together.
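    For the record, a hedged sketch of the FreeNAS-to-UnRaid move described above. The pool name "tank" is a placeholder; -f forces the import of a pool that was last used on another system, which is the "force" part mentioned in the list:

    ```shell
    # Run from the Unraid terminal after the ZFS plugin is installed.
    zpool import            # list pools visible to this host (no import yet)
    zpool import -f tank    # force-import the mirrored pool from the old box
    zpool status tank       # verify both mirror members are ONLINE
    ```

    The export step on the FreeNAS side first (zpool export tank) avoids needing -f at all, when that's still an option.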
