Posts posted by nmkaufman

  1. 18 hours ago, manrw said:


    Hi,
    I'd like to emphasize that I'm not the image author but only created the template.
    This means I generally can't help with modifications beyond what my default template provides.

    That being said, I think you have multiple options to fix the container permissions:

    • Ensure that the directories are owned by the docker user. This is usually "nobody". You can also add this user to a group on your host machine and permit write & read to this user. The application shouldn't need execute.
    • It seems that the container runs as root by default: https://github.com/slskd/slskd/blob/master/docs/docker.md . You could ensure that root can read & write to these directories.
    • In the worst case, you could permit all users to write & read. This is considered bad practice and is not recommended. However, it might not be too bad if no parts of your server are exposed.

    If you still have trouble, I would suggest reaching out to the author of slskd. Keep in mind that they usually don't know "Unraid" or other distributions and they expect you to provide the docker run command (you can obtain this from Unraid). I hope this helps.

     

    I will play around with it this weekend.

     

    I believe the problem is that it's creating files owned by ROOT, while other containers default to NOBODY.

     

    It's not setting permissions that allow other users to access the files.

     

    The easiest solution would be to set the UMASK, but I tried passing a UMASK environment variable to the container and it didn't seem to work. I did see a Reddit thread where someone claimed this worked, though, so I'm going to experiment further.
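
    For anyone following along, this is roughly what I've been adding as extra parameters while testing. Whether this image actually honors any of these variables is exactly what I still need to confirm, so treat it as a sketch rather than a fix:

    # Extra environment variables I'm experimenting with (none confirmed to be
    # honored by this particular image):
    #   UMASK=000        -> would make new files group/world readable+writable, if respected
    #   PUID=99 PGID=100 -> Unraid's nobody:users, the convention many other images follow
    docker run -d --name slskd \
      -e UMASK=000 \
      -e PUID=99 \
      -e PGID=100 \
      slskd/slskd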

  2. On 12/11/2023 at 2:13 PM, VelcroBP said:

    I'm not finding any info in either documentation regarding an existing configuration option for permissions or how to add them to the template. I'm probably overlooking it. 

     

    Did you have any luck getting permissions working?

     

    I tried adding --user 99:100 as a post argument, and it's still not working.

     

    I have to reset permissions on my downloads folder before other containers can interact with the files.

     

    It makes sharing/contributing next to impossible, as well.
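
    For now, this is roughly what I run afterwards to reset things; the path is a placeholder for my actual downloads share:

    # Hand the files back to nobody:users (99:100) and open up group/other read
    # so the other containers can see them. Path is a placeholder, not my real share.
    chown -R 99:100 /mnt/user/downloads/slskd
    chmod -R u=rwX,g=rwX,o=rX /mnt/user/downloads/slskd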

  3. Dang, this is so much faster than trying to manage ROMs via SMB. Thank you for the hard work!

     

    Now I have to decide if it's worth re-hashing my entire library to take advantage of it.

     

    I may see if I can trick RV into using my existing RomVault3_2.Cache file, but I don't have much hope.

     

    After some experimentation, I have some good and bad news.

     

    Good news: this container is much faster than SMB/virtiofs at crawling shares and scanning ROMs that have already been hashed.

     

    Bad news: hashing/repairing new ROMs is significantly slower than in a Windows VM, especially CHD files.

     

    It appears that while newer versions of CHDman are multi-threaded, the Mono interpreter is less so.

     

    Good news: an existing .cache file can be successfully dropped into the appdata/ROMVault folder, and Launchbox will use it.

     

    You can use a VM or a more powerful machine to hash/build your library initially, then drop the .cache file into the container for routine maintenance.
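
    In case it saves anyone a search, the "drop in" step is literally just a copy while the container is stopped. Paths are placeholders for your own appdata share and cache file:

    # Copy a cache built elsewhere into the container's appdata folder
    # (stop the container first; paths are placeholders).
    cp /mnt/user/transfer/RomVault3_2.Cache /mnt/user/appdata/ROMVault/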

  4. Have the 'Recently Added/Spotlight/Trending' sections on the homepage always been there? 

     

    Didn't see them in the changelog, but I've never noticed them before.

     

    I never noticed the 'Why we picked it' blurbs on the spotlight apps, either.

     

    All great stuff.

  5. I'm new to Unraid, but this plugin has immediately become one I can never live without. Thank you!

     

    My disks stay parked while browsing, searching, and even scanning with Emby/*arr or FreeFileSync.

     

    I tried countless solutions to achieve this in Windows over the years, with minimal success.

     

    That said, the CPU spikes are wild. Utilization per thread is low (<10%), but since it runs one thread per disk, my CPU basically never parks cores and is always in some state of turbo boost. My computer idles ~5 °C hotter with the plugin on vs. off (2x ancient E5-2600 v3).

     

    I've tried turning off 'run scan of each disk in a separate thread', but this results in my disks spinning up.

     

    I've also tried increasing the minimum interval between folder scans, but this also results in more spin-ups.

     

    Does the 'hack' that this plugin exploits only keep folders in memory if there's a constant folder-scan happening?

     

    Like I said, it's not that the CPU utilization is especially high; it's just high and persistent enough to defeat all the power-saving features of my computer.
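
    My (possibly wrong) understanding of the mechanism, which would explain the constant low-level CPU use, is that the plugin just keeps re-walking the directory trees so their metadata never ages out of the kernel's cache. Conceptually something like this; an illustration of the idea only, not the plugin's actual code:

    # Illustration only -- NOT the plugin's real implementation.
    # Repeatedly walking the tree keeps directory entries cached in RAM,
    # so listings never need to spin up a disk.
    while true; do
        find /mnt/disk1 >/dev/null 2>&1   # placeholder disk path
        sleep 10                          # placeholder interval
    done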

  6. New to Unraid from Windows, and having access to FFS on day 1 was a big help. Thank you.

     

    Silly question, but I deleted several of my FFS sync profiles through the GUI with the 'delete to recycle bin' option on.

     

    Do these actually go anywhere?

     

    I'm rather new to Linux, and I'm always paranoid I'm cluttering things up.

     

    Sorry if this is more of a fundamental Docker question than a container-specific one, but it seemed like the best place to start.
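
    My plan, failing a better answer, is just to poke around inside the container for a trash folder; the container name and trash path below are guesses on my part, not confirmed behavior:

    # Look inside the running container for a trash/recycle folder.
    # "FreeFileSync" (container name) and the Trash path are guesses.
    docker exec FreeFileSync sh -c 'du -sh /root/.local/share/Trash 2>/dev/null || echo "no trash dir found"'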

  7. I'm trying to create a script that will push files added/modified in the past 24 hours to an external hard drive.

     

    The idea is to let these files accumulate, and then use the external to update my off-site backup server every few weeks.

     

    When both my servers were Windows, I would use 'date accessed', but this field seems to be identical to 'date modified' on Linux.

     

    I don't fully understand -ctime, but it appears to be the date/time the file hit my server. This is exactly what I want.

     

    Does Unraid ever update/use this field? I don't want to wake up one morning to find Unraid spewing 100TB to my 4TB ext HDD.

     

    So far, my script is looking like this:

     

    rsync -avh --no-links --backup --backup-dir=/mnt/user/backup/~TRASH~/ \
        --files-from=<(find /mnt/user \
            -name BD-Rips -prune -o -name appdata -prune -o -name apps_linux -prune \
            -o -name backup -prune -o -name domains -prune -o -name Downloads -prune \
            -o -name Downloads_TEMP -prune -o -name system -prune \
            -o -ctime -2 -type f) \
        / /mnt/user/backup/Nightly/

     

    Is there a more elegant way to list those exclusions? To combine them into a single '-prune' command?

     

    I found some instructions online, but they weren't very clear. I'm very new to all this Linux stuff.
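
    From what I've pieced together so far, the names can be grouped into one parenthesised expression with a single -prune, and an explicit -print keeps the pruned directory names themselves out of the list. Untested on my end, so just a sketch:

    # Same exclusions as above, grouped into a single -prune (untested sketch).
    # The trailing -print matters: without it, the pruned directory names
    # themselves also end up in the --files-from list.
    find /mnt/user \
      \( -name BD-Rips -o -name appdata -o -name apps_linux -o -name backup \
         -o -name domains -o -name Downloads -o -name Downloads_TEMP -o -name system \) -prune \
      -o -ctime -2 -type f -print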

     

  8. On 11/13/2023 at 7:12 PM, JonathanM said:

    First question, are your binaries native linux?

    Second, where are you keeping them? The flash drive won't work; you must run them from another file system, either an array drive, a pool, or RAM. If you wish, you can keep the original binary on the flash drive, then make a copy into RAM, set them as executable, and run them from there.

     

    Yeah, so this just goes to show how little I understand Linux.

     

    I just needed to run chmod +x on my executables whenever the permissions got wonky.

     

    I wrote a simple userscript that runs once per boot, and all is good.

     

    # Symlink the tools onto the PATH and make sure they're executable
    # (runs once per boot via the User Scripts plugin).
    ln -s /mnt/user/apps_linux/* /usr/local/sbin/
    chmod +x /mnt/user/apps_linux/*

     

    Now the actual fun begins: figuring out how to convert all my batch and PowerShell scripts to bash.
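
    At least the simple ones translate pretty directly; the paths here are made-up examples, not my real shares:

    # Windows batch:   move "D:\Landing\*.mp3" "E:\Archive\"
    # bash equivalent (made-up example paths):
    mv /mnt/user/landing/*.mp3 /mnt/user/archive/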

  9. 2 hours ago, Baronque said:

    can't access it via the IP address I used to. Previously, I used my old ISP's router in a modem mode

     

    Bridge mode, more than likely? This disables the router in the modem and lets you use your own.

     

    Go to Google on your home network and ask 'what's my IP?' Then try connecting to that new IP address.

     

    If it doesn't work, you'll need to re-enable bridge mode on the new gateway.

     

    If you can't do that (more and more ISPs are disabling bridge mode), you'll need to use Tailscale, which luckily has never been easier.
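
    If you'd rather check from the Unraid terminal instead of a browser, any of the usual "what is my IP" services works, e.g.:

    # Print the public IP your connection is currently using
    # (ifconfig.me is one of several such services).
    curl -s ifconfig.me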

  10. 31 minutes ago, primeval_god said:

    Yes. From a linux standpoint unRAID is really a single user system.

     

    I do appreciate the security inherent in this approach. It's just doing my head in trying to get up and running, is all.

    31 minutes ago, primeval_god said:

     Not at all. Docker containers are the preferred way

     

    I ultimately need to learn how to build my own containers, then.

    Could I potentially put something as simple as these command-line tools in a container and run them as one of my share user accounts, rather than root?

     

    I've been using Docker for Windows since it launched, and I've only ever learned just enough to get Photoprism and a few other containers up and running.
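
    From the little I've read so far, it doesn't even need a custom image; something along these lines should run a tool as 99:100 instead of root. The image, paths, and tool name are placeholders I still need to test:

    # Run a single command-line tool in a throwaway container as nobody:users
    # (99:100) instead of root. Image, paths, and tool name are placeholders.
    docker run --rm \
      --user 99:100 \
      -v /mnt/user/apps_linux:/tools:ro \
      -v /mnt/user/Podcasts:/data \
      debian:stable-slim \
      /tools/mytool /data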

  11. 19 hours ago, JonathanM said:

    First question...

    I appreciate you taking the time to reply.

     

    The programs are all native Linux. I created a new share on one of my cache pools to store them.

     

    They work while they're still owned by my 'NAS' user account, but once the default unRAID permissions are restored, I get 'permission denied'.

     

    https://freeimage.host/i/JCEM1Pj

     

    https://freeimage.host/i/JCEMGKx

     

    https://freeimage.host/i/JCEMMcQ

     

    I created a Windows VM with a VirtioFS pass-thru today, and VirtioFS definitely comes with its own set of challenges.

     

    A lot of the Windows binaries are very lax about case sensitivity.

     

    I definitely think I'm better off learning how to switch these tools to Linux, one way or another.

  12. I'm nearly through migrating from Windows, but I hit an unexpected hurdle.

     

    I rely on several scheduled batch files (some calling portable executables) to migrate and organize files from RW 'landing zones' to RO archives.

     

    Basically, anything that doesn't get pulled into a RO share by an *ARR program, I did with a simple batch file.

     

    I found the 'User Scripts' plugin and translated all my batch files to bash, but I get 'access denied' when trying to call any of my binaries.

     

    I also don't like that all the resulting files/folders are owned by root. Is that really the only user that gets console access?

     

    Is there persistent storage anywhere on the system that root has access to? Or a simple way to run these console apps inside a Docker container?

     

    Seems overkill to spin up an entire VM to run these batch files, and ideally I'd prefer not to point an SMB share at my RO storage at all.

     

    Even a Docker container for each program seems like overkill, let alone anything with a GUI.

     

     

    These are a few of my sample use cases (a rough sketch of the photos case follows the list):

    /RW/Podcasts/* >> /RO/Podcasts/*

    /RW/Games/Games (in Subfolders) >> 7-zip >> /RO/Games/Folders are now 0-compression zip files.

    /RW/Photos/Camera Roll/* >> /RO/Photos/YY/MM/*
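
    For reference, the photos case is the kind of thing I had as a batch file; in bash I assume it would look roughly like this (paths illustrative, and it keys off each file's modification time):

    #!/bin/bash
    # Rough sketch of the camera-roll case: sort files into /RO/Photos/YY/MM/
    # based on each file's modification time. Paths are illustrative.
    src="/mnt/user/RW/Photos/Camera Roll"
    dst="/mnt/user/RO/Photos"

    find "$src" -type f -print0 | while IFS= read -r -d '' f; do
        yy=$(date -r "$f" +%y)   # two-digit year from the file's mtime
        mm=$(date -r "$f" +%m)   # two-digit month
        mkdir -p "$dst/$yy/$mm"
        mv -n "$f" "$dst/$yy/$mm/"
    done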
