Storing/Running Portable Command-Line Tools as Root?


Solved by JonathanM


I'm nearly through migrating from Windows, but I hit an unexpected hurdle.

 

I rely on several scheduled batch files (some calling portable executables) to migrate and organize files from RW 'landing zones' to RO archives.

 

Basically, anything that doesn't get pulled into a RO share by an *ARR program, I handled with a simple batch file.

 

I found the 'User Scripts' plugin and translated all my batch files to Linux bash syntax, but I get 'access denied' when trying to call any of my binaries.

 

I also don't like that all the resulting files/folders are owned by root. Is that really the only user that gets console access?

 

Is there persistent storage anywhere on the system that root has access to? Or a simple way to run these console apps inside a docker container?

 

It seems overkill to spin up an entire VM to run these batch files, and ideally I'd prefer not to point an SMB share at my RO storage at all.

 

Even docker containers for each program seem like overkill, let alone anything with a GUI.

 

 

These are a few of my sample use cases:

/RW/Podcasts/* >> /RO/Podcasts/*

/RW/Games/Games (in subfolders) >> 7-zip >> /RO/Games/ (folders are now 0-compression zip files)

/RW/Photos/Camera Roll/* >> /RO/Photos/YY/MM/*
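For reference, the Camera Roll case can be sketched as a small bash function. The paths, the function name, and the use of file modification time for the YY/MM split are illustrative assumptions, not the original batch logic:

```shell
#!/bin/bash
# Hypothetical sketch of the Camera Roll use case: move files from a RW
# landing zone into YY/MM folders on the archive. SRC and DST are
# placeholder arguments, not the poster's real share paths.
sort_photos() {
  local SRC="$1" DST="$2"
  local f stamp
  for f in "$SRC"/*; do
    [ -f "$f" ] || continue
    # YY/MM of the file's modification time, e.g. 23/11 (GNU date -r)
    stamp=$(date -r "$f" +%y/%m)
    mkdir -p "$DST/$stamp"
    mv "$f" "$DST/$stamp/"
  done
}
# e.g. sort_photos "/mnt/user/RW/Photos/Camera Roll" /mnt/user/RO/Photos
```

A script like this could run from the User Scripts plugin on a schedule, same as the original batch files.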

  • Solution

First question, are your binaries native linux?

Second, where are you keeping them? The flash drive won't work, you must run them from another file system, either an array drive, a pool, or RAM. If you wish, you can keep the original binary on the flash drive, then make a copy into RAM, set them as executable, and run them from there.
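The copy-into-RAM suggestion can be sketched as a small helper; the function name `stage_tool` and the example paths are assumptions, not a prescribed layout:

```shell
#!/bin/bash
# Sketch of the suggestion above: keep the original binary on the flash
# drive, copy it somewhere RAM-backed at boot, mark it executable, and
# run it from there. stage_tool is a hypothetical helper name.
stage_tool() {
  local src="$1" run_dir="$2"
  mkdir -p "$run_dir"
  cp "$src" "$run_dir/"
  # The flash drive is FAT, which can't store the executable bit,
  # so it has to be restored on the copy.
  chmod +x "$run_dir/$(basename "$src")"
}
# Typical usage (illustrative paths; /tmp is RAM-backed on unRAID):
#   stage_tool /boot/extra/mytool /tmp/portable-bins
#   /tmp/portable-bins/mytool --help
```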

21 hours ago, nmkaufman said:

I also don't like that all the resulting files/folders are owned by root. Is that really the only user that gets console access?

Yes. From a Linux standpoint, unRAID is really a single-user system. The "users" created in the webui exist only for the purpose of controlling share access.

 

21 hours ago, nmkaufman said:

Even docker containers for each program seems like overkill. Let alone anything with a GUI.

Not at all. Docker containers are the preferred way to add apps/programs to unRAID. They are lightweight and provide an easy way to isolate user programs from the core system. That makes it much easier to create and manage environments for your customizations that won't risk destabilizing the unRAID OS. The general rule of thumb is that anything that can be done in docker should be done in docker. If it can't be done in docker, a VM is the next best option, and plugins/native services should be reserved for things that need to integrate with the webui or OS. There is also the option of LXC containers, which would sit between docker and VMs in that hierarchy, but they are currently provided by a third-party plugin rather than as a core unRAID feature (though they work quite well).

19 hours ago, JonathanM said:

First question...

I appreciate you taking the time to reply.

 

The programs are all native linux. I created a new share to store them, on one of my cache pools.

 

They work while they're still owned by my 'NAS' user account, but once the default unRAID permissions are restored, I get 'permission denied'.

 

https://freeimage.host/i/JCEM1Pj

 

https://freeimage.host/i/JCEMGKx

 

https://freeimage.host/i/JCEMMcQ

 

I created a Windows VM with a VirtioFS pass-through today, and VirtioFS definitely comes with its own set of challenges.

 

A lot of the Windows binaries are very lax with case sensitivity.

 

I definitely think I'm better off learning how to switch these tools to Linux, one way or another.

Edited by nmkaufman
31 minutes ago, primeval_god said:

Yes. From a linux standpoint unRAID is really a single user system.

 

I do appreciate the security inherent in this approach. It's just doing my head in trying to get up and running, is all.

31 minutes ago, primeval_god said:

 Not at all. Docker containers are the preferred way

 

I ultimately need to learn how to compile my own containers, then.

Could I potentially put something as simple as these command-line tools in a container, and run them with one of my share accounts, rather than root?

 

I've been using Docker for Windows since it launched, and I've only ever learned just enough to get Photoprism and a few other containers up and running.

Edited by nmkaufman
6 hours ago, nmkaufman said:

I ultimately need to learn how to compile my own containers, then.

Could I potentially put something as simple as these command-line tools in a container, and run them with one of my share accounts, rather than root?

Creating your own containers with all the needed scripts and applications is an option, but there are other ways to go about it that might not require building your own containers. For instance, in your first post you mention needing to use 7zip in a script.

Quote

/RW/Games/Games (in subfolders) >> 7-zip >> /RO/Games/ (folders are now 0-compression zip files)

Personally, I use 7zip with an ephemeral container on the command line and in scripts launched from the User Scripts plugin. Basically, instead of calling a 7z binary, I do this:

docker run --rm --workdir /data -it -v $PWD:/data crazymax/7zip 7z {args}

where {args} is whatever I would pass to 7z. It launches a crazymax/7zip container with $PWD bind-mounted to the /data directory in the container, and the 7z command starts with my specified arguments in the /data directory. The container deletes itself when it finishes.
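The pattern above lends itself to a small wrapper so scripts can keep calling a short command. The wrapper name `dz7` and the overridable `DOCKER` variable are illustrative additions, not part of the quoted setup:

```shell
#!/bin/bash
# Sketch of a wrapper around the ephemeral-container pattern described
# above. DOCKER is overridable so the wrapper can be exercised without a
# docker daemon; -it from the original is omitted so it also works when
# no TTY is attached (e.g. scheduled User Scripts runs).
DOCKER=${DOCKER:-docker}
dz7() {
  # Add e.g. --user "$(id -u):$(id -g)" here to run as a non-root
  # account inside the container, per the question earlier in the thread.
  "$DOCKER" run --rm --workdir /data -v "$PWD":/data crazymax/7zip 7z "$@"
}
# e.g. dz7 a -mx=0 archive.zip somefolder/   # 0-compression zip, as in the OP
```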

I do something similar with ffmpeg when I occasionally need to use it:

Quote

docker run --rm -it --gpus=all --cpuset-cpus="2,4,6,8,10,12,14,3,5,7,9,11,13,15" --cpu-shares=512 -v $PWD:/config --workdir=/config --entrypoint=/usr/lib/jellyfin-ffmpeg/ffmpeg lscr.io/linuxserver/jellyfin:latest

though since I rarely have need of ffmpeg (I transcode in dedicated handbrake containers), I don't keep an ffmpeg image on hand (there are plenty of them available) and just use another instance of a jellyfin container instead. For the ffmpeg command, I have a bash alias assigned such that bash replaces 'ffmpeg' with the quoted code above.
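One way to wire up that replacement looks like the sketch below; the trimmed-down flags are an assumption (the GPU and CPU-pinning options from the quote are left out for brevity), and a function is used because plain aliases only expand in interactive shells:

```shell
#!/bin/bash
# Sketch of the alias trick described above, written as a function so it
# also works inside scripts. DOCKER is overridable for testing without a
# docker daemon; flags are simplified relative to the quoted command.
DOCKER=${DOCKER:-docker}
ffmpeg() {
  "$DOCKER" run --rm -v "$PWD":/config --workdir=/config \
    --entrypoint=/usr/lib/jellyfin-ffmpeg/ffmpeg \
    lscr.io/linuxserver/jellyfin:latest "$@"
}
# e.g. ffmpeg -i input.mkv output.mkv
```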

Edited by primeval_god
On 11/13/2023 at 7:12 PM, JonathanM said:

First question, are your binaries native linux?

Second, where are you keeping them? The flash drive won't work, you must run them from another file system, either an array drive, a pool, or RAM. If you wish, you can keep the original binary on the flash drive, then make a copy into RAM, set them as executable, and run them from there.

 

Yeah, so this just goes to show how little I understand linux.

 

I just needed to run chmod +x on my executables, whenever the permissions got wonky.

 

I wrote a simple userscript that runs once per boot, and all is good.

 

ln -s /mnt/user/apps_linux/* /usr/local/sbin/
chmod +x /mnt/user/apps_linux/*

 

Now the actual fun begins: figuring out how to convert all my batch and PowerShell scripts to bash.

Edited by nmkaufman
