nmkaufman


  1. I will play around with it this weekend. I believe the problem is that it's creating files owned by ROOT, while other containers default to NOBODY, and it's not setting permissions that allow other users to access the files. The easiest solution would be to set the UMASK, but I tried passing a UMASK environment variable to the container and it didn't seem to work. I did see a Reddit thread where someone claimed this worked, though, so I'm going to experiment further.
  2. Did you have any luck getting permissions working? I tried adding --user 99:100 as a post argument, and it's still not working. I have to reset permissions on my downloads folder before other containers can interact with the files. It makes sharing/contributing next to impossible, as well.
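For anyone landing here later, a minimal sketch of why the UMASK variable matters: the umask in effect when a process creates a file decides which permission bits get stripped, so a container that ignores it can leave files unreadable to other users. The paths below are throwaway examples, not from any real container.

```shell
# Demonstrate how umask controls the permissions of newly created files --
# the same mechanism a container's UMASK environment variable is meant to set.
tmp=$(mktemp -d)
(
  umask 000                      # nothing masked: new files come out 666
  touch "$tmp/open.txt"
)
(
  umask 077                      # group/other stripped: new files come out 600
  touch "$tmp/private.txt"
)
stat -c '%a %n' "$tmp"/*.txt     # show the resulting modes
rm -rf "$tmp"
```

If the container honors UMASK=000, everything it writes should be world-readable/writable, which is what lets the NOBODY-based containers touch the files.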
  3. Dang, this is so much faster than trying to manage ROMs via SMB. Thank you for the hard work! Now I have to decide if it's worth re-hashing my entire library to take advantage of it. I may see if I can trick RV into using my existing RomVault3_2.Cache file, but I don't have much hope. After some experimentation, I have some good news and some bad news. Good news: this container is much faster than SMB/virtiofs when crawling shares and scanning ROMs that have already been hashed. Bad news: hashing/repairing new ROMs is significantly slower than in a Windows VM, especially CHD files. It appears that while newer versions of CHDman are multi-threaded, the mono interpreter is less so. More good news: an existing .cache file can be successfully dropped into the appdata/ROMVault folder, and RomVault will use it. You can use a VM or a more powerful machine to hash/build your library initially, then drop the .cache file into the container for routine maintenance.
  4. Have the 'Recently Added/Spotlight/Trending' sections on the homepage always been there? I didn't see them in the changelog, and I've never noticed them before. I never noticed the 'Why we picked it' blurbs on the spotlight apps, either. All great stuff.
  5. If anyone finds this thread in the future, Unraid does occasionally touch ctime. It's not useful as an incremental backup trigger. I instead switched to using -mtime, and configured my software to touch the modified time when extracting archives.
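A sketch of that -mtime approach with GNU find/touch; the paths and the "extraction" are stand-ins for illustration, not anything from a real setup.

```shell
# Nightly incremental pass keyed on mtime instead of ctime.
tmp=$(mktemp -d)
touch -d '10 days ago' "$tmp/old.bin"   # a file handled on a previous run
touch "$tmp/new.bin"                    # "freshly extracted": mtime is now
# the backup pass only picks up files modified within the last 24 hours
find "$tmp" -type f -mtime -1
rm -rf "$tmp"
```

The trick from the post is the second half: have the extraction software (or a wrapper script) `touch` each file it writes, so the file's mtime reflects when it landed on the server rather than when the archive was originally created.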
  6. My computer will only boot about 1 in 5 attempts (sometimes fewer) from a USB 3.0 port. I've never had a problem with any other operating system or boot image, but the XHCI hand-off in Unraid is very particular. It doesn't sound like you're even getting that far, but just to rule it out, try using a USB 2.0 port.
  7. I'm new to Unraid, but this plugin has immediately become one I can't live without. Thank you! My disks stay parked while browsing, searching, and even scanning with emby/*arr or FreeFileSync. I tried countless solutions to achieve this in Windows over the years, with minimal success. That said, the CPU spikes are wild. It's a low CPU utilization per thread (<10%), but since it runs one thread per disk, my CPU basically never parks cores and is always in some state of turbo boost. My computer idles ~5 °C hotter with the plugin on vs. off (2x ancient E5-2600 v3). I've tried turning off 'run scan of each disk in a separate thread', but that results in my disks spinning up. I've also tried increasing the minimum interval between folder scans, but this also results in more spin-ups. Does the 'hack' this plugin exploits only keep folders in memory if there's a constant folder scan happening? Like I said, it's not that the CPU utilization is so high; it's just high and persistent enough to defeat all the power-saving features of my computer.
  8. New to Unraid from Windows, and having access to FFS on day 1 was a big help. Thank you. Silly question, but I deleted several of my FFS sync profiles through the GUI with the 'delete to recycle bin' option on. Do these actually go anywhere? I'm rather new to Linux, and I'm always paranoid I'm cluttering things up. Sorry if this is more of a fundamental Docker question than a container-specific one, but it seemed like the best place to start.
  9. I'm trying to create a batch script that will push files added/modified in the past 24 hours to an external hard drive. The idea is to let these files accumulate, then use the external to update my off-site backup server every few weeks. When both my servers were Windows, I would use 'date accessed', but this field seems to be identical to 'date modified' on Linux. I don't fully understand -ctime, but it appears to be the date/time the file hit my server, which is exactly what I want. Does Unraid ever update/use this field? I don't want to wake up one morning to find Unraid spewing 100TB to my 4TB external HDD. So far, my script looks like this:
     rsync -avh --no-links --backup --backup-dir=/mnt/user/backup/~TRASH~/ --files-from=<(find /mnt/user -name BD-Rips -prune -o -name appdata -prune -o -name apps_linux -prune -o -name backup -prune -o -name domains -prune -o -name Downloads -prune -o -name Downloads_TEMP -prune -o -name system -prune -o -ctime -2 -type f) / /mnt/user/backup/Nightly/
     Is there a more elegant way to list those exclusions? To combine them into a single '-prune' expression? I found some instructions online, but they weren't very clear. I'm very new to all this Linux stuff.
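On the exclusion question: find lets you group the names inside \( ... \) with -o so a single -prune covers them all. The share names below are copied from the command above; this is a sketch, not a tested drop-in. Note the explicit -print at the end: without it, find's implicit print also emits the pruned directory names themselves, which would drag those shares into the rsync file list.

```shell
# All exclusions folded into one grouped -prune.
find /mnt/user \
  \( -name BD-Rips -o -name appdata -o -name apps_linux \
     -o -name backup -o -name domains -o -name Downloads \
     -o -name Downloads_TEMP -o -name system \) -prune \
  -o -ctime -2 -type f -print
```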
  10. Yeah, so this just goes to show how little I understand Linux. I just needed to run chmod +x on my executables whenever the permissions got wonky. I wrote a simple user script that runs once per boot, and all is good:
     ln -s /mnt/user/apps_linux/* /usr/local/sbin/
     chmod +x /mnt/user/apps_linux/*
     Now the actual fun begins: figuring out how to convert all my batch and PowerShell scripts to bash.
  11. Bridge mode, more than likely? This disables the router in the modem and lets you use your own. Go to Google on your home network and ask 'what's my IP?', then try connecting to that new IP address. If it doesn't work, you'll need to re-enable bridge mode on the new gateway. If you can't do that (more and more ISPs are disabling bridge mode), you'll need to use Tailscale, which luckily has never been easier.
  12. Nooo way. So basically just calling a container to use as a sandbox? I won't get to try this until after work, but I'm 99% certain this will send me down the right path.
  13. I do appreciate the security inherent in this approach. It's just doing my head in trying to get up and running, is all. I ultimately need to learn how to build my own containers, then. Could I potentially put something as simple as these command-line tools in a container, and run them as one of my share accounts rather than root? I've been using Docker for Windows since it launched, and I've only ever learned just enough to get Photoprism and a few other containers up and running.
  14. I appreciate you taking the time to reply. The programs are all native Linux. I created a new share on one of my cache pools to store them. They work while they're still owned by my 'NAS' user account, but once the default Unraid permissions are restored, I get 'permission denied'. https://freeimage.host/i/JCEM1Pj https://freeimage.host/i/JCEMGKx https://freeimage.host/i/JCEMMcQ I created a Windows VM with a VirtioFS pass-through today, and VirtioFS definitely comes with its own set of challenges. A lot of the Windows binaries are very lax about case sensitivity. I definitely think I'm better off learning how to switch these tools to Linux, one way or another.
  15. I'm nearly through migrating from Windows, but I hit an unexpected hurdle. I rely on several scheduled batch files (some calling portable executables) to migrate and organize files from RW 'landing zones' to RO archives. Basically, anything that doesn't get pulled into a RO share by an *ARR program, I handled with a simple batch file. I found the 'User Scripts' plugin and translated all my batch files to Linux bash syntax, but I get 'access denied' when trying to call any of my binaries. I also don't like that all the resulting files/folders are owned by root. Is that really the only user that gets console access? Is there persistent storage anywhere on the system that root has access to? Or a simple way to run these console apps inside a Docker container? It seems overkill to spin up an entire VM to run these batch files, and ideally I'd prefer not to point an SMB share at my RO storage at all. Even Docker containers for each program seem like overkill, let alone anything with a GUI. These are a few of my sample use cases:
     /RW/Podcasts/* >> /RO/Podcasts/*
     /RW/Games/Games (in subfolders) >> 7-Zip >> /RO/Games/ (folders are now 0-compression zip files)
     /RW/Photos/Camera Roll/* >> /RO/Photos/YY/MM/*
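A minimal bash sketch of the first use case above (empty the RW landing zone into the RO archive, then tidy the leftover folders). The paths mirror the examples in the post and are assumptions; whether this runs as root or a share user depends on how you invoke it, which is the open question here.

```shell
# Move new podcasts from the RW landing zone to the RO archive.
# SRC/DST are illustrative -- substitute real share paths.
SRC="/mnt/user/RW/Podcasts"
DST="/mnt/user/RO/Podcasts"

mkdir -p "$DST"
# --remove-source-files deletes each source file after a verified copy,
# which empties the landing zone the way the old batch file did
rsync -avh --remove-source-files "$SRC"/ "$DST"/
# rsync leaves the now-empty subdirectories behind; sweep them up
find "$SRC" -mindepth 1 -type d -empty -delete
```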