Taako

Members
  • Posts: 10
  • Joined
  • Last visited

Taako's Achievements

Newbie (1/14) · 15 Reputation

  1. I need to give someone access to upload and modify files inside one particular directory within my appdata/ share. Can you recommend a method (Docker app or otherwise) that gives them access to just that single directory? I can handle the network side (port forwarding, etc.), but I'm not sure what application or method people use to allow restricted access to the server. As far as I know, Unraid only has a single SSH user (root). (A hedged sketch of one possible approach is included after this post list.)
  2. The container doesn't seem to work in bridge mode, and I need bridge mode to do port mapping, since I want to run two instances of Foundry (two different licenses) on my Unraid server and both can't use port 30000.
     EDIT: Turning ONE of them to bridge mode with host port 30000 mapped to container port 30000 works, but the other one, with host port 30001 mapped to container port 30000, doesn't.
     EDIT 2: It looks like changing the Host Port template variable also changes the container port. Why won't it let me change the container port value on its own?
     EDIT 3: Apparently Unraid doesn't let you edit a container port in place; you have to remove the mapping and add it back. All working now! (A hedged port-mapping sketch is included after this post list.)
  3. I just got Jellyfin up and running with transcoding on my Unraid server using the binhex-jellyfin Docker container. I am using an Intel i5-10400 with Intel UHD 630 integrated graphics, so I enabled hardware transcoding for everything except VC-1 and VP9 (using this as a reference). I tested it by streaming to my phone over the internet. The streaming works fine, but for the first few minutes of playback my CPU sat between 35% and 55% usage. Eventually this died down to about 1% CPU for most of the stream, but it seemed odd. That feels like a lot of CPU for transcoding a single stream that is supposedly using the iGPU; the source material is 1080p H.264 SDR and my phone is requesting 720p at 4 Mbps.
     To double-check, I tried again with a different movie (this time the source was 4K HEVC HDR) and saw spikes up to 64%. The CPU usage continued even when I paused the stream but kept it open on my phone; when I exited the stream in the app it immediately dropped to 0-1% CPU. Unlike the 1080p H.264 SDR stream, the CPU usage never settled back down to ~1% after a while of playback. I think transcoding is happening, because my transcode folder is filling up:

        sh-5.1# pwd
        /config/data/transcodes
        sh-5.1# ls
        b5aa8ce1478be860caa1b09e327b24c60.ts    b5aa8ce1478be860caa1b09e327b24c638.ts
        b5aa8ce1478be860caa1b09e327b24c6100.ts  b5aa8ce1478be860caa1b09e327b24c639.ts
        b5aa8ce1478be860caa1b09e327b24c6101.ts  b5aa8ce1478be860caa1b09e327b24c63.ts
        b5aa8ce1478be860caa1b09e327b24c6102.ts  b5aa8ce1478be860caa1b09e327b24c640.ts
        b5aa8ce1478be860caa1b09e327b24c6103.ts  b5aa8ce1478be860caa1b09e327b24c641.ts
        b5aa8ce1478be860caa1b09e327b24c6104.ts  b5aa8ce1478be860caa1b09e327b24c642.ts
        b5aa8ce1478be860caa1b09e327b24c6105.ts  b5aa8ce1478be860caa1b09e327b24c643.ts
        ...... ls output cut off for brevity but is much longer ....

     Maybe this is just the limit of the UHD 630, but I wanted to double-check with the community to see whether there is anything more I could do to lower CPU usage while streaming/transcoding 4K. (A hedged set of checks for confirming hardware transcoding is included after this post list.)
     EDIT: It seems like transcoding 4K is just way harder than I thought; I had no intuition for it. It feels lame that 4K is pretty much the standard content-distribution resolution these days, yet barely any devices support it.
  4. Original comment thread where the idea was suggested by reddit user /u/neoKushan: https://old.reddit.com/r/unRAID/comments/mlcbk5/would_anyone_be_interested_in_a_detailed_guide_on/gtl8cbl/
     The ultimate goal of this feature would be a 1:1 mapping between Unraid Docker templates and docker-compose files. This would let users edit a container as either a compose file or a template, and backing up and keeping revision control of the configuration would be simpler, since it would just be a docker-compose file. I believe the first step is changing the Unraid template structure to use docker-compose labels for all the metadata Unraid's templates carry that doesn't already have a 1:1 equivalent in docker-compose: items such as WebUI, Icon URL, Support Thread, Project Page, CPU Pinning, etc. Most of the meat of these templates is more or less a direct transcription of docker-compose put into a GUI format, so I don't see why we couldn't take advantage of that by allowing users to edit and back up the compose file directly. (A hypothetical compose sketch is included after this post list.)
  5. Running into issues with the Navidrome Docker. The page is stuck on "Loading. The page is loading, just a moment please." These are my Docker settings. I got this error in the logs:

        [Navidrome ASCII-art banner]
        Version: 0.41.0 (ba922bbf)
        2021/04/04 04:01:38 goose: no migrations to run. current version: 20210322132848
        time="2021-04-04T04:01:38Z" level=info msg="Starting scanner" interval=1m0s
        time="2021-04-04T04:01:38Z" level=info msg="Configuring Media Folder" name="Music Library" path=/music
        time="2021-04-04T04:01:38Z" level=info msg="Creating Image cache" maxSize="200 MB" path=/data/cache/images
        time="2021-04-04T04:01:38Z" level=info msg="Finished initializing cache" cache=Image elapsedTime="239.018µs" maxSize=200MB
        time="2021-04-04T04:01:38Z" level=info msg="Found ffmpeg" path=/usr/bin/ffmpeg
        time="2021-04-04T04:01:38Z" level=info msg="Last.FM integration not available: missing ApiKey/Secret"
        time="2021-04-04T04:01:38Z" level=info msg="Spotify integration is not enabled: artist images will not be available"
        time="2021-04-04T04:01:38Z" level=info msg="Creating Transcoding cache" maxSize="100 MB" path=/data/cache/transcoding
        time="2021-04-04T04:01:38Z" level=info msg="Finished initializing cache" cache=Transcoding elapsedTime="176.034µs" maxSize=100MB
        time="2021-04-04T04:01:38Z" level=info msg="Mounting routes" path=/rest
        time="2021-04-04T04:01:38Z" level=info msg="Mounting routes" path=/app
        time="2021-04-04T04:01:38Z" level=info msg="Login rate limit set" requestLimit=5 windowLength=20s
        time="2021-04-04T04:01:38Z" level=info msg="Navidrome server is accepting requests" address="0.0.0.0:4533"
        time="2021-04-04T04:01:40Z" level=warning msg="No admin user found!" error="data not found"
        time="2021-04-04T04:02:40Z" level=warning msg="No admin user found!" error="data not found"
        time="2021-04-04T04:03:40Z" level=warning msg="No admin user found!" error="data not found"
        time="2021-04-04T04:03:40Z" level=warning msg="No admin user found!" error="data not found"

     I've tried all sorts of things, like adding --user args, but nothing works.
     EDIT: For those wondering, the dev said this is a known issue and it is fixed in the next release. There is a very easy workaround in the meantime: https://github.com/navidrome/navidrome/issues/974
  6. Just learning Unraid this week for my first Unraid server build this weekend. I've used Docker plenty on a Raspberry Pi home server, but I do everything with docker-compose files. What exactly *is* the docker vdisk? I don't quite understand why it's needed. Can't Docker just pull separate images down and store them? What is it doing for Unraid, and why is it needed? I know Unraid uses some template system, but is there any way to use docker-compose instead? Is there any benefit or drawback to using compose over the template system? The template system just feels like a GUI version of compose, sort of. (A hedged sketch of what the vdisk holds is included after this post list.)
  7. I'm setting up my first Unraid server this weekend. I have a 256GB SATA SSD (Samsung 840 Pro 256GB) that I'm planning to dedicate to cache for the array (the mover will only move stuff off this SSD). I want to get another SSD (maybe NVMe) dedicated to Docker appdata and running VMs, because I've heard that keeps Docker/VM performance stable when a ton of reads and writes are hitting the cache SSD, but I'm not sure how big to make it. I know things like Plex/Jellyfin metadata can get pretty big, but I'm only planning on having about 1TB of TV/movies at most in my array for Jellyfin, so I don't imagine the Docker containers needing *too* much space.
     The VMs, on the other hand, I have no clue how to spec out. If I want my VMs to run on a dedicated NVMe drive, does that mean all the data they access needs to be on it? Or rather, would it be a good idea to do that? For instance, if I want a VM to play games on Dolphin, do I need to store all my ROMs/ISOs for Dolphin on the NVMe drive, or should I just have the VM access the array for them? If the VM reads the game ROMs/ISOs from the array, I feel like I'm not getting any performance benefit, since it's reading off the array instead of the NVMe, or is reading the ISO not really the limiting factor here? Basically, when people say they run their VMs/Dockers off a dedicated SSD, what exactly is being stored there besides the Docker images and the VM images?
  8. Maximum size limits on shares. That way I can partition X TB on a single large disk and set the Samba settings so only certain users can access Y TB of it. This would be nice so I can take something like a single 12TB disk and give 3TB to each person in the household to use exclusively, so no one goes over their quota. Also, a big +1 to user groups with folder/share permissions.
  9. +1. Why has this not been implemented? This is such a good idea, and there are so many use cases for it.
  10. I'm in the finishing stages of putting together my first "real" home server. I've been using a Raspberry Pi and Docker for the last three years, and I've decided to go with Unraid for my new home server. It's going to use an i5-10400, which I admit is probably overkill for what I want to do with it, but I got a good deal on it. It's going to run the following Docker containers:
      Caddy - reverse proxy
      Pi-hole - ad blocking and custom internal DNS
      Home Assistant - home automation (this may move over to the new server)
      ddclient - basic dynamic DNS
      Portainer - why not?
      Jellyfin - FOSS Plex; no more than 4-6 clients, and 95% of the time it will have 1 client
      Navidrome - music streaming
      BitTorrent stuff - Deluge, Sonarr, Radarr, Jackett, etc.
      Minecraft server - max of 10 people, probably closer to 4-5
      Nextcloud (or maybe Seafile, since I don't need all of Nextcloud's feature bloat)
      Gitea
      Ubooquity (or maybe Calibre + calibre-web)
      OpenVPN (or an alternative like WireGuard)
      Some backup system (Duplicati? I haven't really done the research yet)
      In addition, since I got the i5 and it has a bit more power (compared to my original plan of an i3-10100), I was thinking of running a Windows VM for very light gaming (maybe?) and other general Windows stuff. I want to be able to pass through the integrated graphics when needed. I've been reading the forums and keep seeing this thing called IOMMU come up and have no idea what it means; can someone explain it? Does the ability to pass through the iGPU depend on the motherboard? (A short hedged explanation and check script are included after this post list.)
      My budget is <$130, but preferably closer to $100. I almost picked up the ASRock B560M Pro4 for $99 on Newegg, but I hesitated and they sold out. At this point I'm just waiting for more B560(M) motherboards to be stocked. I'm in no rush and am patient, so if it means waiting a few weeks for restocks, then so be it. What mobo do you recommend for me?
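
Sketch for post 1 (restricted access to one appdata directory). One approach, offered as an assumption rather than anything from the original thread, is a small SFTP container such as atmoz/sftp that chroots a dedicated user into its home directory, so only the single folder you map in is reachable. The user name, password, port, and path below are placeholders.

    # Hypothetical example: expose only /mnt/user/appdata/shared-project over SFTP.
    # atmoz/sftp creates the user "guest" (password "changeme", UID 1001) and
    # chroots it to /home/guest, so only the mapped subfolder is visible to them.
    docker run -d --name sftp-guest \
      -p 2222:22 \
      -v /mnt/user/appdata/shared-project:/home/guest/shared-project \
      atmoz/sftp guest:changeme:1001

The other person would then connect with something like "sftp -P 2222 guest@your-server", and root SSH access to the rest of the box is never shared.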
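
Sketch for post 2 (two Foundry instances on one host). The fix described in the edits boils down to keeping the container port at 30000 for both instances and varying only the host port. A hypothetical plain docker run equivalent (image name and data paths are illustrative, not the exact template used):

    # Instance 1: host port 30000 -> container port 30000
    # (license/credential environment variables omitted for brevity)
    docker run -d --name foundry-1 \
      -p 30000:30000 \
      -v /mnt/user/appdata/foundry-1:/data \
      felddy/foundryvtt:release

    # Instance 2: host port 30001 -> the container port stays 30000
    docker run -d --name foundry-2 \
      -p 30001:30000 \
      -v /mnt/user/appdata/foundry-2:/data \
      felddy/foundryvtt:release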
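
Sketch for post 3 (confirming the iGPU is doing the work). A few generic checks, offered as assumptions rather than anything specific to the binhex-jellyfin container; the log path in particular varies by container. As a general observation, converting 4K HDR to SDR (tone mapping) runs on the CPU unless hardware tone mapping is enabled, which could explain heavy CPU use even when decode/encode happen on the iGPU.

    # Inside the container: /dev/dri must be passed through for VAAPI/QSV to work.
    ls -l /dev/dri                      # expect card0 and renderD128

    # On the Unraid host: watch the iGPU's Video engine while a transcode runs
    # (intel_gpu_top comes from the intel-gpu-tools package, if installed).
    intel_gpu_top

    # In Jellyfin's ffmpeg transcode log (path is an assumption), the command line
    # should mention a hardware decoder/encoder rather than pure libx264/libx265.
    grep -iE "vaapi|qsv|hwaccel" /config/log/FFmpeg.Transcode*.log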
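
Sketch for post 4 (template metadata as compose labels). A purely hypothetical illustration of the idea; the label names below are invented for this example and are not an existing Unraid specification.

    # docker-compose.yml (hypothetical): Unraid-only metadata carried as labels
    services:
      jellyfin:
        image: jellyfin/jellyfin
        ports:
          - "8096:8096"
        volumes:
          - /mnt/user/appdata/jellyfin:/config
          - /mnt/user/media:/media
        labels:
          # illustrative label names, not an existing spec
          net.unraid.docker.webui: "http://[IP]:[PORT:8096]/"
          net.unraid.docker.icon: "https://example.com/icons/jellyfin.png"
          net.unraid.docker.support: "https://forums.unraid.net/topic/xxxxx"
          net.unraid.docker.project: "https://jellyfin.org/"

Everything Docker itself needs stays standard compose; everything the Unraid GUI needs rides along as labels, so the same file could be edited, backed up, and version-controlled directly.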
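
Sketch for post 6 (what the docker vdisk is). As a general observation, not something from the post: docker.img is a single filesystem image that Unraid loopback-mounts as /var/lib/docker, so image layers and container state live inside that one file instead of being scattered across the array. The path below is the usual default and may differ.

    # Inspect the vdisk on a typical Unraid install (paths are the common defaults):
    ls -lh /mnt/user/system/docker/docker.img   # the single image file
    losetup -a | grep docker.img                # shows it loopback-mounted
    df -h /var/lib/docker                       # the filesystem Docker actually writes into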
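
Sketch for post 10 (IOMMU). IOMMU (Intel VT-d / AMD-Vi) is the hardware feature that lets the host hand a physical PCI device, such as the iGPU, directly to a VM. It has to be supported and enabled by both the CPU and the motherboard firmware, and devices can only be passed through along the boundaries of their IOMMU groups, so motherboard choice does matter. A common, generic way to inspect this once a box is running:

    # Confirm the kernel enabled the IOMMU, then list how PCI devices are grouped.
    dmesg | grep -i -e DMAR -e IOMMU

    for g in /sys/kernel/iommu_groups/*; do
      echo "IOMMU group ${g##*/}:"
      for d in "$g"/devices/*; do
        lspci -nns "${d##*/}"
      done
    done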