Ioannis

Members
  • Content Count: 12
  • Joined
  • Last visited

Community Reputation: 1 Neutral

About Ioannis
  • Rank: Member
  1. Obviously. I'm just saying that, in my case, neither OS presented any network interfaces when using Q35 machines, while they did present them with i440fx. HTH
  2. i440fx-4.1 with SeaBIOS. I should also mention I had the exact same experience trying to install pfSense too (sorry, I don't know the FreeBSD version numbers).
  3. Maybe a bit off-topic with respect to Proxmox, but when I tried to install OPNsense (FreeBSD-based) in a VM, it also didn't find any network interfaces when using the Q35 machine type. With the i440fx machine type, the network interfaces were visible to the OS and working just fine. HTH the OP.
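On Proxmox, switching an existing VM between machine types is a one-liner with `qm`. A minimal sketch, assuming a Proxmox host and using VM ID 100 as a placeholder (substitute your own VMID):

```shell
# Show the current machine type; no "machine:" line means the i440fx default.
qm config 100 | grep machine

# Drop a "machine: q35" override so the VM falls back to i440fx,
# which is what made the virtual NICs visible to OPNsense/pfSense here.
qm set 100 --delete machine

# Or set it explicitly ("pc" is the i440fx machine type, "q35" the other):
qm set 100 --machine pc
```

The change takes effect on the next full VM stop/start, not on a reboot from inside the guest.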
  4. I'll share my couple of cents on the assumption that cost is an issue. You're not mentioning whether you need transcoding. If you need up to one or two transcoded streams, it'd be safe to go with the 1600 without any GPU... If not for transcoding, the GPU is only needed to enter/configure the BIOS; Unraid itself does not need it, unless you plan on passing a GPU through to a VM... As for the RAM, I'd say use your existing sticks and only upgrade later if you feel the need to. For reference, my main Unraid server uses a Ryzen 2600 with 16 GB @ 3200 RAM and no GPU (I did put one in just to configure the BIOS and then removed it). Emby can transcode one 4K stream on the CPU, no problem. It could possibly do two simultaneous 4K streams; I didn't need that, so I didn't check. That's what my needs are. YMMV
  5. I like its versatility; I haven't found anything I could throw at it that it couldn't handle. A feature I miss is a built-in backup process between Unraid servers.
  6. I believe docker-compose is available in the NerdPack plugin?
  7. Actually, I run my Unraid server without any GPU (on-board or dedicated), so no problem there.
  8. If you want your containers to be able to resolve other containers by their names, you have to use custom networks. It's easy, provides isolation and just works. AFAIK, Unraid doesn't give you the option to create custom Docker networks, but it's easy to do from the command line: docker network create <your_net_name> After you create a network, assign to it all the containers you want to communicate with each other by name. For example, you could put sonarr, radarr and sabnzbd in a network called "media". Those containers will then be able to reach the others on the same network using just the container's name. If a container needs to contact another container that is on a different network (e.g. the default "bridge" network), it will have to use the server's IP. If you try this and it works and you're happy, you need to configure Unraid to remember custom Docker networks across reboots. This is a yes/no option in the Docker settings page (you need to stop the Docker engine first and switch to advanced view to see the option, IIRC - sorry, not in front of the Unraid UI atm). HTH
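The steps above can be sketched as follows. This assumes containers named sonarr, radarr and sabnzbd already exist (created through the Unraid Docker page or `docker run`); the network name "media" is just the example from the post:

```shell
# Create a user-defined bridge network. Docker runs an embedded DNS
# server on custom networks, which is what makes name resolution work
# (the default "bridge" network does not get this).
docker network create media

# Attach the existing containers to it; this works while they run.
docker network connect media sonarr
docker network connect media radarr
docker network connect media sabnzbd

# sonarr can now reach sabnzbd by plain container name:
docker exec sonarr ping -c 1 sabnzbd
```

A container can be connected to several networks at once, so attaching it to "media" does not cut it off from the default bridge.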
  9. I've come across it when searching for some other PXE-related stuff. I did install it to play around with, but didn't image anything (I didn't have my server built at the time). It looked good, though, and based on articles/posts I read online, it seemed like a solid choice for the job. I do plan to revisit it now that I have the space to store the generated images.
  10. What problems are you facing? If you don't tell us, nobody can help.
  11. Have you checked the FOG Project? (yes, there's an app for it in Unraid)
  12. This is how everyone does it: allocate an image file and present it to the iSCSI initiator (the client). I would suggest, though, allocating it on the cache, if available, because its usage pattern is exactly the same as a VM's vdisks, so performance will suffer if it's allocated on the array. Configuration on the iSCSI target side (Unraid in this case) is pretty straightforward. Each iSCSI volume (image file) needs the volume path, authentication details (optional, e.g. CHAP username/password) and a couple of other (optional) options.
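For reference, here is a minimal sketch of that target-side setup using targetcli, the configuration shell for the Linux LIO target that the Unraid iSCSI plugin builds on. The path under /mnt/cache, the backstore name and the IQN are all placeholder examples, not anything the plugin prescribes:

```shell
# Allocate a sparse 100G image file on the cache pool (example path).
truncate -s 100G /mnt/cache/iscsi/disk0.img

# Register the image file as a fileio backstore.
targetcli /backstores/fileio create disk0 /mnt/cache/iscsi/disk0.img

# Create the iSCSI target (example IQN) and export the backstore as a LUN.
targetcli /iscsi create iqn.2024-01.local.unraid:disk0
targetcli /iscsi/iqn.2024-01.local.unraid:disk0/tpg1/luns \
    create /backstores/fileio/disk0

# Persist the configuration across reboots.
targetcli saveconfig
```

CHAP credentials, if wanted, are set per initiator on the target's ACL nodes; without them the target is open to any initiator allowed by the portal.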