Popular Content

Showing content with the highest reputation on 08/01/18 in all areas

  1. 1 point
  @bonienl I love that Unraid now contains a lot of networking functionality, especially with regard to Docker. Being able to assign separate IPs to containers via macvlan and such is very cool. However, my request is for something much simpler. As you know, by default (with the bridge option) Docker containers are put on the default "bridge" network. They can connect to the host and to each other via internal IPs, but not via DNS. User-defined bridge networks, however, also allow connections via DNS, using the container name as the hostname (e.g. http://sonarr). Unraid currently does not manage these networks (in fact it deletes them, unless the option not to delete them is selected in the advanced Docker settings). It would be nice if Unraid supported creating a basic user-defined bridge network and presented it as a drop-down option for the network type in the container settings. My real motivation is that at linuxserver.io we are trying to create a repo of reverse proxy configs for our letsencrypt image, and being able to define the proxy targets as "http://containername" works the same way for everyone, unlike the current method of "http://unraidip:port", which is different for every user. Thanks
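    As a sketch of what the request boils down to, a user-defined bridge network with name-based DNS can already be created by hand with the Docker CLI. The network name "proxynet" and the container/image names below are illustrative, not anything Unraid-specific:

```shell
# Create a user-defined bridge network; containers attached to it can
# resolve each other by container name via Docker's embedded DNS.
docker network create proxynet

# Attach containers to it at run time (image names are examples):
docker run -d --name sonarr --network proxynet linuxserver/sonarr
docker run -d --name letsencrypt --network proxynet -p 443:443 linuxserver/letsencrypt

# From inside the letsencrypt container, the proxy target is now
# simply http://sonarr:<port>, identical for every user.
```

    These commands need a running Docker daemon; on stock Unraid the network would also need the "preserve" behaviour mentioned above, or it gets cleaned up.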
  2. 1 point
    Slowly upgrading my Unraid server to 8TB drives. These gotta go. Already precleared. CDI screens included. 6TB was purchased as refurbished on newegg on 04-Jul-2018 (~ 3 weeks ago). 5TB was purchased new from Fry's Nov 2016. All prices are shipped. For quick sale, PM Paypal email address requesting invoice. $100 each or $195 for both.
  3. 1 point
    Specify "Custom", "00:00", "Sunday", "First week", "January, April, July, October". It will then run at midnight on the Sunday of the first week of the month, in the first month of each quarter.
  4. 1 point
    Nope. They're "containers" or "packages" which encapsulate everything in an effort to sandbox your main (unRAID) server.
  5. 1 point
    I use an SSD cache, and the unassigned drive is an SSD. For a single-drive, non-RAID config, XFS is said to have the best performance.
  6. 1 point
    Try moving your download folder to an unassigned drive, and appdata as well. After I did, my I/O issues were almost nonexistent.
  7. 1 point
    Why don't you just add it manually? It has about as easy a docker run command as I've ever seen:

    docker run -p <port>:80 uping/embystat:latest-linux
  8. 1 point
    I finally had the time to try this, and a few blue screens of despair later, it WORKS! Even without applying any of the optimizations I tried earlier in this topic, the overhead that was causing stuttering is gone. Thanks!

    I'll describe the process I followed here to help someone else. I first tried the SSD Passthrough method (I had to change the ata- prefix to nvme-; everything else was the same), but I noticed no real difference because, as you suggested, the entire controller needs to be passed through. So I followed the NVMe controller passthrough method, including not stubbing the controller but using the hostdev XML provided in the video description, with a few differences:

    1. I used MiniTool Partition Wizard to migrate the OS, selecting "copy partitions without resize" so the recovery partition wouldn't be unnecessarily stretched, and immediately afterwards stretched the C: partition, leaving 10% for overprovisioning.

    2. With the most recent Unraid version the modified Clover doesn't seem to be necessary. You simply stub the controller in the syslinux configuration, or add the hostdev to the VM XML and click Update, then specify the boot order by adding <boot order='1'/> after the source, so that it looks something like this:

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x41' slot='0x00' function='0x0'/>
      </source>
      <boot order='1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </hostdev>

    The device should then be visible and selectable inside the GUI editor. Select "none" as the primary vdisk location, update again, check that the boot order is still there inside the XML, and then boot the VM.

    I had to reboot a few times. Inside the Windows recovery options that followed the first blue screen ("no boot device" or something like that), I selected the "boot recovery" option (I don't know if that's the exact name, because my interface isn't in English), rebooted twice more, and it worked. I simply had to reinstall my NVIDIA drivers again, don't ask me why.

    Since I wanted to pass through the same SSD that held the vdisk, I had to move the vdisk to another disk with Krusader and then select the new location inside the GUI editor. Don't do what I did and make TWO copies on the other drive, one as a backup, because something might go wrong and corrupt your vdisk.

    It works with NVMe drives, and now I want to try this method with the SATA SSDs, too. The problem is that isolating the SATA controller in its own IOMMU group isn't easy. With the second-to-last stable BIOS for my X399 Aorus, F3g (F3j was buggy as hell), it simply isn't possible, even with the ACS override patch enabled; the SATA controllers are always grouped with something else. Updating to the latest F10 BIOS with the new AGESA, it seems to be feasible. The obstacle I'm trying to overcome now is figuring out which SATA controller I need to pass through without messing everything up. I installed a plugin to run scripts inside the GUI via Community Apps, then ran an IOMMU script I found on Reddit to work out which SATA controller to pass through. It seems that every SATA drive is currently behind the same SATA controller, but later I'll try changing connectors. I'll keep this topic updated!
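    The IOMMU script mentioned above isn't reproduced in the post, but the usual approach is a short shell loop over /sys/kernel/iommu_groups. A minimal sketch (paths assume a Linux host with the IOMMU enabled; lspci comes from pciutils and is optional here):

```shell
# Print each IOMMU group and the PCI devices it contains, so you can see
# which SATA/NVMe controller shares a group with other hardware.
list_iommu_groups() {
    base="${1:-/sys/kernel/iommu_groups}"
    for g in "$base"/*; do
        [ -d "$g" ] || continue
        echo "IOMMU group ${g##*/}:"
        for d in "$g"/devices/*; do
            [ -e "$d" ] || continue
            # Describe the device with lspci if available,
            # otherwise fall back to the raw PCI address.
            desc=$(lspci -nns "${d##*/}" 2>/dev/null)
            echo "    ${desc:-${d##*/}}"
        done
    done
}

list_iommu_groups
```

    A controller is only safe to pass through on its own if nothing else you still need shows up in the same group.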
  9. 1 point
    My Gigabyte Gaming 5 allows choosing which slot to use for the primary display.
  10. 1 point
    The important thing, if you decide to encrypt the drives, is to make sure you have a backup of the encryption key. Neither unRAID nor any other system will be able to read an encrypted volume if the key is lost. At least not for normal people who don't have a relative working at the NSA.
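    Unraid's encrypted array devices use LUKS under the hood, so besides keeping a copy of the passphrase or keyfile it can also be worth backing up the LUKS header itself, since a damaged header makes the volume unreadable even with the correct key. A sketch (the device name and backup path are examples, not specific to any one system):

```shell
# Back up the LUKS header of an encrypted device to the flash drive.
# /dev/md1 is an example device; repeat for each encrypted array disk.
cryptsetup luksHeaderBackup /dev/md1 \
    --header-backup-file /boot/luks-header-md1.img
```

    Keep the backup file somewhere off the server too; anyone holding it plus the passphrase can unlock the volume.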
  11. 1 point
    Hi, I have put together a video guide for setting up Nextcloud that may be helpful for some users. Following this video there will be one (in a few days) on setting up the linuxserver letsencrypt container for use as a reverse proxy. Hope it's useful...
  12. 1 point
    In my case, I use it to keep a bunch of open tabs for sites I use frequently. I also have some browser extensions that are configured to access local applications (e.g. Transmission). Having it in a container lets me access them from different devices and potentially remotely (e.g. from work). Before, I was using a VM to do the same thing.
  13. 1 point
    So, Plex now has hardware transcoding as standard. It's out of beta for Plex Pass users. It's really quite simple to set up. I'm assuming you're using Windows as a client, and an Intel CPU with QuickSync enabled; this means you need any 2nd Gen or later Core CPU with video. The only chips this doesn't include are the E3-12x0Vx line. E3-12x5Vx do have a GPU, so they'll work OK. Any Pentium G, i3, i5 or consumer i7 will work. Socket 2011 i7s or Xeons won't work.

    First, edit your go file. To do this, go to your server's flash share (//<servername>/flash), then right-click the go file and edit it with Notepad. You need to add the following:

    #enable module for iGPU and perms for the render device
    modprobe i915
    chown -R nobody:users /dev/dri
    chmod -R 777 /dev/dri

    Put it above the following line:

    # Start the Management Utility
    /usr/local/sbin/emhttp &

    Save the go file, then go to your unRAID webGUI. Click on your Plex docker icon and select Edit. Select Advanced View on the top right, which will bring up the extra settings. In the Extra Parameters box add:

    --device /dev/dri:/dev/dri

    Save the settings, which will restart Plex. Now you can either reboot the unRAID machine so the go file loads the drivers, or telnet to your unRAID box and run the three lines you inserted into the go file. If you're using PuTTY, you can just copy and paste the three lines, which will load the driver and set the permissions for you. Finally, once that's done, go to the Plex settings and enable hardware transcoding: it's in Settings->Transcoder->Advanced Settings->Enable Hardware Transcoding. Hope this helps someone. This method worked great on my 6.4rc9f running on an i5-6500T. I can transcode a 20Mb 1080p stream to 480p with less than 3% CPU. Very handy. Intel's QuickSync can do dozens of transcodes with basically no power consumption; NVIDIA's GPUs are mostly limited to two transcodes at a time.
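    As a quick sanity check before enabling hardware transcoding in Plex, you can verify that the render node appeared after the go-file lines ran. A small sketch (/dev/dri/renderD128 is the usual Intel render node, but the name can differ on your system):

```shell
# Report whether a DRI render node exists at the given path.
# Usage: check_render_node /dev/dri/renderD128
check_render_node() {
    if [ -e "$1" ]; then
        echo "present"
    else
        echo "missing"
    fi
}

# Typical invocation on the unRAID host after `modprobe i915`:
check_render_node /dev/dri/renderD128
```

    If it reports "missing", check that the iGPU is enabled in the BIOS and that the i915 module actually loaded (lsmod | grep i915) before digging into the Plex side.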