Popular Content

Showing content with the highest reputation since 11/15/18 in all areas

  1. 22 points
    Since I can remember, Unraid has never been great at simultaneous array disk performance, but it was pretty acceptable. Since v6.7 there have been various users complaining of, for example, very poor performance when running the mover and trying to stream a movie. I noticed this myself yesterday when I couldn't even start watching an SD video using Kodi just because there were writes going to a different array disk, and this server doesn't even have a parity drive. So I did a quick test on my test server, and the problem is easily reproducible and started with the first v6.7 release candidate, rc1.

How to reproduce:
-Server just needs 2 assigned array data devices (no parity needed, but the same happens with parity) and one cache device, no encryption, all devices btrfs formatted
-Used cp to copy a few video files from cache to disk2
-While the cp was going on, tried to stream a movie from disk1; it took a long time to start and would keep stalling/buffering

I then tried to copy one file from disk1 (still while the cp to disk2 was running), first with v6.6.7 and then with v6.7-rc1: a few times the transfer will go higher for a couple of seconds, but most of the time it's at a few KB/s or completely stalled. Also tried with all unencrypted xfs-formatted devices and it was the same.

The server where the problem was detected and the test server have no hardware in common: one is based on a Supermicro X11 board, the test server is X9 series; one uses HDDs, the test server SSDs. So it is very unlikely to be hardware related.
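The test above can be sketched with generic commands. This is a minimal sketch under stated assumptions: temp directories stand in for the /mnt/cache and /mnt/disk2 mounts, and a generated file stands in for the video files; on a real server you would watch the read throughput collapse while the copy runs.

```shell
# Stand-in paths: on the server these would be /mnt/cache and /mnt/disk2.
src=$(mktemp -d); dst=$(mktemp -d)
dd if=/dev/zero of="$src/video.mkv" bs=1M count=64 status=none

cp "$src/video.mkv" "$dst/" &          # sustained write to "disk2"
cp_pid=$!
# Concurrent read, standing in for the stream from "disk1"
dd if="$src/video.mkv" of=/dev/null bs=1M status=none
wait "$cp_pid"

cmp -s "$src/video.mkv" "$dst/video.mkv" && result="copy intact"
echo "$result"
rm -rf "$src" "$dst"
```

On an affected system the concurrent read is where the stall shows up; the commands here only demonstrate the shape of the test, not the regression itself.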
  2. 18 points
    Sneak peek, Unraid 6.8. The image is a custom "case image" I uploaded.
  3. 14 points
    ***Update***: Apologies, it seems an update to the Unraid forums removed the carriage returns in my code blocks, which was causing people to get errors when typing commands verbatim. I've fixed the code blocks below and all should be Plexing perfectly now.

===========

Granted this has been covered in a few other posts, but I just wanted to have it with a little bit of layout and structure. Special thanks to [mention=9167]Hoopster[/mention] whose post(s) I took this from.

What is Plex Hardware Acceleration?

When streaming media from Plex, a few things are happening. Plex will check the media against the device trying to play it:

Media is stored in a compatible file container
Media is encoded in a compatible bitrate
Media is encoded with compatible codecs
Media is a compatible resolution
Bandwidth is sufficient

If all of the above is met, Plex will Direct Play, i.e. send the media directly to the client without changing it. This is great in most cases as there will be very little, if any, overhead on your CPU. This should be okay in most cases, but you may be accessing Plex remotely, or on a device that is having difficulty with the source media. You could either manually convert each file, or get Plex to transcode the file on the fly into another format to be played.

A simple example: your source file is stored in 1080p. You're away from home and you have a crappy internet connection. Playing the file in 1080p is taking up too much bandwidth, so to get a better experience you can watch your media in glorious 240p without stuttering/buffering on your little mobile device by getting Plex to transcode the file first. This is because a 240p file requires considerably less bandwidth than a 1080p file. The issue is that depending on which format you're transcoding from and to, this can absolutely pin all your CPU cores at 100%, which means you're gonna have a bad time.
Fortunately, Intel CPUs have a little thing called Quick Sync, which is their native hardware encoding and decoding core. This can dramatically reduce the CPU overhead required for transcoding, and Plex can leverage this using their Hardware Acceleration feature.

How Do I Know If I'm Transcoding?

You're able to see how media is being served by first playing something on a device. Log into Plex and go to Settings > Status > Now Playing. As you can see, this file is being direct played, so there's no transcoding happening. If you see (throttled) it's a good sign: it just means that your Plex Media Server is able to perform the transcode faster than is necessary. To initiate some transcoding, go to where your media is playing and click Settings > Quality > Show All > choose a quality that isn't the default one. If you head back to the Now Playing section in Plex you will see that the stream is now being transcoded. I have Quick Sync enabled, hence the "(hw)", which stands for, you guessed it, Hardware. "(hw)" will not be shown if Quick Sync isn't being used in transcoding.

Prerequisites

1. A Plex Pass - required for Plex Hardware Acceleration. Test to see if your system is capable before buying a Plex Pass.
2. An Intel CPU that has Quick Sync capability - search for your CPU using Intel ARK.
3. A compatible motherboard.

You will need to enable the iGPU in your motherboard BIOS. In some cases this may require you to have the HDMI output plugged in and connected to a monitor in order for it to be active. If you find that this is the case on your setup, you can buy a dummy HDMI doo-dad that tricks your unRAID box into thinking that something is plugged in. Some machines like the HP MicroServer Gen8 have iLO / IPMI, which allows the server to be monitored / managed remotely. Unfortunately this means that the server has 2 GPUs, and ALL GPU output from the server passes through the ancient Matrox GPU.
So as far as any OS is concerned, even though the Intel CPU supports Quick Sync, the Matrox one doesn't. =/ You'd have better luck using the new unRAID Nvidia plugin.

Check Your Setup

If your config meets all of the above requirements, give these commands a shot; you should know straight away if you can use Hardware Acceleration. Log into your unRAID box using the GUI and open a terminal window, or SSH into your box if that's your thing. Type:

cd /dev/dri
ls

If you see an output like the one above, your unRAID box has Quick Sync enabled. The two items we're interested in specifically are card0 and renderD128. If you can't see them, not to worry; type this:

modprobe i915

There should be no return or errors in the output. Now again run:

cd /dev/dri
ls

You should see the expected items, i.e. card0 and renderD128.

Give Your Container Access

Lastly we need to give our container access to the Quick Sync device. I am going to passive-aggressively mention that they are indeed called containers and not dockers. Dockers is a manufacturer of boots and pants and has nothing to do with virtualization or software development, yet. Okay, rant over. We need to do this because the Docker host and its underlying containers don't have access to anything on unRAID unless you give it to them. This is done via Paths, Ports, Variables, Labels, or in this case Devices. We want to provide our Plex container with access to one of the devices on our unRAID box. We need to change the relevant permissions on our Quick Sync device, which we do by typing into the terminal window:

chmod -R 777 /dev/dri

Once that's done, head over to the Docker tab and click on your Plex container. Scroll to the bottom, click on Add another Path, Port, Variable, select Device from the drop-down and enter the following:

Name: /dev/dri
Value: /dev/dri

Click Save followed by Apply. Log back into Plex and navigate to Settings > Transcoder.
Click on the button to SHOW ADVANCED and enable "Use hardware acceleration where available". You can now do the same test we did above: play a stream, change its Quality to something that isn't its original format, and check the Now Playing section to see if Hardware Acceleration is enabled. If you see "(hw)", congrats! You're using Quick Sync and hardware acceleration.

Persist Your Config

On reboot, unRAID will not run those commands again unless we put them in our go file. So when ready, type into the terminal:

nano /boot/config/go

Add the following lines to the bottom of the go file:

modprobe i915
chmod -R 777 /dev/dri

Press Ctrl+X, followed by Y, to save your go file. And you should be golden!
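As a side note on what the chmod step above actually does: mode 777 grants read/write/execute to the owner, the group, and everyone else, which is how a container running as an arbitrary user can open the render node. A tiny demonstration on a throwaway temp file (not the real /dev/dri nodes):

```shell
# 777 = rwx for owner, group, and everyone else.
f=$(mktemp)
chmod 777 "$f"
perms=$(stat -c '%a' "$f")   # GNU stat: print the octal mode
echo "$perms"                 # prints 777
rm -f "$f"
```

A broad 777 is the simple forum-friendly fix; a tighter alternative would be adding the container's user to the video/render group, but that is beyond this guide.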
  4. 12 points
    Note: this community guide is offered in the hope that it is helpful, but comes with no warranty/guarantee/etc. Follow at your own risk.

What can you do with WireGuard? Let's walk through each of the connection types:

Remote access to server: Use your phone or computer to remotely access your Unraid server, including Unraid administration via the webgui, plus access to dockers, VMs, and network shares as though you were physically connected to the network.
Remote access to LAN: Builds on "Remote access to server", allowing you to access your entire LAN as well.
Server to server access: Allows two Unraid servers to connect to each other.
LAN to LAN access: Builds on "Server to server access", allowing two entire networks to communicate. May require additional settings, TBD.
Server hub & spoke access: Builds on "Remote access to server", except that all of the VPN clients can connect to each other as well. Note that all traffic passes through the server.
LAN hub & spoke access: Builds on "Server hub & spoke access", allowing you to access your entire LAN as well.
VPN tunneled access: Route traffic for specific Dockers and VMs through a commercial WireGuard VPN provider (see this guide).
Remote tunneled access: Securely access the Internet from untrusted networks by routing all of your traffic through the VPN and out Unraid's Internet connection.

In this guide we will walk through how to set up WireGuard so that your trusted devices can VPN into your home network to access Unraid and the other systems on your network.

Prerequisites

You must be running Unraid 6.8 with the Dynamix WireGuard plugin from Community Apps. Be aware that WireGuard is technically classified as experimental: it has not gone through a full security audit yet and has not reached 1.0 status. But it is the first open source VPN solution that is extremely simple to install, fast, and designed from the ground up to be secure.
Understand that giving someone VPN access to your LAN is just like giving them physical access to your LAN, except they have it 24x7 when you aren't around to supervise. Only give access to people and devices that you trust, and make certain that the configuration details (particularly the private keys) are not passed around insecurely. Regardless of the "connection type" you choose, assume that anyone who gets access to this configuration information will be able to get full access to your network.

This guide works great for simple networks. But if you have Dockers with custom IPs or VMs with strict networking requirements, please see the "Complex Networks" section below.

Unraid will automatically configure your WireGuard clients to connect to Unraid using your current public IP address, which will work until that IP address changes. To future-proof the setup, you can use Dynamic DNS instead. There are many ways to do this; probably the easiest is described in this 2 minute video from SpaceInvaderOne.

If your router has UPnP enabled, Unraid will be able to automatically forward the port for you. If not, you will need to know how to configure your router to forward a port.

You will need to install WireGuard on a client system. It is available for many operating systems: https://www.wireguard.com/install/ Android or iOS make good first systems, because you can get all the details via QR code.

Setting up the Unraid side of the VPN tunnel

First, go to Settings -> Network Settings -> Interface eth0. If "Enable bridging" is "Yes", then WireGuard will work as described below. If bridging is disabled, then none of the "Peer type of connections" that involve the local LAN will work properly. As a general rule, bridging should be enabled in Unraid.
If UPnP is enabled on your router and you want to use it in Unraid, go to Settings -> Management Access and confirm "Use UPnP" is set to Yes.

On Unraid 6.8, go to Settings -> VPN Manager. Give the VPN tunnel a name, such as "MyHome VPN", then press "Generate Keypair". This will generate a set of public and private keys for Unraid. Take care not to inadvertently share the private key with anyone (such as in a screenshot like this). By default the local endpoint will be configured with your current public IP address; if you chose to set up DDNS earlier, change the IP address to the DDNS address. Unraid will recommend a port to use. You typically won't need to change this unless you already have WireGuard running elsewhere on your network. Hit Apply.

If Unraid detects that your router supports UPnP, it will automatically set up port forwarding for you. If you see a note that says "configure your router for port forwarding..." you will need to log in to your router and set up the port forward as directed by the note. Some tips for setting up the port forward in your router:

Both the external (source) and internal (target/local) ports should be set to the value Unraid provides. If your router interface asks you to put in a range, use the same port for both the starting and ending values.
Be sure to specify that it is a UDP port and not a TCP port.
For the internal (target/local) address, use the IP address of your Unraid system shown in the note.
Google can help you find instructions for your specific router, e.g. "how to port forward Asus RT-AC68U".

Note that after hitting Apply, the public and private keys are removed from view. If you ever need to access them, click the "key" icon on the right hand side. Similarly, you can access other advanced settings by pressing the "down chevron" on the right hand side. They are beyond the scope of this guide, but you can turn on help to see what they do.
In the upper right corner of the page, change the Inactive slider to Active to start WireGuard. You can optionally set the tunnel to autostart when Unraid boots.

Defining a Peer (client)

Click "Add Peer" and give it a name, such as "MyAndroid". For the initial connection type, choose "Remote access to LAN"; this will give your device access to Unraid and other items on your network. Click "Generate Keypair" to generate public and private keys for the client. The private key will be given to the client / peer, but take care not to share it with anyone else (such as in a screenshot like this). For an additional layer of security, click "Generate Key" to generate a preshared key; again, this should only be shared with this client / peer. Click Apply. Note: technically, the peer should generate these keys and not give the private key to Unraid. You are welcome to do that, but it is less convenient, as the config files Unraid generates will not be complete and you will have to finish configuring the client manually.

Configuring a Peer (client)

Click the "eye" icon to view the peer configuration. If the button is not clickable, you need to apply or reset your unsaved changes first. If you are setting up a mobile device, choose the "Create from QR code" option in the mobile app and take a picture of the QR code. Give it a name and make the connection. The VPN tunnel starts almost instantaneously; once it is up, you can open a browser and connect to Unraid or another system on your network. Be careful not to share screenshots of the QR code with anyone, or they will be able to use it to access your VPN. If you are setting up another type of device, download the file and transfer it to the remote computer via trusted email or Dropbox, etc. Then unzip it and load the configuration into the client. Protect this file; anyone who has access to it will be able to access your VPN.
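For reference, the downloaded peer file is a standard WireGuard config. A placeholder-filled sketch of its shape (every key, address, and hostname below is an invented example, not a value this guide produces; 51820 is WireGuard's conventional default port):

```ini
[Interface]
PrivateKey = <peer-private-key>
Address =

[Peer]
PublicKey = <server-public-key>
PresharedKey = <preshared-key>
Endpoint = myhome.example-ddns.org:51820
# "Remote access to LAN": the tunnel pool plus your LAN subnet
AllowedIPs =,
```

Knowing the layout helps if you ever need to finish a client configuration by hand, as mentioned in the note above.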
About DNS

The 2019.10.20 release of the Dynamix WireGuard plugin includes a "Peer DNS Server" option (thanks @bonienl!). If you are having trouble with DNS resolution on the WireGuard client, return to the VPN Manager page in Unraid, switch from Basic to Advanced mode, add the IP address of your desired DNS server into the "Peer DNS Server" field, then install the updated config file on the client. You may want to use the IP address of the router on the LAN you are connecting to, or you could use a globally available DNS server. This is required for "Remote tunneled access" mode if the client's original DNS server is no longer accessible after all traffic is routed through the tunnel. If you are using any of the split tunneling modes, adding a DNS server may provide name resolution on the remote network, although you will lose name resolution on the client's local network in the process. The simplest solution is to add a hosts file on the client that provides name resolution for both networks.

Complex Networks (added Oct 24)

The instructions above should work out of the box for simple networks. With "Use NAT" defaulted to Yes, all network traffic on Unraid uses Unraid's IP, and that works fine if you have a simple setup. However, if you have Dockers with custom IPs or VMs with strict networking requirements, things may not work right (I know, kind of vague, but feel free to read the two WireGuard threads for examples). A partial solution is:

In the WireGuard config, set "Use NAT" to No
In your router, add a static route that lets your network access the WireGuard "Local tunnel network pool" through the IP address of your Unraid system. For the default pool, you should add this static route:
Network: 10.253.0.0/16 (aka 10.253.0.0 with subnet
Gateway: <IP address of your Unraid system>
(Note that this covers the entire class B 10.253.x.x network, so you can add other WireGuard tunnels without having to modify your router setup again.)
With these changes, your network should work normally. However, your WireGuard clients still may not be able to access Dockers on custom IPs or VMs. If you find a solution to this, please comment!
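On a Linux-based router, the static route described above could be added like this (a sketch: covers the class B 10.253.x.x tunnel range mentioned in the note, and is an assumed example for your Unraid server's LAN address; most consumer routers expose the same three fields — network, mask, gateway — in their web UI instead):

```shell
# Route the whole WireGuard tunnel range via the Unraid box (run on the
# router, as root; addresses are example values).
ip route add via
```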
  5. 12 points
    Overview of what Macinabox does. This is a container designed to make installing a macOS KVM virtual machine very easy. The VM doesn't run in a docker container but runs as a full-fat Unraid KVM VM, selectable in the VM tab of the webUI. However, your server's hardware must be 'fairly' modern to run a macOS VM: you are going to need a CPU that supports SSE 4.2 & AVX2 for macOS Mojave and above to work. Both Intel and AMD processors are fine to use.

To use it, just select the OS type, vdisk type and size (I suggest you use the raw disk type). Then let Macinabox make a vdisk for the install, download the recovery media and Clover boot-loader, and create a VM XML file that is preconfigured to work. (The XML files created will have unique UUIDs and network MAC addresses.) Sit back and let the container do its stuff. Note: to see the progress of the container, look at the log whilst it runs. You will know when it has finished as you will see a message saying to stop then start the array. This container doesn't have a webUI (clicking on the webUI button of this container will just take you to a video of how to use it).

So after the container has done its stuff, stop the array then start it again, and the VM will become visible in the Unraid VM manager (you will not see it if you don't do this). Click start to start the VM and you will boot into the Clover boot-loader. Then press enter to continue to load the recovery media. Go to Disk Utility and format the vdisk, then close Disk Utility. Select re-install macOS, then sit back and wait until done. Please be patient when installing, as the install speed will depend on your internet connection and how busy the Apple servers are. After installing the VM, don't run the container again or else it will overwrite the vdisk with the install on it (I will change this so it can't happen soon). It's probably best to remove the container after installing, for now, just to be safe.
edit - I have now added checks to stop the container re-downloading the install media if it is run again. It will also check for an existing vdisk and, if one is found, not create another and therefore not overwrite it. The same goes for the XML file. However, if the container is run again it will download another copy of the Clover and OVMF files. I have done this so people can easily update Clover and OVMF files if needed.
  6. 8 points
  7. 8 points
    PSA: it seems OpenVPN pushed another broken bin, tagged 2.7.3. I get the same error with it as I did with the previously pulled 2.7.2. While they/we try to figure it out, you can change your image to "linuxserver/openvpn-as:2.6.1-ls11" and it should work.
  8. 7 points
    Support for Nginx Proxy Manager docker container Application Name: Nginx Proxy Manager Application Site: https://nginxproxymanager.jc21.com Docker Hub: https://hub.docker.com/r/jlesage/nginx-proxy-manager/ Github: https://github.com/jlesage/docker-nginx-proxy-manager Make sure to look at the complete documentation, available on Github ! Post any questions or issues relating to this docker in this thread.
  9. 6 points
    On Friday, August 30th, using random.org's true random number generator, the following 14 forum users were selected as winners of the limited-edition Unraid case badges: #74 @Techmagi #282 @Carlos Eduardo Grams #119 @mucflyer #48 @Ayradd #338 @hkinks #311 @coldzero2006 #323 @DayspringGaming #192 @starbix #159 @hummelmose #262 @JustinAiken #212 @fefzero #166 @Andrew_86 #386 @plttn #33 @aeleos (Note: the # corresponds to the forum post # selected in this thread.) Congratulations to all of the winners, and a huge thank you to everyone else who entered the giveaway and helped us celebrate our company birthday! Cheers, Spencer
  10. 5 points
    I have made an updated video guide for setting up this great container. It covers setting up the container, port forwarding, and setting up clients on Windows, macOS, Linux (Ubuntu MATE) and on cell phones (Android and iOS). Hope this guide helps people new to setting up OpenVPN.
  11. 5 points
    I'm on holiday with my family. I have tried to compile it several times, but there are some issues that need working on. It will be ready when it's ready; a week for something that is free is no time at all. We're not releasing the source scripts for reasons I outlined in the original script, but if someone isn't happy with the timescales we work on, then they are more than welcome to compile and create this solution themselves and debug any issues. The source code is all out there. I've made my feelings about this sort of thing well known before, but I will outline them again. We're volunteers with families, jobs, wives and lives to lead. Until the day arrives when working on this stuff pays our mortgages, feeds our kids and allows us to resign our full-time jobs, things happen at our pace only. We have a discord channel that people can join, and if they want to get involved then just ask, but strangely, whenever I offer, the standard reply is that people don't have enough free time. If that is the case, fine, but don't assume any of us have any more free time than you. We don't; we just choose to dedicate what little free time we have to this project.
  12. 4 points
    tldr: Starting with 6.8.0-rc2 please visit Settings/Disk Settings and change the 'Tunable (scheduler)' to 'none'. Then run with SQLite DB files located on array disk shares and report whether your databases still become corrupted. When we first started looking into this issue one of the first things I ran across was this monster topic: https://bugzilla.kernel.org/show_bug.cgi?id=201685 and related patch discussion: https://patchwork.kernel.org/patch/10712695/ This bug is very very similar to what we're seeing. In addition Unraid 6.6.7 is on the last of the 4.18 kernels (4.18.20). Unraid 6.7 is on 4.19 kernel and of course 6.8 is on 5.3 currently. The SQLite DB Corruption bug also only started happening with 4.19 and so I don't think this is coincidence. In looking at the 5.3 code the patch above is not in the code; however, I ran across a later commit that reverted that patch and solved the bug a different way: https://www.spinics.net/lists/linux-block/msg34445.html That set of changes is in 5.3 code. I'm thinking perhaps their "fix" is not properly handling some I/O pattern that SQLite via md/unraid is generating. Before I go off and revert the kernel to 4.18.20, please test if setting the scheduler to 'none' makes any difference in whether databases become corrupted.
  13. 4 points
    Here is a video that shows what to do if you have a data drive that fails, you want to swap/upgrade it, and the disk you want to replace it with is larger than your parity drive. So this shows the swap parity procedure. Basically, you add the new larger drive, then have Unraid copy the existing parity data over to the new drive. This frees up the old parity drive so it can then be used to rebuild the data of the failed drive onto the old parity drive. Hope this is useful
  14. 4 points
    Well, it's finally happened: Unraid 6.x Tunables Tester v4.0 The first post has been updated with the release notes and download. Paul
  15. 4 points
    It's got a big LinuxServer logo at the top of the thread and lots of red warnings, and the first two posts go into some detail about how it works. The plugin is installed from Community Applications, with an author of LinuxServer.io How much clearer can we make it? Sent from my Mi A1 using Tapatalk
  16. 4 points
    Here is my banner. Think it fits unraid well!
  17. 4 points
    It's not an issue with stock Unraid; the issue is that there isn't a patch available for the runc version. Due to the recent Docker update for security reasons, Nvidia haven't caught up yet. Sent from my Mi A1 using Tapatalk
  18. 4 points
    Hi all, it would be lovely to have settings to configure access to shares on an individual user's page as well. Depending on the use case, it's easier to configure things on a per-share basis or a per-user basis. Would be nice to have the option; see wonderfully artistic rendering below:
  19. 4 points
    Programs run as abc, not www-data. Pretty sure you need to specify the PHP interpreter too. So, it should look like this: sudo -u abc php /config/www/nextcloud/occ db:convert-filecache-bigint
  20. 3 points
    LibreELEC, TBS-OS, Digital Devices v6.8.0rc1 all done. TBS-Crazy-Cat broken at the moment.
  21. 3 points
    Or, do it the non-destructive way and cover the pin(s) with Kapton tape, which is made for this type of application.
  22. 3 points
    The missing URLs are because of the multitude of mistakes the guys were making on that field; CA is now filling it out for them. Hit "apply fix" on each of them. The constant "update available" is due to a change at Docker Hub. Install or update the Auto Update plugin, which will patch the OS for this. Sent from my NSA monitored device
  23. 3 points
    Which may or may not mean it's a good idea to push that version to a production environment. "Stable" unifi software has caused major headaches in the past, I'd much rather wait until it's been running on someone else's system for a while before I trust my multiple sites to it. If wifi goes down, it's a big deal. I'd rather not deal with angry users.
  24. 3 points
    Why is this your first post to our forum? There are solutions to the "dealbreaker" you mention, and probably solutions to any other problem you might encounter. There are a lot of friendly and helpful people here on this forum that give FREE support. Why haven't you taken advantage of it? There is a plugin that will run mover based on how full cache is, here: https://forums.unraid.net/topic/70783-plugin-mover-tuning/ Another solution to your problem is more careful consideration of what you cache. Mover can't move to the slower array as fast as you can fill the faster cache, regardless of how frequently mover runs. So not caching some user shares, or not caching very large transfers, for example, are ways to deal with that. I don't get why you haven't taken advantage of our forum. This user community is one of the very best on the internet, and one of the very best features of Unraid.
  25. 3 points
    Same problem here. Temporary solution that I came up with is to edit the docker settings and change repository to an older version: "linuxserver/duplicati:v2.0.4.23-"
  26. 3 points
    Attached is a debugged version of this script, modified by me:
- I've eliminated almost all of the extraneous echo calls.
- Many of the block outputs have been replaced with heredocs.
- All of the references to md_write_limit have either been commented out or removed outright.
- All legacy command substitution has been replaced with modern command substitution.
- The script locates mdcmd on its own.
https://paste.ee/p/wcwWV
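As a small illustration of two of the substitutions described above (the variable and output here are generic examples, not lines from the script itself): legacy backticks become `$( )`, and a heredoc replaces a run of echo calls:

```shell
# Modern $( ) command substitution nests cleanly, unlike legacy `uname -s`
kernel=$(uname -s)

# A heredoc replaces a block of individual echo statements
cat <<EOF
Kernel: $kernel
EOF
```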
  27. 3 points
    I've bumped Unmanic to 0.0.1-beta5. This includes the following changes:
- Modify some historical logging of useful stats. This sets us up to start adding extra info, like ETA during the conversion process, as well as the stats mentioned below.
- Adds a new "See All Records" screen. Any suggestions for stats that you would like on this screen would be great. Note that due to the changes in logging, only newly converted items will show here; old stuff won't, due to missing statistics data. Sorry.
- Create backups of settings when saving. There were some cases where the settings were invalid but still saved, which corrupted our data and made it impossible to read. So now we test prior to committing changes.
- FFmpeg was causing some errors on certain files. If you have noted any conversion failures in the past, please re-test with this version to confirm whether it is now resolved.
- Log rotation. If you are debugging, you are spewing a crap ton of data to the logs. This update rotates the logs at midnight every day and keeps them for 7 days. Even if you are not debugging, this is much better.

The next milestone is to add extended functionality to the settings: https://github.com/Josh5/unmanic/milestone/4 This will hopefully be the last major tidy-up of core functionality. I think that once this milestone is complete we can safely pull this out of beta and look at things like HW decoding and improving the data that is displayed throughout the WebUI.
  28. 3 points
I had the opportunity to test the “real world” bandwidth of some commonly used controllers in the community, so I’m posting my results in the hope that they may help some users choose a controller and others understand what may be limiting their parity check/sync speed. Note that these tests are only relevant for those operations; normal reads/writes to the array are usually limited by hard disk or network speed.

Next to each controller is its maximum theoretical throughput, followed by my results depending on the number of disks connected. The result is the observed parity check speed using a fast SSD-only array with Unraid v6.1.2 (SASLP and SAS2LP tested with v6.1.4 due to performance gains compared with earlier releases). Values in green are the measured controller power consumption with all ports in use.

2 Port Controllers

SIL 3132 PCIe gen1 x1 (250MB/s)
1 x 125MB/s
2 x 80MB/s

Asmedia ASM1061 PCIe gen2 x1 (500MB/s) - e.g., SYBA SY-PEX40039 and other similar cards
1 x 375MB/s
2 x 206MB/s

4 Port Controllers

SIL 3114 PCI (133MB/s)
1 x 105MB/s
2 x 63.5MB/s
3 x 42.5MB/s
4 x 32MB/s

Adaptec AAR-1430SA PCIe gen1 x4 (1000MB/s)
4 x 210MB/s

Marvell 9215 PCIe gen2 x1 (500MB/s) - 2W - e.g., SYBA SI-PEX40064 and other similar cards (possible issues with virtualization)
2 x 200MB/s
3 x 140MB/s
4 x 100MB/s

Marvell 9230 PCIe gen2 x2 (1000MB/s) - 2W - e.g., SYBA SI-PEX40057 and other similar cards (possible issues with virtualization)
2 x 375MB/s
3 x 255MB/s
4 x 204MB/s

8 Port Controllers

Supermicro AOC-SAT2-MV8 PCI-X (1067MB/s)
4 x 220MB/s (167MB/s*)
5 x 177.5MB/s (135MB/s*)
6 x 147.5MB/s (115MB/s*)
7 x 127MB/s (97MB/s*)
8 x 112MB/s (84MB/s*)
*on PCI-X 100MHz slot (800MB/s)

Supermicro AOC-SASLP-MV8 PCIe gen1 x4 (1000MB/s) - 6W
4 x 140MB/s
5 x 117MB/s
6 x 105MB/s
7 x 90MB/s
8 x 80MB/s

Supermicro AOC-SAS2LP-MV8 PCIe gen2 x8 (4000MB/s) - 6W
4 x 340MB/s
6 x 345MB/s
8 x 320MB/s (205MB/s*, 200MB/s**)
*on PCIe gen2 x4 (2000MB/s)
**on PCIe gen1 x8 (2000MB/s)

Dell H310 PCIe gen2 x8 (4000MB/s) - 6W - LSI 2008 chipset, results should be the same as the IBM M1015 and other similar cards
4 x 455MB/s
6 x 377.5MB/s
8 x 320MB/s (190MB/s*, 185MB/s**)
*on PCIe gen2 x4 (2000MB/s)
**on PCIe gen1 x8 (2000MB/s)

LSI 9207-8i PCIe gen3 x8 (4800MB/s) - 9W - LSI 2308 chipset
8 x 525MB/s+ (*)

LSI 9300-8i PCIe gen3 x8 (4800MB/s with the SATA3 devices used for this test) - LSI 3008 chipset
8 x 525MB/s+ (*)

* used SSDs' maximum read speed

SAS Expanders

HP 6Gb (3Gb SATA) SAS Expander - 11W

Single Link on Dell H310 (1200MB/s*)
8 x 137.5MB/s
12 x 92.5MB/s
16 x 70MB/s
20 x 55MB/s
24 x 47.5MB/s

Dual Link on Dell H310 (2400MB/s*)
12 x 182.5MB/s
16 x 140MB/s
20 x 110MB/s
24 x 95MB/s

* Half the 6Gb bandwidth because it only links at 3Gb with SATA disks

Intel® RAID SAS2 Expander RES2SV240 - 10W

Single Link on Dell H310 (2400MB/s)
8 x 275MB/s
12 x 185MB/s
16 x 140MB/s (112MB/s*)
20 x 110MB/s (92MB/s*)

Dual Link on Dell H310 (4000MB/s)
12 x 205MB/s
16 x 155MB/s (185MB/s**)

Dual Link on LSI 9207-8i (4800MB/s)
16 x 275MB/s

LSI SAS3 expander (included on a Supermicro BPN-SAS3-826EL1 backplane)

Single Link on LSI 9300-8i (tested with SATA3 devices; max usable bandwidth would be 2200MB/s, but with LSI's Databolt technology we can get almost SAS3 speeds)
8 x 475MB/s
12 x 340MB/s

Dual Link on LSI 9300-8i (tested with SATA3 devices; max usable bandwidth would be 4400MB/s, but with LSI's Databolt technology we can get almost SAS3 speeds; the limit here is going to be the PCIe 3.0 slot, around 6000MB/s usable)
10 x 510MB/s
12 x 460MB/s

* Avoid mixing disks with slower link speeds on expanders, as it will bring the total speed down; in this example 4 of the SSDs were SATA2 instead of all SATA3.
** Two different boards give consistently different results; I'll need to test a third one to see what's normal. 155MB/s is the max on a Supermicro X9SCM-F, 185MB/s on an ASRock B150M-Pro4S.
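A quick way to sanity-check the numbers above: during a parity check all disks are read in parallel, so the per-disk ceiling is roughly the usable bus bandwidth divided by the number of disks (real results come in somewhat lower due to protocol overhead). A minimal sketch of that estimate; the helper function is my own, not from the original post:

```shell
# Estimate the per-disk parity check ceiling for a given controller.
# usable_mb: usable bus bandwidth in MB/s, n: number of attached disks.
per_disk_ceiling() {
    usable_mb=$1
    n=$2
    # Integer division is close enough for a rough estimate.
    echo $(( usable_mb / n ))
}

# e.g. a PCIe gen2 x1 card (500MB/s theoretical) with 4 disks:
per_disk_ceiling 500 4    # prints 125
```

Compare with the Marvell 9215 result above: 100MB/s measured with 4 disks against a 125MB/s theoretical ceiling, i.e. about 80% of theoretical is a typical real-world figure.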
SATA2 vs SATA3

I often see users on the forum asking if changing to SATA3 controllers or disks would improve their speed. SATA2 has enough bandwidth (between 265 and 275MB/s according to my tests) for the fastest hard disks currently on the market. If buying a new board or controller you should buy SATA3 for the future, but except for SSD use there's no gain in changing an existing SATA2 setup to SATA3.

Single vs. Dual Channel RAM

In arrays with many disks, and especially with low "horsepower" CPUs, memory bandwidth can also have a big effect on parity check speed. Obviously this will only make a difference if you're not hitting a controller bottleneck. Two examples with 24-drive arrays:

Asus A88X-M PLUS with AMD A4-6300 dual core @ 3.7GHz
Single Channel - 99.1MB/s
Dual Channel - 132.9MB/s

Supermicro X9SCL-F with Intel G1620 dual core @ 2.7GHz
Single Channel - 131.8MB/s
Dual Channel - 184.0MB/s

DMI

There is another bus that can be a bottleneck for Intel-based boards, much more so than SATA2: the DMI that connects the south bridge or PCH to the CPU. Sockets 775, 1156 and 1366 use DMI 1.0; sockets 1155, 1150 and 2011 use DMI 2.0; socket 1151 uses DMI 3.0.

DMI 1.0 (1000MB/s)
4 x 180MB/s
5 x 140MB/s
6 x 120MB/s
8 x 100MB/s
10 x 85MB/s

DMI 2.0 (2000MB/s)
4 x 270MB/s (SATA2 limit)
6 x 240MB/s
8 x 195MB/s
9 x 170MB/s
10 x 145MB/s
12 x 115MB/s
14 x 110MB/s

DMI 3.0 (3940MB/s)
6 x 330MB/s (Onboard SATA only*)
10 x 297.5MB/s
12 x 250MB/s
16 x 185MB/s

*Despite being DMI 3.0, Skylake, Kaby Lake and Coffee Lake chipsets have a max combined bandwidth of approximately 2GB/s for the onboard SATA ports.
DMI 1.0 can be a bottleneck using only the onboard SATA ports. DMI 2.0 can limit users with all onboard ports in use plus an additional controller, either onboard or in a PCIe slot that shares the DMI bus. On most consumer boards only the graphics slot connects directly to the CPU and all other slots go through the DMI (higher-end boards, usually those with SLI support, have at least 2 direct slots), while server boards usually have 2 or 3 slots connected directly to the CPU; you should always use these slots first. You can see below the diagram for my X9SCL-F test server board; for the DMI 2.0 tests I used the 6 onboard ports plus one Adaptec 1430SA in PCIe slot 4.

UMI (2000MB/s) - used on most AMD APUs, equivalent to Intel DMI 2.0
6 x 203MB/s
7 x 173MB/s
8 x 152MB/s

Ryzen link - PCIe 3.0 x4 (3940MB/s)
6 x 467MB/s (Onboard SATA only)

I think there are no big surprises: most results make sense and are in line with what I expected, except maybe for the SASLP, which should have the same bandwidth as the Adaptec 1430SA but is clearly slower and can limit a parity check with only 4 disks. I expect some variation in results from other users due to different hardware and/or tunable settings, but I would be surprised if there are big differences; reply here if you can get significantly better speed with a specific controller.

How to check and improve your parity check speed

System Stats from the Dynamix V6 plugins is usually an easy way to find out if a parity check is bus limited. After the check finishes, look at the storage graph: on an unlimited system it should start at a higher speed and gradually slow down as it gets to the disks' slower inner tracks, while on a limited system the graph will be flat at the beginning, or totally flat in the worst case. See the screenshots below for examples (arrays with mixed disk sizes will have speed jumps at the end of each size, but the principle is the same).
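Related to the slot footnotes earlier (cards dropping to x4 or gen1 speeds): you can check which PCIe speed and width a controller actually negotiated by reading the LnkSta line from lspci. A sketch; the parsing helper and the sample line are my own illustration of lspci's output format, and the device address is a placeholder:

```shell
# Extract the negotiated PCIe speed and width from an lspci "LnkSta" line.
parse_lnksta() {
    # Expects a line like: "LnkSta: Speed 5GT/s, Width x8, ..."
    echo "$1" | sed -n 's/.*Speed \([^,]*\), Width \(x[0-9]*\).*/\1 \2/p'
}

# On a live system, feed it real output for your controller, e.g.:
#   lspci -vv -s 01:00.0 | grep LnkSta
parse_lnksta "LnkSta: Speed 5GT/s, Width x8, TrErr- Train- SlotClk+"
# prints: 5GT/s x8
```

A gen2 card showing "Speed 2.5GT/s" or a smaller width than expected means it is running below its rated bandwidth, which matches the reduced results marked with asterisks above.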
If you are not bus limited but still find your speed low, there are a couple of things worth trying:

Diskspeed - your parity check speed can't be faster than your slowest disk. A big advantage of Unraid is the ability to mix disks of different sizes, but this can leave you with an assortment of disk models and speeds; use this tool to find your slowest disks and replace them first when it's time to upgrade.

Tunables Tester - on some systems it can increase the average speed by 10 to 20MB/s or more; on others it makes little or no difference.

That's all I can think of; all suggestions welcome.
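As a quick manual alternative to the Diskspeed tool mentioned above, a rough sequential read test can be run from the console with dd. The device name is a placeholder, and the MB/s helper is my own addition, not part of the original post:

```shell
# Rough sequential read test for one disk (read-only, safe for data).
# Run against a real device on your server, e.g.:
#   dd if=/dev/sdX of=/dev/null bs=1M count=4096 iflag=direct
# then convert the bytes copied and elapsed seconds into whole MB/s:
mb_per_sec() {
    bytes=$1
    secs=$2
    echo $(( bytes / secs / 1024 / 1024 ))
}

mb_per_sec 4294967296 20    # 4GiB read in 20s -> prints 204
```

Repeat for each array disk; the slowest result is your parity check ceiling once bus limits are ruled out.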
  29. 3 points
Lots of base package updates, other fixes. Hopefully one of the last -rc's before stable release.

Version 6.7.0-rc6 2019-03-25

Base distro:
  adwaita-icon-theme: version 3.32.0
  at-spi2-atk: version 2.32.0
  at-spi2-core: version 2.32.0
  atk: version 2.32.0
  bash: version 5.0.003
  ca-certificates: version 20190308
  coreutils: version 8.31
  curl: version 7.64.0 (CVE-2019-8907, CVE-2019-3822, CVE-2019-3823)
  dhcpcd: version 7.1.1
  dnsmasq: version 2.80
  docker: version 18.09.3
  e2fsprogs: version 1.45.0
  ethtool: version 5.0
  file: version 5.36 (CVE-2019-8906, CVE-2019-8907)
  freetype: version 2.10.0
  git: version 2.21.0
  glib2: version 2.60.0
  glibc: version 2.29
  glibc-solibs: version 2.29
  glibc-zoneinfo: version 2018i-noarch-1
  gnutls: version 3.6.6
  gtk+3: version 3.24.7
  infozip: version 6.0 (CVE-2014-8139, CVE-2014-8140, CVE-2014-8141, CVE-2016-9844, CVE-2018-18384, CVE-2018-1000035)
  iproute2: version 4.20.0
  iputils: version 20180629
  jemalloc: version 4.5.0
  jq: version 1.6 (rev2)
  kernel-firmware: version 20190314_7bc2464
  kmod: version 26
  libaio: version 0.3.112
  libcap-ng: version 0.7.9
  libgpg-error: version 1.36
  libjpeg-turbo: version 2.0.2
  libpsl: version 0.20.2
  libssh2: version 1.8.1 (CVE-2019-3855, CVE-2019-3856, CVE-2019-3857, CVE-2019-3858, CVE-2019-3859, CVE-2019-3860, CVE-2019-3861, CVE-2019-3862, CVE-2019-3863)
  libvirt: version 5.1.0
  libwebp: version 1.0.2
  libwebsockets: version 3.1.0
  libxkbfile: version 1.1.0
  libxml2: version 2.9.9
  libxslt: version 1.1.33
  libzip: version 1.5.2
  libXcomposite: version 0.4.5
  libXcursor: version 1.2.0
  libXdamage: version 1.1.5
  libXdmcp: version 1.1.3
  libXext: version 1.3.4
  libXft: version 2.3.3
  libXmu: version 1.1.3
  libXrandr: version 1.5.2
  libXxf86dga: version 1.1.5
  lvm2: version 2.02.177
  lzip: version 1.21
  mcelog: version 162
  mozilla-firefox: version 66.0 (CVE-2018-18500, CVE-2018-18504, CVE-2018-18505, CVE-2018-18503, CVE-2018-18506, CVE-2018-18502, CVE-2018-18501, CVE-2018-18356, CVE-2019-5785, CVE-2018-18511, CVE-2019-9790, CVE-2019-9791, CVE-2019-9792, CVE-2019-9793, CVE-2019-9794, CVE-2019-9795, CVE-2019-9796, CVE-2019-9797, CVE-2019-9798, CVE-2019-9799, CVE-2019-9801, CVE-2019-9802, CVE-2019-9803, CVE-2019-9804, CVE-2019-9805, CVE-2019-9806, CVE-2019-9807, CVE-2019-9809, CVE-2019-9808, CVE-2019-9789, CVE-2019-9788)
  mpfr: version 4.0.2
  ncompress: version
  ncurses: version 6.1_20190223
  nghttp2: version 1.37.0
  ntp: version 4.2.8p13 (CVE-2019-8936)
  oniguruma: version 6.9.1 (CVE-2017-9224, CVE-2017-9225, CVE-2017-9226, CVE-2017-9227, CVE-2017-9228, CVE-2017-9229)
  openssl: version 1.1.1b (CVE-2019-1559)
  openssl-solibs: version 1.1.1b (CVE-2019-1559)
  p11-kit: version 0.23.15
  pcre: version 8.43
  php: version 7.2.15
  pixman: version 0.38.0
  rsyslog: version 8.1903.0
  sqlite: version 3.27.2
  sudo: version 1.8.27
  sysvinit: version 2.94
  sysvinit-scripts: version 2.1-noarch-26
  talloc: version 2.1.16
  tar: version 1.32
  tdb: version 1.3.18
  tevent: version 0.9.39
  ttyd: version 20190223
  util-linux: version 2.33.1
  wget: version 1.20.1
  xprop: version 1.2.4
  xtrans: version 1.4.0

Linux kernel:
  version: 4.19.31
  CONFIG_X86_MCELOG_LEGACY: Support for deprecated /dev/mcelog character device
  OOT Intel 10Gbps network driver: ixgbe: version 5.5.5

Management:
  fstab: mount USB flash boot device with 'flush' keyword
  rc.sshd: only copy new key files to USB flash boot device
  rc.nginx: implement better status wait loop - thanks ljm42
  smartmontools: update drivedb and hwdata/{pci.ids,usb.ids,oui.txt,manuf.txt}
  webgui: Per Device Font Size Setting
  webgui: Syslog: add '' entry in local folder selection
  webgui: Syslog: included rsyslog.d conf files and chmod 0666
  webgui: Syslog: added log rotation settings
  webgui: Open link under Unraid logo in new window
  webgui: Use cookie for display setting font size
  webgui: prevent dashboard bar animations from queuing up on inactive browser tab
  webgui: Replace string "OS X" with "macOS"
  webgui: Updated Unraid icons
  webgui: Switch plugins to a compressed download
  30. 3 points
I'm on it, guys. Looks like there has been a switch to .NET Core, which requires changes to the code, which I've now made. New image is building now.
  31. 3 points
    Hi Docker forum Just thought I'd share with you all, some material design icons that I made today for the containers I use in my system: https://imgur.com/a/ehRQ3 I couldn't stand the default smokeping icon looking so bad... So while I only wanted to change that single icon, it looked so nice that I had to rip out all of the other icons to make them look uniform Feel free to use any of these - I could probably add to this album if anyone really wants some more done in a similar style (The Plex icon reminds me a lot of the LSIO's Plex Request logo but it was the best I could do!) They're all 512x512 .png files & look wicked on the unRAID docker page
  32. 3 points
  33. 3 points
btw, just to be clear to anybody here, this is no longer the case: DNS is used 100% over the VPN only. The only time it isn't is for the initial lookup of the endpoint you are connecting to (which is then cached in the hosts file). If the VPN goes down, name queries do not go over the LAN (iptables is set to not allow port 53); once the VPN tunnel is re-established (by looking up the endpoint using the hosts file), name server queries resume over the VPN tunnel. Zero leakage.
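The port 53 lockdown described here can be sketched with a couple of iptables rules. This is a simplified illustration of the idea, not the container's actual ruleset; the interface names tun0 (VPN tunnel) and eth0 (LAN) are assumptions:

```shell
# Allow DNS queries out through the VPN tunnel only...
iptables -A OUTPUT -o tun0 -p udp --dport 53 -j ACCEPT
iptables -A OUTPUT -o tun0 -p tcp --dport 53 -j ACCEPT
# ...and drop any DNS query that would leave via the LAN interface,
# so nothing leaks while the tunnel is down.
iptables -A OUTPUT -o eth0 -p udp --dport 53 -j DROP
iptables -A OUTPUT -o eth0 -p tcp --dport 53 -j DROP
```

The initial endpoint lookup works because the hosts file entry short-circuits DNS entirely; no port 53 traffic is needed to re-establish the tunnel.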
  34. 2 points
    Overview: Support for Docker image arch-jellyfin in the binhex repo. Application: Jellyfin - https://github.com/jellyfin/jellyfin Docker Hub: https://hub.docker.com/r/binhex/arch-jellyfin/ GitHub: https://github.com/binhex/arch-jellyfin Documentation: https://github.com/binhex/documentation If you appreciate my work, then please consider buying me a beer 😁 For other Docker support threads and requests, news and Docker template support for the binhex repository please use the "General" thread here
  35. 2 points
    Sure. 2,000 blu-rays backed up at 50GB / disk. When you have well over $20,000 worth of blu ray disks, surely you would want a backup of your data, right? 🤣
  36. 2 points
Roccat Juke is what I use. Plug and play. It costs about $15.
  37. 2 points
  38. 2 points
I am starting a series of videos on pfSense. Both physical and VM instances will be used, covering topics such as using a failover physical pfSense to work with a VM pfSense, setting up OpenVPN (both an OpenVPN server and multiple OpenVPN clients), using VLANs, blocking ads, setting up Squid and SquidGuard, and other topics.

This first part is an introduction: it gives an overview of the series of videos and talks about pfSense and its advantages.
Part 2 is on hardware and network equipment
Part 3 - install and basic config
Part 4 - customize, backup and update
Part 5 - DHCP, interfaces and WIFI
Part 6 - pfSense and DNS
Part 7 - firewall rules, port forwarding/NAT, aliases and UPnP
Part 8 - open NAT for Xbox One and PS4
  39. 2 points
  40. 2 points
    I used the following however, I am unable to provide the Time Machine screenshot as I did not configure my VPN to allow discovery. -MW
  41. 2 points
    OK, announcement. Any stupid posts asking why this isn't released, if they can build it themselves, etc etc prepare to hear my wrath. We're not complete noobs at this, @bass_rock and I wrote this, and when it's broken we'll do our best to fix it, any amount of asking is not going to speed it up. If anyone thinks they can do better, then by all means write your own version, but as far as I can remember nobody else did, which is why we did it. We're working on it. ETA: I DON'T KNOW When that changes I'll update the thread. My working theory why this isn't working is that there's been a major GCC version upgrade between v8.3 and v9.1 so I'm working on trying to downgrade GCC which is difficult as I can't find a Slackware package for it, so I'm trying to build it from source, and make some Slackware packages so I can keep them as static sources, which is not as easy as I hoped.
  42. 2 points
    Any LSI with a SAS2008/2308/3008 chipset in IT mode, e.g., 9201-8i, 9211-8i, 9207-8i, 9300-8i, etc and clones, like the Dell H200/H310 and IBM M1015, these latter ones need to be crossflashed.
  43. 2 points
OK, thanks for the above. It looks like you aren't low on space, so your issue must be down to one or both of the following:

1. corruption of the cache drive
2. corruption of the docker img (this contains all docker images and containers)

So it's most probably docker image corruption. You will need to stop the docker service, delete your docker image and then re-create it, then restore your containers. Steps to do this (stolen from a previous post):-
  44. 2 points
Any LSI Host Bus Adapter based on the LSI SAS2008/2308/3008 chipsets is the recommended SATA/SAS PCIe card. You can find them inexpensively on eBay, but avoid Chinese knock-offs. I have the Dell H310 (LSI 9211-8i clone) in my server: eight additional SATA ports for $30.

Commonly used cards are:
LSI 9211-8i (PCIe 2.0)
LSI 9207-8i (PCIe 3.0)
LSI 9300-8i (PCIe 3.0)
Dell H200/H310 (PCIe 2.0)
IBM M1015 (PCIe 2.0)

Flash the card to IT mode without the BIOS. Many cards can be found on eBay pre-flashed for $50-$60 if you don't want to deal with that.
  45. 2 points
    If bonienl doesn't pick it up, I'll have a gander at it.
  46. 2 points
I custom built a system designed to deliver Unraid VMs. It's a headless system that has multiple dedicated GPUs and also runs lots of dockers and a VM for my IP cameras. I also have a VM for an HTPC that is connected to my lounge room. It's a pretty amazing Threadripper-based system, and Unraid really does make it easy to manage and expand as our needs change. I told myself that I would buy a license when my trial expired... the system hasn't crashed or been unstable, so I barely noticed that my trial expired 3 months ago. It still did not crash, but I decided today was the day I would support a pretty fantastic product. Thanks, and keep up the great work!
  47. 2 points
  48. 2 points
Yea. I will need testers shortly. I feel like I should create a separate thread for this so it's not hijacking spaceinvader's. Sent from my ONE E1003 using Tapatalk
  49. 2 points
Just a note for anyone trying to get Wallabag working - I have finally had some luck getting the official docker repository set up on unRAID. https://hub.docker.com/r/wallabag/wallabag/

For smaller installs, the basic version running on SQLite is quite nice. However, configuring it took some learning. When importing the container through CA, I mapped two different locations which were noted on the dockerfile:

/var/www/wallabag/data was mapped to /mnt/user/appdata/wallabag/data
/var/www/wallabag/web/assets/images was mapped to /mnt/user/appdata/wallabag/images

Then I had CA create the container. Finally, I came up against a huge hurdle with the initial release of Wallabag 2.3.1 - it kept loading with no CSS, essentially unformatted text only on the screen. It turns out that they may have made a mistake in how your individual URL for Wallabag gets populated on the screen. So, I noticed that you can set that variable by editing the container. In the Wallabag container, click "Advanced Options", then in the "extra parameters" section I added this line, as it was mentioned in the repo info:

-e SYMFONY__ENV__DOMAIN_NAME=https://my.wallabag.url

As I'm using Wallabag behind a reverse proxy, I found that I needed my domain name rather than the IP address in that new parameter. Additionally, I had to change the "WebUI" to match the same address. Finally, it worked! Official Wallabag repo on unRAID, all nice and pretty.

A little note of caution here, as this post may exist for a while: I totally feel like the DOMAIN_NAME issue is a simple mistake, as the current docker build is only 8 days old as of this post. Changing that may not be a requirement in the future, but having to work around it was a great learning experience!
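For reference, the CA setup described here corresponds roughly to the following docker run invocation. This is a sketch under my assumptions: the host port, host paths and URL are placeholders to adjust for your own system:

```shell
# Run the official wallabag image with the volume mappings and the
# DOMAIN_NAME workaround from the post (placeholders: adjust paths,
# host port and URL for your setup).
docker run -d --name wallabag \
  -e SYMFONY__ENV__DOMAIN_NAME=https://my.wallabag.url \
  -v /mnt/user/appdata/wallabag/data:/var/www/wallabag/data \
  -v /mnt/user/appdata/wallabag/images:/var/www/wallabag/web/assets/images \
  -p 8085:80 \
  wallabag/wallabag
```

If you run it behind a reverse proxy as in the post, DOMAIN_NAME must be the externally visible URL, not the internal IP, or the CSS assets will be requested from the wrong address.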
  50. 2 points
I also wanted to get the ZFS Event Daemon (zed) working on my unRAID setup. Most of the files needed are already built into steini84's plugin (thanks!) but zed.rc needs to be copied into the file system at each boot. I created a folder /boot/config/zfs-zed/ and placed zed.rc in there - you can get the default from /usr/etc/zfs/zed.d/zed.rc. Add the following lines to your go file:

#Start ZFS Event Daemon
cp /boot/config/zfs-zed/zed.rc /usr/etc/zfs/zed.d/
/usr/sbin/zed &

To use the built-in notifications in unRAID, and to avoid having to set up a mail server or relay, set your zed.rc with the following options:

ZED_EMAIL_PROG="/usr/local/emhttp/webGui/scripts/notify"
ZED_EMAIL_OPTS="-i warning -s '@SUBJECT@' -d '@SUBJECT@' -m \"\`cat $pathname\`\""

$pathname contains the verbose output from ZED, which will be sent in the body of an email alert from unRAID. I have this set to an alert level of 'warning' as I have unRAID configured to always email me for warnings. You'll also want to adjust your email address, verbosity level, and set up a debug log if desired. Either place the files and manually start zed, or reboot the system for this to take effect.

Pro tip: if you want to test the notifications, zed will alert on a scrub finish event. If you're like me and only have large pools that take hours or days to scrub, you can set up a quick test pool like this:

truncate -s 64M /root/test.img
zpool create test /root/test.img
zpool scrub test

When you've finished testing, just destroy the pool.