Leaderboard


Popular Content

Showing content with the highest reputation since 12/14/18 in Posts

  1. 18 points
    Sneak peek, Unraid 6.8. The image is a custom "case image" I uploaded.
  2. 16 points
    Note: this community guide is offered in the hope that it is helpful, but comes with no warranty/guarantee/etc. Follow at your own risk.

    What can you do with WireGuard? Let's walk through each of the connection types:
    - Remote access to server: Use your phone or computer to remotely access your Unraid server, including Unraid administration via the webgui and access to dockers, VMs, and network shares as though you were physically connected to the network.
    - Remote access to LAN: Builds on "Remote access to server", allowing you to access your entire LAN as well.
    - Server to server access: Allows two Unraid servers to connect to each other.
    - LAN to LAN access: Builds on "Server to server access", allowing two entire networks to communicate. May require additional settings, TBD.
    - Server hub & spoke access: Builds on "Remote access to server", except that all of the VPN clients can connect to each other as well. Note that all traffic passes through the server.
    - LAN hub & spoke access: Builds on "Server hub & spoke access", allowing you to access your entire LAN as well.
    - VPN tunneled access: Route traffic for specific Dockers and VMs through a commercial WireGuard VPN provider (see this guide).
    - Remote tunneled access: Securely access the Internet from untrusted networks by routing all of your traffic through the VPN and out Unraid's Internet connection.

    In this guide we will walk through how to set up WireGuard so that your trusted devices can VPN into your home network to access Unraid and the other systems on your network.

    Prerequisites
    - You must be running Unraid 6.8 with the Dynamix WireGuard plugin from Community Apps.
    - Be aware that WireGuard is technically classified as experimental. It has not gone through a full security audit yet and has not reached 1.0 status. But it is the first open source VPN solution that is extremely simple to install, fast, and designed from the ground up to be secure.
    - Understand that giving someone VPN access to your LAN is just like giving them physical access to your LAN, except they have it 24x7 when you aren't around to supervise. Only give access to people and devices that you trust, and make certain that the configuration details (particularly the private keys) are not passed around insecurely. Regardless of the "connection type" you choose, assume that anyone who gets access to this configuration information will be able to get full access to your network.
    - This guide works great for simple networks. But if you have Dockers with custom IPs or VMs with strict networking requirements, please see the "Complex Networks" section below.
    - Unraid will automatically configure your WireGuard clients to connect to Unraid using your current public IP address, which will work until that IP address changes. To future-proof the setup, you can use Dynamic DNS instead. There are many ways to do this; probably the easiest is described in this 2 minute video from SpaceInvaderOne.
    - If your router has UPnP enabled, Unraid will be able to automatically forward the port for you. If not, you will need to know how to configure your router to forward a port.
    - You will need to install WireGuard on a client system. It is available for many operating systems: https://www.wireguard.com/install/ Android or iOS make good first systems, because you can get all the details via QR code.

    Setting up the Unraid side of the VPN tunnel

    First, go to Settings -> Network Settings -> Interface eth0. If "Enable bridging" is "Yes", then WireGuard will work as described below.
    If bridging is disabled, then none of the "Peer type of connections" that involve the local LAN will work properly. As a general rule, bridging should be enabled in Unraid.

    If UPnP is enabled on your router and you want to use it in Unraid, go to Settings -> Management Access and confirm "Use UPnP" is set to Yes.

    On Unraid 6.8, go to Settings -> VPN Manager:
    - Give the VPN Tunnel a name, such as "MyHome VPN".
    - Press "Generate Keypair". This will generate a set of public and private keys for Unraid. Take care not to inadvertently share the private key with anyone (such as in a screenshot like this).
    - By default the local endpoint will be configured with your current public IP address. If you chose to set up DDNS earlier, change the IP address to the DDNS address.
    - Unraid will recommend a port to use. You typically won't need to change this unless you already have WireGuard running elsewhere on your network.
    - Hit Apply.

    If Unraid detects that your router supports UPnP, it will automatically set up port forwarding for you. If you see a note that says "configure your router for port forwarding..." you will need to login to your router and set up the port forward as directed by the note. Some tips for setting up the port forward in your router:
    - Both the external (source) and internal (target/local) ports should be set to the value Unraid provides. If your router interface asks you to put in a range, use the same port for both the starting and ending values.
    - Be sure to specify that it is a UDP port and not a TCP port.
    - For the internal (target/local) address, use the IP address of your Unraid system shown in the note.
    - Google can help you find instructions for your specific router, e.g. "how to port forward Asus RT-AC68U".

    Note that after hitting Apply, the public and private keys are removed from view. If you ever need to access them, click the "key" icon on the right hand side. Similarly, you can access other advanced settings by pressing the "down chevron" on the right hand side. They are beyond the scope of this guide, but you can turn on help to see what they do.

    In the upper right corner of the page, change the Inactive slider to Active to start WireGuard. You can optionally set the tunnel to Autostart when Unraid boots.

    Defining a Peer (client)
    - Click "Add Peer".
    - Give it a name, such as "MyAndroid".
    - For the initial connection type, choose "Remote access to LAN". This will give your device access to Unraid and other items on your network.
    - Click "Generate Keypair" to generate public and private keys for the client. The private key will be given to the client / peer, but take care not to share it with anyone else (such as in a screenshot like this).
    - For an additional layer of security, click "Generate Key" to generate a preshared key. Again, this should only be shared with this client / peer.
    - Click Apply.

    Note: Technically, the peer should generate these keys and not give the private key to Unraid. You are welcome to do that, but it is less convenient as the config files Unraid generates will not be complete and you will have to finish configuring the client manually.

    Configuring a Peer (client)

    Click the "eye" icon to view the peer configuration. If the button is not clickable, you need to apply or reset your unsaved changes first. If you are setting up a mobile device, choose the "Create from QR code" option in the mobile app and take a picture of the QR code. Give it a name and make the connection.
    The VPN tunnel starts almost instantaneously; once it is up you can open a browser and connect to Unraid or another system on your network. Be careful not to share screenshots of the QR code with anyone, or they will be able to use it to access your VPN.

    If you are setting up another type of device, download the file and transfer it to the remote computer via trusted email or dropbox, etc. Then unzip it and load the configuration into the client. Protect this file; anyone who has access to it will be able to access your VPN.

    About DNS

    The 2019.10.20 release of the Dynamix WireGuard plugin includes a "Peer DNS Server" option (thanks @bonienl!)

    If you are having trouble with DNS resolution on the WireGuard client, return to the VPN Manager page in Unraid, switch from Basic to Advanced mode, add the IP address of your desired DNS server into the "Peer DNS Server" field, then install the updated config file on the client. You may want to use the IP address of the router on the LAN you are connecting to, or you could use a globally available IP like 8.8.8.8.

    This is required for "Remote tunneled access" mode, if the client's original DNS server is no longer accessible after all traffic is routed through the tunnel. If you are using any of the split tunneling modes, adding a DNS server may provide name resolution on the remote network, although you will lose name resolution on the client's local network in the process. The simplest solution is to add a hosts file on the client that provides name resolution for both networks.

    Complex Networks (added Oct 24)

    The instructions above should work out of the box for simple networks. With "Use NAT" defaulted to Yes, all network traffic on Unraid uses Unraid's IP, and that works fine if you have a simple setup. However, if you have Dockers with custom IPs or VMs with strict networking requirements, things may not work right (I know, kind of vague, but feel free to read the two WireGuard threads for examples). A partial solution is:
    - In the WireGuard config, set "Use NAT" to No.
    - In your router, add a static route that lets your network access the WireGuard "Local tunnel network pool" through the IP address of your Unraid system. For instance, for the default pool of 10.253.0.0/24 you should add this static route:
      Network: 10.253.0.0/16 (aka 10.253.0.0 with subnet 255.255.0.0)
      Gateway: <IP address of your Unraid system>
      (Note that this covers the entire class B 10.253.x.x network, so you can add other WireGuard tunnels without having to modify your router setup again.)

    With these changes, your network should work normally. However, your WireGuard clients still may not be able to access Dockers on custom IPs or VMs. If you find a solution to this, please comment!
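    For reference, the peer config file Unraid generates looks roughly like this (a hypothetical sketch; your keys, tunnel addresses, endpoint and AllowedIPs will differ depending on the connection type you chose):

        [Interface]
        # the peer's own private key and its address in the tunnel pool
        PrivateKey = <peer private key>
        Address = 10.253.0.2/32

        [Peer]
        # Unraid's public key, plus the optional preshared key
        PublicKey = <Unraid public key>
        PresharedKey = <preshared key>
        # your public IP or DDNS name, and the forwarded UDP port
        Endpoint = your-ddns-name.example.com:51820
        # for "Remote access to LAN": the tunnel pool plus your LAN subnet
        AllowedIPs = 10.253.0.0/24, 192.168.1.0/24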
  3. 15 points
    ***Update***: Apologies, it seems like there was an update to the Unraid forums which removed the carriage returns in my code blocks. This was causing people to get errors when typing commands verbatim. I've fixed the code blocks below and all should be Plexing perfectly now.
    ===========
    Granted this has been covered in a few other posts but I just wanted to have it with a little bit of layout and structure. Special thanks to [mention=9167]Hoopster[/mention] whose post(s) I took this from.

    What is Plex Hardware Acceleration?

    When streaming media from Plex, a few things are happening. Plex will check against the device trying to play the media:
    - Media is stored in a compatible file container
    - Media is encoded in a compatible bitrate
    - Media is encoded with compatible codecs
    - Media is a compatible resolution
    - Bandwidth is sufficient

    If all of the above is met, Plex will Direct Play, or send the media directly to the client without being changed. This is great in most cases as there will be very little if any overhead on your CPU. This should be okay in most cases, but you may be accessing Plex remotely or on a device that is having difficulty with the source media. You could either manually convert each file or get Plex to transcode the file on the fly into another format to be played.

    A simple example: Your source file is stored in 1080p. You're away from home and you have a crappy internet connection. Playing the file in 1080p is taking up too much bandwidth, so to get a better experience you can watch your media in glorious 240p without stuttering / buffering on your little mobile device by getting Plex to transcode the file first. This is because a 240p file will require considerably less bandwidth compared to a 1080p file.

    The issue is that depending on which format you're transcoding from and to, this can absolutely pin all your CPU cores at 100%, which means you're gonna have a bad time. Fortunately Intel CPUs have a little thing called Quick Sync, which is their native hardware encoding and decoding core. This can dramatically reduce the CPU overhead required for transcoding, and Plex can leverage this using their Hardware Acceleration feature.

    How Do I Know If I'm Transcoding?

    You're able to see how media is being served by first playing something on a device. Log into Plex and go to Settings > Status > Now Playing. As you can see, this file is being direct played, so there's no transcoding happening. If you see (throttled) it's a good sign. It just means that your Plex Media Server is able to perform the transcode faster than is necessary.

    To initiate some transcoding, go to where your media is playing. Click on Settings > Quality > Show All > Choose a Quality that isn't the Default one. If you head back to the Now Playing section in Plex you will see that the stream is now being Transcoded. I have Quick Sync enabled, hence the "(hw)" which stands for, you guessed it, Hardware. "(hw)" will not be shown if Quick Sync isn't being used in transcoding.

    PreRequisites
    1. A Plex Pass - If you require Plex Hardware Acceleration, test to see if your system is capable before buying a Plex Pass.
    2. Intel CPU that has Quick Sync Capability - Search for your CPU using Intel ARK
    3. Compatible Motherboard

    You will need to enable iGPU on your motherboard BIOS. In some cases this may require you to have the HDMI output plugged in and connected to a monitor in order for it to be active.
    If you find that this is the case on your setup, you can buy a dummy HDMI doo-dad that tricks your unRAID box into thinking that something is plugged in. Some machines like the HP MicroServer Gen8 have iLO / IPMI which allows the server to be monitored / managed remotely. Unfortunately this means that the server has 2 GPUs, and ALL GPU output from the server is passed through the ancient Matrox GPU. So as far as any OS is concerned, even though the Intel CPU supports Quick Sync, the Matrox one doesn't. =/ You'd have better luck using the new unRAID Nvidia Plugin.

    Check Your Setup

    If your config meets all of the above requirements, give these commands a shot; you should know straight away if you can use Hardware Acceleration. Login to your unRAID box using the GUI and open a terminal window, or SSH into your box if that's your thing. Type:

        cd /dev/dri
        ls

    If you see an output like the one above, your unRAID box has Quick Sync enabled. The two items we're interested in specifically are card0 and renderD128. If you can't see them, not to worry, type this:

        modprobe i915

    There should be no return or errors in the output. Now again run:

        cd /dev/dri
        ls

    You should see the expected items, i.e. card0 and renderD128.

    Give your Container Access

    Lastly we need to give our container access to the Quick Sync device. I am going to passive-aggressively mention that they are indeed called containers and not dockers. Dockers is a manufacturer of boots and pants and has nothing to do with virtualization or software development, yet. Okay, rant over. We need to do this because the Docker host and its underlying containers don't have access to anything on unRAID unless you give it to them. This is done via Paths, Ports, Variables, Labels or, in this case, Devices. We want to provide our Plex container with access to one of the devices on our unRAID box. We need to change the relevant permissions on our Quick Sync device, which we do by typing into the terminal window:

        chmod -R 777 /dev/dri

    Once that's done, head over to the Docker tab, click on your Plex container, scroll to the bottom and click on Add another Path, Port, Variable. Select Device from the drop down and enter the following:
    Name: /dev/dri
    Value: /dev/dri
    Click Save followed by Apply.

    Log back into Plex and navigate to Settings > Transcoder. Click on the button to SHOW ADVANCED and enable "Use hardware acceleration where available". You can now do the same test we did above by playing a stream, changing its Quality to something that isn't its original format and checking the Now Playing section to see if Hardware Acceleration is enabled. If you see "(hw)" congrats! You're using Quick Sync and Hardware Acceleration.

    Persist your config

    On reboot unRAID will not run those commands again unless we put them in our go file. So when ready, type into terminal:

        nano /boot/config/go

    Add the following lines to the bottom of the go file:

        modprobe i915
        chmod -R 777 /dev/dri

    Press Ctrl X, followed by Y to save your go file. And you should be golden!
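    Putting the whole check together, here is a one-shot you can paste into the terminal (a sketch of the steps above; it assumes an Intel iGPU served by the i915 driver):

        #!/bin/bash
        # load the Intel graphics driver if the device nodes are missing
        [ -e /dev/dri/renderD128 ] || modprobe i915
        # list what we have; expect card0 and renderD128
        ls /dev/dri
        # open up permissions so the Plex container can use the device
        chmod -R 777 /dev/dri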
  4. 12 points
    Overview of what Macinabox does. This is a container that is designed to help make installing a macOS KVM Virtual Machine very easy. The VM doesn't run in a docker container but runs as a full fat Unraid KVM VM, selectable in the VM tab of the webUI. However, your server's hardware must be 'fairly' modern to run a macOS VM. You are going to need a CPU that supports SSE 4.2 & AVX2 for macOS Mojave and above to work. Both Intel and AMD processors are fine to use.

    To use it, just select the OS type, vdisk type and size (I suggest you use the raw disk type). Then let Macinabox make a vdisk for the install, download the recovery media and Clover boot-loader, and create a VM xml file that is preconfigured to work. (The xml files created will have unique uuids and network mac addresses.) Sit back and let the container do its stuff. Note: to see the progress of the container, look at the log whilst it runs. You will know when it has finished as you will see a message saying to stop then start the array. This container doesn't have a webUI (clicking on the webUI button of this container will just take you to a video of how to use this container).
    - - - - - - - - - - -
    So after the container has done its stuff, stop the array then start it again and the VM will become visible in the Unraid VM manager (you will not see it if you don't do this). Click start to start the VM and you will boot into a Clover boot-loader. Then press enter to continue to load the recovery media. Go to Disk Utility and format the vdisk, then close Disk Utility. Select re-install macOS, then sit back and wait until done. Please be patient when installing, as the install speed will depend on your internet connection and how busy the Apple servers are.

    After installing the VM, don't run the container again or else it will overwrite the vdisk with the install on it. (I will change this so it can't happen, soon.) Probably best after installing to remove the container for now, just to be safe.

    edit - I have now added checks to stop the container re-downloading install media if run again. It will also check for an existing vdisk and, if found, not create another and therefore not overwrite it. Same goes for the xml file. However, if the container is run again it will download fresh Clover and OVMF files. I have done this so people can easily update Clover and OVMF files if needed.
  5. 8 points
  6. 8 points
    PSA. It seems openvpn pushed another broken bin, tagged 2.7.3. I get the same error with it as I did with the previously pulled 2.7.2. While they/we try to figure it out, you can change your image to "linuxserver/openvpn-as:2.6.1-ls11" and it should work.
  7. 7 points
    Support for Nginx Proxy Manager docker container
    Application Name: Nginx Proxy Manager
    Application Site: https://nginxproxymanager.jc21.com
    Docker Hub: https://hub.docker.com/r/jlesage/nginx-proxy-manager/
    Github: https://github.com/jlesage/docker-nginx-proxy-manager
    Make sure to look at the complete documentation, available on Github! Post any questions or issues relating to this docker in this thread.
  8. 7 points
    I've been doing this for a long time now via command line with my important VMs. First, my VM vdisks are in the domains share, where I have created the individual VM directory as a btrfs subvolume instead of a normal directory, i.e.:

        btrfs subv create /mnt/cache/domains/my-vm

    results in:

        /mnt/cache/domains/my-vm   <--- a btrfs subvolume

    Then let vm-manager create vdisks in here normally and create your VM. Next, when I want to take a snapshot, I hibernate the VM (win10) or shut it down. Then from the host:

        btrfs subv snapshot -r /mnt/cache/domains/my-vm /mnt/cache/domains/my-vm/backup

    Of course you can name the snapshot anything, perhaps include a timestamp. In my case, after taking this initial backup snapshot, a subsequent backup will do something like this:

        btrfs subv snapshot -r /mnt/cache/domains/my-vm /mnt/cache/domains/my-vm/backup-new

    Then I send the block differences to a backup directory on /mnt/disk1:

        btrfs send -p /mnt/cache/domains/my-vm/backup /mnt/cache/domains/my-vm/backup-new | pv | btrfs receive /mnt/disk1/Backup/domains/my-vm

    and then delete backup and rename backup-new to backup. What we want to do is add an option in VM manager that says, "Create snapshot upon shut-down or hibernation" and then add a nice GUI to handle snapshots and backups. I have found btrfs send/recv somewhat fragile, which is one reason we haven't tackled this yet. Maybe there's some interest in a blog post describing the process along with the script I use?
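    Tying those steps together, an incremental backup cycle might look like this (a sketch under the naming above; adjust paths to suit, and note the very first run needs a plain non-incremental send without -p):

        #!/bin/bash
        # incremental btrfs backup of a VM subvolume (VM must be shut down or hibernated)
        SRC=/mnt/cache/domains/my-vm
        DST=/mnt/disk1/Backup/domains/my-vm

        # take a new read-only snapshot of the current state
        btrfs subvolume snapshot -r "$SRC" "$SRC/backup-new"
        sync
        # send only the blocks changed since the previous snapshot
        btrfs send -p "$SRC/backup" "$SRC/backup-new" | pv | btrfs receive "$DST"
        # rotate: drop the old snapshot and promote the new one
        btrfs subvolume delete "$SRC/backup"
        mv "$SRC/backup-new" "$SRC/backup"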
  9. 6 points
    On Friday, August 30th, using random.org's true random number generator, the following 14 forum users were selected as winners of the limited-edition Unraid case badges: #74 @Techmagi #282 @Carlos Eduardo Grams #119 @mucflyer #48 @Ayradd #338 @hkinks #311 @coldzero2006 #323 @DayspringGaming #192 @starbix #159 @hummelmose #262 @JustinAiken #212 @fefzero #166 @Andrew_86 #386 @plttn #33 @aeleos (Note: the # corresponds to the forum post # selected in this thread.) Congratulations to all of the winners and a huge thank you to everyone else who entered the giveaway and helped us celebrate our company birthday! Cheers, Spencer
  10. 5 points
    I have made an updated video guide for setting up this great container. It covers setting up the container, port forwarding, and setting up clients on Windows, macOS, Linux (Ubuntu MATE) and on cell phones - Android and iOS. Hope this guide helps people new to setting up OpenVPN.
  11. 5 points
    I'm on holiday with my family. I have tried to compile it several times but there are some issues that need working on. It will be ready when it's ready; a week for something that is free is no time at all. We're not releasing the source scripts for reasons I outlined in the original script, but if someone isn't happy with the timescales that we work on, then they are more than welcome to compile and create this solution themselves and debug any issues. The source code is all out there. I've made my feelings about this sort of thing well known before, but I will outline it again. We're volunteers with families, jobs, wives and lives to lead. Until the day comes where working on this stuff pays our mortgages, feeds our kids and allows us to resign our full time jobs, things happen at our pace only. We have a discord channel that people can join, and if they want to get involved then just ask, but strangely whenever I offer, the standard reply is that people don't have enough free time. If that is the case, fine, but don't assume any of us have any more free time than you. We don't; we just choose to dedicate what little free time we have to this project.
  12. 4 points
  13. 4 points
    Hi all, Would be lovely to have settings to configure access to shares on an individual user's page as well. Depending on the use-case, it's easier to configure things on a per-share basis, or a per-user basis. Would be nice to have the option, see wonderfully artistic rendering below:
  14. 4 points
    I understand you don't want to compile yourself, but I don't particularly want to compile any more than the existing four/five builds I do with every release. If these kernel modifications are needed then they should be pushed upstream to LimeTech.
  15. 4 points
    Here is a video that shows what to do if you have a data drive that fails and you want to swap/upgrade it, and the disk that you want to replace it with is larger than your parity drive. So this shows the swap parity procedure. Basically you add the new larger drive, then have Unraid copy the existing parity data over to the new drive. This frees up the old parity drive so it can then be used to rebuild the data of the failed drive onto the old parity drive. Hope this is useful.
  16. 4 points
    Well, it's finally happened: Unraid 6.x Tunables Tester v4.0 The first post has been updated with the release notes and download. Paul
  17. 4 points
    It's got a big LinuxServer logo at the top of the thread and lots of red warnings, and the first two posts go into some detail about how it works. The plugin is installed from Community Applications, with an author of LinuxServer.io How much clearer can we make it? Sent from my Mi A1 using Tapatalk
  18. 4 points
    Here is my banner. Think it fits unraid well!
  19. 4 points
    It's not an issue with stock Unraid, the issue is there isn't a patch available for the runc version. Due to the recent docker update for security reasons, Nvidia haven't caught up yet. Sent from my Mi A1 using Tapatalk
  20. 4 points
    Programs run as abc, not www-data. Pretty sure you need to specify the PHP interpreter too. So, it should look like this:

        sudo -u abc php /config/www/nextcloud/occ db:convert-filecache-bigint
  21. 3 points
    Note: To view the application lists before installing unRaid, click HERE

    Community Applications (aka CA)

    This thread is rather long (and is mostly all off-topic), and it is NOT necessary to read it in order to utilize Community Applications (CA). Just install the plugin, go to the Apps tab and enjoy the freedom. If you find an issue with CA, then don't bother searching for answers in this thread, as all issues (when they have surfaced) are generally fixed the same day that they are found... (But at least read the preceding post or two on the last page of the thread.) This is, without question, the best supported plugin / addon in the universe - on any platform. Simple interface and easy to use, you will be able to find and install any of the unRaid docker or plugin applications, and also optionally gain access to the entire library of applications available on dockerHub (~1.8 million).

    INSTALLATION

    To install this plugin, paste the following URL into the Plugins / Install Plugin section:

        https://raw.githubusercontent.com/Squidly271/community.applications/master/plugins/community.applications.plg

    After installation, a new tab called "Apps" will appear on your unRaid webGUI. To see what the various icons do, simply press Help or the (?) on unRaid's Tab Bar.

    Note: All screenshots in this post are subject to change as Community Applications continues to evolve.
    - Easily search or browse applications
    - Get full details on the application
    - Easily reinstall previously installed applications
    - And much, much more (including the ability to search for and install any of the containers available on dockerHub (1,000,000+))

    USING CA

    CA also has a dedicated Settings section (click Settings) which will let you fine tune certain aspects of its operation. NOTE: The following video was made prior to the current user interface, so the video will look significantly different from the plugin itself. But it's still worth a watch. Buy Andrew A Beer!

    Note that CA is always (and always will be) compatible with the latest Stable version of unRaid, and the Latest/Next version of unRaid. Intermediate versions of various Release Candidates may or may not be compatible (though they usually are - but, if you have made the decision to run unRaid Next, then you should also ensure that all plugins and unRaid itself (not just CA) are always up to date). Additionally, every attempt is made to keep CA compatible with older versions of unRaid. As of this writing, CA is compatible with all versions of unRaid from 6.4 onward.

    Cookie Note: CA utilizes cookies in its regular operation. Some features of CA may not be available if cookies are not enabled in your browser. No personally identifiable information is ever collected, no cookies related to any software or media stored on your server are ever collected, and none of the cookies are ever transmitted anywhere. Cookies related to the "Look & Feel" of Community Applications will expire after a year. Any other cookies related to the operation of CA are automatically deleted after they are used.
  22. 3 points
    For anyone running into the e1000-82545em bridging-to-br0 weirdness under Catalina, I have a workaround that's working fine for me:
    1. Install AppleIntelE1000e.kext (I'm using the latest build from the fork at https://github.com/chris1111/AppleIntelE1000e) either to /Library/Extensions (the advantage being simplicity; you can install it manually or with the simple KextBeast utility) or by injecting it with Clover (the advantage being that it will likely work while installing macOS or when booted into Recovery Mode).
    2. Change the Interface definition in your XML to use the 'e1000e' virtual NIC: <model type='e1000e'/>

    Having done this, I can bridge to br0 under Catalina without issue, and even access the App Store and use iCloud services. I'm hoping to be able to make 'virtio-net-pci' work one of these days, but no luck so far.
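    For context, the resulting <interface> block in the VM's XML would look something like this (a sketch; the MAC address is illustrative and the bridge name should match your own definition):

        <interface type='bridge'>
          <mac address='52:54:00:12:34:56'/>  <!-- illustrative; keep your VM's own MAC -->
          <source bridge='br0'/>
          <model type='e1000e'/>
        </interface>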
  23. 3 points
    LibreELEC, TBS-OS, Digital Devices v6.8.0rc1 all done. TBS-Crazy-Cat broken at the moment.
  24. 3 points
    The problem was that in 6.8.0-rc1 I could not access the Flash Drive (boot drive) from Krusader. The reason being that Limetech has changed the permissions on the flash drive when it is mounted in this version. The only way in this version (and all future versions) to access the drive is through SMB or as user 'root'. (Understand that they are insistent that they are not going to change this.) Currently, Krusader is being run as user 'nobody'. For anyone else looking for an answer, the answer is "YES". All you have to do is edit the Krusader Docker and change the PUID and PGID to those of the 'root' user. It is currently set for the 'nobody' user. On my system, the 'root' PUID is 0 and PGID is 0. I did run some tests and I did not find any problems after making the changes.
  25. 3 points
    Or, do it the non-destructive way and cover the pin(s) with Kapton tape, which is made for this type of application.
  26. 3 points
    The URLs missing is because of the multitude of mistakes the guys were making on that field; CA is now filling it out for them. Hit apply fix on each of them. The update available constantly is due to a change at dockerhub. Install or update the Auto Update plugin, which will patch the OS for this. Sent from my NSA monitored device
  27. 3 points
    Which may or may not mean it's a good idea to push that version to a production environment. "Stable" unifi software has caused major headaches in the past, I'd much rather wait until it's been running on someone else's system for a while before I trust my multiple sites to it. If wifi goes down, it's a big deal. I'd rather not deal with angry users.
  28. 3 points
    Why is this your first post to our forum? There are solutions to the "dealbreaker" you mention, and probably solutions to any other problem you might encounter. There are a lot of friendly and helpful people here on this forum that give FREE support. Why haven't you taken advantage of it? There is a plugin that will run mover based on how full cache is here: https://forums.unraid.net/topic/70783-plugin-mover-tuning/ Another solution to your problem is more careful consideration of what you cache. Mover can't move to the slower array as fast as you can fill the faster cache, regardless of how frequently mover runs. So not caching some user shares, or not caching very large transfers, for example, are ways to deal with that. I don't get why you haven't taken advantage of our forum. This user community is one of the very best on the internet, and one of the very best features of Unraid.
  29. 3 points
    Attached is a debugged version of this script, modified by me.
    - I've eliminated almost all of the extraneous echo calls.
    - Many of the block outputs have been replaced with heredocs.
    - All of the references to md_write_limit have either been commented out or removed outright.
    - All legacy command substitution has been replaced with modern command substitution.
    - The script locates mdcmd on its own.
    https://paste.ee/p/wcwWV
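    (For anyone unfamiliar with the last term, legacy vs modern command substitution looks like this; the variable name is made up:)

        # legacy command substitution: backticks, hard to read and awkward to nest
        md_path=`which mdcmd`
        # modern command substitution: $( ), nests cleanly
        md_path=$(which mdcmd)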
  30. 3 points
    I've bumped Unmanic to 0.0.1-beta5. This includes the following changes:
    - Modify some historical logging of useful stats. This sets us up to start adding extra info like ETA during the conversion process, as well as the stats mentioned below.
    - Adds new "See All Records" screen. Any suggestions for stats that you would like on this screen would be great. Note that due to the changes in logging, only newly converted items will show here. Old stuff won't, due to missing statistics data. Sorry.
    - Create backups of settings when saving. (There were some cases where the settings were invalid but still saved. This corrupted our data and made it impossible to read. So now we test prior to committing changes.)
    - FFMPEG was causing some errors on certain files. If you have noted any conversion failures in the past, can you please re-test with this version to confirm whether it is now resolved.
    - Log rotation. If you are debugging, you are spewing a crap ton of data to the logs. This update rotates the logs at midnight every day and keeps them for 7 days. Even if you are not debugging, this is much better.

    The next milestone is to add extended functionality to the settings: https://github.com/Josh5/unmanic/milestone/4 This will hopefully be the last major tidy up of core functionality. I think that once this milestone is complete we can safely pull this out of beta and look at things like HW decoding and improving on data that is displayed throughout the WebUI.
  31. 3 points
    I had the opportunity to test the "real world" bandwidth of some commonly used controllers in the community, so I'm posting my results in the hopes that it may help some users choose a controller and others understand what may be limiting their parity check/sync speed. Note that these tests are only relevant for those operations; normal read/writes to the array are usually limited by hard disk or network speed.

    Next to each controller is its maximum theoretical throughput, and my results depending on the number of disks connected. The result is observed parity check speed using a fast SSD-only array with Unraid v6.1.2 (SASLP and SAS2LP tested with v6.1.4 due to performance gains compared with earlier releases). The wattage noted next to some controllers is the measured controller power consumption with all ports in use.

    2 Port Controllers

    SIL 3132 PCIe gen1 x1 (250MB/s)
    1 x 125MB/s
    2 x 80MB/s

    Asmedia ASM1061 PCIe gen2 x1 (500MB/s) - e.g., SYBA SY-PEX40039 and other similar cards
    1 x 375MB/s
    2 x 206MB/s

    4 Port Controllers

    SIL 3114 PCI (133MB/s)
    1 x 105MB/s
    2 x 63.5MB/s
    3 x 42.5MB/s
    4 x 32MB/s

    Adaptec AAR-1430SA PCIe gen1 x4 (1000MB/s)
    4 x 210MB/s

    Marvell 9215 PCIe gen2 x1 (500MB/s) - 2W - e.g., SYBA SI-PEX40064 and other similar cards (possible issues with virtualization)
    2 x 200MB/s
    3 x 140MB/s
    4 x 100MB/s

    Marvell 9230 PCIe gen2 x2 (1000MB/s) - 2W - e.g., SYBA SI-PEX40057 and other similar cards (possible issues with virtualization)
    2 x 375MB/s
    3 x 255MB/s
    4 x 204MB/s

    8 Port Controllers

    Supermicro AOC-SAT2-MV8 PCI-X (1067MB/s)
    4 x 220MB/s (167MB/s*)
    5 x 177.5MB/s (135MB/s*)
    6 x 147.5MB/s (115MB/s*)
    7 x 127MB/s (97MB/s*)
    8 x 112MB/s (84MB/s*)
    *on PCI-X 100MHz slot (800MB/s)

    Supermicro AOC-SASLP-MV8 PCIe gen1 x4 (1000MB/s) - 6W
    4 x 140MB/s
    5 x 117MB/s
    6 x 105MB/s
    7 x 90MB/s
    8 x 80MB/s

    Supermicro AOC-SAS2LP-MV8 PCIe gen2 x8 (4000MB/s) - 6W
    4 x 340MB/s
    6 x 345MB/s
    8 x 320MB/s (205MB/s*, 200MB/s**)
    *on PCIe gen2 x4 (2000MB/s)
    **on PCIe gen1 x8 (2000MB/s)

    Dell H310 PCIe gen2 x8 (4000MB/s) - 6W - LSI 2008 chipset, results should be the same as IBM M1015 and other similar cards
    4 x 455MB/s
    6 x 377.5MB/s
    8 x 320MB/s (190MB/s*, 185MB/s**)
    *on PCIe gen2 x4 (2000MB/s)
    **on PCIe gen1 x8 (2000MB/s)

    LSI 9207-8i PCIe gen3 x8 (4800MB/s) - 9W - LSI 2308 chipset
    8 x 525MB/s+ (*)

    LSI 9300-8i PCIe gen3 x8 (4800MB/s with the SATA3 devices used for this test) - LSI 3008 chipset
    8 x 525MB/s+ (*)
    * maximum read speed of the SSDs used

    SAS Expanders

    HP 6Gb (3Gb SATA) SAS Expander - 11W
    Single Link on Dell H310 (1200MB/s*)
    8 x 137.5MB/s
    12 x 92.5MB/s
    16 x 70MB/s
    20 x 55MB/s
    24 x 47.5MB/s
    Dual Link on Dell H310 (2400MB/s*)
    12 x 182.5MB/s
    16 x 140MB/s
    20 x 110MB/s
    24 x 95MB/s
    * Half 6Gb bandwidth because it only links @ 3Gb with SATA disks

    Intel® RAID SAS2 Expander RES2SV240 - 10W
    Single Link on Dell H310 (2400MB/s)
    8 x 275MB/s
    12 x 185MB/s
    16 x 140MB/s (112MB/s*)
    20 x 110MB/s (92MB/s*)
    Dual Link on Dell H310 (4000MB/s)
    12 x 205MB/s
    16 x 155MB/s (185MB/s**)
    Dual Link on LSI 9207-8i (4800MB/s)
    16 x 275MB/s

    LSI SAS3 expander (included on a Supermicro BPN-SAS3-826EL1 backplane)
    Single Link on LSI 9300-8i (tested with SATA3 devices, max usable bandwidth would be 2200MB/s, but with LSI's Databolt technology we can get almost SAS3 speeds)
    8 x 475MB/s
    12 x 340MB/s
    Dual Link on LSI 9300-8i (tested with SATA3 devices, max usable bandwidth would be 4400MB/s, but with LSI's Databolt technology we can get almost SAS3 speeds; the limit here is going to be the PCIe 3.0 slot, around 6000MB/s usable)
    10 x 510MB/s
    12 x 460MB/s
    * Avoid using slower linking speed disks with expanders, as it will bring total speed down; in this example 4 of the SSDs were SATA2, instead of all SATA3.
    ** Two different boards have consistently different results; I will need to test a third one to see what's normal. 155MB/s is the max on a Supermicro X9SCM-F, 185MB/s on an Asrock B150M-Pro4S.

    Sata 2 vs Sata 3

    I see many times on the forum users asking if changing to Sata 3 controllers or disks would improve their speed. Sata 2 has enough bandwidth (between 265 and 275MB/s according to my tests) for the fastest disks currently on the market. If buying a new board or controller you should buy Sata 3 for the future, but except for SSD use there's no gain in changing your Sata 2 setup to Sata 3.

    Single vs. Dual Channel RAM

    In arrays with many disks, and especially with low "horsepower" CPUs, memory bandwidth can also have a big effect on parity check speed. Obviously this will only make a difference if you're not hitting a controller bottleneck. Two examples with 24 drive arrays:

    Asus A88X-M PLUS with AMD A4-6300 dual core @ 3.7GHz
    Single Channel - 99.1MB/s
    Dual Channel - 132.9MB/s

    Supermicro X9SCL-F with Intel G1620 dual core @ 2.7GHz
    Single Channel - 131.8MB/s
    Dual Channel - 184.0MB/s

    DMI

    There is another bus that can be a bottleneck for Intel based boards, much more so than Sata 2: the DMI that connects the south bridge or PCH to the CPU. Socket 775, 1156 and 1366 use DMI 1.0; socket 1155, 1150 and 2011 use DMI 2.0; socket 1151 uses DMI 3.0.

    DMI 1.0 (1000MB/s)
    4 x 180MB/s
    5 x 140MB/s
    6 x 120MB/s
    8 x 100MB/s
    10 x 85MB/s

    DMI 2.0 (2000MB/s)
    4 x 270MB/s (Sata2 limit)
    6 x 240MB/s
    8 x 195MB/s
    9 x 170MB/s
    10 x 145MB/s
    12 x 115MB/s
    14 x 110MB/s

    DMI 3.0 (3940MB/s)
    6 x 330MB/s (Onboard SATA only*)
    10 x 297.5MB/s
    12 x 250MB/s
    16 x 185MB/s
    *Despite being DMI 3.0, Skylake, Kaby Lake and Coffee Lake chipsets have a max combined bandwidth of approximately 2GB/s for the onboard SATA ports.

    DMI 1.0 can be a bottleneck using only the onboard Sata ports. DMI 2.0 can limit users with all onboard ports used plus an additional controller onboard or on a PCIe slot that shares the DMI bus. In most home market boards only the graphics slot connects directly to the CPU; all other slots go through the DMI (more top of the line boards, usually with SLI support, have at least 2 slots). Server boards usually have 2 or 3 slots connected directly to the CPU; you should always use these slots first. You can see below the diagram for my X9SCL-F test server board; for the DMI 2.0 tests I used the 6 onboard ports plus one Adaptec 1430SA on PCIe slot 4.

    UMI (2000MB/s) - used on most AMD APUs, equivalent to Intel DMI 2.0
    6 x 203MB/s
    7 x 173MB/s
    8 x 152MB/s

    Ryzen link - PCIe 3.0 x4 (3940MB/s)
    6 x 467MB/s (Onboard SATA only)

    I think there are no big surprises and most results make sense and are in line with what I expected, except maybe for the SASLP, which should have the same bandwidth as the Adaptec 1430SA but is clearly slower, and can limit a parity check with only 4 disks. I expect some variations in the results from other users due to different hardware and/or tunable settings, but I would be surprised if there are big differences; reply here if you can get a significantly better speed with a specific controller.
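    (One way to read the tables above: multiply the per-disk speed by the disk count to get the aggregate the bus actually delivered, then compare that with the slot's theoretical maximum. A quick sanity check using the post's own numbers:)

        # Adaptec 1430SA on PCIe gen1 x4 (1000MB/s theoretical)
        echo $(( 4 * 210 ))   # 840MB/s aggregate observed, i.e. roughly 84% of theoretical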
    How to check and improve your parity check speed

    System Stats from Dynamix V6 Plugins is usually an easy way to find out if a parity check is bus limited. After the check finishes, look at the storage graph: on an unlimited system it should start at a higher speed and gradually slow down as it goes to the disks' slower inner tracks; on a limited system the graph will be flat at the beginning, or totally flat for a worst-case scenario. See screenshots below for examples (arrays with mixed disk sizes will have speed jumps at the end of each one, but the principle is the same).

    If you are not bus limited but still find your speed low, there are a couple of things worth trying:
    - Diskspeed - your parity check speed can't be faster than your slowest disk. A big advantage of Unraid is the possibility to mix different size disks, but this can lead to an assortment of disk models and sizes; use this to find your slowest disks, and when it's time to upgrade, replace these first.
    - Tunables Tester - on some systems this can increase the average speed 10 to 20MB/s or more; on others it makes little or no difference.

    That's all I can think of; all suggestions welcome.
  32. 3 points
    I'm on it guys. Looks like there has been a switch to .NET Core, which requires changes to the code, which I've now done. New image now building.
  33. 2 points
    @CHBMB If you ever put us internet nerds before your daughter's movie watching I will be forced to uninstall the nvidia builds.
  34. 2 points
    My number 1 wish is better security https://forums.unraid.net/topic/80192-better-defaults/
  35. 2 points
    @theDrell The fix is now live in the latest beta and might get its own point release as v1.50.2. If the beta fixes your problem please report back in this thread.
  36. 2 points
    Are you willing to offer paid Unraid setup through remote control? My Unraid skills are low, and there are a lot of people who have not moved to Unraid yet because of a lack of knowledge and information. Or will Unraid be a system only for geeks?
  37. 2 points
    There are a lot of us who do not really trust MS, Firefox and Chrome to be our password managers! They have already told us that they snoop into our personal lives and collect as much data about every one of us as they can accumulate, and that they plan on marketing that information. Perhaps we are paranoid, but with their history and business plan, I would rather err on the paranoid side than truly trust them with 'protecting' the passwords to my financial and personal life!
  38. 2 points
    Not quite; here is the explanation of how parity is calculated: https://wiki.unraid.net/index.php/UnRAID_Manual_6#Network_Attached_Storage

    Let's assume that you have an 8TB parity drive. Your array consists of several different size data drives (all 8TB or smaller). One of these data disks is a 500GB hard drive, and that HD has the drive motor fail. To rebuild this disk, only the first 500GB of the parity data is used (or needed). (A small bit of trivia knowledge for you: the actual calculation that is performed on the data to get the parity bit is the XOR operation. The XOR operator has been a member of the basic microprocessor instruction set since the 8008 days - 1972.)

    While you may think that building parity (by writing 'zeros' to the portion of the parity drive that is not being actively used for calculating data parity) is a waste of time and resources, it makes sense from the logical software development standpoint of what has to happen when you add a data drive that has a larger capacity than any of the currently installed data disks. When this happens, you don't have to 'adjust' parity if you write all zeros to the drive being installed. Parity will always be correct. This 'zeroing' of the new drive is the first thing that Unraid will do. Then, if it finishes without error, Unraid will add the disk to the array and format it (can't remember if it asks permission or not). As this formatting (adding the basic file system structure) occurs, parity (less than 1% of the total data disk's capacity) will be updated as this formatting is being done.
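    (To make the XOR idea concrete, here's a toy example you can run in any shell; the byte values are made up:)

        # one byte from each of three data disks
        d1=0xA5; d2=0x3C; d3=0x0F
        # parity is the XOR of all data bytes
        parity=$(( d1 ^ d2 ^ d3 ))
        # if disk 2 dies, XOR the survivors with parity to rebuild its byte
        rebuilt=$(( d1 ^ d3 ^ parity ))
        printf 'original d2=0x%02X, rebuilt=0x%02X\n' $d2 $rebuilt   # both print 3C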
  39. 2 points
    Edit: This is not a problem with an easy solution at all. I can monitor the transcode processes and make sure that everything is killed - but the only solution is to kill Plex: https://forums.plex.tv/t/stuck-in-p-state-p0-after-transcode-finished-on-nvidia/387685/24

    I can use fuser -vk /dev/nvidia* and it will immediately switch to a P8 state. The only process using the card when this is run is "Plex Media Server". It's not hard to write a script that will only do this if there are no processes using the card and the card is in a P0 state. I just don't know if there are any undesirable side-effects of doing it this way. Here is such a script:

        #!/bin/bash
        while true; do
            cur_pstate=$(nvidia-smi --query-gpu=pstate --format=csv,noheader)
            running_processes=$(ps --no-headers "$(nvidia-smi | tail -n +16 | head -n -1 | sed 's/\s\s*/ /g' | cut -d' ' -f3)" 2>/dev/null | wc -l)
            if [[ $cur_pstate = "P0" && $running_processes -eq 0 ]]; then
                # if we got here, the card is only running the Xorg process and is in the P0 state, let's fix that.
                fuser -kv /dev/nvidia*
                echo "Reset Power State"
            fi
            # sleep so we aren't blocking a thread constantly.
            sleep 1
        done

    Starting the X server on Unraid does allow one to open nvidia-settings. To do this you can use a script like this to start the X server (note that since chvt and fgconsole aren't available, you will have to switch back to VT7 by pressing Ctrl+Alt+F7):

        #!/bin/bash
        ## This will only work on single GPU systems:
        GPUID=$(nvidia-xconfig --query-gpu-info | grep BusID | sed 's/^[^:]*: //')
        # Now that we know the PCI BusID of the card we can create the X server with a fake display:
        nvidia-xconfig -s -a --allow-empty-initial-configuration --use-display-device=None --virtual=640x480 --busid "$GPUID" -o /dev/stdout | X :99 -config /dev/stdin &

    Once you have that server running, you can return to the default unraid GUI and run:

        nvidia-settings -c :99

    to open nvidia-settings on the card. You could also store an xorg configuration file and use that for the virtual X display, and to set persistent nvidia settings.

    The only way I can think of to fix this properly is to figure out why the Plex process is claiming the card and prevent that from happening. I'll look into it some more, but this needs to be fixed properly by Plex/nVidia. The linked thread at the Plex forums has more information. I may be able to detach the Plex Transcoder process with the wrapper script, making it its own entity, and then trap the SIGINT/SIGKILL in the wrapper and use it to kill the transcoder, effectively using the wrapper script to separate the Plex Media Server process from the Plex Transcoder process. It's pretty kludgy, but might work. Oh boy: we're in idle P-State while transcoding territory!
  40. 2 points
    Why it's not appearing, I don't know, but effectively there's no point in any mappings when you're running on a custom bridge, as all mappings are ignored anyways
  41. 2 points
    So I have been watching this thread for a while... as I was the guy that originally had the problem. Since I downgraded back to 6.6.7, I have had zero problems with database corruption. I have NOT changed the data to point to a single disk, although I'm planning on doing it this weekend and testing. From the answers here, though, that is not going to fix the issue. The corruption is still occurring for some people. I've read through some of the other threads that are "just plex" or some other application... and people are pointing them back to the application creators for fixes. It is NOT just happening for me with Plex, but with every application that uses the sqlite database. Like some of you, I'm questioning things in the kernel or something else that changed in 6.7. And I'm not crazy about updating again until I see an iteration of the OS that provides some fix. Just my thoughts... and yes, I am an absolute newbie to the system. Less than 1 year. Thanks, rm
  42. 2 points
    You can use Tools -> New Config to reset the array, make any drive assignments you want and then start the array. If you have a parity drive assigned then Unraid will build parity based on the current assignments. This will not actually erase any data on the drives, but it will allow you to specify the current drive set that Unraid is using.

    If you want to erase the data on a drive you need to do the following:
    - Stop the array.
    - Click on any drive you want to erase the data on and change its file system type.
    - Start the array. The drive(s) will now show as unmountable and there is a check box to allow you to format unmountable drives (and it gives you their serial numbers so you can check they are the ones expected). Click the check box and tell the system to format the drive(s). This should only take a few minutes.
    - Stop the array.
    - Click on a drive and change the file system to the one you want to end up with.
    - Start the array and repeat the format step.

    At this point your disk(s) will show that they are basically empty. There will be a small amount of space showing as used, but that is the overhead of creating the empty file system on the drive.
  43. 2 points
    Any LSI with a SAS2008/2308/3008 chipset in IT mode, e.g., 9201-8i, 9211-8i, 9207-8i, 9300-8i, etc and clones, like the Dell H200/H310 and IBM M1015, these latter ones need to be crossflashed.
  44. 2 points
    The drive has to be 32GB or less. There is no real reason to be concerned about what particular drive might be the absolute best. Any name brand drive (preferably USB2, as there are occasional compatibility problems with USB3 drives) will work fine. Don't worry about speed, as the drive is basically used only during the boot process. (This is the reason why most Unraid flash drives will last for years.) Even if the drive should fail, LimeTech makes changing to a new drive a virtually automatic process. See here: https://wiki.unraid.net/UnRAID_6/Changing_The_Flash_Device

    You should always have a backup copy of your flash drive. You can do that by Main >>> Boot Device (click on 'Flash') >>> Flash Device Settings (click on 'FLASH BACKUP'). Having a current backup can have you up and running again within a few minutes after the unlikely event of a flash drive failure.
  45. 2 points
    ^This. In fact you are warned a number of times not to write to the disk you're trying to recover. I haven't used it on XFS disks but I've successfully used it to recover photos from a corrupt SD card. You need first to choose which OS you're going to run it on. My MacBook Pro has an SD card reader so I chose the macOS version. Other versions are available to run under Windows and Linux. They can all read the same file systems. You need to choose which edition of the software you want to download. The Standard Recovery edition is likely to be the one you want. It's free to install and test, but unless you buy a licence (€49.95 for personal use) you won't be able to recover any but the smallest of files. So install it on your PC and let it scan the disk. This will take a long time, but you get an indication of how long it's going to take and a progress bar. It will show you what it finds as a reconstructed virtual file system, and then you can decide whether it's worth paying the money for the licence. I decided it was. You simply select the files you want to recover and choose where to save them.
  46. 2 points
    Hi Docker forum Just thought I'd share with you all, some material design icons that I made today for the containers I use in my system: https://imgur.com/a/ehRQ3 I couldn't stand the default smokeping icon looking so bad... So while I only wanted to change that single icon, it looked so nice that I had to rip out all of the other icons to make them look uniform Feel free to use any of these - I could probably add to this album if anyone really wants some more done in a similar style (The Plex icon reminds me a lot of the LSIO's Plex Request logo but it was the best I could do!) They're all 512x512 .png files & look wicked on the unRAID docker page
  47. 2 points
    Would be nice to see this at the bottom like it is on the desktop. As it stands now, you have to scroll back to the top, click the hamburger, and some other stuff. It's a new year and I'm trying to limit my mindless scrolling, and going back to the top is an easy way to cut down on that. How about it???
  48. 2 points
    I can confirm that Plex hardware transcoding continues to work with the above settings. Very happy with my setup now.
  49. 2 points
    Same issue, Ubuntu 18.04. By running:

        sudo dhclient enp3s0

    I was able to get a connection again.
  50. 2 points
    Why does Sonarr keep telling me that it cannot import a file downloaded by nzbGet? (AKA linking between containers)

    This problem seems to continually be brought back up, and the reasons all go back to host / container volume mapping. (Note that I'm using nzbGet / Sonarr as an example, but the concept is the same for any apps that communicate via their APIs and not by the "blackhole" method.) First and foremost, within Sonarr's settings, tell it to communicate with nzbGet via the IP address of the server, not via localhost.

    Here is how a file gets found by Sonarr, downloaded by nzbGet, post-processed by Sonarr, and moved to your array:
    1. Sonarr searches the indexers for the file, and then tells nzbGet (utilizing its API key) to download the file. Very few users have trouble with this section.
    2. nzbGet downloads the file, and then tells Sonarr the path that the file exists at. This is the section this FAQ entry is going to deal with.
    3. Sonarr performs whatever post processing you want it to do (see their appropriate project pages for help with this).
    4. Sonarr then moves the file from the downloaded location to the array. Once again, very few users have trouble with this section.

    nzbGet downloads the file, and then tells Sonarr the path that the file exists at

    Let's imagine some host / container volume mappings set up as follows (this seems to be a common set up for users having trouble):

    App Name   Container Volume   Host Volume
    nzbGet     /config            /mnt/cache/appdata/nzbget
    sonarr     /config            /mnt/cache/appdata/sonarr
               /downloads         /mnt/cache/appdata/nzbget/downloads/completed/

    Within nzbGet's settings, the downloads are set to go to /config/downloads/completed. So after the download is completed, nzbGet tells Sonarr that the file exists at /config/downloads/completed. Sonarr dutifully looks at /config/downloads/completed, sees that nothing exists there, and throws errors into its log stating that it can't import the file. The error will be something like "can't import /config/downloads/completed/filename".

    Why? Because the mappings don't match. Sonarr's /config mapping is set to /mnt/cache/appdata/sonarr, whereas nzbGet's /config mapping is set to /mnt/cache/appdata/nzbget. Ultimately, the file is stored at /mnt/cache/appdata/nzbget/downloads/completed/, and Sonarr winds up looking for it at /mnt/cache/appdata/sonarr/downloads/completed.

    Another common set up issue (which is closer to working):

    App Name   Container Volume   Host Volume
    nzbGet     /config            /mnt/cache/appdata/nzbget
               /downloads         /mnt/cache/appdata/downloads
    sonarr     /config            /mnt/cache/appdata/sonarr
               /downloads         /mnt/cache/appdata/downloads/completed

    Here's what happens with this setup: nzbGet is set up to download the files to /downloads/completed. After a successful download, the file exists (as far as nzbGet is concerned) at /downloads/completed/..., and nzbGet tells Sonarr that. Sonarr then looks for the file, is unable to find it, and throws errors into the logs. And the kicker is that the error states something along the lines of "Can't import /downloads/completed/whateverFilenameItIs". Everything kinda looks right. After all, the error message is showing the correct path... No it's not, because the mappings don't match between Sonarr and nzbGet for the downloads:
    - nzbGet puts the file at /downloads/completed/filename (host mapping of /mnt/cache/appdata/downloads/completed/filename)
    - Sonarr looks for /downloads/completed/filename through its own mapping (host mapping of /mnt/cache/appdata/downloads/completed/completed/filename)

    Huh? I don't get it. -> The container paths match between the two apps, but the host paths are different, which means that communication isn't going to work correctly. Think of the "container" path as a shortcut to the host path.

    The proper way to set up the mappings:

    App Name   Container Volume   Host Volume
    nzbGet     /config            /mnt/cache/appdata/nzbget
               /downloads         /mnt/cache/appdata/downloads
    sonarr     /config            /mnt/cache/appdata/sonarr
               /downloads         /mnt/cache/appdata/downloads

    Tell nzbGet to store the files in /downloads/completed, and Sonarr will be able to find and import the files because both the host and container volume paths match, as shown in the sketch below.

    TLDR: Trust me, the above works.
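    In docker run terms, the working setup amounts to this (a sketch; the image names and any extra flags are illustrative - the point is that both containers share the identical /downloads mapping):

        # both containers get the SAME host path mapped to the SAME container path
        docker run -d --name nzbget \
          -v /mnt/cache/appdata/nzbget:/config \
          -v /mnt/cache/appdata/downloads:/downloads \
          linuxserver/nzbget

        docker run -d --name sonarr \
          -v /mnt/cache/appdata/sonarr:/config \
          -v /mnt/cache/appdata/downloads:/downloads \
          linuxserver/sonarr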