Leaderboard

Popular Content

Showing content with the highest reputation on 03/31/20 in all areas

  1. Greetings, I'm still trying to figure out how to author my own CA apps, but for now here's an easy setup that I'm pretty sure a number of you will appreciate. Searx is a self-hostable meta search engine with a focus on privacy and complete control. Here's the description from their site: "Searx is a free internet metasearch engine which aggregates results from more than 70 search services. Users are neither tracked nor profiled. Additionally, searx can be used over Tor for online anonymity."
     Features (also pilfered from their site):
     - Self hosted
     - No user tracking
     - No user profiling
     - About 70 supported search engines
     - Easy integration with any search engine
     - Cookies are not used by default
     - Secure, encrypted connections (HTTPS/SSL)
     - Hosted by organizations, such as La Quadrature du Net, which promote digital rights
     Links:
     - Homepage: https://asciimoo.github.io/searx
     - List of publicly hosted instances: https://searx.space/
     - Wiki: https://github.com/asciimoo/searx/wiki
     - Source code: https://github.com/asciimoo/searx
     - Twitter account: https://twitter.com/Searx_engine
     OK, now down to the setup:
     - You'll need to "Enable additional search results from dockerhub" (fig.01).
     - Head over to the Community Apps tab and search for "searx".
     - Click the text "Click Here To Get More Results from DockerHub". There are a number of results; we want the one from the actual author of the build, so look for the result below:
     - In the setup, verify the network type is set to "Bridge".
     - We need a port to access it by, so click "Add another Path, Port, Variable, Label or Device" and select "Port" from the drop-down. I named it "Web UI", set the Container Port to 8080 (the port searx listens on by default) and the Host Port to 8843 (any unused port will do here, just remember it for later), then clicked "ADD".
     - Before we finish, let's make a few tweaks so the container gets a WebUI entry in the dashboard drop-down and an icon. For the Icon URL, paste in the following link: https://asciimoo.github.io/searx/_static/searx_logo_small.png  For the WebUI, paste the following: http://[IP]:[PORT:8843]/ (remember that port number I mentioned?)
     - Click "Apply". Your dashboard icon should look like this:
     You now have your own self-hosted private search engine! In the preferences you can configure which search engines you want to use by default. It even searches those "Linux ISO" sites; I'll leave the rest up to your imagination. (I'm personally a fan of the legacy theme, as shown above.) A rough docker run equivalent of this template is sketched below. Enjoy your privacy! Hope this helps everyone. ~Iron
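     For reference, a minimal docker run sketch of the same setup. The image name searx/searx is an assumption (use whichever DockerHub result you actually picked), and the host port matches the 8843 chosen above:
         # bridge networking, host port 8843 mapped to searx's default port 8080
         docker run -d --name searx --net=bridge -p 8843:8080 searx/searx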
    2 points
  2. Hello! Now that it is possible to add your own images as case icons, I thought I'd start taking requests to make icons that match the style of the other icons. To make a request, please consider the following:
     1. I do this in my own spare time, so I will probably not give an ETA.
     2. I will need the case manufacturer and model name.
     3. I will need a picture (preferably straight from the front).
     4. If you have something custom I'll give it a shot, but I can't make any promises.
     For reference, these are the icons I've currently made (some of them will appear in a later update). Update: It may be getting a bit cumbersome to find all the icons I've added, so here is an updated overview: Cheers! Mex
    1 point
  3. There are now three docker containers:
     - aria2-daemon = only the Aria2 daemon
     - aria2webui = Aria2 daemon and webui-aria2
     - aria2-with-ariang = Aria2 daemon and AriaNg
     Application Name: Aria2 Daemon Only
     Docker Container: aria2-daemon
     Application Site: https://aria2.github.io/
     Docker Hub: https://hub.docker.com/r/fanningert/aria2-daemon/
     Github: https://github.com/fanningert/aria2-daemon
     Template-Repository: https://github.com/fanningert/unraid-docker-templates
     Application Name: Aria2 + WebUI
     Docker Container: aria2webui
     Application Site: https://aria2.github.io/ + https://github.com/ziahamza/webui-aria2
     Docker Hub: https://hub.docker.com/r/fanningert/aria2-with-webui/
     Github: https://github.com/fanningert/aria2-with-webui
     Template-Repository: https://github.com/fanningert/unraid-docker-templates
     Configuration after installation: After the first start you need to enter your HOST, RPC port and SECRET code in the connection settings of the WebUI (Settings -> Connection Settings).
     Application Name: Aria2 + AriaNg
     Docker Container: aria2-with-ariang
     Application Site: https://aria2.github.io/ + https://github.com/mayswind/AriaNg
     Docker Hub: https://hub.docker.com/r/fanningert/aria2-with-ariang/
     Github: https://github.com/fanningert/aria2-with-ariang
     Template-Repository: https://github.com/fanningert/unraid-docker-templates
     Configuration after installation: After the first start you need to enter your HOST, RPC port and SECRET code in the connection settings of AriaNg (AriaNg Settings -> RPC).
     Features of all three docker containers:
     - A single file for every aria2 hook, where you can add your custom code (on-bt-download-complete.sh, on-download-complete.sh, on-download-error.sh, ...).
     - Many aria2 options are exposed on the unRAID docker settings page.
     - You can add extra aria2 options to the file aria2_ext.conf; it will be appended to aria2.conf on the next start. But check first whether the option is already accessible through the unRAID docker UI.
     - A simple script to download all torrents of an RSS feed. Simply add your RSS feed URLs (one URL per line) to the file "rss_feed.txt" (in the config directory) and execute the script /config/rss_downloader.sh in the docker container (see the sketch below). Beware: currently the script downloads every torrent in the feed and adds it to the aria2 queue; there is no check whether a torrent was already loaded.
     Hints: When you change the secret code, you also need to change the secret in the WebUI.
     TODO: Execute rss_downloader.sh on startup of the docker, plus an option to deactivate this feature.
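     A short sketch of that RSS feature in practice. The appdata path and container name are assumptions; adjust them to match your own install:
         # one RSS feed URL per line in the container's config directory
         echo "https://example.com/feed.rss" >> /mnt/user/appdata/aria2-daemon/rss_feed.txt
         # run the downloader inside the running container
         docker exec aria2-daemon /config/rss_downloader.sh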
    1 point
  4. Hello All, I'm currently running Plex as a docker and haven't been able to get WebTools to work; I would like to add the Unsupported App Store to my server. I'm referencing these two threads: WebTools and Unsupported App Store V2. I searched within the threads and was unable to find any mention of unRAID, posted on the Plex forum, and figured I would check here as well to see if anyone has experience with these. I installed the WebTools channel successfully (as in, it appears within Plex), however it does not let me access it using the URL described in the guide. Does anyone here have these set up on their docker server? If so, would you mind providing any tips? Thank you in advance!
    1 point
  5. I suspect your problem is caused by hardware stability issues (BIOS settings, overclocked RAM, etc.) in general and has nothing to do with the UnRaid Nvidia plugin/build. There are many, many unRAID users running that build with hardware similar to yours. As @johnnie.black pointed out, with all four RAM slots on the MB populated, the fastest RAM speed a 3rd Gen Ryzen can support is DDR4-2667. If you are attempting to run the RAM at its rated (overclocked) speed of DDR4-3600, that will cause crashes.
    1 point
  6. Just did and it worked perfectly, thank you!
    1 point
  7. Folding@Home seems to have a steady supply of work units, particularly for GPUs 👍 This is what the unRAID Docker image looks like under 'APPS':
    1 point
  8. This is solved. For anyone having similar issues, it seems Steam is woefully slow and isn't updating its servers properly. It must have taken 6 or 7 full reboots (and 2 full validations) for the updates to finally take effect.
    1 point
  9. According to ethtool you have no ethernet connected
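     For reference, a quick way to check this yourself (assuming the interface is eth0; substitute your NIC's name):
         ethtool eth0 | grep -i 'link detected'
         # "Link detected: no" means no cable/link; "yes" means the physical link is up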
    1 point
  10. That's not recommended. Unraid's GUI should be protected from general access; use a VPN if you need a WAN connection. The other services you expose should be evaluated on a case-by-case basis. Unraid's GUI is not yet ready to be exposed. That's the end goal, but we're not there yet.
    1 point
  11. In other words, not just the log. Did you read the whole link I gave? It tells you how to get diagnostics even when you can't reach the server over the network. I always use DHCP (the default) and then configure my router to give all my wired connections a MAC reserved IP. That way everything is managed in one place. Are you sure that static IP you have assigned will actually work with the rest of your network now? Nothing changed on your network, nothing else has already grabbed that IP? Everything on that same subnet?
    1 point
  12. Thx. It fixed itself after putting both paths to cache.
    1 point
  13. The "Need Help?" sticky pinned near the top of this same subforum: https://forums.unraid.net/topic/37579-need-help-read-me-first/ Diagnostics preferred if you can get them. What does it say the IP address is (top right of webUI)?
    1 point
  14. So last night I had a power outage. Before I went to sleep I had set my machine to move a bunch of files off the cache. I woke up to the batteries all beeping. My battery gives me about 15 minutes to "do something". The server is set to shut down with 5 minutes remaining, but it hung due to the mover. I was wondering if we could have a checkbox so that, if a shutdown is initiated, the mover finishes whatever file it's working on and then terminates, allowing the server to shut down when ordered? My main battery survived the ordeal, but my secondary for all the server peripherals like the monitor/switch and stuff didn't ;-;
    1 point
  15. Best bet would be to grab a spare HDD, or even a USB and make that the 'array' (also disable mover etc). Then put your two drives into a btrfs cache pool (raid1 or raid0), that way TRIM is still supported and everything else is 'out of the box' functionality.
    1 point
  16. You need to add another slash to both paths or it will fail, like so:
         rsync -narcv /mnt/disks/WDC_WD20EZRZ-00Z5HB0_WD-WCC4M3FD32PK/unraid/divx/Blindness/ /mnt/disk2/unraid/divx/Blindness/
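     For what it's worth, the trailing slash on the source is the one rsync really cares about: with it, rsync copies the contents of the directory; without it, it recreates the directory itself inside the destination. A quick illustration with shortened placeholder paths (-n keeps these as dry runs):
         # trailing slash on the source: the contents of Blindness/ land directly in the destination
         rsync -narcv /src/Blindness/ /dst/Blindness/
         # no trailing slash on the source: you end up with /dst/Blindness/Blindness/
         rsync -narcv /src/Blindness /dst/Blindness/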
    1 point
  17. You can go directly to the correct support thread for any of your dockers by simply clicking on its icon and selecting Support.
    1 point
  18. The Realtek should be fine. If you want max write performance, you should be looking at bigger 7200rpm drives (higher bit density means faster read/write speed). Again, avoid shingled drives for max write performance. Shingled drives are common at 8TB and larger, but there may be a few 6TB shingled drives. Many folks look for the 'sweet spot' by calculating the cost per TB.
    1 point
  19. Thanks for the plugin, it gave me a reason to jump back to Unraid.
    1 point
  20. We should just agree to disagree then. At least the OP gets to see it from both sides to make a better decision. 😉
    1 point
  21. If you use PIA and you are seeing the above in your log, then the issue is that the PIA API is down. It looks like they are having technical difficulties right now, see here: https://www.reddit.com/r/PrivateInternetAccess/comments/fs7ja0/cant_get_forwarded_port/ For now your only option is to set STRICT_PORT_FORWARD to 'no'. This will allow you to connect, but you will NOT have a working incoming port, so speeds will be slow at best. Just to be clear, there is nothing I can do about this, guys; it's a VPN provider issue.
    1 point
  22. Nevermind, it's moved. Try Settings > Management Access
    1 point
  23. Pretty much anything works. Just stick with the better-known names and recent hardware (the exception being the recommended LSI SAS/SATA cards). If you are looking at running a VM, be sure to read the VM sections in both the update guide and the manual for the current version 6. Hardware is a bit more restrictive depending on how close you want to get to the 'bare metal' experience.
    1 point
  24. You could try setting the "CALIBRE_TEMP_DIR" variable in the Docker container (Unraid -> Docker -> calibre -> Edit -> Add another Path, Port, Variable, Label or Device) to e.g. "/config/tmp". You may have to create the "tmp" folder manually first. The variable name is taken from: https://manual.calibre-ebook.com/customize.html#environment-variables Then restart your calibre container. I don't know yet if this fixes the conversion error, but it fixes the same behavior when importing a large library. Hope this helps.
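     If you'd rather do it from the command line, here is a rough sketch of the same change; the appdata path and the linuxserver/calibre image are assumptions, so match them (and any other ports/paths) to your existing container:
         # create the temp folder on the host side of the /config mapping first
         mkdir -p /mnt/user/appdata/calibre/tmp
         # recreate the container with the extra environment variable (carry over your other mappings)
         docker run -d --name calibre -e CALIBRE_TEMP_DIR=/config/tmp -v /mnt/user/appdata/calibre:/config linuxserver/calibre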
    1 point
  25. SMART isn't working because you're using a RAID controller; it *might* work if you select "HP cciss" as the SMART controller type (Settings -> Disk Settings).
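     For reference, the equivalent manual check from the console, assuming an HP cciss-type controller; the disk index and device node here are placeholders:
         # query SMART data for the first physical disk behind the cciss controller
         smartctl -a -d cciss,0 /dev/sdb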
    1 point
  26. This is why you set up WireGuard (currently included in unRAID), the OpenVPN-AS docker container or the ZeroTier docker container. I have used all three, but WireGuard is the most convenient. They all work great for remote access without the risks inherent in opening standard ports on your server directly to the Internet.
    1 point
  27. It would appear your VM was infiltrated, if it were me I'd delete that vdisk and start over. Unraid itself is a little more resilient, but since they were in your VM, who knows where else in your network they poked around. Each device on your network needs to be examined and possibly reset, including your router.
    1 point
  28. You can add more parity later when you need it. However, considering your array consists of 3-4TB drives, you should first look to replace them with higher-capacity drives (e.g. 8TB+) instead of adding more low-capacity drives plus more parity. HDDs fail in statistical patterns (after the "infant mortality" has been weeded out by stress testing each drive before adding it to the array, e.g. by running a preclear cycle *hint* *hint*), so the more drives you have, the more likely you are to have a failed drive.
     With regard to the cache pool, you can add more drives to it later as long as it is btrfs format. If you format your cache as xfs, it can only be used in a single-drive cache pool. If you increase the number of cache slots to more than one (even if the extra slot is not assigned), it will ask you to format the pool as btrfs if it isn't already.
     My personal recommendation is to only use a multi-drive cache pool if you want to run RAID-1 (i.e. mirror protection). Otherwise, you almost always have a better arrangement mounting the extra drives as unassigned devices. For instance, instead of increasing the size of the cache pool by adding more drives, you can mount the additional SSD as unassigned and separate read-heavy and write-heavy data. That will improve the lifespan of both SSDs by reducing wear leveling and write amplification.
     Tip on buying SSDs: avoid QLC and DRAM-less models. They are cheap but in some cases can be even worse than an HDD.
    1 point
  29. Hi, Has anyone succeeded in passing a 5700 XT through to the MacInABox VM? It works great when I run it with VNC, but when I pass through my graphics card I get the Clover screen, then it glitches out when it shows the Apple boot screen and I never get a desktop. I've tried tweaking some options in Clover, but so far no luck. Any assistance would be appreciated if anyone has gotten this working. Thank you
    1 point
  30. Yes, 2 things to note: Dual parity would be overkill for a 6-drive array (assuming you stress test each drive, e.g. by running a preclear cycle, before adding it to the array). If you are really risk-averse then fair enough, but you would have to be super unlucky not to be fine with single parity in a 6-drive array. In case there's any misunderstanding, you can't migrate the 3x 3TB FreeNAS RAID-0 from your current rig to Unraid directly. You have to get the data off it first, e.g. onto some other drives. Unraid will have to format the 3x 3TB drives before they can be used in the array.
    1 point
  31. Bring up the edit page for the container and change the repository from "binhex/arch-qbittorrentvpn" to "binhex/arch-qbittorrentvpn:4.2.1-1-05". By default the "latest" tag is pulled unless you specify a tag in the repository field.
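     For reference, the same pin from the console if you ever want to pre-pull that tag manually (the template edit above is all that's actually needed):
         docker pull binhex/arch-qbittorrentvpn:4.2.1-1-05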
    1 point
  32. https://www.phoronix.com/scan.php?page=news_item&px=Linux-5.6-Released
    1 point
  33. If you set it to only one gpu's id, f@h will still see both gpus, but it won't be able to start the job on one of them. You'll see an error in the log, something like " no compute devices matched gpu #0 blah blah you may need to update your graphics drivers". I paused that gpu so it no longer receives jobs it won't be able to complete.
    1 point
  34. This can be caused by the RFS file system used with 4.7 if the drives are nearly full. Sorry, but there is no fix for this problem except to convert to another file system. (Basically, the fix is to copy the data from one of the RFS drives to a new drive with one of the new formats, then format that old RFS drive to a new format and repeat with the next RFS drive.)
    1 point
  35. Here are the hardware requirements: I would recommend at least 4GB of RAM. Some folks have had problems updating from one version to the next with only 2GB of RAM. (Updates now happen much quicker because of the security patches required for those using VMs and Dockers.)
    1 point
  36. Yes, you will need to setup a SSL certificate and switch to HTTPS mode. This video covers a few topics, but from the 16:45 to 18:50, it shows how to set this up.
    1 point
  37. Kinda interesting: since this morning my large server with dual 2GHz processors has been sitting idle while connected to the Rosetta@home client. I restarted the docker to make sure nothing broke, and after it connected again it is still not being utilized. My faster machines are still hard at work, leading me to believe that they now have such a surplus of available machines that they may be utilizing only the faster equipment in the pool. This is a good thing, and more than enough in this effort is a blessing. If I don't see activity by tomorrow I may assign the slower server to another service so it too will be used in a productive way. Either way, I'm tickled that our group has shown so much compassion and human spirit in this crisis.
    1 point
  38. I don't think I've seen it mentioned anywhere, but I am updating this way. Go into the docker's console through the webgui, or via ssh:
         docker exec -it nextcloud bash
         sudo -u abc php7 /config/www/nextcloud/updater/updater.phar
     Common occ commands:
         sudo -u abc php7 /config/www/nextcloud/occ db:add-missing-indices
         sudo -u abc php7 /config/www/nextcloud/occ db:convert-filecache-bigint
     Other than that it should be straightforward how to proceed. It's mostly a note for myself, but it's really easy like this. Note that Nextcloud doesn't jump from 15 to 18, for example, but goes through every major version upgrade before you get there in the end. I recommend looking at the overview page in the Nextcloud webgui and fixing every single warning/issue before proceeding to the next version, and rebooting the docker in between.
    1 point
  39. What about mapping the different watch folders to the same folder on unRAID? Make sure that you configure the automatic video converter not to touch the source files. You can then manually clean up the source files once all conversions are done.
    1 point
  40. Look at the container's log: you should have a message telling the pre-conversion hook being invoked. Also, make sure you have the latest version of the container.
    1 point
  41. Are you running the latest version of the container? Because the OUTPUT_DIR should be set: https://github.com/jlesage/docker-handbrake/blob/master/rootfs/etc/services.d/autovideoconverter/run#L371
    1 point
  42. My two most requested features are multiple arrays and snapshots.
    1 point
  43. I've been doing this for a long time now via the command line with my important VMs. First, my VM vdisks are in the domains share, where I have created the individual VM directory as a btrfs subvolume instead of a normal directory, i.e.:
         btrfs subv create /mnt/cache/domains/my-vm
     results in:
         /mnt/cache/domains/my-vm   <--- a btrfs subvolume
     Then let vm-manager create vdisks in here normally and create your VM. Next, when I want to take a snapshot, I hibernate the VM (win10) or shut it down. Then from the host:
         btrfs subv snapshot -r /mnt/cache/domains/my-vm /mnt/cache/domains/my-vm/backup
     Of course you can name the snapshot anything, perhaps including a timestamp. In my case, after taking this initial backup snapshot, a subsequent backup will do something like this:
         btrfs subv snapshot -r /mnt/cache/domains/my-vm /mnt/cache/domains/my-vm/backup-new
     Then I send the block differences to a backup directory on /mnt/disk1:
         btrfs send -p /mnt/cache/domains/my-vm/backup /mnt/cache/domains/my-vm/backup-new | pv | btrfs receive /mnt/disk1/Backup/domains/my-vm
     and then delete backup and rename backup-new to backup. What we want to do is add an option in VM manager that says "Create snapshot upon shut-down or hibernation" and then add a nice GUI to handle snapshots and backups. I have found btrfs send/recv somewhat fragile, which is one reason we haven't tackled this yet. Maybe there's some interest in a blog post describing the process along with the script I use?
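     In case it's useful in the meantime, here is a minimal sketch of how those steps could be wrapped into a script. This is not the script referenced above; the layout (VM subvolume under /mnt/cache/domains, backups under /mnt/disk1/Backup/domains) follows the post, everything else is assumption. Shut down or hibernate the VM before running it:
         #!/bin/bash
         # usage: vm-backup.sh <vm-name>   (hypothetical helper, not the official script)
         VM="$1"
         SRC="/mnt/cache/domains/$VM"
         DST="/mnt/disk1/Backup/domains/$VM"
         NOW="snap-$(date +%Y%m%d-%H%M%S)"

         mkdir -p "$DST"

         # read-only snapshot of the VM's subvolume
         btrfs subvolume snapshot -r "$SRC" "$SRC/$NOW"

         LAST="$(cat "$SRC/.last-snap" 2>/dev/null)"
         if [ -n "$LAST" ] && [ -d "$SRC/$LAST" ]; then
             # incremental: send only the block differences against the previous snapshot
             btrfs send -p "$SRC/$LAST" "$SRC/$NOW" | btrfs receive "$DST"
             btrfs subvolume delete "$SRC/$LAST"
         else
             # first run: full send
             btrfs send "$SRC/$NOW" | btrfs receive "$DST"
         fi

         echo "$NOW" > "$SRC/.last-snap"
     One caveat the post already notes: btrfs send/receive can be fragile, so test the restore path before trusting this with anything important.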
    1 point
  44. Snapshots (For VMs and shares) would be huge. Right now it absolutely stops me from being able to recommend this to even a small business. iSCSI would be huge for me in a home scenario. Right now I run a VM on top of unRAID just to do iSCSI, which is an unnecessary pain. A secondary array that uses ZFS is again something I have to use a VM for at the moment, which I would love to see baked into unRAID natively.
    1 point
  45. So what did you say to Apple? I have Mojave installed, with iCloud working, but iMessage and FaceTime are no joy. Both show "an error occurred during activation", like above. Certainly you can't say, "I'm trying to boot this in an image, I have built a hackintosh, but this thing won't authenticate with the iMessage server." Many thanks for your post.
    1 point
  46. Ah whoops, that must have gotten overwritten somehow when I was tweaking stuff via the GUI, good catch! Updated and it seems to be displaying better but still getting stuck here:
    1 point
  47. @SpaceInvaderOne, this thread really needs to be policed better. If we don't watch it, it's going to start showing up in web searches for the magic phrase, and that would be a bad thing to bring attention to. I'm sure @trurl is getting rather tired of having to mod-edit posts on a daily basis.
    1 point
  48. This is a nice tool! Once it is installed it handles the installation of other Plex channels for you. I pretty much followed the instructions from here: https://github.com/dagalufh/WebTools.bundle/wiki/Installation
     Updated 9/14/2017 - Guide to installing the WebTools 2.4.1 channel in Plex
     High level overview
     These instructions appear quite long, so here is a high level overview to show that it isn't really that complicated: Download the WebTools zip file, unzip it, and place it in the Plex appdata plugins directory. If the copy fails, fix the permissions and try again. After a delay the channel will automatically be installed and you can access WebTools here: http://<unraid IP address here>:33400/
     Short instructions
     1. Download WebTools.bundle.zip from https://github.com/dagalufh/WebTools.bundle/releases/latest and extract it to your desktop. Locate the "WebTools.bundle" folder.
     2. Copy the WebTools.bundle folder to your Plex Plug-ins directory here: \\<tower>\appdata\<Plex appdata>\Library\Application Support\Plex Media Server\Plug-ins
     3. If you get a permission denied error, you'll need to SSH to the server and run:
            cd "/mnt/user/appdata/<Plex appdata>/Library/Application Support/Plex Media Server/"
            chmod a+w Plug-ins
        then try copying the directory again.
     4. When it is done you should be able to navigate to: \\<tower>\appdata\<Plex appdata>\Library\Application Support\Plex Media Server\Plug-ins\WebTools.bundle\Contents
        If you are able to pull up the Contents directory directly under Plug-ins\WebTools.bundle, then everything is in the right place.
     5. Wait a few minutes, then log in to Plex. Go to the Channels area and you should see WebTools (although if you click the image the wrong URL will be displayed). If it isn't listed, wait longer and try again. If you want to kick-start it, restart the Plex docker, wait some more, and check again.
     6. Once you see the channel listed, you can access WebTools at this URL: http://<unraid IP address here>:33400/
     Long instructions
     "tower" is the default name of an unRAID server, but you may have renamed yours. Wherever you see <tower> written here, substitute the name of your server, without the <> characters.
     1. Open your unRAID webgui and go to the Dockers tab. Click on the Plex icon and choose Edit.
     2. Find the "host path" that corresponds to the container path "/config". It will look like this: /mnt/user/appdata/<Plex appdata>/ Make note of the folder name that appears after "appdata" and use that wherever you see <Plex appdata> in this guide. In other words, if your host path looks like this: /mnt/cache/appdata/Plex Media Server/ then your <Plex appdata> is "Plex Media Server".
     3. From your desktop computer, navigate to the Plug-ins directory here: \\<tower>\appdata\<Plex appdata>\Library\Application Support\Plex Media Server\Plug-ins
     4. Download WebTools.bundle.zip from https://github.com/dagalufh/WebTools.bundle/releases/latest and extract it to your desktop. Locate the "WebTools.bundle" folder.
     5. Copy the WebTools.bundle directory from step 4 to the Plug-ins directory you located in step 3. If that copy works, great! If you get a permissions error you will need to SSH in to the server to fix it.
     6. Once you have SSH'd in, you need to cd to the Plug-ins directory. We'll do this in steps. I recommend copy/pasting what you type and the results into a text file so you can ask for help if there are problems. If you get an error message from mistyping something, it should be fine to simply retry that step. If you get completely lost, type "exit" and start over. You should now be able to copy the WebTools.bundle directory into the Plug-ins directory.
     7. To make sure you put the directory in the right place, confirm you can navigate here from your desktop computer: \\<tower>\appdata\<Plex appdata>\Library\Application Support\Plex Media Server\Plug-ins\WebTools.bundle\Contents
        If you are able to pull up the Contents directory directly under Plug-ins\WebTools.bundle, then everything is in the right place.
     8. Wait a few minutes, then wait some more, then log in to Plex. Go to the Channels area and you should see WebTools (although if you click the image the wrong URL will be displayed). If it isn't listed, wait longer and try again. If you want to kick-start it, restart the Plex docker, wait some more, and check again. Be patient, wait some more.
     9. Once you see the channel listed, you can access WebTools at this URL: http://<unraid IP address here>:33400/
     Need Help?
     If you are still unable to get this working, please provide the following information:
     A. From step 1 - What is the name of the server you are using in place of <tower>? Also, is your system working fine other than this? If you have underlying issues with Plex or your server, you should resolve those before installing a new channel.
     B. From step 2 - What is the full host path that corresponds to the container path "/config"? And what are you using for the <Plex appdata> value?
     C. From step 3 - Are you able to access \\<tower>\appdata\<Plex appdata>\Library\Application Support\Plex Media Server\Plug-ins ? If not, where does it break? i.e. which of the following does not work:
        \\<tower>\
        \\<tower>\appdata\
        \\<tower>\appdata\<Plex appdata>
        \\<tower>\appdata\<Plex appdata>\Library
        \\<tower>\appdata\<Plex appdata>\Library\Application Support
        \\<tower>\appdata\<Plex appdata>\Library\Application Support\Plex Media Server
        \\<tower>\appdata\<Plex appdata>\Library\Application Support\Plex Media Server\Plug-ins
     D. From step 4 - Look in your "WebTools.bundle" folder. It should contain subfolders for "http" and "Contents", plus a few other files. If it doesn't, you should re-download / re-extract and try again.
     E. From step 5 - Do you get any errors when copying WebTools.bundle to the Plug-ins directory?
     F. From step 6 - If you had to do step 6, attach the text file where you were saving the results of all the commands you typed.
     G. From step 7 - Are you able to access these directories?
        \\<tower>\appdata\<Plex appdata>\Library\Application Support\Plex Media Server\Plug-ins\WebTools.bundle
        \\<tower>\appdata\<Plex appdata>\Library\Application Support\Plex Media Server\Plug-ins\WebTools.bundle\Contents
     H. Attach "Plex Media Server.log" from the logs directory: \\<tower>\appdata\<Plex appdata>\Library\Application Support\Plex Media Server\Logs
        If "com.plexapp.plugins.WebTools.log" exists in the "PMS Plugin logs" sub-directory, attach that as well.
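     Footnote: if you'd rather do the download and extraction on the server itself over SSH instead of copying from your desktop, something along these lines should work. The asset name comes from the guide above, but the direct-download URL pattern is an assumption, and unzip must be available on your server:
         cd "/mnt/user/appdata/<Plex appdata>/Library/Application Support/Plex Media Server/Plug-ins"
         wget https://github.com/dagalufh/WebTools.bundle/releases/latest/download/WebTools.bundle.zip
         unzip WebTools.bundle.zip && rm WebTools.bundle.zip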
    1 point