Leaderboard

Popular Content

Showing content with the highest reputation on 10/05/20 in all areas

  1. I would suggest removing the plugin, then moving everything. I'm working on getting a dev box up and running so I can continue working on this for the new beta.
    2 points
  2. THIS PROJECT IS DEPRECATED Due to a loss of interest in maintaining the Docker container, this project is no longer being worked on and no updates can be expected. ------------------------------------- Welcome to the fourth Docker container I've ever created: qbittorrentvpn, a fork of MarkusMcNugen's qBittorrentvpn, but with WireGuard support! Overview: a Docker container which runs the latest qBittorrent-nox client while connecting to WireGuard or OpenVPN, with an iptables killswitch to prevent IP leakage when the tunnel goes down. This project is based on my DyonR/jackettvpn, which is in turn based on MarkusMcNugen's qBittorrentvpn. Base: Debian 10.5-slim Automated Build: Not yet Application: https://github.com/qBittorrent/qBittorrent Docker Hub: https://hub.docker.com/r/dyonr/qbittorrentvpn/ GitHub: https://github.com/DyonR/docker-qbittorrentvpn Because this project is quite similar to MarkusMcNugen's, I asked his permission beforehand.
    1 point
  3. Hey 👋 This started off as a bit of a hobby project that slowly became something I thought could be useful to others. It was an exercise for me in writing a new app from scratch and the different choices I would make, compared to having to constantly iterate on an existing (large) code base. After sharing this with some of the community in the unofficial Discord channel, I was encouraged to get it into a state where it makes sense for others to use. https://play.google.com/store/apps/details?id=uk.liquidsoftware.companion I've already received some great feedback, as well as a number of issues and requests for new features, which I hope to address soon. I hope others will find this as useful as I do in managing their UNRAID servers. Enjoy
    1 point
  4. Hi, what do you think would be a good translation of Unraid's "Unleash Your Hardware"? "Libera tu hardware"? It's for marketing purposes. Thanks!
    1 point
  5. I think it works fine. It does sound weird, but it works, unless you want to replace "hardware" with a different word altogether. "Libera tu máquina" = free your machine/unleash your machine. "Libera tu sistema" = free your system/unleash your system (honestly, it sounds just as rough as "libera tu hardware").
    1 point
  6. My versions are different because I take another approach to updating the containers: they update on every start/restart and download the binaries directly from the official repos/GitHub itself. "Optimized for Unraid" on my GitHub and on Docker Hub means that the variables are tweaked to fit Unraid nicely. Also, some people had problems with moved downloads, or when the containers create a folder for a TV show or movie you sometimes can't delete or edit a file; this problem should now be resolved, hopefully. Thank you, I appreciate that! I do my best to answer questions and resolve problems as quickly as possible. If you have any further questions, feel free to contact me again.
    1 point
  7. Actually it's not: 3.6M + 930G does not equal 1.4T. Up to and including beta29, the code uses 'stat -f', i.e. statvfs(), to fetch Total, Free, and Available. In all file systems other than btrfs, Free == Available. The webGUI only uses Total and Free, and computes Used = Total - Free (like any reasonable person would think it should work). Starting with the next release, emhttpd exports: fsSize - same as 'size' reported by 'df' and 'Total' reported by 'stat -f' (f_blocks); fsFree - same as 'avail' reported by 'df' and 'Available' reported by 'stat -f' (f_bavail); fsUsed - same as 'used' reported by 'df' and 'Total' - 'Free' reported by 'stat -f' (f_blocks - f_bfree); note this is how 'df' calculates 'used'. Seems to work, except there are cases where "Used" + "Free" displayed on Main do not equal "Size".
    1 point
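To see the distinction described in the post above on any Linux box, here is a quick sketch using GNU coreutils' stat (the path '/' is just an example mount point):

```shell
# %b = f_blocks (Total), %f = f_bfree (Free), %a = f_bavail (Available)
set -- $(stat -f -c '%b %f %a' /)
total=$1; free=$2; avail=$3
used=$((total - free))   # this is how 'df' computes Used
echo "Total=$total Free=$free Avail=$avail Used=$used"
# On non-btrfs filesystems Free == Available (minus any root reserve),
# so Used + Free == Total; on btrfs, Available can be much smaller than
# Free, and the sums visible in the GUI stop matching.
```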
  8. Dear Mr. ich777, today I noticed that your versions of Sonarr/SABnzbd/Lidarr and Radarr are now at the top of New Apps in the 'appstore'. I see in the description that they are optimised for Unraid, but can you explain how these versions differ from the Linuxserver or Binhex versions? In any case, thanks for your efforts and for all the templates that we can use!
    1 point
  9. I would recommend checking out: https://forums.unraid.net/topic/94549-sanoidsyncoid-zfs-snapshots-and-replication/ or https://forums.unraid.net/topic/84442-znapzend-plugin-for-unraid/
    1 point
  10. No. BTRFS RAID levels are not quite like traditional ones, and with mismatched disk sizes this can affect the available space. You can use this site to see the available space for different combinations of disk sizes and RAID profiles.
    1 point
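The rule of thumb behind those space calculators can be sketched in a few lines of shell. For btrfs RAID1 (every chunk stored on two different devices), usable space is approximately the smaller of half the total capacity and the total minus the largest disk; the disk sizes below are just example values:

```shell
# Example pool: two 1000 GB disks and one 500 GB disk, btrfs RAID1 profile
disks="1000 1000 500"
total=0; largest=0
for d in $disks; do
  total=$((total + d))
  [ "$d" -gt "$largest" ] && largest=$d
done
half=$((total / 2))         # every chunk is mirrored once
rest=$((total - largest))   # the largest disk needs partners for its copies
if [ "$half" -lt "$rest" ]; then usable=$half; else usable=$rest; fi
echo "approx. usable: ${usable} GB"   # prints: approx. usable: 1250 GB
```

This is only an approximation (metadata chunks and allocation overhead shave a little more off), but it matches what the online calculators report for mismatched disks.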
  11. If you keep getting the same sectors reported in the syslog as having errors then this is a sign that the drives are reading OK.
    1 point
  12. Thanks - that worked! So I can test while I'm waiting for German language support and OCR.
    1 point
  13. I use both Kodi (main home theater screen) and Plex (in-home/remote streaming). I do all scraping with Kodi via .NFO files, and then I use the Plex WebTools plugin with the XBMCNFOMovie/TV agents that allow you to ingest Kodi NFO files into Plex (Plex still refuses to support native .NFO files for ridiculous reasons...). Now I just point Plex at my media folders and everything JUST WORKS using the custom XBMC NFO scrapers. I couldn't care less if the Plex database gets corrupted or I need to reinstall the whole thing (I don't even back it up). Since the metadata is stored locally with the media itself, it is the end-all be-all in my opinion: no more needing to keep two sources of metadata in sync. Obviously the workflow here has a specific order (don't update Plex until your Kodi NFO is out there), but once you learn it, you can't beat it. I also LOVE Kodi's custom skins/UI, so I use that with a custom player (MPC-BE with madVR) and it is GLORIOUS in 4K HDR on my OLED. They both serve their purposes, but for 4K content Plex is just not there yet with HDR > SDR tone mapping. And even if it was, I'd still prefer my beautiful and fully custom Kodi UI.
    1 point
  14. Added these two lines to this file in appdata.
    1 point
  15. Yeah, you can go ahead and get everything (InfluxDB, Telegraf, Grafana) installed and configured that is listed in the Dependencies section on Page/Post #1.
    1 point
  16. That looks like it did the trick for me. Thank you for looking into it.
    1 point
  17. Restart your container and watch the logs
    1 point
  18. Hmm that's really strange. But glad to hear that it works again...
    1 point
  19. I also tested it and no problem here... Could it be something with your network configuration or your permissions? Have you changed anything lately? What Unraid version are you on?
    1 point
  20. Hi Romain, it's a pleasure to read you on the forum; I hope Unraid will live up to your expectations.
    1 point
  21. Found the problem: NAT reflection, for anyone else not able to view their dedicated servers from the same network. In pfSense, I set the port forward rules' NAT reflection mode to 'Pure NAT'. https://paste.hardnet.nz/?1cbf9acb427e7875#2RAEoCYZMZz8kzuv9Qg286gQSQKNEmKZgAHq3ScBRXQx
    1 point
  22. I think I finally got it working. I did try the newest Windows Insider preview build, but that didn't help my specific issue. Today I updated from 6.9 beta 25 to beta 29, and while that created a few new issues I'll have to deal with (VNC broke for all VMs, Plex transcoding), it seems to have fixed nesting on Ryzen, or at least on Zen 2. I created a new VM using an existing Windows 10 image that was a fresh install. This instance does not have the Insider preview build with the new Hyper-V nesting support. I was able to use host-passthrough for the CPU, OVMF, and i440fx-5.1 with no manual modifications to the XML, and it's NOW WORKING! I have VMware Workstation Pro 15.5 installed in the Windows VM, running an instance of ESXi 6.5 inside that. Going to install a few guest VMs inside ESXi now and see if it'll continue to nest without any issues. This is a GAME CHANGER for doing ESXi labs under Unraid. If this all works out in the Windows VM, I'll go back to trying a straight ESXi 6.5/7 guest directly on Unraid, which will end up being my real lab environment.
    1 point
  23. Thanks. Glad you’re liking it so much! We’ll be here when you take the plunge. I highly recommend waiting until next weekend when 1.4 drops. Well worth the short wait!
    1 point
  24. This looks so good. The 1.4 update looks great. Also can't wait for 1.5 from the sounds of the features in it. Will give it a go to install some time over the next few days. Will probably come back here for some help with it all as I haven't done this type of stuff before.
    1 point
  25. Sure is. When Big Sur is released officially, a new Macinabox will be out, with a bunch of new features and a choice between OpenCore and Clover bootloaders.
    1 point
  26. Sorry for the delay. I simply lowered the level of logs and only important ones are now logged. I wasn't aware of this command, that's neat - Thanks!
    1 point
  27. The issue is definitely related to the SSD and controller interaction: 72 hours running without the SSD and no ATA errors, with all Dockers running as before. Will update when my new SSD arrives in a couple of days.
    1 point
  28. Look at the marked items below: for a drive with only 549 hours on it, this is an unacceptable number of problems/failures. I would recommend replacement! Most drives spend the majority of their lives with the counts for all three of these items at ZERO!!! A few 5_Reallocated_Sector_Ct (say, < 20 or so) can be acceptable as the drive gets older, but a rash of them is an indication that the drive is starting to fail. See here for more on SMART attributes: https://en.wikipedia.org/wiki/S.M.A.R.T.
    1 point
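A quick way to pull the failure-indicating counters out of a 'smartctl -A' report. The attribute names assume the usual trio flagged in posts like the one above (Reallocated_Sector_Ct, Current_Pending_Sector, Offline_Uncorrectable), and the sample lines below are fabricated values for illustration only:

```shell
# Sample 'smartctl -A /dev/sdX' lines (made-up values); in practice,
# pipe real smartctl output through the same awk filter.
smart_output='  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       24
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       8
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0'
echo "$smart_output" | awk '/Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable/ {print $2 " raw=" $NF}'
# A healthy drive reports raw=0 for all three attributes.
```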
  29. Hi, I have had this problem as well; it started here, and I seem to have managed to fix it temporarily. Four days ago I woke up to my server switching on and off trying to get a parity check going. Up until now I am not sure what happened during the parity check with the Kingston 1TB NVMe, but it was so severe that my system would switch off and back on even with a rescue disk. I was convinced that the power supply was broken, even though I checked it with a PSU meter. To be safe I ordered a new one; I am hoping I can still cancel it. After a memtest and an HDD test, I noticed the reboot problems would start when the NVMe was inserted in the system. Somehow I managed to clone the contents of the Kingston to a different M.2 I had lying around. I formatted the Kingston and cloned the information back onto it. After putting it back into the server I could mount it with Unassigned Devices, but Unraid would not accept it: 'unmountable - Unsupported partition layout'. After searching the forum I stumbled onto this great post and found the format button all the way down on the MAIN page of the Unraid GUI. The correct way to upgrade a cache drive is to format it with the format button on the MAIN page and rsync the files back from a backup; cloning will just not work. Propositions to fix some problematic things: I would urgently ask for a feature whereby, if the system has detected 3 consecutive attempts at a parity check that could not be completed, Unraid starts with the array switched off and an error message of some sort. Also, a more user-friendly way to let people know how to upgrade a cache drive? I know and understand why the format button was placed there, because it can format anything with an unmountable/unsupported partition layout, but for the sake of someone's sanity, is it possible to have the format button inside the same row for all applicable cases, just like Unassigned Devices has it? Best, Noobspy
    1 point
  30. Sorry, it's not available yet. You can see here when it will be available. The readme is on both GitHub and Docker Hub, which you already found.
    1 point
  31. It's not the SOCKS credentials you need; it's your main username (pXXXXXXX) & password for PIA.
    1 point
  32. You can only have one main array, which uses Unraid's traditional separate parity with individually formatted data drives. The pools can be either single-device XFS, or multi-device BTRFS RAID volumes using any RAID level you feel comfortable with. Typically the main array would be your bulk slow storage, all spinning rust of various sizes. SSDs would be arranged in different pools, with different RAID levels to suit each pool's specific purpose. By default all newly defined BTRFS pools are initialized as RAID1, but you can change that from the command line. Hopefully sometime in the next year you will be able to change RAID levels with a drop-down selection.
    1 point
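For reference, changing an existing pool's profile from the command line is done with a btrfs balance; a hedged sketch (the mount point /mnt/cache and the RAID1 target are example choices, not a recommendation for your pool):

```shell
# Convert both data (-dconvert) and metadata (-mconvert) chunks of the pool
# mounted at /mnt/cache to the RAID1 profile. Run from the Unraid console;
# this rewrites every chunk and can take a long time on a full pool.
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache
```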
  33. I have actually gone ahead and added this as an extra option in the Parity Check Tuning plugin as this ends up both being a logical place to put this option and also the least work for me. A parity check or other long running array operation is also probably the most likely thing to trigger such a level of overheating. Despite the name of the plugin if you activate this option it will apply regardless of whether any long running array operation is currently in progress.
    1 point
  34. I updated to 2.0.4dev38 and it wasn't working, but then BAM!!! Now it works, so I'm not sure why it was being so stubborn. I would like to say thanks to @Binhex. I appreciate your hard work and endless hours making sure idiots like me can use the Dockers without issue.
    1 point
  35. Just popping in to salute you @binhex - great work implementing fallback servers and next-gen PIA! Keep up the great work; it is very much appreciated.
    1 point
  36. Check out this guide on smb: https://forum.level1techs.com/t/zfs-on-unraid-lets-do-it-bonus-shadowcopy-setup-guide-project/148764 (this part-> ZFS Snapshots – Part 2, The Samba bits)
    1 point
  37. Had the same issue; you need to edit that line and replace "-f16" with "-f14". The install works great after this edit. GitHub probably modified its HTML.
    1 point
  38. Hi everyone, thank you for the great information in this thread. I am adding a few more tweaks and notes below in order to run the native Unraid Dynamix WireGuard simultaneously with the Linuxserver WireGuard Docker. I now have two working instances of WireGuard running on my machine, with one specifically for use with whatever Dockers I decide to add to the new WireGuard VPN. When I initially created it, I named my new Docker "wireguard4dockers" as shown below. When downloaded, you have to add a lot of the variables into the template, so this takes time; and if you make an error (like I did the first time), you might think you have lost the data after you click the "apply" button and the template disappears, but if you go to the CA "APPS" tab, you can reinstall the template and pick right back up where you left off. First off, since you are adding this as a new Docker and probably already have WireGuard set up on Unraid, when you begin to enter your specific information into the template, change the ListenPort so you don't have a conflicting port between this WireGuard Docker and the built-in WireGuard in Unraid; by default the Unraid WireGuard ListenPort is 51820, which is also the standard ListenPort of the Linuxserver Docker. Secondly, make sure that you set your config properly so the Docker saves into your "appdata" folder, using Container Path: /config and Host Path set to your specific location. I initially did not set it up properly and couldn't figure out why my folder was blank, until I realized that I had not put the slash in front of "config". Also, don't forget to add the "config" folder inside your own "wireguard4docker" folder. I also changed my internal SUBNET to something completely different from the built-in WireGuard's to avoid any conflicts. Not sure if this was necessary, but I thought it couldn't hurt.
Also, take note that once you get the template created and it has saved as an operational Docker, if you import a pre-made config file into your config folder for this Docker, you need to rename the file to "wg0" (that's a lowercase w, g, and a zero), or create your new config named "wg0". This was noted on one of the many pages of posts in the links that danofun included above. Lastly, I had to include my specific local LAN IP address in the config file in the "PostUp" and "PostDown" lines, part of another tip mentioned in previous posts; in my file these two lines are: PostUp=ip route add 195.168.4.0/24 via $(ip route |awk '/default/ {print $3}') dev eth0 and PostDown=ip route del 195.168.4.0/24 via $(ip route |awk '/default/ {print $3}') dev eth0. Going this route, I did NOT need to add the environmental variable "LAN_NETWORK ... populated with your LAN (i.e. 192.168.1.0/24)" into the Docker template, as noted in the above posts. Here are a few snippets of my "wireguard4docker" template. Please note, I also downloaded the Firefox Docker to check connectivity, following other posts on how to link other Dockers to your VPN Docker. Firefox ports to add while you are setting up the "wireguard4dockers" VPN, created under the "advanced view": port 7814 will be your port for the Firefox webGUI. Using the posts in this thread, as well as the links provided by everyone, I was able to create a fully functioning secondary WireGuard Docker VPN in less than an hour, proving that we can in fact use an off-the-shelf WireGuard Docker template as a VPN for specific Docker containers, while at the same time using the built-in WireGuard controls for your other VPN needs, thus overcoming the limitation of not being able to run a "VPN tunneled access" tunnel alongside another tunnel instance within the built-in Unraid Dynamix WireGuard program.
I hope this helps others. I really haven't done anything different other than compile a few critical pieces of information in the same thread. I spend a lot of time browsing the forum for information and am always amazed at what can be found here, but having run through this process this evening, I thought this additional data might be helpful for others to have. Kudos and a big thank you to everyone prior who paved the way for amateurs like myself, who are able to stumble through to make something work and confirm that what others have accomplished does in fact work. Thanks,
    1 point
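To make the PostUp/PostDown tip in the post above concrete, here is a minimal wg0.conf sketch. Everything in it is illustrative: the keys and addresses are placeholders, the ListenPort/subnet choices just follow the post's advice to avoid the built-in 51820, and the 195.168.4.0/24 LAN comes from the post itself, not from a known-good config — substitute your own values throughout:

```ini
[Interface]
# Non-default ListenPort so it doesn't clash with Unraid's built-in 51820
ListenPort = 51821
PrivateKey = <your-private-key>
Address = 10.13.13.1/24   # illustrative internal SUBNET, distinct from the built-in one
# Route the local LAN around the tunnel (LAN subnet as given in the post)
PostUp = ip route add 195.168.4.0/24 via $(ip route | awk '/default/ {print $3}') dev eth0
PostDown = ip route del 195.168.4.0/24 via $(ip route | awk '/default/ {print $3}') dev eth0

[Peer]
PublicKey = <peer-public-key>
Endpoint = <vpn-server>:51820
AllowedIPs = 0.0.0.0/0
```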
  39. You can try this https://forums.unraid.net/topic/57181-docker-faq/#comment-564326
    1 point
  40. Delete the file config/vfio-pci.cfg from the flashdrive and reboot.
    1 point
  41. Been researching for a couple of weeks now trying to figure this all out. Looking to replace/consolidate the following aging systems: Home PC (4770K): Blue Iris, Dockers, VMs, Plex. Synology DS1812: 27 TB storage (family photos, etc.), additional Dockers, web server, etc. Here's what I've come up with; additional comments below.
PCPartPicker Part List
CPU: Intel Core i9-10900K 3.7 GHz 10-Core Processor ($529.99 @ Best Buy)
CPU Cooler: Noctua NH-D15 82.5 CFM CPU Cooler ($109.99 @ B&H)
Motherboard: Asus ROG STRIX Z490-E GAMING ATX LGA1200 Motherboard ($283.99 @ Best Buy)
Memory: G.Skill Ripjaws V 64 GB (2 x 32 GB) DDR4-3200 CL16 Memory ($219.99 @ Newegg)
Storage: 2x Crucial MX500 1 TB 2.5" Solid State Drive ($112.00 @ Amazon)
Storage: 4x Western Digital Red 10 TB 3.5" 5400 RPM Internal Hard Drive ($269.98 @ Amazon)
Video Card: Asus GeForce GTX 1650 SUPER 4 GB TUF Gaming OC Video Card ($169.99 @ B&H)
Case Fan: 3x Noctua NF-A8 PWM 32.66 CFM 80 mm Fan ($15.95 @ Amazon)
Total: $1743.78 (+ extra drive costs). Prices include shipping, taxes, and discounts when available. Generated by PCPartPicker 2020-08-02 21:15 EDT-0400.
Additional purchases: Case: I've got a rack in a vented AV closet, but my rack is only 19" deep, so I was limited on case options. I went with the Chenbro Rackmount 4U Server Chassis RM42300-F. I'm going to replace the 5.25" bays (right) and fan (left) with 2 Rosewill 4x3.5" cages; the 2nd case bracket is shipping now. Also adding 2 Icy Dock Express Cages for SSDs for VMs. Space will be tight, but I think I can slip the motherboard under the fan at the back of the cages and tuck the power cord under. SATA card: I would appreciate feedback on the SAS9211-8i 8-port internal 6Gb SATA+SAS PCIe 2.0 card. Will this work to expand from 6 SATA to 14 SATA ports? I haven't used one of these before. I know there are cheaper options on eBay, but there look to be a lot of overseas sellers and scams too. NVMe: I'll be adding a couple of these too. I need to read up on exactly how to use them, but it sounds like they would be cache drives. Mobo/CPU: Went back and forth on AMD vs Intel, LGA1200 vs LGA1151, etc. I know this will take a lot of tinkering, but if I do it right I'm hoping this beast can just sit in the closet and run for 8-10 years, so I'm buying current gen. Happy to be talked out of it if that doesn't make sense. The only downsides I can see are price and no ECC. Fans: Not sure how efficient the Rosewill 120 mm cage fans will be, but I guess I can swap them out if needed. Are the 3 Noctua 80 mm fans and the Noctua CPU cooler good choices? That's about it. I still have lots to figure out about how to migrate and will likely keep all three systems running until I've ported everything over. Other than the case/cages, I'm not really tied to anything here, so don't be shy about suggesting alternatives. I don't have a ton of time for trial and error, so any feedback is appreciated. Thanks!
    1 point
  42. vmunich, did you ever get around to writing something for this? I'd be interested in doing the same. Edit: NM, just found this on Reddit: https://technicalramblings.com/blog/monitoring-your-ups-stats-and-cost-with-influxdb-and-grafana-on-unraid-2019-edition/
    1 point
  43. Eh, sure. Effectively you just have to execute the command from my other post. If you don't want to do that manually every time you open an SSH connection, then you have to add it to this file: /root/.bash_profile. To make it persistent across reboots (that's how I did it, not saying it's the most ideal way): Edit /boot/config/go and add:
# Customise bash
cat /boot/config/bash_extra.cfg >> /root/.bash_profile
Create /boot/config/bash_extra.cfg (e.g. with nano) and add:
# docker-compose as container
alias docker-compose='docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$PWD:$PWD" \
  -w="$PWD" \
  docker/compose:latest'
And that's it. After a reboot, that will add the command to the .bash_profile file, meaning it'll automatically get executed once you open a shell.
    1 point
  44. I added the following to my reverse proxy for the admin panel:
location /admin {
    return 404;
}
I only access the panel locally using the direct IP.
    1 point
  45. For interface eth1, change the "Enable bonding" setting to No and apply. Then add eth1 as a member of bond0.
    1 point
  46. Axel F: beep -f 659 -l 460 -n -f 784 -l 340 -n -f 659 -l 230 -n -f 659 -l 110 -n -f 880 -l 230 -n -f 659 -l 230 -n -f 587 -l 230 -n -f 659 -l 460 -n -f 988 -l 340 -n -f 659 -l 230 -n -f 659 -l 110 -n -f 1047 -l 230 -n -f 988 -l 230 -n -f 784 -l 230 -n -f 659 -l 230 -n -f 988 -l 230 -n -f 1318 -l 230 -n -f 659 -l 110 -n -f 587 -l 230 -n -f 587 -l 110 -n -f 494 -l 230 -n -f 740 -l 230 -n -f 659 -l 460
    1 point
  47. Mario.... beep -f 330 -l 137 -n -f 330 -l 275 -n -f 330 -l 137 -d 137 -n -f 262 -l 137 -n -f 330 -l 275 -n -f 392 -l 550 -d 550 -n -f 262 -l 412 -n -f 196 -l 137 -d 275 -n -f 164 -l 137 -d 137 -n -f 220 -l 275 -n -f 247 -l 137 -d 137 -n -f 233 -l 137 -n -f 220 -l 275 -n -f 196 -l 205 -n -f 330 -l 205 -n -f 392 -l 275 -n -f 440 -l 275 -n -f 349 -l 137 -n -f 392 -l 137 -d 137 -n -f 330 -l 275 -n -f 262 -l 137 -n -f 294 -l 137 -n -f 247 -l 412 -n -f 262 -l 412 -n -f 196 -l 137 -d 275 -n -f 164 -l 275 -d 137 -n -f 220 -l 275 -n -f 247 -l 137 -d 137 -n -f 233 -l 137 -n -f 220 -l 275 -n -f 196 -l 205 -n -f 330 -l 205 -n -f 392 -l 275 -n -f 440 -l 275 -n -f 349 -l 137 -n -f 392 -l 137 -d 137 -n -f 330 -l 275 -n -f 262 -l 137 -n -f 294 -l 137 -n -f 247 -l 412 -d 275 -n -f 392 -l 137 -n -f 370 -l 137 -n -f 349 -l 137 -n -f 311 -l 275 -n -f 330 -l 137 -d 137 -n -f 207 -l 137 -n -f 220 -l 137 -n -f 262 -l 137 -d 137 -n -f 220 -l 137 -n -f 262 -l 137 -n -f 294 -l 137 -d 275 -n -f 392 -l 137 -n -f 370 -l 137 -n -f 349 -l 137 -n -f 311 -l 275 -n -f 330 -l 137 -d 137 -n -f 523 -l 275 -n -f 523 -l 137 -n -f 523 -l 550 -n -f 392 -l 137 -n -f 370 -l 137 -n -f 349 -l 137 -n -f 311 -l 275 -n -f 330 -l 137 -d 137 -n -f 207 -l 137 -n -f 220 -l 137 -n -f 262 -l 137 -d 137 -n -f 220 -l 137 -n -f 262 -l 137 -n -f 294 -l 137 -d 275 -n -f 311 -l 275 -d 137 -n -f 294 -l 275 -n -f 262 -l 550 -d 550
    1 point