Leaderboard

Popular Content

Showing content with the highest reputation on 10/12/21 in all areas

  1. Hello everyone, I know this may not be the right place for it, but I'd like to thank all the active members of this forum. You have all made my start in the unRAID world much easier. I've been a silent reader for a few weeks now and have put my system together. Whether it was the hardware advice or the solutions to small and large problems: as the newcomer I am, I've so far been able to find an answer here in the forum to every question I had. Great stuff! My system is running and I'm already starting to want to do more with it 🙈 Many, many thanks! 👏 Wishing you all the best.
    4 points
  2. I saw that too; looks like it's the same width for both. (Because why hide it under a fake link? 😁)
    2 points
  3. One thing I should probably also mention: make sure you don't use the (default) 'latest' tag in the docker image, but instead explicitly state a version (even if it is the latest one). That way you can check the Unifi forums (or here) to see if there are any bugs in a new version before upgrading to it, but still receive updates to the Docker image itself. As such, for the current latest version use this tag in the Docker 'Repository' value on the 'edit' page: linuxserver/unifi-controller:version-6.4.54 If you have already installed the Docker using the 'latest' tag, don't fret: just change the 'repository' value to the above, and you'll be fine. Note that if you have a 'latest' tag, you shouldn't try to downgrade to an earlier version: you're best off nuking the install and starting from scratch if you want to downgrade (just because configs often change between versions). Info on tags here: https://hub.docker.com/r/linuxserver/unifi-controller/tags
    2 points
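To make the tag-pinning advice above concrete, here is what the pinned repository looks like from the command line. The ports and appdata path are illustrative placeholders, not from the original post; on Unraid you would put the pinned tag in the template's 'Repository' field instead:

```shell
# Pull a pinned version explicitly instead of relying on the implicit 'latest' tag
docker pull linuxserver/unifi-controller:version-6.4.54

# Rough CLI equivalent of the Unraid template (ports/volumes are example values):
docker run -d --name=unifi-controller \
  -p 8443:8443 \
  -v /mnt/user/appdata/unifi-controller:/config \
  linuxserver/unifi-controller:version-6.4.54
```

With the version in the tag, the container only changes when you deliberately edit the tag, while image-level fixes for that version still arrive as normal updates.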
  4. FYI for those with this chip in their system: I decided to write a driver for this chip since I plan on getting a QNAP TS-X73A unit for my Unraid setup and it has this chip. It will be a full hwmon driver that will work with the fan control plugin in Unraid. A good amount of progress has already been made and can be found here: https://github.com/Stonyx/QNAP-EC. I'll provide updates once it's actually usable. Thanks, Harry
    2 points
  5. Compose Manager Beta Release! This plugin installs docker compose and compose switch. Use "docker compose" or "docker-compose" from the command line. See https://docs.docker.com/compose/cli-command/ for additional details. Install via Community Applications This plugin now adds a very basic control tab for compose on the docker page. The controls allow the creation of compose yaml files, as well as allowing you to issue compose up and compose down commands from the web ui. The web ui components are based heavily on the user.scripts plugin so a huge shoutout and thanks goes to @Squid. Future Work: Add scheduling for compose commands? Allow more options for configuring compose commands.
    1 point
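As a minimal sketch of what the plugin drives under the hood, here is a tiny compose file and the up/down commands it issues. The service name, image, and file location are my own illustrative choices, not something shipped by the plugin:

```shell
# Write a minimal compose file (service and image are example values)
mkdir -p ~/compose-demo
cat > ~/compose-demo/docker-compose.yml <<'EOF'
services:
  web:
    image: nginx:1.21
    ports:
      - "8080:80"
EOF

# The two operations the plugin's control tab exposes:
docker compose -f ~/compose-demo/docker-compose.yml up -d
docker compose -f ~/compose-demo/docker-compose.yml down
```

With compose switch installed, the legacy `docker-compose` spelling is redirected to the same `docker compose` plugin, so existing scripts keep working.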
  6. Today I'm releasing a beta / release candidate of a new GUI for Community Applications (thanks to @jonp and @Mex) Since this is a complete overhaul of the GUI, I need to get feedback / bugs / UX experience etc prior to actually releasing this. Please (please) try and break it, let me know of any issues, display aberrations etc. Outside of the GUI changes, the other notable changes are Enable Reinstall Default is now renamed to Install 2nd Instance, and it will automatically rename the app to make things easier when doing this Help Text is completely gone, because it's just not needed at all with this GUI (I hope) This release does require Unraid 6.9.0+ in order to work. More features are being added in the coming weeks, but it's time to get the user base's opinions and issues on this. Translations for the new text aren't available yet, so running in a language other than English will work but you will also see English present in spots. To install, first uninstall Community Applications. Then within Plugins - Install Plugin, paste in the following URL https://raw.githubusercontent.com/Squidly271/betaCA/main/plugins/community.applications.plg Any issues when running this should be posted in this thread, not the general CA thread.
    1 point
  7. Thanks. Fixed (it depended upon what your font-size was) Legacy "feature". Clearing or searching on blank now does nothing. Already fixed The Opinion section is up to @Mex
    1 point
  8. I did some more digging on my end and I think I have found the culprit, my VPN client. I'm using AirVPN with the eddie client. Since you said it was a DNS lookup wait I started thinking about my VPN. I'm quite certain that all of the HAR files I have provided have been with my VPN turned off. However after I wasn't having any issues this morning, I went and turned on my VPN and the slow loading was immediately present. I guess that probably leaves two options: I had my VPN on and didn't realize it and that is what has been causing issues. However that would beg the question why is FF slow to load and Chrome isn't when tested at the same time. The other being that when I turn off my VPN something is erroring and leaving my network connection in an incorrect state, possibly using the wrong DNS server or something along that line. In the eddie client, I was using the wintun driver option and have reverted back to the standard driver. I then turned off the VPN and the loading issue went away. I'm going to keep an eye on it and see if the loading issue comes back and under what circumstances. But I guess for now I think I can focus on my VPN client being the root issue. I'll update if I find out otherwise. Thanks for digging into the issue.
    1 point
  9. Move the 2 files to the appdata folder of your MySQL docker and run the commands from the shell of the SQL docker you use, not from Unraid and not from the Guacamole docker ...
    1 point
  10. @Hugh Jazz You’ll have to run from within the container since it has the mysql command line utilities.
    1 point
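The two replies above boil down to: get the files where the database container can see them, then run the import from inside that container, because it is the one with the mysql command-line client. A hedged sketch, where the container name (`mysql`), database name, and the appdata-to-`/config` mapping are placeholders you would adjust to your own setup:

```shell
# Host side: copy the schema files into the MySQL container's appdata folder
cp 001-create-schema.sql 002-create-admin-user.sql /mnt/user/appdata/mysql/

# Enter the database container (it has the mysql CLI; Unraid and the
# Guacamole container do not)
docker exec -it mysql bash

# Inside the container, run the files against your database:
mysql -u root -p guacamole_db < /config/001-create-schema.sql
mysql -u root -p guacamole_db < /config/002-create-admin-user.sql
```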
  11. Argh. Not sure what happened but will poke around. Looks like my youtube width changes reverted too....
    1 point
  12. This is great info, thank you for that!
    1 point
  13. Only with extreme workarounds as far as I know; you'll probably be faster setting it up fresh. Here is the relevant part of the unRAID help text:
    1 point
  14. Wow, oh really! I think that might have been the case, but I'm using PIA with socks5 now to avoid things
    1 point
  15. It's no different with Plex. Multiple disks only spin up during indexing, but that's to be expected.
    1 point
  16. Only if you chose qcow2 when creating it; if you chose raw, the vdisk occupies the full amount of space you specified.
    1 point
  17. Dooph! I'm still living in 2020 don't ruin my reality and don't tell me what happens in 2021.... Fixed...
    1 point
  18. It seems that in May, the release notes decided to travel back in time
    1 point
  19. MySQL is the correct choice. I'll try to create some detailed instructions on how to create the database. But in short: you need to create a database, create a user, attach the user to the database, and then run the schema SQL scripts. If you're not comfortable with this, just use the container that already includes MariaDB.
    1 point
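The short checklist above (database, user, grant, schema), spelled out as commands. All names and the password are placeholders of my own, and the schema path is whatever your application ships:

```shell
# Create the database and user, and attach the user to the database
mysql -u root -p <<'SQL'
CREATE DATABASE appdb;
CREATE USER 'appuser'@'%' IDENTIFIED BY 'choose-a-strong-password';
GRANT SELECT, INSERT, UPDATE, DELETE ON appdb.* TO 'appuser'@'%';
FLUSH PRIVILEGES;
SQL

# Then run the application's schema scripts against the new database
cat schema/*.sql | mysql -u root -p appdb
```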
  20. Thanks for the good info my dude, reassuring to know that it works with the hardware, I have just had a gigabit line switched on - so will be keeping DPI disabled (I had gleaned enough to know the usg wasn't beefy enough to handle this). Will give the 6.x version a go, since this is a fresh network I can experiment a bit without much consequence. Thanks for the tip around the current bandwidth and the android app, very useful stuff to know.
    1 point
  21. I guess most of you who tried to test the final Windows 11 build have already noticed: the installer checks for the presence of a TPM module and enabled Secure Boot. If one of them is not present, the installer will halt with an error directly after the edition selection. With Shift+F10, open a command line. Enter "regedit" into the cmd and press enter; the registry editor should open up. Navigate to "HKEY_LOCAL_MACHINE\SYSTEM\Setup", create a new key called "LabConfig" on the left side by right-clicking on "Setup" and selecting New > Key, then add the following 3 DWORD (32-bit) entries inside LabConfig on the right: BypassTPMCheck, BypassSecureBootCheck, BypassRAMCheck. Double-click each entry and change the value from 0 to 1. After that, close the registry editor and the command line and go one step back to the edition selection screen, select the version you want to install, press next, et voila, Windows 11 should install fine. Doing all this straight after the installer has started, on the language selection menu, also works; no need to go to the edition selection first. For this to work I used the default Win10 template with a 70G vdisk, 4G RAM and 8 cores; nothing else in the template or in the xml was changed. Keep in mind this is only a solution if you want to try the new Windows version. No one at this point can guarantee how long this will work or whether a future update will break something; we all know MS randomly "breaks" things with every update. Some reports say updates might not even be possible with this reg tweak in the future, hence my warning for all those who can't wait for the next Unraid version, which will probably come with a fix and an emulated TPM module: only use this for testing Windows 11! Happy testing folks
    1 point
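For reference, the three LabConfig values from the manual regedit steps above can also be captured as a .reg file (this is just the standard registry export format for the same entries; the same caveats apply, testing only):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\Setup\LabConfig]
"BypassTPMCheck"=dword:00000001
"BypassSecureBootCheck"=dword:00000001
"BypassRAMCheck"=dword:00000001
```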
  22. Thank you. I want to add that this can be the solution if one doesn't want to mess with installing swtpm in Unraid, OVMF with SB and TPM, and the XML template. Once that Unraid version is released, the registry hacks can be reverted and Windows updates will work.
    1 point
  23. What you could do is: Go to Tools -> New Config and use the option to retain all current assignments Return to the Main tab and move the disk mentioned from being a data drive to a parity drive. Start the array to commit the new assignments and build parity based on the assigned data disks. The array will show as unprotected until the parity build completes.
    1 point
  24. What is the difference between: and: There obviously is a difference, but without any detail it's hard to say. What is 'the same subnet'? Are you sharing networks with other containers via the rtorrentvpn network? Are you talking about the docker network? Are you talking about a separate VLAN? As much technical detail as possible is required here; screenshots of it failing would be useful. If a machine running your web browser is NOT in your defined LAN_NETWORK, in your case 10.0.20.0/24, then it will be blocked; this is a security feature of the container and prevents IP leakage. If you have multiple networks then you have to add them into LAN_NETWORK (comma separated). Also please do the following:- https://github.com/binhex/documentation/blob/master/docker/faq/help.md EDIT - ahh I see you're more detailed post here, ok let me read up on your comments:- https://forums.unraid.net/topic/46127-support-binhex-rtorrentvpn/?do=findComment&comment=1042395
    1 point
  25. If you mean this it just represents the total allocated chunks, they are allocated (and removed) as needed, GUI should show correct total, used and free space for this type of pool.
    1 point
  26. You are quite right. I should update the instructions removing that warning as I highly doubt anyone is still running the older version of Unraid and so there is no need to keep those instructions. Just follow the instructions in the template adding the --runtime=nvidia option and the other things that it says to.
    1 point
  27. Simple fix for anyone still searching, open console via unraid web-gui, paste the following: cd /usr/local/bin/nzbget/nzbget && curl https://curl.se/ca/cacert.pem > cacert.pem
    1 point
  28. Usually the Exporter is stopping if the connection to PiHole is lost and there is nothing I can do about that. I really don't want to wrap the Exporter in a script that tries to restart the Exporter if the connection is lost. Sent from my C64
    1 point
  29. Scheint zu funktionieren Gibt jetzt ein Update im Kernel dazu: https://www.phoronix.com/scan.php?page=news_item&px=Linux-5.15-rc5-x86
    1 point
  30. Actually, you can do it more often than that, just not with the automatic web tool. If you should ever need a second replacement (within the one-year period), you would have to contact Customer Service and explain the situation.
    1 point
  31. Should be fixed now, please update the container and it will install the new version afterwards. If you experience any issues please feel free to contact me again.
    1 point
  32. I can confirm this was my missing piece. I was able to instantly get this running through your directions using a custom network. But I wanted to route this container through one of Binhex's VPN containers. Once I changed the container's network to none and routed through the VPN, it never resolved 'postgres'. I kept changing the config to host IPs / container IPs with no luck and was stuck for a while. I should have read through the posts: making the above change fixed my problem instantly. The moment I changed the path variable and restarted the container, it picked up the IP and instantly loaded. Invidious looks like it works through a reverse proxy whether or not it has the port, domain or https_only config values set. I'm guessing it doesn't correctly link without them set, and instead would give the IP:Port/video id. But since it looked like it worked, I assumed it was reading the config. Knowing this, it most likely would work on bridge connections by directly pointing to the postgres container IP. Anyways, thanks for putting this container together!
    1 point
  33. You should be able to mount and/or repair the disk as an unassigned device. Can't see any reason to get a VM involved.
    1 point
  34. This should help https://wiki.unraid.net/Manual/Shares#Allocation_method
    1 point
  35. OK, so your question is really about why disk1 is the only one being used so far. Allocation Method is one of the settings for each of your user shares. Highwater is the default and is a good compromise between using all disks eventually and not constantly switching between disks just because one disk might briefly have more free space than another. For your specific disk assignments, since disk1 is 4TB, the first decision point is at half of that, or 2TB. Then, since disk1 would still have as much free space as any other disk, it would continue to be used until the next halfway point, at 3TB. Then the next disk would be chosen.
    1 point
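The decision points described above follow a simple halving rule. A rough sketch of that arithmetic (my own illustration in GB, not Unraid's actual code): the free-space threshold starts at half the largest disk and halves each round, so the "used" switch points for a 4TB disk land at 2TB, then 3TB, and so on.

```shell
# High-water switch points for a 4000GB (4TB) largest disk
largest=4000
height=$(( largest / 2 ))               # free-space threshold starts at 2000GB
first_switch=$(( largest - height ))    # disk used past 2000GB -> move on
height=$(( height / 2 ))                # threshold halves to 1000GB
second_switch=$(( largest - height ))   # disk used past 3000GB -> move on
echo "switch points: ${first_switch}GB, ${second_switch}GB used"
```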
  36. @milfer322 @thebaconboss @pieman16 @dest0 I have created a new release on the development branch! Most important fixes are, added 2FA support and fixed the "new device spam" https://github.com/PlexRipper/PlexRipper/releases/tag/v0.8.7 Next high priority fixes are these: - Download media files are being wrongly named due to a parsing error and given the incorrect file permissions. - Download speed limit configurable per server as not to fully take up all available bandwidth - Slow download speeds with certain servers - Disabling the Download page commands for TvShow and Seasons download tasks, these are currently not working and might confuse users I've also added the feedback I received here to the Github issues, please subscribe to those for updates!
    1 point
  37. Don't recommend Marvell, and I also don't know of any 6-port Marvell controller that doesn't use SATA port multipliers, and those are not good. Asmedia is available in the various Amazon and eBay stores; if you can't get that, look for an LSI HBA, there are various non-RAID models referenced here:
    1 point
  38. That's the maximum usable speed when using that controller with 6 devices at the same time. Since current disks can't reach 300MB/s, it still won't be much of a bottleneck for the 2 SSDs; you can also connect the SSDs to onboard SATA, which has a little more bandwidth, and connect only HDDs to the add-on controller.
    1 point
  39. Look for an Asmedia ASM1166, only PCIe 3.0 x2 but that's enough for 6 HDDs, even if they are used simultaneously.
    1 point
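A back-of-the-envelope check of the "x2 is enough for 6 HDDs" claim (my own arithmetic, using the common approximation of roughly 985MB/s usable per PCIe 3.0 lane):

```shell
# Shared bandwidth per disk on a PCIe 3.0 x2 controller with 6 HDDs attached
per_lane=985   # approx. usable MB/s per PCIe 3.0 lane
lanes=2
disks=6
per_disk=$(( per_lane * lanes / disks ))
echo "${per_disk}MB/s per disk"
```

That works out to roughly 328MB/s per disk even with all six busy at once, well above what any current HDD can sustain.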
  40. This is already in the works from @limetech itself but will need an additional Kernel module and will be most likely included in the next or one of the next RCs. But keep in mind if you pass through the TPM from the host you can use it only for one VM at the same time.
    1 point
  41. I see you posted the steps for the software, but what version worked with Slackware? Can you provide the file you used for arcconf and where you put it? Sorry to necropost, but I must know.
    1 point
  43. This will be fixed in Unraid v6.10.
    1 point
  44. For me multiple arrays would be more useful. You can already obtain ZFS with the plugin or by passing a controller card to a VM; plenty of guides out there for both. Large drives in today's market are insanely expensive: 8TB for $150, ouch. Multiple arrays allow use of already-obtained hardware; I have around 50x 2TB SAS drives that are sitting unused. It also helps with the slow write performance by splitting data types into their respective arrays; dual parity drops array transfers to 40MB/s. As future features, I would prefer: parity reworked. No idea if it is even possible, but imagine having 1x parity drive, and then that drive in a mirror set with other drive(s). This removes the write penalty of having two parity drives, but actually gives you as many parity drives as you want to mirror. There are likely problems with this setup that I do not see. Tiered storage: have a live file system. t0 for NVMe, for high-transfer-rate items (10gbit). t1 for SSDs, where VMs live. t2 is a RAID10 array, used for most transfers. t3 is the parity array. Data is moved between the four as necessary based upon frequency of access. There is a manual way of setting this up now with multiple cache pools, but I would prefer an automatic method.
    1 point
  45. Figured it out; here is a brief how-to for Unraid specifically. Alternately, @SpaceInvaderOne made an excellent video about the plugin a few years ago, which inspired me to try the following: Pull the container using CA, and make sure you enter the mount name like "NAME:". In the host (aka Unraid) terminal, run the command provided on page 1 of this post: docker exec -it Rclone-mount rclone --config="/config/.rclone.conf" config. Follow the onscreen guide; most of it flows with other tutorials and the video referenced above, UNTIL you get to the part about using "auto config" or "manual". Turns out it is WAY easier to just use "manual", as you'll get a one-time URL to allow rclone access to your GDrive. After logging in and associating the auth request with your gmail account, you'll get an auth key with a super easy copy button. Paste the auth/token key into the terminal window, continue as before, and complete the config setup. CRITICAL - go to /mnt/disks/ and run ls -la to make sure the rclone_volume is there, then correct the permissions so the container can see the folder, as noted previously in this thread: chown 911:911 /mnt/disks/rclone_volume/ (assuming you're logged in as root, otherwise add "sudo"). Restart the container, and verify you're not seeing any connection issues in the logs. From the terminal, cd /mnt/disks/rclone_volume and ls -la; now you should see your files from GDrive. I was just testing to see if I could connect without risking anything in my drive folder, so everything was in read-only, including the initial mount creation with the config. As such, I didn't confirm any other containers could see the mount, but YMMV. Have a great evening and weekend.
    1 point
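The steps above, condensed into one command sequence. The container name `Rclone-mount`, the config path, and the `/mnt/disks/rclone_volume` mount point all come from the post itself; run the verification steps in the order shown:

```shell
# 1. Interactive rclone config inside the container (choose "manual" auth when asked)
docker exec -it Rclone-mount rclone --config="/config/.rclone.conf" config

# 2. Verify the mount point exists on the host and fix its ownership
ls -la /mnt/disks/
chown 911:911 /mnt/disks/rclone_volume/   # prefix with sudo if not root

# 3. Restart the container, then confirm the remote's files are visible
cd /mnt/disks/rclone_volume && ls -la
```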
  46. Just wanted to add my thanks for this image. What a powerful little addition. One thing that wasn't entirely clear when using a Unifi USG was using netboot.xyz.efi instead of netboot.xyz.kpxe if you want a UEFI PXE boot. The efi image is mentioned further down, but specifying the option to use it under the USG section would've saved me a bit of time googling how to get a UEFI PXE boot (once I found the USG section I didn't think to look further down). Having all the various Linux OSes and a myriad of tools is great. I have a few missing because I'm running UEFI, but I can live without them (although I'm not sure why netboot.xyz doesn't include memtest; googling seems to say that memtest can be booted through UEFI too). I installed MDT on one of my Windows Servers, configured that, and then modified the windows menu in netboot.xyz to automatically load the WIM file generated by MDT, so now as well as the built-in boot options I can load into MDT and deploy any of my Windows OSes too. This will seriously save me some time with future rebuilds and deployments. Thank you!
    1 point