Leaderboard

Popular Content

Showing content with the highest reputation on 04/01/21 in Posts

  1. Ever wanted more storage in your Unraid server? See what 2 petabytes of storage looks like using Nimbus Data 100TB SSDs. (Check today's date.)
    7 points
  2. Big News from NVIDIA
     Just a few hours ago, NVIDIA added an article to its support knowledge base regarding GPU passthrough support for Windows VMs. While we've supported this functionality for some time, it was done without official support from the vendor itself. This move by NVIDIA to announce official support for this feature is a huge step in the right direction for all of our VM passthrough users. It should also help instill confidence in users who wish to pass these GPUs through to virtual machines without worrying that a future driver update will break this functionality. Let us know what you think about this announcement here!
    2 points
  3. This is absolutely brilliant @SpaceInvaderOne!
    2 points
  4. Is the port used for the WebGUI defined with EXPOSE in the Dockerfile? If not, that is the cause. (A small sketch follows this entry.)
     Why do you need that? Does the container need access to the host? Or do you have some containers on bridge and some on a custom network? Containers that are on the same network can communicate with each other.
     Exactly, but that is actually a security feature of Docker, so that the host's network and Docker's network are kept separate. With the VLAN, Unraid "bypasses" this feature and everything can talk to everything.
    2 points
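     A quick shell sketch of the two points above, using made-up image and container names; the inspect call shows whether an image declares an EXPOSE port (which Unraid's template picks up for the WebUI), and the custom network shows containers reaching each other directly:
       # Does the image declare an EXPOSE port? (prints map[] if the Dockerfile has no EXPOSE)
       docker inspect -f '{{ .Config.ExposedPorts }}' some/image:latest
       # Containers on the same user-defined network can reach each other by name:
       docker network create testnet
       docker run -d --name app1 --network testnet nginx
       docker run -d --name app2 --network testnet alpine sleep 300
       docker exec app2 ping -c 1 app1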
  5. Back about ten years ago, I decided I had a storage space problem. So I looked around to see what was available. I don't remember what was around at the time but I came across the unRAID product. So you load something on a USB drive, boot to it, and it then sees all the attached drives. Create shares, add data and you're good. It has data recovery as well in case of drive failure. Sounded pretty much agnostic as far as motherboard hardware was concerned. Interesting concept. And an online community with all kinds of info.
     Like any IT guy with 20+ years of hardware experience I had accumulated assorted old/unused small drives and out-of-date motherboards. Let me slap together something as a proof of concept real quick and see what happens. This was just as SATA was coming out, but my old boards only had IDE Master/Slave stuff. It was just stuff I had lying around. Got four assorted drives of unknown heritage from the junk box and stuffed them into an old case. Made the bootable USB and fired the whole mess up. Got the web console up and poked around. Yea, all the drives were listed. OK, biggest drive will be parity and the assorted leftovers will be drive space. I liked that idea because you could use what you had on hand; they didn't need to be the same size. Probably had a total of less than one TB of drive space scattered across the entire box. UnRAID got done prepping all the space and I set up some shares and copied test files to them. That was pretty straightforward. So I let it run a few days as I went on to other stuff in life.
     I brought the console up a few days later and saw that I had a drive failure. The drives did come from the junk box so that was sorta expected. OK, let's see how this data protection thing works. The shares were still there and so was the data. The parity drive was doing its job. Nice. That's working as advertised. Almost got me sold at this point. Back to the online community and the assorted FAQs. How do you recover the drive/data? Shut down, pull the bad drive and insert another drive of the same or larger size. Click a few buttons and the system kicks in, the data recovery begins, and the new drive is repopulated with data. Only downtime was swapping out hardware. From the user point of view I couldn't tell anything was going on in the background as the data was being recovered. OK, I'm sold. Take my money now.
     Bought a couple of keys and a pair of USB drives in September of 2011. Don't remember for sure but I think they had a sale or something going at the time. I'm on my second specially built production server now. Got the second USB on the test box; it's just an old desktop stuffed with the smaller drives I have pulled from the production box as I upsize it. 23 TB scattered across 8 drives with Plex running on the SSD. Yes, I'm happy with the product.
    2 points
  6. This is the support thread for multiple plugins, such as:
     AMD Vendor Reset Plugin
     Coral TPU Driver Plugin
     hpsahba Driver Plugin
     Please always state which plugin you need help with, and include the Diagnostics from your server and a screenshot of your container template if your issue is related to a container. If you like my work, please consider making a donation.
    1 point
  7. Welcome to IBRACORP Support
     = Support Us =
     Membership
     Help support my work by subscribing to our site and our YouTube channel. It's free, with paid options. There are no fees involved and it really helps me give back to you.
     Become a free subscriber of our site to:
     Receive the latest YouTube videos first, before they go public on YouTube.
     Read our articles which go with our videos and other work we do.
     Get emails directly to your inbox with the latest content. No spam, no bs.
     More
     Become a paid subscriber of our site to:
     Get exclusive videos only for supporters.
     Ask for direct support with installation, or consultancy.
     Receive advanced tutorials and articles for your IT needs.
     Help support indie creators (and a father of two) to bring you the best content possible!
     = PayPal =
     Prefer to donate via PayPal? You can donate to us right HERE. We really appreciate your support in any shape or form.
     = IBRACORP =
     IBRACORP - https://ibracorp.io/
     YouTube: https://youtube.com/c/IBRACORP
     GitHub - https://github.com/ibracorp
     Discord - https://discord.gg/VWAG7rZ
     Twitter - https://twitter.com/IBRACORP_IO
     == Contact Us ==
     If you require support or have any questions you can contact us at [email protected]. All questions/issues related to getting any of my images running on Unraid can be asked here. If you think a template needs improvement, feel free to post that here too.
     Authelia
     Authelia is an open-source authentication and authorization server providing two-factor authentication and single sign-on (SSO) for your applications via a web portal. It acts as a companion to reverse proxies like nginx, Traefik or HAProxy, letting them know whether requests should pass through; unauthenticated users are redirected to the Authelia sign-in portal instead. (A rough example follows this entry.)
     IBRACORP Links:
     Guide:
     unRAID Template: https://github.com/ibracorp/authelia.xml/blob/master/authelia.xml
     unRAID installation instructions: https://github.com/ibracorp/authelia
     This documentation will help users who have NGINX Proxy Manager and want to use Authelia to secure their endpoints, i.e. Radarr etc.
     Official Links:
     Authelia: https://www.authelia.com/
     Docs: https://www.authelia.com/docs
     GitHub: https://github.com/authelia/authelia
     Docker Hub: https://hub.docker.com/r/authelia/authelia
    1 point
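     A rough way to see the reverse-proxy companion model described above in action, with hypothetical hostnames; an unauthenticated request to a protected app should come back as a redirect to the Authelia portal:
       # radarr.example.com is assumed to be proxied by NPM and protected by Authelia
       curl -sI https://radarr.example.com | grep -iE '^(HTTP|location)'
       # Expected shape of the answer: a 302 whose Location points at the Authelia portal,
       # e.g. location: https://auth.example.com/?rd=https://radarr.example.com/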
  8. Finally got this monster up and running after weeks of finding the right parts, new and used. It took a while but I am now there. This super beefy machine now has excellent cooling, is super silent, and currently runs everything I need with room to spare.
     Specs:
     Ryzen 3900, 65W TDP
     32GB DDR4 3000MHz
     Quadro P600
     LSI 8i SAS with expander for 16 HDDs + 6 from mobo (active cooling mod)
     1TB NVMe cache
     256GB NVMe on PCIe 1x
     MSI Gaming Pro X470 motherboard
     650W PSU
     Quad-port Intel NIC for isolating VMs
     10Gb SFP+ NIC for main connection
     25 drive in 4 drive cages
     D5 pump with reservoir
     240mm radiator
     Water temperature display
     Fans controlled by Dynamix auto fan control
     All of it weighs about 50kg
    1 point
  9. In a strange change of heart, NVIDIA has released, as part of their 465.89 driver, official support for passing their GPUs through to a VM. It's currently in beta; more detail in the help link below: https://nvidia.custhelp.com/app/answers/detail/a_id/5173/~/geforce-gpu-passthrough-for-windows-virtual-machine-%28beta%29 I haven't been able to test it at the moment as I use Proxmox to do all my VM heavy lifting, but I may switch back to Unraid if this works properly.
    1 point
  10. It may work, but that's no guarantee. Some Vega 20 cards seem to refuse the BACO reset (Bus Active, Chip Off)... If you don't mind, could you share your vendor/model with us, and maybe try it out and share your results? We would highly appreciate that.
    1 point
  11. For real, at least for a sample size of 2 posts so far.
    1 point
  12. 1) yes 2) any name (it appears as a name for the worker, so you can separate multiple GPUs) 3) Probably, google it 4) Perfectly normal for Quadro cards as they can't be overclocked/they are locked.
    1 point
  13. From what I know it should work. Perhaps @giganode can help.
    1 point
  14. This will never work, because neither the macOS kernel nor the hypervisor for macOS supports the AMD-V (SVM) CPU feature. Simply put, anything that uses macOS's hypervisor requires Intel's VMX CPU feature. (See the one-liner after this entry.)
    1 point
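     A quick check of which hardware-virtualization flag the host CPU actually exposes (Intel VT-x reports vmx, AMD-V reports svm):
       grep -oE 'vmx|svm' /proc/cpuinfo | sort -u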
  15. The two biggest causes of problems, in my experience, as long as the disk SMART reports are OK.
    1 point
  16. Really good - thanks to you, @ich777!
    1 point
  17. Switch endpoints; it's most likely a PIA outage for whatever endpoint you are attempting to connect to.
    1 point
  18. Could you please update the Mellanox Firmware Tools plugin when you get a chance? Hopefully that will make flashing easier next time.
    1 point
  19. .....and since Unraid does not have a full-featured interface for the Linux firewall, you should not use this feature at this point, on the Unraid host itself. With a VLAN-capable router you can also, with a bit of tinkering, enforce/bend this in that router's firewall.
     That is exactly the point...."access to the host" means an IP-based service on the Unraid server itself (plugin, Web UI). Everything else lives in one of the other VLANs.
     A VLAN-capable switch only costs a few bucks more, and with it you can at least achieve a separation. When it comes to coupling or not coupling the networks, though, the router has to play along...in Germany anyone with a Fritz!Box is out of luck there for now, because it cannot do VLANs and its firewall is not configurable.
     The Unraid host system with eth0/br0 therefore sits in my management LAN...Docker, VMs, and the kids as well have no business being there. I simply do not provide services there, and while SMB/Samba listens on all interfaces by default, that can be changed with the Samba parameters, e.g. in the SMB Extras section:
     bind interfaces only = yes
     interfaces = lo br0 br0.10
     ...so SMB is only on the host interface and on VLAN 10.
    1 point
  20. Please mark this as solved. I have no idea why the upgrade removed br0 for me. But simply adding it back in network settings did work. Thank you for the help!
    1 point
  21. Hello, thanks for this suggestion, I was able to boot the OS.
    1 point
  22. The plugin allows for remote connection to your server. It uses the forum, via SSL, to connect to your machine so you can admin it remotely.
    1 point
  23. Thanks op. The answer was so obvious, I cannot understand why it did not occur to me. Works for me now. One thing I am wondering -- it looks like the files downloaded are owned by root. Is that to be expected?
    1 point
  24. Heh, alright thanks. I seem to be getting the correct hash now anyways. Cheers
    1 point
  25. Yes, of course that works, as long as you do not put the Docker containers on the host network br0 but on a different bridge. I still have "Host access to custom networks" set to "disabled". What business does the unRaid host have in my containers anyway? As mentioned, I have VLANs enabled, and the Docker containers can happily talk to each other and to the Internet...just not to the unRaid host itself. My router handles the inter-VLAN routing. (An illustration follows this entry.)
    1 point
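     For illustration only, a sketch of the kind of network the setup above relies on; on Unraid these custom VLAN networks are normally created for you once VLANs are enabled in the network/Docker settings, and the subnet, gateway and names here are made up:
       docker network create -d macvlan \
         --subnet=192.168.10.0/24 --gateway=192.168.10.1 \
         -o parent=br0.10 vlan10          # br0.10 = VLAN 10 sub-interface of br0
       docker run -d --name someapp --network vlan10 nginx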
  26. Yup, I can connect now both ways without an issue. Good lord.... so sorry to bother you with something so stupid. Thank you for being patient and trying to help.
    1 point
  27. OK..... problem solved, I believe. I got a message from my ISP last week that they were doing maintenance for the entire township and I was included in that. What I didn't know is that my IP address changed 😆. Steam seems to be able to see it both ways now. I will have a friend try to connect and see what happens.
    1 point
  28. Tools - New Config will let you assign disks however you want and then rebuild parity. The disks will be taken as they are with their existing contents, and parity will be rebuilt based on the data disks assigned to the parity array.
    1 point
  29. I'm not an IT guy; I don't work in a computer-related field ... Guess you could say I'm a prosumer who tends to dream bigger than my current needs. I've quickly become a home lab junkie, having dabbled in ESXi, FreeNAS, Proxmox, standalone proprietary NAS units, and the like. With Unraid I finally feel like I have landed at a place where most of my current (and future) needs should be met. I planned my current NAS (Nerve Centre) around using Unraid and so far have not been disappointed. I will continue to use some of the other "stuff" but I can see Unraid taking care of 90% of my needs for the foreseeable future!
    1 point
  30. I did see it behave like it should a few minutes after Squid replied with his success; I just wanted to wait a few hours to confirm. It does look solved, I would say. As for the cause, all the token stuff worked everywhere else I use it, so it's a mystery to me.
    1 point
  31. At the time of the original posting, it was saying "Not Available", and tracing out the dockerMan code revealed token errors. As of today, it is returning "Up To Date", so I think the problem is fixed (by Docker / GitHub themselves), but no one posted in response to my implied question, which was the same as yours.
    1 point
  32. There is a plugin. Do you have Community Applications installed? Have you searched the Apps page?
    1 point
  33. Frnt_Fan1 and Frnt_Fan2 are my two HDD cage fans and are set up identically. For "Hard drives to poll" I just have the HDDs selected, as they are the ones in the cages, and I have deselected the SSD/NVMe drives. The way my case is designed, the air from the HDD cage fans flows through the HDD cages and is then redirected back over the MB. I have set the minimum HDD fan speed to 50% to keep air flowing back to the MB; having airflow at at least 50% even when all HDDs are spun down helps with CPU and PCH temps (nothing can fix the MB temp reading). CPU Fan settings:
    1 point
  34. Several of us are using this, and using the hpssa option to enable it. I use it on a P420i that does not have a native HBA mode. If yours supports HBA mode, then it should be OK and you'll save yourself a PCI slot for other things.
    1 point
  35. Yeah the pfSense-sponsored WireGuard implementation for FreeBSD had some issues. Does not affect Unraid.
    1 point
  36. On Settings -> Network Settings -> Eth0, is bridging enabled? Would you mind posting a screenshot?
    1 point
  37. Nice tips. I just wish it were easier to set up key file authentication and disable password authentication for SSH. Just placing your pubkey in the UI and setting a checkbox to disable password auth would be nice. I currently have it set up the way ken-ji describes here. Then I edited PasswordAuthentication to "no". (A sketch of this follows the entry.)
     Also think about a secure-by-default approach with future updates. Why not force the user to set a secure password on first load? Why even make shares public by default? Why allow "guest" to access SMB shares by default? Why create a share for the flash in the first place? I get that some of those things make it more convenient, but imo convenience should not compromise security.
    1 point
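     A hedged sketch of the manual route described above; the paths are the stock OpenSSH ones, and on Unraid any change to /etc/ssh/sshd_config has to be re-applied (e.g. from the go file) because the root filesystem does not survive a reboot:
       grep -E '^#?(PasswordAuthentication|PubkeyAuthentication)' /etc/ssh/sshd_config
       sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
       sed -i 's/^#\?PubkeyAuthentication.*/PubkeyAuthentication yes/' /etc/ssh/sshd_config
       /etc/rc.d/rc.sshd restart     # Slackware-style service restart used by Unraid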
  38. Does anyone know where to find instructions on how to get this working with the ASRock motherboards? I noticed that it says there needs to be a kernel patch and that is above my knowledge 😄 so if anyone has an easy idiot's guide to get this working, please let me know. I'd like to get the RGB on my memory modules and CPU fan working right.
    1 point
  39. You'd have to dump the BIOS of the 1030 GPU; SpaceInvaderOne has a video on how to do it. (A rough sketch follows this entry.)
    1 point
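     One commonly used way to dump a vBIOS from Linux via sysfs, shown here only as a sketch (the PCI address is hypothetical, the card must be idle, and some cards only dump cleanly when they are not the primary GPU):
       cd /sys/bus/pci/devices/0000:01:00.0   # replace with the 1030's actual address from lspci
       echo 1 > rom                            # enable reading the ROM
       cat rom > /tmp/gt1030.rom
       echo 0 > rom                            # disable again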
  40. I started in February 2011. Same as you - I put together a lash-up comprising a random mini-ITX board (probably a 32bit VIA CPU at that time) and three or four random drives. I specifically did this to see for myself how the missing disk scenario would be handled. It was just so easy to get it working, pull a drive and still have the data available, and to rebuild as needed. That was Unraid version 4.7 I think. I now have four licenses, three servers (main, backup and my daughter's). I am also very happy with the software.
    1 point
  41. I had the same problem that JasonP had, and running "unraid-api restart" in a terminal fixed it as well. But mine is still showing "Access unavailable" in my server list... it says it is online and shows all my stats... EDIT: It figures it would change to "Remote access" right after I posted this.
    1 point
  42.
     1 point
  43. This could be fixed as follows: https://forums.unraid.net/topic/88504-support-knex666-nextcloud-20/
     Add the following values in the Nextcloud template:
     ExtraParams: --user 99:100 --sysctl net.ipv4.ip_unprivileged_port_start=0
     PostArgs: && docker exec -u 0 Nextcloud /bin/sh -c 'echo "umask 000" >> /etc/apache2/envvars'
     This one I unfortunately could not fix and it is still occurring. Since NPM manages the SSL certificates, Nextcloud shouldn't need to look for them, right? Can that be disabled somehow? Interestingly, the two warnings only appear when I log in from my Windows or Linux machine (not on the mobile devices).
     I added the following two entries in the NPM Advanced settings for the Nextcloud proxy host (it works now):
     location /.well-known/carddav { return 301 $scheme://$host/remote.php/dav; }
     location /.well-known/caldav { return 301 $scheme://$host/remote.php/dav; }
     See here: https://forums.unraid.net/topic/102633-nextcloud-mit-ngnix-proxy-manager-und-cloudflare/?tab=comments#comment-947522
     I had to change the command to the following:
     docker exec -u 99 Nextcloud php occ db:add-missing-indices
     Unfortunately I have no idea how to install additional packages in the knex666 container. Does anyone know a solution?
    1 point
  44. Sooo... I might be of some help, for a change. I had a problem logging in to my Apple ID and got various error messages like "could not communicate with the server" or "this computer is already registered to too many IDs" (or something like that). I did two things:
     1 - I fixed En0 following this guide. The problem was that I had a network adapter as En0 but it was not marked as built-in.
     2 - I noticed that even though I filled out the "PlatformInfo" tab in OpenCore Configurator as shown in the video (with Mac model, serial number etc...), when I clicked on "About this Mac" I had no model specified... just "Mac". Hackintool also showed "Standard PC (Q35 + ICH9, 2009)" as the model identifier. The mistake I made (or the thing I did not correct) was that under the PlatformInfo tab the drop-down menu on the bottom right was set to "Overwrite"... but it obviously had nothing to overwrite, and changing it to "Create" did the trick! On reboot I had the right model identifier and I could log in to iCloud with no problem...
     Hope that will help someone!
    1 point
  45. I found another tweak by accident: Direct Disk Access (bypassing Unraid's SHFS).
     Usually you set your Plex docker paths as follows: /mnt/user/Sharename
     For example this path for your movies: /mnt/user/Movies
     and this path for your AppData Config Path (which contains the thumbnails, the frequently updated database file, etc.): /mnt/user/appdata/Plex-Media-Server
     But instead, you should use this as your Config Path: /mnt/cache/appdata/Plex-Media-Server
     By that you bypass Unraid's overhead (SHFS) and write directly to the cache disk.
     Requirements
     1.) Create a backup of your appdata folder! You use this tweak at your own risk!
     2.) Before changing a path to Direct Disk Access you need to stop the container and wait for at least 1 minute, or even better, execute this command to be sure that all data is written from RAM to the drives: sync; echo 1 > /proc/sys/vm/drop_caches
     If you are changing the path of multiple containers, do this every time after you have stopped the container, before changing the path!
     3.) This works only if appdata is already located on your SSD, which happens only if you used the cache modes "prefer" or "only".
     4.) To be sure that your Plex files are only on your SSD, you must open "Shares" and press "Compute" for your appdata share. It shows whether your data is located only on the SSD, or on both the SSD and a disk. If it is on a disk too, you must stop the Docker engine, execute the mover and recheck through "Compute" after the mover has finished its work. You cannot change the path to Direct SSD Access as long as files are scattered, or you will probably lose data! (See the sketch after this entry.)
     5.) And you should set a minimum free space in your Global Share Settings for your SSD cache. This setting is only valid for Shared Access Paths and is ignored by the new Direct Access Path. This means it reserves up to 100GB for your Plex container, no matter how many other processes are writing files to your SSD.
     What's the benefit? After setting the appdata Config Path to Direct Access, I had a tremendous speed gain while loading covers, using the search function, updating metadata, etc. And it's even higher if you have a low-power CPU, as SHFS produces a high load on single cores.
     Shouldn't I update all paths to Direct Access? Maybe you are now thinking about changing your movies path as well to allow Direct Disk Access. I don't recommend that, because you would need to add multiple paths for your movies, TV shows, etc., as they are usually spread across multiple disks like: /mnt/disk1/Movies /mnt/disk2/Movies /mnt/disk3/Movies ...
     And if you move movies from one disk to another or add new disks, this will probably cause errors inside Plex. Furthermore, this complicates moving to a different server if that one uses a different disk order or a smaller number of bigger disks. In short: leave the other Shared Access Paths as they are.
     Does this tweak work for other containers? Yes. It even works for VM and docker.img paths. But pay attention to the requirements (create a backup, flush the Linux write cache, check your file locations, etc.) before applying the Direct Access Path. And consider whether it could be more useful to stay with the Shared Access Path. The general rule is: if a share uses multiple disks, do not change its path to Direct Access.
    1 point
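     A quick command-line check along the lines of requirement 4 above (share name and container folder assumed); if the first command prints anything, part of appdata still lives on the array and the mover must run before switching to the /mnt/cache path:
       ls -d /mnt/disk*/appdata/Plex-Media-Server 2>/dev/null
       # flush pending writes before changing the container path (from requirement 2):
       sync; echo 1 > /proc/sys/vm/drop_caches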
  46. 1) If necessary, generate an SSH key on your Mac or Linux machines, using ssh-keygen.
     2) Create an authorized_keys file for the unRAID server, using the id_rsa.pub files from all the machines which require access. (A small example follows this entry.)
     3) Copy this file to your server's /root/.ssh/ folder. This will work until a reboot.
     To handle a persistent setup:
     1) Copy the authorized_keys file to /boot/config/ssh/.
     2) Add this to the end of your /boot/config/go file, using your preferred editor:
     mkdir /root/.ssh
     chmod 700 /root/.ssh
     cp /boot/config/ssh/authorized_keys /root/.ssh/
     chmod 600 /root/.ssh/authorized_keys
    1 point
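     A small example of step 2, with made-up key file names and the default "tower" hostname; ssh-copy-id is an alternative while password login is still enabled:
       cat id_rsa_macbook.pub id_rsa_desktop.pub > authorized_keys
       scp authorized_keys root@tower:/root/.ssh/
       # or, per machine:
       ssh-copy-id -i ~/.ssh/id_rsa.pub root@tower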