Leaderboard

Popular Content

Showing content with the highest reputation on 04/21/21 in all areas

  1. Good news, no more AMD reset bug with RX 6xxx XT series GPUs in macOS 11.4 beta 1 on UnRAID.
    3 points
  2. The number of writes shown in the GUI is basically meaningless; it varies with the device. What matters is the TBW, and that is fine here, only a few GB were written (a way to check is sketched below).
    2 points
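    A minimal sketch of checking that total yourself, assuming smartctl is available and an NVMe device at /dev/nvme0n1 (adjust the device node to suit):

      # Print the SMART report; for NVMe drives the lifetime total
      # shows up as "Data Units Written"
      smartctl -a /dev/nvme0n1 | grep -i "data units written"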
  3. Is anybody using docker compose? Are there any plans to integrate it with unRAID?
    1 point
  4. I'm trying to get GPU passthrough working on Windows 10. The VNC connection was used to set up the virtual machine, which worked fine (installed the virtio display driver). When switching the VM to display through the video card after the setup through VNC, the Unraid login command line interface disappears and the screen turns black; nothing happens after that. What's interesting is that when I use GPU passthrough with an Ubuntu 18.10 server VM, the command line appears for that VM on the screen. I heard switching from OVMF to SeaBIOS could work; however, the option to switch the BIOS is greyed out. Solved: turns out that the VirtIO driver hates the GPU, so rolling back to the Windows Basic Display Driver through the VNC connection fixed the problem.
    1 point
  5. Great Odin's Raven! Today's blog is a guide on how to set up a dedicated Valheim server on Unraid by @spxlabs! Want to set up a co-op with fellow Unraiders? Need help troubleshooting? Let us know below in the comments. SKOL! 🛡️ https://unraid.net/blog/unraid-valheim-dedicated-server
    1 point
  6. ...it never occurred to me that you could use the Description field for that 😅 But the way you describe it, you haven't actually enabled VLANs in unRaid; you've simply created one bridge per interface. When you enable VLANs, the VLAN ID shows up in the interface (the thing you referred to above as the gateway). I have bundled my interfaces in a bond, so all the VLANs are tagged on the bridge for that bond, and I then see e.g. br0.10, br0.20, br0.30, ... I "only" have 6 VLANs, but this way I can tell them apart easily (see the sketch below).
    1 point
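    For reference, a sketch of what those tagged sub-interfaces correspond to in iproute2 terms; Unraid creates these itself when VLANs are enabled, so running this by hand is purely illustrative, and br0 with ID 10 are taken from the example above:

      # Create a VLAN sub-interface tagged with ID 10 on top of bridge br0
      ip link add link br0 name br0.10 type vlan id 10
      ip link set br0.10 up
      # Inspect it; "vlan protocol 802.1Q id 10" confirms the tag
      ip -d link show br0.10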
  7. ...not that I know of, it was an ancient part from my spares pile. Wouldn't Unraid have completely wiped the stick during installation? Anyway, I'll do a fresh install tomorrow, just for practice.
    1 point
  8. Simply on br0 with a different IP in the same subnet.
    1 point
  9. The usual cause of VMs pausing is either that the vdisk is over-provisioned for the device (i.e. it can't fit onto the device), or the VM is actually hibernating (a check for the first case is sketched below).
    1 point
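    A quick way to check for the over-provisioning case, assuming the vdisk sits at Unraid's default path (hypothetical VM name, adjust to suit):

      # "virtual size" is what the guest may grow the image to; "disk size" is
      # its current on-disk usage. If virtual size exceeds the free space on
      # the backing device, the sparse image can fail to grow and the VM pauses.
      qemu-img info /mnt/user/domains/Windows10/vdisk1.img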
  10. Unraid doesn't run from the USB; it's only used at boot time to unpack the OS into RAM and read the saved settings. If properly managed, the USB stick is very reliable; there are many examples of 10+ years of service, moved from system to system as needed for motherboard upgrades. A full-sized USB 2.0 stick of 32GB capacity is ideal. In the rare event of failure, it's simple to take the latest backup and create a new stick; as soon as Unraid gets online it will prompt you through the license replacement process.
    1 point
  11. Looks like others are now having the issue that I have struggled with. I have had the issue with two different motherboards and on versions of unRAID since 6.8.3. I swapped to a motherboard with IPMI and now the macvlan issue has returned. I resolved the issue on my MSI Pro Carbon X370 by removing my 10-gig PCIe NIC and using the onboard NIC. All of my dockers are currently in host mode or on br0.2 with a static IP. I had six days of uptime, and then just this morning I had another macvlan call trace come up. Since I installed my new motherboard, I check my logs every morning before I go to work and usually after I get off work. It seems like this has been an issue for a long time, but something changed and caused more users to be affected than before. Current Unraid version: 6.9.2. Original motherboard: MSI Pro Carbon X370. Current motherboard: ASRockRack X470D4U2-2T. tower-syslog-20210421-1728.zip
    1 point
  12. Hey @JorgeB, thanks for your help again. I went ahead and ran xfs_repair on md6 (see the sketch below) and it seems to have resolved that issue. I did find that the latest set of panics I saw, and some more today, kept referencing the qbittorrent-nox process from the qbittorrent docker. I stopped that docker and I'm going to let the server run for a few days and see if the crashes stop.
    1 point
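    A sketch of that repair step for anyone following along, assuming disk 6 (so /dev/md6) and the array started in maintenance mode so the filesystem is not mounted:

      # Dry run first: -n only reports problems, writing nothing
      xfs_repair -n /dev/md6
      # Then the actual repair, verbose
      xfs_repair -v /dev/md6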
  13. Unplug the cable feeding the unusable front panel USB and plug the key into that socket.
    1 point
  14. Well, I owe you a big thank you for this one! The Thecus N5550 I had my disks in was at least 8 years old, so I decided to chuck it. I have lots of parts lying around, so I constructed a server based on an Asus X99 motherboard, a Xeon E5-1620 CPU, 16 GB of RAM... and all my disks. I migrated everything to the new machine. I poked around for a bit trying to get the machine to boot from the USB key, but when it did, it fired right up, automatically recognized all my disks, and presented everything to me exactly as it had been set up on the Thecus. I instructed it to add Disk 4 to the mix (using the Seagate IronWolf Pro I had first tried), and 12 hours later it's all good! Unbelievable. I'm sold on UnRAID forever.
    1 point
  15. Ah, Vivaldi has these things called web panels, which are like a mini web browser in the sidebar. I had the WebUI open in one of these. Closed it, and that seems to have solved it. Thanks!
    1 point
  16. By default, the mapping of /storage is read-only. If you want to restore to /storage, you need to edit the container settings and change /storage to read/write (you need to enable advanced view to do this); see the sketch below for the docker run equivalent.
    1 point
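    The equivalent change on a plain docker run line, demonstrated with a throwaway container (the host path is hypothetical):

      # Read-only mapping, as described above: the write fails
      docker run --rm -v /mnt/user/backups:/storage:ro alpine touch /storage/test
      # Read/write mapping, needed to restore into /storage: the write succeeds
      docker run --rm -v /mnt/user/backups:/storage:rw alpine touch /storage/test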
  17. @methanoid, if you are interested, I now have a plugin ready for the soundcard. Hook me up with a short message so I can send you the details.
    1 point
  18. New to the community, thought I'd share my icons for the Dell PowerEdge T610. Hope they prove useful!
    1 point
  19. Forget using the container, and instead install Plex within the Ubuntu Kodi VM.
    1 point
  20. I just check the total TBW once a month or so to see if it's increasing normally.
    1 point
  21. Thanks, happy to report it is fixed.
    1 point
  22. I was looking at a 7th gen when I wanted to build a server, but I ended up getting an older server off of eBay that has a Xeon.
    1 point
  23. Diags are from after rebooting, so we can't see what happened, but the disk looks fine, and since it's mounting you can rebuild on top. Before doing so, you may want to replace/swap the cables, just to rule them out if it happens again on the same disk.
    1 point
  24. Not currently. At the moment, if a VM has access to the hardware of a GPU, it can't also be used by the host. Multiple containers can use the GPU hardware if the drivers are loaded on the host, but then VMs can't use it (a quick check is sketched below).
    1 point
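    A quick check of which side currently owns a GPU, assuming lspci is available:

      # "Kernel driver in use: vfio-pci" means the card is reserved for VM
      # passthrough; "nvidia" or "amdgpu" means the host (and so containers)
      # can use it
      lspci -k | grep -A 3 -i vga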
  25. "This should not affect the spin down; is the USB device capable of spin down, and does it report that correctly? Possibly that is the problem." I think half the issue is the UD plugin, since I confirmed it didn't spin down the drive after 15 minutes. But apparently upgrading to 6.9.X (from 6.8.3) may fix the problem, so I will try that a little later (a direct test is sketched below).
    1 point
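    One way to test whether the drive and its USB bridge honour and report spin down, assuming the disk appears as /dev/sdX (substitute the real device):

      # Ask the drive to spin down immediately
      hdparm -y /dev/sdX
      # Query the power state; "standby" means the command took and is
      # reported back correctly through the USB bridge
      hdparm -C /dev/sdX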
  26. I took the screenshot just as the RAM cache was being flushed. You could also set dirty_ratio to 1, or dirty_bytes to 1000000, and upload a larger file, but please don't set the ratio to 0; performance collapses massively if you do (the corresponding sysctl calls are sketched below).
    1 point
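    The corresponding runtime calls, with the values from the post; note the kernel treats these two settings as mutually exclusive, so setting one zeroes the other:

      # Start flushing once dirty pages exceed 1% of RAM...
      sysctl vm.dirty_ratio=1
      # ...or, alternatively, once they exceed ~1 MB (overrides dirty_ratio)
      sysctl vm.dirty_bytes=1000000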
  27. Me too. I thought it couldn't possibly be the plugin, but it seems that when extracting plugins it overwrites the filesystem permissions with those from the package. It should be fixed now; the problem was I forgot to run the build command as root, so it couldn't change permissions before packaging (a way to inspect a package's permissions is sketched below). Sorry for that! Unrelated: @JoergHH, the ability to ignore a specific unhealthy status is in the works; I'm working on how plugins are supposed to use settings, hang on a couple more days. You will be able to flag your current status and it will report as healthy until the status of any pool changes, so you won't miss any warnings/errors/etc.
    1 point
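    A simple way to check what permissions and ownership a plugin package will apply on extraction (the package path is hypothetical):

      # -tv lists the contents without extracting, showing mode and
      # owner/group for every file in the archive
      tar -tvJf /boot/config/plugins/example/example-1.0.txz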
  28. Ha, that makes sense as an explanation. I kinda forgot about the switch. That's what I get for looking for a technical solution to a simple problem. Thanks.
    1 point
  29. Parity2 will be calculated using all the data drives, and as such does not need parity1 (it is perfectly valid to have parity2 without parity1). While calculating parity2, if parity1 is also present, the opportunity is taken to check it and correct any errors found. It is also possible to have a scenario with parity1 present and one data disk missing; in such a case parity1 is read to emulate the missing data disk while building parity2.
    1 point
  30. Those are not correct; you can see the real totals in the SMART reports: Data Units Written: 114,252,166 [58.4 TB] and Data Units Written: 114,251,764 [58.4 TB].
    1 point
  31. Parity1 is still checked, and corrected if there are errors.
    1 point
  32. Just want to thank the developers of Organizr, it's made management of services so much easier. I would like to share a procedure for new peeps to secure their systems when using a reverse proxy, in this case NGINX Proxy Manager.

    The following, I think, is a typical new proxy setup: https://domain.com, with the docker apps set up either as subdomains (https://sub.domain.com) or sub-folders (https://domain.com/subfolder). In both these instances the docker is exposed to the internet for hacking (we are dismissing firewalls in this example). You can use Organizr to aggregate them, but it's still the same: https://organizr.com might be the location to access all the apps, but https://sub.domain.com or https://domain.com/subfolder is still there, provided the address is used directly.

    This is where we can use the Organizr API to lock down access to the sub-domains and sub-folders. When I access https://sub.domain.com or https://domain.com/subfolder directly, I am blocked, because in that session I'm not logged into Organizr, so there is no authentication session to authorise me. If I did this in a browser I was logged into Organizr with, it would take me directly to the app, and I could always go through Organizr too; I can do both, but the public cannot. Add SSO and even MFA for Organizr and we now have a secure login wall.

    The following procedure explains how to do this for the easier NGINX app, NGINX Proxy Manager (GUI based). First you will need a configured NGINX PM docker; this is outside the scope of this guide, so please review the relevant procedures on their support forum. You will also need a configured Organizr using sub-domains or sub-folders.

    Sub-folder setup (easier to configure, and reduces complexity around domains/wildcards and SSL):

    1. In NGINX PM, edit the host relating to your Organizr, which should have Organizr set on the "Details" screen and not a sub-folder; this ensures Organizr works effectively.

    2. You have two options to configure the access. Either enter the following (edited per the group levels below) into the "Advanced" tab if you want a global group block for all sub-folders, or enter it into the custom configuration field (cog icon) for each sub-folder you want to restrict, which allows for granularity:

      auth_request /auth-4;

    Remember the restriction you place also applies to the user accessing the resources from within Organizr. Replace the 4 with the group level required to access the resource: 0=Admin, 1=Co-Admin, 2=Super User, 3=Power User, 4=User, 998=Logged In Users, 999=Guests.

    3. Create a new location which will contain the API call to Organizr for this to work:

      Location: ~ /auth-(.*)
      Scheme: http
      Forward Hostname: 0.0.0.0/api/v2/auth?group=$1
      Forward Port: 8040

    Replace the IP address with that of your docker running Organizr, and if you changed your default port, adjust that too. In the custom configuration field (cog icon) for that same location, enter the following without edits:

      internal;
      proxy_set_header Content-Length "";

    I have found this no longer works on newer NGINX versions, breaking everything! It seems the underlying config code has changed, which causes the 500 error. I cannot verify which version the issue started with, but if you are on v2.9.11 you need to do this instead: edit the proxy host, select the far-right "Advanced" tab, and enter the following:

      location ~ /auth-(.*) {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Scheme $scheme;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_pass http://0.0.0.0:8999/api/v2/auth?group=$1;
        internal;
        proxy_set_header Content-Length "";
      }

    Replace the IP address with that of your docker running Organizr, and adjust the port if you changed it.

    4. [Optional] If you have your Organizr connected to dockers using the API (homepage items) via the proxy (don't know why you would, instead of the local IP), then you will need to exclude the API call from needing authentication. For each location pointing to a docker using the API in this configuration, enter the following in the custom config (cog icon), for example:

      location /tautulli/api {
        auth_request off;
        proxy_pass http://0.0.0.0:8181/tautulli/api;
      }

    You might also have the auth_request /auth-4 text in there too, which is fine. Replace the IP and port with your docker's, replacing also the app names.

    5. If you save it here, you will block all users from accessing Organizr: you need to be logged into Organizr to access Organizr, catch-22. We fix this with one final location:

      Location: /
      Scheme: http
      Forward Hostname / IP: 0.0.0.0
      Forward Port: 8040

    Replace the IP address and port as above. In the custom configuration field (cog icon) for that same location, enter the following without edits:

      auth_request off;

    This means when we go to https://domain.com it will automatically append a "/" to the end, and that location bypasses the need to authenticate, so we can log in.

    6. Setup complete.

    Sub-domain setup (similar, but with more repeating steps):

    1. In NGINX PM, edit the host relating to the sub-domain you want to restrict.

    2. Enter the following (edited per the group levels listed above) into the "Advanced" tab:

      auth_request /auth-4;

    Remember the restriction you place also applies to the user accessing the resources from within Organizr; replace the 4 with the group level required, as listed above.

    3. Create a new location which will contain the API call to Organizr for this to work:

      Location: ~ /auth-(.*)
      Scheme: http
      Forward Hostname: 0.0.0.0/api/v2/auth?group=$1
      Forward Port: 8040

    Replace the IP address with that of your docker. In the custom configuration field (cog icon) for that same location, enter the following without edits:

      proxy_pass_request_body off;
      proxy_set_header Content-Length "";

    As above, I have found this no longer works on newer NGINX versions, breaking everything; if you are on v2.9.11, edit the proxy host, select the far-right "Advanced" tab, and enter the same location ~ /auth-(.*) block shown in the sub-folder setup, replacing the IP and port with those of your docker running Organizr.

    4. [Optional] If you have your Organizr connected to dockers using the API (homepage items) via the proxy, you will need to exclude the API call from needing authentication. Click the "Advanced" settings tab and enter, for example:

      location /tautulli/api {
        auth_request off;
        proxy_pass http://0.0.0.0:8181/tautulli/api;
      }

    Replace the IP and port with your docker's, replacing also the app names.

    5. Repeat steps 1-4 for each sub-domain.

    6. Setup complete.

    Additions: for those wanting to get Deluge working in a sub-folder environment, configure the following. Remove your Deluge location; it needs to be added manually. In the "Advanced" tab for the proxy host site, add the following:

      location /deluge {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Scheme $scheme;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_pass http://0.0.0.0:8112/;
        proxy_set_header X-Deluge-Base "/deluge/";
        include /nginxv/proxy-control.conf;
        add_header X-Frame-Options SAMEORIGIN;
        auth_request /auth-1;
      }

    Replace the IP address with that of your docker running Deluge, and adjust the port if you changed it. It's OK if you already have code in the box; just add the new code on a few new lines. Download the proxy-control.conf attached to the guide and put it in the main NGINX folder; you can change the location, but then you will need to change include /nginxv/proxy-control.conf; to the correct path. Restart the NGINX container, as it needs to load the additional config, and Deluge will now work.
    1 point
  33. I meant "when you are able to update your system", not LT's code. I suppose something is preventing you from going to 6.9.2, where spin down works well for those drives.
    1 point
  34. I got it. You first need to create a share on the array and set it to Prefer: plex (or whichever pool name you chose), then change the path in the Plex config to the new share, and everything is OK... easy... 🙂
    1 point
  35. Vr2Io posted this on another thread yesterday; I think it allows you to create a bootable USB: "I would suggest you try the new UEFI memtest86 (I just tried it yesterday); some tests will use all CPU cores (you need to enable it), which I doubt the legacy memtest does (I may be wrong, and I don't trust legacy memtest). Anyway, it could be a CPU issue too." https://www.memtest86.com/download.htm
    1 point
  36. Sorry I never posted my actual solution ages ago; I was actually going to fork the plug-in and submit a PR, but I thought the author was going to do it. I'll post my changes, which have a working solution for these boards, later in the day; essentially it was a change to the 'ipmi2json' and 'ipmifan' scripts.
    1 point
  37. Have a look at the iSCSI plugin for that *ADVERTISEMENT*; I use it too, because I was simply too lazy to install an SSD in my PC... (I only use it as a "normal" drive, though, not for booting.) iSCSI is also natively supported by Windows, and you can even boot from it.
    1 point
  38. If you have a way to create these call traces on demand, that would be helpful (of course we need diagnostics to investigate further). I have host access enabled but don't get any of these call traces, and as such it is hard for me to reproduce the issue. In the next Unraid version some more conntrack modules will be loaded, which will hopefully help to tackle the problem in more detail.
    1 point
  39. Hi everyone, thank you for your patience with us on this, and @bonienl for taking point on trying to recreate the issue. We are discussing this internally and will continue to do so until we have something to share with you guys. Issues like these can be tricky to pin down, so please bear with us while we attempt to do so.
    1 point
  40. This saved me as well! Currently running 6.8.3. Thank you so much!!
    1 point