Leaderboard

Popular Content

Showing content with the highest reputation on 04/29/21 in all areas

  1. Being new to Unraid (and just teaching people in general), I try to lay out my steps of problem solving so others can learn. I can say I have solved the issue above. Midnight Commander is a similar program to Krusader; essentially, MC is a built-in Krusader. Some will say Krusader is faster, others say there's no difference, but it's a way to move/delete/copy/etc. files from one share to another or one disk to another (don't do disk to share or vice versa, I've been told). Midnight Commander is accessed by SSH'ing into the server; I use PuTTY, but once you log in via SSH you simply type mc and it will open Midnight Commander. Within Midnight Commander, I was able to find this rogue "cache" drive and delete it. The problem has been fixed and I figured out the issue: essentially, my dockers were looking for a folder like "/mnt/cache/appdata" when really they needed to be looking for "/mnt/cache_nvme/appdata". All my docker data (Plex and Valheim) was stored in "cache_nvme", however the server was calling for "cache" and was just creating new files because the dockers were not linked to cache_nvme. Once I updated the docker appdata paths to point at cache_nvme, the dockers work as they used to and the issue is resolved. I deleted "cache" via MC and all is well (a quick sketch of the session follows this post).
    2 points
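For anyone following along, the session looks roughly like this; 'tower', the pool names, and the paths are placeholders taken from the post above, so adjust them for your own server:

ssh root@tower
# or connect with PuTTY; once logged in:
ls /mnt/cache/appdata /mnt/cache_nvme/appdata   # spot the rogue duplicate tree
mc                                              # browse to the stray "cache" files and delete them
# then, in each Docker template, change paths like
#   /mnt/cache/appdata/...  ->  /mnt/cache_nvme/appdata/...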
  2. Hello, I deleted @Hoddl's upload. Please anonymize sensitive info and reupload 🙂 Thanks @i-B4se
    2 points
  3. Nvidia-Driver (only Unraid 6.9.0beta35 and up)

This plugin is only necessary if you are planning to make use of your Nvidia graphics card inside Docker containers. If you only want to use your Nvidia graphics card for a VM, then don't install this plugin! Discussions about modifications and/or patches that violate the EULA of the driver are not supported by me or anyone here; this could also lead to a takedown of the plugin itself! Please remember that this also violates the forum rules and will be removed!

Installation of the Nvidia drivers (this is only necessary for the first installation of the plugin): Go to the Community Applications app, search for 'Nvidia-Drivers', and click on the Download button (you have to be at least on Unraid 6.9.0beta35 to see the plugin in the CA app), or download it directly from here: https://raw.githubusercontent.com/ich777/unraid-nvidia-driver/master/nvidia-driver.plg After that, wait for the plugin to install successfully (don't close the window with the 'X'; wait for the 'DONE' button to appear, as the installation can take some time depending on your internet connection: the plugin downloads the ~150MB Nvidia driver package and then installs it on your Unraid server). Click on 'DONE' and continue with Step 4 (don't close this window for now; if you closed it, don't worry, continue reading). Check that everything is installed correctly and recognized by going to PLUGINS -> Nvidia-Driver (if you don't see a driver version at 'Nvidia Driver Version', or you get another error, please scroll down to the Troubleshooting section). If everything shows up correctly, click on the red alert notification from Step 3 (not on the 'X'); this will bring you to the Docker settings (if you already closed this window, go to Settings -> Docker). On the Docker page, change 'Enable Docker' from 'Yes' to 'No' and hit 'Apply' (you can now close the message from Step 2), then change 'Enable Docker' from 'No' to 'Yes' and hit 'Apply' again. (That step is only necessary for the first plugin installation; you can skip it if you are going to reboot the server. The background is that when the Nvidia driver package is installed, a file is also installed that interacts directly with the Docker daemon itself, and the Docker daemon needs to be reloaded in order to load that file.) After that, you should be able to utilize your Nvidia graphics card in your Docker containers; for how to do that, see Post 2 in this thread (a short sketch also follows this item).

IMPORTANT: If you don't plan or want to use acceleration within Docker containers through your Nvidia graphics card, then don't install this plugin! Please be sure to never use one card for a VM and in Docker containers at the same time (your server will hard lock if the card is used in a VM and then something wants to use it in a container). You can use one card for more than one container at the same time, depending on the capabilities of your card.

Troubleshooting (this section will be updated as someone reports an issue and will grow over time):

"NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running.": This means that the installed driver can't find a supported Nvidia graphics card in your server (it may also be that there is a problem with your hardware: riser cables, ...). Check whether you accidentally bound all your cards to VFIO; you need at least one card that is supported by the installed driver (you can find a list of all drivers here; click on the corresponding driver at 'Linux x86_64/AMD64/EM64T' and then, on the next page, click 'Supported products' to see all cards that the driver supports). If you accidentally bound all cards to VFIO, unbind the card you want to use for the Docker container(s) and reboot the server (TOOLS -> System devices -> unselect the card -> BIND SELECTED TO VFIO AT BOOT -> restart your server).

"docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused "process_linux.go:432: running prestart hook 0 caused \"error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: device error: GPU-9cfdd18c-2b41-b158-f67b-720279bc77fd: unknown device\\n\""": unknown.": Please check the 'NVIDIA_VISIBLE_DEVICES' variable inside your Docker template; it may be that you accidentally have what looks like a space at the end or in front of your UUID, like ' GPU-9cfdd18c-2b41-b158-f67b-720279bc77fd' (it's hard to see in this example, but it's there).

If your card isn't recognized in 'nvidia-smi', please also check your 'Syslinux configuration' in case you have previously prevented Unraid from using the card during the boot process.

Reporting problems: If you have a problem, please always include a screenshot of the plugin page, a screenshot of the output of the command 'nvidia-smi' (simply open up an Unraid terminal with the button on the top right of Unraid and type in 'nvidia-smi' without quotes), and the error from the startup of the container/app, if there is any.
    1 point
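For reference, a minimal sketch of the verification and of wiring a container up to the card. The 'nvidia-smi' check is straight from the troubleshooting section above; the docker run line is an assumption based on the usual template settings this thread describes (extra parameter '--runtime=nvidia' plus the 'NVIDIA_VISIBLE_DEVICES' variable), using the example UUID from the error message above, so treat it as a sketch and not the official template:

# Verify the driver sees a supported card (run in the Unraid terminal):
nvidia-smi

# Sketch of a container using the card; note: no stray spaces around the UUID!
# 'my-container' and 'my/image' are placeholders.
docker run -d --name=my-container \
  --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES=GPU-9cfdd18c-2b41-b158-f67b-720279bc77fd \
  -e NVIDIA_DRIVER_CAPABILITIES=all \
  my/image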
  4. Overview: Support thread for lnxd/XMRig in CA. Application: XMRig - https://github.com/xmrig/xmrig Docker Hub: https://hub.docker.com/r/lnxd/xmrig GitHub: https://github.com/lnxd/docker-xmrig

Please ensure that you know what you're doing before setting this up, as excessively high temperatures are BAD for computers and could damage your hardware or result in data loss.

Instructions: Install lnxd's XMRig via CA. Add your XMR receive address to the wallet field. Update the pool address to your closest node or preferred pool; don't forget to pay attention to the port if you change pools, as they tend to use arbitrary ports. Set the --donate-level you would like to enable. E.g. entering 1 causes XMRig to mine for 99 minutes for you, and then 1 minute for the fee option chosen in the next step. Setting the --donate-level flag to 0 will not work unless you follow the steps below. There are 3 fee options, enabled by a custom build from my fork of the latest release source. This allows for some options that aren't available in the base version: no-fee: makes it possible to set the --donate-level flag to 0%; unless you set it to 0%, the fee goes to the developer of XMRig. dev-fee: the fee goes to the developer of XMRig. lnxd-fee: the fee goes to me 🙃 thank you in advance if you choose this option. Turn on advanced mode for the template and select the CPU core/thread pairs that you would like to use in CPU pinning. I recommend leaving core 1 and its thread pair unselected, as it could possibly cause Unraid to unexpectedly slow down, or the Docker engine to quit, if things get too intense while mining. Run the container and check the temperature of your CPU and other hardware regularly for at least 20-30 minutes to ensure everything is safe and stable. If you get any errors that you can't decipher, feel free to reach out and I'll take a look for you.

(Optional) To increase your hash rate, you can add and run the following User Script. At the moment, in order to reset your MSR values to default you need to restart your Unraid host. For this reason, the script needs to be re-run after every boot, as the updated values do not survive reboots. The script installs msr-tools and then updates the register values to optimise your CPU for XMRig. This may have performance implications for other functions on your server. The logs will also show that XMRig is being run with MSR MOD disabled, but if you run this script it serves the same purpose and you should get a higher hash rate.

#!/bin/bash
# Write XMRig Optimised MSR values
# https://github.com/xmrig/xmrig/blob/master/scripts/randomx_boost.sh

VERSION=1.3

echo "Installing msr-tools v${VERSION}"
echo ""
echo "(don't!) blame lnxd if something goes wrong"
echo ""
curl -fsSL https://packages.slackonly.com/pub/packages/14.2-x86_64/system/msr-tools/msr-tools-${VERSION}-x86_64-1_slonly.txz -o /tmp/msr-tools-${VERSION}-x86_64-1_slonly.txz
upgradepkg --install-new /tmp/msr-tools-${VERSION}-x86_64-1_slonly.txz
rm /tmp/msr-tools-${VERSION}-x86_64-1_slonly.txz

echo ""
echo "Optimising register values for XMRig"
echo ""
modprobe msr
if cat /proc/cpuinfo | grep "AMD Ryzen" >/dev/null; then
    if cat /proc/cpuinfo | grep "cpu family[[:space:]]:[[:space:]]25" >/dev/null; then
        echo "Detected Ryzen (Zen3)"
        wrmsr -a 0xc0011020 0x4480000000000
        wrmsr -a 0xc0011021 0x1c000200000040
        wrmsr -a 0xc0011022 0xc000000401500000
        wrmsr -a 0xc001102b 0x2000cc14
        echo "MSR register values for Ryzen (Zen3) applied"
    else
        echo "Detected Ryzen (Zen1/Zen2)"
        wrmsr -a 0xc0011020 0
        wrmsr -a 0xc0011021 0x40
        wrmsr -a 0xc0011022 0x1510000
        wrmsr -a 0xc001102b 0x2000cc16
        echo "MSR register values for Ryzen (Zen1/Zen2) applied"
    fi
elif cat /proc/cpuinfo | grep "Intel" >/dev/null; then
    echo "Detected Intel"
    wrmsr -a 0x1a4 0xf
    echo "MSR register values for Intel applied"
else
    echo "No supported CPU detected"
fi
echo ""
echo "Done!"
echo "To reset values, please reboot your server."

If you get stuck, please feel free to reply to this thread and I'll do my best to help out 🙂
    1 point
  5. This repo was created to update the original piHole DoT/DoH by testdasi https://forums.unraid.net/topic/96233-support-testdasi-repo/ Official Pi-hole docker with added DNS-over-TLS (DoT) and DNS-over-HTTPS (DoH). DoH uses Cloudflare (1.1.1.1/1.0.0.1) and DoT uses Google (8.8.8.8/8.8.4.4). Config files are exposed so you can modify them as you wish, e.g. to add more services. This docker supersedes testdasi's previous Pi-Hole with DoH and Pi-Hole with DoT dockers. For more detailed instructions, please refer to the Docker Hub / GitHub links below. Docker Hub: https://hub.docker.com/r/flippinturt/pihole-dot-doh GitHub: https://github.com/nzzane/pihole-dot-doh Please make sure you set a static IP for this docker, as DHCP will not work (a sketch follows this item)! FAQ: Q: Can this be installed on top of testdasi's current pihole DoT-DoH? A: Yes, this can be installed over it without any problems. Q: How do I change the hostname? A: Use the '--hostname namehere' parameter, under 'extra parameters' in the container's settings. Q: Is there a list of good block lists? A: https://firebog.net/ Initial upload: 20/1/21 Latest update: 27/04/22 (dd/mm/yy) Current FTL version: 5.15 Current WEB version: 5.12 Current Pi-hole version: 5.10
    1 point
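For anyone who prefers to see the settings spelled out, a hypothetical command-line equivalent of the template; the IP, hostname, password, and appdata path here are placeholders for illustration, not the official template values:

# Static IP on br0 is required, since DHCP will not work;
# --hostname is the FAQ's 'extra parameters' trick;
# WEBPASSWORD is the standard Pi-hole admin password variable.
docker run -d --name=pihole-dot-doh \
  --network=br0 --ip=192.168.1.53 \
  --hostname=pihole2 \
  -e WEBPASSWORD='changeme' \
  -v /mnt/user/appdata/pihole-dot-doh:/etc/pihole \
  flippinturt/pihole-dot-doh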
  6. As a long time Unraid user (over a decade now, and loving it!), I rarely have issues (glossing right over those Ryzen teething issues). It is with that perspective that I want to report that there are major issues with 6.9.2. I'd been hanging on to 6.8.3, avoiding the 6.9.x series as the bug reports seemed scary. I read up on 6.9.2 and finally decided that with two dot-dot patches it was time to try it. My main concern was that my two 8 TB Seagate Ironwolf drives might experience this issue.

I had a series of unfortunate events that made it extremely difficult to figure out what transpired, and in what order, so I'll just lay it all out. I'd been running 6.9.2 for almost a week, and I felt I was in the clear; I hadn't noticed any drives going offline. Two nights ago (4/27), somehow my power strip turned off (either circuit protection kicked in, or possibly a dog stepped on the power button); regardless, I didn't discover this before my UPS was depleted and the server shut itself down. Yesterday, after getting the server started up again, I was surprised to see my two Ironwolf drives had the red X's next to them, indicating they were disabled. I troubleshot this for a while, finding nothing in the logs, so it's possible that a Mover run I kicked off manually yesterday (which would have been writing to these two drives) caused them to go offline on spin-up (according to the issue linked above), but that the subsequent power failure caused me to lose the logs of this event. [NOTE: I've since discovered that the automatic powerdown from the UPS failure was forced, which triggered diagnostics, so those logs were not lost after all - diagnostics attached!!!]

I was concerned that the Mover task had only written the latest data to the simulated array, so a rebuild seemed the right path forward to ensure I didn't lose any data. I had to jump through hoops to get Unraid to attempt to rebuild parity to these two drives; apparently you have to un-select them, start/stop the array, then re-select them, before Unraid will give the option to rebuild. Just a critique from a long-time user: this was not obvious, and it seems like there should be a button to force a drive back into the array without all these obstacles.

Anyways, now to the real troubles. Luckily, I only have two Ironwolf drives, and with my dual parity (thanks LimeTech!!!), this was a recoverable situation. The rebuild only made it to about 46 GB before stopping. It appeared that Unraid thought the rebuild was still progressing, but obviously it was stalled. I quickly scanned through the log, finding no errors but lots of warnings related to the swapper being tainted. At this point, I discovered that even though the GUI was responsive (nice work GUI gang!), the underlying system was pretty much hung. I couldn't pause or cancel the data rebuild, and I couldn't powerdown or reboot, not through the GUI and not through the command line. Issuing a command in the terminal would hang the terminal. Through the console I issued a powerdown, and after a while it said it was doing it forcefully, but it hung on collecting diagnostics. I finally resorted to the 10-second power button press to force the server off (so those diagnostics are missing). I decided that the issue could be those two Ironwolf drives, and since I had two brand new Exos drives of the same capacity, I swapped those in and started the data rebuild with those instead. I tried this twice, and the rebuild never made it further than about 1% (an ominous 66.6 GB was the max rebuilt).

At this point, I really didn't know if I had an actual hardware failure (the power strip issue was still in my thoughts) or a software issue, but with a dual-drive failure and a fully unprotected 87 TB array, I felt more pressure to quickly resolve the issue than to gather more diagnostics (sorry not sorry). So I rolled back to 6.8.3 (so glad I made that flash backup; I really wish there was a restore function), and started the data rebuild again last night. This morning, the rebuild is still running great after 11 hours. It's at 63% complete, and should wrap up in about 6.5 hours based on history.

So something changed between 6.8.3 and 6.9.2 that is causing this specific scenario to fail. I know a dual-drive rebuild is a pretty rare event, and I don't know if it has received adequate testing on 6.9.x. While the Seagate Ironwolf drive issue is bad enough, that's a known issue with multiple topics and possible workarounds. But the complete inability to rebuild data to two drives simultaneously seems like a new and very big issue, and this issue persisted even after removing the Ironwolf drives. I will tentatively offer that I may have done a single-drive rebuild, upgrading a drive from 3TB to an 8TB Ironwolf, on 6.9.2. Honestly, I can't recall now if I did this before upgrading to 6.9.2 or after, but I'm pretty sure it was after. So on my system, I believe I was able to perform a single-drive rebuild, and only the dual-drive rebuild was failing.

I know we always get in trouble for not including diagnostics, so I am including a few files: The 20210427-2133 diagnostics are from the forced powerdown two nights ago, on 6.9.2, when the UPS ran out of juice, and before I discovered that the two Ironwolf drives were disabled. Note, they might be disabled already in these diags; no idea what to look for in there. The 20210420-1613 diagnostics are from 6.8.3, the day before I upgraded to 6.9.2. I think I hit the diagnostics button by accident; figured it won't hurt to include it. And finally, the 20210429-0923 diagnostics are from right now, after downgrading to 6.8.3, and with the rebuild still in progress. Paul tower-diagnostics-20210427-2133.zip tower-diagnostics-20210429-0923.zip tower-diagnostics-20210420-1613.zip
    1 point
  7. Hi everyone, please come take a look 😭 OS: Unraid 6.8.2. ifconfig: Network Settings: Docker Settings: Docker is already using Custom br0. Problems: 1. I cannot access the NAS web UI or any Docker container directly via an IPv6 address. 2. qbi shows a normal connection and can download and upload over IPv6.
    1 point
  8. Hey all, I'm looking for a way to view the current reads/writes and MB/s to a specific Unassigned Device. Basically I have started to do Chia plotting, and found a way to get it running via docker, but I want to monitor the disk I/O to see if it's running faster than in a VM. Is there any app or anything I can run to see how my drive I/O and general server are performing?
    1 point
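A quick terminal-only starting point for the question above, assuming the sysstat tools are available on the system (sdX is a placeholder for the unassigned disk):

# Extended per-device stats (r/s, w/s, rMB/s, wMB/s), refreshed every 2 seconds:
iostat -xm 2
# Or watch just the one disk:
iostat -xm sdX 2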
  9. If the thing is unreliable (at best), I'll most likely send it back to the store under warranty, since it's barely over a year old. That's a real shame, since I won't be seeing it back for a few weeks, but oh well, better safe than sorry! Cheers for clearing it up.
    1 point
  10. Hey there. Definitely sounds like an unfortunate situation. Don't take this as the final word, but looking at your logs, I feel like the drive is KO. Justification: your SMART tests are consistently failing at a specific LBA, which means it's not likely related to a signalling issue between the drive and the controller. Further down, the scary part though: your drive has marked 2,120 LBA addresses as "pending defects". It sounds like something unfortunate has happened to that drive. I'm one of those guys who will shove a disk that's only maybe slightly failing someplace unimportant until it breaks, but personally I would not trust this drive in use.
    1 point
  11. Wow, can't believe I missed that - all working now. Thanks for the help and for the great docker image!
    1 point
  12. haha.... that did it! you're the bomb! thanks man, keep up the great work.
    1 point
  13. I tried a couple of USB 3.0 drives and got an error on every one of them; finally I tried a random USB 2.0 drive I had lying around and it worked the first time. So maybe if you're using USB 3.0, switch to USB 2.0.
    1 point
  14. Yup, those seem to match up exactly. I've taken off my static assignments on br0, will update if it crashes again.
    1 point
  15. At the moment it's set to 35 min after disks are spun down. I'll create a new thread if I can't figure it out in the next day or so.
    1 point
  16. lol, already ahead of you, it's in the code; you simply separate the commands via a comma, e.g.:- gamerule showcoordinates true,help
    1 point
  17. Ok just trying it now with https://github.com/pdecat/frigate-hass-addons/
    1 point
  18. The only solution is changing from a monthly account to a one-time payment, which is not bad.
    1 point
  19. Cross-posting here for greater user awareness since this was a major issue - on 6.9.2 I was unable to perform a dual-drive data rebuild, and had to roll back to 6.8.3. I know a dual-drive rebuild is pretty rare, and don't know if it gets sufficiently tested in pre-release stages. Wanted to make sure that users know that, at least on my hardware config, this is borked on 6.9.2. Also, it seems the infamous Seagate Ironwolf drive disablement issue may have affected my server, as both of my 8TB Ironwolf drives were disabled by Unraid 6.9.2. I got incredibly lucky that I only had two Ironwolfs, so data rebuild was an option. If I had 3 of those, recent data loss would likely have resulted. Paul
    1 point
  20. I suspect it's because you have the following set for SMART polling: poll_attributes="1800", which is 30 mins. What is the time for your sleep? Try changing the poll to 3600 to see if that fixes the issue (see the sketch below).
    1 point
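For reference, a sketch of where that value lives; it is normally changed via Settings -> Disk Settings in the GUI rather than by editing the file, and the file path is an assumption based on where Unraid keeps disk settings on the flash drive:

# /boot/config/disk.cfg
poll_attributes="3600"   # poll SMART every 60 min instead of every 30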
  21. So long as no drive assignments changed over the last week then yes
    1 point
  22. That is corruption on the flash drive. Probably best to redo the flash drive (or replace it)
    1 point
  23. Your CPU is not supported; the only way to make it work is to use the modified version of the hass addon: https://github.com/blakeblackshear/frigate/issues/695 https://github.com/pdecat/frigate-hass-addons
    1 point
  24. Correct, I keep forgetting that because I honestly prefer the terminal. I'll try to remember to offer that in the future. You are... not wrong. Huh. That's the first time I've seen a kernel parameter outright ignored in a while. Okay, a different method, and I actually rebooted to test this one: we add setterm -blank 0 to /boot/config/go, and I'll let you all sort out how to get there and edit it yourselves (a sketch follows this post). You didn't forget that backup, right?
    1 point
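A sketch of what the go file can look like afterwards, assuming an otherwise stock file; anything else already in your go file stays as-is:

#!/bin/bash
# /boot/config/go
setterm -blank 0          # stop the local console from blanking
# Start the Management Utility (stock line)
/usr/local/sbin/emhttp &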
  25. You'd first have to let the parity sync finish, since parity hasn't been built yet, and then it should work without problems.
    1 point
  26. Hello @vernichter04, could you describe your procedure in a bit more detail? Could you maybe post a screenshot of the Main page showing what you added where, and which slot it was assigned to?
    1 point
  27. Is there a performance benefit to using CUDA 11.2 for the cards/drivers where it's available? If so you could possibly offer a separate 11.1 build for compatibility, although that would introduce some additional overhead for maintaining the image. /edit: thanks for the quick response, I appreciate it. Just tested the new build you pushed and it works perfectly again.
    1 point
  28. Or keep these things on cache and have them backed up to the array. CA Backup plugin will take care of appdata, I think there is also a VM Backup plugin but I haven't used it. To get your dockers going again exactly as they were you just need appdata and the saved templates, which are on flash. You should always have a flash backup of course.
    1 point
  29. You need to disable DNS Rebinding in your new router: https://wiki.unraid.net/My_Servers#A_note_regarding_DNS_Rebinding_Protection Until that happens you can access the webgui at https://ipaddress (note this is https and not http). You will need to ignore any browser warnings about the certificate not matching. If you can't disable DNS Rebinding in the router and want to disable local SSL for the webgui instead, go to Settings -> Management Access and set "Use SSL/TLS" to "no"
    1 point
  30. Those log entries have ceased, so one of those things above fixed it: either an update or turning on the IOMMU.
    1 point
  31. Oops... thanks... here's the config without login data 🙂 config.docx
    1 point
  32. @Hoddl Delete your login credentials for the mail etc. from the config, and change them right away. Same for the database. @mgutt @ich777
    1 point
  33. Thanks for the help! It works now
    1 point
  34. LOL. That was my intention. It's not a clear-cut situation. I'll add another factor to the equation: parity doesn't hold any usable data by itself; it works in conjunction with all the data drives in the parity array. So assuming one parity drive: if one drive fails, it is emulated by the rest of the drives; if 2 drives fail, you lose the data on both failed drives. If that happens, you are much better off if parity is one of the failed drives, as it doesn't hold any data. So using that info, parity should be the lowest priority for reliability and the highest for write speed, while data drives should be high priority for reliability and read speed. Bottom line, it doesn't make a whole lot of difference in the large scheme; do whatever makes you feel best. The expected lifetime of a drive is a crapshoot, not likely to be influenced by whether it's a parity or data drive. Personally I've had better luck with WD than Seagate, so my suggestion is to put the Seagate as parity. Since I've only personally had experience with hundreds of drives vs. the hundreds of thousands manufactured, my statistical experience is meaningless; you do you.
    1 point
  35. Yes, by default my template stores your RoonServer data in your appdata folder on your cache drive. If you don't use your cache drive for your appdata folder, then you can change those volume mappings to: "/mnt/user/appdata/roonserver/app" "/mnt/user/appdata/roonserver/data" Unfortunately, if your Roon CORE is already corrupt, I don't believe you'll be able to back up your Roon data by copying the appdata folder and get it to work again. I think you'll have to do as @dkerlee mentioned and delete the container from the UNRAID docker page, and then manually remove the appdata folder. You can do this any number of ways: using the command @dkerlee mentioned, using a command-line based visual file manager like Midnight Commander, or using a file manager running in another docker container, like Double Commander (available via Community Apps), to manage your directories. Then you can reinstall the RoonServer container with whatever mappings you want for your appdata directory. Of course, Roon will have to build your library from scratch, which is annoying, but it happens at times to the best of us.
    1 point
  36. Thanks for pointing this out, K. You're not actually doing anything wrong. I think the issue is that web browsers can't display .cbr or .cbz files. I'd never actually tried reading any comics in my browser, so I didn't realize this. YACReaderLibraryServer is generally used as a server to host your comics, which are then read by the YACReader application, either on your computer, or on a tablet or phone. Here is the link to the software page: https://www.yacreader.com/downloads Once you download YACReader on your computer or tablet, you can link it to your instance of YACReaderLibraryServer either by being on the same network and setting it up at the local address (e.g. http://<UNRAID_SERVER_IP>:8082 in your case), or by setting up a reverse proxy using something like Linuxserver's SWAG image, though if you do that I'd recommend you add an HTTP password to the page, because there is no built-in authentication with YACReaderLibraryServer (a sketch follows this post). Hope that helps.
    1 point
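If you go the SWAG route, a hypothetical proxy block with a basic-auth password might look like the following; the location path, upstream IP/port, and htpasswd file are assumptions for illustration, not YACReader documentation:

# nginx (SWAG) sketch: YACReaderLibraryServer behind basic auth
location /comics/ {
    auth_basic           "YACReader";
    auth_basic_user_file /config/nginx/.htpasswd;
    proxy_pass           http://192.168.1.10:8082/;
}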
  37. Yes, everything written to the array updates parity, so if for example you have 10 data disks parity will end up having 10 times the writes of the data disks.
    1 point
  38. Ah OK, I've still got an old AMD HD 5770 here, if that one would be any better?
    1 point
  39. Not really, I only use AMD. I do believe I've heard that Nvidia causes some problems; you have to patch the BIOS file. YouTube: How to easily passthrough a Nvidia GPU as primary without dumping your own vbios! in KVM unRAID. PS: In your screenshot you didn't select the HDMI output of the GTX 1050 under Soundcard. That should/must be selected. Regards, MPC561
    1 point
  40. New version is out From the settings you can set the current unhealthy status to be reported as healthy in the dashboard (the tooltip will still show the correct status), as well as clear it. If the status goes back to fully healthy, then it will be reported as healthy, if any pool changes status, it will be reported as unhealthy again.
    1 point
  41. Good shout on netdata... I wrote this: It may help you.
    1 point
  42. What I would do is make sure that you always have a current backup of the contents of that flash drive on a separate PC. Main >> Boot Device >> click on 'Flash' under "Device" >> Flash Device Settings >> click on 'Backup' button. While CA Backup will create a backup of the boot drive, it is on the array and you can't get at it without creating a working boot drive for your server, a catch-22 situation. Next, I would purchase a new USB2 full-size flash drive smaller than 32GB that bears the name of a manufacturer that actually manufactures memory products. (It appears that heat is the thing that tends to kill flash drives. USB2 drives run cooler, and being full size will promote better heat dissipation. Speed is not a factor for the way Unraid uses the boot drive: Unraid reads about 500MB when the server boots and then ignores the drive until it writes a single file at shutdown.) That way you will have a replacement drive on hand if (and when) the present drive fails. (The reason I suggest doing this while your present drive is functional is that it is actually difficult now to find a flash drive that will meet all of these criteria!)
    1 point
  43. @SpaceInvaderOne It would appear that your site is down. I get error 1016 when navigating to it in a browser, which is an origin DNS error. Will your site be back up, or will it change to something else?
    1 point
  44. Just edit the docker container, make sure advanced view is on, and add the --hostname parameter...
    1 point
  45. Hi, I wanted to use this pihole docker as my second DNS server; I already have a hardware Raspberry Pi as my main pihole DNS, but having a backup is always good. I was wondering how I could change the hostname of this pihole installation? I was hoping to simply ssh in and change /etc/hostname to whatever I wanted, but there is no editor included in the docker, so I'm unsure how I would accomplish this. Ideas? Thanks.
EDIT: Figured it out:
ssh into the docker
cp /etc/hostname /etc/pihole/
ssh into your unraid install
nano /mnt/cache/appdata/pihole/pihole/hostname
change the name + save
ssh back into the docker
cp /etc/pihole/hostname /etc/
EDIT AGAIN: Won't survive a docker container restart, damn.
EDIT one more time: added --hostname pihole2 to the run command in docker, seems to work!
    1 point
  46. I got an Eaton Ellipse Eco 1200 (1200VA/750W) to use with Unraid and it's detected out of the box when connected via USB. I'm using the built-in Unraid UPS menu in the WebUI, with apcupsd. So far I have no complaints; Unraid says the battery should give me about 50 minutes of power (which is great) and unplugging the UPS from the outlet immediately triggers a Telegram and Email notification.
    1 point
  47. I believe this plugin is abandoned. You should check out the disk location plugin available in community apps.
    1 point
  48. I thought I'd share how you can enhance the go file by reducing the six lines to a single command, and it's not by using another script. You can create a tar ball that contains the fetch_key and delete_key scripts; the go file then calls the tar command, and the tar ball's files are extracted and the event directories are created. You MUST have a fully functioning auto-start that unlocks using the event directories. This works with FTP or SMB fetch_key scripts. If you have changed the script names (fetch_key, delete_key) or changed the path where you store the scripts (/boot/custom/bin/), you will need to use your alternative names in the following procedure.

1) Create a tar ball called "events" from the existing files in the event directories. At the terminal prompt, enter the following:

tar -czf /boot/custom/bin/events -C /usr/local/ emhttp/webGui/event/starting/fetch_key emhttp/webGui/event/started/delete_key emhttp/webGui/event/stopped/fetch_key

2) Update the go file. Comment out the existing lines in order to test. From:

# auto unlock array
mkdir -p /usr/local/emhttp/webGui/event/starting
mkdir -p /usr/local/emhttp/webGui/event/started
mkdir -p /usr/local/emhttp/webGui/event/stopped
cp -f /boot/custom/bin/fetch_key /usr/local/emhttp/webGui/event/starting
cp -f /boot/custom/bin/delete_key /usr/local/emhttp/webGui/event/started
cp -f /boot/custom/bin/fetch_key /usr/local/emhttp/webGui/event/stopped

To:

# auto unlock array
# mkdir -p /usr/local/emhttp/webGui/event/starting
# mkdir -p /usr/local/emhttp/webGui/event/started
# mkdir -p /usr/local/emhttp/webGui/event/stopped
# cp -f /boot/custom/bin/fetch_key /usr/local/emhttp/webGui/event/starting
# cp -f /boot/custom/bin/delete_key /usr/local/emhttp/webGui/event/started
# cp -f /boot/custom/bin/fetch_key /usr/local/emhttp/webGui/event/stopped
tar -xzf /boot/custom/bin/events -C /usr/local/

3) Once you're confident that everything works (rebooting IS necessary to test), you can clean up by deleting the event scripts (fetch_key, delete_key), since your files are now stored in the "events" tar ball, and updating the go file by removing the commented lines and any references to "unlock":

# auto start array
tar -xzf /boot/custom/bin/events -C /usr/local/

I hope some of you may find this interesting.
    1 point
  49. ok, been playing with this and got cp working, so you need to do the following:
1. Edit rtorrent.rc (should be in the /config path), change scgi_port = 127.0.0.1:5000 to scgi_port = 0.0.0.0:5000, and save.
2. Update to the latest image; yes, I've tweaked it.
3. Edit your docker config, change port 8112 to 5000 for host and container, and save.
4. Go to cp and, for the rtorrent downloader "Host", set scgi://<ip of your server>:5000 and click test; worked for me :-)
Not played with sonarr, but I guess this should work as well now; let me know how you get on. For anybody else following this and just starting off, you shouldn't need to do any of the tweaks above; just specify the scgi in cp, as the tweaks are now incorporated.
    1 point