Leaderboard

Popular Content

Showing content with the highest reputation on 03/10/21 in all areas

  1. Upgraded from the last stable release as I had a spin-up problem with my Seagate Ironwolf parity drive under RC2. I see the same quirk again under the new kernel - this time I have attached diagnostics. From what I can tell, it appears once the mover kicks in and forces the spin-up of the parity. It tripped up once, as you can see from the logs, but came up and wrote fine. I've done repeated manual spin-downs of the parity, writing into the array via the cache, and forcing a move, thereby bringing up the parity again. No errors as of yet. This is a new drive under exactly the same hardware setup as 6.8.3, so it is a software/timing issue buried deep. If this continues, I will move the parity off my 2116 controller (16 port) and over to my 2008 (8 port) to see if that relieves any issues. Past that, perhaps over to the motherboard connector to isolate the issue. FYI. Kev.
     Update: I've disabled the cache on all shares to force spin-ups much faster. Just had another reset on spin-up. I'll move to the next controller now.
     Update 2: The drive dropped outright on my 2008 controller and Unraid dropped the parity and invalidated it. I'm going to replace the Seagate with an 8T WD unit and rebuild. Definitely an issue somewhere with timings between the 2 controllers.
     Update 3: After some offline testing of the unit and some digging, it looks like the Ironwolf may be too aggressively set at the factory for a lazy spin-up. This behavior must be tripping up some recent changes in the mpt2/3sas driver. Seagate does have advanced tools for setting internal parameters ("SeaChest"). I set the drive to disable EPC (extended power conditions), which has a few stages of timers before various power-down states. For good measure I also disabled the low-current spin-up to ensure a quick start. Initial tests didn't flag any opcodes in the log. I'm rebuilding with it and testing. For note, the commands are:
     SeaChest_PowerControl_xxxx -d /dev/sgYY --EPCfeature disable
     SeaChest_Configure_xxxx -d /dev/sgYY --lowCurrentSpinup disable
     Update 4: I've submitted all the info to Seagate tech support, citing the issue with the 10TB and my suspicions. Deeper testing today once my rebuild is done to see if the fixes clear the issue.
     Update 5: Full parity rebuild with the Ironwolf. No issues. I've done repeated manual drive power-downs and spin-ups to force the kernel errors across both of the controllers above and have not seen any errors. Looks like the two tweaks above are holding. Kev.
    3 points
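     For reference, a minimal sketch of those two SeaChest invocations as given above. The xxxx version suffix and the sgYY device node are placeholders from the original post; they depend on your SeaChest build and on which /dev/sg node the drive sits on:

        # disable Extended Power Conditions (the staged idle/standby power-down timers)
        SeaChest_PowerControl_xxxx -d /dev/sgYY --EPCfeature disable
        # disable low-current spin-up so the drive starts as quickly as possible
        SeaChest_Configure_xxxx -d /dev/sgYY --lowCurrentSpinup disable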
  2. You don't have to do that. If the share is empty it will delete all top-level dirs for it on all devices.
    2 points
  3. Normally I would have, but I just didn't have a chance to yet. My life is still a bit of a mess after moving (I'm currently sitting on the floor while I type this because most of my furniture won't arrive for another week haha). I'm planning on incrementing the version later today though, but I wanted to get the quick and dirty fix out sooner rather than later.
    2 points
  4. Do you have an active internet connection on boot? The plugin will automatically download the new version for you, but you need to have an internet connection on boot, otherwise it will fail. No, you only need an active internet connection on boot and it will download the new driver (keep in mind that the boot will take a little longer since it's downloading the new driver, ~130MB). As @tjb_altf4 said, if you don't have an internet connection the worst thing that can happen is that you have to reinstall the plugin and also disable and enable the Docker daemon, or reboot once more. Hopefully the next time an update is released this won't happen again. I now check for new versions every 15 minutes and have everything automated, so that about 1 hour and 15 minutes after a release the plugins are updated, even if I'm sleeping...
    2 points
  5. Personally, and I do mean personally, I always do the following:
     Stop all Dockers
     Spin up all drives (the steps below do it anyway, but......)
     Stop the array
     Shutdown/reboot
     I do that simply because if a docker hangs, I can wait for it to shut down rather than wondering what's hung and why my machine isn't shutting down. So I take control of each step because I don't like unclean shutdowns and having to wait for a parity check to fire up if something goes sideways. I've skipped this a few times and had good results, but there were a few times in the past when I had to eventually log in, force a shutdown, and pray nothing would go wrong.
    2 points
  6. Yeah just a min, actually about 15. I put it on the wrong branch 😆 Ok good to go now, sheesh
    2 points
  7. I'm simply running multiple root shells under tmux, with each shell in its own directory. I think that's a fair use of Unraid, without getting into the whole other-users can of worms. So you can imagine my surprise when a simple login shell refused to open in the specific directory I had it open in.
    1 point
  8. This seems to be very difficult to achieve, although I can fully understand your "wish". Keep in mind that even an 80 Plus Titanium PSU has 10% loss at 10% load. ARM SBCs are very low on idle usage, but they're not super powerful.
    1 point
  9. Thank you. I will merge the pull request. Give Community Apps a bit to pick up the updated language pack and then you can update it and see the translation changes.
    1 point
  10. That's because I'm an idiot. I don't mean Wireshark, I mean WireGuard 🤯
    1 point
  11. Yes, since it's currently empty. Before deleting the share, add Disk 3 back to it (i.e. include Disk 3). Then delete the share. Then recreate it, including just the encrypted Disk 5. Alternatively, if you don't mind using the command line, just delete the empty folder:
      rmdir /mnt/disk3/Work
    1 point
  12. I'll be out of town for a couple of days and can see there's a new update (6.9.1). I'm going to update it, and as soon as I return I'll try the previous solutions; otherwise I'll try your new suggestion. But I really don't like deleting things :E Feels like there's a high probability of error on my side, but hey, gotta learn somehow
    1 point
  13. @Gico Do you know which client on your network is 192.168.168.10? It's connecting to your server repeatedly via /login and nginx is complaining about it.
    1 point
  14. Wow, that was quick - the update went smoothly from 6.9.0 to 6.9.1. Now bring on 7.0.0
    1 point
  15. A noble goal, but a lot of technology/expense for very little benefit (ecological or economic) Solar is highly variable in how much energy is produced based on time of day, cloud conditions, etc. What's going to happen to the excess energy when the sun is shining but your server only needs a small amount of what's generated? What's going to happen to the energy generated when it isn't quite enough to power your server? (your server will need to switch to mains, and the power that is generated is wasted?) Have you looked at getting a more standard roof-top solar/inverter system? If you can afford the expense, the cost/benefit ratio would likely be much better (again, both ecologically and economically)
    1 point
  16. Thanks for the fix. The trouble with what you've done, though, is that many users may not be aware of the issue (ie: the plugin may have auto-updated), and when they do eventually check out the webUI they'll hit the issues on a version of the plugin that is already fixed. Why not bump the version of the plugin so that everyone picks up the fix and any issues are avoided?
    1 point
  17. According to CrystalDiskMark I saturate 10G at 1175 MB/s. Windows shows an average of 1.05 "GB/s". Since Windows actually displays gibibytes, that is 1127 MB/s. The maximum is even 1.09 GiB/s, i.e. 1170 MB/s. 10 Gbit is 1250 MB/s. Ethernet's throughput efficiency is 94%, i.e. 1175 MB/s. Accordingly, MTU 9000 should gain me somewhere between 0 and 50 MB/s. I never got it working and therefore ultimately left it at 1500. EDIT: I have just found out that this efficiency is exactly what you increase with the MTU - namely to 99% with 9000. So I could still squeeze something out after all. 1249 MB/s would be possible.
    1 point
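      Those two percentages check out as a rough back-of-the-envelope calculation, assuming standard TCP/IPv4 over Ethernet (40 bytes of TCP/IP headers inside each frame, 38 bytes of Ethernet framing on the wire, and ignoring TCP options and ACK traffic):

         efficiency(MTU) = (MTU - 40) / (MTU + 38)
         MTU 1500: 1460 / 1538 ≈ 94.9%  -> ~1186 MB/s of the raw 1250 MB/s
         MTU 9000: 8960 / 9038 ≈ 99.1%  -> ~1239 MB/s of the raw 1250 MB/s

      The 9038 bytes mentioned in the next post is consistent with this 9000-byte MTU plus those 38 bytes of framing.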
  18. This is one way: https://forums.unraid.net/topic/93846-btrfs-error-with-new-system-m2-ssd/?tab=comments#comment-867992
    1 point
  19. So, if you're constantly pushing large amounts of data, going to MTU 9000 can indeed gain you something. With 1GbE you reckon on about 10 MB/s of gain on average. With 10GbE that could already make a difference of 100 MB/s. All devices must support it, and for the CSS610 the documentation says something somewhere about 10218 bytes. Once again, typical MT: instead of using the standard, universally known 9038 bytes, MT uses something non-standard or at least unusual. That is, imho, much like the PoE port that only supports passive PoE while everyone else uses 802.3af/at. Better leave it at 1500 bytes if you aren't having any problems. By now I'm of the opinion that a different switch would have been better, even if considerably more expensive. The documentation is wrong: resetting to SwOS 2.12 as described doesn't work, I always stay on 2.13RC5. With a password set, the box hangs forever at login and then spits out an error.. Yes, I can type... It's simply a cheap device; I can't put it any other way any more. If it weren't for the two SFP+ ports I would even call it a bad purchase; as it is, I can at least say the device works normally for standard applications. 10G is fast and problem-free. I'm curious what problems I'll run into when I want to split the network into VLANs... possibly with a router from the same company..
    1 point
  20. @ich777 I have confirmed this works. Thanks, I'll happily wait for the fix.
    1 point
  21. @all for all that are using the Nvidia-Driver Plugin: @Kaldek & @sterofuse If you are using the Nvidia-Driver Plugin, I have a workaround for you if you boot into GUI mode:
      When you get to the blinking cursor, press CTRL+ALT+F1
      Log in with your root account and password
      Type in 'nvidia-xconfig' (without quotes)
      Type in '/etc/rc.d/rc.4' (without quotes)
      These commands have to be done from the local console (not a remote console). I will fix this ASAP and keep you updated here in the thread, sorry for the inconvenience... Fixed! Update the Nvidia-Driver Plugin (make sure that you are on version 2021.03.10) and reboot your server into GUI mode again.
    1 point
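      Condensed, the workaround amounts to the following at the local text console (CTRL+ALT+F1, logged in as root); the comments are an editor's gloss on what each of the quoted commands does:

         nvidia-xconfig   # generate an X config for the NVIDIA driver
         /etc/rc.d/rc.4   # start the graphical runlevel (Slackware's rc.4)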
  22. Thanks for your reply, I've literally just finished using the kernel helper for 6.9.1 stable. I think I know where I was going wrong: a long time ago I was using the linuxserver.io Unraid Nvidia and Unraid DVB plugins. They overwrote each other, not your drivers. Re-looking just now, I see there are two plugins I need in your repo: Nvidia Driver and DVB Driver. Thanks for your efforts! I'll try them next build rather than the Kernel Builder. P.S. the Kernel Builder has been amazing.
    1 point
  23. The system is 100% stable since day 1. No issues, no problems. It just sits there and works. The max memory speed has been confirmed by ASRock Rack's technical support. I didn't play with the memory OC features since stability is important to me and I don't need a couple more megabytes/sec of memory bandwidth. I also didn't try the 1.10 BIOS because it doesn't add any features I am missing; I would only flash it for a Ryzen 5000. I'd say this board and the config (cooler, memory, case, NVMe cards) is a fire-and-forget thing. Set it up, configure it, and you are done. All disks work well and have decent throughput; the NVMe RAID 10 in particular is amazingly fast. The only two things I remember from the build process to be aware of are, first, that the ATX cable needs to be long enough, since the connector on the board is at the top, which requires a bit of cable length to allow bending the cable properly. And, secondly, that the IPMI password can only be 16 bytes long. If you set a longer password, login will not work. I think my 1st post covers all the points. I also did the upgrade from 6.9 RC2 to the final release, which took 5 mins incl. reboot.
    1 point
  24. That drive is very sick, as each pending sector indicates a sector that cannot be read reliably (and can thus result in the corresponding sector on the parity drive potentially having the wrong contents). Reallocated sectors, while not necessarily a problem if they are stable, are a big warning sign if the number is not small. With that drive in the system I would not assume that the contents of the parity drive are valid enough that parity plus the remaining drives can rebuild any failed drive without serious file system corruption on the rebuilt drive. Since you say the content of that drive is unimportant, I would suggest:
      doing Tools -> New Config and selecting the option to retain all current settings
      returning to the Main tab and changing the problem drive slot to its replacement
      rebuilding parity with the new drive set - hopefully this time it will build without drive-level errors so it can be assumed valid.
      You can then format the replacement drive to create an empty file system on it so it is ready to receive data.
    1 point
  25. That did the trick, thanks for the help!
    1 point
  26. Good deal. The -delete option uses extra memory. Let’s eliminate the variables one at a time. Just for the sake of troubleshooting, remove that option and try running again. Also, add -P and -h. Those will make the output a little easier to read for you.
    1 point
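      Assuming the command under discussion is rsync (the post doesn't name the tool, but the -P and -h flags and the memory cost of delete passes match it, where the delete option is spelled --delete), a before/after sketch with hypothetical source and destination paths:

         # original: mirror and prune deletions (the pruning pass costs extra memory)
         rsync -av --delete /mnt/user/source/ /mnt/backup/dest/
         # for troubleshooting: drop --delete, add per-file progress (-P) and
         # human-readable sizes (-h)
         rsync -avPh /mnt/user/source/ /mnt/backup/dest/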
  27. Don't touch those, as they are the SSH host keys (deleting them will regenerate them on sshd restart). If they are regenerated, you'll get warnings about man-in-the-middle attacks (ssh will consider your Unraid host as never before seen).
    1 point
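      For what it's worth, if the keys do get removed, standard OpenSSH can regenerate any missing host keys, and clients that pinned the old key will refuse to connect until the stale entry is dropped from their known_hosts. A sketch, where "tower" is just an example hostname:

         ssh-keygen -A        # recreate any missing host keys with default settings
         ssh-keygen -R tower  # on the client: forget the old key for that host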
  28. According to ethtool and ifconfig in your diagnostics you have eth0 and eth1. However your network.cfg file references eth0 and eth3 instead, and they are both on the same network.
      # Generated settings:
      IFNAME[0]="eth0"
      PROTOCOL[0]="ipv4"
      USE_DHCP[0]="no"
      IPADDR[0]="192.168.1.6"
      NETMASK[0]="255.255.255.0"
      GATEWAY[0]="192.168.1.1"
      METRIC[0]="1"
      DNS_SERVER1="8.8.8.8"
      DNS_SERVER2="8.8.4.4"
      USE_DHCP6[0]="yes"
      DHCP6_KEEPRESOLV="no"
      IFNAME[1]="eth3"
      PROTOCOL[1]="ipv4"
      USE_DHCP[1]="no"
      IPADDR[1]="192.168.1.119"
      NETMASK[1]="255.255.255.0"
      GATEWAY[1]="192.168.1.1"
      SYSNICS="2"
      I'm not sure what you aim to achieve, but you probably want to recreate the network.cfg file. If you have a DHCP server you could simply delete the /boot/config/network.cfg file and reboot. It will then be recreated with safe DHCP settings. You can then edit it via the GUI under Settings -> Network.
    1 point
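      If you go the delete-and-reboot route, from the Unraid console that is simply:

         rm /boot/config/network.cfg
         reboot

      after which the file is regenerated with safe DHCP settings and can be edited under Settings -> Network.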
  29. Done as you wrote, and with all default settings everything works. Public posting to the server list and stable for a few hours. I'm going to start changing the server name, world name, and password one by one to see what happens. I did make sure the password was not the world name. Will report back with results.
    1 point
  30. I did indeed get another crash since then. I saw there was a new Unraid release today, so I tried that, but still got another crash. I went to try safe mode but found most of my disks don't show up - I assume because it's not loading the drivers for my LSI card - so for now I've reverted to a normal boot but with the Docker engine turned off. Will see how that goes. Will also see if there's a way for me to just load the drivers for my LSI card so I can use safe mode but still access Plex on a 2nd box using the files on the crashing one.
    1 point
  31. @cortana - please type this in a Terminal session and then reboot. If there is still no local video, post diags again:
      touch /boot/config/modprobe.d/amdgpu.conf
    1 point
  32. This is already possible in the 6.9.x releases as long as the code that generates the notification message provides the correct URL information as a parameter to the call to create the message. As such it is up to authors to update their code to utilise this feature.
    1 point
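      For plugin authors wondering what that looks like in practice: a hypothetical call to the stock notify script, with the link flag and its value assumed purely for illustration (verify the exact flag name against the webGui scripts shipped with your release):

         /usr/local/emhttp/webGui/scripts/notify -e "MyPlugin" -s "Update available" -d "Click to open the plugin page" -i "normal" -l "/Settings/MyPlugin"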
  33. Guys, it looks like Sonarr v3 has finally dropped and is now the latest release, so I need to make some minor adjustments to the image and this will fix it all up. Sent from my CLT-L09 using Tapatalk
    1 point
  34. Next point release, can you update Docker to 20.10.5? It patches 3 CVEs, including these 2 ugly ones: https://github.com/moby/moby/security/advisories/GHSA-6fj5-m822-rqx8 https://github.com/moby/moby/security/advisories/GHSA-7452-xqpj-6rpc
    1 point
  35. IIRC GPU stats plugin can cause this.
    1 point
  36. Hi all. I had this problem too after upgrading to 6.9 this week. My Win10 VM and even Unraid itself were running dog slow, and System Interrupts was using 100% CPU in Windows. What worked for me was changing a setting in the "Tips and Tweaks" plugin. I had "CPU Scaling Governor:" set to "Power Save", which I think is the default for Intel CPUs. I changed it to "On Demand", and now my VM and Unraid are running much better. Disclaimer: I don't understand much of that, but it seems to work. Maybe there are downsides. YMMV. Seeing a lot of posts with this issue; hopefully this helps you.
    1 point
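      For anyone curious what that plugin setting toggles underneath, the governor is exposed through the standard Linux cpufreq sysfs interface (the paths below are the stock kernel interface, not plugin-specific, and the plugin's "On Demand" presumably maps to the ondemand governor):

         # show the current governor for each core
         cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
         # switch every core to ondemand
         for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do echo ondemand > "$g"; done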
  37. I have the latest update from 2 days ago and I still have the issue where a bunch of the plugins won't work. Anyone else also experiencing this?
    1 point
  38. Hello, both Sonarr and Radarr have recently been unable to connect to Deluge. Both return the following error under System: I suspect this may be related to Privoxy, as my Deluge log shows the following repeating over and over:
      2021-03-04 19:35:47,139 DEBG 'watchdog-script' stdout output: [info] Privoxy not running
      2021-03-04 19:35:47,143 DEBG 'watchdog-script' stdout output: [info] Attempting to start Privoxy...
      2021-03-04 19:35:48,153 DEBG 'watchdog-script' stdout output: [info] Privoxy process started [info] Waiting for Privoxy process to start listening on port 8118...
      2021-03-04 19:35:48,159 DEBG 'watchdog-script' stdout output: [info] Privoxy process listening on port 8118
      Up until recently everything was working fine. My VPN is PIA. Anyone know where to start? --- ✔️SOLVED: I had to add my server's IP in both Sonarr and Radarr under Settings > General > Proxy > "Addresses for the proxy to ignore". This is described in Q26 of the documentation here: https://github.com/binhex/documentation/blob/master/docker/faq/vpn.md
    1 point
  39. I'd like to request a NordVPN client docker from bubuntux. It's designed to provide a connection to other containers via --net=container:vpn, and its advantage over normal OpenVPN clients is that it uses the NordLynx connection, so you get better performance with this VPN. I tried to create a docker template myself and put it in dockerman/templates-user, but can't get it to work. Thanks for your time.
    1 point
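      The --net=container: pattern itself is plain Docker, so as a rough sketch; the VPN container's authentication/environment options are omitted since they depend on the image's documentation, and my-other-app is a hypothetical dependent container:

         # start the VPN container (NET_ADMIN is typically required for VPN images)
         docker run -d --name=vpn --cap-add=NET_ADMIN bubuntux/nordvpn
         # route another container's traffic through it
         docker run -d --net=container:vpn my-other-app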
  40. I'd like to request Remotely. It would be very nice to use for customer remote support on Windows, Linux, or maybe VM access. This YouTube video (hope it's okay to post) got me interested, and it can be self-hosted. https://github.com/lucent-sea/Remotely
    1 point
  41. Well summarized and worth a pin. I think I'll redo the math. As written in my initial post, the 45W config on the surface looks like a sweet spot, without getting too technical with math and numbers.
    1 point
  42. I did a few more tests with 65W, respectively allowing the motherboard (through BIOS config) to set the package power limits. Without any adjustments and the CPU running at stock 65W, the system consumes:
      50W during boot
      50 - 60W during normal operations, e.g. docker starts / restarts
      84 - 88W during parity check and array start-up (with all services starting up too)
      184 - 188W during full load when transcoding a 4K video
      CPU temps at full load went up to 86°C. I also compared the 35W vs. 45W vs. 65W (unlimited) performance, 4K transcoding of a 1.7GB file using a Handbrake container:
      65W - 28 FPS / 3 min 30 sec - 188W max.
      45W (called Eco Mode in BIOS) - 25 FPS / 3 min 45 sec - 125W max.
      35W - 4 FPS / 25 min - 79W max.
      So bottom line - the average load / idle load does not differ that much, however the max consumption can be limited quite a lot, at the price of much lower performance. One can also see that if the system has to execute other jobs in e.g. Nextcloud, the avg. FPS in Handbrake drops to 3.xx. Rendering a movie and using Nextcloud at the same time becomes sluggish; without rendering a movie the performance is still good. I edited the original post and added a few cost-related comments.
    1 point
  43. I think the Meshify allows even better cable management, especially for the ATX connector, since it has one more rubber-protected cable grommet on top of the other two upper grommets. With the experience from my Define build, I'd recommend the Meshify. And the looks of the Meshify are cooler too. The switch is fine. Not knowing your home / infrastructure, I'd probably throw away the other small switch and get a device which allows link aggregation. The full PCIe 4.0 x1 slot of the board allows a 10G upgrade at a later stage.
    1 point
  44. The main purpose of this system is media and document storage, with the flexibility to add docker containers and VMs where I need them. My picture db is approx. 40K high-res JPEGs & RAW files and >1K videos, and growing. I am using a Photoprism container to structure the albums and am also planning to use the Windows VM to edit the videos & photos. The workflows in this matter are still under development. I am also using a Handbrake container for video transcoding, primarily 4K iPhone footage (.mov) which I transcode into H.264 .mp4. This setup works really well; with all the CPU cores, I can assign cores to e.g. Handbrake and let it render at 100%, assign cores to VMs and other services, and basically nothing conflicts with anything else. I don't use any Adobe software at the moment since I don't like their subscription model, and I don't use this system for audio editing; I only tested basic sound features. Your 1G switch will limit the speed per port to 1G (unless this rule has changed in the last 5 yrs :D). What you could do is bond the ports, however your switch needs to support this feature. 1G (max. ~80-90 MByte/second) is enough for my purposes, and if I need more throughput I will bond the 2 NICs to get approx. 150 MByte/second. Simple desktop switches with link aggregation start at approx. 30 EUR. I picked the CPU to achieve a good core / power consumption ratio since it is running 24/7. And also because I don't like standard hardware :). Since the board is relatively future-proof, I will also be able to add the future 65W Zen 3 CPUs which AMD is preparing atm (Ryzen 9 5900 for example). The differences in performance between the 3900 and 3900X can be measured, but you will not notice them when working in apps. My goal also was to build a clean and well-built system, as I hate untidy setups and cable mess :). Updated pictures below. The case is really awesome and allows great airflow and a clean setup. If you plan to add a GPU to accelerate your VMs and to use the Asus card, you would be limited to 2x x8 PCIe lanes. So you could use 2 PCIe 4.0 SSDs in RAID 1 with the Asus card, which should be fast enough for editing purposes. The x16 slot would be running at x8 speed. See the handbook, page 15, for details.
    1 point
  45. So far I have no issues with the board. Even the relatively high temps of the X570 seem to be no issue; the maximum I have seen is 76° during a parity check while copying data back and forth to the NVMEs. I am running 2 Linux (server-related) hosts and 1 Windows VM atm. Given that the Windows VM is only using a virtual VGA adapter, the performance is OK. I get between 5 - 7 GB/sec read speed (RAID 10) and 2.5 - 4.2 GB/sec write speed (RAID 1 & RAID 10). Still testing the network speed, which should give max. 1 GBit, given I have a WIFI-only network; the Unraid host is connected to a Fritzbox 1GBit LAN port. The main purpose of the large cache is to host docker containers and a large picture db. I use the slower disks primarily as a 1st backup instance. Both cache drives are also being used to store VM disks. I got the 12-core CPU for encoding purposes and to be able to pin cores more granularly to containers and VMs. Still, I wanted a low-power CPU and not a 105W one. Bottom line, I think the board (and probably the 10G version) is worth the money - the 10G version only if the network allows the speed. The BIOS is basically a server-grade BIOS with added desktop (overclocking) features. The 4x Asus card works flawlessly. Even though the SSDs stay in their thermal tolerance range, I am thinking of adding the fan again and, since it's rather noisy, adding a resistor to lower the fan speed. The SSDs are OK up to 70°, so there is still plenty of headroom atm. If a dedicated GPU would accelerate the desktop VMs I might get one, but only a cheap 2D card like the passive Nvidia GT710. Still need to figure that out. I did a quick disk benchmark with the Linux desktop, which used a 60G disk on the RAID 10: the 8.7 GB/s is read and the 1.2 GB/s is write speed.
    1 point
  46. I was able to pull bubuntux/nordvpn, which is a NordVPN service utilizing WireGuard, as an ordinary docker from Docker Hub. I was able to modify the config file and get it to run using docker-compose. I have to launch it from the command line, and while it's running I can pause it and stop it from the Unraid interface. I haven't yet figured out how to configure it for other dockers that might want to use the tunnel. I think more than anything it needs a docker menu to help reconfigure the service, but I haven't figured out how to create a menu file. Any tips on borrowing another app's docker menu and modifying it to work for bubuntux/nordvpn would be appreciated.
    1 point
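      As a starting point, a minimal docker-compose sketch of the pattern being described; the service names and the dependent my-other-app image are hypothetical, and the VPN container's authentication environment variables are omitted since they depend on the image's documentation:

         services:
           vpn:
             image: bubuntux/nordvpn
             cap_add:
               - NET_ADMIN
           app:
             image: my-other-app          # hypothetical container using the tunnel
             network_mode: "service:vpn"  # compose equivalent of --net=container:vpn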
  47. Try this from a Windows command prompt (run as administrator):
      mklink /d "c:\WhateverFolderYouWantItCalled" "\\unRaidServer\unRaidShareName"
      Your share will wind up being mounted as that folder on an existing Windows drive. Should be close enough to what you need.
    1 point