Leaderboard

Popular Content

Showing content with the highest reputation on 12/20/20 in Posts

  1. The latest Unraid blog highlights all of the new major changes in Community Applications 2020.12.14, including:
- New Categories and Filters
- Autocomplete Improvements
- Repositories Category/Filter
- Community Applications now viewable on Unraid.net
As always, a big thanks to @Squid for this amazing Unraid community resource! https://unraid.net/blog/community-applications-update
    3 points
  2. After starting to play around with Unraid a couple of weeks ago I decided to build a proper system. I want to share build progress and key learnings here.
Key requirements:
- AMD system
- Plenty of CPU cores
- Low wattage
- ECC memory
- IPMI
- Good cooling, since the system sits in a warm closet
- Prosumer build quality
Config (runs 24/7 and has been rock stable since day 1):
- Unraid OS: 6.10 RC1
- Case: Fractal Design Define 7
- PSU: be quiet! Straight Power 550W
- Board: ASRock Rack X570D4U w/ BIOS 1.20, the latest version as of 2021/10
- CPU: Ryzen 9 3900 (65W, PN: 100-000000070), locked to 35W TDP through a BIOS setting; the CPU was difficult to source since it is meant for OEMs only
- Cooler: Noctua NH-L12S
- Case fans: 5x Arctic P14 PWM; noise level is close to zero / not noticeable
- Memory: 64 GB ECC (2x 32 GB) Kingston KSM32ED8/32ME @ 3200 MHz (per memory QVL)
- Data disks: 3x 4TB WD40EFRX + 1x 4TB WD40EFRX for parity (all the same disk model and size)
- Cache 0: 2x 512GB Transcend MTE220S NVMe SSDs, RAID 1
- Cache 1: 4x 960GB Corsair MP510 NVMe SSDs, RAID 10, set up with an ASUS Hyper M.2 card in the PCIe x16 slot (BIOS PCIe bifurcation config: 4x4x4x4)
Todos:
- Replace the 4 SATA cables with Corsair Premium Sleeved 30cm SATA cables
- Eventually install an AIO water cooler
- Figure out the dual-channel memory setting (it was single-channel at first). Done.
- Eventually configure the memory for 3200 MHz. Done.
- Eventually install a 40mm PWM cooler for the X570. Update: after a few weeks of 24/7 uptime this seems to be unnecessary, since the X570 temps settled at 68 - 70°.
- Get the IPMI fan control plugin working
Temperatures (in degrees Celsius) / throughput:
- CPU @ 35W: 38° - 41° basic usage (Docker / VMs), 51° - 60° load; CPU @ 65W: 78° - 80° load (this pushes the fans to 1300 - 1500 RPM, which lowers the X570 temps to 65°)
- Disks: 28° - 34° under load
- SSDs: 33° - 38° under load
- Mainboard: 50° on average
- X570: 67° - 72° during normal operation, 76° during parity check
- Fan config: 2x front (intake), 1x bottom (intake), 1x rear & 1x top (exhaust); 800 - 1000 RPM
- Network throughput: 1 Gbit LAN; read speed 1 Gbit, write speed 550 - 600 Mbit max. (limited by the Unraid SMB implementation?). Write tests were done directly to shares. So far this meets expectations. Final config: 2x 1 Gbit bond attached to a TP-Link TL-SG108E.
Learnings from the build process:
- Finding the 65W version of the Ryzen 9 3900 was difficult; I finally found a shop in Latvia where I ordered it. Some shops in Japan sell these too.
- The case/board combination requires an ATX cable with min. 600mm length.
- IPMI takes up to 3 minutes after a power disconnect to become available.
- The BIOS does not show more than 2 of the M.2 SSDs connected to the ASUS M.2 card in the x16 slot; Unraid, however, has no problem seeing them.
- Mounting the CPU before mounting the board was a good decision. I should have also installed the ATX and 8-pin cables on the board before mounting it, since installing the two cables on the mounted board was a bit tricky.
- I went with the Noctua top-blower to allow airflow for the components around the CPU socket; this seems to work well so far.
- I picked the case primarily because it allows great airflow for the HDDs and a clean cable setup.
- The front fans may require PWM extension cables for a proper cable setup, depending on where the fan connectors are located on the board.
- The X570 runs hot; however, with a closed case airflow seems to be decent (vs. an open case) and temps settled at 67° - 68°.
- I removed the fan from the ASUS M.2 card and learned later that it has a fan switch too. Passive cooling seems to work for the 4 SSDs.
- PCIe bifurcation works well for the x16 slot; so far no trouble with the 4x SSD config.
- Slotting (and testing) the two RAM modules should be done before the board is mounted, since any change to the RAM slots is a true hassle: the slots can only be opened on one side (looking down at the board, the left side, towards the external connectors) and the modules have to be pushed rather hard to click in.
- IPMI works well but still misses some data in the system inventory. Also, the password can only have a max. length of 16 bytes; I used an online generator to meet that. I used a 32-character password at first and locked the account, and had to unlock it with the second default IPMI user (superuser). ASRock confirmed the missing data in the IPMI system inventory and suggested refreshing the BMC, which I haven't done yet.
Performance:
- With the CPU @ 35W the system performs well for day-to-day tasks, but it feels like it could be a bit faster here and there. Nothing serious. VMs are not as fluent as expected. The system is ultra silent.
- With the CPU @ 65W the system, especially VMs and Docker tasks such as media encoding, is blazing fast. VM performance is awesome, and a Win10 VM through RDP on a MacBook feels 99% like a native desktop. The app performance in the VM is superior to that of typical laptops in my view, given the speed of the cache drive where the VM sits and the 12-core CPU. Fans are noticeable but not noisy.
- 45W Eco Mode seems to be the sweet spot, comparing performance vs. wattage vs. costs.
Transcoding a 1.7GB 4K .mov file using a Handbrake container:
- 65W config: 28 FPS / 3 min 30 sec / 188W max.
- 45W (called Eco Mode in the BIOS): 25 FPS / 3 min 45 sec / 125W max.
- 35W config: 4 FPS / 25 min / 79W max.
Power consumption:
- Off (IPMI on): 4W
- Boot: 88W
- BIOS: 77 - 87W
- Unraid running & Eco Mode (can be set in the BIOS): 48W
- Unraid running & TDP limited to 35W: 47W
- Parity check with the CPU locked to 35W: 78W
Without any power-related adjustments and the CPU running at stock 65W, the system consumes:
- 80W during boot
- 50 - 60W during normal operation, e.g. Docker starts / restarts
- 84 - 88W during parity check and array start-up (with all services starting up too)
- 184 - 188W at full load when transcoding a 4K video; CPU temps at full load went up to 86° (degrees Celsius)
Costs: If I did the math right, the 35W config has a lower peak power draw, but since calculations take longer the costs (EUR) are higher compared to the 65W config: in this case 0.3 (188W over 3.5 minutes) vs. 2.3 (78W over 25 minutes) euro cents. So one might look for the sweet spot in the middle.
January 2021, update after roughly a month of runtime: no issues, freezes etc. so far. The system is rock stable and just does its job. Details regarding IOMMU groupings further below. I will revisit and edit the post as I progress with the build.
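The cost comparison at the end can be reproduced with a small helper (a sketch: the function name is mine, and the 0.30 EUR/kWh rate is an assumption, so the exact cents depend on your tariff):

```shell
#!/bin/sh
# cost_cents: energy cost in euro cents for a load drawing "watts" for
# "minutes", at an assumed electricity price of 0.30 EUR/kWh.
cost_cents() {
  awk -v w="$1" -v m="$2" 'BEGIN { printf "%.1f\n", w * (m / 60) / 1000 * 0.30 * 100 }'
}

cost_cents 188 3.5   # 65W config: up to 188W for about 3.5 minutes
cost_cents 78 25     # 35W config: up to 78W for about 25 minutes
```

Sanity check: 1000W for 60 minutes is exactly 1 kWh, i.e. 30.0 cents at the assumed rate.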
    2 points
  3. Click on the thumbs down and select acknowledge.
    2 points
  4. My NUC running Win 10 and Roon crashed, so I thought I would give Roon a try on Unraid using your docker container (thanks!). I stumbled across this thread later, after having set things up. If I am reading this right, it sounds like if we use xthursdayx's updated template then we don't need to go through all the steps indicated above, so that things properly update when Roon Labs issues updates. How do I know if the updated template was present in Community Apps when I installed? When I look at the roonserver entry in Community Applications, it says "Added to CA: September 19, 2020." Does this mean that the old template is still on CA? I'm not sure if the "added to" date communicates when it originally appeared in CA, or when the latest version was added. Also, a couple of quick public service announcements for Unraid and docker newbies (I very much include myself in this category!):
1) MULTIPLE MUSIC DIRECTORIES: I have my music files in separate directories. The template by default provides a single entry for your music directory. I went ahead and entered one of my music directories, and then later I tried to add another directory using the main Roon interface (i.e., in the settings/storage area of Roon Remote). Needless to say, it didn't work after numerous attempts. After searching a bit I learned that docker containers can't really see your host file system unless you map (mount?) particular directories. This is done through the aforementioned template, by selecting "Add another Path, Port, Variable, Label or Device," and then you just follow the formatting example established for the first music directory you set up.
2) ACCESSING THE ROON INTERFACE: This was noted elsewhere in this thread, but I thought I would enter it here as well. It is not necessary to add a port to access the Roon interface. Just install the container, and then with the Roon Remote software on your phone or laptop do a search for your Roon Core.
It will find it, you can then access Roon settings etc. from your phone/laptop, and then you will be in music paradise. Thanks again for building this container! Dave
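For newbies hitting point 1, the template's extra path entries correspond to Docker's -v host:container volume mappings. A minimal sketch of what that looks like on the command line (the container name, image name and paths here are made-up examples, not the actual template values):

```shell
# Each -v flag maps one host directory into the container's file system;
# the container only "sees" the right-hand paths (/music and /music2 here).
docker run -d \
  --name=roonserver-example \
  -v /mnt/user/Music:/music \
  -v /mnt/user/MoreMusic:/music2 \
  example/roonserver-image
```

Inside Roon you would then add /music and /music2 as storage locations, never the host-side paths.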
    2 points
  5. Hi All, I just want to share my findings about Unraid notifications. My notification settings are based on Gmail. This how-to will enable the user to send email notifications from Gmail to a Yahoo email address. If you like my how-to, then make it a sticky. Thank you.🙂
========================================================================
Requirements:
A) Set up a Gmail account. This account will be the SENDER's email address. << Assumption: you have set up 2-step authentication via your mobile phone for logging into your Gmail account >>
B) Set up a second Gmail or any other free webmail account, e.g. [email protected]. This account will be the RECEIVER's email address.
========================================================================
You need to set up a Google app password.
1) Log in at: accounts.google.com
2) Go to "Security" in the left-hand section.
3) Under the heading "Signing in to Google":
3.1) Click on App passwords
3.2) Sign in to your normal Gmail account
3.3) Click "Select app", then select: Mail
3.4) Click "Select device", then select: Custom
3.5) Give it a name for the Unraid server, e.g. midtowerunraid
3.6) Press the Generate button
3.7) A window will pop up and the app password for the device is displayed in the yellow box. Copy the password, keep it in a safe place and save it in notepad. This password is 16 characters long. Then click the Done button. e.g.: sskwowcomemtyufg <----- 16-character app password.
3.8) Finally, sign out of all accounts
Follow the steps below to complete the SMTP settings within the Unraid server
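To complete the picture for step 3.7: Gmail's published SMTP endpoint is what the Unraid notification settings page needs, together with the app password generated above. Example values (the addresses and password below are placeholders, not real credentials):

```shell
# SMTP values for Settings -> Notifications in the Unraid GUI.
SMTP_SERVER="smtp.gmail.com"
SMTP_PORT="587"                     # STARTTLS; Gmail also offers 465 for SSL
SMTP_SECURITY="TLS"
SENDER="[email protected]"            # the Gmail account from requirement A (placeholder)
RECIPIENT="[email protected]"  # the webmail account from requirement B
SMTP_USER="[email protected]"
SMTP_PASS="sskwowcomemtyufg"        # the 16-character app password, NOT your Gmail login password
```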
    1 point
  6. Can someone create a template for tgorg's locast2plex docker? Also, if it's possible, could built-in OpenVPN support be added inside it so we can change the location of the IP address? I would really appreciate this! I'm trying to set it up through Docker Hub, but I think lots of people will find this container VERY useful!
    1 point
  7. 1 point
  8. Looks like it didn't upload to Github at all. It should be there now. Sorry about that.
    1 point
  9. 1 point
  10. It's something misconfigured within, say, Radarr / Sabnzbd that's creating the folder at /mnt/user
    1 point
  11. Put the flash drive in your PC and let it run checkdisk. While you're there, make a backup. Reboot, and be sure to boot from a USB2 port.
    1 point
  12. Try starting the array with NO cache devices assigned at all, then reassign them back to their original slots.
    1 point
  13. Hey @ich777, having issues with the Mordhau game server docker. I have set up and hosted the server in the past with no problem, but recently when trying to launch it I get these warnings in the log:
[2020.12.20-18.59.41:601][865]LogNet: IpNetDriver_3 IpNetDriver_3 IpNetDriver listening on port 15000
[2020.12.20-18.59.41:601][865]LogNetVersion: Checksum from delegate: 502013394
[2020.12.20-18.59.41:634][868]LogMordhauGameInstance: Warning: Ping France: failed (Timeout)
[2020.12.20-18.59.41:634][868]LogMordhauGameInstance: Warning: Ping Germany: failed (Timeout)
[2020.12.20-18.59.41:655][869]LogMordhauGameInstance: Warning: Ping UK: failed (Timeout)
[2020.12.20-18.59.41:669][870]LogMordhauGameInstance: Warning: Ping Poland: failed (Timeout)
[2020.12.20-18.59.41:686][871]LogMordhauGameInstance: Warning: Ping Russia: failed (Timeout)
[2020.12.20-18.59.41:701][872]LogMordhauGameInstance: Warning: Ping US_East: failed (Timeout)
[2020.12.20-18.59.41:717][873]LogMordhauGameInstance: Warning: Ping US_Central: failed (Timeout)
[2020.12.20-18.59.41:717][873]LogMordhauGameInstance: Warning: Ping US_West: failed (Timeout)
sh: 1: ping: not found
sh: 1: ping: not found
sh: 1: ping: not found
Ports are forwarded correctly and all that. Could a recent update to the game be causing issues?
    1 point
  14. Docker templates are on the flash drive, and they can be used to reinstall your dockers using the Previous Apps feature on the Apps page, but of course without appdata the applications themselves will be starting over.
    1 point
  15. If this is the case, is there a reason that you cannot shut down the server at a desired time and then set it to auto-power-on at another time? This is how I have set up my server, which never seemed to sleep reliably.
    1 point
  16. Dec 14 14:48:32 v1ew-s0urce kernel: [Hardware Error]: Corrected error, no action required.
Dec 14 14:48:32 v1ew-s0urce kernel: [Hardware Error]: CPU:0 (17:71:0) MC27_STATUS[-|CE|MiscV|-|-|-|SyndV|-|-|-]: 0x982000000002080b
Dec 14 14:48:32 v1ew-s0urce kernel: [Hardware Error]: IPID: 0x0001002e00000500, Syndrome: 0x000000005a020001
Dec 14 14:48:32 v1ew-s0urce kernel: [Hardware Error]: Power, Interrupts, etc. Ext. Error Code: 2, Link Error.
Dec 14 14:48:32 v1ew-s0urce kernel: [Hardware Error]: cache level: L3/GEN, mem/io: IO, mem-tx: GEN, part-proc: SRC (no timeout)
IIRC, Ryzen has problems with overclocked memory in some circumstances. Run the memory at the SPD speed to see if that makes a difference.
    1 point
  17. edac_mce_amd has been included in Unraid for the last couple of years. The message is simply a reminder to everyone else in the world that the author(s) of mcelog have no idea how to properly word an informational sentence, or they are not native English speakers and utilized a TI-99/4A to translate the actual message into English. I.e., it's simply telling you that the mcelog default driver (Intel) doesn't support the chip. It's automatically using the AMD module instead.
    1 point
  18. It is quite possible that you do not have Krusader set up to apply the correct permissions for access via the network. If that is the case, then running Tools => New Permissions on the share in question will rectify the permissions.
    1 point
  19. New Unraid: For every share, set the secondary storage to "Array" and the mover action to "Cache to Array". Old Unraid: For the shares, set all "Prefer" and "Only" cache settings to "Yes" or "No". Either way: then, in Settings, set Docker and VM to "No" and finally run the mover. This moves all files to the array. Once the M.2 is empty (check the SSD's contents via the disk overview; it must be completely empty!), you can shut down the server, remove the M.2, install the SATA SSD and reboot. Unraid will now complain that the M.2 is missing. After selecting the new cache SSD you can start the array. Now set the share cache settings back to "Only" or "Prefer" as they were before. Run the mover, and once it has finished (i.e. the files have been moved from the array back to the SSD), you can finally re-enable Docker and VM. Incidentally, Docker and VM have to be disabled because otherwise files that are in use by those services cannot be moved by the mover.
    1 point
  20. Hi, I am going to use a GigaBlue box for Sat>IP streaming. With my old hardware there are too many problems with my DVB receivers. Thanks for your help
    1 point
  21. It's not so much the features that make it expensive; it's the fact that it's server hardware. Server hardware is just more expensive, even when it has fewer features than consumer hardware. One argument, depending on whether you believe it or not, is that server hardware is "more stable" than consumer hardware in a server environment. And some of that may go back to the ability to use ECC RAM when consumer boards didn't have an option for ECC... a lot of consumer boards do these days, especially for AMD chips. Look at the comparison between the specs for these two boards... https://www.newegg.com/Product/Productcompare?CompareItemList=13-140-056%2C13-144-327&compareall=true The ASRock board is $270 USD more expensive than the MSI board, and for that you get...
A) 10Gb LAN vs 2.5Gb on the MSI -- this would be an improvement, a benefit, IF you have any 10Gb network hardware on your LAN (expensive stuff). Even MSI's 2.5Gb can't be fully utilized without new network hardware. (I have a 2.5Gb MSI board in my PC -- 1Gb network.)
B) 8 SATA ports vs 6 on the MSI -- optionally up to 12 drives using one of the M.2 slots. Easier to run 12 drives without an expansion card.
C) Dual LAN ports -- not exactly useful. Or I don't know how to make them useful for myself. Currently my SuperMicro board's dual LAN is set up for failover or something like that, so if the main port dies I still have network... yeah, not really useful. Maybe I changed it to link aggregation at one point -- doesn't seem to do anything.
D) IPMI -- this is network-based management of the board. It's kind of neat, and sometimes it's useful (to me) since my server doesn't have a mouse/keyboard/screen; if I need to access its BIOS or configuration I can use IPMI to connect to it. This is a feature of most server boards.
E) NO AUDIO -- most server boards do not include audio chipsets, so in your case, wanting to use VMs... I think you'd need/want an audio driver... not sure.
F) The ASRock has only 2 USB ports; the MSI has a variety of ports. If you plan on plugging anything in, this could be a factor.
That's a quick rundown of the main differences between these boards. So you can see that $270 doesn't get you much. You get a server board with server features, not much else.
    1 point
  22. As I mentioned above, this is not my work; I only compiled it and made a package for Unraid so that it installs correctly and is user friendly. Please look at the GitHub repo from above that I've linked. From my understanding this is only the kernel module for the NCT6687, so that the temps and fans are recognized and the fans can be controlled. Also please note that, with different implementations from different manufacturers, not everything can/will work correctly. If it gives you this error, then something went wrong in the installation of the package and/or depmod. EDIT: @Bolagnaise is this needed for RC2? Should I build it for upcoming versions of Unraid?
    1 point
  23. This will not work then, mate; you should have mentioned this. You need the WMI ASUS plugin.
    1 point
  24. Check the MB manual properly... for many boards, if a second NVMe PCIe drive or SSD is installed, one or even two SATA ports will be disabled.
    1 point
  25. The blue ones are Xpenology on the same hardware. With small files it looks like the Synology software is 30 times faster, while on big files it is comparable. There is, though, one great result I measured when copying from a Mac to Unraid on HDD that I cannot explain.
    1 point
  26. Definitely correct and important. The linked PSU is technically OK and has only a single 12V rail, so it can distribute its 20A there freely across the components / all cables. If you subtract 45W at restart for motherboard, RAM, CPU and SSDs, 250W remain for the HDDs... each will want 2 - 2.5A at 12V. Assuming 16A still available (48W subtracted) and 2A per disk, that is just enough for 8 of them. Once they are spinning, they only need about half of that. The much-praised be quiet! units, however, usually have two 12V rails (which on some models can apparently be bridged, but you have to know that). If you want to start with 8 disks right away and there is room for 12+ in the case, then go for a 500W unit... which will of course be more expensive. Without knowing how the build is meant to grow, however, it makes no sense to buy a larger PSU in advance. It would then initially operate in its inefficient range, and even 5W of extra draw costs about 13 EUR per year (5W x 24h x 365 x 0.30 EUR/kWh).
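The 12V budgeting above can be written as a quick check (a sketch; the helper name is mine, and the 16A remaining / 2A per disk figures are the post's own assumptions):

```shell
#!/bin/sh
# max_disks: how many HDDs fit in the remaining 12V budget, given the
# available amps and the per-disk spin-up draw in amps.
max_disks() {
  awk -v avail="$1" -v per_disk="$2" 'BEGIN { printf "%d\n", int(avail / per_disk) }'
}

# 20A rail minus 4A (48W at 12V) for board/CPU/RAM/SSDs leaves 16A;
# at 2A spin-up per disk:
max_disks 16 2   # prints 8
```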
    1 point
  27. Hey, I did the same thing earlier with the same result. I tried deleting the Big Sur image files and changing it to method 2, which then grabbed the correct Big Sur image file from Apple. So yeah, I'd say it's a workaround-able bug to be fixed when there is time.
    1 point
  28. Removing disks from the array is covered here in the online documentation accessible via the 'Manual' link at the bottom of the Unraid GUI.
    1 point
  29. When you followed the video in the 1st post step by step (starting from removing the "old" macinabox incl. the old template, adjusting the new docker, starting the docker, waiting a little until the download is done, running the VM-ready script, editing the VM helper script, starting the VM helper script, and starting the VM after the notification)... I would start from scratch and watch the tutorial video; it is very well explained. If it still breaks, I would look for an error at the point where it didn't do what you expected, like what comes up when you run the helper script.
    1 point
  30. Yes. Keep the original disk4 as it is in case there are any problems.
    1 point
  31. In the past I used Intel Quick Sync, and in addition I had to have this in my "go" file:
# Setup Intel HW pass-through for Plex transcoding
modprobe i915
chmod -R 777 /dev/dri
    1 point
  32. Amazing little container, this Pihole. It works like a charm and I have it set up as my secondary DNS in case my first one fails. I am planning to do an HA Pihole; there are a few tutorials out there. It seems to be in its infancy, but it looks promising. Any chance of updating the container with the latest web interface and FTL? I really appreciate your time in making this available to all of us. Cheers. PS: I also noticed that it seems to crash sometimes. I'm not sure, but I believe cloudflared is the culprit.
    1 point
  33. So I'm probably doing something wrong with my implementation of VMs, because I never really feel that a VM is good for any sort of actual productive work... it's just too slow, clunky, and less responsive than an actual PC. And video editing? Forget about it... in my view. But a lot of that may be due to the limited specs of my hardware, for sure. I'd probably say that by the sounds of it you do not need that ASRock board... it's a server board with server features that you probably don't need or want. I would save money on the board and maximize the CPU and RAM as much as you can; 64GB minimum, I'd say. As far as your old hardware... that i7 is about ready for the trash bin. It's decent enough for basic Unraid use, but probably not for what you intend to do, and it certainly doesn't compare to the Ryzens in benchmarks. I would not use it for Unraid with your desired use case of VMs. The graphics card might be good to have.
    1 point
  34. Just wanted to update this and mark it solved. The 10TB drive has been successfully added as the parity drive and the 6TB is now a data drive. Thanks JorgeB for the help!
    1 point
  35. It turns out that it was the Recycle Bin plugin I have installed. I guess I thought Move meant Move, but apparently it means Copy, Paste and then Delete. So even though the move of the file was completed, the data remained in the recycle bin.
    1 point
  36. ...no chance for ECC RAM there, as Intel skipped that feature for all 10th-gen desktop processors. Also, transcoding using the iGPU is currently not supported (yet). For speed and future upgrades, I'd look into an MB with support for 2 NVMe PCIe drives (for cache or a high-speed pool), but I think an mITX board might not have that. Also, MBs with S1200 often have a newer revision of the onboard i219-V NIC, which is only supported from Unraid 6.9 beta/RC onwards. In terms of the price point for the i5-10400 vs. the i5-10500, I doubt that you would feel a difference in performance that is worth the money (25%, where I live) today.
    1 point
  37. ...although the J4105 comes with hardware acceleration for encryption, my best guess is that this is what results in the high CPU load... RAM usage will also be a bit higher. I have an older AMD Opteron 3350HE (4 cores) driving an encrypted array of 11 disks + 1 cache... and without anything else running, RAM usage is approx. 20% of a total of 16GB. CPU load, however, sometimes goes up to 80/90% during writes to the array.
    1 point
  38. At the moment I am not prepared to implement an option that would auto-pause the parity check that happens after an unclean shutdown. The implementation I am currently testing will auto-pause a restarted array operation that was paused at the time of a shutdown, but that will only happen after a clean shutdown. As soon as an unclean shutdown is detected, the decision is to err on the side of safety. If I become convinced that auto-pausing the automated check after an unclean shutdown would be a desirable feature, then it could be added, but it is not going to be in the next release I make.
    1 point
  39. You have a bad memory stick, so the only prudent thing to do is to replace it.
    1 point
  40. Thank you @Squid for the awesome work. 👍
    1 point
  41. Followed your instructions above, @xthursdayx. Dude, it worked like a charm! Did all that stuff, and it all went off perfectly. Updated using the native interface, and it went swimmingly. Thank you very much! You made those changes to your container, so people moving forward should be good to go. Excellent! Now if Roon will sort out their other issues (remote listening, and a slew of other stuff, I'm sure), I'll be good. <meh>
    1 point
  42. Update, just because people have been asking: yes, I've completed the build. It sits inside an IKEA Alex with a button on the side connected to a Raspberry Pi to power on my gaming VM. It took a few days to set up GPU passthrough, but it's working fine now.
Build:
- ASRock B550 Extreme4
- AMD Ryzen 5 3600X (3.80GHz / 32MB), boxed
- Noctua NH-D15 chromax.black
- 2x Kingston 16GB DDR4-2666MHz ECC CL19
- Gigabyte GeForce GTX 1660 Super Gaming OC 6G
- be quiet! Dark Power Pro 11, 750W
Cache:
- Samsung 970 EVO NVMe M.2, 500GB, as the cache drive
- another 250GB M.2 I had lying around, for VMs
Array:
- new Seagate IronWolf 6TB for parity
- 2 old 6TB drives I had lying around
- 2 used Seagate IronWolf 4TB drives I had lying around
VMs and dockers running:
- gaming VM
- work VM
- 2-3 Linux servers to play around with
- hassio -> home automation
- Plex/nzbget/sonarr/radarr -> media
- unifi controller
    1 point
  43. I tried to get locast2plex to work on my server with no luck. I too would appreciate a simple way to set it up, or a YouTube video with step-by-step instructions.
    1 point
  44. I found a workaround: Specifically, this part at the end: "The it87 driver will now load on boot and your fan speeds will be displayed on the Unraid dashboard, and the fan controllers will be available in Dynamix Auto Fan Control. Warning: Setting acpi_enforce_resources to lax is considered risky for reasons explained here." Of course I didn't need the "video=efifb:off" part, so I just added "acpi_enforce_resources=lax" to my /boot/syslinux/syslinux.cfg, then "modprobe it87 force_id=0x8628" to my /boot/config/go. A little risky (see the link above), but otherwise it seems to be working well so far. I will monitor it for any instability. Just wanted to share my findings in case someone else finds this post later.
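Pieced together, the two edits described above land in these files (a sketch of the stock Unraid file locations; the append line shown is an example, so keep whatever other parameters your existing append line already has):

```shell
# /boot/syslinux/syslinux.cfg -- add the kernel parameter to the append line:
#   append acpi_enforce_resources=lax initrd=/bzroot

# /boot/config/go -- load the it87 driver at boot with the forced chip ID:
modprobe it87 force_id=0x8628
```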
    1 point
  45. @SpencerJ, any chance of possibly modifying the code tag to embed the spoiler tag simultaneously? That way when we tell people to post using the code button they are monospaced and collapsed, making the forum much cleaner to read through.
    1 point
  46. This is a personal choice. For me, I want to know the drive is OK before rebuilding/adding data to it (think about failed writes); that way I can confidently sell the old drive (assuming it's a replacement) once the rebuild has completed. Sending a failed drive back to WD sooner rather than later also sounds like a good idea to me, but to each their own :-). Originally the preclear script was designed to do one thing: preclear drives in readiness to be added to the array. This was deemed a good idea because it used to be the case that adding a new drive meant Unraid had to clear it, and that meant the whole array would be offline until the clear ended, taking many hours. Not good for the WAF! The script then got enhanced to do other stuff as well, such as stress testing the drive using various Linux utils. Roll forward in time, and preclearing a drive via Unraid is now done in the background, allowing the array to carry on working with no downtime, meaning there is no need to preclear. However, the need to stress test a drive is still present (imho), and thus the preclear script lives on.
    1 point