Leaderboard

Popular Content

Showing content with the highest reputation on 12/20/20 in all areas

  1. The latest Unraid blog highlights all of the major new changes in Community Applications 2020.12.14, including: new categories and filters, autocomplete improvements, a Repositories category/filter, and Community Applications now viewable on Unraid.net. As always, a big thanks to @Squid for this amazing Unraid community resource! https://unraid.net/blog/community-applications-update
    3 points
  2. After starting to play around with Unraid a couple of weeks ago, I decided to build a proper system. I want to share build progress and key learnings here. Key requirements have been: AMD system, plenty of CPU cores, low wattage, ECC memory, IPMI, good cooling (since the system sits in a warm closet), and prosumer build quality. Config: Case: Fractal Design Define 7; PSU: be quiet! Straight Power 550W; Board: ASRock Rack X570D4U w/ BIOS 1.00; CPU: Ryzen 9 3900 (65W, PN: 100-000000070) locked to 35W TDP thr
    2 points
  3. Click on the thumbs down and select acknowledge.
    2 points
  4. My NUC running Win 10 and Roon crashed, so I thought I would give Roon a try on Unraid, using your docker container (thanks!). I stumbled across this thread later, after having set things up. If I am reading this right, it sounds like if we use xthursdayx's updated template then we don't need to go through all the steps indicated above, so that things properly update when Roonlabs issues updates. How do I know if the updated template was present in Community Apps when I installed? When I look at the roonserver entry in Community Applications, it says "Added to CA: September 19, 2020." Does
    2 points
  5. After upgrading to 6.9.0-rc2, hard disk temperatures are not displayed.
    1 point
  6. Can someone create a template for tgorg's locast2plex docker? Also, if it's possible, add built-in OpenVPN support inside it so we can change the location of the IP address. I would really appreciate this! I'm trying to set it up through Docker Hub, but I think lots of people will find this container VERY useful!
    1 point
  7. It does call that to get temps for disks that are spun up. In your disk settings, what is the value of your poll attributes? Mine is 1800, with a spin-down delay of 15 minutes.
    1 point
  8. Tools - New Config will let you assign any disks however you want and then optionally and by default rebuilds parity based on the new assignments. In the case where you remove a data disk from the array you must rebuild parity.
    1 point
  9. Looks like it didn't upload to Github at all. It should be there now. Sorry about that.
    1 point
  10. That did the trick, Mordhau game server is back up and running. You're the best @ich777
    1 point
  11. It's something misconfigured within, say, Radarr / Sabnzbd that's creating the folder at /mnt/user
    1 point
  12. Ah, was worth a shot. I only have a single drive in my second pool, so I'm not sure if that makes a difference; it shouldn't. Sorry, I don't know what else to suggest. Maybe Limetech will find something. I don't use autofan; does that query the disks for temps?
    1 point
  13. Docker templates are on flash and they can be used to reinstall your dockers using the Previous Apps feature on the Apps page, but of course without appdata the applications themselves will be starting over.
    1 point
  14. If this is the case, is there a reason that you cannot shut down the server at a desired time and then set it to auto-power-on at another time? This is how I have set up my server, which never seemed to sleep reliably.
    1 point
  15. Dec 14 14:48:32 v1ew-s0urce kernel: [Hardware Error]: Corrected error, no action required.
      Dec 14 14:48:32 v1ew-s0urce kernel: [Hardware Error]: CPU:0 (17:71:0) MC27_STATUS[-|CE|MiscV|-|-|-|SyndV|-|-|-]: 0x982000000002080b
      Dec 14 14:48:32 v1ew-s0urce kernel: [Hardware Error]: IPID: 0x0001002e00000500, Syndrome: 0x000000005a020001
      Dec 14 14:48:32 v1ew-s0urce kernel: [Hardware Error]: Power, Interrupts, etc. Ext. Error Code: 2, Link Error.
      Dec 14 14:48:32 v1ew-s0urce kernel: [Hardware Error]: cache level: L3/GEN, mem/io: IO, mem-tx: GEN, part-proc: SRC (no timeout)
      IIRC, Ryzen has problems wi
    1 point
  16. If you want to keep the peak draw low, for heat dissipation, then keep the processor throttled as you described. If you want to conserve energy overall, then allow the processor to work as hard as it can so it completes the tasks sooner. The CPU is only a portion of the system power draw, and if the processor is throttled back it will take longer to accomplish the work given, thus keeping all the other parts of the system at full power for a longer period. As an example, imagine a 4K transcode of a large file. For the sake of the example, let's assume that forcing the p
    1 point
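The truncated worked example in the post above can be sketched with made-up numbers (all wattages and durations below are hypothetical, not from the post) to show why finishing sooner can cost less total energy:

```shell
#!/bin/sh
# Hypothetical figures: the rest of the system (drives, board, RAM) draws 60 W
# whenever it is busy. Full speed: CPU at 120 W finishes the transcode in 1 hour.
# Throttled: CPU at 60 W needs 3 hours for the same job.
full_wh=$(( (120 + 60) * 1 ))      # total energy at full speed, in Wh
throttled_wh=$(( (60 + 60) * 3 ))  # total energy throttled, despite lower peak draw
echo "full speed: ${full_wh} Wh, throttled: ${throttled_wh} Wh"
```

With these numbers, throttling halves the peak draw but doubles the total energy, because the fixed 60 W overhead runs three times as long.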
  17. edac_mce_amd has been included in Unraid for the last couple of years. The message is simply a reminder to everyone else in the world that the author(s) of mcelog have no idea how to properly word an informational sentence, or they are not native English speakers and utilized a TI-99/4A to translate the actual message into English. I.e., it's simply telling you that the mcelog default driver (Intel) doesn't support the chip, and it's automatically using the AMD module instead.
    1 point
  18. Hi @CS01-HS, my second pool spins down, but it didn't seem to when I first went from beta35 to rc1. I wasn't sure if that was due to me setting a spin-down timer on the test pool drive, as I had disabled spin-downs so my SAS drives would use timers. Now that I have the timer set for the main pool, I changed the second pool to use the default, and it spins down regardless of the setting. Try changing the value on the pool and then changing it back to see if that kicks it into life, but it may have just been coincidence for me.
    1 point
  19. It is quite possible that you do not have Krusader configured to set the permissions needed for access via the network. If this is the case, then running Tools => New Permissions on the share in question will rectify the permissions.
    1 point
  20. Hi, I am going to use a GigaBlue box for Sat>IP streaming. With my old hardware there are too many problems with my DVB receivers. Thanks for your help.
    1 point
  21. It's not so much the features that make it expensive; it's the fact that it's server hardware. Server hardware is just more expensive, even if it has fewer features than consumer hardware. One argument, depending on whether you believe it or not, is that server hardware is "more stable" than consumer hardware in a server environment. And some of that may go back to the ability to use ECC RAM when consumer boards didn't have an option for ECC... a lot of consumer boards do these days, especially for AMD chips. Look at the comparison between specs for these two board
    1 point
  22. This error is due to the absence of the 'nvme-cli' package in the container, and thus the 'nvme' command. You have to install the missing package through the "Post Arguments" parameter of the container (Edit/Advanced View). Here's the content of my "Post Arguments" param for reference, properly working with NVMe devices, to be adapted of course to your specific configuration if required: /bin/sh -c 'apt-get update && apt-get -y upgrade && apt-get -y install ipmitool && apt-get -y install smartmontools && apt-get -y install lm-sensors && apt-get
    1 point
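The full "Post Arguments" string in the post above is truncated; a minimal sketch that installs only the package the error is about ('nvme-cli', per the post) might look like this, to be adapted to your container's needs:

```shell
# Hypothetical minimal "Post Arguments" value (container Edit -> Advanced View).
# Only nvme-cli is strictly needed here, since it provides the 'nvme' command;
# the poster's full string also adds ipmitool, smartmontools and lm-sensors.
/bin/sh -c 'apt-get update && apt-get -y install nvme-cli'
```

Note this assumes a Debian/Ubuntu-based container image; on other bases the package manager and package name may differ.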
  23. As I mentioned above, this is not my work; I only compiled it and made a package for Unraid so that it installs correctly and is user-friendly. Please look at the GitHub repo that I've linked above. From my understanding this is only the kernel module for the NCT6687, so that the temps and fans are recognized and you can control the fans. Also please note that, with different implementations from different manufacturers, not everything can/will work correctly. If it gives you this error, then something went wrong in the installation of the package and/or depmod.
    1 point
  24. This will not work then, mate; you should have mentioned this. You need the WMI ASUS plugin.
    1 point
  25. Check the MB manual properly... for many boards, if a second NVMe-PCIe drive or SSD is installed, one or even two SATA ports will be disabled.
    1 point
  26. The blue ones are Xpenology on the same hardware. With small files it looks like the Synology software is 30 times faster, while on big files it is comparable. Though there is one great result I measured when copying from Mac to Unraid on HDD that I cannot explain.
    1 point
  27. Definitely right and important. The linked one is technically OK; it has only a single 12V rail, so it can distribute its 20A freely across the components/all cables. If you subtract 45W at start-up for MB, RAM, CPU and SSDs, 250W remain for the HDDs... each will want 2-2.5A at 12V. Assuming 16A still available (48W subtracted) and 2A per disk, that is just barely enough for 8 of them. Once they are spinning, they only need half of that. But the much-praised be quiet! units in particular usually have 2 rails for 12V (which on some models can apparently be combined
    1 point
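The 12V budget in the post above (single rail rated 20A, roughly 4A already taken at start-up, 2A per HDD at spin-up, all figures from the post) works out as follows:

```shell
#!/bin/sh
# Figures from the post: single 12V rail rated 20A, about 48W (4A at 12V)
# already taken by MB/CPU/RAM/SSDs at start-up, and 2A per HDD while spinning up.
rail_a=20
base_a=4
per_disk_a=2
max_disks=$(( (rail_a - base_a) / per_disk_a ))
echo "disks that can spin up at once: ${max_disks}"
```

Once the disks have spun up they draw roughly half of that, so the spin-up moment is the constraint.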
  28. Hey, I did the same thing earlier with the same result. I tried deleting the Big Sur image files and changing it to method 2, which then grabbed the correct Big Sur image file from Apple. So yeah, I'd say it's a workaround-able bug to be fixed when there is time.
    1 point
  29. Removing disks from the array is covered here in the online documentation accessible via the 'Manual' link at the bottom of the Unraid GUI.
    1 point
  30. When you followed the video in the 1st post step by step, starting from removing the "old" Macinabox incl. the old template, adjusting the new docker, starting the docker, waiting a little until the download is done, running the VM-ready script, editing the VM helper script, starting the VM helper script, and starting the VM after the notification... I would start from scratch and watch the tutorial video; it's very well explained. If it's still broken, I would look for an error at the point where it didn't do what you expected, like what comes up when you run the helper script.
    1 point
  31. Server has almost been up for 2 days, longest so far. Pretty sure this fixed it. I am running the memory at 2133 as it is single rank. Thanks for the help!
    1 point
  32. Yes. Keep the original disk4 as it is in case there are any problems.
    1 point
  33. In the past I used Intel Quick Sync, and in addition I had to have this in my "go" file: # Setup Intel HW pass-through for Plex transcoding modprobe i915 chmod -R 777 /dev/dri
    1 point
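Laid out as they would appear in the file, the go-file lines quoted above are:

```shell
#!/bin/sh
# Excerpt for /boot/config/go, as described in the post:
# Setup Intel HW pass-through for Plex transcoding
modprobe i915          # load the Intel iGPU driver
chmod -R 777 /dev/dri  # let the Plex container access the render device
```

This mirrors the poster's setup; the blanket chmod is deliberately permissive, so treat it as their workaround rather than a general recommendation.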
  34. Amazing little container, this Pihole. Works like a charm, and I have it set up as my secondary DNS in case my first one fails. Planning to do an HA Pihole; there are a few tutorials out there. It seems to be in its infancy, but it looks promising. Any chance to update the container with the latest web interface and FTL? I really appreciate your time for making this available to all of us. Cheers. PS: Also noticed that it seems to be crashing sometimes. Not sure, but I believe cloudflared is the culprit.
    1 point
  35. So I'm probably doing something wrong with my implementation of VMs, because I never really feel that a VM is good for any sort of actual productive work... it's just too slow and clunky and less responsive than an actual PC. And video editing? Forget about it... in my view. But a lot of that may be due to the limited specs of my hardware, for sure. I'd probably say that by the sounds of it you do not need that ASRock board: it's a server board with server features that you probably don't need or want. I would save money on the board and maximize the CPU and
    1 point
  36. Just wanted to update this and mark it solved. The 10TB drive has been successfully added as the parity drive and the 6TB is now a data drive. Thanks JorgeB for the help!
    1 point
  37. It turns out that it was the Recycle Bin plugin I have installed. I guess I thought Move, meant Move. But apparently it means Copy, Paste and then Delete. So even though the move of the file was completed the data remained in the recycle bin.
    1 point
  38. ...no chance for ECC RAM there, as Intel skipped that feature for all 10th-gen desktop processors. Also, transcoding using the iGPU is not currently supported (yet). For speed and future upgrades, I'd look into a MB with support for 2 NVMe-PCIe drives (for a cache or high-speed pool), but I think an mITX might not have that. Also, MBs with S1200 often have a newer revision of the onboard i219-V NIC, which is only supported from Unraid 6.9 beta/RC onwards. In terms of the price point for the i5-10400 vs i5-10500, I doubt that you would feel a difference in performance that
    1 point
  39. At the moment I am not prepared to implement an option that would auto-pause the parity check that happens after an unclean shutdown. The implementation I am currently testing will auto-pause a restarted array operation that was paused at the time of a shutdown, but that will only happen after a clean shutdown. As soon as an unclean shutdown is detected, the decision is to err on the side of safety. If I become convinced that auto-pausing the automated check after an unclean shutdown is a feature that would be desirable, then it could be added, but it is not going to b
    1 point
  40. Thank you @Squid for the awesome work. 👍
    1 point
  41. Followed your instructions above @xthursdayx Dude, worked like a charm! Did all that stuff, and it all went off perfectly. Updated using native interface, and it went swimmingly. Thank you very much! You made those changes to your container, so people moving forward should be good to go. Excellent! Now if Roon will sort out their other issues: remote listening, and a slew of other stuff I'm sure. I'll be good. <meh>
    1 point
  42. Someone's posted a ticket for it... Paper 1.17 Java Update
    1 point
  43. Update, just because people have been asking: yes, I've completed the build. It sits inside an IKEA Alex with a button on the side connected to a Raspi to power on my gaming VM. It took a few days to set it up with GPU passthrough, but it's working fine now. Build: ASRock B550 Extreme4; AMD Ryzen 5 3600X (3.80GHz / 32MB) - boxed; Noctua NH-D15 chromax.black; 2x Kingston 16GB DDR4-2666MHz ECC CL19; Gigabyte GeForce GTX 1660 Super Gaming OC 6G; be quiet! Dark Power Pro 11, 750W. Cache: Samsung 970 EVO NVMe M.2 - 500GB as a cache drive, plus another 250GB M.2 I've had around for
    1 point
  44. Hi all, just want to share my findings about Unraid notifications. My notification settings are based on Gmail. This how-to will enable the user to send email notifications from Gmail to a Yahoo email address. If you like my how-to, then make it a sticky. Thank you.🙂 ======================================================================== Requirements: A) Set up a Gmail account. This account will be the SENDER's email address << Assumption: you have set up 2-step authentication via
    1 point
  45. I found a workaround: Specifically, this part at the end: The it87 driver will now load on boot and your fan speeds will be displayed on the Unraid dashboard, and the fan controllers will be available in Dynamix Auto Fan Control. Warning: Setting acpi_enforce_resources to lax is considered risky for reasons explained here. Of course I didn't need the "video=efifb:off" part, so I just added "acpi_enforce_resources=lax" to my /boot/syslinux/syslinux.cfg, then "modprobe it87 force_id=0x8628" to my /boot/config/
    1 point
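A sketch of the workaround described above, using the same chip ID (0x8628) the poster mentions (yours may differ), and carrying the same acpi_enforce_resources=lax risk the post warns about:

```shell
# 1) In /boot/syslinux/syslinux.cfg, add the kernel parameter to the append line
#    of the default boot entry (considered risky, per the warning above):
#      append acpi_enforce_resources=lax initrd=/bzroot
# 2) After a reboot, force-load the sensor driver, e.g. from /boot/config/go:
modprobe it87 force_id=0x8628
```

The force_id value selects the Super I/O chip variant; check which chip your board actually has before reusing the poster's ID.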
  46. This is a personal choice. For me, I want to know the drive is OK before rebuilding/adding data to it (think about failed writes); that way I can confidently sell the old drive (assuming it's a replacement) once the rebuild has completed. Sending a failed drive back to WD sooner rather than later also sounds like a good idea to me, but to each their own :-). Originally the preclear script was designed to do one thing: preclear drives in readiness to be added to the array. This was deemed a good idea, as it used to be the case that adding a new drive meant Unraid had to preclear it,
    1 point