Popular Content

Showing content with the highest reputation on 03/22/20 in all areas

  1. 5 points
    Hi everyone: I am Squid's wife. I just wanted everyone to know he will be 50 on Sunday, March 22nd. If you all can wish him a happy birthday, that would be great. Due to COVID-19 - no party. Thanks, Tracey
  2. 2 points
    Just caught onto this today (Thx @SpaceInvaderOne !), saw we're "only" #2, which just won't do. Just "remembered" I have a Threadripper 2950x new in box - was going to sell the old dual Xeon E5 V2s and upgrade, but now going to bring this out & join the fray with the i9-9900 Hackintosh [AMD 580] and Ryzen 3700x. The Threadripper will have to go "benchtop bare" for now, but that's OK. Should probably just use the office for a sauna now 🥵. Think the UPS is sweating a tad...

    I am regional medical director for a company that does home medical visits on the sickest of the (US Medicare) population, i.e. top-tier risk for COVID, avg. patient age 80+. We have offices in all the top affected cities in the US so far. We're working nonstop to try to keep our patients safe at home. We've had to retreat temporarily to mostly telephonic visits due to a shortage of PPE (protective gear) until our supply improves, so we don't spread it to them - very frustrating. Now I can feel better about being stuck at home, still helping on the compute side as well until we get to be back safely in their homes.

    I wanted to thank everyone here for being so eager to take part / take action, and with such impressive results. It means a lot in the medical world to see folks being resourceful and doing their part. Please stay home, stay safe, and round up some more CPUs for this!
  3. 2 points
    The ease to set it up with docker probably played a huge role in that. It's set-it-and-forget-it (quite literally, this morning I was wondering who was watching Plex at home around 5am and then realised Plex and BOINC use the same cores 😅)
  4. 2 points
    BOINC team coming in at #2 in the world!!! https://boinc.bakerlab.org/rosetta/top_teams.php
  5. 2 points
    https://www.tomshardware.com/news/folding-at-home-worlds-top-supercomputers-coronavirus-covid-19 https://cointelegraph.com/news/foldinghome-surpasses-400-000-users-amid-crypto-contribution
  6. 1 point
    Request to have the current 30 data drive (28+2) limit increased. My specific application is a media server, and with the advent of 4K videos, the need for storage has almost tripled per video. I've hit the maximum 30 drives, and as I replace my smaller 4TB and 6TB models with 8 or 10TB ones, I cannot reutilize those 4TBs within the same data drive pool; cache drives serve no real purpose for me on this server. Speaking for myself, I am willing to pay an upgrade or higher-tier license fee for the ability to go beyond 30 data drives.
  7. 1 point
    Use another flash drive, preferably USB 2.0, 32GB or less. Format it as FAT32. Also, preferably boot from a USB 2.0 port.
  8. 1 point
    Disclosure: the upcoming version of Unraid supports multiple cache pools and allows the user to create as many cache pools as needed. Each pool can consist of 1 to 30 devices, and with a Pro license you are truly unlimited in the number of devices you can use.
  9. 1 point
    Happiest of Birthdays Squid!
  10. 1 point
    I was able to resolve the issue thanks to the help provided. To anyone else coming here like I did: before you try the suggestion of using the '../cache/..' directory, remove the 'garrysmod' directory from your appdata so that the container does a fresh install of the game files. No idea why it worked, but it does, and that's what's important.
  11. 1 point
    Or alternatively, https://www.easeus.com/ running in Windows. Inexpensive compared to the wrath of your wife for the loss of the file(s), and works great (The free trial will give you an idea of what it will manage to recover)
  12. 1 point
    If you click on the orange icon for the drive on the Dashboard then you will get a menu of which one option is to acknowledge the error. You then only get notified again if it changes.
  13. 1 point
    Happy birthday squid!!
  14. 1 point
    Happy birthday. Thank you very much for all your community contributions.
  15. 1 point
  16. 1 point
    With PCIe 3.0 controllers like the ones I recommended you'll get around 2200MB/s usable per port; divide that by the number of connected disks. You can see some more performance numbers here.
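That division works out as follows (a toy even-split calculation, assuming the ~2200MB/s usable figure above; real numbers vary with the controller and disks):

```python
def per_disk_bandwidth(usable_port_mb_s: float, n_disks: int) -> float:
    """Even split of a port's usable bandwidth across its connected disks."""
    return usable_port_mb_s / n_disks

# e.g. ~2200MB/s usable on a PCIe 3.0 port shared by 4 disks
print(per_disk_bandwidth(2200, 4))  # → 550.0
```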
  17. 1 point
    Yep! Makes you wonder what else she got into. 🤣 Happy Birthday!
  18. 1 point
    Or another of my favorites:
  19. 1 point
    have a great one!
  20. 1 point
    That looks about right to me considering the SSDs used; you need faster 3D TLC SSDs (860 EVO, MX500, WD Blue 3D, etc.), and also higher-capacity models, at least 250GB, to do better than that.
  21. 1 point
    Thank you for the quick reply! I was running MariaDB set to bridge and Nextcloud set to br0. Tried with both running in br0, but same issue... then changed both to bridge, and everything seems to work now.
  22. 1 point
    That is possible if you have a server-class motherboard with IPMI support built in (in data centres, which are frequently managed remotely, this is a high-value capability). It is not likely to be possible with a typical desktop/gaming-type motherboard.
  23. 1 point
    Or for the gamer, 'I'm not getting old I'm just levelling up' [emoji16] Sent from my CLT-L09 using Tapatalk
  24. 1 point
    I always say it's just one day older than yesterday.
  25. 1 point
    They utilize the cache pool or Unassigned Devices, neither of which uses super.dat.
  26. 1 point
    Half a century already?! Congratulations for that and best wishes for the next half 😁
  27. 1 point
    @Squid - Obviously I don't know you personally, but Happy Birthday anyway, and thanks for all that you do for the community. It is very much appreciated. Would love to see a pic including moose antlers...
  28. 1 point
    Well, the "almost" is the fact that I USED MY OWN DNS and config; I'm sorry if this annoyed you. Other than that, exactly what was in the video. Thanks for reading... I got it working. Guess I'll use a more "newb" solution in the future.
  29. 1 point
    Happy B-day Squid !! 🍻
  30. 1 point
    Hey, that's a network error. The Nextcloud container cannot reach the host port. Are you running Nextcloud on br0, bridge mode, or host mode? The best way is to have them both in a bridge or a br0 network. Try to ping from within the Nextcloud container: docker exec -it nextcloud ping [IP] I am running it with a MySQL DB and it is working well. Cheers
  31. 1 point
    Happy birthday@Squid the big 'five o' eh its just a number, just keep chanting that [emoji16] Sent from my CLT-L09 using Tapatalk
  32. 1 point
    Happy birthday Squid!! 🎂🎂 No idea you were such an old guy , though I'm not that far behind you.
  33. 1 point
    Hello Ich777, thank you so much, it works like a charm. Have a good day and stay well.
  34. 1 point
    Happy Birthday! Sent from my SM-G973F using Tapatalk
  35. 1 point
    Happy Birthday Squid. Agree with trurl, 50 is painless. Well, as long as you exclude the pain you start to get in your joints.
  36. 1 point
    In the template, where the app ID is, type in something like this: 294420 -beta alpha18.3 This should do the trick, but I'm not 100% sure because I don't own the game; this was already discussed in this topic but I couldn't remember where it was. Also don't forget to set validate to true.
  37. 1 point
    Yes, that is the case with many ASRock Rack server boards. It depends on the CPU socket and chipset, but many of them have no onboard audio. I don't think I have seen onboard audio on any socket 1150 or 1151 server motherboards. Xeon W and Threadripper server motherboards do have onboard audio. Some Supermicro server boards have onboard audio, including socket 1151 for the Xeon 2100/2200. The ASRock Rack "workstation" boards have onboard audio, but none of them have IPMI. Any audio device (onboard or otherwise) would show up in the IOMMU groups for the board.

    IOMMU group 0: [8086:3e31] 00:00.0 Host bridge: Intel Corporation Device 3e31 (rev 0d)
    IOMMU group 1: [8086:1901] 00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor PCIe Controller (x16) (rev 0d)
                   [8086:1905] 00:01.1 PCI bridge: Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor PCIe Controller (x8) (rev 0d)
                   [1000:0072] 02:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)
    IOMMU group 2: [8086:3e9a] 00:02.0 Display controller: Intel Corporation Device 3e9a (rev 02)
    IOMMU group 3: [8086:1911] 00:08.0 System peripheral: Intel Corporation Xeon E3-1200 v5/v6 / E3-1500 v5 / 6th/7th/8th Gen Core Processor Gaussian Mixture Model
    IOMMU group 4: [8086:a379] 00:12.0 Signal processing controller: Intel Corporation Cannon Lake PCH Thermal Controller (rev 10)
    IOMMU group 5: [8086:a36d] 00:14.0 USB controller: Intel Corporation Cannon Lake PCH USB 3.1 xHCI Host Controller (rev 10)
                   [8086:a36f] 00:14.2 RAM memory: Intel Corporation Cannon Lake PCH Shared SRAM (rev 10)
    IOMMU group 6: [8086:a368] 00:15.0 Serial bus controller [0c80]: Intel Corporation Cannon Lake PCH Serial IO I2C Controller #0 (rev 10)
                   [8086:a369] 00:15.1 Serial bus controller [0c80]: Intel Corporation Cannon Lake PCH Serial IO I2C Controller #1 (rev 10)
    IOMMU group 7: [8086:a360] 00:16.0 Communication controller: Intel Corporation Cannon Lake PCH HECI Controller (rev 10)
                   [8086:a361] 00:16.1 Communication controller: Intel Corporation Device a361 (rev 10)
                   [8086:a364] 00:16.4 Communication controller: Intel Corporation Cannon Lake PCH HECI Controller #2 (rev 10)
    IOMMU group 8: [8086:a352] 00:17.0 SATA controller: Intel Corporation Cannon Lake PCH SATA AHCI Controller (rev 10)
    IOMMU group 9: [8086:a340] 00:1b.0 PCI bridge: Intel Corporation Cannon Lake PCH PCI Express Root Port #17 (rev f0)
    IOMMU group 10: [8086:a338] 00:1c.0 PCI bridge: Intel Corporation Cannon Lake PCH PCI Express Root Port #1 (rev f0)
    IOMMU group 11: [8086:a330] 00:1d.0 PCI bridge: Intel Corporation Cannon Lake PCH PCI Express Root Port #9 (rev f0)
    IOMMU group 12: [8086:a331] 00:1d.1 PCI bridge: Intel Corporation Cannon Lake PCH PCI Express Root Port #10 (rev f0)
    IOMMU group 13: [8086:a332] 00:1d.2 PCI bridge: Intel Corporation Cannon Lake PCH PCI Express Root Port #11 (rev f0)
    IOMMU group 14: [8086:a328] 00:1e.0 Communication controller: Intel Corporation Cannon Lake PCH Serial IO UART Host Controller (rev 10)
    IOMMU group 15: [8086:a309] 00:1f.0 ISA bridge: Intel Corporation Cannon Point-LP LPC Controller (rev 10)
                   [8086:a323] 00:1f.4 SMBus: Intel Corporation Cannon Lake PCH SMBus Controller (rev 10)
                   [8086:a324] 00:1f.5 Serial bus controller [0c80]: Intel Corporation Cannon Lake PCH SPI Controller (rev 10)
    IOMMU group 16: [144d:a808] 04:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM981/PM981/PM983
    IOMMU group 17: [8086:1533] 05:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)
    IOMMU group 18: [1a03:1150] 06:00.0 PCI bridge: ASPEED Technology, Inc. AST1150 PCI-to-PCI Bridge (rev 04)
                   [1a03:2000] 07:00.0 VGA compatible controller: ASPEED Technology, Inc. ASPEED Graphics Family (rev 41)
    IOMMU group 19: [8086:1533] 08:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)

    You may have to add a PCIe audio card to get audio in a VM.
  38. 1 point
    I remember turning 50. It was relatively painless.
  39. 1 point
    Happy birthday @Squid Since I'm in the future, I can already wish you a happy birthday 🎂 I'm going to have a big party, as usual all alone, for my own birthday on monday...
  40. 1 point
    I have to correct you there. We do have a crystal ball, but it's still in the repair shop waiting for the not in stock part.
  41. 1 point
    New Config will only (optionally) rebuild parity. If you think parity is already valid then you can check a box when you go to start the array telling it parity is valid and it won't even rebuild parity. No other disk will be changed in any case. The reason for the warning is because if someone has a data disk that needs to be rebuilt, then doing New Config makes it forget that data disk needs to be rebuilt. The main danger from New Config is accidentally assigning a data disk to the parity slot, thus overwriting that data with parity.
  42. 1 point
    Create a new trial key for the new system. After everything's all done, use the existing key (and reassign the drives accordingly), or transfer the licence from the old key to the trial.
  43. 1 point
    This is fine, but it honestly has me reconsidering Unraid as a viable platform for me. A bit too late in the game now, as it would require substantial financial resources to migrate to a different platform. I know neither you nor CA is affiliated with Limetech or Unraid, but it's a pretty integral part of the user experience, and it's now tainted. And it's taken since last Wednesday for me to come to this decision, hence the long gap between the event and me posting about it. I realized it happened; I had to think long and hard about how I wanted to proceed.
  44. 1 point
    I'm not sure which application in the Community Apps library was responsible for the popup alert about COVID-19 support, but I will be uninstalling all CA packages as a result. It was invasive and I'm not okay with that. I get that it was a good gesture, and it's got some serious circumstances behind it - but I like it when other people aren't touching my systems. I'm even okay with pinning the apps to the top of the list like they are, just not the invasive nature of the popup and warning banner. It felt like I was visiting a webpage with my adblocker off.
  45. 1 point
    That's "normal". Unraid can't reset the card after the VM stops using it, but it can reset it after a full server reboot. So you're stuck in a situation where any time you power down the VM, you need to reboot the server before powering on a VM that uses the card. If your BIOS is up to date, do a search for 127 errors. There are a LOT of posts on here about that.
  46. 1 point
    I'm now on Q35 v2.12 and have added "pcie_no_flr=1022:149c,1022:1487". Two shutdowns and starts without any issue. I will test again later.
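For anyone wondering where that parameter goes: on Unraid, kernel options like this are typically added to the append line in /boot/syslinux/syslinux.cfg (a sketch assuming the stock boot label; the device IDs are the ones from the post above, so substitute your own):

```
label Unraid OS
  menu default
  kernel /bzimage
  append pcie_no_flr=1022:149c,1022:1487 initrd=/bzroot
```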
  47. 1 point
    I had the opportunity to test the "real world" bandwidth of some commonly used controllers in the community, so I'm posting my results in the hope that they may help some users choose a controller and others understand what may be limiting their parity check/sync speed. Note that these tests are only relevant for those operations; normal reads/writes to the array are usually limited by hard disk or network speed.

    Next to each controller is its maximum theoretical throughput, and my results depending on the number of disks connected. The result is the observed parity check speed using a fast SSD-only array with Unraid v6.1.2 (SASLP and SAS2LP tested with v6.1.4 due to performance gains compared with earlier releases). Values in green are the measured controller power consumption with all ports in use.

    2 Port Controllers

    SIL 3132 PCIe gen1 x1 (250MB/s)
    1 x 125MB/s
    2 x 80MB/s

    Asmedia ASM1061 PCIe gen2 x1 (500MB/s) - e.g., SYBA SY-PEX40039 and other similar cards
    1 x 375MB/s
    2 x 206MB/s

    JMicron JMB582 PCIe gen3 x1 (985MB/s) - e.g., SYBA SI-PEX40148 and other similar cards
    1 x 570MB/s
    2 x 450MB/s

    4/5 Port Controllers

    SIL 3114 PCI (133MB/s)
    1 x 105MB/s
    2 x 63.5MB/s
    3 x 42.5MB/s
    4 x 32MB/s

    Adaptec AAR-1430SA PCIe gen1 x4 (1000MB/s)
    4 x 210MB/s

    Marvell 9215 PCIe gen2 x1 (500MB/s) - 2w - e.g., SYBA SI-PEX40064 and other similar cards (possible issues with virtualization)
    2 x 200MB/s
    3 x 140MB/s
    4 x 100MB/s

    Marvell 9230 PCIe gen2 x2 (1000MB/s) - 2w - e.g., SYBA SI-PEX40057 and other similar cards (possible issues with virtualization)
    2 x 375MB/s
    3 x 255MB/s
    4 x 204MB/s

    JMicron JMB585 PCIe gen3 x2 (1970MB/s) - e.g., SYBA SI-PEX40139 and other similar cards
    2 x 570MB/s
    3 x 565MB/s
    4 x 440MB/s
    5 x 350MB/s

    8 Port Controllers

    Supermicro AOC-SAT2-MV8 PCI-X (1067MB/s)
    4 x 220MB/s (167MB/s*)
    5 x 177.5MB/s (135MB/s*)
    6 x 147.5MB/s (115MB/s*)
    7 x 127MB/s (97MB/s*)
    8 x 112MB/s (84MB/s*)
    *on PCI-X 100MHz slot (800MB/s)

    Supermicro AOC-SASLP-MV8 PCIe gen1 x4 (1000MB/s) - 6w
    4 x 140MB/s
    5 x 117MB/s
    6 x 105MB/s
    7 x 90MB/s
    8 x 80MB/s

    Supermicro AOC-SAS2LP-MV8 PCIe gen2 x8 (4000MB/s) - 6w
    4 x 340MB/s
    6 x 345MB/s
    8 x 320MB/s (205MB/s*, 200MB/s**)
    *on PCIe gen2 x4 (2000MB/s)
    **on PCIe gen1 x8 (2000MB/s)

    Dell H310 PCIe gen2 x8 (4000MB/s) - 6w - LSI 2008 chipset, results should be the same as the IBM M1015 and other similar cards
    4 x 455MB/s
    6 x 377.5MB/s
    8 x 320MB/s (190MB/s*, 185MB/s**)
    *on PCIe gen2 x4 (2000MB/s)
    **on PCIe gen1 x8 (2000MB/s)

    LSI 9207-8i PCIe gen3 x8 (4800MB/s) - 9w - LSI 2308 chipset
    8 x 525MB/s+ (*)

    LSI 9300-8i PCIe gen3 x8 (4800MB/s with the SATA3 devices used for this test) - LSI 3008 chipset
    8 x 525MB/s+ (*)
    *used SSDs' maximum read speed

    SAS Expanders

    HP 6Gb (3Gb SATA) SAS Expander - 11w
    Single Link on Dell H310 (1200MB/s*)
    8 x 137.5MB/s
    12 x 92.5MB/s
    16 x 70MB/s
    20 x 55MB/s
    24 x 47.5MB/s
    Dual Link on Dell H310 (2400MB/s*)
    12 x 182.5MB/s
    16 x 140MB/s
    20 x 110MB/s
    24 x 95MB/s
    *half of the 6Gb bandwidth, because it only links at 3Gb with SATA disks

    Intel RAID SAS2 Expander RES2SV240 - 10w
    Single Link on Dell H310 (2400MB/s)
    8 x 275MB/s
    12 x 185MB/s
    16 x 140MB/s (112MB/s*)
    20 x 110MB/s (92MB/s*)
    Dual Link on Dell H310 (4000MB/s)
    12 x 205MB/s
    16 x 155MB/s (185MB/s**)
    Dual Link on LSI 9207-8i (4800MB/s)
    16 x 275MB/s

    LSI SAS3 expander (included on a Supermicro BPN-SAS3-826EL1 backplane)
    Single Link on LSI 9300-8i (tested with SATA3 devices; max usable bandwidth would be 2200MB/s, but with LSI's Databolt technology we can get almost SAS3 speeds)
    8 x 475MB/s
    12 x 340MB/s
    Dual Link on LSI 9300-8i (tested with SATA3 devices; max usable bandwidth would be 4400MB/s, but with LSI's Databolt technology we can get almost SAS3 speeds; the limit here is going to be the PCIe 3.0 slot, around 6000MB/s usable)
    10 x 510MB/s
    12 x 460MB/s
    *Avoid using disks with slower link speeds on expanders, as this brings the total speed down; in this example 4 of the SSDs were SATA2 instead of all SATA3.
    **Two different boards give consistently different results; I will need to test a third one to see what's normal. 155MB/s is the max on a Supermicro X9SCM-F, 185MB/s on an ASRock B150M-Pro4S.

    SATA 2 vs SATA 3

    I often see users on the forum asking if changing to SATA 3 controllers or disks would improve their speed. SATA 2 has enough bandwidth (between 265 and 275MB/s according to my tests) for the fastest disks currently on the market. If buying a new board or controller you should buy SATA 3 for the future, but except for SSD use there's no gain in changing your SATA 2 setup to SATA 3.

    Single vs. Dual Channel RAM

    In arrays with many disks, and especially with low-"horsepower" CPUs, memory bandwidth can also have a big effect on parity check speed. Obviously this will only make a difference if you're not hitting a controller bottleneck. Two examples with 24-drive arrays:

    Asus A88X-M PLUS with AMD A4-6300 dual core @ 3.7GHz
    Single Channel - 99.1MB/s
    Dual Channel - 132.9MB/s

    Supermicro X9SCL-F with Intel G1620 dual core @ 2.7GHz
    Single Channel - 131.8MB/s
    Dual Channel - 184.0MB/s

    DMI

    There is another bus that can be a bottleneck for Intel-based boards, much more so than SATA 2: the DMI that connects the south bridge or PCH to the CPU. Sockets 775, 1156 and 1366 use DMI 1.0; sockets 1155, 1150 and 2011 use DMI 2.0; socket 1151 uses DMI 3.0.

    DMI 1.0 (1000MB/s)
    4 x 180MB/s
    5 x 140MB/s
    6 x 120MB/s
    8 x 100MB/s
    10 x 85MB/s

    DMI 2.0 (2000MB/s)
    4 x 270MB/s (SATA2 limit)
    6 x 240MB/s
    8 x 195MB/s
    9 x 170MB/s
    10 x 145MB/s
    12 x 115MB/s
    14 x 110MB/s

    DMI 3.0 (3940MB/s)
    6 x 330MB/s (onboard SATA only*)
    10 x 297.5MB/s
    12 x 250MB/s
    16 x 185MB/s
    *Despite being DMI 3.0, Skylake, Kaby Lake and Coffee Lake chipsets have a max combined bandwidth of approximately 2GB/s for the onboard SATA ports.

    DMI 1.0 can be a bottleneck using only the onboard SATA ports. DMI 2.0 can limit users with all onboard ports used plus an additional controller, onboard or on a PCIe slot that shares the DMI bus. On most home-market boards only the graphics slot connects directly to the CPU and all other slots go through the DMI (more top-of-the-line boards, usually with SLI support, have at least 2 such slots); server boards usually have 2 or 3 slots connected directly to the CPU, and you should always use these slots first. You can see below the diagram for my X9SCL-F test server board; for the DMI 2.0 tests I used the 6 onboard ports plus one Adaptec 1430SA on PCIe slot 4.

    UMI (2000MB/s) - used on most AMD APUs, equivalent to Intel DMI 2.0
    6 x 203MB/s
    7 x 173MB/s
    8 x 152MB/s

    Ryzen link - PCIe 3.0 x4 (3940MB/s)
    6 x 467MB/s (onboard SATA only)

    I think there are no big surprises: most results make sense and are in line with what I expected, except maybe for the SASLP, which should have the same bandwidth as the Adaptec 1430SA but is clearly slower and can limit a parity check with only 4 disks. I expect some variation in the results from other users due to different hardware and/or tunable settings, but I would be surprised if there are big differences; reply here if you can get a significantly better speed with a specific controller.

    How to check and improve your parity check speed

    System Stats from the Dynamix V6 plugins is usually an easy way to find out if a parity check is bus limited. After the check finishes, look at the storage graph: on an unlimited system it should start at a higher speed and gradually slow down as it reaches the disks' slower inner tracks; on a limited system the graph will be flat at the beginning, or totally flat in the worst-case scenario. See the screenshots below for examples (arrays with mixed disk sizes will have speed jumps at the end of each one, but the principle is the same).

    If you are not bus limited but still find your speed low, there are a couple of things worth trying:

    Diskspeed - your parity check speed can't be faster than your slowest disk. A big advantage of Unraid is the possibility to mix disks of different sizes, but this can lead to an assortment of disk models and sizes; use this to find your slowest disks, and when it's time to upgrade, replace those first.

    Tunables Tester - on some systems this can increase the average speed by 10 to 20MB/s or more; on others it makes little or no difference.

    That's all I can think of, all suggestions welcome.
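The bottleneck behaviour described above (a parity check runs at the slower of the disks themselves or the shared bus divided across them) can be sketched as a rough model (my simplification for illustration, not the author's measurement method):

```python
def parity_check_ceiling(bus_mb_s: float, n_disks: int,
                         slowest_disk_mb_s: float) -> float:
    """Rough ceiling on parity check speed: the check can go no faster
    than the slowest disk, nor faster than the shared bus bandwidth
    split evenly across all disks being read at once."""
    return min(slowest_disk_mb_s, bus_mb_s / n_disks)

# e.g. 8 fast SSDs behind a DMI 2.0 link (~2000MB/s): bus-limited
print(parity_check_ceiling(2000, 8, 500))  # → 250.0
```

This matches the flat-graph symptom described: a bus-limited check sits at the bus ceiling instead of following the disks' speed curve.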
  48. 1 point
    Hi guys, I set up the reverse proxy with some help from the great videos of Spaceinvader One. But there are some extra security options that I want to get fixed, and I have no idea how. Hopefully someone here can help me out!
    1. Create a redirection for all the reverse proxy dockers. What I have tried is changing the unifi-controller.subdomain.conf file of the docker, located in the appdata folder "appdata\letsencrypt\nginx\proxy-confs". If I type https://unifi.domain.com everything is working fine, but I want to enter http://unifi.domain.com and auto-redirect to https://unifi.domain.com.
    2. Set up / enable the fail2ban service that is integrated in the Letsencrypt docker from Linuxserver.
    3. Set up / enable the GeoIP service that is integrated in the Letsencrypt docker from Linuxserver.
    Thx
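For point 1, a generic HTTP-to-HTTPS redirect in nginx looks something like this (a sketch only; the linuxserver Letsencrypt image's default site config may already include an equivalent block, and unifi.domain.com stands in for the real subdomain):

```nginx
server {
    listen 80;
    server_name unifi.domain.com;
    # Send any plain-HTTP request to the HTTPS equivalent of the same URL
    return 301 https://$host$request_uri;
}
```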
  49. 1 point
    Hello, is it possible to increase this 30-disk limit? I wanted to "join" my servers with an HBA card, so the 30-disk limit would be reached quickly. Thanks
  50. 1 point
    I had the same problem using the downloaded Windows 10 ISO, but I typed "exit", which took me to the BIOS, and I selected the EFI DVD ROM drive (or something like that - it was the first boot option) and the install started.