Tybio

Everything posted by Tybio

  1. I'm hoping that Intel's rushed and seemingly totally unworkable 28-core demo today was them trying to one-up AMD's event tomorrow. If you watch it, check out the case: it has something in the back that looks a lot like the plug used to charge electric cars. My guess is they had to have an external sub-zero cooling unit to keep the chip from exploding. Gigabyte released a very over-the-top board that looks epic (without the RGB madness, no less!): https://www.tomshardware.com/news/aorus-x399-xtreme-b450-motherboard,37194.html I'm actually starting to think there might be some big news at the AMD event tomorrow. I wasn't holding out much hope until the news from other events started to break...but who knows.
  2. For running at stock speeds, the QVL is less critical. If you want to OC it to 3200 or higher, then the QVL becomes much more important. That's not to say there's nothing to worry about, but the risk goes way down if the sticks are running at their native speed/latency.
  3. Found a review of the ASRock version (the ASRock Ultra Quad M.2 Card): https://www.tweaktown.com/reviews/8542/asrock-ultra-quad-2-card-16-lane-aic-review/index.html They make a big deal about how the orientation of the slots on this one is better...but really, these are just adapters. I'd think the only real delta would be in heat dissipation and fan noise.
  4. Confirmed: disk spin-up/spin-down with no OS read/write pending has no impact. If I cat a text file on a spun-down disk, reading files on already-active disks freezes (the key symptom is playback freezing in Kodi until the other disk finishes spinning up). Again, no idea if any of this is useful or if anyone is actually reading the thread. I might just be talking to myself!
  5. Ok, some more details. If I spin down all disks, then spin them up while streaming there is no interruption in SMB read. I've even tried spinning down and then spinning up/down disks one by one and can't replicate it. This seems to be an issue when there is an actual read or write waiting on the spin-up. Going to do some testing on that now.
  6. I've had no luck with this. I've tried both the Active Streams and File Activity plugins to try to track things...but those are just me trying to reduce spin-ups, which I'm seeing without any file activity to justify them. As to the pausing during disk spin-up, I'm at the end of my rope...the first time I've been in this position in over 10 years of using unRAID. I'm actually considering going back to the SAS2LP cards and just praying they will work with IOMMU when I get around to it. I'd really like the LSI cards to not suck, but as the issue showed up when I switched to them...I've no other path to go down.
  7. I'm really struggling with two issues. The first is disk spin-up, and thus my using this addon...the second (and the root problem) is that shares stop during disk spin-up...but that's for other posts. I /just/ had a disk spin up with nothing noted in the addon. Is there any known source of spin-ups that wouldn't show an inotifywait result?
  8. As far as I can tell, they are both current:

     root@Storage:~# dmesg | grep mpt2sas | grep FWV
     [   13.714606] mpt2sas_cm0: LSISAS2308: FWVersion(20.00.07.00), ChipRevision(0x05), BiosVersion(07.39.02.00)
     [   14.234291] mpt2sas_cm1: LSISAS2308: FWVersion(20.00.07.00), ChipRevision(0x05), BiosVersion(07.39.02.00)
     root@Storage:~#
  9. One more question: have you got this plugged into a UPS or Kill-A-Watt? I'd love to know what the idle draw is.
  10. It would be really nice if anyone could chime in here. I'm not sure if this is a known limitation of the LSI cards, or if there is something else going on. I'm actually considering returning them and going back to the SAS2LPs and praying things don't blow up when I set up my Windows VM.
  11. No, I wasn't clear enough...my fault. In your example, the 2TB drive would be the parity, giving you 1.5TB of protected space. It isn't the size of the protected array that matters, only the size of the largest protected disk. For example, if you had the following drives:

      2x 0.5TB
      2x 2TB
      1x 10TB

      You could build an array where the 10TB drive is the parity drive, giving you 5TB of protected storage (0.5+0.5+2+2=5). But note that a 2TB parity drive would have been enough for data disks of those sizes, since the largest "protected" disk is only 2TB; the extra capacity of the 10TB is wasted as parity. Does that make more sense? Edit: Let me attach a picture of my current system; you can see that all of my data disks are protected by a single 10TB parity drive.
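If it helps, here's the same math as a quick shell sketch (the drive sizes are just the hypothetical ones from the example above, nothing from a real system):

```shell
#!/bin/sh
# Hypothetical pool from the example above: 2x 0.5TB, 2x 2TB, 1x 10TB.
# The largest disk becomes parity; protected capacity is simply the
# sum of the remaining (data) disks.
echo "0.5 0.5 2 2 10" | tr ' ' '\n' | sort -n | awk '
  { sizes[NR] = $1 }
  END {
    parity = sizes[NR]            # largest disk -> parity
    total = 0
    for (i = 1; i < NR; i++)      # everything else is data
      total += sizes[i]
    printf "parity=%sTB protected=%sTB\n", parity, total
  }'
# prints: parity=10TB protected=5TB
```

Which also shows why the 10TB is overkill as parity there: the protected total never includes the parity disk itself.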
  12. Parity needs to be the largest disk in the array...if you have 4x 2TB drives, then just pick one for parity. If you have an 8TB drive and 2x 6TB drives, then the 8TB drive has to be your parity drive. Look at it this way: the parity drive is as large as, or larger than, all of the other drives in the array. You can have as many data disks as you want (within the unRAID limit at the moment), as long as they are the same size as, or smaller than, the parity drive.
  13. I have a case with no side fan, but I have good front-to-back fans over the MB...and these cards are fine. They don't need a lot of airflow, so you don't need a jet engine...but they do need some air moving over the heat sinks.
  14. I've been fighting a similar problem: disks spinning up with no modification to files. I'm running cache_dirs and used to be able to have disks spun down for days/weeks. Now I've just given in and have them all spinning all the time, as they were in constant flux with no real activity. It would be VERY nice to have unRAID itself help out with tracking what spins disks up, as that is one of the primary reasons for choosing unRAID over traditional RAID. It is very frustrating to troubleshoot this, and with Docker & VMs being more of a core offering...the "it's the addons" line is holding less and less water :). To be clear, I'm not upset or angry, just poking fun. I'm a huge fan of unRAID, have used it for over 8 years, and will continue to use it...I would just like some more help in this area from the devs, as even an advanced user can't track which reads are spinning up disks so we can trace them back. Is it a cache miss on a directory listing? Is it an inspection of a file? I know this is complex stuff, and I'm not asking for an easy button...but right now, my toolbox is empty on how to address this.
  15. I've just upgraded to 2 of these from 2x AOC-SAS2LP-MV8, as I'm planning on upgrading to a Threadripper later this summer and setting up a Windows VM. While the speeds are OUTSTANDING, I'm struggling with a new issue where the disks become unavailable when another spins up. If I'm streaming from disk4, for example, when something spins up disk2 the video pauses and loses sync, as access to the data seems to freeze until disk2 comes fully online. I suppose I could solve this by increasing the buffering; however, it strikes me that I shouldn't expect this behavior on a PCIe 3.0 HBA...unless I'm missing something?
  16. Very nice build, almost exactly what I'm going to do, but I'm waiting another month or two to see if there is a new TR chipset like there was with the X470. I'd love to get a TR Taichi like the x470 Prime with 10GbE onboard. This would also let me ditch the HBAs, as the board would have enough SATA ports. On the case front, there was a great deal on the PC-D600 in the US a month ago, so I picked that up for my new build...I /love/ the case, I can fit anything I want in it...but it is a little large, being double wide :). I have three 3-in-5 trayless iStar cages in it for 15 hot-swap disks; with the TR I'll move cache to NVMe along with the VM SSD. One question: can you confirm the ASRock BIOS lets you set the LED color, or at least turn the LED on the MB off? Case link: http://www.lian-li.com/pc-d600/
  17. Interesting. If you fire up a console, do you see any ARP entries after you try pinging outbound? I just ran a ping from my server to my laptop. For example (I removed the Docker lines and other stuff; my laptop is .7 and the gateway is .1):

      root@Storage:~# arp -a
      Tybio-Laptop.fios-router.home (192.168.1.7) at 78:4f:43:55:19:ad [ether] on br0
      FIOS_Quantum_Gateway.fios-router.home (192.168.1.1) at 48:5d:36:71:6f:6f [ether] on br0
      root@Storage:~#
  18. There are not a lot of options left when you take the most common source of issues off the table. It's a decision only you can make, but the people here aren't making things up to brick your board...they are presenting the options; you pick. Once you make your decision, you have to accept the consequences of that decision. If you don't want to update the BIOS, then you will have to deal with each issue as it comes, and likely without a lot of support, as there is little to no point in tilting at windmills. Please remember, this is a community supporting each other. I know it can be frustrating, but these are very advanced use-cases, and true passthrough is not for the faint of heart or those expecting "easy". Try to keep people on your side and wanting to help you, rather than treating them like they are trying to screw you over with their recommendations.
  19. Well, the max read from a spinner is ~250MB/s, so if you are getting ~200MB/s then you are right in line with what I would expect over a network with 2 NICs in the path. If you are getting ~200Mb/s, then that's a problem. Can you confirm whether you are seeing megabits or megabytes per second?
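For reference, the bits-vs-bytes sanity check is just a divide-by-8; a quick sketch (nothing setup-specific, just the two figures from above):

```shell
#!/bin/sh
# Network tools usually report megabits (Mb); file copies and disk
# benchmarks usually report megabytes (MB). 8 bits per byte:
awk 'BEGIN {
  printf "200 Mb/s = %g MB/s\n", 200 / 8   # way too slow for a local spinner
  printf "200 MB/s = %g Mb/s\n", 200 * 8   # healthy, but needs >1GbE links
}'
# prints:
# 200 Mb/s = 25 MB/s
# 200 MB/s = 1600 Mb/s
```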
  20. Just in case it matters, Diags attached. storage-diagnostics-20180517-1755.zip
  21. So I've been running for YEARS with Supermicro SATA HBAs, but in getting ready to set up a Windows VM for my desktop I got 2x LSI 9207-8i cards. I swapped them in, and ever since, when I'm watching content, a different disk spinning up will cause the media to stutter or even freeze. It is like the working drive is "pausing" while waiting for the spin-up to finish. I've never run into this before, and I see some mentions of others having the issue on the forums (but no resolutions that I can find). Am I just missing something that a different search would pull up? Or, if this is just the way these cards work, then why on earth are they recommended?
  22. The Noctua TR4 fan is actually beating the AIOs for the most part, which just goes to show that design is more important than buzz. I'll take perfectly designed old tech that just "works" over new bling that's not as well proven any day.
  23. Most hardware these days has very impressive idle control to lower power usage when not needed...that has the side effect of leaving most of the device unused. For GPUs it is a little different than for a drive: a drive will go almost totally silent, while a GPU will still draw some power to be "ready" to deliver instantly and keep the desktop running. That said, a quality GPU is built as well as a CPU; the fan on the GPU might be at risk if it doesn't power down when the card is idle...but I'd think your odds of GPU failure over time don't go up dramatically...then again, I've never really looked into it :). If you want to dig into this, check out the durability reported for GPUs by Bitcoin miners. They are going to be FAR AND AWAY the worst case for a GPU and will have lots of data to back up their numbers. Just remember, they run at 100% 24x7, so if they expect N years out of a card, then you can likely expect N*<a lot>.
  24. With an open case, some of the more flush-mounted components might be getting LESS airflow than in a closed one. Remember, the fans aren't drawing air properly with the case open. I ran into that a few builds back and obsessed about it for a while before closing it up; then I saw the drives warm by a degree or two and the MB sensors drop by up to 10 degrees. Air will flow along the unrestrained path; in a small open case, that often negates the effect of the main system fan on items down by the motherboard. Not saying it /will/ change, just advising you to plop the case lid on and run a parity check for an hour or two before you draw any conclusions.
  25. It isn't as bad as you might think. Using a 1080 Ti as a baseline, it looks like you would be adding something like 15W idle to the load for the card. If you go with the right processor, which if I'm not mistaken will be an Intel one if power is a concern, you could get a very reasonable idle draw...given the age of what you are running now, it could even be slightly more efficient if you went with a 1050 or 1060. I'd recommend that you work out some baseline numbers; I have a feeling you can build more than you think with a reasonable power draw if you pick the parts properly. Some info: https://www.tomshardware.com/reviews/nvidia-geforce-gtx-1080-ti,4972-6.html
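A back-of-the-envelope idle estimate, as a sketch: the ~15W GPU idle figure is from the review linked above; the other numbers are made-up but plausible assumptions, so plug in your own.

```shell
#!/bin/sh
# Hypothetical idle budget (replace with your own measurements):
#   CPU + board + RAM at idle : ~30W  (assumption)
#   drives spun down          : ~5W   (assumption)
#   1080 Ti at idle           : ~15W  (from the Tom's Hardware review)
awk 'BEGIN {
  base = 30; drives = 5; gpu = 15
  printf "estimated idle draw: %dW\n", base + drives + gpu
}'
# prints: estimated idle draw: 50W
```

A Kill-A-Watt at the wall is the way to check the real number, since PSU efficiency at low load skews these estimates.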