Everything posted by citizengray

  1. I just ran a test by creating a new share and making it "cache only". And weirdly enough... I get the same write speed (~11-12 MB/s) on NVMe drives that are able to write at 1000 MB/s...
  2. Sounds good. Thx. On another note, I finished the parity check and the array is now functional - albeit all over the place in terms of connections (some drives go through cages and some are direct) - this is obviously temporary. However, I am still very much concerned from a performance standpoint. Right now I have 2 parity drives and 1 data drive (for now), and each drive is individually able to read/write at at least ~150 MB/s (sustained). But when I copy something to the only share I created on the array, the best copy speed I get is ~11-12 MB/s... it feels abysmally slow. I have disabled the cache for this share, though, to purposefully test the array speed. Any idea what could be happening? (See the disk-vs-share benchmark sketch at the end of this list.)
  3. Ok... there seems to be a world of progress here. I was able to power 1 cage and add 4 disks to it (3x 14TB + 1x 8TB), with the cage powered directly from the PSU via a chained Molex cable. All connected to the LSI (on one SAS cable). I started preclear on all 4 disks, and for now it's holding up well (performance and stability) - granted it's only ~3% into pass #1, but that's further than I have ever been able to get. And outside of the cage I have the 2x 14TB that just completed preclear + 1 old 3TB (which completed preclear independently some time ago), connected directly to SATA 15-pin power from the PSU and to the LSI with another SAS cable. And somehow - for the moment - everything seems to be holding fine: 4 disks doing preclear, and the 2x 14TB building parity. Wow. I actually still can't believe that the whole of this seems (to be confirmed) to be due to the power delivery to the cages... f*cking Y splitters... Can't wait to get the parts to build my own cables...
  4. Good point! Got a couple of old 1TBs lying around! Definitely will try those first!
  5. Ok, preclear on those 2x 14TB completed successfully last night. Not a single error, very stable. As a reminder, those were connected outside of the cage, directly to the PSU with the SATA 15-pin cables and to the LSI card. Next step in my testing: connect only one cage without Y splitters and preclear another couple of disks in there to see if the Y splitters are the issue. (Waiting to receive, by mail, the components to build my own Molex cable to power the cages.) Thx again for the time you spent helping me.
  6. What is the role of those electronic components? Are those capacitors?
  7. I can totally do that. I just didn’t think I needed to go there. Any chance you’d have a link to a finished product so I can get a better understanding?
  8. What would you use instead of the splitters? For now I basically have a long Molex extension plugged into the PSU's only Molex line. That extension runs out of the case and is then plugged into a bunch of Y splitters to power the 5 cages (each with 2 Molex inputs).
  9. Ok, I tried one more thing that I had never tried before. I completely de-powered the whole column of 5 cages. They are basically entirely offline. I have taken out the 2 brand new 14TB drives and connected them directly to the SAS card and to SATA 15-pin power from the PSU. I started preclear on both. For now it is going strong (the only 2 drives on the entire system besides the 2 NVMes). But what is even more impressive than the drives simply working is the fact that preclear is also running much faster... sustained read speed of ~260 MB/s per drive for now. Whereas before, when I managed to run them through the cage (for a while at least), they were more around ~160-200 MB/s. I can't believe the likelihood of all 5 cages being defective at once... (I did not even buy them at the same time.) Could they somehow be "incompatible"? Both through the motherboard SATA controller AND the LSI SAS? Feels too much like a coincidence... What am I missing here?
  10. Ok, so you are saying that I should have 2 direct lines to the PSU, and each connector should be on a different line? Ideally, at least. Ok, which cables should I take my measurements from? 1 yellow, 1 red, 2 black? (from the Molex)
  11. Perusing the forums, I have also read about some folks who had their BIOS misconfigured somehow. Is there anything in particular that I should be looking for in the BIOS to see if it is set correctly? The only things I remember changing are the boot order of devices (basically disabling everything but the flash drive) and enabling virtualization (with the intent of using Docker - but I am nowhere near that). Also, on an unrelated note: I had started a preclear last night on those 2 new 14TBs, one connected to the onboard SATA connectors (via a SATA cage) and the other connected to the LSI SAS card (also via a SATA cage - but a different one). This morning the drive on the onboard SATA had halted preclear due to an error - basically the drive disappeared (cannot find file .../hdb etc.) - but the other one was still going strong. So I took the drive out of the SATA cage connected to the onboard SATA, moved it to a cage connected to the LSI, and plugged it in there with the aim of restarting preclear. And as soon as I did that... the other drive failed too - same thing, it seemingly disappeared!?! So very obviously I have some physical instability... but I really don't get it... everything is firmly installed and connected, all drives are properly bolted in, the case is sitting firmly on a shelf, and all the cages are solidly seated in a home-made structure firmly attached to the wall...
  12. Thank you for the comments. I had noticed that the connections with the Molex Y splitters were not amazing, so I carefully pushed every pin into its barrel with a small screwdriver, from both sides, to make sure there was a strong connection. I am happy to replace the Y connectors, but then what should I use instead? Something like this: https://www.amazon.com/Computer-Supply-Splitter-Internal-Extension/dp/B08CRXG2FW/ref=sr_1_21?dchild=1&keywords=molex+power+splitter&qid=1634488522&sr=8-21 but I am failing to see how this is really better? Because from the PSU there is only one "line" for Molex; the other 2 are for SATA 15-pin connectors. Also, I find it curious that the SATA cages have 2 Molex power inputs. Why bother with 2? Why not just one, as the power is ultimately coming from the same source anyway... That just strikes me as odd... I guess I could also add another Molex line to the PSU directly... https://www.amazon.com/COMeap-Molex-Drive-Adapter-Modular/dp/B08DMFYDBN/ref=sr_1_4?dchild=1&keywords=multi+molex+power+cable&qid=1634488409&sr=8-4 For reference, my PSU is: https://www.amazon.com/gp/product/B079GFTB8F/ref=ppx_yo_dt_b_asin_title_o00_s02?ie=UTF8&psc=1 And I am connecting 5 cages https://www.amazon.com/gp/product/B00DGZ42SM/ref=ppx_yo_dt_b_asin_title_o01_s00?ie=UTF8&psc=1 with the hope of one day using 20 drives... I got 8 right now. Is there something I can do to test if the voltage is dropping or something? I have a regular voltmeter. What should I be looking for?
  13. Thank you for taking the time to look into my situation - it is very much appreciated. I would assume that most of the errors come from the onboard SATA controller, as this is where I plugged in 4 of the 14TB drives, and it is where I am trying to run preclear on the brand new 14TB drives that I just acquired. Assuming - for a second - that the issue is not coming from the SATA cages (I am starting to doubt this now), what could be causing such issues with the onboard SATA controller? The 2 NVMes are working flawlessly... I know they are not SATA drives... but they are on the motherboard... The cages have 2 Molex power inputs. What I did to connect them is basically daisy-chain them with Molex Y connectors until all 5 of them were connected, and then I plugged the last Y onto the Molex connector coming directly from the PSU. Is that the right way to do it? Maybe somehow the cages are underpowered and it's causing issues? I did buy a 750W PSU with that in mind... and besides the CPU (Ryzen 5600G) I have nothing else really drawing power... no GPUs, etc. I'll play around with disk allocation in and outside of the cages and report back... might take me a while though. Alex
  14. Hi, I have been playing with Unraid for a couple of weeks now... and based on a lot of people's reviews I was expecting it to be a very easy system to use. However, I feel that I am plagued by a lot of issues... I posted some time ago about a network issue (Unraid does not see the internet after a reboot until it has been online for at least ~20h!?!), for reference. It is still not resolved. But the issue that is driving me mad is the issue of "disappearing drives"... I have a bunch of drives: 5x 14TB + 8TB + 3TB + 4TB. I was planning on adding all of those to the array, but it seems impossible to get them to stay online long enough to run preclear. Some drives (2x 14TB, the 3TB and the 4TB) seem to disappear randomly... Those drives are a mix of brand spanking new drives and old drives, and a mix of shucked drives and proper NAS drives. There seems to be no pattern to the dropping. Unraid shows no error whatsoever, the drives are just no longer there... I can get them to reappear on reboot... or by removing them and plugging them back in... Every single one of the drives that disappears has been tested on a Mac and seems to be working just fine... All SMART data and health data on those drives are good. I have reseated all the cables... I have run ~24h of memory testing with no issues... The drives are plugged into a mix of a SAS card and the motherboard directly; that seems to have no relation to the dropping either... The drives are all connected through a cage (https://www.amazon.com/gp/product/B00DGZ42SM/ref=ppx_yo_dt_b_asin_title_o05_s00?ie=UTF8&psc=1). I tried to move drives around and install them in slots where drives do seem to stay... it changes nothing. Also, it seems that when I delete the array (Tools > New Config) all the disappearing drives come back for a while... but that's only temporary. Additionally, when I was running some tests on a previous array (before I got all the drives that I wanted in there), the performance seemed abysmal... especially when moving from the NVMe cache to the array... I am talking single-digit MB/s transfers... (See the drive-drop check sketch at the end of this list.) I am out of ideas... Unraid has been a much more complicated and frustrating experience than I was imagining... Any ideas? unraid-diagnostics-20211016-1915.zip
  15. Hmm, interesting. Maybe I can go into the BIOS and disable the onboard card. But what worries me the most is that I had the same issue when I only had the onboard card and had physically removed the 10Gb card from the PCIe x8 slot...
  16. And boom, it happened again... without touching a single thing. The array has now been up for 1 day, 14 hours, 57 minutes, and this morning I have "found" internet access again...
      root@unraid:~# ping www.google.com
      ^C
      root@unraid:~# ping www.google.com
      PING www.google.com (142.250.69.228) 56(84) bytes of data.
      64 bytes from den08s05-in-f4.1e100.net (142.250.69.228): icmp_seq=1 ttl=115 time=12.6 ms
      64 bytes from den08s05-in-f4.1e100.net (142.250.69.228): icmp_seq=2 ttl=115 time=12.7 ms
      64 bytes from den08s05-in-f4.1e100.net (142.250.69.228): icmp_seq=3 ttl=115 time=12.5 ms
      64 bytes from den08s05-in-f4.1e100.net (142.250.69.228): icmp_seq=4 ttl=115 time=12.9 ms
      ^C
      --- www.google.com ping statistics ---
      4 packets transmitted, 4 received, 0% packet loss, time 3004ms
      rtt min/avg/max/mdev = 12.508/12.684/12.867/0.134 ms
      This is so weird...
  17. Ok, I have disabled bonding and bridging between ETH0 and ETH1. Here is the new diagnostics file. But so far no luck, still no connection from Unraid. unraid-diagnostics-20211006-1059.zip
  18. Right, I can disable it, no problem. But as I mentioned before, I tried the setup with a single NIC (in the case) and no bonding, and I had the same connectivity issues.
  19. It is definitely not a DNS issue... As soon as I finished the parity build, I rebooted Unraid, and boom, upon restart, no access to the internet again. Unraid has been up for 15 hours now, and still no access. No other device has any issues on my network. Starting to regret buying the Pro license... How can this be?
  20. No, I don't think it is a DNS issue. 1. I noticed the issue over the weekend (Sunday evening - there was no Facebook outage at that point). 2. My router has Google DNS set up as default and propagates it automatically via DHCP to all devices. 3. I was able to ping the DNS server (8.8.8.8) from the Unraid console, yet I still had no internet access. (See the connectivity check sketch at the end of this list.)
  21. Right, but why would only Unraid be impacted!? I have a slew of other devices that have no problem connecting... including an old QNAP NAS that was doing its thing without issue.
  22. Ok, the weirdest thing happened this morning... I have changed zero configuration settings and changed nothing on the router side either... and somehow, magically, the Unraid system can now ping Google... You can see below: I was SSHed into Unraid, I pinged www.google.com last night - nothing - and tried the command again this morning, and now it works!?!
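
Disk-vs-share benchmark sketch (for posts 1 and 2): a minimal way to separate raw drive speed from the share path, run from the Unraid console. The mount point /mnt/disk1, the share name "test", the test file names, and the device /dev/sdb are assumptions, not the actual setup - adjust them to your own disks. For what it's worth, ~11-12 MB/s is also roughly where a network link negotiated at 100 Mbit/s tops out, so comparing a local write with a copy over the network is part of the point.

    # Work in a throwaway folder on the first array data disk (path is a placeholder).
    mkdir -p /mnt/disk1/speedtest
    # Sequential write straight to the disk: parity still applies, but the
    # user-share (shfs) layer is bypassed.
    dd if=/dev/zero of=/mnt/disk1/speedtest/ddtest.bin bs=1M count=4096 conv=fdatasync status=progress
    # The same write through the user share (hypothetical share named "test"),
    # which adds the shfs overhead on top of the disk write.
    dd if=/dev/zero of=/mnt/user/test/ddtest.bin bs=1M count=4096 conv=fdatasync status=progress
    # Sustained sequential read of the underlying drive (replace sdb accordingly).
    hdparm -t /dev/sdb
    # Clean up the test files when done.
    rm -rf /mnt/disk1/speedtest /mnt/user/test/ddtest.bin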
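Drive-drop check sketch (for posts 11 and 14): a minimal set of console checks that usually show whether a drop is a power/cabling event or a controller problem. The device name /dev/sdb is a placeholder - run the SMART check against whichever drive vanished, once it reappears.

    # Kernel log around the time a drive dropped: look for link resets, "failed
    # command", or the device being detached (-T prints readable timestamps).
    dmesg -T | grep -iE 'ata[0-9]+|sd[a-z]+|reset|link|offline'
    # What the system currently sees, to confirm which drive actually disappeared.
    lsblk -o NAME,SIZE,MODEL,SERIAL
    # SMART counters that tend to move when power or cabling is marginal
    # (replace sdb with the drive in question).
    smartctl -a /dev/sdb | grep -iE 'reallocated|pending|crc|power-off_retract'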
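Connectivity check sketch (for the network issue in posts 16 to 22): a minimal sequence that separates raw connectivity, routing, and DNS from the Unraid console, so the failing layer can be pinned down the next time the internet "disappears" after a reboot. Nothing here is specific to this setup beyond the 8.8.8.8 DNS server already mentioned.

    # Raw IP connectivity, bypassing DNS entirely.
    ping -c 4 8.8.8.8
    # Default route and which interface/bond carries it.
    ip route show
    # Link state and addresses on each interface.
    ip addr
    # DNS servers Unraid is actually using (should match what the router hands out via DHCP).
    cat /etc/resolv.conf
    # Name resolution plus connectivity together.
    ping -c 4 www.google.com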