Thy

Members
  • Posts: 8

  1. Well, finally, after a few hours of trying things I got it working and started pre-clearing all the disks, so sorry, I can't post the diagnostics right now. I guess the problem was related to the two GSATA and eSATA motherboard controllers, which I disabled, or to a quick-boot setting, I don't know exactly. Hoopster, if that was your question: I have SATA drives, with 2 cables each going from 1 SFF-8087 port on the HBA to 4 SFF-8482 connectors (SATA port with power). I will test the network transfer speed again after pre-clearing; since I installed 2 brand new 500GB MX500s as a cache pool I expect speeds to be better than before, as I'm currently facing the same speed issue as in this post: Since I invested in 2 new SSDs, an HBA and a 10Gbit/s network I expect the speed problem to be gone, but somehow I'm not confident...
  2. Thanks for your reply. In the hard disk boot priority section of the motherboard BIOS I see 8 SCSI devices, like: 1. SCSI-0 : #0200 ID0D LUN0 ATA, 2. SCSI-1 : #0200 ID0C LUN0 ATA, 3. SCSI-2 : #0200 ID0A LUN0 ATA, etc., up to number 8. I tried the 9211 alone in the first and in the second PCIe slot, and nothing changed. About the cables, I use these: SFF-8087 to 4x SFF-8482 (042N7H). Honestly, I don't know about the forward or reverse breakout thing, sorry...
  3. Hello, I finally received my 9211-8i and just flashed it to IT mode with the 20.00.07.00 firmware, as it came in IR mode, but the problem is that I don't have any disks showing up in Unraid now... That's weird, because I saw them during boot (in the LSI BIOS), yet nothing is available in Unraid. Is there something special to tweak, or did I miss something? I tried searching the forum about this and found nothing that seems similar. If anyone has this card working, please chime in; I'm literally out of ideas here...
  4. Hello, a quick update from 2019: there is definitely something wrong with the Marvell 9128 that controls the two 6Gbit/s ports on the P55. I tried the same SSD connected to the Marvell port (SATA3) and to the P55 chipset (SATA2), and I get better speed when it is connected to the P55 SATA2 port than to the SATA3 port on the Marvell chip, which is really weird. I finally found a 9211-8i HBA giving 8 SATA3 ports and will order 2 MX500s or Samsung 860s; we'll see if that improves things, as the current speeds really aren't enough for heavy write usage, especially through a 10Gbit/s NIC.
  5. Direct I/O was already disabled during the "bench". I guess one of the SSDs is tired and lowers the average speed, but it's those spikes and drops to 0 that bother me; I will look into that when I get back at the end of the week, as I'm currently a few hundred kilometers from the server 🙂. Maybe I will soon be able to try an LSI 9211-8i or 9260-8i to see if the bottleneck really is the chip that controls the SATA ports, since the SATA3 implementation on the P55 was a complete mess. Thanks, and a nice Christmas / end of year to all.
  6. Well, I guess I have to investigate the SATA controller limitations, because these SSDs can easily sustain a 100GB file write at 300-360MB/s under Windows on a Z370 platform instead of the (old) P55. Maybe a SATA RAID card in a PCIe x8 slot would solve this I/O problem, I don't know. If the "real" speed of these TLC-based SSDs is about 150-160MB/s, what's the point of using them for large data transfers instead of regular 7.2k disks? One thing is sure: I saw the difference between those crappy recent TLC SSDs and the old 850 Pro MLC I had during heavy write jobs. It should be taken into account in reviews, where most of the time the benchmark tools only exercise the SLC cache (see the sustained-write sketch at the end of this list). I will try RAID0 mode to see if speeds bump up a little or if the controller is already capped out. One other thing: what is the purpose of the copy-on-write setting under the share settings? And is raising the tunable (md_num_stripes) and tunable (md_sync_window) values, as described in the wiki, still relevant for systems with more than 1GB of RAM, given that advice was written for Unraid v5?
  7. Thanks johnnie.black, I tried your script with the different SSD cache combinations possible and the results were, let's say, disappointing, as visible in the screenshot below. When I try the same file transfer under Windows 10 (SSDs in exFAT format, AES encrypted) from the same NVMe I get pretty decent write speeds (the test file is 25GB in size): Kingston 480GB: 350MB/s average; Crucial MX500 500GB: 400MB/s average; Toshiba Q300 1TB: 430MB/s average. Same with ATTO / CrystalDiskMark / HD Tune Pro, but those only bench the SLC cache, I presume, since they use small file sizes? The question is why there is such a huge difference between the Windows tests and Unraid through the 10Gbit/s NIC. I get that encrypted btrfs isn't the same as exFAT with AES, but I would at least expect a stable transfer speed, even if it's only 200-250MB/s, and that's not the case: it drops to 0 all the time (a per-second throughput-logging sketch is at the end of this list)... The most curious thing is that if I copy the same file back from Unraid to the NVMe disk I get a constant ~400MB/s without drops... This thing is driving me mad (as I have roughly 12TB to copy to Unraid ASAP). Is there any special tweak to apply on the Unraid or Windows side to improve SMB transfers, or at least make them stable? I already have a 9000 MTU, and Direct I/O makes things worse... A RAM cache issue? A controller bottleneck? Any ideas from you experienced guys would be much appreciated.
  8. Hello all, I'm contacting you after a lot of research across several posts regarding (very) strange write performance on my Unraid system. I think I found similar problems, as this forum is a gold mine for learning, but unfortunately no answer that fixes mine. Here is the build, to present things globally: CPU: i7 860 (8 threads, 2.8GHz stock frequency, but an OC to 3.8GHz is possible if needed since it is watercooled); RAM: 16GB DDR3 2000MHz; Motherboard: Gigabyte P55A-UD6; Data disks: 4x 2TB Seagate 7.2k and 1x 8TB Seagate Barracuda 7.2k; Parity disk: 1x 8TB Seagate Enterprise 7.2k; Cache pool: 1x Kingston 480GB SSD and 1x Crucial MX500 500GB SSD; Network: 2x Mellanox 10Gbit/s over OM3 fiber for point-to-point networking, plus 1Gbit Ethernet for the regular switch connection (192.168.100.1 range). The two 10Gb NICs have the MTU set to 9000 and fixed IPs in the same subnet for the point-to-point link (10.10.10.10 and 10.10.10.20). I used iperf to check the LAN performance and it is good, at 9.5Gbit/s average, so that part seems normal (a raw TCP throughput sketch that separates the network from SMB and disks is at the end of this list). The problem I encounter every time I try to transfer anything is very poor I/O while copying through the 10Gbit/s fiber cards: the speed jumps to 500MB/s for a few seconds, then falls slowly back to 0, then spikes again to about 400-450MB/s before falling slowly to 0, and so on (as shown in the screenshot)... In the end the average transfer time is worse than over the gigabit Ethernet line, which makes me think something is wrong with the 10Gbit/s settings (an SMB Windows 10 tweak, a network or cache tweak?), but unfortunately I'm too dumb to discover it by myself. In the last 15 days I have tried every type of configuration and format possible (XFS or btrfs, both encrypted, without success); as this setup is meant to replace an old RAID5 setup for backing up 4K video rushes and RAW files of personal family pictures, along with some professional material, it has to be encrypted. I have even tried with only 2 SSDs and no parity (1 data and 1 cache) and the problem is still the same. I really can't figure out what's wrong with this setup, I'm literally out of ideas. Thanks all for taking the time to read this, and I wish you all happy end of year vacations if you are lucky enough to have some. PS: A Windows 10 screenshot and the diagnostics zip captured during transfer of a 4K sample are attached. tower-diagnostics-20181220-1250_during_transfer.zip
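
Not from the original posts: a minimal sketch of the sustained-write test item 6 is getting at, assuming Python 3 on the machine holding the SSD and a placeholder target path (/mnt/cache/write_test.bin) with roughly 100GB free. It keeps writing incompressible 1 MiB blocks far past any plausible SLC cache and prints the throughput of each GiB, so the steady-state TLC speed shows up instead of the cached burst that short benchmark runs measure.

```python
# Hypothetical sustained-write probe (illustration only; path and sizes are placeholders).
# Write far beyond the SLC cache and report the throughput of each GiB written,
# so the post-cache "real" TLC write speed becomes visible.
import os
import time

TARGET = "/mnt/cache/write_test.bin"   # placeholder: a file on the SSD under test
BLOCK = os.urandom(1024 * 1024)        # 1 MiB of incompressible data
REPORT_EVERY_MB = 1024                 # report once per GiB written
TOTAL_MB = 100 * 1024                  # 100 GiB total, well past any SLC cache

with open(TARGET, "wb", buffering=0) as f:
    written_mb = 0
    t_last = time.monotonic()
    while written_mb < TOTAL_MB:
        f.write(BLOCK)
        written_mb += 1
        if written_mb % REPORT_EVERY_MB == 0:
            os.fsync(f.fileno())       # force the data out of the page cache
            now = time.monotonic()
            rate = REPORT_EVERY_MB / (now - t_last)
            print(f"{written_mb // 1024:3d} GiB written, last GiB at {rate:6.1f} MB/s")
            t_last = now

os.remove(TARGET)                      # clean up the test file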
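
Also not from the original thread: a sketch of how the stall-to-zero pattern in item 7 could be logged, assuming Python 3 on the Windows client and placeholder paths (a local NVMe source file and a UNC path to the Unraid share). It copies the file in 4 MiB chunks and prints the throughput of each roughly one-second interval, so bursts and dead stops show up as numbers rather than a wiggling Explorer graph.

```python
# Hypothetical chunked-copy logger (illustration only; both paths are placeholders).
# Copy one large file to the SMB share and print per-second throughput so
# stalls to ~0 MB/s stand out clearly.
import time

SRC = r"D:\sample_4k_rush.mov"                 # placeholder: file on the local NVMe
DST = r"\\tower\backup\sample_4k_rush.mov"     # placeholder: Unraid SMB share
CHUNK = 4 * 1024 * 1024                        # 4 MiB per read/write

with open(SRC, "rb") as src, open(DST, "wb") as dst:
    interval_bytes = 0
    t_last = time.monotonic()
    while True:
        buf = src.read(CHUNK)
        if not buf:
            break
        dst.write(buf)
        interval_bytes += len(buf)
        now = time.monotonic()
        if now - t_last >= 1.0:                # report roughly once per second
            print(f"{interval_bytes / (now - t_last) / (1024 * 1024):8.1f} MB/s")
            interval_bytes = 0
            t_last = now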
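
Finally, a sketch of the raw network check item 8 already did with iperf, shown only as a way to test the 10Gbit link independently of SMB and of any disk; the port number is an assumption, while the 10.10.10.10 address comes from the post. Run it as `python net_test.py server` on the Unraid side and `python net_test.py 10.10.10.10` on the Windows side; a result near the 9.5Gbit/s iperf already reported would mean the stalls come from the storage or SMB layer, not the fiber link.

```python
# Hypothetical single-stream TCP throughput check (illustration only; the port is
# an assumption, the 10.10.10.10 address comes from the post). No disk is touched,
# so a good number here rules the network out and points at SMB or storage.
import socket
import sys
import time

PORT = 5001                          # assumed free TCP port
BLOCK = b"\0" * (4 * 1024 * 1024)    # 4 MiB per send
DURATION = 10                        # seconds the client transmits

def server(bind_ip="0.0.0.0"):
    with socket.create_server((bind_ip, PORT)) as srv:
        conn, _ = srv.accept()
        total, t0 = 0, time.monotonic()
        with conn:
            while True:
                data = conn.recv(1 << 20)
                if not data:
                    break
                total += len(data)
        secs = time.monotonic() - t0
        print(f"received {total / 1e9:.1f} GB at {total * 8 / secs / 1e9:.2f} Gbit/s")

def client(server_ip="10.10.10.10"):  # the server's 10Gb point-to-point address
    with socket.create_connection((server_ip, PORT)) as sock:
        end = time.monotonic() + DURATION
        while time.monotonic() < end:
            sock.sendall(BLOCK)

if __name__ == "__main__":
    if sys.argv[1:2] == ["server"]:
        server()
    else:
        client(*sys.argv[1:2])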