tapodufeu


Posts posted by tapodufeu

  1. Hey Guys,

     

    Same for me. I wonder if I can add a fresh new feature to my favorite unraid NAS: DVR, etc...

    I have already used Plex for years and I have premium access.

     

    Now I am looking for a PCIe x1 DVB-T card to add to my server so I can use DVR with my Samsung TV.

    No HDHomeRun or any other kind of network tuner: I lack power plugs, power cords and ethernet ports... and it is absolutely horrible to see tons of cables around/behind your TV.

     

    So I just need a simple DVB tuner card to be able to record live DVB-T streams.

     

    I have the latest unraid with plex docker.

     

    Can anyone share their experience in this thread... good or bad?

     

    thanks

  2. I switched to an i5-6400T with a GA-H110M-S2H motherboard... I jump up to 30-50 W, but at least it will work the way I want.

    Right now I am between 20 and 35 W, but it is too slow.

     

    Do you think I made a better choice this time? :)

     

    I think I can really use the $20 I invested in my legit (or not, hahaha) LSI card to properly multiplex my slow 2.5" hard disks :)

     

    edit: I have returned my $23 PCIe x1 SATA card to Amazon...

     

  3. The controller link looks good...

     

    speedtest docker :

     

     

    SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon]
    IBM (Broadcom / LSI)
    Serial Attached SCSI controller

    Type: Add-on Card in PCIe Slot 2 (x16 PCI Express)
    Current & Maximum Link Speed: 5GT/s width x1 (4 GB/s max throughput)
    Capabilities: storage pm pciexpress vpd msi msix bus_master cap_list rom

    Port 1:     sdb     1TB     Western Digital WD10JPVX Rev 08.01A08 Serial: WXW1A17AP02T (Disk 5)
    Port 2:     sdc     1TB     Western Digital WD10JPVX Rev 08.01A08 Serial: WXX1A178VD2Y (Disk 4)
    Port 3:     sdd     1TB     Western Digital WD10JPVX Rev 08.01A08 Serial: WXW1A17PTTNY (Disk 2)
    Port 4:     sde     1TB     Western Digital WD10JPVX Rev 01.01A01 Serial: WX81E73KEER4 (Disk 3)
    Port 5:     sdf     1TB     Seagate ST1000LM035 Rev LCM2 Serial: WL1FVK5S (Parity 2)
    Port 6:     sdg     1TB     Unknown HTS541010A9E662 Rev JA0AB5D0 Serial: JA8000ET03SZSS (Disk 1)
    Port 7:     sdh     1TB     Western Digital WD10JPVX Rev 08.01A08 Serial: WXW1A17APEXK (Disk 6)
    Port 8:     sdi     1TB     Seagate ST1000LM035 Rev LCM2 Serial: WL12V84C (Parity)

     

    lspci -vv results:

     

    02:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)
        Subsystem: Broadcom / LSI SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon]
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 0, Cache Line Size: 64 bytes
        Interrupt: pin A routed to IRQ 23
        Region 0: I/O ports at e000
        Region 1: Memory at 805c0000 (64-bit, non-prefetchable)
        Region 3: Memory at 80180000 (64-bit, non-prefetchable)
        Expansion ROM at 80100000 [disabled]
        Capabilities: [50] Power Management version 3
            Flags: PMEClk- DSI- D1+ D2+ AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
            Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
        Capabilities: [68] Express (v2) Endpoint, MSI 00
            DevCap:    MaxPayload 4096 bytes, PhantFunc 0, Latency L0s <64ns, L1 <1us
                ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0.000W
            DevCtl:    CorrErr+ NonFatalErr+ FatalErr+ UnsupReq+
                RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
                MaxPayload 256 bytes, MaxReadReq 512 bytes
            DevSta:    CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
            LnkCap:    Port #0, Speed 5GT/s, Width x8, ASPM L0s, Exit Latency L0s <64ns
                ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp-
            LnkCtl:    ASPM Disabled; RCB 64 bytes Disabled- CommClk+
                ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
            LnkSta:    Speed 5GT/s (ok), Width x1 (downgraded)
                TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
            DevCap2: Completion Timeout: Range BC, TimeoutDis+, LTR-, OBFF Not Supported
                 AtomicOpsCap: 32bit- 64bit- 128bitCAS-
            DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF Disabled
                 AtomicOpsCtl: ReqEn-
            LnkCtl2: Target Link Speed: 5GT/s, EnterCompliance- SpeedDis-
                 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
                 Compliance De-emphasis: -6dB
            LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete-, EqualizationPhase1-
                 EqualizationPhase2-, EqualizationPhase3-, LinkEqualizationRequest-
        Capabilities: [d0] Vital Product Data
            Not readable
        Capabilities: [a8] MSI: Enable- Count=1/1 Maskable- 64bit+
            Address: 0000000000000000  Data: 0000
        Capabilities: [c0] MSI-X: Enable+ Count=15 Masked-
            Vector table: BAR=1 offset=00002000
            PBA: BAR=1 offset=00003800
        Capabilities: [100 v1] Advanced Error Reporting
            UESta:    DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
            UEMsk:    DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
            UESvrt:    DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
            CESta:    RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr-
            CEMsk:    RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
            AERCap:    First Error Pointer: 00, ECRCGenCap+ ECRCGenEn- ECRCChkCap+ ECRCChkEn-
                MultHdrRecCap- MultHdrRecEn- TLPPfxPres- HdrLogCap-
            HeaderLog: 00000000 00000000 00000000 00000000
        Capabilities: [138 v1] Power Budgeting <?>
        Capabilities: [150 v1] Single Root I/O Virtualization (SR-IOV)
            IOVCap:    Migration-, Interrupt Message Number: 000
            IOVCtl:    Enable- Migration- Interrupt- MSE- ARIHierarchy+
            IOVSta:    Migration-
            Initial VFs: 16, Total VFs: 16, Number of VFs: 0, Function Dependency Link: 00
            VF offset: 1, stride: 1, Device ID: 0072
            Supported Page Size: 00000553, System Page Size: 00000001
            Region 0: Memory at 00000000805c4000 (64-bit, non-prefetchable)
            Region 2: Memory at 00000000801c0000 (64-bit, non-prefetchable)
            VF Migration: offset: 00000000, BIR: 0
        Capabilities: [190 v1] Alternative Routing-ID Interpretation (ARI)
            ARICap:    MFVC- ACS-, Next Function: 0
            ARICtl:    MFVC- ACS-, Function Group: 0
        Kernel driver in use: mpt3sas
        Kernel modules: mpt3sas
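
    For anyone who wants to check only the link negotiation on their own card, something like this (same 02:00.0 bus address as in the dump above) should print just the capability vs. status lines:

    # show the maximum vs. negotiated PCIe link for the HBA
    lspci -vv -s 02:00.0 | grep -E 'LnkCap:|LnkSta:'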

     

     

    I have attached the jpg of the benchmark results. I see 8 disks, with speeds from 130 down to 60 MB/s, so I imagine the average should be around 85-90.

    I still don't understand why the parity check caps at 55 MB/s.

     

    I have also included the parity check history.

    The 63 MB/s was prior to the LSI card installation (2 parity drives on onboard SATA + 6 SATA data disks on the PCIe x1 card).

    53 MB/s is with all disks on the LSI SAS2008: 1 parity drive on port 1, 1 parity drive on port 0.

     

     

    Sans titre.png

    benchmark-speeds.png

  4. Another piece of information, maybe relevant... during dd tests, write speed varies depending on where the data is written on the disk... 130 down to 70 MB/s.

    Reads don't... always 50 MB/s... even when I read back the file that was written at 130 MB/s during the write test, for example.

     

    Of course I do the write and read tests on /mnt/disk[1-6] so as not to use my cache.
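
    Roughly this kind of thing, as a sketch (the test file name is just an example; the direct-I/O flags keep the RAM cache out of the numbers):

    # write then read back a 1 GiB test file directly on a data disk, bypassing the page cache
    dd if=/dev/zero of=/mnt/disk1/ddtest.bin bs=1M count=1024 oflag=direct status=progress
    dd if=/mnt/disk1/ddtest.bin of=/dev/null bs=1M iflag=direct status=progress
    rm /mnt/disk1/ddtest.bin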

     

    I don't use a SATA SSD cache drive but a PCIe card with an NVMe SSD, which is far more efficient than the SATA version... I would never go back to a SATA SSD cache drive!!!!

  5. I have just upgraded my config with an LSI SAS2008 and I can't figure out why read performance is worse with the LSI than with SATA.

    I have an LSI 9220-8i, H3-25113-03A.

     

    Write performance is better; for example, if I do a dd test, writes perform 30 to 40% faster (tests done on the same drive). On my old 2.5" 1 TB drive, I can reach more than 130 MB/s write speed at the beginning of the disk and around 70 MB/s at the end. It was not possible to reach this level of performance with my SATA card under regular load.

    With my SATA card, I was only able to get this kind of performance if I turned off all dockers and VMs.

     

    But read performance is stuck at 50 MB/s max... on ALL drives. This is the first time I have seen disks that are faster at writing than at reading.

     

    For example, if I do a parity check, I am stuck around 50 MB/s because reads are done at that speed.

    If I use my onboard SATA (for the parity disks) and the 6-port PCIe SATA card (for the data disks), the parity check runs at 65 MB/s.

     

    please help.

     

    BTW, I flashed the latest P20 firmware yesterday; nothing changed.

    I bought this card on eBay, from a seller with thousands of positive feedback ratings. The card was delivered with firmware 10.x.x.x.

     

    Any idea where I can investigate?

     

     

     

  6. Hi all,

     

    I have a weird situation I did not have before. I built a small server with just 1 data disk, no parity. Every time this server is restarted, the array does not start automatically; I have to log in and click "Start array" manually.

    Once restarted, everything works perfectly: no errors detected, no problem apparently.

     

    Do you have any idea why? Is it possible to have an array with just one data disk?

     

    Thanks for your responses.

  7. So, I have now removed all the WD SMR disks and changed the controller to a 6-port SATA controller (3 SATA + 3 SATA behind a port multiplier).

     

    The bad performance disappeared and I am back to regular performance.

     

    Replacing my 2 parity disks (from WD SMR to Seagate) involved 2 very slow operations... 3 to 4 MB/s (see the check history). It was not just a replacement but a data/parity disk swap so that the Seagate disks are used for parity.

     

    Then I replaced all my WD SMR disks one by one with WD CMR disks. You can see the speed slowly increasing with each replacement (4 disks): all WD10SPZX replaced by WD10JPVX (I was lucky to find a guy selling 4 of them, brand new and cheap).

     

    The last graph is the disk speed test, which also demonstrates the speed gap between the old CMR and new SMR disks from Western Digital. Expect 30 to 40% more just in the speed test, and 5 to 6 times better with unraid (12 MB/s compared to 65 MB/s during a parity check).

     

    To conclude, for all unraid users: avoid SMR disks as much as possible, especially in the 2.5" format; performance is even worse in 2.5" than in 3.5".

     

    Soon I will replace my motherboard with the new Intel J4125B with a PCIe 2.0 x16 port, allowing me to use an 8-port LSI SAS HBA instead of my 4-6 port PCIe x1 SATA card. I do not expect a huge performance gap (my disks are not that fast!!) but surely better performance under heavy load or simultaneous access, and fewer errors.

     

    Last point: one disk mysteriously generated some errors when connected to my Marvell SATA card. Impossible to reproduce those errors with CrystalDiskInfo or any other SMART tool... moreover, the SMART error reported by the Marvell card was no longer reported when the disk was checked with CrystalDiskInfo on Windows.

    So I also replaced the 8-port SATA card (under warranty) and downgraded to a 6-port card, which does not create any errors.

     

    Problem solved... hardware choices and configuration were the issue.

     

     

    checkhistory.png

    unraid_pool.png

    diskspeed2.png

  8. Are you sure about that?

     

    They perform better than the other CMR drives I have, and based on the list provided by Seagate, the LM015, LM024 and LM048 are confirmed SMR. The LM035 was manufactured in 2016, apparently CMR.

     

    Do you know any tips to check which technology is used?
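
    One heuristic I have read about (not sure how reliable it is): many drive-managed SMR 2.5" disks advertise TRIM support, which a classic CMR hard disk normally does not, so hdparm can give a hint:

    # heuristic only: a spinning drive that reports TRIM support is often drive-managed SMR
    hdparm -I /dev/sdX | grep -i trim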

     

     

  9. Last thing... maybe important for some people: my setup with the Seagate CMR drives as parity consumes 21 to 25 W (8 drives + SSD cache + motherboard).

    My setup with the WD drives as parity consumes 19 to 21 W.

     

    During sleep, the lowest I used to get with my old configuration was 15.6 W...

    After 24 hours with this reorganisation of drives, I am above 17 W during sleep.

     

    My Seagate CMR drives are LM035s, so not the latest technology, and my WD SMR drives are from 2019!!

     

  10. It took me 3-4 days per parity disk to replace my 2 parity disks (WD Blue SMR) with Seagate CMR. Around 3-4 MB/s... OMG, so long. Almost a week for 2 disks!!!!

     

    BUT, I am now back to "regular" performance, around 65 MB/s for a parity check.

     

    Conclusion: NEVER USE SMR DISKS FOR PARITY!!!

    SMR disks for parity with SMR disks for data... 15 MB/s

    CMR disks for parity with SMR disks for data... 65 MB/s

     

    I have noticed a limitation on the 8-port RTD1295 card with a port multiplier: if I connect more than 3 drives to the multiplied port, performance drops significantly.

    Whether a parity disk is attached to onboard SATA or to the PCIe SATA card has no impact on performance. I have even tried connecting a parity drive to a multiplied SATA port... performance remains the same. Only the number of drives connected to a multiplied port has an impact on performance.

     

    So I recommend not using the 8-port PCIe card with an RTD1295, but a maximum of 6 ports (3 SATA ports + a 4th port multiplied into 3 SATA). Performance is the same with 4 to 6 drives; performance is bad to very bad with 7 or 8 drives!! And you will save money: in Europe the 6-port SATA cards cost €20... the 8-port costs €45. The 8-port card is not worth it at all.

     

    I am replacing all my WD Blue SMR disks with older WD Blue disks (pre-2018) that use regular CMR technology. I will let you know the impact on performance next week when all the drives are switched.

    On the second-hand market, they all cost the same price, so it is easy to sell my SMR disks to buy CMR without losing money (they have all been second hand since the beginning of my NAS).

     

    And I will also move my Seagate CMR disks to data. I have noticed a slight increase in noise level with the Seagate CMR compared to the WD Blue SMR when I use them for parity. Not only during writes... the noise level is always higher with my Seagate drives.

    The noise level is very important for me: my NAS is an HTPC under my TV used for plex/pihole/torrent/openvpn/smb, which is why I designed my NAS with 2.5" disks instead of 3.5", because they make less noise.

  11. Good to know. Thank you Johnnie for this valuable information. I was not aware at all of this bottleneck in my configuration, and maybe a lot of people have the same issue without knowing it.

     

    I have an ASRock mini-ITX J5005: https://www.asrock.com/mb/Intel/J5005-ITX/index.asp

    It is a very cheap configuration, but it is so much more powerful than a more expensive NAS from QNAP or Synology (when it works, haha).

    It is not easy to find a SATA controller for 8 disks on a PCIe x1 port (this is the only port I have on my motherboard).

     

    Based on what you say, I think I found the lines you look at in the syslog file (see the grep command after the log below):

     

    PCIE SATA CONTROLLER MARVELL

    Jul 22 09:43:31 Tower kernel: ata4.00: ATA-10: ST1000LM035-1RK172,             WL12V84C, LCM2, max UDMA/100
    Jul 22 09:43:31 Tower kernel: ata4.00: configured for UDMA/100

    Jul 22 09:43:31 Tower kernel: ata5.00: ATA-10: WDC WD10SPZX-75Z10T2,         WXG1A29LVT1Y, 03.01A03, max UDMA/133
    Jul 22 09:43:31 Tower kernel: ata5.00: configured for UDMA/133
    Jul 22 09:43:31 Tower kernel: ata6.00: ATA-10: ST1000LM035-1RK172,             WL1FVK5S, LCM2, max UDMA/100
    Jul 22 09:43:31 Tower kernel: ata6.00: configured for UDMA/100
    Jul 22 09:43:31 Tower kernel: ata6.01: ATA-9: WDC WD10JPVX-75JC3T0,         WX81E73KEER4, 01.01A01, max UDMA/133
    Jul 22 09:43:31 Tower kernel: ata6.01: configured for UDMA/133
    Jul 22 09:43:31 Tower kernel: ata6.02: ATA-10: WDC WD10SPZX-60Z10T0,      WD-WX71A9849U04, 04.01A04, max UDMA/133
    Jul 22 09:43:31 Tower kernel: ata6.02: configured for UDMA/133
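
    Something like this should pull the same negotiation lines out of the syslog, if anyone wants to check their own setup:

    # list the ATA link / UDMA negotiation lines from the Unraid syslog
    grep -E 'ata[0-9]+(\.[0-9]+)?: (ATA-|configured for)' /var/log/syslog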

     

    This is the controller I bought: Marvell 88SE9215 + JMicron JMB5xx, 8 ports

    https://www.amazon.fr/gp/product/B07ZT31GTD/ref=ppx_yo_dt_b_asin_title_o07_s00?ie=UTF8&psc=1

    I should have bought the 6-port version of this controller... it is the same card without the JMicron JMB5xx SATA port multiplier!!

     

    As soon as my parity rebuild is done (at 5 MB/s... 2 days, OMG!!!), I will change how my disks are connected.

     

    Do you think using the ST (Seagate) disks for parity and the WD disks for data could help?

    Do you think ATA-8, ATA-9 or ATA-10 has any importance? It is the ATA standard version, correct? If I cannot avoid using a SATA port multiplier, I assume I should at least use drives with the same ATA version and also avoid mixing UDMA/100 and UDMA/133.

     

    BTW, if you have any recommendation for another SATA controller with 6 SATA ports on a PCIe x1 port, do not hesitate. Thanks.

  12. What do you mean by "my controller configuration is not the best"? Do you recommend I change anything?

     

    I will do the dd test as soon as my array restarts... it is rebuilding parity: I tried changing 1 parity disk, and based on the rebuild speed, I have not changed the right disk yet :)

     

     

  13. I totally agree, and I have read tons of documentation about it. BUT, if you pay attention to the first image showing the check history dates and durations, you can see that suddenly, after fixing the errors, the speed dropped. No disks were changed; the array is exactly the same as before. This is why I doubt my problem is related to hardware. But it might be possible that a hardware error suddenly appeared during the xfs repair.

     

    I am launching a complete surface test.

     

     

  14. I have just run a diskspeed test... nothing really interesting. All dockers were running during the test, but with really very little activity.

     

    The 2 parity disks look slower than the others... but still at least 45 MB/s in read and write.

     

    I am not sure the disks are really the source of this issue.

    diskspeed.png

  15. I tried diskspeed already. I had weird results showing a different slow disk every time I tried. The speed gap detection is really an issue, with different disks never finishing the process...

     

    I also tried to check the disk speed with hdparm, but it did not show any disk slower than another.
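
    For reference, roughly the kind of per-drive check I mean, assuming the same sdb-sdi device names as in my earlier listing:

    # sequential buffered read timing for each array drive
    for d in /dev/sd[b-i]; do echo "== $d =="; hdparm -t "$d"; done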

     

    Do you know anything I can do to get a deeper analysis (and logs) next time I run a parity check? I am not used to mdadm at all.

     

    I will do a new diskspeed benchmark later today when my server is free.

     

     

  16. Hi All,

     

    I am trying to understand why my weekly parity check speed was suddenly divided by 6 after I replaced a disk that was causing errors.

    I spent hours and hours on the internet and on this forum looking for tips or advice on how to fix this, but nothing worked, so now I am posting this message.

     

    First, you can see the problem in the parity check history. On June 25, I had a power cut. I was out of town and tried multiple times to fix it remotely. On June 27 I was finally home, restarted my server and fixed the issue. My server was off between June 25 and June 27.

     

    paritycheck.png

     

     

    After the fix (array off, automatic fix by unraid), the parity check speed suddenly dropped. And I don't know why.

    First I tried to reorganize the disk positions between my motherboard and the PCIe SATA controller (parity disks directly on the motherboard SATA controller and data disks on the PCIe SATA controller). No change!

     

     

    arraydevices.png

     

    Now I am trying to investigate with the mdadm command, but the output looks different from what I see in other posts and I cannot understand it correctly.

     

    For example:

     

    mdadm.png

     

    Do you know what sbSynced2 is, and what the mdResyncAction status 'Q' means...?
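
    From what I have read, these values come from Unraid's own md driver rather than from standard mdadm, so something like this should dump them (if I understand correctly):

    # Unraid's custom md driver exposes its resync state here (not standard mdadm)
    grep -E 'mdResync|sbSynced' /proc/mdstat
    # the mdcmd helper should print the same variables
    mdcmd status | grep -E 'mdResync|sbSynced'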

     

    I have posted the diagnostics file in case it helps.

     

    To conclude, everything is working fine (docker, services, etc.) but slower than before. A lot slower than before!! I notice it on every restart, especially with my pihole docker, which cannot answer DNS requests in time for several minutes, or the DNS service being very, very slow during a parity check. It never happened before.

     

    So I assume my unraid is not running at the right speed and something is going wrong. Please help.

     

    Tapodufeu, unraid lover for months :)

     

     

     

    tower-diagnostics-20200823-0949.zip