Hitman

Members · 12 posts

Posts posted by Hitman

  1. So, being new to Unraid, I thought that the backups went to the cloud, and indeed they do not. I upgraded from 6.10.3 to 6.11.0, and now I am in an endless reboot loop. I get to `Loading /bzroot...` and then the machine reboots.

     

    [screenshot of the boot screen attached]

     

    I did not have any plugins or config other than a single share, but I wanted to see what else I could try before wiping and starting over. If starting over is the answer, do I just pull the USB stick and reinstall it from a Windows machine?
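
    Since Unraid keeps everything that matters (the license .key file, array assignments, shares) in the flash drive's config folder, here is a hedged sketch for starting over without losing that, assuming the stick shows up as /dev/sdX1 on another Linux box:

    # Save the config folder before recreating the stick
    mkdir -p /mnt/usb && mount /dev/sdX1 /mnt/usb
    cp -r /mnt/usb/config ~/unraid-config-backup   # license .key and array config live here
    umount /mnt/usb
    # Recreate the stick (e.g. with the official USB Flash Creator on Windows),
    # then copy the saved config folder back onto it before first boot.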

  2. Yes, it is virtualized with the HBA Adapter in PCI Passthrough to the VM.
     

    13:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)
            DeviceName: pciPassthru0
            Subsystem: Dell 6Gbps SAS HBA Adapter
            Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
            Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
            Latency: 64, Cache Line Size: 64 bytes
            Interrupt: pin A routed to IRQ 16
            Region 0: I/O ports at 3000 [disabled] [size=256]
            Region 1: Memory at fda40000 (64-bit, non-prefetchable) [size=64K]
            Region 3: Memory at fda00000 (64-bit, non-prefetchable) [size=256K]
            Capabilities: [50] Power Management version 3
                    Flags: PMEClk- DSI- D1+ D2+ AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
                    Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
            Capabilities: [68] Express (v2) Endpoint, MSI 00
                    DevCap: MaxPayload 128 bytes, PhantFunc 0, Latency L0s <64ns, L1 <1us
                            ExtTag- AttnBtn- AttnInd- PwrInd- RBE- FLReset- SlotPowerLimit 0.000W
                    DevCtl: CorrErr- NonFatalErr- FatalErr- UnsupReq-
                            RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop-
                            MaxPayload 128 bytes, MaxReadReq 128 bytes
                    DevSta: CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
                    LnkCap: Port #0, Speed 5GT/s, Width x32, ASPM L0s, Exit Latency L0s <64ns
                            ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp-
                    LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk-
                            ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
                    LnkSta: Speed 5GT/s (ok), Width x32 (ok)
                            TrErr- Train- SlotClk- DLActive- BWMgmt- ABWMgmt-
                    DevCap2: Completion Timeout: Not Supported, TimeoutDis- NROPrPrP- LTR-
                             10BitTagComp- 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
                             EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
                             FRS- TPHComp- ExtTPHComp-
                             AtomicOpsCap: 32bit- 64bit- 128bitCAS-
                    DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis- LTR- OBFF Disabled,
                             AtomicOpsCtl: ReqEn-
                    LnkCtl2: Target Link Speed: 2.5GT/s, EnterCompliance- SpeedDis-
                             Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
                             Compliance De-emphasis: -6dB
                    LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete- EqualizationPhase1-
                             EqualizationPhase2- EqualizationPhase3- LinkEqualizationRequest-
                             Retimer- 2Retimers- CrosslinkRes: unsupported
            Capabilities: [d0] Vital Product Data
    pcilib: sysfs_read_vpd: read failed: No such device
                    Not readable
            Capabilities: [a8] MSI: Enable- Count=1/1 Maskable- 64bit+
                    Address: 0000000000000000  Data: 0000
            Capabilities: [c0] MSI-X: Enable+ Count=15 Masked-
                    Vector table: BAR=1 offset=0000e000
                    PBA: BAR=1 offset=0000f800
            Capabilities: [100 v1] Advanced Error Reporting
                    UESta:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                    UEMsk:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                    UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
                    CESta:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr-
                    CEMsk:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
                    AERCap: First Error Pointer: 00, ECRCGenCap+ ECRCGenEn- ECRCChkCap+ ECRCChkEn-
                            MultHdrRecCap- MultHdrRecEn- TLPPfxPres- HdrLogCap-
                    HeaderLog: 00000000 00000000 00000000 00000000
            Capabilities: [138 v1] Power Budgeting <?>
            Kernel driver in use: mpt3sas
            Kernel modules: mpt3sas

     

    Right now I have 2 x 250GB SATA SSDs in RAID 1 as the cache pool, but I do not have caching enabled, since the files being written are ~100GB each. I don't believe the cache would keep up flushing to the array drives between writes, or offer much of a speed improvement when the writes are constant. I was thinking of getting some cheap 1TB drives.
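
    To see where writes are actually landing during a transfer, watching the block devices is the quickest check. A hedged sketch: iostat needs the sysstat package (e.g. via the NerdTools plugin), and the device names are examples.

    # Extended stats in MB, refreshed every 2 seconds; whichever devices show
    # the write throughput (wMB/s) are the ones taking the transfer
    iostat -mx sdb sdc sdd sde 2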

  3. I will say the community support is great so far. Thanks for all the responses and feedback.

     

    Quote

    This will disable "turbo write"; keep writing one file only.

    Got it. I have two machines writing, so keeping to one file isn't really possible.
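
    For reference, "turbo write" is the reconstruct-write method under Settings > Disk Settings; a hedged sketch of flipping it from the console:

    mdcmd set md_write_method 1   # 1 = reconstruct (turbo) write
    mdcmd set md_write_method 0   # 0 = read/modify/write (the default)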

     

    Quote

    Won't be that low for a mid/high-grade NVMe.

    What I mean is, once the SSD fills, won't it write directly to the disk?

     

    Quote

    Also applies to Unraid if you use software RAID via a pool / UD.

    How does this work?

     

    Quote

    This won't happen; array throughput won't exceed single-disk throughput, so for spinning disks it is usually in the 90MB/s - 140MB/s range. If you have enough RAM cache, the network transfer to RAM will saturate the 10G NIC (1GB/s) and it will "slowly" write to the array.

    So if I give the device more RAM, I will see better speeds? My goal is ~150-200MB/s with two devices writing and one device reading.
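
    The RAM cache being described is the Linux page cache, and how much of it may hold unflushed writes is set by two sysctls. A hedged sketch with example values, not recommendations:

    sysctl -w vm.dirty_background_ratio=10   # % of RAM dirty before background flushing to disk starts
    sysctl -w vm.dirty_ratio=20              # % of RAM dirty before new writes are throttled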

     

    Quote

    Requires all drives to be the same size, and if you lose more drives than parity can recover, you have lost it all.

    I understand. I went with Unraid to save 16TB, but seeing the speed makes me think I might want to take the hit.

     

    Quote

    Unraid is not RAID; you'll never get the same speeds as other solutions that stripe drives. You can have fast pools, but the array will always be limited by single-disk speed.

    What are Fast Pools?

     

    To add, even with a single write, I am still not hitting the ~145MB/s one drive is capable of:

    [screenshot of the transfer speed attached]
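
    One way to separate disk/parity speed from network and SMB overhead is a direct write test on the server itself. A hedged sketch: the path and sizes are examples, and oflag=direct bypasses the RAM cache so the raw array write speed shows.

    dd if=/dev/zero of=/mnt/disk1/ddtest.bin bs=1M count=4096 oflag=direct status=progress
    rm /mnt/disk1/ddtest.bin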

  4. So let me ask the question: why would people use Unraid if it is so slow? I have new 16TB and 18TB drives. If I gave up one more drive to parity and ran RAID 5 across the 18TBs and 16TBs, I could achieve 8x the speed of Unraid. How are people getting speeds of 1GB/s? Is it all due to NVMe? What happens when writes are sustained? Does it drop from 1GB/s to 90MB/s? Windows software RAID can reach much higher speeds.
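
    Back-of-envelope for the 8x figure (a sketch; the nine-drive count and ~145MB/s per-drive speed are assumptions): RAID 5 over nine drives stripes writes across eight data members in parallel, while the Unraid array tops out at one drive's speed per stream.

    echo "$((8 * 145)) MB/s ideal RAID5 stripe vs 145 MB/s single-drive array"   # -> 1160 MB/s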

  5. I have an Unraid server with a dual Samsung 850 cache pool and a mix of 16TB and 18TB drives. It runs as a VM on ESXi. It has a 10Gb NIC, and another VM has a 10Gb NIC. Moving a single file is only hitting ~600Mbit/s. I have a JBOD HBA attached directly to the VM, and that is how all the HDDs and SSDs are connected. I noticed that the SSDs are not being written to, and even if it is going direct to the drive, I would expect it to hit higher speeds while using the drive's cache, and even writing direct once the drive cache is full.

     

    I have an old Dell with a few externals on it, and it can saturate the 1Gb link with this same file going to a single external 16TB drive. So I'm confident this setup should be getting equal or better performance.

    I've noticed that the device is not using the cache, but even direct to disk it should be able to handle this, as the drive can hit over 120MB/s for a synchronous write.

     

    I've noticed that when I use another machine to transfer a single file, which looks to be hitting a different disk, it still only hits 600-700Mbps. Is there something wrong with the setup?
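
    To rule the network path in or out before blaming the disks, an iperf3 run between the two VMs would help. A hedged sketch: iperf3 is installable via Community Applications, and the address is an example.

    iperf3 -s                       # on the Unraid server
    iperf3 -c 192.168.1.10 -t 30    # on the other VM; add -R to test the reverse direction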

    unraid-diagnostics-20220918-2320.zip
