
Posts posted by squirrelslikenuts

  1. To anyone else experiencing this problem...

     

    My issue was related to a Trendnet 2.5GbE card that I installed. After the update, the network card driver was borked. You need to pull your USB boot drive out, open "network.cfg" in the config folder, and replace every MTU entry that is set to 9000 with 1500.

     

    Reboot the server and you will be OK.
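    If you'd rather not hand-edit the file, the same fix can be done from any Linux box with the flash drive mounted. This is just a sketch: I'm assuming the MTU entries look like MTU[0]="9000" (check your own network.cfg first), and /path/to/flash is a placeholder for wherever the USB stick is mounted.

    # back up network.cfg, then set every MTU line from 9000 back to 1500
    cp /path/to/flash/config/network.cfg /path/to/flash/config/network.cfg.bak
    sed -i '/^MTU/s/9000/1500/' /path/to/flash/config/network.cfg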

  2. 7 hours ago, JorgeB said:

    You can edit /boot/config/network.cfg and change it there.

    I just pulled my USB stick out and edited all the MTUs back to 1500.

    I had an uptime of over 200 days on server-grade hardware (HP ProLiant), and I am using a cheapo Trendnet 2.5GbE card. I pulled the card and the boot still hung at "triggering udev events".

    My 200-day uptime was on a TRIAL license (yes, I have a UPS and good power), and I just purchased the unlimited license for this machine (it will be my 3rd Unraid). After the purchase I did the upgrade (I think I was on 6.12) and went to 6.12.4.

    Just booting now with the config file changed; will advise.

    Man if this was the cause I'm gonna shit a brick.

    Edit: Boot hung after "triggering udev events" and now it hangs at "device vhost3 doesnt exist". I'll reinstall the 2.5GbE card and see...

    Edit 2: Now there is a kernel panic: "not syncing VFS unable to mount root fs on unknown-block(0,0)".

    I will reinstall the GTX 1050 that I left out while trying to solve this problem.

     

    Edit 3: Yes, changing the MTU, reinstalling the Trendnet 2.5GbE card, and reinstalling the GTX 1050 has allowed the system to boot.
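    If you want to confirm the NICs actually came back up at 1500 after the reboot, ip link shows the live MTU (interface names will differ on your box):

    # list each interface with its current MTU
    ip link show | grep -i mtu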

     

    fuck me sideways

     

     

  3. On 3/19/2020 at 5:22 PM, sdballer said:

    Can't get the webgui to start. Here is my log file.

     

     

     

    supervisord.log (attached)

    I'm not too good at interpreting log files, but I'll tell you what worked for me.

     

    After activating a VPN in Deluge, my web UI wouldn't start. I banged my head against the wall all morning; I was so frustrated I was ready to chop my dick off.

     

    What ended up working for me was this: I had been giving the Deluge container its own static IP in Docker. For example, my Unraid server was, say, 20.20.20.10, and the Deluge container pulled its own IP of 20.20.20.11 from my router. I found it easier to split some containers away from my Unraid server's main IP.

     

    Stop Deluge-VPN.
    Edit the container.
    Change the network type to BRIDGE.
    VPN on.
    Apply.
    Boot the container.

     

    This is what worked for me.
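    For reference, switching the network type to BRIDGE in the GUI is roughly the same as running the container on Docker's default bridge network from the command line. Treat this as a sketch only: the image name, port, and paths below are common examples and may not match your template, and the VPN credentials still get set the same way as in the GUI.

    # rough CLI equivalent of "Network Type: Bridge" for a DelugeVPN container
    docker run -d --name=delugevpn \
      --network=bridge \
      --cap-add=NET_ADMIN \
      -p 8112:8112 \
      -e VPN_ENABLED=yes \
      -v /mnt/user/appdata/delugevpn:/config \
      -v /mnt/user/downloads:/data \
      binhex/arch-delugevpn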

     

    Good luck!

  4. On 9/14/2019 at 4:22 PM, badwolf said:

    EDIT: I was able to fix my issue. It seems the auto setting in Docker for the local port wasn't working. I had to adjust that as well, instead of just the container port.

    So I'm having an issue with being able to connect to the webui. When I attempt to connect to port 8112, I'm unable to load the webui. Any help would be appreciated


    Below is my log

     

    
     

     

    Can you be a bit more specific about how you fixed it? I have the same problem: after activating the VPN, I can't connect to the web UI.

     

    Thanks!
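    For anyone else landing here: the "local port vs container port" distinction badwolf mentions is Docker's host-to-container port mapping. A quick way to see what a running container actually mapped (the container name here is just an example; use whatever yours is called):

    # show which host ports map to which container ports
    docker port delugevpn
    # or eyeball the PORTS column for every container
    docker ps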

  5. 8 hours ago, johnnie.black said:

    Not enough info. Is this array disk to array disk? With parity? Turbo write enabled?

    Array disk to array disk

    8TB Red to 8TB Red

    No parity

    Turbo write enabled

     

    HP H220 HBA (SAS2308) -> single SFF-8087 cable -> HP 12-drive backplane with integrated expander.

     

     

     

    Interestingly, while using unBALANCE to move files from drive to drive at ~81 MB/s (lots of smaller files), I am able to speed test one of the other drives in the array at 200+ MB/s with only a slight impact on performance. There is a slight speed hit to the drive being tested, but the two drives doing the transfer aren't really affected.

     

    Edit: My model shows this in the specs:

    12HDD Models

     

    HP Smart Array P212/256MB Controller (RAID 0/1/1+0/5/5+0)
    NOTE: Available upgrades: P410 with FBWC, 256MB with BBWC, 512MB with FBWC, Battery kit upgrade (for the 256MB cache), and Smart Array Advanced Pack (SAAP).
    NOTE: Support transfer rate up to 3Gb/s SAS or 3Gb/s SATA

     

    I am using an HP H220 HBA capable of 6Gb/s SAS or 3Gb/s SATA, but it appears the 12-drive backplane will only negotiate 3Gb/s SAS.

     

    The P212 supports 6Gb/s SAS, so I assume it's the expander/backplane that does not.

     

    Edit 2: The HBA is connected at single link, not dual.
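    If anyone wants to check what their links actually negotiated, the SAS phys are exposed in sysfs when the LSI (mpt2sas/mpt3sas) driver is loaded. This is a sketch; the exact paths depend on your HBA driver.

    # print the negotiated link rate of every SAS phy the kernel knows about
    for phy in /sys/class/sas_phy/phy-*; do
        echo "$phy: $(cat "$phy"/negotiated_linkrate)"
    done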

  6. On 8/13/2016 at 2:58 AM, johnnie.black said:

    Never tested as I only have 1 Intel expander but I would expect these speeds:

     

    using a PCIe 2.0 HBA the bottleneck is the PCIe bus, max speed ~110/125MB/s

     

    I am using a PCIe 3.0 HBA in a PCIe 2.0 server, connected to a SAS1 expander (HP DL180 G6).

    I can speed test at ~205-210 MB/s on a single drive, but transferring disk to disk it's limited to 85-90 MB/s.
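    In case anyone wants to reproduce the comparison, a raw single-drive read versus a disk-to-disk copy can be timed from the console with dd. The paths are just examples (any large file on one array disk will do), and direct I/O is used so the page cache doesn't inflate the numbers.

    # single-drive read: only the source drive and controller path are involved
    dd if=/mnt/disk1/some_large_file.mkv of=/dev/null bs=1M iflag=direct status=progress
    # disk-to-disk copy: source and destination share the HBA/expander link
    dd if=/mnt/disk1/some_large_file.mkv of=/mnt/disk2/copy_test.mkv bs=1M iflag=direct oflag=direct status=progress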

     

    Does this make sense? 

  7. On 2/14/2019 at 12:38 PM, nasforthemass said:

    @Squid After seeing it in your signature, I read through the first two pages of this thread, got all excited... then I found this:

    p.s. Not trying to bash on you, just trying to save others from the allure of the oasis in the desert. hahaha 😉

    I just fell for the honeypot! Damn I wish this was still a thing.

  8. On 10/4/2018 at 11:20 AM, Steve-0 said:

    12x 2TB SUN SAS drives for $160

    • PX-350r with 12x 2TB Enterprise Seagate drives for $220
    • 5x 3TB, 6x 5TB and 2x 8TB Drives NIB in external enclosures from a pawn shop for $10-$40 each $250 total
    • Dell i3220 filled with 900gb SAS drives as part of a huge server buyout (2x maxed out R710's, 1x maxed out R310, 3x 6648 switches, 1 5548 switch, 2x 2700w UPS, the i3220, and a bunch of misc equipment) for $400
    • 22x Dell R710 with Dual 5530's for $180

     

    22 Dell R710s for $180... man, the USA is a batshit-crazy fire sale. I feel bad for those businesses, as a single R710 can easily fetch $200+ on any open market.

  9. 5 hours ago, Marshalleq said:

    So when you say solved, you know what was causing it?  Or you just proved Unraid can perform with the right client?  If the former, I'm keen to understand.  Thanks.

    Unfortunately, no. And it pisses me off. The client (i7-3820, Asus Maximus MB, 32GB RAM, Intel SSD boot drive, all WD Black drives) was a Windows 7 system. It has served me well for 6 years (since the last re-install), and it can WRITE to various servers (Ubuntu, FreeNAS and unRAID) at over 100 MB/s. When reading from the arrays, it would max out at 65 MB/s like clockwork, across 3 different server OSes (with a slight bump in speed reading from Ubuntu).

     

    I changed 4 variables at once (yes, I know that's bad, lol) to get a solid 112 MB/s R/W speed.

     

    Different hardware (lower-power Acer prebuilt: i5, 8GB RAM, 120GB SSD)

    Different OS - Windows 10 (albeit fully reinstalled and "fresh")

    Different Network Card

    Different Port/Cable on the switch

     

    I will not dedicate more than one more hour to tracking down what went wrong, as I was looking for an excuse to upgrade to Windows 10 anyway, to take advantage of installing natively on an NVMe boot drive (without any workarounds).

     

     

     

    The offending client that was capped at 65 MB/s read speeds from unRAID (network RX) was using an:

     

    Intel 82579V Gigabit network adapter (onboard)

     

    I don't know if this adapter has known issues with unRAID, but the server shouldn't care what chipset is on the other end as long as it can handle GbE.

     

     

    My goal was to get full speed from unRAID, and I have. If that requires a different network card or a different OS, so be it.

  10. SOLVED

     

    I've made progress. I found an Acer i5-650 system in the basement with 8GB of RAM and threw Windows 10 on an SSD into it. After installing all the Windows updates and adding an Intel PCIe network card, I was able to achieve this... with no changes to the unRAID server. Tested on unRAID 6.6.6.

     

    Previous tests were with higher-end hardware, but running Windows 7 with the onboard NIC.

     

    No magic config. A fresh install of Windows 10 and an Intel PCIe network card. That's it.

     

    Will test with the onboard NIC in that system and report back.

     

    First Pic is WRITES TO the unRAID server

    Screen Shot 2019-02-02 at 3.31.17 PM.png

     

    Second Pic is READS FROM the unRAID server

    Screen Shot 2019-02-02 at 3.30.58 PM.png

     

     

     

    Kinks worked out, I'm ready to buy :)

     

    I've isolated 2 unRAID servers on a separate switch with no internet access and no other devices. Static IP addresses assigned, Cat6 cabling all around.

     

    Each server is running dual Xeon X5570s and 32 or 64 GB of RAM, with onboard Broadcom NICs.

    One server is running 6.6.6, the other 6.7.0.

    iperf3, run as client and server between the two machines, reports 112 MB/s sustained transfers.
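    For reference, that's roughly the following stock iperf3 invocation, with -f M so it reports in MBytes/s to match the numbers above (the address is a placeholder):

    # on server A
    iperf3 -s
    # on server B, run a 30-second test against server A
    iperf3 -c 20.20.20.10 -f M -t 30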

     

    A Windows VM on one machine (on the NVMe cache drive) writing to the other server's SMB NVMe share (or SSD share) reports over 100 MB/s transfers.

    The same Windows VM copying data from the other server's SMB NVMe share maxes out at 65 MB/s. The Windows 10 file-transfer graph looks like a rollercoaster (for reading).

     

    I'm at a loss.

     

    The last 2 things I will try are PCIe Intel NICs and an earlier version of unRAID, like 6.5.

     

    Can anyone else try reading data from an unRAID cache or SSD share and report back the speed over GbE?

     

  12. On 1/28/2019 at 10:54 AM, limetech said:

    Moving this to a Bug Report so we can keep track of it.

    I've just recreated the issue in 6.6.6 on completely different hardware (IBM x3550 M3, dual Xeon, 32GB RAM, etc.).

     

    Exact same symptoms. 101 MB/s write. 65 MB/s read through SMB.

     

    ~106 MB/s write and ~90-95 MB/s read using the unRAID FTP server.

     

    The last thing I will try is isolating the unRAID box and the Windows machine on their own switch.

     

  13. 1 hour ago, johnnie.black said:

    A few things to try:

     

    - see if there's any difference reading from a disk share vs a user share

    - toggle Direct IO (Settings -> Global Share settings)

    - some time ago, using an older SMB version could make a big difference; not so much recently, but it's worth a try. For example, to limit to SMB 2, add this to "Samba extra configuration" under Settings -> SMB:

    
    max protocol = SMB2_02

     

    NO CACHE - Disk 6 Share - Empty drive - USES PARITY (not user share)

    Write from Win7 -> unRAID - 48GB MKV

    102MB/s tapering to ~50-60 MB/s around the 23GB mark

    72 MB/s Average Speed

    Average CPU Load is 8-9%

     

    iotop reveals four instances of the process below at ~25 MB/s each (while running at full speed):

    shfs /mnt/user -disks 127 2048000000 -o noatime,big_writes,allow_other -o direct_io -o remember=0

    Direct I/O is/was enabled 
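    For anyone wanting to reproduce that view, this is roughly the iotop invocation; -o shows only tasks currently doing I/O and -P aggregates per process instead of per thread:

    # watch live per-process disk throughput
    iotop -o -P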

     

    Read speed starts out around 3.5 MB/s (I didn't catch what it maxed at).

    20GB MKV - 46 MB/s average read (saving to an SSD on the Windows machine).

     

     

    120GB SSD (unassigned)

    Write speeds to an unassigned 120GB SSD on the unRAID system (outside the array and not cached):

    48 GB MKV

    Steady at ~103 MB/s write, spiking down to ~65 MB/s within the last 2% of the copy.

    Average of 100 MB/s for the full 48 GB.

     

    Read speed started off at 500 KB/s, then ramped to 65 MB/s and stayed there until complete.

    Average 61 MB/s

     

     

    I will try the max protocol SMB setting you suggested next. If that fails, I will reduce the network clutter down to just the unRAID box and the client machine connected to the switch. I can't see that helping, as the iperf speed tests and write speeds look flawless.
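    As a side note, once a client is connected you can confirm which SMB dialect it actually negotiated by running smbstatus on the unRAID console; recent Samba builds list the protocol version next to each connection (the exact output format varies by version):

    # list active SMB sessions, including the negotiated protocol version
    smbstatus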

     

  14. 1 hour ago, gubbgnutten said:

    For completeness: how fast are writes to the parity-protected array (for a share not using cache)?

     

     

    When you do the write tests, what are you writing? (number of files, total size of data).

     

    I would expect all writes over the network to occur at line speed until the RAM buffer on the server is full, and you do have plenty of RAM.

     

    Test: Disk 6 (WD Red 4TB) - cache disabled - using parity.

     

    Absolutely steady at 102 MB/s WRITE for ~25 GB of MKV files.

     

    Steady at 102 MB/s WRITE for the first 25 GB of a 100GB VMDK file (VMware disk image).

    Around the 25GB mark, the write slows to ~59 MB/s.
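    That slowdown looks consistent with what you described: writes land in RAM at line speed until the kernel's dirty-page buffer fills, and then the transfer drops to whatever the array can actually sustain. The limits involved are the stock kernel dirty-page settings, visible with sysctl (nothing unRAID-specific, just a sketch of where to look):

    # how much RAM the kernel allows for dirty (not yet written) pages
    sysctl vm.dirty_ratio vm.dirty_background_ratio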

     

    I will say that I have roughly 20 Mbit of bandwidth on the network being used by IP cameras, but they push data to a different physical server. I'm not expecting 125 MB/s read and write, but I would like to see 100 MB/s both ways.

     

    The cache drive is a 960GB NVMe; I doubt I'd push more than that to the array at any given time.

     

    EVERYONE on the net says "wahhh wahhh, my writes are slow"; I seem to be the only one who gets 100 MB/s writes out of the box but has to complain that reads are slow.

     

     

    My network is as follows:

     

    Internet -> pfSense box -> GbE switch -> many computers including the server

    Static IPs set on all devices.

     

     

     

     
