
Kuroyukihimeeee

Members · 40 posts

Posts posted by Kuroyukihimeeee

  1. Not sure what's gone on. I lost power a few days back, and ever since, my VMs have been unable to access the network. They all have br0 set as their network device, and under Network Settings eth0 is bridged.

     

     On a Windows VM, if I set the IP to, say, 10.0.0.100, it can then ping the Unraid server on 10.0.0.11, but it cannot ping or reach anything else, for example 10.0.0.1 where my router is.

     

     I have tried the following to resolve the issue:

     • Set all network settings to static and rebooted. No change.
     • Set all network settings to automatic and rebooted. No change.
     • Replaced network.cfg with the original from a fresh Unraid download. No change.
     • Created another new VM on br0; it still has no network access. It can ping the host when an IP is set manually, but nothing more.

     

     If I assign the VMs virbr0 they are able to access the internet, and a tracert shows traffic going from 192.168.X.X to 10.0.0.X and out to the internet. However, they are then not on the same network segment as everything else, which is something I need them to be.

     Also verified that other devices can get DHCP and internet access, so it's not my router's end. It's between Unraid and br0.

     

     Is there any way to reset br0, or the entire networking stack of Unraid? I replaced network.cfg but it seems to have made no difference. Diags attached.
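
     For reference, these are the sort of standard Linux commands that can show the bridge state from the Unraid console (just a sketch; the interface names are assumed to match the setup described above, and tcpdump may not be present on a stock install):

     # list the interfaces enslaved to the bridge - eth0 and the VM vnetX devices should all appear
     brctl show br0
     # confirm br0 is up and holds the host IP
     ip addr show br0
     # if tcpdump is available, watch a VM's pings to the router arrive on the bridge
     tcpdump -i br0 icmp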

     

     Please help, you're my only hope!

    avalon-diagnostics-20170307-0532.zip

  2. My Dockers sometimes show that they need an update, even when there isn't one, when their DNS isn't responding. Have you restarted your Unraid server since changing ISP and getting a new router?

     

     Edit: Crap, sorry for the necro.

  3. I had two of those cards in the past, if I remember correctly, or a very similar Fujitsu card.

     

     Several hours were wasted attempting to flash them, and one was left bricked. Fujitsu seems to have changed something on the card so that it will not recognise stock LSI firmware.

     

     Link below to a really good flashing thread, along with the problems I encountered with the Fujitsu cards.

    https://lime-technology.com/forum/index.php?topic=12767.msg469057;topicseen#msg469057

     

     Edit: as Fireball3 put it in that thread, "Although the card may be "just rebranded", probably there are checks in place to prevent crossflashing with LSI or other firmware."

  4. Hey all. This is probably stupidly simple, but I'm a beginner with nginx configs, and four hours and many, many Google searches later I'm not much further into my problem.

     

     Essentially, I have stats.domain.co.uk loading up PlexPy perfectly using Let's Encrypt. Now I'm trying to get requests.domain.co.uk to point to Plex Requests.

     

     My default config file is below. The first and second "server_name" seem to work perfectly: HTTP traffic gets redirected and HTTPS reaches PlexPy fine, but the third server_name doesn't make it to Plex Requests. I even tried using stats.domain.co.uk with the port for Requests and that works fine, so it doesn't seem to be the Docker.

     

     Any pointers as to what I need to change below? I just want requests.domain.co.uk to work alongside stats.domain.co.uk.

     

    # redirect all traffic to https
    server {
    listen 80;
    server_name _;
    return 301 https://$host$request_uri;
    }
    
    server {
    listen 443 ssl;
    
    root /config/www;
    index index.html index.htm index.php;
    
    server_name stats.*;
    
    ssl_certificate /config/keys/letsencrypt/fullchain.pem;
    ssl_certificate_key /config/keys/letsencrypt/privkey.pem;
    ssl_dhparam /config/nginx/dhparams.pem;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
    ssl_prefer_server_ciphers on;
    
    client_max_body_size 0;
    
    #PLEX STATS
    
    location / {
    #		auth_basic "Restricted";
    #		auth_basic_user_file /config/nginx/.htpasswd;
    	include /config/nginx/proxy.conf;
    	proxy_pass http://10.0.0.11:8181;	
    }
    }
    
    
    server {
    listen 443 ssl;
    
    root /config/www;
    index index.html index.htm index.php;
    
    server_name requests.*;
    
    ssl_certificate /config/keys/letsencrypt/fullchain.pem;
    ssl_certificate_key /config/keys/letsencrypt/privkey.pem;
    ssl_dhparam /config/nginx/dhparams.pem;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
    ssl_prefer_server_ciphers on;
    
    client_max_body_size 0;
    
    #PLEX REQUESTS
    
    location / {
    #		auth_basic "Restricted";
    #		auth_basic_user_file /config/nginx/.htpasswd;
    	include /config/nginx/proxy.conf;
    	proxy_pass http://10.0.0.11:3000/request;	
    }
    }
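
     A couple of generic checks that can help narrow this kind of thing down (a sketch only; it assumes the usual LinuxServer letsencrypt container layout with nginx inside it, and that curl is available on a machine whose DNS points both hostnames at the proxy):

     # inside the nginx/letsencrypt container: syntax-check the config and reload it after edits
     nginx -t && nginx -s reload
     # then see which backend actually answers each hostname (-k ignores certificate warnings)
     curl -kI https://stats.domain.co.uk
     curl -kI https://requests.domain.co.uk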
    

  5. Tried force user = nobody, and now I'm finding CouchPotato can't delete/rename movies, and I still have to run New Permissions on my desktop share.

     

     So strange. Thinking of just turning it all off again and leaving it unprotected if I can. Not secure at all, but running that utility all the time is such a pain in the ass.

     

     Edit: after some googling, added "read only = no" into the SMB config.
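
     For anyone landing here later, this is roughly where that kind of override lives in the SMB extra configuration; the share name and path below are only an example of the pattern, not my exact config:

     # per-share override added to the Samba extra config
     [Desktop]
         path = /mnt/user/Desktop
         read only = no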

  6. Might have to dig around. A quick search can't find them in SickRage or qBittorrent.

     

     Super strange. This had no issues before with no user accounts; as soon as I added one and set all shares to private with read/write for that account, it started being an issue :/

  7. Hey all,

     

     Recently set up some user accounts on my server, but I am finding that qBittorrent, SickRage, etc. now seem to "lock" files to the wrong permissions, and I can't copy, rename, etc. without first running the New Permissions tool every time a new file is added. Before and after permissions are shown below.

     

     [screenshot: w72BsUs.png]

     

     All the Dockers are set to 99:100 for user and group. Do I need to set them to something different? I just get the error "You need permission from SERVERNAME\nobody to edit this file" whenever I try to do something.

     

     What should I be setting the user and group to in order to allow everybody read/write permissions?
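
     For context, 99:100 maps to nobody:users, and the lines below are roughly the sort of thing the New Permissions tool does, limited to a single share (the path is just an example):

     # reset ownership and give user/group/other read-write on one share
     chown -R nobody:users /mnt/user/Desktop
     chmod -R ugo+rw /mnt/user/Desktop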

  8. Might have to look into them. It might take a while (9TB of data would need to be placed somewhere).

     

     Still, something seems off. I'm thinking of building an entirely new server in the next few months anyway, so I might just throw in the towel for now and see if the new server performs better.

  9. Did a little more tweaking on this using a 10GB test file. Now managing to get about 700MB/s reads from the cache to a ramdisk, but it's not sustained; it goes up for a second or so and then drops back to the usual 350-400MB/s. Top screenshot below.

     

     Writing still seems to be capped at a solid 525MB/s. No drops or anything, it just refuses to go higher.

     

     Still stumped by this. It does seem to be showing improvement from the changes I have made, but it's still not what it should be.

     

     [screenshot: Nkt8RSO.png]

     

     Made the tweaks below to my sysctl.conf file, taken from this link - http://dak1n1.com/blog/7-performance-tuning-intel-10gbe/

     

     # -- tuning -- #
    # Increase system file descriptor limit
    fs.file-max = 65535
    
    # Increase system IP port range to allow for more concurrent connections
    net.ipv4.ip_local_port_range = 1024 65000
    
    # -- 10gbe tuning from Intel ixgb driver README -- #
    
    # turn off selective ACK and timestamps
    net.ipv4.tcp_sack = 0
    net.ipv4.tcp_timestamps = 0
    
    # memory allocation min/pressure/max.
    # read buffer, write buffer, and buffer space
    net.ipv4.tcp_rmem = 10000000 10000000 10000000
    net.ipv4.tcp_wmem = 10000000 10000000 10000000
    net.ipv4.tcp_mem = 10000000 10000000 10000000
    
    net.core.rmem_max = 524287
    net.core.wmem_max = 524287
    net.core.rmem_default = 524287
    net.core.wmem_default = 524287
    net.core.optmem_max = 524287
    net.core.netdev_max_backlog = 300000
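
     To pick the new values up without a reboot and spot-check that they took, something along these lines works (file path assumed):

     # reload the tunables and print one back
     sysctl -p /etc/sysctl.conf
     sysctl net.ipv4.tcp_rmem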

  10. So, writing directly to the array with no cache SSD looks like this. It starts at about 200MB/s solid, then drops down to the slow speeds. I'm assuming parity calculations, the HBA, etc. are why it's so slow straight to disk. The array is all WD Red 3TBs: 3 data, 1 parity.

     

     [screenshot: a1uk8Si.png]

     

     

     Tried taking out all other add-in cards so the only cards left are the NIC and HBA, which should free up more lanes and CPU resources for them; both are in x16 PCIe 2.0 slots. Both of them, however, only have x8 connectors on the bottom, so they will be limited to x8 speeds, but that should still be more than enough. I highly doubt Intel would make a dual-port 10GbE NIC that does not have the bandwidth available to it. Same for the LSI-based HBA.

     

     Since swapping PCIe slots and removing the other cards, I see a different message in the syslog about bandwidth, with no warning this time.

     

    Jul 25 22:40:55 AVALON kernel: Intel(R) 10 Gigabit PCI Express Network Driver - version 4.3.13
    Jul 25 22:40:55 AVALON kernel: Copyright (c) 1999-2015 Intel Corporation.
    Jul 25 22:40:56 AVALON kernel: ixgbe 0000:02:00.0: PCI Express bandwidth of 32GT/s available
    Jul 25 22:40:56 AVALON kernel: ixgbe 0000:02:00.0: (Speed:5.0GT/s, Width: x8, Encoding Loss:20%)
    Jul 25 22:40:56 AVALON kernel: ixgbe 0000:02:00.0 eth0: MAC: 3, PHY: 3, PBA No: G36748-005
    Jul 25 22:40:56 AVALON kernel: ixgbe 0000:02:00.0: a0:36:9f:3c:59:88
    Jul 25 22:40:56 AVALON kernel: ixgbe 0000:02:00.0 eth0: Enabled Features: RxQ: 16 TxQ: 16 FdirHash 
    Jul 25 22:40:56 AVALON kernel: ixgbe 0000:02:00.0 eth0: Intel(R) 10 Gigabit Network Connection
    Jul 25 22:40:57 AVALON kernel: ixgbe 0000:02:00.1: PCI Express bandwidth of 32GT/s available
    Jul 25 22:40:57 AVALON kernel: ixgbe 0000:02:00.1: (Speed:5.0GT/s, Width: x8, Encoding Loss:20%)
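
     A quick way to double-check the negotiated PCIe link for the NIC, using the bus address from the syslog above (assuming lspci is available, which it normally is on a stock install):

     # LnkCap is what the card supports, LnkSta is what it actually negotiated
     lspci -s 02:00.0 -vv | grep -E 'LnkCap|LnkSta'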

  11. Just gave it a go with a 15.5GB file. Still solid transfer speeds (from about 525MB/s down to 485MB/s), so I don't think it's NIC-related at the moment. Might try tomorrow making sure my HBA and NIC are both in PCIe 2.0 x16 slots to see if I can find any performance gain.

     

     I mean, it's still a super clean 5GbE-class transfer, but still half of what it should be.

     

     [screenshot: ZGLg4b0.png]

  12. Hey all,

     

     Having a small issue. Basically, I'm trying to work out where my system's bottleneck is for its cache speeds. Currently the cache is running 3x 850 120GB SSDs set up as btrfs RAID0 using -dconvert=raid0.

     

     No matter what I try, though, I seem to keep hitting a limit of about 458MB/s write and around 355MB/s read. I've tried different HBA controllers for the SSDs and different PCIe sockets, and still no change. My HBA cards are currently only Dell H200s passing the disks directly to unRAID; I appreciate these are not the most powerful cards on the planet, so I suspect they could be the issue, but I had the same problem when running directly off the SATA2 ports on the motherboard too.

     

     Tried using hdparm -tT on one of the SSDs and it reports: Timing cached reads: 16800 MB in 2.00 seconds = 8409.01 MB/sec, and Timing buffered disk reads: 1394 MB in 3.00 seconds = 464.27 MB/sec. Not sure if this is useful at all, just that those buffered disk reads seem to be very close to the ramdisk-to-cache-array speeds.
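
     Along the same lines, a local dd run against the pool takes the network and page cache out of the picture entirely (just a sketch; the test path is an example and the file gets deleted afterwards):

     # sequential write then read on the cache pool using direct I/O
     dd if=/dev/zero of=/mnt/cache/ddtest bs=1M count=10240 oflag=direct
     dd if=/mnt/cache/ddtest of=/dev/null bs=1M iflag=direct
     rm /mnt/cache/ddtest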

     

     Both clients have an Intel X540-T2 10GbE network card and are connected using a CAT7 cable. Both have been tried with MTUs from 1500 up to 9014, with virtually no change. The unRAID box has 8 cores / 16 threads and gets up to about 30% usage when copying; the other system has an i7 4790K and uses about 15% when copying, so I'm fairly sure it's not a processor issue.

     

     Again, I'm not sure if this is just a blatant message pointing out the issue, but the syslog does have these messages about the H200 and the X540-T2 when booting.

     

    Jul 18 23:21:48 AVALON kernel: ixgbe 0000:04:00.0 eth0: Intel(R) 10 Gigabit Network Connection
    Jul 18 23:21:49 AVALON kernel: ixgbe 0000:04:00.1: PCI Express bandwidth of 8GT/s available
    Jul 18 23:21:49 AVALON kernel: ixgbe 0000:04:00.1: (Speed:2.5GT/s, Width: x4, Encoding Loss:20%)
    Jul 18 23:21:49 AVALON kernel: ixgbe 0000:04:00.1: This is not sufficient for optimal performance of this card.
    Jul 18 23:21:49 AVALON kernel: ixgbe 0000:04:00.1: For optimal performance, at least 20GT/s of bandwidth is required.
    Jul 18 23:21:49 AVALON kernel: ixgbe 0000:04:00.1: A slot with more lanes and/or higher speed is suggested.

     

     Just wondering if anybody has any tips to break this barrier I seem to be hitting? Or is there a system bottleneck I haven't identified yet? It's almost like unRAID is ignoring the RAID0 of the cache and only writing to/reading from a single SSD at any one time.
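
     One easy thing to rule out is whether the data profile really converted to RAID0 after the balance (standard btrfs tooling below; the mount point is assumed to be /mnt/cache):

     # the Data line should report RAID0, not single
     btrfs filesystem df /mnt/cache
     # confirm no balance is still running
     btrfs balance status /mnt/cache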

     

     [screenshot: NAg2XIR.png]

    avalon-diagnostics-20160725-1550.zip

  13. Hey all. I recently upgraded to RC1 and then RC2; I'm pretty sure this is unrelated, but at the same time my array started having issues.

     

     My log is filled with the constant errors shown below, and there are I/O errors on Windows-based machines trying to write to the array as well.

     

    "Jul 16 12:00:58 AVALON kernel: XFS (md3): xfs_log_force: error -5 returned."

     

     All disks were showing as active, but an entire disk's worth of files was just missing.

     Rebooted the array and it came back, but I'm going to keep a close eye on it now.

     

     All SMART tests come back clear, and the drives are 3-4 months old, yet an entire drive dropped without unRAID even noticing...

     

     I kept the logs open and watched Plex as normal; about 10 minutes after starting the array, this kicks in, shown in red:

     

    Jul 16 19:46:44 AVALON kernel: XFS (md3): Internal error XFS_WANT_CORRUPTED_GOTO at line 1635 of file fs/xfs/libxfs/xfs_alloc.c. Caller xfs_alloc_fix_freelist+0x155/0x2de
    Jul 16 19:46:44 AVALON kernel: CPU: 7 PID: 22360 Comm: kworker/u34:7 Tainted: G IO 4.4.15-unRAID #1
    Jul 16 19:46:44 AVALON kernel: Hardware name: FUJITSU CELSIUS R570-2 /D2628-C1, BIOS 6.00 R1.22.2628.C1 12/19/2011
    Jul 16 19:46:44 AVALON kernel: Workqueue: writeback wb_workfn (flush-9:3)
    Jul 16 19:46:44 AVALON kernel: 0000000000000000 ffff88050f8cb480 ffffffff81369dfe ffff8804ea7f8000
    Jul 16 19:46:44 AVALON kernel: 0000000000000001 ffff88050f8cb498 ffffffff81274baa ffffffff812457e0
    Jul 16 19:46:44 AVALON kernel: ffff88050f8cb510 ffffffff81244eb1 0000000300000001 ffff8801ad7ec780
    Jul 16 19:46:44 AVALON kernel: Call Trace:
    Jul 16 19:46:44 AVALON kernel: [<ffffffff81369dfe>] dump_stack+0x61/0x7e
    Jul 16 19:46:44 AVALON kernel: [<ffffffff81274baa>] xfs_error_report+0x32/0x35
    Jul 16 19:46:44 AVALON kernel: [<ffffffff812457e0>] ? xfs_alloc_fix_freelist+0x155/0x2de
    Jul 16 19:46:44 AVALON kernel: [<ffffffff81244eb1>] xfs_free_ag_extent+0x13d/0x558
    Jul 16 19:46:44 AVALON kernel: [<ffffffff81243ee2>] ? xfs_alloc_get_freelist+0x135/0x14d
    Jul 16 19:46:44 AVALON kernel: [<ffffffff812457e0>] xfs_alloc_fix_freelist+0x155/0x2de
    Jul 16 19:46:44 AVALON kernel: [<ffffffff8126b1cd>] ? xfs_perag_get+0x3a/0x44
    Jul 16 19:46:44 AVALON kernel: [<ffffffff81245bb2>] xfs_alloc_vextent+0x249/0x3a3
    Jul 16 19:46:44 AVALON kernel: [<ffffffff81251251>] xfs_bmap_btalloc+0x3c8/0x598
    Jul 16 19:46:44 AVALON kernel: [<ffffffff8125142a>] xfs_bmap_alloc+0x9/0xb
    Jul 16 19:46:44 AVALON kernel: [<ffffffff81251cbe>] xfs_bmapi_write+0x40e/0x80f
    Jul 16 19:46:44 AVALON kernel: [<ffffffff8127cd9d>] xfs_iomap_write_allocate+0x1c1/0x2bd
    Jul 16 19:46:44 AVALON kernel: [<ffffffff8126dc40>] xfs_map_blocks+0x134/0x143
    Jul 16 19:46:44 AVALON kernel: [<ffffffff8126e8e7>] xfs_vm_writepage+0x287/0x483
    Jul 16 19:46:44 AVALON kernel: [<ffffffff810c0eb2>] __writepage+0xe/0x26
    Jul 16 19:46:44 AVALON kernel: [<ffffffff810c1372>] write_cache_pages+0x249/0x351
    Jul 16 19:46:44 AVALON kernel: [<ffffffff810c0ea4>] ? mapping_tagged+0xf/0xf
    Jul 16 19:46:44 AVALON kernel: [<ffffffff810c14c2>] generic_writepages+0x48/0x67
    Jul 16 19:46:44 AVALON kernel: [<ffffffff8137a240>] ? find_next_bit+0x15/0x1b
    Jul 16 19:46:44 AVALON kernel: [<ffffffff8136a14d>] ? fprop_fraction_percpu+0x32/0x72
    Jul 16 19:46:44 AVALON kernel: [<ffffffff8126daef>] xfs_vm_writepages+0x3f/0x47
    Jul 16 19:46:44 AVALON kernel: [<ffffffff8126daef>] ? xfs_vm_writepages+0x3f/0x47
    Jul 16 19:46:44 AVALON kernel: [<ffffffff810c2e31>] do_writepages+0x1b/0x24
    Jul 16 19:46:44 AVALON kernel: [<ffffffff81129a8f>] __writeback_single_inode+0x3d/0x151
    Jul 16 19:46:44 AVALON kernel: [<ffffffff8112a044>] writeback_sb_inodes+0x20d/0x3ad
    Jul 16 19:46:44 AVALON kernel: [<ffffffff8112a255>] __writeback_inodes_wb+0x71/0xa9
    Jul 16 19:46:44 AVALON kernel: [<ffffffff8112a43b>] wb_writeback+0x10b/0x195
    Jul 16 19:46:44 AVALON kernel: [<ffffffff8112a984>] wb_workfn+0x18e/0x22b
    Jul 16 19:46:44 AVALON kernel: [<ffffffff8112a984>] ? wb_workfn+0x18e/0x22b
    Jul 16 19:46:44 AVALON kernel: [<ffffffff8105acd4>] process_one_work+0x194/0x2a0
    Jul 16 19:46:44 AVALON kernel: [<ffffffff8105b68a>] worker_thread+0x26b/0x353
    Jul 16 19:46:44 AVALON kernel: [<ffffffff8105b41f>] ? rescuer_thread+0x285/0x285
    Jul 16 19:46:44 AVALON kernel: [<ffffffff8105f910>] kthread+0xcd/0xd5
    Jul 16 19:46:44 AVALON kernel: [<ffffffff8105f843>] ? kthread_worker_fn+0x137/0x137
    Jul 16 19:46:44 AVALON kernel: [<ffffffff816232bf>] ret_from_fork+0x3f/0x70
    Jul 16 19:46:44 AVALON kernel: [<ffffffff8105f843>] ? kthread_worker_fn+0x137/0x137
    Jul 16 19:46:44 AVALON kernel: XFS (md3): Internal error xfs_trans_cancel at line 990 of file fs/xfs/xfs_trans.c. Caller xfs_iomap_write_allocate+0x298/0x2bd
    Jul 16 19:46:44 AVALON kernel: CPU: 7 PID: 22360 Comm: kworker/u34:7 Tainted: G IO 4.4.15-unRAID #1
    Jul 16 19:46:44 AVALON kernel: Hardware name: FUJITSU CELSIUS R570-2 /D2628-C1, BIOS 6.00 R1.22.2628.C1 12/19/2011
    Jul 16 19:46:44 AVALON kernel: Workqueue: writeback wb_workfn (flush-9:3)
    Jul 16 19:46:44 AVALON kernel: 0000000000000000 ffff88050f8cb820 ffffffff81369dfe ffff8804ba061790
    Jul 16 19:46:44 AVALON kernel: 0000000000000101 ffff88050f8cb838 ffffffff81274baa ffffffff8127ce74
    Jul 16 19:46:44 AVALON kernel: ffff88050f8cb860 ffffffff81288dc0 ffff8800cc6ef800 0000000000053e82
    Jul 16 19:46:44 AVALON kernel: Call Trace:
    Jul 16 19:46:44 AVALON kernel: [<ffffffff81369dfe>] dump_stack+0x61/0x7e
    Jul 16 19:46:44 AVALON kernel: [<ffffffff81274baa>] xfs_error_report+0x32/0x35
    Jul 16 19:46:44 AVALON kernel: [<ffffffff8127ce74>] ? xfs_iomap_write_allocate+0x298/0x2bd
    Jul 16 19:46:44 AVALON kernel: [<ffffffff81288dc0>] xfs_trans_cancel+0x49/0xbf
    Jul 16 19:46:44 AVALON kernel: [<ffffffff8127ce74>] xfs_iomap_write_allocate+0x298/0x2bd
    Jul 16 19:46:44 AVALON kernel: [<ffffffff8126dc40>] xfs_map_blocks+0x134/0x143
    Jul 16 19:46:44 AVALON kernel: [<ffffffff8126e8e7>] xfs_vm_writepage+0x287/0x483
    Jul 16 19:46:44 AVALON kernel: [<ffffffff810c0eb2>] __writepage+0xe/0x26
    Jul 16 19:46:44 AVALON kernel: [<ffffffff810c1372>] write_cache_pages+0x249/0x351
    Jul 16 19:46:44 AVALON kernel: [<ffffffff810c0ea4>] ? mapping_tagged+0xf/0xf
    Jul 16 19:46:44 AVALON kernel: [<ffffffff810c14c2>] generic_writepages+0x48/0x67
    Jul 16 19:46:44 AVALON kernel: [<ffffffff8137a240>] ? find_next_bit+0x15/0x1b
    Jul 16 19:46:44 AVALON kernel: [<ffffffff8136a14d>] ? fprop_fraction_percpu+0x32/0x72
    Jul 16 19:46:44 AVALON kernel: [<ffffffff8126daef>] xfs_vm_writepages+0x3f/0x47
    Jul 16 19:46:44 AVALON kernel: [<ffffffff8126daef>] ? xfs_vm_writepages+0x3f/0x47
    Jul 16 19:46:44 AVALON kernel: [<ffffffff810c2e31>] do_writepages+0x1b/0x24
    Jul 16 19:46:44 AVALON kernel: [<ffffffff81129a8f>] __writeback_single_inode+0x3d/0x151
    Jul 16 19:46:44 AVALON kernel: [<ffffffff8112a044>] writeback_sb_inodes+0x20d/0x3ad
    Jul 16 19:46:44 AVALON kernel: [<ffffffff8112a255>] __writeback_inodes_wb+0x71/0xa9
    Jul 16 19:46:44 AVALON kernel: [<ffffffff8112a43b>] wb_writeback+0x10b/0x195
    Jul 16 19:46:44 AVALON kernel: [<ffffffff8112a984>] wb_workfn+0x18e/0x22b
    Jul 16 19:46:44 AVALON kernel: [<ffffffff8112a984>] ? wb_workfn+0x18e/0x22b
    Jul 16 19:46:44 AVALON kernel: [<ffffffff8105acd4>] process_one_work+0x194/0x2a0
    Jul 16 19:46:44 AVALON kernel: [<ffffffff8105b68a>] worker_thread+0x26b/0x353
    Jul 16 19:46:44 AVALON kernel: [<ffffffff8105b41f>] ? rescuer_thread+0x285/0x285
    Jul 16 19:46:44 AVALON kernel: [<ffffffff8105f910>] kthread+0xcd/0xd5
    Jul 16 19:46:44 AVALON kernel: [<ffffffff8105f843>] ? kthread_worker_fn+0x137/0x137
    Jul 16 19:46:44 AVALON kernel: [<ffffffff816232bf>] ret_from_fork+0x3f/0x70
    Jul 16 19:46:44 AVALON kernel: [<ffffffff8105f843>] ? kthread_worker_fn+0x137/0x137
    Jul 16 19:46:44 AVALON kernel: XFS (md3): xfs_do_force_shutdown(0x8) called from line 991 of file fs/xfs/xfs_trans.c. Return address = 0xffffffff81288dd9
    Jul 16 19:46:44 AVALON kernel: XFS (md3): Corruption of in-memory data detected. Shutting down filesystem
    Jul 16 19:46:44 AVALON kernel: XFS (md3): Please umount the filesystem and rectify the problem(s)
    Jul 16 19:46:44 AVALON shfs/user0: shfs_read: read: (5) Input/output error
    Jul 16 19:46:44 AVALON shfs/user0: shfs_read: read: (5) Input/output error
    Jul 16 19:46:44 AVALON shfs/user0: shfs_read: read: (5) Input/output error
    Jul 16 19:46:44 AVALON shfs/user0: shfs_read: read: (5) Input/output error
    Jul 16 19:46:44 AVALON shfs/user0: shfs_read: read: (5) Input/output error
    Jul 16 19:46:44 AVALON shfs/user0: shfs_read: read: (5) Input/output error
    Jul 16 19:46:45 AVALON shfs/user0: shfs_write: write: (5) Input/output error
    Jul 16 19:46:45 AVALON shfs/user0: shfs_write: write: (5) Input/output error
    Jul 16 19:46:45 AVALON shfs/user0: shfs_write: write: (5) Input/output error
    Jul 16 19:46:45 AVALON shfs/user0: shfs_write: write: (5) Input/output error
    Jul 16 19:46:45 AVALON shfs/user0: shfs_write: write: (5) Input/output error
    Jul 16 19:46:45 AVALON shfs/user0: shfs_write: write: (5) Input/output error
    Jul 16 19:46:45 AVALON shfs/user0: shfs_write: write: (5) Input/output error
    Jul 16 19:46:45 AVALON shfs/user0: shfs_write: write: (5) Input/output error
    Jul 16 19:46:45 AVALON shfs/user0: shfs_write: write: (5) Input/output error
    Jul 16 19:46:45 AVALON shfs/user0: shfs_write: write: (5) Input/output error
    Jul 16 19:46:45 AVALON shfs/user0: shfs_write: write: (5) Input/output error
    Jul 16 19:46:45 AVALON shfs/user0: shfs_write: write: (5) Input/output error
    Jul 16 19:46:45 AVALON shfs/user0: shfs_write: write: (5) Input/output error
    
    

     

     Ran memtest all night; it came back with 3 passes and 0 errors, so it's still running now. Does anybody have any ideas as to what the problem could be?

     

     Never had any issues in the past. The array keeps running as normal, but the drive that is dropping out obviously means the files on it can't be accessed. I can still open the folders, but the files inside them are missing.

     

     If I take down the array and then reboot, it works perfectly for another 10 minutes or so until the error kicks in again.
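
     For anyone hitting the same trace: the usual next step would be a read-only filesystem check on the affected disk with the array started in maintenance mode (sketch below; the device name is taken from the md3 messages above, and -n keeps it non-destructive):

     # dry-run check of the XFS filesystem on disk 3
     xfs_repair -n /dev/md3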

    avalon-diagnostics-20160716-1447.zip

  14. Anybody else seeing loads of XFS errors since this update?

     

     My log is filled with the constant errors shown below, and there are I/O errors on Windows-based machines trying to write to the array as well.

     

    "Jul 16 12:00:58 AVALON kernel: XFS (md3): xfs_log_force: error -5 returned."

    my experience is that this normally means the disk has dropped offline.

     

    You were right

     

     All disks were showing as active, but an entire disk's worth of files was just missing.

     Rebooted the array and it came back, but I'm going to keep a close eye on it now.

     

     All SMART tests come back clear, and the drives are 3-4 months old, yet an entire drive dropped without unRAID even noticing...

     

     I kept the logs open and watched Plex as normal; about 10 minutes after starting the array, this kicks in, shown in red:

     

     (Same XFS internal error trace and shfs read/write I/O errors as quoted in the post above.)
    
    

     

     Going to run memtest now; seems to be RAM-related.

  15. Anybody else seeing loads of XFS errors since this update?

     

     My log is filled with the constant errors shown below, and there are I/O errors on Windows-based machines trying to write to the array as well.

     

    "Jul 16 12:00:58 AVALON kernel: XFS (md3): xfs_log_force: error -5 returned."

     

     I would suggest that you follow the request (see the first post in this thread) to attach your diagnostics file to your post if you want a definite diagnosis. Your description is most likely the result of the problem and not the cause of the actual problem.

     

     Diag attached. I was posting more to see if this was just a known issue that others are experiencing.

     

    avalon-diagnostics-20160716-1447.zip

  16. Worked perfectly. I ended up keeping my go file for some custom network binds, and I made sure all the disks were in the right place so no data was lost.

     

     The only thing I had to do was set all the shares to use the cache drives again, and that was pretty much it.

     

     Running much more stably now; excellent work on the wiki page. Hell, even my array is performing at almost double its old speed in some cases! Something must have been really wrong with my old install.

  17. So I'm wondering if it's possible to reinstall unRAID from scratch, i.e. wiping the flash drive, downloading a fresh copy and starting over. I've checked the wiki and forums; some say yes, some say no. Quite vague information is all I can really find, and I'm not up for risking my data.

     

     The reason for reinstalling is that unRAID is just causing so many issues right now that I want to nuke it and start over. It will not shut down without a hard reset, Dockers randomly lose file permissions (resulting in me having to run the New Permissions utility at least 5 times a day), and the array often crashes or freezes. Over my time with it I have made many tweaks to the go file, upgraded, downgraded and swapped hardware, and at this point I think it would just be faster to start over rather than backtrack every tweak, plugin, addon, Docker, etc. to see where something is slightly wrong.

     

     My only concern is that I want to keep my existing array without losing data. Will unRAID just import its old array, or do I need to grab some files and put them on the fresh install, if it's possible at all? I can't really dump the array anywhere else, because I don't have 9TB of spare storage lying around.
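
     For what it's worth, before wiping anything the plan would be to copy the flash drive's config folder somewhere safe, since super.dat in there holds the disk assignments (paths below are just an example):

     # back up the flash config before rebuilding the USB stick
     mkdir -p /mnt/user/backup/flash
     cp -r /boot/config /mnt/user/backup/flash/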

     

    Thanks in advance!
