
Doug Eubanks


Posts posted by Doug Eubanks

  1. I've seen several posts about this problem, but I haven't seen a clear answer that is confirmed to work.

    Every time I go to perform maintenance, I realize my mover is still running. It's always running. I stopped it and started it again; it's now been running for 6 hours, reporting lines like this:

    /usr/lib/x86_64-linux-gnu/perl-base/unicore/lib/Bpt/O.pl File exists
    file: /mnt/disk2/system/docker/docker/btrfs/subvolumes/624761d8896971a8f64b96efbb92d017a8567e0942f7dda3416467fcc90737fa/usr/lib/x86_64-linux-gnu/perl-base/unicore/lib/CWU/Y.pl
    move_object: /mnt/disk2/system/docker/docker/btrfs/subvolumes/624761d8896971a8f64b96efbb92d017a8567e0942f7dda3416467fcc90737fa/usr/lib/x86_64-linux-gnu/perl-base/unicore/lib/CWU/Y.pl File exists
    file: /mnt/disk2/system/docker/docker/btrfs/subvolumes/624761d8896971a8f64b96efbb92d017a8567e0942f7dda3416467fcc90737fa/usr/lib/x86_64-linux-gnu/perl-base/unicore/lib/Lower/Y.pl
    move_object: /mnt/disk2/system/docker/docker/btrfs/subvolumes/624761d8896971a8f64b96efbb92d017a8567e0942f7dda3416467fcc90737fa/usr/lib/x86_64-linux-gnu/perl-base/unicore/lib/Lower/Y.pl File exists
    file: /mnt/disk2/system/docker/docker/btrfs/subvolumes/624761d8896971a8f64b96efbb92d017a8567e0942f7dda3416467fcc90737fa/usr/lib/x86_64-linux-gnu/perl-base/unicore/lib/CWCF/Y.pl
    move_object: /mnt/disk2/system/docker/docker/btrfs/subvolumes/624761d8896971a8f64b96efbb92d017a8567e0942f7dda3416467fcc90737fa/usr/lib/x86_64-linux-gnu/perl-base/unicore/lib/CWCF/Y.pl File exists
    file: /mnt/disk2/system/docker/docker/btrfs/subvolumes/624761d8896971a8f64b96efbb92d017a8567e0942f7dda3416467fcc90737fa/usr/lib/x86_64-linux-gnu/perl-base/unicore/lib/CWL/Y.pl
    move_object: /mnt/disk2/system/docker/docker/btrfs/subvolumes/624761d8896971a8f64b96efbb92d017a8567e0942f7dda3416467fcc90737fa/usr/lib/x86_64-linux-gnu/perl-base/unicore/lib/CWL/Y.pl File exists
    file: /mnt/disk2/system/docker/docker/btrfs/subvolumes/624761d8896971a8f64b96efbb92d017a8567e0942f7dda3416467fcc90737fa/usr/lib/x86_64-linux-gnu/perl-base/unicore/lib/IDC/Y.pl
    move_object: /mnt/disk2/system/docker/docker/btrfs/subvolumes/624761d8896971a8f64b96efbb92d017a8567e0942f7dda3416467fcc90737fa/usr/lib/x86_64-linux-gnu/perl-base/unicore/lib/IDC/Y.pl File exists
    file: /mnt/disk2/system/docker/docker/btrfs/subvolumes/624761d8896971a8f64b96efbb92d017a8567e0942f7dda3416467fcc90737fa/usr/lib/x86_64-linux-gnu/perl-base/unicore/lib/Perl/_PerlCha.pl

     

    I have Docker configured to use a directory, not an image.

    I have Docker configured like this:
    Docker directory: /mnt/cache/system/docker/docker/
    Default appdata storage location: /mnt/nvme/dockerApps/

    The system and dockerApps shares are both configured to move to the cache and NVMe pools, respectively. The goal is to have system files on the cache and Docker application data (and VMs) on the NVMe drives. I'm configuring it this way so that the maximum amount of cache is available to the array, and the IO on the NVMe Docker directories doesn't affect the cache.

     

    System Share Configuration

     

    dockerApps Share Configuration

     

    What's the best way to get this back to the correct state? Can I just rsync the data from the disks back to the cache and then delete it from the disks? I'm proficient in bash, so I'm comfortable performing the operations from the shell. I just need to know which copies (/mnt/disk*/system or /mnt/cache/system) are the ones it's actually using and which I can dispose of.
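
    Roughly what I have in mind (a hypothetical sketch only; it assumes /mnt/cache/system is the live copy, /mnt/disk*/system holds the stale duplicates, and Docker plus the mover are stopped first):

    # Hypothetical sketch: consolidate array-side duplicates back onto the cache.
    # Assumes /mnt/cache/system is the copy Docker is actually using.
    for d in /mnt/disk*/system; do
        # Dry run first; drop --dry-run only after reviewing what it would copy.
        rsync -avih --dry-run "$d/" /mnt/cache/system/
    done
    # Only after verifying the cache copy is complete would I remove the
    # array-side duplicates from each /mnt/diskN/system.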

  2. 7 hours ago, ich777 said:

    It seems that sdw has some sort of issue, are you sure that the disk is okay?

    It's possible that it does have an issue.  I don't know if it's a filesystem issue or a hardware issue with the Drobo itself.  I have two of them that I inherited from work during the COVID fire sale.  Out of all the mounted volumes, I think this is the only one that's giving me a problem.

    I unmounted the volume from unRAID, mounted it from my Linux workstation, and I'm seeing the same errors.  Once I get the data copied back to unRAID, I'll do some testing.  I guess now that I'm seeing this error on the workstation as well, you can ignore my report.  Thanks for the reply!

  3. I'm having a problem with this message repeated over and over.

    Nov 16 17:44:49 unRAID kernel: sd 16:0:0:0: [sdw] tag#59 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=DRIVER_OK cmd_age=0s
    Nov 16 17:44:49 unRAID kernel: sd 16:0:0:0: [sdw] tag#59 Sense Key : 0x3 [current]
    Nov 16 17:44:49 unRAID kernel: sd 16:0:0:0: [sdw] tag#59 ASC=0x11 ASCQ=0x1
    Nov 16 17:44:49 unRAID kernel: sd 16:0:0:0: [sdw] tag#59 CDB: opcode=0x88 88 00 00 00 00 05 44 8a 93 08 00 00 08 18 00 00
    Nov 16 17:44:49 unRAID kernel: critical medium error, dev sdw, sector 22624768776 op 0x0:(READ) flags 0x84700 phys_seg 259 prio class 2

     

    I'm connecting to an iSCSI Drobo over a local gigabit connection.  The connection is slow, but when I'm copying large amounts of data (from the iSCSI device), the filesystem eventually falls back to read-only.  I'm not able to resolve the issue without rebooting unRAID.  It'll run fine copying data for 1-3 days and then fail again.
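
    If it helps, this is the kind of check I could run next (hypothetical commands; they assume sdw is still the Drobo LUN, the open-iscsi tools are installed, and the sector number is taken straight from the log above):

    # Inspect the iSCSI session state and counters for the Drobo target.
    iscsiadm --mode session -P 3
    # Try a direct read starting at the sector reported in the kernel log to see
    # whether the medium error reproduces (read-only, but be gentle with a flaky device).
    dd if=/dev/sdw of=/dev/null bs=512 skip=22624768776 count=2048 iflag=direct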

     

    Do you have any suggestions?  Thanks!

  4. SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon]

    Broadcom / LSI, Serial Attached SCSI controller
    Type: Onboard Controller
    Current & Maximum Link Speed: 5GT/s width x8 (4 GB/s max throughput)
    Capabilities: storage pm pciexpress vpd msi msix bus_master cap_list rom

     

    Controller benchmark screenshot (controller-benchmark.png)

  5. I'm curious if I can do anything to increase my performance, especially during parity checks.

     

    Ryzen 2600, 48GB of RAM, running unRAID 6.9.0-beta35

    All drives are 10TB Seagate IronWolfs, except one of the dual parity drives, which is a Western Digital (WD101KFBX-68R56N0).  I'm running 1.5TB of RAID1 SSD cache.  Everything is running BTRFS.  I'm using reconstruct write, and I don't spin down any of my drives; I was never able to get spin-down to work properly, but I'm not opposed to it.

     

    SAS9211-8i 8-port internal 6Gb SATA+SAS PCIe 2.0 HBA (flashed to IT mode with the 2118it firmware; shows up as Serial Attached SCSI controller: Broadcom / LSI SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03))

    HP 468405-002 PCIe SAS expander card (part numbers 468405-001 / 487738-001; reports as "HP SAS EXP Card 2.10")

     

    [0:0:0:0]	disk    SanDisk' Cruzer Fit       1.00  /dev/sda   31.4GB
    [9:0:0:0]	disk    ATA      ST10000VN0004-1Z SC60  /dev/sdb   10.0TB
    [9:0:1:0]	disk    ATA      Samsung SSD 860  2B6Q  /dev/sdc   1.00TB
    [9:0:2:0]	disk    ATA      WDC WDS500G2B0A  90WD  /dev/sdd    500GB
    [9:0:3:0]	disk    ATA      Samsung SSD 860  2B6Q  /dev/sde   1.00TB
    [9:0:4:0]	disk    ATA      SanDisk SDSSDH35 70RL  /dev/sdf    500GB
    [9:0:5:0]	disk    ATA      ST10000VN0004-1Z SC60  /dev/sdg   10.0TB
    [9:0:6:0]	disk    ATA      ST10000VN0004-1Z SC60  /dev/sdh   10.0TB
    [9:0:7:0]	disk    ATA      ST10000VN0008-2J SC60  /dev/sdi   10.0TB
    [9:0:8:0]	disk    ATA      ST10000VN0004-1Z SC60  /dev/sdj   10.0TB
    [9:0:9:0]	disk    ATA      ST10000VN0008-2J SC60  /dev/sdk   10.0TB
    [9:0:10:0]	disk    ATA      WDC WD101KFBX-68 0A03  /dev/sdl   10.0TB

     

    I was running a BUNCH of maintenance scripts, but someone told me that was a waste.  I'm not sure what type of maintenance my drives actually need.

    I was running balances (weekly, I believe) like the ones below, but I'm not doing any balancing on my data drives anymore; I was told it wasn't required.

    # weekly per-disk balance: compact metadata chunks under 50% usage, then data chunks under 90% usage
    /usr/bin/ionice --class idle /usr/bin/nice --adjustment=19 /sbin/btrfs balance start -musage=50 /mnt/disk1 > /dev/shm/disk1.balance.output
    /usr/bin/ionice --class idle /usr/bin/nice --adjustment=19 /sbin/btrfs balance start -dusage=90 /mnt/disk1 >> /dev/shm/disk1.balance.output

    Scrubs run twice a month, 15 days apart, one drive at a time.

    # scrub a single data disk; -B keeps it in the foreground, -c 2 -n 5 lowers the scrub's IO priority
    /usr/bin/ionice --class idle /usr/bin/nice --adjustment=19 /sbin/btrfs scrub start -Bd -c 2 -n 5 /mnt/disk4 > /dev/shm/disk4.scrub.output

    Cache maintenance runs daily and looks like this:

    # daily cache scrub at idle IO priority
    /usr/bin/ionice --class idle /usr/bin/nice /sbin/btrfs scrub start -Bd /mnt/cache > /dev/shm/cache.scrub.output
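
    For completeness, this is how I'd sanity-check the results afterwards (a hypothetical follow-up; it assumes the scrubs above have finished and the output files are still sitting in /dev/shm):

    # Show the most recent scrub status for the cache pool.
    /sbin/btrfs scrub status /mnt/cache
    # Pull the error summary lines out of the captured scrub output files.
    grep -iE "error|unrecoverable" /dev/shm/cache.scrub.output /dev/shm/disk*.scrub.output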

    I'm also running daily SMART short self-tests at 9 AM.

    # kick off a short SMART self-test on each drive, sdb through sdo
    for i in {b..o}; do
        smartctl --test=short /dev/sd$i
    done
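
    The loop above only kicks the tests off; reviewing the results is a separate step, something along these lines (hypothetical, same drive letters assumed):

    # Hypothetical follow-up: check overall health and the self-test log for each drive.
    for i in {b..o}; do
        echo "=== /dev/sd$i ==="
        smartctl -H /dev/sd$i            # overall SMART health assessment
        smartctl -l selftest /dev/sd$i   # results of recent self-tests
    done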

    If my controller and expander are the bottleneck, what is the currently suggested setup?

    Parity Check.PNG
