Posts posted by Opawesome (Members, 276 posts)

  1. Thank you again.

     

    According to https://downloadmirror.intel.com/28749/eng/Intel_SSD_Firmware_Update_Tool_3_0_7_Release_Notes-328292-030US.pdf, I have the latest firmware (002C) for my Intel 660p NVMe drive. Or maybe I did not understand your comment properly?

    root@MOZART:~# smartctl -a /dev/nvme0
    smartctl 7.0 2018-12-30 r4883 [x86_64-linux-4.19.56-Unraid] (local build)
    Copyright (C) 2002-18, Bruce Allen, Christian Franke, www.smartmontools.org
    
    === START OF INFORMATION SECTION ===
    Model Number:                       INTEL SSDPEKNW010T8
    Serial Number:                      BTNH925511J71P0B
    Firmware Version:                   002C
    PCI Vendor/Subsystem ID:            0x8086
    IEEE OUI Identifier:                0x5cd2e4
    Controller ID:                      1
    Number of Namespaces:               1
    Namespace 1 Size/Capacity:          1,024,209,543,168 [1.02 TB]
    Namespace 1 Formatted LBA Size:     512
    Local Time is:                      Mon Dec  9 19:37:39 2019 CET
    Firmware Updates (0x14):            2 Slots, no Reset required
    Optional Admin Commands (0x0017):   Security Format Frmw_DL Self_Test
    Optional NVM Commands (0x005f):     Comp Wr_Unc DS_Mngmt Wr_Zero Sav/Sel_Feat Timestmp
    Maximum Data Transfer Size:         32 Pages
    Warning  Comp. Temp. Threshold:     77 Celsius
    Critical Comp. Temp. Threshold:     80 Celsius
    
    Supported Power States
    St Op     Max   Active     Idle   RL RT WL WT  Ent_Lat  Ex_Lat
     0 +     4.00W       -        -    0  0  0  0        0       0
     1 +     3.00W       -        -    1  1  1  1        0       0
     2 +     2.20W       -        -    2  2  2  2        0       0
     3 -   0.0300W       -        -    3  3  3  3     5000    5000
     4 -   0.0040W       -        -    4  4  4  4     5000    9000
    
    Supported LBA Sizes (NSID 0x1)
    Id Fmt  Data  Metadt  Rel_Perf
     0 +     512       0         0
    
    === START OF SMART DATA SECTION ===
    SMART overall-health self-assessment test result: PASSED
    
    SMART/Health Information (NVMe Log 0x02)
    Critical Warning:                   0x00
    Temperature:                        25 Celsius
    Available Spare:                    100%
    Available Spare Threshold:          10%
    Percentage Used:                    0%
    Data Units Read:                    28,535,988 [14.6 TB]
    Data Units Written:                 18,216,684 [9.32 TB]
    Host Read Commands:                 129,177,201
    Host Write Commands:                85,548,913
    Controller Busy Time:               2,026
    Power Cycles:                       13
    Power On Hours:                     1,731
    Unsafe Shutdowns:                   1
    Media and Data Integrity Errors:    0
    Error Information Log Entries:      0
    Warning  Comp. Temperature Time:    0
    Critical Comp. Temperature Time:    0
    
    Error Information (NVMe Log 0x01, max 256 entries)
    No Errors Logged
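For anyone scripting this kind of firmware check, here is a minimal sketch in Python. The report text is embedded for illustration (in practice you would capture `smartctl -a /dev/nvme0` output, e.g. via `subprocess`), and the "latest" version string is taken from the release-notes PDF linked above; adjust it if Intel publishes a newer build.

```python
# Sketch: extract the firmware version from smartctl's text output and
# compare it against the latest version listed in the release notes.
SMARTCTL_OUTPUT = """\
Model Number:                       INTEL SSDPEKNW010T8
Firmware Version:                   002C
PCI Vendor/Subsystem ID:            0x8086
"""

LATEST_FIRMWARE = "002C"  # per the Intel release notes PDF linked above

def firmware_version(report: str) -> str:
    """Return the value of the 'Firmware Version' line in a smartctl report."""
    for line in report.splitlines():
        if line.startswith("Firmware Version:"):
            return line.split(":", 1)[1].strip()
    raise ValueError("no 'Firmware Version' line found")

version = firmware_version(SMARTCTL_OUTPUT)
print(version, "up to date" if version == LATEST_FIRMWARE else "update available")
```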

     

  2. Hi,

     

    I have encountered a problem and would be very grateful if someone could help me.

     

    I was checking my syslog and saw BTRFS errors on my cache NVMe SSD drive:

    Dec  9 02:10:42 MOZART kernel: BTRFS warning (device nvme0n1p1): csum failed root 5 ino 711847 off 9417928704 csum 0x98f94189 expected csum 0x8d72db16 mirror 1
    Dec  9 02:10:42 MOZART kernel: BTRFS warning (device nvme0n1p1): csum failed root 5 ino 711847 off 9417928704 csum 0x98f94189 expected csum 0x8d72db16 mirror 1
    Dec  9 02:10:42 MOZART kernel: BTRFS warning (device nvme0n1p1): csum failed root 5 ino 711847 off 9418031104 csum 0x98f94189 expected csum 0xf896b432 mirror 1
    Dec  9 02:10:42 MOZART kernel: BTRFS warning (device nvme0n1p1): csum failed root 5 ino 711847 off 9418031104 csum 0x98f94189 expected csum 0xf896b432 mirror 1
    Dec  9 02:10:42 MOZART kernel: BTRFS warning (device nvme0n1p1): csum failed root 5 ino 711847 off 9418162176 csum 0x98f94189 expected csum 0x3ecb4cb8 mirror 1

    Then I searched online and found that running a scrub on the drive could help. Here is the result:

    scrub status for 5a462ed1-615f-4fdc-8c3a-0e1fd40b537c
    	scrub started at Mon Dec  9 17:18:12 2019 and finished after 00:05:05
    	total bytes scrubbed: 443.09GiB with 1024 errors
    	error details: csum=1024
    	corrected errors: 0, uncorrectable errors: 1024, unverified errors: 0

    Now the syslog shows more warnings and errors:

    Dec  9 17:18:57 MOZART kernel: BTRFS warning (device nvme0n1p1): checksum error at logical 1551573680128 on dev /dev/nvme0n1p1, physical 64441253888, root 5, inode 474170, offset 3762552832, length 4096, links 1 (path: downloads.cache/transmission/data_ygg/Les demoiselles de Rochefort (1967).mkv)
    Dec  9 17:18:57 MOZART kernel: BTRFS error (device nvme0n1p1): bdev /dev/nvme0n1p1 errs: wr 0, rd 0, flush 0, corrupt 1025, gen 0
    Dec  9 17:18:57 MOZART kernel: BTRFS error (device nvme0n1p1): unable to fixup (regular) error at logical 1551573680128 on dev /dev/nvme0n1p1
    Dec  9 17:18:57 MOZART kernel: BTRFS warning (device nvme0n1p1): checksum error at logical 1551574106112 on dev /dev/nvme0n1p1, physical 64441679872, root 5, inode 474170, offset 3762978816, length 4096, links 1 (path: downloads.cache/transmission/data_ygg/Les demoiselles de Rochefort (1967).mkv)
    Dec  9 17:18:57 MOZART kernel: BTRFS error (device nvme0n1p1): bdev /dev/nvme0n1p1 errs: wr 0, rd 0, flush 0, corrupt 1026, gen 0
    Dec  9 17:18:57 MOZART kernel: BTRFS error (device nvme0n1p1): unable to fixup (regular) error at logical 1551574106112 on dev /dev/nvme0n1p1
    Dec  9 17:18:57 MOZART kernel: BTRFS warning (device nvme0n1p1): checksum error at logical 1551573843968 on dev /dev/nvme0n1p1, physical 64441417728, root 5, inode 474170, offset 3762716672, length 4096, links 1 (path: downloads.cache/transmission/data_ygg/Les demoiselles de Rochefort (1967).mkv)
    Dec  9 17:18:57 MOZART kernel: BTRFS error (device nvme0n1p1): bdev /dev/nvme0n1p1 errs: wr 0, rd 0, flush 0, corrupt 1027, gen 0
    Dec  9 17:18:57 MOZART kernel: BTRFS warning (device nvme0n1p1): checksum error at logical 1551573975040 on dev /dev/nvme0n1p1, physical 64441548800, root 5, inode 474170, offset 3762847744, length 4096, links 1 (path: downloads.cache/transmission/data_ygg/Les demoiselles de Rochefort (1967).mkv)
    Dec  9 17:18:57 MOZART kernel: BTRFS error (device nvme0n1p1): unable to fixup (regular) error at logical 1551573843968 on dev /dev/nvme0n1p1
    Dec  9 17:18:57 MOZART kernel: BTRFS error (device nvme0n1p1): bdev /dev/nvme0n1p1 errs: wr 0, rd 0, flush 0, corrupt 1028, gen 0
    Dec  9 17:18:57 MOZART kernel: BTRFS error (device nvme0n1p1): unable to fixup (regular) error at logical 1551573975040 on dev /dev/nvme0n1p1
    Dec  9 17:18:57 MOZART kernel: BTRFS warning (device nvme0n1p1): checksum error at logical 1551573684224 on dev /dev/nvme0n1p1, physical 64441257984, root 5, inode 474170, offset 3762556928, length 4096, links 1 (path: downloads.cache/transmission/data_ygg/Les demoiselles de Rochefort (1967).mkv)
    Dec  9 17:18:57 MOZART kernel: BTRFS error (device nvme0n1p1): bdev /dev/nvme0n1p1 errs: wr 0, rd 0, flush 0, corrupt 1029, gen 0
    Dec  9 17:18:57 MOZART kernel: BTRFS error (device nvme0n1p1): unable to fixup (regular) error at logical 1551573684224 on dev /dev/nvme0n1p1
    Dec  9 17:18:57 MOZART kernel: BTRFS warning (device nvme0n1p1): checksum error at logical 1551574110208 on dev /dev/nvme0n1p1, physical 64441683968, root 5, inode 474170, offset 3762982912, length 4096, links 1 (path: downloads.cache/transmission/data_ygg/Les demoiselles de Rochefort (1967).mkv)
    Dec  9 17:18:57 MOZART kernel: BTRFS error (device nvme0n1p1): bdev /dev/nvme0n1p1 errs: wr 0, rd 0, flush 0, corrupt 1030, gen 0
    Dec  9 17:18:57 MOZART kernel: BTRFS error (device nvme0n1p1): unable to fixup (regular) error at logical 1551574110208 on dev /dev/nvme0n1p1
    Dec  9 17:18:57 MOZART kernel: BTRFS warning (device nvme0n1p1): checksum error at logical 1551574237184 on dev /dev/nvme0n1p1, physical 64441810944, root 5, inode 474170, offset 3763109888, length 4096, links 1 (path: downloads.cache/transmission/data_ygg/Les demoiselles de Rochefort (1967).mkv)
    Dec  9 17:18:57 MOZART kernel: BTRFS warning (device nvme0n1p1): checksum error at logical 1551573712896 on dev /dev/nvme0n1p1, physical 64441286656, root 5, inode 474170, offset 3762585600, length 4096, links 1 (path: downloads.cache/transmission/data_ygg/Les demoiselles de Rochefort (1967).mkv)
    Dec  9 17:18:57 MOZART kernel: BTRFS error (device nvme0n1p1): bdev /dev/nvme0n1p1 errs: wr 0, rd 0, flush 0, corrupt 1032, gen 0
    Dec  9 17:18:57 MOZART kernel: BTRFS error (device nvme0n1p1): unable to fixup (regular) error at logical 1551573712896 on dev /dev/nvme0n1p1
    Dec  9 17:18:57 MOZART kernel: BTRFS error (device nvme0n1p1): unable to fixup (regular) error at logical 1551574237184 on dev /dev/nvme0n1p1
    Dec  9 17:18:57 MOZART kernel: BTRFS warning (device nvme0n1p1): checksum error at logical 1551573848064 on dev /dev/nvme0n1p1, physical 64441421824, root 5, inode 474170, offset 3762720768, length 4096, links 1 (path: downloads.cache/transmission/data_ygg/Les demoiselles de Rochefort (1967).mkv)
    Dec  9 17:18:57 MOZART kernel: BTRFS error (device nvme0n1p1): bdev /dev/nvme0n1p1 errs: wr 0, rd 0, flush 0, corrupt 1033, gen 0
    Dec  9 17:18:57 MOZART kernel: BTRFS error (device nvme0n1p1): unable to fixup (regular) error at logical 1551573848064 on dev /dev/nvme0n1p1
    Dec  9 17:18:57 MOZART kernel: BTRFS error (device nvme0n1p1): bdev /dev/nvme0n1p1 errs: wr 0, rd 0, flush 0, corrupt 1034, gen 0
    Dec  9 17:18:57 MOZART kernel: BTRFS error (device nvme0n1p1): bdev /dev/nvme0n1p1 errs: wr 0, rd 0, flush 0, corrupt 1035, gen 0
    Dec  9 17:18:57 MOZART kernel: BTRFS error (device nvme0n1p1): unable to fixup (regular) error at logical 1551573688320 on dev /dev/nvme0n1p1
    Dec  9 17:18:57 MOZART kernel: BTRFS warning (device nvme0n1p1): checksum error at logical 1551573979136 on dev /dev/nvme0n1p1, physical 64441552896, root 5, inode 474170, offset 3762851840, length 4096, links 1 (path: downloads.cache/transmission/data_ygg/Les demoiselles de Rochefort (1967).mkv)

    Further research online suggests that I should reformat the cache drive entirely... and, if the issue persists, replace it.

     

    What are your thoughts?

     

    Many thanks,

    G

     

     

  3. 3 hours ago, binhex said:

    From this I would assume you have set a value incorrectly in the core.conf file, which is causing Deluge to crash on startup. Three options to fix this:

     

    1. Open /config/core.conf, try to find the offending value, and correct it.

    2. Restore the application configuration from a backup.

    3. Rename /config/core.conf to /config/core.conf.old and restart to re-create the config file (core.conf), then reconfigure Deluge.
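Option 3 boils down to a rename. Here is a sketch of the idea using a throwaway directory in place of the real /config mapping (adjust CONFIG_DIR to your container's appdata path before using it for real; Deluge re-creates core.conf on restart):

```shell
CONFIG_DIR="$(mktemp -d)"            # stand-in for the container's /config
touch "$CONFIG_DIR/core.conf"        # pretend this is the broken config
mv "$CONFIG_DIR/core.conf" "$CONFIG_DIR/core.conf.old"
ls "$CONFIG_DIR"                     # only core.conf.old remains
```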

    Hi binhex,

    Solution no. 3 did the trick.

    Many thanks!

    G

  4. Hi all,

     

    Since yesterday, I cannot log in to the Deluge WebGUI anymore. I don't remember doing anything special. I restarted the docker but it did not help.

     

    Here is the log (I don't see any obvious error). Any idea?

     

    Many thanks!

    G

    2019-11-14 01:05:36,353 WARN received SIGTERM indicating exit request
    2019-11-14 01:05:36,354 DEBG killing watchdog-script (pid 144) with signal SIGTERM
    2019-11-14 01:05:36,354 INFO waiting for start-script, watchdog-script to die
    2019-11-14 01:05:37,355 DEBG fd 16 closed, stopped monitoring <POutputDispatcher at 22528258681544 for <Subprocess at 22528259310312 with name watchdog-script in state STOPPING> (stdout)>
    2019-11-14 01:05:37,355 DEBG fd 20 closed, stopped monitoring <POutputDispatcher at 22528258681712 for <Subprocess at 22528259310312 with name watchdog-script in state STOPPING> (stderr)>
    2019-11-14 01:05:37,355 INFO stopped: watchdog-script (terminated by SIGTERM)
    2019-11-14 01:05:37,355 DEBG received SIGCHLD indicating a child quit
    2019-11-14 01:05:37,355 DEBG killing start-script (pid 143) with signal SIGTERM
    2019-11-14 01:05:38,357 DEBG fd 11 closed, stopped monitoring <POutputDispatcher at 22528258681152 for <Subprocess at 22528259215200 with name start-script in state STOPPING> (stdout)>
    2019-11-14 01:05:38,357 DEBG fd 15 closed, stopped monitoring <POutputDispatcher at 22528258681320 for <Subprocess at 22528259215200 with name start-script in state STOPPING> (stderr)>
    2019-11-14 01:05:38,357 INFO stopped: start-script (terminated by SIGTERM)
    2019-11-14 01:05:38,357 DEBG received SIGCHLD indicating a child quit
    Created by...
    ___. .__ .__
    \_ |__ |__| ____ | |__ ____ ___ ___
    | __ \| |/ \| | \_/ __ \\ \/ /
    | \_\ \ | | \ Y \ ___/ > <
    |___ /__|___| /___| /\___ >__/\_ \
    \/ \/ \/ \/ \/
    https://hub.docker.com/u/binhex/
    
    2019-11-14 01:05:44.724363 [info] System information Linux 09c127340e4c 4.19.56-Unraid #1 SMP Tue Jun 25 10:19:34 PDT 2019 x86_64 GNU/Linux
    2019-11-14 01:05:44.751494 [info] PUID defined as '99'
    2019-11-14 01:05:44.787506 [info] PGID defined as '100'
    2019-11-14 01:05:44.846095 [info] UMASK defined as '000'
    2019-11-14 01:05:44.869266 [info] Permissions already set for volume mappings
    2019-11-14 01:05:44.906574 [info] DELUGE_DAEMON_LOG_LEVEL defined as 'info'
    2019-11-14 01:05:44.935647 [info] DELUGE_WEB_LOG_LEVEL defined as 'info'
    2019-11-14 01:05:44.967666 [info] VPN_ENABLED defined as 'yes'
    2019-11-14 01:05:45.007881 [info] OpenVPN config file (ovpn extension) is located at /config/openvpn/TorGuard.port-forwared.IPs.ovpn
    dos2unix: converting file /config/openvpn/port-forwared.IPs.ovpn to Unix format...
    2019-11-14 01:05:45.048223 [info] VPN remote line defined as 'remote [IP REDACTED] 1912'
    2019-11-14 01:05:45.074104 [info] VPN_REMOTE defined as '[IP REDACTED]'
    2019-11-14 01:05:45.101580 [info] VPN_PORT defined as '1912'
    2019-11-14 01:05:45.128008 [info] VPN_PROTOCOL defined as 'udp'
    2019-11-14 01:05:45.158134 [info] VPN_DEVICE_TYPE defined as 'tun0'
    2019-11-14 01:05:45.189768 [info] VPN_PROV defined as 'custom'
    2019-11-14 01:05:45.225091 [info] LAN_NETWORK defined as '192.168.1.0/24'
    2019-11-14 01:05:45.262771 [info] NAME_SERVERS defined as '[IP REDACTED],[IP REDACTED],[IP REDACTED],[IP REDACTED],[IP REDACTED]'
    2019-11-14 01:05:45.303731 [info] VPN_USER defined as 'XXX'
    2019-11-14 01:05:45.331435 [info] VPN_PASS defined as 'XXX'
    2019-11-14 01:05:45.356103 [info] VPN_OPTIONS not defined (via -e VPN_OPTIONS)
    2019-11-14 01:05:45.380372 [info] ENABLE_PRIVOXY defined as 'no'
    2019-11-14 01:05:45.406197 [info] Starting Supervisor...
    2019-11-14 01:05:45,613 INFO Included extra file "/etc/supervisor/conf.d/delugevpn.conf" during parsing
    2019-11-14 01:05:45,613 INFO Set uid to user 0 succeeded
    2019-11-14 01:05:45,616 INFO supervisord started with pid 7
    2019-11-14 01:05:46,618 INFO spawned: 'privoxy-script' with pid 142
    2019-11-14 01:05:46,620 INFO spawned: 'start-script' with pid 143
    2019-11-14 01:05:46,622 INFO spawned: 'watchdog-script' with pid 144
    2019-11-14 01:05:46,622 INFO reaped unknown pid 8
    2019-11-14 01:05:46,629 DEBG fd 8 closed, stopped monitoring <POutputDispatcher at 23094013354672 for <Subprocess at 23094013250136 with name privoxy-script in state STARTING> (stdout)>
    2019-11-14 01:05:46,629 DEBG fd 10 closed, stopped monitoring <POutputDispatcher at 23094012725232 for <Subprocess at 23094013250136 with name privoxy-script in state STARTING> (stderr)>
    2019-11-14 01:05:46,629 INFO success: privoxy-script entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
    2019-11-14 01:05:46,630 INFO success: start-script entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
    2019-11-14 01:05:46,630 INFO success: watchdog-script entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
    2019-11-14 01:05:46,630 INFO exited: privoxy-script (exit status 0; expected)
    2019-11-14 01:05:46,630 DEBG received SIGCHLD indicating a child quit
    2019-11-14 01:05:46,631 DEBG 'start-script' stdout output:
    [info] VPN is enabled, beginning configuration of VPN
    
    2019-11-14 01:05:46,632 DEBG 'watchdog-script' stderr output:
    dos2unix: converting file /config/core.conf to Unix format...
    
    2019-11-14 01:05:46,637 DEBG 'start-script' stdout output:
    [warn] Username contains characters which could cause authentication issues, please consider changing this if possible
    
    2019-11-14 01:05:46,641 DEBG 'start-script' stdout output:
    [warn] Password contains characters which could cause authentication issues, please consider changing this if possible
    
    2019-11-14 01:05:46,700 DEBG 'start-script' stdout output:
    [info] Default route for container is 172.17.0.1
    
    2019-11-14 01:05:46,703 DEBG 'start-script' stdout output:
    [info] Adding [IP REDACTED] to /etc/resolv.conf
    
    2019-11-14 01:05:46,705 DEBG 'start-script' stdout output:
    [info] Adding [IP REDACTED] to /etc/resolv.conf
    
    2019-11-14 01:05:46,708 DEBG 'start-script' stdout output:
    [info] Adding [IP REDACTED] to /etc/resolv.conf
    
    2019-11-14 01:05:46,711 DEBG 'start-script' stdout output:
    [info] Adding [IP REDACTED] to /etc/resolv.conf
    
    2019-11-14 01:05:46,713 DEBG 'start-script' stdout output:
    [info] Adding [IP REDACTED] to /etc/resolv.conf
    
    2019-11-14 01:05:46,716 DEBG 'start-script' stdout output:
    [IP REDACTED]
    
    2019-11-14 01:05:46,755 DEBG 'start-script' stdout output:
    [info] Docker network defined as 172.17.0.0/16
    
    2019-11-14 01:05:46,760 DEBG 'start-script' stdout output:
    [info] Adding 192.168.1.0/24 as route via docker eth0
    
    2019-11-14 01:05:46,761 DEBG 'start-script' stdout output:
    [info] ip route defined as follows...
    --------------------
    
    2019-11-14 01:05:46,762 DEBG 'start-script' stdout output:
    default via 172.17.0.1 dev eth0
    172.17.0.0/16 dev eth0 proto kernel scope link src 172.17.0.2
    192.168.1.0/24 via 172.17.0.1 dev eth0
    
    2019-11-14 01:05:46,762 DEBG 'start-script' stdout output:
    --------------------
    
    2019-11-14 01:05:46,764 DEBG 'start-script' stdout output:
    iptable_mangle 16384 5
    ip_tables 24576 12 iptable_filter,iptable_nat,iptable_mangle
    
    2019-11-14 01:05:46,765 DEBG 'start-script' stdout output:
    [info] iptable_mangle support detected, adding fwmark for tables
    
    2019-11-14 01:05:46,821 DEBG 'start-script' stdout output:
    [info] iptables defined as follows...
    --------------------
    
    2019-11-14 01:05:46,823 DEBG 'start-script' stdout output:
    -P INPUT DROP
    -P FORWARD ACCEPT
    -P OUTPUT DROP
    -A INPUT -i tun0 -j ACCEPT
    -A INPUT -s 172.17.0.0/16 -d 172.17.0.0/16 -j ACCEPT
    -A INPUT -i eth0 -p udp -m udp --sport 1912 -j ACCEPT
    -A INPUT -i eth0 -p tcp -m tcp --dport 8112 -j ACCEPT
    -A INPUT -i eth0 -p tcp -m tcp --sport 8112 -j ACCEPT
    -A INPUT -s 192.168.1.0/24 -i eth0 -p tcp -m tcp --dport 58846 -j ACCEPT
    -A INPUT -p icmp -m icmp --icmp-type 0 -j ACCEPT
    -A INPUT -i lo -j ACCEPT
    -A OUTPUT -o tun0 -j ACCEPT
    -A OUTPUT -s 172.17.0.0/16 -d 172.17.0.0/16 -j ACCEPT
    -A OUTPUT -o eth0 -p udp -m udp --dport 1912 -j ACCEPT
    -A OUTPUT -o eth0 -p tcp -m tcp --dport 8112 -j ACCEPT
    -A OUTPUT -o eth0 -p tcp -m tcp --sport 8112 -j ACCEPT
    -A OUTPUT -d 192.168.1.0/24 -o eth0 -p tcp -m tcp --sport 58846 -j ACCEPT
    -A OUTPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT
    -A OUTPUT -o lo -j ACCEPT
    
    2019-11-14 01:05:46,823 DEBG 'start-script' stdout output:
    --------------------
    
    2019-11-14 01:05:46,824 DEBG 'start-script' stdout output:
    [info] Starting OpenVPN...
    
    2019-11-14 01:05:46,853 DEBG 'start-script' stdout output:
    Thu Nov 14 01:05:46 2019 WARNING: file 'credentials.conf' is group or others accessible
    Thu Nov 14 01:05:46 2019 OpenVPN 2.4.7 [git:makepkg/2b8aec62d5db2c17+] x86_64-pc-linux-gnu [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [PKCS11] [MH/PKTINFO] [AEAD] built on Feb 19 2019
    Thu Nov 14 01:05:46 2019 library versions: OpenSSL 1.1.1b 26 Feb 2019, LZO 2.10
    
    2019-11-14 01:05:46,854 DEBG 'start-script' stdout output:
    Thu Nov 14 01:05:46 2019 NOTE: the current --script-security setting may allow this configuration to call user-defined scripts
    
    2019-11-14 01:05:46,854 DEBG 'start-script' stdout output:
    [info] OpenVPN started
    
    2019-11-14 01:05:46,854 DEBG 'start-script' stdout output:
    Thu Nov 14 01:05:46 2019 Outgoing Control Channel Authentication: Using 256 bit message hash 'SHA256' for HMAC authentication
    Thu Nov 14 01:05:46 2019 Incoming Control Channel Authentication: Using 256 bit message hash 'SHA256' for HMAC authentication
    
    2019-11-14 01:05:46,855 DEBG 'start-script' stdout output:
    Thu Nov 14 01:05:46 2019 TCP/UDP: Preserving recently used remote address: [AF_INET][IP REDACTED]:1912
    Thu Nov 14 01:05:46 2019 Socket Buffers: R=[212992->786432] S=[212992->786432]
    Thu Nov 14 01:05:46 2019 UDP link local: (not bound)
    Thu Nov 14 01:05:46 2019 UDP link remote: [AF_INET][IP REDACTED]:1912
    
    2019-11-14 01:05:46,873 DEBG 'start-script' stdout output:
    Thu Nov 14 01:05:46 2019 TLS: Initial packet from [AF_INET][IP REDACTED]:1912, sid=fd606e88 b4a0a141
    
    2019-11-14 01:05:46,923 DEBG 'start-script' stdout output:
    Thu Nov 14 01:05:46 2019 VERIFY OK: depth=1, CN=TG-VPN-CA
    
    2019-11-14 01:05:46,923 DEBG 'start-script' stdout output:
    Thu Nov 14 01:05:46 2019 VERIFY KU OK
    Thu Nov 14 01:05:46 2019 Validating certificate extended key usage
    Thu Nov 14 01:05:46 2019 ++ Certificate has EKU (str) TLS Web Server Authentication, expects TLS Web Server Authentication
    Thu Nov 14 01:05:46 2019 VERIFY EKU OK
    Thu Nov 14 01:05:46 2019 VERIFY OK: depth=0, CN=server
    
    2019-11-14 01:05:48,641 DEBG 'start-script' stdout output:
    Thu Nov 14 01:05:48 2019 WARNING: 'link-mtu' is used inconsistently, local='link-mtu 1582', remote='link-mtu 1569'
    Thu Nov 14 01:05:48 2019 WARNING: 'tun-mtu' is used inconsistently, local='tun-mtu 1532', remote='tun-mtu 1500'
    Thu Nov 14 01:05:48 2019 WARNING: 'comp-lzo' is present in local config but missing in remote config, local='comp-lzo'
    Thu Nov 14 01:05:48 2019 WARNING: 'cipher' is used inconsistently, local='cipher AES-256-GCM', remote='cipher AES-128-CBC'
    Thu Nov 14 01:05:48 2019 WARNING: 'auth' is used inconsistently, local='auth [null-digest]', remote='auth SHA256'
    Thu Nov 14 01:05:48 2019 WARNING: 'keysize' is used inconsistently, local='keysize 256', remote='keysize 128'
    
    2019-11-14 01:05:48,642 DEBG 'start-script' stdout output:
    Thu Nov 14 01:05:48 2019 Control Channel: TLSv1.2, cipher TLSv1.2 DHE-RSA-AES256-GCM-SHA384, 2048 bit RSA
    Thu Nov 14 01:05:48 2019 [server] Peer Connection Initiated with [AF_INET][IP REDACTED]:1912
    
    2019-11-14 01:05:49,891 DEBG 'start-script' stdout output:
    Thu Nov 14 01:05:49 2019 SENT CONTROL [server]: 'PUSH_REQUEST' (status=1)
    
    2019-11-14 01:05:49,909 DEBG 'start-script' stdout output:
    Thu Nov 14 01:05:49 2019 PUSH: Received control message: 'PUSH_REPLY,redirect-gateway def1,dhcp-option DNS [IP REDACTED],dhcp-option DNS 1.0.0.1,sndbuf 524288,rcvbuf 524288,route [IP REDACTED],topology net30,ping 5,ping-restart 30,compress,ifconfig [IP REDACTED] [IP REDACTED],peer-id 12'
    
    2019-11-14 01:05:49,910 DEBG 'start-script' stdout output:
    Thu Nov 14 01:05:49 2019 OPTIONS IMPORT: timers and/or timeouts modified
    Thu Nov 14 01:05:49 2019 OPTIONS IMPORT: compression parms modified
    Thu Nov 14 01:05:49 2019 OPTIONS IMPORT: --sndbuf/--rcvbuf options modified
    Thu Nov 14 01:05:49 2019 Socket Buffers: R=[786432->1048576] S=[786432->1048576]
    Thu Nov 14 01:05:49 2019 OPTIONS IMPORT: --ifconfig/up options modified
    Thu Nov 14 01:05:49 2019 OPTIONS IMPORT: route options modified
    Thu Nov 14 01:05:49 2019 OPTIONS IMPORT: --ip-win32 and/or --dhcp-option options modified
    Thu Nov 14 01:05:49 2019 OPTIONS IMPORT: peer-id set
    Thu Nov 14 01:05:49 2019 OPTIONS IMPORT: adjusting link_mtu to 1657
    Thu Nov 14 01:05:49 2019 Outgoing Data Channel: Cipher 'AES-256-GCM' initialized with 256 bit key
    Thu Nov 14 01:05:49 2019 Incoming Data Channel: Cipher 'AES-256-GCM' initialized with 256 bit key
    Thu Nov 14 01:05:49 2019 ROUTE_GATEWAY 172.17.0.1/255.255.0.0 IFACE=eth0 HWADDR=XXX
    
    2019-11-14 01:05:49,910 DEBG 'start-script' stdout output:
    Thu Nov 14 01:05:49 2019 TUN/TAP device tun0 opened
    Thu Nov 14 01:05:49 2019 TUN/TAP TX queue length set to 100
    Thu Nov 14 01:05:49 2019 /usr/bin/ip link set dev tun0 up mtu 1500
    
    2019-11-14 01:05:49,911 DEBG 'start-script' stdout output:
    Thu Nov 14 01:05:49 2019 /usr/bin/ip addr add dev tun0 local [IP REDACTED] peer [IP REDACTED]
    
    2019-11-14 01:05:49,912 DEBG 'start-script' stdout output:
    Thu Nov 14 01:05:49 2019 /root/openvpnup.sh tun0 1500 1585 [IP REDACTED] [IP REDACTED] init
    
    2019-11-14 01:05:49,914 DEBG 'start-script' stdout output:
    Thu Nov 14 01:05:49 2019 /usr/bin/ip route add [IP REDACTED]/32 via 172.17.0.1
    
    2019-11-14 01:05:49,917 DEBG 'start-script' stdout output:
    Thu Nov 14 01:05:49 2019 /usr/bin/ip route add 0.0.0.0/1 via [IP REDACTED]
    
    2019-11-14 01:05:49,918 DEBG 'start-script' stdout output:
    Thu Nov 14 01:05:49 2019 /usr/bin/ip route add 128.0.0.0/1 via [IP REDACTED]
    
    2019-11-14 01:05:49,920 DEBG 'start-script' stdout output:
    Thu Nov 14 01:05:49 2019 /usr/bin/ip route add [IP REDACTED]/32 via [IP REDACTED]
    
    2019-11-14 01:05:49,921 DEBG 'start-script' stdout output:
    Thu Nov 14 01:05:49 2019 Initialization Sequence Completed
    
    2019-11-14 01:05:49,966 DEBG 'watchdog-script' stdout output:
    [info] Deluge listening interface IP 0.0.0.0 and VPN provider IP [IP REDACTED] different, marking for reconfigure
    
    2019-11-14 01:05:49,971 DEBG 'watchdog-script' stdout output:
    [info] Deluge not running
    
    2019-11-14 01:05:49,974 DEBG 'watchdog-script' stdout output:
    [info] Deluge Web UI not running
    
    2019-11-14 01:05:50,106 DEBG 'start-script' stdout output:
    [info] Successfully retrieved external IP address [IP REDACTED]
    
    2019-11-14 01:05:50,176 DEBG 'watchdog-script' stdout output:
    [info] Attempting to start Deluge...
    [info] Removing deluge pid file (if it exists)...
    
    2019-11-14 01:05:50,453 DEBG 'watchdog-script' stdout output:
    [info] Deluge listening interface currently defined as 0.0.0.0
    [info] Deluge listening interface will be changed to 0.0.0.0
    [info] Saving changes to Deluge config file /config/core.conf...
    
    2019-11-14 01:05:50,753 DEBG 'watchdog-script' stdout output:
    [info] Deluge process started
    [info] Waiting for Deluge process to start listening on port 58846...
    
    2019-11-14 01:05:51,015 DEBG 'watchdog-script' stderr output:
    Unhandled error in Deferred:
    
    2019-11-14 01:05:51,017 DEBG 'watchdog-script' stderr output:
    
    Traceback (most recent call last):
      File "/usr/lib/python2.7/site-packages/deluge/main.py", line 241, in start_daemon
        Daemon(options, args)
      File "/usr/lib/python2.7/site-packages/deluge/core/daemon.py", line 170, in __init__
        component.start("PreferencesManager")
      File "/usr/lib/python2.7/site-packages/deluge/component.py", line 296, in start
        deferreds.append(self.components[name]._component_start())
      File "/usr/lib/python2.7/site-packages/deluge/component.py", line 124, in _component_start
        d = maybeDeferred(self.start)
    --- <exception caught here> ---
      File "/usr/lib/python2.7/site-packages/twisted/internet/defer.py", line 151, in maybeDeferred
        result = f(*args, **kw)
      File "/usr/lib/python2.7/site-packages/deluge/core/preferencesmanager.py", line 186, in start
        self._on_set_encryption)
      File "/usr/lib/python2.7/site-packages/deluge/config.py", line 319, in register_set_function
        function(key, self.__config[key])
      File "/usr/lib/python2.7/site-packages/deluge/core/preferencesmanager.py", line 362, in _on_set_encryption
        lt.enc_policy(self.config["enc_out_policy"])
    exceptions.OverflowError: can't convert negative value to unsigned
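Editor's note: the traceback fails inside `_on_set_encryption` while converting `config["enc_out_policy"]`, so a negative encryption value in core.conf is a plausible culprit. A quick sanity-check sketch for a core.conf-style JSON body (the sample string below is hypothetical, not taken from a real file; real core.conf files also carry a version header before the JSON):

```python
import json

SAMPLE_CORE_CONF = '{"enc_in_policy": 1, "enc_out_policy": -1, "enc_level": 2}'

def negative_enc_keys(conf_text: str) -> list:
    """Return the enc_* keys whose values are negative integers."""
    conf = json.loads(conf_text)
    return [k for k, v in sorted(conf.items())
            if k.startswith("enc_") and isinstance(v, int) and v < 0]

print(negative_enc_keys(SAMPLE_CORE_CONF))
```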

     

  5. On 10/24/2019 at 6:45 PM, Opawesome said:

    I guess the next step would be to subscribe to a second VPN service, set up two different containers with two different VPN services, and see if the issue persists. I'll keep you posted once I find the time to do so.

    I did that and no longer suffer the disconnection issue. It seems my VPN provider only allows one connection to a given VPN server from a single IP address (or something similar). Setting up a different VPN server for each binhex docker solved my problem.

     

    Thanks again for your guidance @binhex.

  6. 28 minutes ago, binhex said:

    It could be, yes. Very odd how having more than one concurrent connection knocks them all out. I'm assuming you aren't using any VPN apps on mobile devices etc. that could be tipping you over the 8 concurrent connection limit?

    Yes, I am sure I am below the limit. If you don't have any other idea, I guess the next step would be to subscribe to a second VPN service, set up two different containers with two different VPN services, and see if the issue persists. I'll keep you posted once I find the time to do so.

     

    Thank you again for your help @binhex!

    Cheers

    G

  7. 25 minutes ago, binhex said:

    which vpn provider are you using?

    I use TorGuard. I checked and they allow 8 simultaneous connections, so I am well below the limit. Do you think it could be on their side? That would indeed explain the absence of errors reported on the dockers' side.

  8. 33 minutes ago, binhex said:

    Well, you got no errors, so I think that's it; they are all started and you are good to go.

    Haha! Well, no errors indeed, but I still keep failing to get several binhex "VPN dockers" to run simultaneously. If I start the binhex-delugevpn docker alone, everything works fine; but as soon as I start binhex-privoxyvpn or binhex-qbittorrentvpn, all connections made in binhex-delugevpn die instantly. 😉

  9. 4 hours ago, binhex said:

    OK, well, start the other containers as well from the command line until you see an error.
     

    OK. I stopped all dockers and did this:

    Linux 4.19.56-Unraid.
    Last login: Wed Oct 23 23:42:59 +0200 2019 on /dev/pts/1.
    root@MOZART:~# docker restart binhex-delugevpn
    binhex-delugevpn
    root@MOZART:~# docker start binhex-privoxyvpn
    binhex-privoxyvpn
    root@MOZART:~# docker start binhex-qbittorrentvpn
    binhex-qbittorrentvpn
    root@MOZART:~# 

    So now, if I understood correctly, I just wait and see what happens in the console? Or maybe I need to check the Docker logs?

    Thanks again for your help.

    G

  10. On 10/22/2019 at 6:15 PM, binhex said:

    That looks fine to me. Can you just try starting the troublesome container at the command line? This should tell you what's going on:

     

    
    docker start <name of my container>

     

    Hi binhex,

     

    Unfortunately, this does not give a lot of info:

     


    root@MOZART:~# docker start binhex-privoxyvpn
    binhex-privoxyvpn
    root@MOZART:~#

    Any other idea?

     

    For the record :


    I keep failing to get several binhex "vpn dockers" run simultaneously.
     
    For example: if I start the binhex-delugevpn docker alone, everything works fine; but as soon as I start e.g. binhex-privoxyvpn or binhex-qbittorrentvpn, then all connections made in binhex-delugevpn die.
     
    Please note that all of those dockers are set to use different ports, and that privoxy is set to "no" in the binhex-delugevpn and binhex-qbittorrentvpn container configurations.

    Thanks!

    G

  11. 4 hours ago, binhex said:

    Check for port clashes, especially port 8118 (privoxy)

    Hi binhex,

    Thank you for your answer. The thing is that all dockers are set to use different ports (see screenshot below). Am I missing something?
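    A quick way to double-check for a host-port clash is to list every published port and flag any that appear twice. The name/port pairs below are hypothetical sample data just to show the idea; on a live box the same pipeline works on the output of `docker ps --format '{{.Names}} {{.Ports}}'`:

    ```shell
    # Toy illustration of spotting a host-port clash between containers.
    # These sample values are hypothetical, not taken from this thread's setup.
    sample='binhex-delugevpn 8112 58846
    binhex-privoxyvpn 8118
    binhex-qbittorrentvpn 8080 8118'
    # Split into one token per line, keep only the port numbers,
    # and print any port that occurs more than once:
    echo "$sample" | tr ' ' '\n' | grep -E '^[0-9]+$' | sort | uniq -d
    # → prints 8118: that host port is mapped twice, i.e. a clash
    ```

    An empty result means no two containers share a host port.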

    Thanks,

    G

    Capture.thumb.PNG.abe54118ec6487112a27b8c767fe49f9.PNG

  12. Hi all,

     

    I keep failing to get several binhex "vpn dockers" to run simultaneously.

     

    For example: if I start the binhex-delugevpn docker alone, everything works fine; but as soon as I start e.g. binhex-privoxyvpn or binhex-qbittorrentvpn, then all connections made in binhex-delugevpn die.

     

    Please note that all of those dockers are set to use different ports, and that privoxy is set to "no" in the binhex-delugevpn and binhex-qbittorrentvpn container configurations.

     

    Any idea why?

     

    Many thanks!

    G

     

    PS: thank you binhex for your apps!

  13. 21 hours ago, John_M said:

    When you run a SMART self-test the spin-down delay for that disk gets set temporarily to "never" and sometimes I've noticed it doesn't get restored properly afterwards.

    I believe that is exactly what happened to me. I remember disks spinning down fine in the past, but I did run an extended SMART test on all my drives recently. Good to know.

  14. Hi,

     

    I know there are many topics on the subject, but I tried following the advice given in those topics and still cannot get my drives to automatically spin down after a set period of no I/O activity.

     

    Default spin down delay is set to 15 minutes

    Folder Caching is disabled

    File Activity does not show any activity on the drives

    All dockers are stopped

     

    Interestingly, if I spin down a disk manually, it does stay spun down (see e.g. disk 1 in attached screenshot No. 1).

     

    Any ideas? I would greatly appreciate your thoughts.

     

    Many thanks!

    G

    Capture1.thumb.PNG.1fde492437f3dabfec544b6cb9338f88.PNG

     

    Capture2.thumb.PNG.850db16fe7d5b6ddc59660342d2cd04e.PNG

     

    Capture3.thumb.PNG.a2beeb8e9630c41bb9e1c5c127673b7c.PNG

     

  15. On 4/14/2018 at 9:34 PM, Kuusou said:

    currently utilizing the Krusader docker

    The thing is that Krusader does not respect the disk Include and Exclude settings configured in the share settings.

     

    For example: where (i) share1 is set up to include only disk1 and (ii) share2 is set up to include only disk2, then moving a file from share1 (on disk1) to share2 with Krusader will not move that file to disk2.

     

    Obviously, one would expect any such file operation performed within the Unraid WebUI to respect disk inclusions and exclusions.

  16. Just now, Hoopster said:

    Nope, nothing special.  I did the same thing not too long ago with a Dell H310.  unRAID tracks disks by serial number, so, you should  be able to just insert the card and plug in the SATA cables removing them from the MB SATA ports.

     

    Just remember to leave SATA SSDs (if you have them) plugged into the MB SATA ports so they are properly TRIMmed.

    Hi Hoopster.

    Thanks for the prompt, clear and concise answer!

    I wish you a good day/night.

    G
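    For reference, the TRIM that Hoopster mentions can also be run by hand from the console. A minimal sketch; `/mnt/cache` is an assumed Unraid mount point for the SSD, and the guard keeps the snippet harmless on other machines:

    ```shell
    # Hypothetical manual TRIM of a mounted SATA SSD. /mnt/cache is an assumed
    # Unraid cache mount point; adjust to the actual pool path.
    if mountpoint -q /mnt/cache 2>/dev/null; then
      # -v reports how many bytes were trimmed
      fstrim -v /mnt/cache || echo "fstrim failed (needs root and TRIM support)"
    else
      echo "/mnt/cache is not mounted on this machine"
    fi
    ```

    On Unraid this is usually scheduled via the Dynamix SSD TRIM plugin rather than run manually, which is why the SSDs need to stay on the motherboard SATA ports where TRIM commands pass through.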

  17. Just now, jonathanm said:

    Probably doesn't matter, except it may interfere with stopping the array. Personally I'd try stopping the array and see what happens. If it won't stop, look at the process list in the console and kill any dd processes.

    Stopping the array worked fine. Thank you very much for the advice. As per your suggestion, I removed all the drives according to this method instead: Remove_Drives_Then_Rebuild_Parity
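    For reference, jonathanm's suggestion to look for stray dd processes can be done like this (a minimal sketch; `pgrep` is standard procps):

    ```shell
    # List any running dd processes with PID and full command line.
    # -x matches the exact process name "dd" (avoiding e.g. sshd);
    # pgrep exits non-zero when nothing matches, hence the fallback message.
    pgrep -a -x dd || echo "no dd processes running"
    # A stray clearing process can then be stopped with:  kill <PID from above>
    ```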

     

    Now Parity is rebuilding as expected. All numbers and speeds look much more normal to me now. The other method, although supposedly safer, definitely didn't seem (at least to me) to be working the way the Unraid developers intended.

  18. 51 minutes ago, Opawesome said:

     

    
    Note that closing this window will abort the execution of this script

     

    I tried closing the window to abort the execution of the script, but I don't think it worked. I still see many reads and writes on all the disks.

     

    It took more than an hour to clear 7 GB. I am very worried (the drive is 2 TB)...
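    Rough arithmetic bears the worry out. Assuming the observed rate of about 7 GB per hour holds constant over the whole 2 TB (~2000 GB) drive:

    ```shell
    # Integer estimate of total hours at ~7 GB/hour for a ~2000 GB drive:
    echo "$(( 2000 / 7 )) hours"   # → prints "285 hours" (roughly 12 days)
    ```

    So at this rate the clear would take nearly two weeks, which is why switching to the remove-then-rebuild-parity method ends up being the pragmatic choice.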

  19. Hi,

     

    I decided to follow this method to remove a bunch of drives from my array: Clear_Drive_Then_Remove_Drive

     

    It seems, however, that I am getting the lowest speeds I have ever seen while clearing the drive: 6 MB/s (see below). It even seems to keep getting lower and lower.

     

    Any idea why, and how to speed things up? NB: I did change the md_write_method in Disk Settings to "reconstruct write".

     

    Many thanks

    G

     Script location: /tmp/user.scripts/tmpScripts/Clear an unRAID array data drive/script
    Note that closing this window will abort the execution of this script
    *** Clear an unRAID array data drive *** v1.4
    
    Checking all array data drives (may need to spin them up) ...
    
    Found a marked and empty drive to clear: Disk 13 ( /mnt/disk13 )
    * Disk 13 will be unmounted first.
    * Then zeroes will be written to the entire drive.
    * Parity will be preserved throughout.
    * Clearing while updating Parity takes a VERY long time!
    * The progress of the clearing will not be visible until it's done!
    * When complete, Disk 13 will be ready for removal from array.
    * Commands to be executed:
    ***** umount /mnt/disk13
    ***** dd bs=1M if=/dev/zero of=/dev/md13 status=progress
    
    You have 60 seconds to cancel this script (click the red X, top right)
    
    Unmounting Disk 13 ...
    Clearing Disk 13 ...
    5819596800 bytes (5.8 GB, 5.4 GiB) copied, 978 s, 5.9 MB/s

     
