helpermonkey

Everything posted by helpermonkey

  1. oh - whoops - I didn't realize someone else had joined in. I thought you were talking to me ;-). I'm all good now - back up and running - thanks for your help.
  2. Are you sure? After I restarted, the first thing I did was launch Plex. I got an error, so I logged in, went to the Docker page, and saw the dockers were turned off. I went to the settings to turn it back on and wasn't able to. I went to the main page and saw the cache drive was listed in the correct place but with the Unmountable error message. So if it wasn't wiped, what exactly happened? My server was working fine, I hit upgrade, then restart, and it was no longer functioning properly.
  3. Okay, so I've formatted the drive and I see the docker.img file on the cache drive ... can I just hit restore here? Here are my shares as well.
  4. Okay - good to learn. So what would have happened to cause the files to be deleted? After the "upgrade" it says to reboot for it to take effect - I always considered that part of the "upgrade".
  5. Interesting - so before the upgrade, everything worked and was fine; after the upgrade, the filesystem was gone. If I can provide anything that might help the dev team, let me know. Nothing else was done other than the upgrade.
  6. So I'm slightly curious as to how the upgrade deleted the filesystem, but that being said - what's the best way to do that, given that the device is showing as unmountable?
  7. Good question - not really - I just can't ever remember having set it up like that ... while this drive is technically on the newer side of things, I upgraded from the previous one, which I set up years ago, and I honestly don't ever remember BTRFS - but it's not like I was looking keenly at that, and the mind is a mysterious thing. See attached 🙂 buddha-diagnostics-20210303-1030.zip
  8. Yeah - I didn't have it set as BTRFS ... so since I've never restored like this, what's the best guide to follow?
  9. I figured that would be the case - thanks for confirming. Standing by for next steps.
  10. I first noticed the problem because Docker wasn't started - I then couldn't turn it on, and finally I arrived at my main page and saw the cache drive won't mount. I have tried to check the disk filesystems, but the array needs to be started; and when the array is up (in or out of maintenance mode), none of the options are available to me. Here is some of the log for my cache drive:
      Mar 2 10:49:52 Buddha kernel: ata3: SATA max UDMA/133 abar m2048@0xf7e3a000 port 0xf7e3a200 irq 27
      Mar 2 10:49:52 Buddha kernel: ata3: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
      Mar 2 10:49:52 Buddha kernel: ata3.00: supports DRM functions and may not be fully accessible
      Mar 2 10:49:52 Buddha kernel: ata3.00: ATA-11: Samsung SSD 860 EVO 1TB, S5B3NMFNA09237W, RVT04B6Q, max UDMA/133
      Mar 2 10:49:52 Buddha kernel: ata3.00: 1953525168 sectors, multi 1: LBA48 NCQ (depth 32), AA
      Mar 2 10:49:52 Buddha kernel: ata3.00: supports DRM functions and may not be fully accessible
      Mar 2 10:49:52 Buddha kernel: ata3.00: configured for UDMA/133
      Mar 2 10:49:52 Buddha kernel: ata3.00: Enabling discard_zeroes_data
      Mar 2 10:49:52 Buddha kernel: sd 3:0:0:0: [sde] 1953525168 512-byte logical blocks: (1.00 TB/932 GiB)
      Mar 2 10:49:52 Buddha kernel: sd 3:0:0:0: [sde] Write Protect is off
      Mar 2 10:49:52 Buddha kernel: sd 3:0:0:0: [sde] Mode Sense: 00 3a 00 00
      Mar 2 10:49:52 Buddha kernel: sd 3:0:0:0: [sde] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
      Mar 2 10:49:52 Buddha kernel: sde: sde1
      Mar 2 10:49:52 Buddha kernel: ata3.00: Enabling discard_zeroes_data
      Mar 2 10:49:52 Buddha kernel: sd 3:0:0:0: [sde] Attached SCSI disk
      Mar 2 10:50:22 Buddha emhttpd: Samsung_SSD_860_EVO_1TB_S5B3NMFNA09237W (sde) 512 1953525168
      Mar 2 10:50:22 Buddha emhttpd: import 30 cache device: (sde) Samsung_SSD_860_EVO_1TB_S5B3NMFNA09237W
      Mar 2 10:50:22 Buddha emhttpd: read SMART /dev/sde
      Mar 2 10:50:29 Buddha emhttpd: shcmd (67): mount -t btrfs -o noatime,space_cache=v2 /dev/sde1 /mnt/cache
      Mar 2 10:50:29 Buddha root: mount: /mnt/cache: wrong fs type, bad option, bad superblock on /dev/sde1, missing codepage or helper program, or other error.
      Mar 2 12:57:31 Buddha emhttpd: read SMART /dev/sde
      Mar 2 12:59:37 Buddha emhttpd: read SMART /dev/sde
      Mar 2 13:00:05 Buddha emhttpd: shcmd (507): mount -t btrfs -o noatime,space_cache=v2 /dev/sde1 /mnt/cache
      Mar 2 13:00:05 Buddha root: mount: /mnt/cache: wrong fs type, bad option, bad superblock on /dev/sde1, missing codepage or helper program, or other error.
      Mar 2 13:00:30 Buddha emhttpd: read SMART /dev/sde
      Mar 2 13:00:51 Buddha ool www[2874]: /usr/local/emhttp/plugins/dynamix/scripts/btrfs_check 'start' '/dev/sde1' 'Samsung_SSD_860_EVO_1TB_S5B3NMFNA09237W' '--readonly'
      Mar 2 13:02:02 Buddha emhttpd: read SMART /dev/sde
      Mar 2 13:02:50 Buddha emhttpd: shcmd (705): mount -t btrfs -o noatime,space_cache=v2 /dev/sde1 /mnt/cache
      Mar 2 13:02:50 Buddha root: mount: /mnt/cache: wrong fs type, bad option, bad superblock on /dev/sde1, missing codepage or helper program, or other error.
      When I unassign the drive, I am unable to hit mount (but it still appears in the pool drop-down). I do have a backup, but I'm wondering: if I just preclear the drive and start over again, will Unraid rebuild that drive like it does data drives? [A command-line check/recovery sketch for this situation is included after this list.]
  11. I'm getting a fatal error in the log, which you can find here: https://pastebin.com/m1vRwqwC Any suggestions? I haven't changed anything with regard to configuration in a long, long time, and it stopped working within the last few days. I'm running 6.8.3.
  12. Hi, I'm having trouble accessing the GUI - the docker seems to start okay and I'm on Unraid 6.8.3. I'm not entirely sure what's going on but I grabbed this log off the Unraid docker GUI: https://pastebin.com/m1vRwqwC Any suggestions?
  13. Roger that - out of curiosity, when you say "be careful about all the connections," is there something else to do other than re-enabling Docker and the other things that get disabled during the process?
  14. Not currently, no. I'm just going from a 128GB cache to a 1TB cache and adding an 8TB data drive to replace a 2TB data drive.
  15. Hi there, I have purchased a new drive for my cache drive as well as a new data drive. The data drive will be used to replace my smallest data drive. In the past I've replaced a data drive, but never my cache drive, and certainly not two drives "at once". To that end, should I replace the cache drive then the data drive, or the other way around? Similarly, is there a way to replace both concurrently, or do I have to do one at a time? Lastly, I intend to follow this guide: https://wiki.unraid.net/Replace_A_Cache_Drive and watch this video: https://www.youtube.com/watch?v=ij8AOEF1pTU Are these guides still the best to follow? [A manual cache-backup sketch is included after this list.]
  16. Cool - glad it helped contribute and wasn't a complete waste of everyone's time 🙂
  17. @binhex that was it! The password was too long. Odd that it worked for years, but anywho - all good now.
  18. Yeah - it's my Pxxxxxxx but my password is over 99 characters. I'll shorten that up right now and give that a try. I'll be back momentarily.
  19. Yup - I did that already - I entered the password and user ID I use to log in to the website. I've checked it in the container settings and I've checked it in the credentials file that's created in the openvpn directory, and it matches my credentials. I've been set up like that for years. Strange, isn't it? [The expected credentials.conf layout is sketched after this list.]
  20. Here's my log file 🙂 I've removed my user ID and password. supervisord.log
  21. That was a great suggestion! (Sadly, it didn't work.) But I appreciate the help and definitely know that they can sometimes repackage things for those that are logged in versus those that aren't. Strange. FWIW, in case anyone else reads this, here is the error message:
      2020-11-09 13:55:13,017 DEBG 'start-script' stdout output: [info] Starting OpenVPN (non daemonised)...
      2020-11-09 13:55:13,022 DEBG 'start-script' stdout output: 2020-11-09 13:55:13 DEPRECATED OPTION: ncp-disable. Disabling cipher negotiation is a deprecated debug feature that will be removed in OpenVPN 2.6
      2020-11-09 13:55:13,022 DEBG 'start-script' stdout output: 2020-11-09 13:55:13 DEPRECATED OPTION: --cipher set to 'aes-256-gcm' but missing in --data-ciphers (AES-256-GCM:AES-128-GCM). Future OpenVPN version will ignore --cipher for cipher negotiations. Add 'aes-256-gcm' to --data-ciphers or change --cipher 'aes-256-gcm' to --data-ciphers-fallback 'aes-256-gcm' to silence this warning.
      2020-11-09 13:55:13 WARNING: file 'credentials.conf' is group or others accessible
      2020-11-09 13:55:13 OpenVPN 2.5.0 [git:makepkg/a73072d8f780e888+] x86_64-pc-linux-gnu [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [PKCS11] [MH/PKTINFO] [AEAD] built on Oct 27 2020
      2020-11-09 13:55:13 library versions: OpenSSL 1.1.1h 22 Sep 2020, LZO 2.10
      2020-11-09 13:55:13,022 DEBG 'start-script' stdout output: 2020-11-09 13:55:13 NOTE: the current --script-security setting may allow this conf
  22. So I have verified that my username and password are correct. I do see where it says you should only use simple alphanumeric characters; however, I have had the same password for years, so that can't be the problem. Perhaps the logs I posted in this recent thread might help us figure out what's going on:
  23. I have - I've used CA Montreal and Netherlands and keep getting that message. Currently trying CA Montreal, and this is what the top of that file looks like:
      client
      dev tun
      proto udp
      remote ca-montreal.privacy.network 1198
      resolv-retry infinite
      nobind
      persist-key
      cipher aes-256-gcm
      ncp-disable
      auth sha1
      tls-client
      remote-cert-tls server
      auth-user-pass credentials.conf
      compress
      verb 1
      <crl-verify>
      Here's the recent log:
      2020-11-07 23:02:03,600 DEBG 'start-script' stdout output: 2020-11-07 23:02:03 DEPRECATED OPTION: --cipher set to 'aes-256-gcm' but missing in --data-ciphers (AES-256-GCM:AES-128-GCM). Future OpenVPN version will ignore --cipher for cipher negotiations. Add 'aes-256-gcm' to --data-ciphers or change --cipher 'aes-256-gcm' to --data-ciphers-fallback 'aes-256-gcm' to silence this warning.
      2020-11-07 23:02:03,601 DEBG 'start-script' stdout output: 2020-11-07 23:02:03 WARNING: file 'credentials.conf' is group or others accessible
      2020-11-07 23:02:03 OpenVPN 2.5.0 [git:makepkg/a73072d8f780e888+] x86_64-pc-linux-gnu [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [PKCS11] [MH/PKTINFO] [AEAD] built on Oct 27 2020
      2020-11-07 23:02:03 library versions: OpenSSL 1.1.1h 22 Sep 2020, LZO 2.10
      2020-11-07 23:02:03,601 DEBG 'start-script' stdout output: 2020-11-07 23:02:03 NOTE: the current --script-security setting may allow this configuration to call user-defined scripts
      2020-11-07 23:02:03,601 DEBG 'start-script' stdout output: 2020-11-07 23:02:03 CRL: loaded 1 CRLs from file
      -----BEGIN X509 CRL-----
      MIICWDCCAUAwDQYJKoZIhvcNAQENBQAwgegxCzAJBgNVBAYTAlVTMQswCQYDVQQI
      EwJDQTETMBEGA1UEBxMKTG9zQW5nZWxlczEgMB4GA1UEChMXUHJpdmF0ZSBJbnRl
      cm5ldCBBY2Nlc3MxIDAeBgNVBAsTF1ByaXZhdGUgSW50ZXJuZXQgQWNjZXNzMSAw
      HgYDVQQDExdQcml2YXRlIEludGVybmV0IEFjY2VzczEgMB4GA1UEKRMXUHJpdmF0
      ZSBJbnRlcm5ldCBBY2Nlc3MxLzAtBgkqhkiG9w0BCQEWIHNlY3VyZUBwcml2YXRl
      aW50ZXJuZXRhY2Nlc3MuY29tFw0xNjA3MDgxOTAwNDZaFw0zNjA3MDMxOTAwNDZa
      MCYwEQIBARcMMTYwNzA4MTkwMDQ2MBECAQYXDDE2MDcwODE5MDA0NjANBgkqhkiG
      9w0BAQ0FAAOCAQEAQZo9X97ci8EcPYu/uK2HB152OZbeZCINmYyluLDOdcSvg6B5
      jI+ffKN3laDvczsG6CxmY3jNyc79XVpEYUnq4rT3FfveW1+Ralf+Vf38HdpwB8EW
      B4hZlQ205+21CALLvZvR8HcPxC9KEnev1mU46wkTiov0EKc+EdRxkj5yMgv0V2Re
      ze7AP+NQ9ykvDScH4eYCsmufNpIjBLhpLE2cuZZXBLcPhuRzVoU3l7A9lvzG9mjA
      5YijHJGHNjlWFqyrn1CfYS6koa4TGEPngBoAziWRbDGdhEgJABHrpoaFYaL61zqy
      MR6jC0K2ps9qyZAN74LEBedEfK7tBOzWMwr58A==
      -----END X509 CRL-----
      2020-11-07 23:02:03,601 DEBG 'start-script' stdout output: 2020-11-07 23:02:03 TCP/UDP: Preserving recently used remote address: [AF_INET]199.36.223.162:1198
      2020-11-07 23:02:03 UDP link local: (not bound)
      2020-11-07 23:02:03 UDP link remote: [AF_INET]199.36.223.162:1198
      2020-11-07 23:02:03,727 DEBG 'start-script' stdout output: 2020-11-07 23:02:03 [montreal411] Peer Connection Initiated with [AF_INET]199.36.223.162:1198
      2020-11-07 23:02:04,729 WARN received SIGTERM indicating exit request
      2020-11-07 23:02:04,729 DEBG killing watchdog-script (pid 173) with signal SIGTERM
      2020-11-07 23:02:04,729 INFO waiting for start-script, watchdog-script to die
      2020-11-07 23:02:04,746 DEBG fd 11 closed, stopped monitoring <POutputDispatcher at 23342285760160 for <Subprocess at 23342286252352 with name watchdog-script in state STOPPING> (stdout)>
      2020-11-07 23:02:04,746 DEBG fd 15 closed, stopped monitoring <POutputDispatcher at 23342285988768 for <Subprocess at 23342286252352 with name watchdog-script in state STOPPING> (stderr)>
      2020-11-07 23:02:04,746 INFO stopped: watchdog-script (terminated by SIGTERM)
      2020-11-07 23:02:04,747 DEBG received SIGCHLD indicating a child quit
      2020-11-07 23:02:04,747 DEBG killing start-script (pid 172) with signal SIGTERM
      2020-11-07 23:02:04,896 DEBG 'start-script' stdout output: 2020-11-07 23:02:04 AUTH: Received control message: AUTH_FAILED
      Is it possible that it's having a problem because I have 2-factor enabled on PIA? I can't imagine that's it, because that's been in place for a long, long time, but that's the only thing I can think of ... shrug. [A sketch of the modern cipher directives the deprecation warnings refer to is included after this list.]
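
Reference sketch for post 10: one way to inspect an unmountable btrfs cache device from the Unraid console while it is unassigned or the array is stopped. This is a minimal sketch, not the official procedure; it assumes the device is still /dev/sde1 as in that log, and the mount point and recovery folder names are hypothetical.

    # Read-only consistency check; reports problems without modifying the device
    btrfs check --readonly /dev/sde1

    # Try a read-only mount using a backup tree root, at a scratch mount point (hypothetical path)
    mkdir -p /x
    mount -o ro,usebackuproot /dev/sde1 /x

    # If it still won't mount, attempt a file-level recovery onto an array disk (hypothetical target folder)
    mkdir -p /mnt/disk1/cache_restore
    btrfs restore -v /dev/sde1 /mnt/disk1/cache_restore

If the read-only check or restore succeeds, the recovered files can be copied back after the cache device is reformatted; if neither works, restoring from the existing backup (as was done later in this thread) is the remaining option.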
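
Reference sketch for post 15: the linked guide moves cache contents to the array before the swap (via mover or a manual copy). A minimal manual-copy sketch, assuming Docker and VM services are stopped first so nothing holds files open on the cache; the /mnt/disk1/cache_backup path is illustrative, not prescriptive.

    # Copy everything from the cache pool to an array disk before removing the old drive
    mkdir -p /mnt/disk1/cache_backup
    rsync -avh --progress /mnt/cache/ /mnt/disk1/cache_backup/

    # After the new cache drive is installed and formatted, copy the data back
    rsync -avh --progress /mnt/disk1/cache_backup/ /mnt/cache/

Doing the cache swap and the data-drive swap as two separate operations (rather than concurrently) keeps each step recoverable on its own.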
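
Reference sketch for post 19: OpenVPN's auth-user-pass file (the credentials.conf referenced in the config above) expects exactly two lines, the username on the first and the password on the second, with nothing else in the file. A hedged example with placeholder values only; as post 17 notes, an over-long password was the actual cause of the AUTH_FAILED here, so a shorter, simple alphanumeric password is the safer choice.

    p1234567
    MyShorterPassword123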
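
Reference sketch for post 23: the DEPRECATED OPTION warnings in that log come from OpenVPN 2.5's cipher-negotiation changes, and the log itself names the replacement directives. A sketch of the equivalent .ovpn lines, for reference only - these warnings were harmless and were not the cause of the AUTH_FAILED, which turned out to be the password length.

    # Instead of:
    #   cipher aes-256-gcm
    #   ncp-disable
    # the OpenVPN 2.5-era equivalent suggested by the warning is:
    data-ciphers AES-256-GCM
    data-ciphers-fallback aes-256-gcm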