ezzys

Everything posted by ezzys

  1. Did you ever solve this? I am getting the same error after upgrading to 6.12.2.
  2. Any luck with this? I am getting the same issue.
  3. I just upgraded to 6.12.2 and kept getting the following error message, "*ERROR* Unexpected DP dual mode adaptor ID 01 (or 03)", to the extent that my logs were filling up with it. I did a Google search and the other posts with this error all involved ASRock motherboards, so I am not sure whether that is related. I have an ASRock B360M-HDV. I have now reverted back to 6.11.5. Does anyone have any idea what might be causing this? Diagnostics attached. unraid1-diagnostics-20230707-2359.zip
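     In case the frequency is useful to anyone looking at this, this is roughly how I was checking how often the message appears before I rolled back (a minimal sketch, assuming the standard Unraid syslog location at /var/log/syslog):

        # Count how many times the i915 warning has been logged since boot
        grep -c "Unexpected DP dual mode adaptor ID" /var/log/syslog
        # Show the five most recent occurrences with their timestamps
        grep "Unexpected DP dual mode adaptor ID" /var/log/syslog | tail -n 5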
  4. Thanks, changed cables and it appears to be going okay - will monitor and change them if I get any more errors.
  5. An old 2TB drive started showing errors, and they have recurred again after a reboot at the weekend. I have run an extended SMART test and it passed (attached). What does this mean? Is the drive okay or should it be replaced? I have bought another 2TB drive as it was cheap, so I can replace it, but I want to know whether the drive is knackered or whether it could be pre-cleared and used again. unraid1-smart-20230510-1854.zip
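     For reference, these are the attributes I am looking at when deciding whether to trust the drive again (a minimal sketch - /dev/sdX is just a placeholder for whatever device letter the drive gets):

        # Print the SMART attributes and pull out the ones that usually point at a failing disk
        smartctl -A /dev/sdX | grep -E 'Reallocated_Sector|Current_Pending_Sector|Offline_Uncorrectable|UDMA_CRC_Error'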
  6. I have just installed qbittorrentvpn, however I cannot seem to get the WebUI to work. I am using PIA VPN. The logs suggest it is all working. The final message in the log after a restart is:
        2023-03-11 17:08:47,265 DEBG 'watchdog-script' stdout output: [info] qBittorrent process listening on port 8082
     I did change the ports as I had a clash, but made sure I was consistent with what I changed:
        Host Port 1 / Container Port: 6881 = 6881
        Host Port 2 / Container Port: 6881 = 6881
        Host Port 3 / Container Port: 8080 = 8082
        Host Port 4 / Container Port: 8118 = 8128
        Container Variable: WEBUI_PORT = 8082
     After leaving it to run for a while I just see the following in the log:
        2023-03-12 19:38:49,166 DEBG 'start-script' stdout output: [info] Successfully assigned and bound incoming port '54605'
     Any suggestions on what to adjust, or any log information that will help troubleshoot?
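     In case it helps with troubleshooting, this is how I have been testing whether anything is actually answering on the WebUI port (a rough sketch - 192.168.1.10 is a placeholder for my server's LAN IP):

        # From the Unraid host itself, see whether the mapped host port responds at all
        curl -sS -o /dev/null -w 'HTTP %{http_code}\n' http://localhost:8082
        # And the same test from another machine on the LAN
        curl -sS -o /dev/null -w 'HTTP %{http_code}\n' http://192.168.1.10:8082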
  7. Okay thanks - my approach was to use disk 1, which was disabled, for the rebuild. Given it is not showing any errors, does that sound okay? Also, how do I do that? The disabled disk is not showing as an unassigned device, nor does it let me select that disk when the array is started. I assume that is because it is mounted. Do I need to disconnect the disabled drive and reboot, and then re-add the drive on a second reboot?
  8. Thanks both, I ran "check file system" on disabled disk 1 - see below. It looks okay? Should I simply create a new array configuration and rebuild parity?
  9. Hi, Diagnostics attached with the array started. Also attached is data from my syslog server, which was running when the drive failed. unraid1-diagnostics-20230219-0115.zip 2023-02-19.txt
  10. One of my drives just failed on me. It occurred when I tried to start a Windows 10 VM (which didn't start and came up with an error message) - possibly related? I tried to reboot the system, but it locked up and I had to force power it off and restart. A load of error messages came up on the reboot, but it came up with a green message once I got to the GUI. I have also successfully rebooted it since the drive failed, and I think the only error is the failed drive - I did not see the same sort of error messages on the second restart. I ran a short SMART test on the failed drive and no errors came back. Output attached. Should I run an extended SMART test? I am going to address the VM issue separately, as it was only used for testing so nothing is lost. However, I would be grateful for advice on next steps. Should I rebuild the array from the parity, OR should I rebuild the parity on the basis that the drive only shows as disabled because it is out of sync (i.e. create a new array configuration)? unraid1-smart-20230219-0025.zip
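      If an extended test is the right next step, this is what I was planning to run (a rough sketch - /dev/sdX is a placeholder for the disabled drive's device letter):

         # Start the long (extended) self-test; it runs in the background on the drive itself
         smartctl -t long /dev/sdX
         # Check progress and, once it finishes, the result in the self-test log
         smartctl -l selftest /dev/sdX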
  11. Thanks, rebuilt the array from parity and ran the file check. I got the following when I used the -nv flags:
         Phase 1 - find and verify superblock...
         - block cache size set to 1417896 entries
         Phase 2 - using internal log
         - zero log...
         zero_log: head block 1792898 tail block 1792898
         - scan filesystem freespace and inode maps...
         - found root inode chunk
         Phase 3 - for each AG...
         - scan (but don't clear) agi unlinked lists...
         - process known inodes and perform inode discovery...
         - agno = 0 - agno = 1 - agno = 2 - agno = 3 - agno = 4 - agno = 5 - agno = 6 - agno = 7 - agno = 8 - agno = 9
         - agno = 10 - agno = 11 - agno = 12 - agno = 13 - agno = 14 - agno = 15 - agno = 16 - agno = 17 - agno = 18
         - agno = 19 - agno = 20 - agno = 21 - agno = 22 - agno = 23 - agno = 24 - agno = 25 - agno = 26 - agno = 27
         - process newly discovered inodes...
         Phase 4 - check for duplicate blocks...
         - setting up duplicate extent list...
         - check for inodes claiming duplicate blocks...
         - agno = 1 - agno = 2 - agno = 3 - agno = 0 - agno = 4 - agno = 5 - agno = 6 - agno = 7 - agno = 8 - agno = 9
         - agno = 10 - agno = 11 - agno = 12 - agno = 13 - agno = 14 - agno = 15 - agno = 16 - agno = 17 - agno = 18
         - agno = 19 - agno = 20 - agno = 21 - agno = 22 - agno = 23 - agno = 24 - agno = 25 - agno = 26 - agno = 27
         No modify flag set, skipping phase 5
         Phase 6 - check inode connectivity...
         - traversing filesystem ...
         - agno = 0 - agno = 1 - agno = 2 - agno = 3 - agno = 4 - agno = 5 - agno = 6 - agno = 7 - agno = 8 - agno = 9
         - agno = 10 - agno = 11 - agno = 12 - agno = 13 - agno = 14 - agno = 15 - agno = 16 - agno = 17 - agno = 18
         - agno = 19 - agno = 20 - agno = 21 - agno = 22 - agno = 23 - agno = 24 - agno = 25 - agno = 26 - agno = 27
         - traversal finished ...
         - moving disconnected inodes to lost+found ...
         Phase 7 - verify link counts...
         No modify flag set, skipping filesystem flush and exiting.
         XFS_REPAIR Summary    Wed Jan 25 18:14:31 2023
         Phase           Start           End             Duration
         Phase 1:        01/25 18:13:38  01/25 18:13:39  1 second
         Phase 2:        01/25 18:13:39  01/25 18:13:39
         Phase 3:        01/25 18:13:39  01/25 18:14:16  37 seconds
         Phase 4:        01/25 18:14:16  01/25 18:14:16
         Phase 5:        Skipped
         Phase 6:        01/25 18:14:16  01/25 18:14:31  15 seconds
         Phase 7:        01/25 18:14:31  01/25 18:14:31
         Total run time: 53 seconds
      My understanding is that the above is the output from a dry run. Do I need to re-run with just the -v flag, or should I do something else? I cannot see any errors or warnings. Thanks
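      For my own notes, the two invocations as I understand them (a sketch only - /dev/md1p1 is a placeholder for whichever md device corresponds to disk 1, with the array started in maintenance mode):

         # Dry run: -n reports problems but changes nothing, -v just makes the output verbose
         xfs_repair -nv /dev/md1p1
         # Actual repair: drop -n so xfs_repair is allowed to write its fixes to the disk
         xfs_repair -v /dev/md1p1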
  12. I recently had a drive fail on me. However, as I was trying to locate the right drive, I managed to drop and break another drive. Luckily (perhaps), this drive was the parity drive. So at that point I had two failed drives, and because I don't have dual parity I could not rebuild the array. I managed to copy the data off the original drive that failed onto a spare using Windows Explorer (one drive was plugged into my second Unraid machine and I did a network copy onto an NTFS-formatted drive in Windows).
      After all this I set up a new array config. However, the original drive that failed was being shown, so I added that back in, put a new drive in, and set off the parity sync. The parity sync completed, however during the sync I got errors saying "current pending sector is 64" on the old drive. The old drive has failed on me again today (after a couple of days with no issues), so I have realised I need to replace it.
      My thoughts were: create a new config with my existing drives (minus the one that has failed), add in a new one (to replace the failed one), re-add the parity drive and set off a sync. I believe this should leave me with the data on all my drives minus the one that failed. I will then manually copy all the data from the old drive that failed back onto the array. I can copy the data either from the failed drive (if it can still be read) or from the drive I copied the data across to when it originally failed.
      Now my questions: Is the parity drive I recently created any good? Or does the "current pending sector is 64" mean that data is likely to be corrupt? My thoughts are yes, and it is probably not a good idea to rely on it for a rebuild - hence the new config. What tools should I use to get data off my old failed drive (see the sketch below)? I previously used Windows Explorer to do this, as I wanted to get as much data off as quickly as I could. It seemed to work, but I am not sure whether any errors or issues would have stopped it. Should I have used another tool? I am happy to try another approach. Cheers
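      The sort of thing I am considering instead of Windows Explorer, in case it makes a difference (a rough sketch only - device names and mount points are placeholders, and ddrescue/rsync would need to be available on the machine doing the copy):

         # Block-level rescue of a failing disk: copies what it can, retries bad areas, and
         # records unreadable sectors in the map file instead of silently skipping them
         ddrescue -d -r3 /dev/sdX /dev/sdY rescue.map
         # Or a file-level copy that reports errors as it goes, if the filesystem still mounts
         rsync -avP /mnt/failed_disk/ /mnt/spare_disk/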
  13. Sorry for the slow response on this - I reverted back to 6.10.3 over the Xmas period, but am now looking to try and fix the issue. I am back on 6.11.5, and attached is the syslog from the most recent shutdown, approx. 5 minutes after boot (note that the problem only occurs when the array is started). 2023-01-13.txt
  14. I recently upgraded from 6.10.3 to 6.11.3 and experienced shutdowns approx. 5 minutes after boot. I don't think it crashed, as the GUI log said the system was shutting down. As a result I reverted back to 6.10.3. However, I have just tried upgrading to 6.11.5 (in the hope the issue might have been solved) and I am experiencing the same issue. At first it seemed okay, but it shut down at midnight, approx. 6 hours after a reboot to upgrade. Since then it will not stay on for more than 5 minutes without doing what looks like a controlled shutdown - it does not trigger any parity checks etc. Any suggestions on next steps? unraid1-diagnostics-20221122-1947.zip
  15. Hi all, I have set up WireGuard and I can access the internet and the local LAN from my Android phone when out and about. However, when I try to use the WireGuard config with Ubuntu, I can only access the internet, not the LAN. This means that I cannot access my Unraid server (the main purpose of running WireGuard). I have pretty much used all the standard/default settings and I am tunnelling all traffic, with 0.0.0.0/0 in my config file. Does anyone have any suggestions?
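      For what it's worth, this is how I am bringing the tunnel up on Ubuntu and checking where LAN traffic gets routed (a rough sketch - wg0.conf is the peer config exported from Unraid, and 192.168.1.10 is a placeholder for the server's LAN address):

         # Bring the tunnel up from the exported config file
         sudo wg-quick up ./wg0.conf
         # Confirm the peer is up and that allowed ips really is 0.0.0.0/0
         sudo wg show
         # Check which interface Ubuntu would use to reach a LAN address - it should be wg0
         ip route get 192.168.1.10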
  16. I had Authelia set up and running with LDAP (FreeIPA). However, after having my server down for the last few weeks due to a house move, it won't start. I get the error: level=error msg="invalid configuration key 'authentication_backend.ldap.skip_verify' was replaced by 'authentication_backend.ldap.tls.skip_verify'" Any suggestions on how to resolve this?
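      Reading the error again, it looks like the option just needs moving under a tls block in configuration.yml - is this the right shape (a sketch only, based purely on the key names in the error message; the rest of my ldap section is unchanged)?

         # Old (no longer accepted):
         # authentication_backend:
         #   ldap:
         #     skip_verify: true
         #
         # New:
         authentication_backend:
           ldap:
             tls:
               skip_verify: true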
  17. I am trying to set up the email server in Nextcloud, but I get the following error: A problem occurred while sending the email. Please revise your settings. (Error: Expected response code 250 but got code "501", with message "501 Syntactically invalid HELO argument(s) ") I have double-checked that the email server settings are correct and work (I also use them with another docker and they work there). Any suggestions?
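      In case it helps to narrow it down, this is how I was planning to reproduce the HELO exchange by hand to see what the mail server will and won't accept (a rough sketch - mail.example.com:587 is a placeholder for the host/port in my Nextcloud mail settings):

         # Open a STARTTLS SMTP session and type the EHLO/HELO line manually
         openssl s_client -starttls smtp -connect mail.example.com:587 -crlf -quiet
         # then try e.g.:  EHLO mydomain.com
         # If a plain hostname is accepted here but Nextcloud still gets a 501, the problem
         # is likely the hostname the container presents in its HELO, not the credentials.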
  18. It was not connecting to the VNC. That was the error in the console. I just had to disable the uBlock Origin add-on in Firefox for the CloudBerry web page and the error with the connection to the VNC disappeared.
  19. Just tried a private browser window and it works. Looking at the browser tools, it was blocking / causing issues with the connection to the websocket. Looks like uBlock Origin was the cause. Disabled it for the CrashPlan page and it now works fine.
  20. Hi, I am having trouble using a reverse proxy to access CrashPlan (note I am using Nginx Proxy Manager to manage this). When accessing CrashPlan via the reverse proxy I get a red cross and error 1006 server disconnect. I got around this on another docker (Cloudberry) by using HTTPS and setting a login on the GUI. I have tried enabling "Secure Connection:" in the docker template, but this did not work. Any suggestions appreciated. The conf file is below.
         server {
           set $forward_scheme http;
           set $server "[local ip of server]";
           set $port 7810;

           listen 8080;
           listen [::]:8080;

           listen 4443 ssl http2;
           listen [::]:4443;

           server_name crashplan.mydomain.com;

           # Let's Encrypt SSL
           include conf.d/include/letsencrypt-acme-challenge.conf;
           include conf.d/include/ssl-ciphers.conf;
           ssl_certificate /etc/letsencrypt/live/npm-14/fullchain.pem;
           ssl_certificate_key /etc/letsencrypt/live/npm-14/privkey.pem;

           # Block Exploits
           include conf.d/include/block-exploits.conf;

           access_log /config/log/proxy_host-10.log proxy;

           location / {
             # Force SSL
             include conf.d/include/force-ssl.conf;

             proxy_set_header Upgrade $http_upgrade;
             proxy_set_header Connection $http_connection;
             proxy_http_version 1.1;

             # Proxy!
             include conf.d/include/proxy.conf;
           }

           # Custom
           include /data/nginx/custom/server_proxy[.]conf;
         }
  21. I have been playing around with the settings and I have managed to get it to work. Previously I was using HTTP and port 7802. However, I have just changed it to HTTPS and port 43211 and it comes up with a login screen. When I entered the login details I set in the docker template, the reverse proxy worked. Although the GUI looks different!