andr0id

Members · 12 posts

  1. Oh, it seems the problem is solved. I can start Unraid and the array, and no errors appear in the system log either. Thank you very much! One last question: is the way I configured the cache with two drives (Cache and Cache2, see screenshot above) the right way to prevent data loss if one of the two cache SSDs dies?
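     For reference, this is how I understand the redundancy can be verified from the console (a sketch, assuming the pool is btrfs and mounted at /mnt/cache, the Unraid default):

        # Show the allocation profiles of the pool; "RAID1" for Data and
        # Metadata means every block is mirrored across both SSDs.
        btrfs filesystem df /mnt/cache

        # If the profiles say "single", the pool is not redundant. A balance
        # can convert it (only with both devices present and healthy):
        btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache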
  2. I ran chkdsk with /f, /r and /x on the flash drive. It found and repaired some sector errors in ./git; nothing else was found. After booting in normal mode I have the corrupted directories again, but in safe mode I don't see any errors like the ones in normal mode. I was also able to copy all files from the drive to my Mac and my Windows PC without any warnings. I could now format the USB stick and copy the data back to it, but I am not sure whether the USB stick is the cause of the error.
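     Before wiping the stick I can at least confirm the copy is intact (a sketch; the paths are placeholders for wherever the stick and the backup are mounted):

        # -r recurse, -n dry run, -c compare by checksum: any file that
        # rsync lists differs between the backup and the stick.
        rsync -rnc ~/flash-backup/ /mnt/usb/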
  3. I followed the instructions and I can now start the array in safe mode. Unfortunately, this only works in safe mode: when I boot normally, the boot process hangs again with the message "Logger: send Message failed: Bad file descriptor". As I write this, a new message has actually appeared and I can use the terminal. A look at /boot is scary.
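     This is what I ran from the terminal to look at the flash (the device name in the grep is an assumption; the flash is not sda on every system):

        # List the flash contents and check the kernel log for I/O errors
        ls -la /boot
        dmesg | grep -iE 'i/o error|fat|sda' | tail -n 20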
  4. Hey, this is the output:

        Label: none  uuid: def06ab6-5d13-4d32-8a61-08a2ab429729
            Total devices 1 FS bytes used 488.04GiB
            devid    2 size 931.51GiB used 829.06GiB path /dev/nvme0n1p1
  5. Hello community, yesterday my NAS suddenly froze during use: no web UI, no share access, no SSH access. Because nothing else worked, I restarted the PC manually. After the restart I only got the message "cache - too many missing/wrong devices" in the UI when starting the array. I then tried another reboot, after which all disks were suddenly gone. After yet another reboot nothing worked at all; neither the UI nor an SSH connection was possible.

     Today I connected the PC to a screen to see what happens. The boot sequence shows some disturbing errors and the output ends with "Logger: send Message failed: Bad file descriptor". I can then log in via SSH, but the system does not respond to any input.

     When I start in safe mode I get a UI again. At first there is nothing to see: no missing disks or other notifications. But when I try to start the array I get the error "cache - too many missing/wrong devices" again. The array cannot be started in maintenance mode either.

     I should mention that I installed two cache SSDs, which should provide redundancy. One of the two SSDs apparently even broke a few weeks ago, as it is no longer listed; changing the slots did not help either. The NAS has therefore been running on "cache 2" alone lately. Since I suspected that for some reason the missing cache slot has to be populated again before the NAS can start, I connected an old SSD and wanted to restore "cache", but when I try to start the array it still stops with the same error message.

     Since I would probably create more problems if I tried anything now, I would like to ask for help. I have no idea what the logs are supposed to tell me. Does anyone have any ideas?

     nas-diagnostics-20230822-2225.zip syslog-6.txt IMG_9691.HEIC
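     In case it matters, this is what I gather I could run from the console to inspect the surviving pool member (a sketch, assuming the pool is btrfs; I have not run anything yet, and the device path is only an example):

        # See which pool members btrfs can still find
        btrfs filesystem show

        # A remaining member can often be mounted read-only and degraded
        # to get at the data (adjust the path to the surviving SSD)
        mkdir -p /x
        mount -o ro,degraded /dev/nvme0n1p1 /x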
  6. Thank you. I just replaced the cables and am having the same issues. I will continue to test my switch and network card. Is there any way to narrow this down besides using another switch and just waiting to see if something happens?
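     One thing I can do in the meantime is measure the link instead of just waiting (a sketch; the IP address and interface name are placeholders):

        # On the NAS:
        iperf3 -s

        # On a client PC, run a sustained 60-second throughput test:
        iperf3 -c 192.168.1.10 -t 60

        # While it runs, watch the NIC error counters on the NAS:
        cat /sys/class/net/eth0/statistics/rx_errors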
  7. Hello, my Unraid server is currently not running well. I have frequent network disconnects and performance issues. "Fix Common Problems" now recommends recording and checking the logs, but I have no idea what the log is telling me, nor do I understand where my network problems are coming from. Right now I can't even watch a movie from the NAS and copy data at the same time without the movie stuttering. The NAS is connected to a 2.5GBit switch via a 5GBit network card and the 2.5GBit onboard network port, so the network speed should be sufficient. I plan to use both in a bond in active-backup mode. I have used the diagnostic tool to download information about my system. Can someone please help me understand what is going on? nas-diagnostics-20220821-2208.zip
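     Once the bond is set up, its state can be checked from the console (a sketch; bond0 is the usual bond name and eth0/eth1 are examples for the member NICs):

        # Shows the bonding mode, the currently active slave, and the
        # link-failure counters for each member NIC
        cat /proc/net/bonding/bond0

        # Verify the negotiated speed and link state of each member
        ethtool eth0 | grep -E 'Speed|Link detected'
        ethtool eth1 | grep -E 'Speed|Link detected'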
  8. You can change the URL in the Docker container config via Extra Parameters (advanced view):

        --env GITLAB_OMNIBUS_CONFIG="external_url '<YOUR URL>'"

     (example URL: http://192.168.1.22:1234; the default is http://unraid:9080). This sets the URL for your GitLab server, and the app will use it, including as the clone URL. You could also set your local DNS to map "unraid" to your Unraid IP address. I use pihole to block ads and to set local DNS entries.
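     For instance, to serve GitLab directly on the IP from the example above while keeping the default port, the whole parameter would be:

        --env GITLAB_OMNIBUS_CONFIG="external_url 'http://192.168.1.22:9080'"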
  9. Hey, I am using an Asus Prime A320M-A motherboard with an AMD Athlon 3000G. If I auto-detect a driver for System Temp, the k10temp driver is selected. With that driver I only get two temperatures and nothing else: I can only select k10temp - Tdie or k10temp - CPU Temp, and I think both are CPU temperatures. Can I select a different driver or configure something in the BIOS/UEFI to get more values?
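     From a console, more sensor chips can sometimes be found manually (a sketch; nct6775 is an assumption here, since many Asus boards use a Nuvoton Super I/O chip handled by that driver, but this board may differ):

        # Probe for sensor chips and answer the prompts
        sensors-detect

        # If a Nuvoton chip is reported, load its driver and re-read
        modprobe nct6775
        sensors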
  10. My container just got updated by CA Auto Update. Where can I find a changelog?
  11. Just use the sample for Gitea and change it a bit:

        server {
            listen 443 ssl;
            listen [::]:443 ssl;

            server_name gitlab.*;

            include /config/nginx/ssl.conf;

            client_max_body_size 0;

            # enable for ldap auth, fill in ldap details in ldap.conf
            #include /config/nginx/ldap.conf;

            # enable for Authelia
            #include /config/nginx/authelia-server.conf;

            location / {
                # enable the next two lines for http auth
                #auth_basic "Restricted";
                #auth_basic_user_file /config/nginx/.htpasswd;

                # enable the next two lines for ldap auth
                #auth_request /auth;
                #error_page 401 =200 /ldaplogin;

                # enable for Authelia
                #include /config/nginx/authelia-location.conf;

                include /config/nginx/proxy.conf;
                resolver 127.0.0.11 valid=30s;
                set $upstream_app GitLab-CE;
                set $upstream_port 9080;
                set $upstream_proto http;
                proxy_pass $upstream_proto://$upstream_app:$upstream_port;
            }
        }

      Don't forget to put them all in the same virtual Docker network. nginx will look for a container with the name you set in $upstream_app.
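      If the containers aren't already on a shared custom network, a minimal sketch from the command line (the network name is an example, and the container names must match yours; in the Unraid UI this is the same as selecting the same custom network type for both containers):

        # Create a user-defined bridge and attach both containers
        docker network create proxynet
        docker network connect proxynet GitLab-CE
        docker network connect proxynet letsencrypt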
  12. Hello, I have a problem with my GitLab container. After starting it, it runs great, but after roughly one day I only get a "502 Whoops, GitLab is taking too much time to respond." message. If I look into my logs, it seems that the Redis database is not available anymore. I can restart the container, but after one day it is the same again. What is going wrong?

      My container settings:

      Extra Parameters: --env GITLAB_OMNIBUS_CONFIG="external_url 'https://myurl.com/'; gitlab_rails['lfs_enabled'] = true;"
      Network Type: custom network, shared with a LetsEncrypt container
      Ports: HTTP 5090, HTTPS 5443
      Application Data Storage Path: /mnt/user/git/git files/ <-- share with cache enabled (Use cache: yes)
      Config Storage Path: default path (/mnt/cache/appdata/gitlab-ce)
      Custom Path:
      - Container Path: /var/opt/gitlab/gitlab-rails/shared/lfs-objects
      - Host Path: /mnt/user/git/git lfs files/ <-- share with cache enabled (Use cache: yes)

      gitlab.rb changes, to stop the integrated nginx from trying to reach Let's Encrypt itself and to make it talk to my LetsEncrypt container via http instead:

      nginx['listen_port'] = 9080
      nginx['listen_https'] = false

      The GitLab container log:

      ==> /var/log/gitlab/sidekiq/current <==
      {"severity":"ERROR","time":"2020-08-25T19:40:21.824Z","message":"Heartbeat thread error: Error connecting to Redis on /var/opt/gitlab/redis/redis.socket (Errno::ECONNREFUSED)"}
      {"severity":"ERROR","time":"2020-08-25T19:40:22.825Z","message":"Heartbeat thread error: Error connecting to Redis on /var/opt/gitlab/redis/redis.socket (Errno::ECONNREFUSED)"}
      {"severity":"ERROR","time":"2020-08-25T19:40:23.443Z","message":"Error connecting to Redis on /var/opt/gitlab/redis/redis.socket (Errno::ECONNREFUSED)"}
      {"severity":"ERROR","time":"2020-08-25T19:40:23.443Z","message":"/opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/redis-4.1.3/lib/redis/client.rb:362:in `rescue in establish_connection'"}
      {"severity":"WARN","time":"2020-08-25T19:40:23.444Z","error_class":"Redis::CannotConnectError","error_message":"Error connecting to Redis on /var/opt/gitlab/redis/redis.socket (Errno::ECONNREFUSED)","error_backtrace":["config/initializers/zz_metrics.rb:198:in `connect'","lib/gitlab/instrumentation/redis_interceptor.rb:15:in `call'"],"retry":0}
      {"severity":"ERROR","time":"2020-08-25T19:40:23.445Z","message":"Error connecting to Redis on /var/opt/gitlab/redis/redis.socket (Errno::ECONNREFUSED)"}
      {"severity":"WARN","time":"2020-08-25T19:40:23.445Z","error_class":"Redis::CannotConnectError","error_message":"Error connecting to Redis on /var/opt/gitlab/redis/redis.socket (Errno::ECONNREFUSED)","error_backtrace":["config/initializers/zz_metrics.rb:198:in `connect'","lib/gitlab/instrumentation/redis_interceptor.rb:15:in `call'"],"retry":0}
      {"severity":"ERROR","time":"2020-08-25T19:40:23.826Z","message":"Heartbeat thread error: Error connecting to Redis on /var/opt/gitlab/redis/redis.socket (Errno::ECONNREFUSED)"}
      {"severity":"ERROR","time":"2020-08-25T19:40:24.828Z","message":"Heartbeat thread error: Error connecting to Redis on /var/opt/gitlab/redis/redis.socket (Errno::ECONNREFUSED)"}

      ==> /var/log/gitlab/redis-exporter/current <==
      2020-08-25_19:40:24.88917 time="2020-08-25T19:40:24Z" level=error msg="Couldn't connect to redis instance"

      ==> /var/log/gitlab/sidekiq/current <==
      {"severity":"ERROR","time":"2020-08-25T19:40:25.014Z","message":"heartbeat: Error connecting to Redis on /var/opt/gitlab/redis/redis.socket (Errno::ECONNREFUSED)"}
      {"severity":"ERROR","time":"2020-08-25T19:40:26.468Z","message":"Error connecting to Redis on /var/opt/gitlab/redis/redis.socket (Errno::ECONNREFUSED)"}
      {"severity":"WARN","time":"2020-08-25T19:40:26.469Z","error_class":"Redis::CannotConnectError","error_message":"Error connecting to Redis on /var/opt/gitlab/redis/redis.socket (Errno::ECONNREFUSED)","error_backtrace":["config/initializers/zz_metrics.rb:198:in `connect'","lib/gitlab/instrumentation/redis_interceptor.rb:15:in `call'"],"retry":0}
      {"severity":"ERROR","time":"2020-08-25T19:40:26.470Z","message":"Error connecting to Redis on /var/opt/gitlab/redis/redis.socket (Errno::ECONNREFUSED)"}
      {"severity":"WARN","time":"2020-08-25T19:40:26.470Z","error_class":"Redis::CannotConnectError","error_message":"Error connecting to Redis on /var/opt/gitlab/redis/redis.socket (Errno::ECONNREFUSED)","error_backtrace":["config/initializers/zz_metrics.rb:198:in `connect'","lib/gitlab/instrumentation/redis_interceptor.rb:15:in `call'"],"retry":0}
      {"severity":"ERROR","time":"2020-08-25T19:40:26.830Z","message":"Heartbeat thread error: Error connecting to Redis on /var/opt/gitlab/redis/redis.socket (Errno::ECONNREFUSED)"}
      {"severity":"ERROR","time":"2020-08-25T19:40:27.831Z","message":"Heartbeat thread error: Error connecting to Redis on /var/opt/gitlab/redis/redis.socket (Errno::ECONNREFUSED)"}
      {"severity":"ERROR","time":"2020-08-25T19:40:28.446Z","message":"Error connecting to Redis on /var/opt/gitlab/redis/redis.socket (Errno::ECONNREFUSED)"}
      {"severity":"ERROR","time":"2020-08-25T19:40:28.447Z","message":"/opt/gitlab/embedded/lib/ruby/gems/2.6.0/gems/redis-4.1.3/lib/redis/client.rb:362:in `rescue in establish_connection'"}
      {"severity":"WARN","time":"2020-08-25T19:40:28.447Z","error_class":"Redis::CannotConnectError","error_message":"Error connecting to Redis on /var/opt/gitlab/redis/redis.socket (Errno::ECONNREFUSED)","error_backtrace":["config/initializers/zz_metrics.rb:198:in `connect'","lib/gitlab/instrumentation/redis_interceptor.rb:15:in `call'"],"retry":0}
      {"severity":"ERROR","time":"2020-08-25T19:40:28.448Z","message":"Error connecting to Redis on /var/opt/gitlab/redis/redis.socket (Errno::ECONNREFUSED)"}
      {"severity":"WARN","time":"2020-08-25T19:40:28.449Z","error_class":"Redis::CannotConnectError","error_message":"Error connecting to Redis on /var/opt/gitlab/redis/redis.socket (Errno::ECONNREFUSED)","error_backtrace":["config/initializers/zz_metrics.rb:198:in `connect'","lib/gitlab/instrumentation/redis_interceptor.rb:15:in `call'"],"retry":0}

      ==> /var/log/gitlab/puma/puma_stdout.log <==
      {"timestamp":"2020-08-25T19:40:28.795Z","pid":322,"message":"PumaWorkerKiller: Consuming 3111.04296875 mb with master and 4 workers."}

      ==> /var/log/gitlab/sidekiq/current <==
      {"severity":"ERROR","time":"2020-08-25T19:40:28.832Z","message":"Heartbeat thread error: Error connecting to Redis on /var/opt/gitlab/redis/redis.socket (Errno::ECONNREFUSED)"}
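      If it helps with debugging, this is what I plan to run the next time it happens, to check the bundled Redis inside the container (assuming the omnibus gitlab-ctl tool is available there; GitLab-CE is my container name):

        # Is the bundled Redis still running?
        docker exec GitLab-CE gitlab-ctl status redis

        # Restart only Redis and follow its log to see why it stopped
        docker exec GitLab-CE gitlab-ctl restart redis
        docker exec GitLab-CE gitlab-ctl tail redis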