5252525111

  1. Change the setting under 'Settings > Global Share Settings': look for 'Tunable (support Hard Links):' and set it to No. (A quick way to confirm it took effect is sketched below.)
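     A minimal check, assuming a share named test (hypothetical): with hard-link support disabled, the link attempt on /mnt/user should fail (the exact error message is an assumption and may vary by version).

        touch /mnt/user/test/a
        ln /mnt/user/test/a /mnt/user/test/b   # expected to fail once the tunable is set to No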
  2. Found this somewhere; has anyone tried it?
  3. So far it's holding up well. I'm having new issues with NFS, but I don't think that's related to this at all. Long story short: no more panics. That was one of the main fixes for me. I used a range outside my DHCP pool and reserved it for containers on Unraid. If I recall correctly (I don't currently have access to my system), .100-.223 is my LAN range, .50-.99 I use for static assignments, and .224-.255 (192.168.99.224/27) I reserve for containers on Unraid. (A sketch of the equivalent Docker network is below.)
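     For illustration, this is roughly what that layout looks like as a user-defined Docker macvlan network. It is a sketch, not what the Unraid GUI generates: the network name is hypothetical, and the parent interface (br0) and gateway (.1) are assumptions.

        # LAN is 192.168.99.0/24; containers only receive addresses from the /27
        docker network create -d macvlan \
          --subnet=192.168.99.0/24 \
          --gateway=192.168.99.1 \
          --ip-range=192.168.99.224/27 \
          -o parent=br0 \
          containers   # hypothetical network name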
  4. Referencing that my issue might be related to this.
  5. Sadly, I use NFS every day. I'll see if I can find an alternative for my other devices. I'll look into what limetech suggested and set 'Settings/NFS/Tunable (fuse remember)' to 0. (A way to inspect the running mount is sketched below.)
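     As I understand it, that tunable feeds into the options the shfs FUSE process is started with; one way to look at the running invocation (whether the remember value is surfaced here is an assumption on my part):

        # inspect the shfs process and the options it was launched with
        ps -ef | grep '[s]hfs'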
  6. Noticed this. Not sure what would cause it to happen:

        d????????? ? ?      ?     ?  ?            user/
        drwxrwxrwx 1 nobody users 71 Oct  6 16:15 user0/
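     The question marks usually mean ls could read the directory entry but stat() on it failed; for /mnt/user that is typically the shfs FUSE mount having died. A quick check (a sketch):

        # a crashed FUSE mount commonly reports "Transport endpoint is not connected"
        stat /mnt/user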
  7. I've attached my logs in hopes someone can help. Thirty days is the longest I've ever managed to keep Unraid from crashing, but the past couple of days I've gone back to daily crashes. This time it seems to crash my shares and containers, but the GUI is still responsive. After a reboot everything is fine again. No hardware changes or new software that I can think of. From my limited understanding, the logs are saying the Docker network crashed? Any advice is much appreciated. (A grep sketch for the relevant log lines is below.) EDIT: Updated title from "6.9.2 I've been running stable for 30days and now daily crashes." to "6.9.2 30 days stable, suddenly unraid unmounts /mnt/user" tatooine-diagnostics-20211005-1600.zip
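     If this is the common macvlan crash, the syslog usually contains a kernel call trace mentioning macvlan. A hedged way to check (the log path is the Unraid default):

        grep -iE 'macvlan|call trace' /var/log/syslog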
  8. Followed the advice and swapped the drive out; seems to be good now. Thanks for the help!!! Much appreciated.
  9. Just started copying everything back to the pool and already have this:

        Tatooine:~# btrfs dev stats /mnt/app_cache/
        [/dev/nvme0n1p1].write_io_errs    129163
        [/dev/nvme0n1p1].read_io_errs     0
        [/dev/nvme0n1p1].flush_io_errs    3
        [/dev/nvme0n1p1].corruption_errs  0
        [/dev/nvme0n1p1].generation_errs  0
        [/dev/nvme1n1p1].write_io_errs    0
        [/dev/nvme1n1p1].read_io_errs     0
        [/dev/nvme1n1p1].flush_io_errs    0
        [/dev/nvme1n1p1].corruption_errs  0
        [/dev/nvme1n1p1].generation_errs  0

     I take it this could be a failing drive, specifically `nvme0n1p1`? (A health-check sketch is below.)
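     To separate a failing NVMe from a filesystem problem, the drive's own health data is worth a look; a sketch assuming smartmontools is available (I believe it ships with Unraid):

        # check media errors and overall health on the suspect drive
        smartctl -a /dev/nvme0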
  10. Thanks @JorgeB. I put `nvme_core.default_ps_max_latency_us=0` in and decided I might as well do a BIOS update. The cache pool was still read-only, but I've backed everything up and will be formatting the drives. Hopefully it won't occur again. (Where the parameter goes is sketched below.)
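      For anyone else finding this: on Unraid the kernel parameter goes on the append line in /boot/syslinux/syslinux.cfg (editable from Main > Flash in the GUI). A sketch of the relevant stanza; your existing append contents may differ:

         label Unraid OS
           menu default
           kernel /bzimage
           append nvme_core.default_ps_max_latency_us=0 initrd=/bzroot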
  11. Ran the btrfs stats; seems like both drives have numbers there.

        Tatooine:~# btrfs dev stats /mnt/app_cache/
        [/dev/nvme0n1p1].write_io_errs    2350795
        [/dev/nvme0n1p1].read_io_errs     953269
        [/dev/nvme0n1p1].flush_io_errs    74132
        [/dev/nvme0n1p1].corruption_errs  9861
        [/dev/nvme0n1p1].generation_errs  0
        [/dev/nvme1n1p1].write_io_errs    0
        [/dev/nvme1n1p1].read_io_errs     0
        [/dev/nvme1n1p1].flush_io_errs    0
        [/dev/nvme1n1p1].corruption_errs  239
        [/dev/nvme1n1p1].generation_errs  0

      Is this a btrfs issue or an M.2 issue? Should I be looking to replace my NVMe drives?
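      Worth noting that these counters are cumulative and only reset on request, so old incidents keep inflating them; after fixing the underlying problem they can be zeroed so new errors stand out (a sketch):

         # reset the per-device error counters for the pool
         btrfs device stats -z /mnt/app_cache/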
  12. Woke up this morning to hundreds of warnings and emails saying "Cache pool BTRFS missing device". Decided to restart Unraid, and now the pool is read-only but the drive is there. Going to try copying everything off the cache to the array and then formatting the drives in the pool (sketch below). This isn't the first time my btrfs pool has gone read-only, and it's getting frustrating.
      1. Is what I'm planning to do alright?
      2. Is there anything I can do to prevent this from happening again?
      3. Could someone help me try to figure out what happened?
      Attached are the diagnostics from before the restart: tatooine-diagnostics-20210510-0630.zip
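      A sketch of the copy-off step, assuming the pool is mounted at /mnt/app_cache and disk1 has room (the destination directory name is hypothetical; adjust to taste):

         # a read-only source is fine for rsync; preserve attributes and xattrs
         rsync -avX /mnt/app_cache/ /mnt/disk1/cache_backup/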
  13. I believe you need to add the following to your nginx config:

        # Make a regex exception for `/.well-known` so that clients can still
        # access it despite the existence of the regex rule
        # `location ~ /(\.|autotest|...)` which would otherwise handle requests
        # for `/.well-known`.
        location ^~ /.well-known {
            # The following 6 rules are borrowed from `.htaccess`
            location = /.well-known/carddav { return 301 /remote.php/dav/; }
            location = /.well-known/caldav  { return 301 /remote.php/dav/; }

            # Anything else is dynamically handled by Nextcloud
            location ^~ /.well-known { return 301 /index.php$uri; }

            try_files $uri $uri/ =404;
        }

      See the documentation at https://docs.nextcloud.com/server/21/admin_manual/installation/nginx.html and also https://docs.nextcloud.com/server/21/admin_manual/issues/general_troubleshooting.html#service-discovery
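      After editing, it's worth validating the config and reloading; a sketch for a bare-metal nginx (a containerized setup would wrap this in docker exec):

         nginx -t && nginx -s reload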