gamerkonks

Members
  • Posts: 55

Everything posted by gamerkonks

  1. Hi there, I'm having an issue with an external USB HDD. I plug it in and it appears in Unassigned Devices, but I'm unable to mount the drive; it only has the format option. I know the drive is fine because I can access the data when I plug it into my Windows machine. My current workaround is to pass it through to a VM in Unraid. Disk log below.

     kernel: sd 15:0:0:0: [sdl] Very big device. Trying to use READ CAPACITY(16).
     kernel: sd 15:0:0:0: [sdl] 35156590592 512-byte logical blocks: (18.0 TB/16.4 TiB)
     kernel: sd 15:0:0:0: [sdl] 4096-byte physical blocks
     kernel: sd 15:0:0:0: [sdl] Write Protect is off
     kernel: sd 15:0:0:0: [sdl] Mode Sense: 47 00 10 08
     kernel: sd 15:0:0:0: [sdl] No Caching mode page found
     kernel: sd 15:0:0:0: [sdl] Assuming drive cache: write through
     kernel: sd 15:0:0:0: [sdl] Attached SCSI disk
     emhttpd: WD_Elements_25A3_546456468464654-0:0 (sdl) 512 35156590592
     emhttpd: read SMART /dev/sdl
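     A quick way to see why Unassigned Devices only offers the format option is to inspect the partition table and filesystem signatures from the Unraid terminal. These are standard Linux tools, not part of the plugin; /dev/sdl matches the log above:

       # List the partition table; a table or filesystem type Unraid doesn't
       # recognize would explain why only "format" is offered
       fdisk -l /dev/sdl
       # Show any filesystem signatures on the device and its partitions
       blkid /dev/sdl*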
  2. The forks that I'm aware of that have updated to >= 1.3 (Chia version numbers):
     BTCgreen to 1.3.1
     Cactus to 1.3.3
     Flax to 1.3.3
     SHIBgreen to 1.3.1
  3. I did this by copying the scripts to my array somewhere. It looks like you've copied them to your Nextcloud appdata directory. You could map them into your Docker container by editing the template and adding an additional path. I just copied them directly into the Docker container with:

       docker cp /pathtoscripts/. nextcloud:/

     Then open a console window of your Nextcloud instance and grant execute permissions to both scripts:

       chmod a+x solvable_files.sh

     Then I had to install mysql:

       apk add --update mysql mysql-client

     It's my understanding that these changes will be cleared the next time your Nextcloud Docker container updates, so you don't really need to worry about them. Then I ran the solvable files script with the following parameters:

       ./solvable_files.sh /local mysql ip user password db list noscan

     /local is the container directory that is mapped to my data on the array. ip is the IP address of my MariaDB instance, 192.168.x.x; for me this is the same IP as my server. user is the username for the db, mine is nextcloud. password is the password for that user. Then you can either use list to list the files, or fix to attempt to fix them. Then scan or noscan. Hope this helps.
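     For example, with placeholder values for the IP, user, password and db name, the invocation might look like:

       ./solvable_files.sh /local mysql 192.168.1.100 nextcloud mypassword nextcloud list noscan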
  4. Hi there, I have 2 disks. Disk 1 has my Media share, Disk 2 has everything else. I have had FIP running for a while now, but recently selected my Media share to be excluded, via Settings -> FIP -> Excluded folders and files. My Disk verification schedule is set to run monthly. When the monthly Disk verification started running today, I noticed that it was reading from Disk 1, and in Tools -> FIP I see that Disk 1 is currently processing file xxx of 88805, when I'm not expecting it to verify anything on Disk 1. Is this because there are existing hashes from this share, since it wasn't excluded previously? Thanks.
  5. I would've thought so, and I did leave it for a while, but I saw that the activity on the cache drive stopped and the size of the swap file was no longer increasing, so something must've stopped.
  6. As a workaround I've just run the commands from the script manually in the terminal:

       # Allocate a 32 GiB swap file on the cache pool
       dd if=/dev/zero of=/mnt/cache/swap/swapfile bs=1M count=32768
       # Disable btrfs compression on the file (swap files must not be compressed)
       btrfs property set /mnt/cache/swap/swapfile compression none
       # Format, lock down permissions, and enable it
       mkswap -L swapfile /mnt/cache/swap/swapfile
       chmod 600 /mnt/cache/swap/swapfile
       swapon -v /mnt/cache/swap/swapfile
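     To confirm the swap actually came up, the standard checks (not part of the plugin's script) are:

       # List active swap devices/files
       swapon --show
       # Overall memory and swap totals
       free -h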
  7. Hi there, I've been using this and it's been working great, but I've tried to increase the size of my swap file and it just gets stuck loading. Checking syslog, it looks like it times out after 3 minutes. Is there any way to increase the timeout?

     Feb 13 11:39:32 NAS rc.swapfile[8110]: New swap file configuration is being implemented
     Feb 13 11:39:32 NAS rc.swapfile[8123]: Restarting swap file with new configuration ...
     Feb 13 11:41:10 NAS rc.swapfile[10902]: Swap file /mnt/cache/swap/swapfile stopped
     Feb 13 11:41:10 NAS rc.swapfile[10904]: Swap file /mnt/cache/swap/swapfile removed
     Feb 13 11:41:13 NAS rc.swapfile[11070]: Creating swap file /mnt/cache/swap/swapfile please wait ...
     Feb 13 11:44:13 NAS nginx: 2022/02/13 11:44:13 [error] 19957#19957: *933134 upstream timed out (110: Connection timed out) while reading upstream, client: 10.8.0.2, server: , request: "POST /update.htm HTTP/2.0", upstream: "http://unix:/var/run/emhttpd.socket:/update.htm", host: "192.168.8.9", referrer: "https://192.168.8.9/Settings/swapfile"
  8. For some reason, grafana latest is pulling 7.5.13 for me. I think I had 8.3.3 installed last, but after it updated overnight I got a bunch of error messages popping up on my dashboard, such as "Templating Failed to upgrade legacy queries", and all my panels had null data sources. I saw that it was now running 7.5.13. I changed the tag to 8.3.4 and now it seems to be working as normal again. Does anyone know why this happened?
  9. I don't need any UD devices to have Enhanced macOS interoperability. I only need that for shares from my array.
  10. I've found exFAT more convenient, as it's compatible with Windows, macOS and PS4. Ideally I'd like to be able to plug them directly into the machines, as well as be able to share the disks over the network via Unraid.
  11. I've run into another issue. I have a few unassigned devices mounted, some exFAT, some NTFS. I noticed from my Windows 10 client that on the exFAT drives I could create files over SMB, but not rename or delete them; I would get a "request not supported" error. I could, however, modify/delete files on those drives using Krusader. Eventually I found some posts on another forum about issues with vfs_fruit and exFAT. I had Enhanced macOS interoperability set to Yes; I changed it to No and tried again, and now I can read/write to the exFAT drives over SMB, but I can't export my Time Machine share as such. My guess is that having Enhanced macOS interoperability enabled adds vfs_fruit to smb-settings.conf, which also requires vfs_catia and vfs_streams_xattr. According to the documentation, the file system shared with vfs_streams_xattr must support xattrs, which I guess exFAT does not? If this is the case, is there any way for read/write to work on exFAT drives over SMB while having macOS interoperability enabled?
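      I don't know how Unraid generates its Samba config, but in plain Samba the usual fix would be to load the fruit stack per share rather than globally; a minimal sketch of that idea, with hypothetical share names and paths:

        # smb.conf sketch - fruit only on the share macOS clients use
        [timemachine]
            path = /mnt/user/timemachine
            vfs objects = catia fruit streams_xattr
            fruit:time machine = yes

        [exfat_drive]
            path = /mnt/disks/exfat_drive
            # no vfs_fruit here, so exFAT's missing xattr support doesn't matter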
  12. Hi there, I'm having an issue where my external USB hard drive will all of a sudden cause high IO wait on my system. The last time it happened was yesterday (10/01/22) at around 12:36. In syslog the first relevant message I see is:

      Jan 10 12:36:33 TheNAS kernel: usb 6-1: reset SuperSpeed Gen 1 USB device number 2 using xhci_hcd

      The only files on the drive are chia plots for farming. I only have the necessary shares included in the Cache Dirs plugin, so that shouldn't be accessing the drive. When it happens I am unable to unmount the drive from the web UI, and the only fix seems to be to physically unplug the drive. How can I diagnose the root cause of the issue? Could it be hardware related (cable, drive, port)? Thanks.
      thenas-diagnostics-20220110-2348.zip
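      For anyone else hitting this, generic Linux tools can help narrow down whether the drive or the link is wedged while it's happening (smartctl ships with Unraid; iostat comes from the sysstat package; /dev/sdX is a placeholder for the USB drive):

        # Per-device utilization; a wedged device sits near 100% util with no throughput
        iostat -x 1
        # Kernel messages around the reset
        dmesg | grep -i usb | tail -50
        # SMART data through the USB bridge (-d sat for USB-SATA adapters)
        smartctl -a -d sat /dev/sdX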
  13. That must be it. I did get an error saying the flash drive was read only last time I booted, but it seems fine now.
  14. Does anyone know what would cause an unclean shutdown when shutting down with the array stopped?
  15. Yeah, I had to update the values directly in the config file.
  16. I think so, but as a test I've removed groups from my user and from the rule, and I'm getting the same problem. My access control is this:

      default_policy: deny
      rules:
        ## Rules applied to everyone
        - domain: "*.duckdns.org"
          policy: one_factor
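      For reference, the group-restricted form of that rule (roughly what I had before this test) would look like this in Authelia's access control syntax; the group name is just an example:

        rules:
          - domain: "*.duckdns.org"
            policy: one_factor
            subject: "group:admins"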
  17. Hi there, I'm trying to get Authelia up and running. I'm using it with SWAG, with the default authelia-server.conf and authelia-location.conf. When I try to access an application that is reverse proxied and set up to use Authelia, it correctly goes to the Authelia login page. When I log in correctly, it seems to redirect successfully, but without any session info (the user name is blank), and I end up at the Authelia login page again. I've tried removing redis from the config to see if in-memory session handling would make a difference, but no change. Relevant log below. Thanks.

      time="2021-12-28T23:51:14+11:00" level=debug msg="Check authorization of subject username= groups= ip=x.x.x.x and object https://xxx.duckdns.org/ (method GET)."
      time="2021-12-28T23:51:14+11:00" level=info msg="Access to https://xxx.duckdns.org/ (method GET) is not authorized to user <anonymous>, responding with status code 401" method=GET path=/api/verify remote_ip=x.x.x.x
      time="2021-12-28T23:51:20+11:00" level=debug msg="Mark 1FA authentication attempt made by user 'test'" method=POST path=/api/firstfactor remote_ip=x.x.x.x
      time="2021-12-28T23:51:20+11:00" level=debug msg="Successful 1FA authentication attempt made by user 'test'" method=POST path=/api/firstfactor remote_ip=x.x.x.x
      time="2021-12-28T23:51:20+11:00" level=debug msg="Check authorization of subject username=test groups=admins,dev ip=x.x.x.x and object https://xxx.duckdns.org/ (method )."
      time="2021-12-28T23:51:20+11:00" level=debug msg="Required level for the URL https://xxx.duckdns.org/ is 1" method=POST path=/api/firstfactor remote_ip=x.x.x.x
      time="2021-12-28T23:51:20+11:00" level=debug msg="Redirection URL https://xxx.duckdns.org/ is safe" method=POST path=/api/firstfactor remote_ip=x.x.x.x
      time="2021-12-28T23:51:20+11:00" level=debug msg="Check authorization of subject username= groups= ip=x.x.x.x and object https://xxx.duckdns.org/ (method GET)."
      time="2021-12-28T23:51:20+11:00" level=info msg="Access to https://xxx.duckdns.org/ (method GET) is not authorized to user <anonymous>, responding with status code 401" method=GET path=/api/verify remote_ip=x.x.x.x
  18. I'll try out your elasticsearch container. I'm currently using the elasticsearch container with the tag elasticsearch:5.6.14. Hopefully I'll be able to copy all my old indexes over.
  19. Disappointed by the locked features in v2, but I can't manage to get v1.5 working again. I had to remove security from ElasticSearch5 to get it working again; apparently that's a paid feature now? And then I had to run the following to get diskover 1.5 to stop throwing errors in the log:

      docker exec -it diskover bash
      pip3 install 'click==7.1.2' --force-reinstall
      exit
      docker restart diskover

      You need to reapply this every time you make a change to the Docker template or whenever it updates, though. If I try to go to the web UI for diskover 1.5, I get a server error 500.
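      Since this has to be reapplied after every template change or image update, it can be scripted; a minimal sketch, assuming the container is still named diskover (e.g. as a User Scripts entry):

        #!/bin/bash
        # Re-pin click inside the diskover container, then restart it
        docker exec diskover pip3 install 'click==7.1.2' --force-reinstall
        docker restart diskover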
  20. Thanks, I think I've got it working, just waiting on the first index to build. I think it may be because the crontab file has the first line commented out. It defaults to 3am every day; remove the "#", save it, and check if it builds a new index at 3am.
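      In crontab terms the change is just uncommenting the schedule line; the command path below is illustrative, not the container's actual entry:

        # disabled - no index ever builds:
        #0 3 * * * /path/to/diskover-index-command
        # enabled - builds a new index daily at 03:00:
        0 3 * * * /path/to/diskover-index-command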
  21. Here's my template for v1.x. I might need some help getting v2.x set up; I'd like to give it a go.
  22. Seems to have been updated to v2 now. You'll need to point the config at a new folder to let it continue.