gamerkonks

Members
  • Posts

    36
Everything posted by gamerkonks

  1. Yeah, I figured as much. I don't think I even need the data from that collector, so now I'm just trying to pass a flag to disable it (--no-collector.mdadm)
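For anyone else doing this via the Docker template, the flag just gets appended to the container's existing argument string. A minimal sketch — the `--path.rootfs` argument shown alongside it is an assumption about what's already in the template, not something from this thread:

```shell
# Illustrative only: appending --no-collector.mdadm to an existing
# node_exporter argument string. The pre-existing argument is an assumption.
post_args="--path.rootfs=/host"
post_args="$post_args --no-collector.mdadm"
echo "$post_args"   # -> --path.rootfs=/host --no-collector.mdadm
```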
  2. Okay, so I've taught myself Go and looked at the source code, and like Galileo mentioned, node_exporter expects the format of /proc/mdstat to be like in the link he posted, i.e. with spaces. My /proc/mdstat is similar to Galileo's and doesn't have spaces, so node_exporter isn't able to parse it. I don't understand how what he did could have solved the problem...
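To make the whitespace problem concrete, here's a toy illustration of why a whitespace-delimited parse breaks when fields run together. This is NOT node_exporter's actual parser (that's a Go regex upstream), and the sample lines are made up for demonstration:

```shell
# Toy illustration only: a whitespace-delimited parse of an mdstat-style
# status line. NOT node_exporter's real parsing code; sample lines invented.
spaced="md0 : active raid1 sdb1[1] sda1[0]"
unspaced="md0:active raid1 sdb1[1] sda1[0]"

# With spaces around the colon, field 3 is the array state:
echo "$spaced" | awk '{print $3}'    # -> active
# Without them, every field shifts and the parse comes back wrong:
echo "$unspaced" | awk '{print $3}'  # -> sdb1[1]
```

The same shift is what makes a format-sensitive parser fail outright on one machine's /proc/mdstat and succeed on another's.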
  3. I already have the mapping to host and the port argument in my container configuration, but I'm getting the same error. Are there any other things to try, or ways to troubleshoot?
  4. I've noticed an issue recently: when I click on Disk Log Information, the popup comes up, but is blank except for the "Done" button. I see unassigned.devices logging in Syslog. When I click the Disk Log Information button, this error is generated in Syslog:
     emhttpd: error: run_cmd, 882: No such file or directory (2): invalid cmd: /webGui/scripts/disk_log
  5. I thought I formatted it with UD after I ran a preclear on it. I'll try formatting it again. Thanks.
  6. Apologies for the delay. See attached diagnostics and screenshot. thenas-diagnostics-20220623-1943.zip
  7. Hi there, I'm having an issue with an external USB HDD. I plug it in and it appears in Unassigned Devices, but I'm unable to mount the drive; it only has the format option. I know the drive is fine because I can access the data when I plug it into my Windows machine. My current workaround is to pass it through to a VM in Unraid. Disk log below.
     kernel: sd 15:0:0:0: [sdl] Very big device. Trying to use READ CAPACITY(16).
     kernel: sd 15:0:0:0: [sdl] 35156590592 512-byte logical blocks: (18.0 TB/16.4 TiB)
     kernel: sd 15:0:0:0: [sdl] 4096-byte physical blocks
     kernel: sd 15:0:0:0: [sdl] Write Protect is off
     kernel: sd 15:0:0:0: [sdl] Mode Sense: 47 00 10 08
     kernel: sd 15:0:0:0: [sdl] No Caching mode page found
     kernel: sd 15:0:0:0: [sdl] Assuming drive cache: write through
     kernel: sd 15:0:0:0: [sdl] Attached SCSI disk
     emhttpd: WD_Elements_25A3_546456468464654-0:0 (sdl) 512 35156590592
     emhttpd: read SMART /dev/sdl
  8. The forks that I'm aware of have updated to >= 1.3 (Chia version numbers):
     BTCgreen to 1.3.1
     Cactus to 1.3.3
     Flax to 1.3.3
     SHIBgreen to 1.3.1
  9. I did this by copying the scripts to my array somewhere. Looks like you've copied them to your Nextcloud appdata directory. You could map them into your Docker container by editing the template and adding an additional path. I just copied them directly into the Docker container with:
     docker cp /pathtoscripts/* nextcloud:/
     Then open a console window of your Nextcloud instance and grant execute permissions to both scripts with:
     chmod a+x solvable_files.sh
     Then I had to install mysql with:
     apk add --update mysql mysql-client
     It's my understanding that these will be cleared the next time your Nextcloud Docker container updates, so you don't really need to worry about this. Then I ran the solvable files script with the following parameters:
     ./solvable_files.sh /local mysql ip user password db list noscan
     local is the container directory that is mapped to my data on the array. ip is the IP address of my MariaDB instance, 192.168.x.x (for me this is the same IP as my server). user is the username for the db; mine is nextcloud. password is the password for that user. Then you can either use list to list the files, or fix to attempt to fix them. Then scan or noscan. Hope this helps.
  10. Hi there, I have 2 disks: Disk 1 has my Media share, Disk 2 has everything else. I have had FIP running for a while now, but recently selected my Media share to be excluded via Settings -> FIP -> Excluded folders and files. My Disk verification schedule is set to run monthly. When my monthly Disk verification started running today, I noticed that it was reading from Disk 1, and in Tools -> FIP I see that Disk 1 is currently processing file xxx of 88805, when I'm not expecting it to verify anything on Disk 1. Is this because there are existing hashes from this share, since it wasn't excluded previously? Thanks.
  11. I would've thought so, and I did leave it for a while, but I saw that the activity on the cache drive stopped and the size of the swap file was no longer increasing, so something must've stopped.
  12. As a workaround I've just run the commands from the script manually in the terminal.
      dd if=/dev/zero of=/mnt/cache/swap/swapfile bs=1M count=32768
      btrfs property set /mnt/cache/swap/swapfile compression none
      mkswap -L swapfile /mnt/cache/swap/swapfile
      chmod 600 /mnt/cache/swap/swapfile
      swapon -v /mnt/cache/swap/swapfile
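For anyone adapting those commands to a different swap size: with bs=1M, the dd count is just the target size in MiB. A quick sanity check of the arithmetic behind the 32 GiB file above:

```shell
# Sanity-check the dd sizing: bs=1M with count=32768 gives a 32 GiB file.
size_gib=32
count=$(( size_gib * 1024 ))        # MiB blocks at bs=1M
bytes=$(( count * 1024 * 1024 ))    # resulting file size in bytes
echo "count=$count bytes=$bytes"    # -> count=32768 bytes=34359738368
```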
  13. Hi there, I've been using this and it's been working great, but I've tried to increase the size of my swap file and it just gets stuck loading. Checking syslog, it looks like it times out after 3 minutes. Is there any way to increase the timeout?
      Feb 13 11:39:32 NAS rc.swapfile[8110]: New swap file configuration is being implemented
      Feb 13 11:39:32 NAS rc.swapfile[8123]: Restarting swap file with new configuration ...
      Feb 13 11:41:10 NAS rc.swapfile[10902]: Swap file /mnt/cache/swap/swapfile stopped
      Feb 13 11:41:10 NAS rc.swapfile[10904]: Swap file /mnt/cache/swap/swapfile removed
      Feb 13 11:41:13 NAS rc.swapfile[11070]: Creating swap file /mnt/cache/swap/swapfile please wait ...
      Feb 13 11:44:13 NAS nginx: 2022/02/13 11:44:13 [error] 19957#19957: *933134 upstream timed out (110: Connection timed out) while reading upstream, client: 10.8.0.2, server: , request: "POST /update.htm HTTP/2.0", upstream: "http://unix:/var/run/emhttpd.socket:/update.htm", host: "192.168.8.9", referrer: "https://192.168.8.9/Settings/swapfile"
  14. For some reason, grafana latest is pulling 7.5.13 for me. I think I had 8.3.3 installed last, but after it updated overnight, I got a bunch of error messages popping up on my dashboard, such as "Templating Failed to upgrade legacy queries", and all my panels had null data sources. I saw that it was now running 7.5.13. I changed the tag to 8.3.4 and now it seems to be working as normal again. Does anyone know why this happened?
  15. I don't need any UD devices to have Enhanced macOS interoperability. I only need that for shares from my array.
  16. I've found exFAT more convenient, as it's compatible with Windows, macOS and PS4. Ideally I'd like to be able to plug them directly into the machines, as well as be able to share the disks over the network via Unraid.
  17. I've run into another issue. I have a few unassigned devices mounted, some exFAT, some NTFS. I noticed from my Windows 10 client that with the exFAT drives I could create files, but not rename or delete them over SMB; I would get a "request not supported" error. However, I could modify/delete files on those drives using Krusader. Eventually I found some posts on another forum related to issues with vfs_fruit and exFAT. I had Enhanced macOS interoperability set to Yes. I changed it to No and tried again. Now I can read/write to the exFAT drives over SMB, but I can't export my Time Machine share as such. My guess is that having Enhanced macOS interoperability enabled adds vfs_fruit to smb-settings.conf, which also requires vfs_catia and vfs_streams_xattr. According to the documentation, the file system shared with vfs_streams_xattr must support xattrs, which I guess exFAT does not? If this is the case, is there any way for read/write to work on exFAT drives over SMB while having macOS interoperability enabled?
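One avenue that might be worth testing (a sketch only, and not something the Unraid GUI exposes directly): Samba allows the global "vfs objects" list to be overridden per share, so in theory you could leave Enhanced macOS interoperability on globally and clear the VFS stack just for the exFAT share. The share name and path below are made-up examples:

```ini
; Hypothetical smb-extra.conf fragment; share name and path are examples.
; Setting "vfs objects" empty at share level overrides the global
; fruit/catia/streams_xattr stack for this one share.
[exfat_usb]
    path = /mnt/disks/exfat_usb
    vfs objects =
```

Whether Unraid's generated config preserves a fragment like this across restarts is something I'd verify before relying on it.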
  18. Hi there, I'm having an issue where my external USB hard drive will all of a sudden cause high IO wait on my system. The last time it happened was yesterday (10/01/22) at around 12:36. The first relevant message I see in Syslog is:
      Jan 10 12:36:33 TheNAS kernel: usb 6-1: reset SuperSpeed Gen 1 USB device number 2 using xhci_hcd
      The only files on the drive are Chia plots for farming. I only have necessary shares included in the Cache Dirs plugin, so that shouldn't be accessing the drive. When it happens I am unable to unmount the drive from the Web UI, and the only fix seems to be to physically unplug the drive. How can I diagnose the root cause of the issue? Could it be hardware related (cable, drive, port)? Thanks. thenas-diagnostics-20220110-2348.zip
  19. That must be it. I did get an error saying the flash drive was read only last time I booted, but it seems fine now.
  20. Does anyone know what would cause an unclean shutdown when shutting down with the array stopped?
  21. Yeah, I had to update the values directly in the config file.
  22. I think so, but as a test I've removed groups from my user and from the rule, and I'm getting the same problem. My access control is this:
      default_policy: deny
      rules:
        ## Rules applied to everyone
        - domain: "*.duckdns.org"
          policy: one_factor