nickp85

Members
  • Posts: 219

nickp85's Achievements

Explorer (4/14) · 15 Reputation

  1. Yes, multiple successful backups, including a large one after installing 12.4. Nothing extra in my SMB options. I can confirm, though, that I cannot browse to my Unraid shares through Finder; I have to use the Go menu to connect to them manually. Also, .local is not working anymore for some reason, so I am using the DNS name configured on my router to reach the server. I manually connected to the TimeMachine share and then added the disk. Once I did that, it can find it again each time. My TimeMachine share is set to private but not hidden. For reference, here is my SMB configuration:
     #unassigned_devices_start
     #Unassigned devices share includes
        include = /tmp/unassigned.devices/smb-settings.conf
     #unassigned_devices_end
     #vfs_recycle_start
     #Recycle bin configuration
     [global]
        syslog only = Yes
        syslog = 0
        logging = 0
        log level = 0 vfs:0
     #vfs_recycle_end
  2. Same issue here: I could no longer connect to my TimeMachine share, and when I remove the disk and look for it again, nothing shows up. It was working on previous versions of 6.10. Actually, I can't seem to connect to any SMB shares on Unraid from my Mac. I also did the 12.4 update this evening, and it's broken both ways. **Update - I was able to mount the TimeMachine share manually in Finder using smb://nicknas2.homenet (the DNS domain on my router). It had previously been using nicknas2.local, which doesn't appear to be working anymore. After manually connecting to the share and re-associating the disk in Time Machine, it seems to have started trying to back up (see the sketch after this list). Fingers crossed... **Update 2 - the backup ran to completion, so now it's using the Unraid server name with my router's DNS domain instead of .local. I will keep an eye on it.
  3. @jonp any thoughts here? There seems to be something wrong with installing packages from the Nerd Pack.
  4. I have 3 packages that keep saying an update is available but won't actually update. The install appears to run and complete, but the GUI still says an update is available. How can I fix this?
  5. Tried to move everything off, but my docker.img file would not move. I am using the XFS format for the Docker image. Every time it tried to move it, there were errors in the log about the corruption. I ended up deleting my docker.img file entirely, and I even formatted each disk with XFS in Unassigned Devices before deleting all the partitions again and setting the pool back up. I ran a scrub while the pool was empty and no errors were found (see the scrub sketch after this list). Now I am moving everything back. Interestingly enough, even though docker.img would not move (I even tried to stop/start the array to break anything that may have had it locked), Docker would still start up fine if I turned it on. I know the XFS image format is newer to Unraid for Docker, so could there be some weirdness with an XFS Docker image on a btrfs cache pool? I pulled a fresh diagnostic after rebuilding, transferring everything back, and rebuilding my Dockers. Update: Ran a scrub after all my stuff was back the way it should be and it came back clean. At least I have a good starting point now. nicknas2-diagnostics-20220331-2348.zip
  6. Going back means I lose my Windows 11 VM, since there's no virtual TPM, unfortunately. Sent from my iPhone using Tapatalk
  7. Update: I let the HCI test run overnight and there were still no errors at all. @JorgeB I'm really starting to doubt this is a hardware issue; it seems more like something with btrfs or Unraid.
  8. The 24-hour memtest I ran was the UEFI one from the memtest site, using a different stick; the Unraid one doesn't boot UEFI. I assigned 28GB to my Windows 11 VM and ran 12 copies of that HCI tool at 2000 MB each (the last one was 1000 MB), and all ran to completion without error. Unraid was showing 98% system memory used and Windows showed 97% used. I also used qemu-img convert to duplicate my VM's raw image file and deleted the old one, thinking that possibly some of the corruption was related to the img file (see the qemu-img sketch after this list). The system still shows 6 uncorrectable errors without a file path when doing a scrub, but there have been no new errors for over 24 hours. The system is definitely stable... I guess the next step is to try to recreate the cache pool again, for the second time in a week. As I've mentioned, I would not even have known there was an error if I hadn't looked. The machine is on 24/7 and I use it for multiple tasks in Unraid plus a gaming VM. No trouble other than the btrfs log lines.
  9. It doesn't. Based on what I've read online, the output of the scrub is supposed to show the file path as well, but in this case it does not. Does that mean the uncorrectable error is somewhere in free space?
  10. It doesn't; those are the log lines from the scrub. I have 6 uncorrectable errors. It just gives the logical numbers and which device.
  11. @JorgeB is there some way to tell which file has the corruption? I have 6 uncorrectable errors, but the log output is not specific about which file is corrupted or where. I would like to find out if I can. If it's a file I can live without, I'll just delete it and see if I get further corruption. I'm really starting to think mover is just moving the corrupted files off and back on whenever I rebuild the cache.
      [ 8803.054697] BTRFS error (device nvme0n1p1): bdev /dev/nvme1n1p1 errs: wr 0, rd 0, flush 0, corrupt 12, gen 0
      [ 8803.057091] BTRFS error (device nvme0n1p1): bdev /dev/nvme0n1p1 errs: wr 0, rd 0, flush 0, corrupt 14, gen 0
      [ 8803.058429] BTRFS error (device nvme0n1p1): bdev /dev/nvme0n1p1 errs: wr 0, rd 0, flush 0, corrupt 15, gen 0
      [ 8803.059273] BTRFS error (device nvme0n1p1): unable to fixup (regular) error at logical 127944531968 on dev /dev/nvme1n1p1
      [ 8803.059390] BTRFS error (device nvme0n1p1): bdev /dev/nvme1n1p1 errs: wr 0, rd 0, flush 0, corrupt 13, gen 0
      [ 8803.059559] BTRFS error (device nvme0n1p1): unable to fixup (regular) error at logical 127941509120 on dev /dev/nvme1n1p1
      [ 8803.060744] BTRFS error (device nvme0n1p1): bdev /dev/nvme0n1p1 errs: wr 0, rd 0, flush 0, corrupt 16, gen 0
      [ 8803.061994] BTRFS error (device nvme0n1p1): unable to fixup (regular) error at logical 127941509120 on dev /dev/nvme0n1p1
      [ 8803.063656] BTRFS error (device nvme0n1p1): unable to fixup (regular) error at logical 127944531968 on dev /dev/nvme0n1p1
      [ 8803.064907] BTRFS error (device nvme0n1p1): unable to fixup (regular) error at logical 127948152832 on dev /dev/nvme1n1p1
      [ 8803.065907] BTRFS error (device nvme0n1p1): unable to fixup (regular) error at logical 127948152832 on dev /dev/nvme0n1p1
      I was looking online, and it says you can use btrfs-inspect-internal to resolve logical blocks to path/file names; however, this utility does not seem to be included in Unraid (see the logical-resolve sketch after this list). The dmesg logging doesn't seem to include the sector or path info that, from what I see online, it should, only the logical number and the device.
  12. I am booting with UEFI, so I booted off a separate stick which has the latest bootable version of memtest86 on it.
  13. Cross-posting here as recommended. I am running v6.10rc4. After almost 24 hours and 15 passes of memtest, the machine is totally stable. I don't have any other noticeable issues besides seeing the btrfs corruption in the disk log. I'm running Plex and such with Docker, plus a Windows 11 VM with its C drive on the cache. Isn't there any other option besides btrfs? I feel like it causes so many problems, and I've seen numerous posts online about stability issues, especially in a RAID configuration. Can someone help? I want to make sure I am not on a path to lose the data on the cache. I've already rebuilt the cache by moving everything off with mover, deleting and recreating the pool, then putting it all back; the fresh pool started producing corruption errors within 24 hours. An additional note here: after finishing memtest I booted up normally but didn't start my Windows 11 VM for a few hours. Now that I've started it up and am using it, the errors are popping up in the log. Could my img file just be corrupt, and using mover to move it on or off the array is just bringing the corruption along? Is there an effective way to copy the contents from the img file to a new one? My Windows 11 VM has been operating fine. Thanks.
  14. Going on 22 hours, 14 passes done, and zero errors. This machine is definitely stable. The corruption errors are observed for both NVMe drives when using the terminal command to print them out by device, so I'm doubting both drives are bad; something fishy is going on. I did not have this issue until either rc3 or rc4. I would not have noticed if I hadn't looked at the disk log randomly while I was in the console one day.
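
A minimal sketch of the manual workaround described in posts 1 and 2, run from the Mac's Terminal. The host names nicknas2.local and nicknas2.homenet and the share name TimeMachine come from those posts; the exact commands are illustrative assumptions, not what the poster ran:

    # Check whether mDNS (.local) resolution still works from macOS
    dns-sd -G v4 nicknas2.local

    # Fall back to the router's DNS name and mount the share
    # (same as Finder's Go > Connect to Server)
    open "smb://nicknas2.homenet/TimeMachine"

    # On recent macOS versions, Time Machine can also be pointed at an SMB
    # destination directly (credentials in the URL are placeholders)
    sudo tmutil setdestination "smb://user:password@nicknas2.homenet/TimeMachine"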
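
A sketch of the scrub and verification steps mentioned in post 5, assuming the cache pool is mounted at the standard Unraid path /mnt/cache; the poster ran the scrub from the GUI, so these are terminal equivalents only:

    # Start a scrub on the rebuilt cache pool and watch its progress
    btrfs scrub start /mnt/cache
    btrfs scrub status /mnt/cache

    # Per-device error counters (write / read / flush / corruption / generation)
    btrfs device stats /mnt/cache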
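
Post 8 mentions using qemu-img convert to duplicate the VM's raw image file, which also answers the question raised in post 13. A minimal sketch, assuming the vdisk sits under the usual Unraid domains share; the paths and file names are placeholders:

    # Copy the contents of the existing raw vdisk into a fresh image
    # (run with the VM shut down), then swap the new file in for the old one
    qemu-img convert -p -O raw /mnt/cache/domains/Windows11/vdisk1.img /mnt/cache/domains/Windows11/vdisk1-new.img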
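
For the question in post 11 about mapping a btrfs logical address to a file, btrfs-progs provides a logical-resolve subcommand under btrfs-inspect-internal; whether the build shipped with that Unraid release includes it is not confirmed by these posts. A sketch, using one of the logical addresses from the log above and assuming the pool is mounted at /mnt/cache:

    # Resolve a logical address from the scrub/dmesg output to file path(s);
    # if it prints nothing, the block may belong to metadata or free space
    btrfs inspect-internal logical-resolve 127944531968 /mnt/cache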