johnsanc

Members
  • Content Count: 139
  • Joined
  • Last visited

Community Reputation

1 Neutral

About johnsanc

  • Rank
    Advanced Member
  • Birthday 03/14/1984

Converted

  • Gender
    Male
  • Location
    Charlotte, NC

  1. @johnnie.black - THANK YOU! I'm glad the RAID1 bug has been logged, and hopefully there is a fix soon so people don't think they are protected when they really aren't. If it weren't for digging through the forums or responses like yours, people would never know. Also thanks for the monitoring tip. I added this script just now. Great idea!
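     For anyone else who wants to set up the same kind of monitoring, the general idea is something along these lines (just a sketch, not the exact script from the thread - adjust the pool path, and I'm assuming the stock Unraid notify script location):
     #!/bin/bash
     # Check the btrfs error counters on the cache pool and raise an Unraid
     # notification if any counter is non-zero. POOL and the notify helper
     # path are assumptions for illustration - adjust for your setup.
     POOL=/mnt/cache
     if btrfs device stats "$POOL" | awk '$NF != 0 { bad=1 } END { exit bad ? 0 : 1 }'; then
         /usr/local/emhttp/webGui/scripts/notify -i warning \
             -s "btrfs errors detected on cache pool" \
             -d "$(btrfs device stats "$POOL" | awk '$NF != 0')"
     fi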
  2. Thanks - running that now... I first tried it without that because the help content on the page said that's what is used by default. I guess that's not the case in all scenarios. Now, does this look like it's going to be correct when it's done?
     Data, RAID1: total=204.00GiB, used=203.17GiB
     System, RAID1: total=32.00MiB, used=48.00KiB
     Metadata, RAID1: total=1.00GiB, used=256.00KiB
     Metadata, single: total=1.00GiB, used=568.91MiB
     GlobalReserve, single: total=228.12MiB, used=21.38MiB
     EDIT: just refreshed and I think this is looking better:
     Data, RAID1: total=205.00GiB, used=204.17GiB
     System, RAID1: total=32.00MiB, used=48.00KiB
     Metadata, RAID1: total=1.00GiB, used=588.12MiB
     GlobalReserve, single: total=228.27MiB, used=0.00B
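     (For reference, the conversion is normally kicked off with a balance filter roughly like the below - the pool path is assumed, and presumably the GUI balance does something similar behind the scenes.)
     # Convert data and metadata block groups to the raid1 profile on the pool,
     # then re-check how the chunks are laid out.
     btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache
     btrfs filesystem usage /mnt/cache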
  3. Well, since I had a backup of the cache I just went ahead and formatted. No surprise that everything in the pool was wiped out after doing that. Everything appeared fine on the surface now, a clean slate. So then I went to copy things back to the cache... after a while I decided to check the cache settings and I noticed this. Why is only the Data in RAID1?
     Data, RAID1: total=205.00GiB, used=204.18GiB
     System, single: total=4.00MiB, used=48.00KiB
     Metadata, single: total=1.01GiB, used=588.72MiB
     GlobalReserve, single: total=228.83MiB, used=0.00B
  4. OK so my backup is complete and I reseated the cables. Everything booted up fine. Before the array was started I went to add disk2 back to its rightful slot... now it says disk2 is part of the cache pool and disk1 (which was previously good) is now unmountable with no filesystem. From the web GUI it looks like my only option is to format disk1. Is this correct? I would think that disk2 would need to be formatted and re-added... Doesn't make sense to me. If I try to remove disk2 again I get errors saying that disk2 is missing now and I cannot change the number of slots to 1. I guess my worries aren't completely unfounded. The only thing noteworthy I see in the log is this:
     Oct 13 16:58:39 Tower emhttpd: shcmd (212): mount -t btrfs -o noatime,nodiratime,degraded -U ec85aae9-18b3-4066-9749-8195b3bee6e8 /mnt/cache
     Oct 13 16:58:39 Tower kernel: BTRFS info (device sdr1): allowing degraded mounts
     Oct 13 16:58:39 Tower kernel: BTRFS info (device sdr1): disk space caching is enabled
     Oct 13 16:58:39 Tower kernel: BTRFS info (device sdr1): has skinny extents
     Oct 13 16:58:39 Tower kernel: BTRFS warning (device sdr1): devid 3 uuid 1d8f16d9-c1d4-4ba2-8a87-729545783443 is missing
     Oct 13 16:58:39 Tower kernel: BTRFS info (device sdr1): bdev /dev/sdr1 errs: wr 0, rd 2, flush 0, corrupt 0, gen 0
     Oct 13 16:58:39 Tower kernel: BTRFS info (device sdr1): bdev /dev/sds1 errs: wr 1, rd 2, flush 0, corrupt 0, gen 0
     Oct 13 16:58:39 Tower kernel: BTRFS warning (device sdr1): chunk 14780647735296 missing 1 devices, max tolerance is 0 for writeable mount
     Oct 13 16:58:39 Tower kernel: BTRFS warning (device sdr1): writeable mount is not allowed due to too many missing devices
     Oct 13 16:58:39 Tower emhttpd: /mnt/cache mount error: No file system
     Oct 13 16:58:39 Tower kernel: BTRFS error (device sdr1): open_ctree failed
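     (Side note for anyone hitting the same wall: when a writeable mount is refused because a device is missing, the pool can usually still be mounted read-only in degraded mode to copy the data off first. Something like the below - the device name is taken from the log above, and the mount point is just an example.)
     mkdir -p /mnt/recovery
     # Read-only degraded mount tolerates the missing device for recovery purposes.
     mount -t btrfs -o ro,degraded /dev/sdr1 /mnt/recovery
     # ...copy the data somewhere safe, then...
     umount /mnt/recovery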
  5. Yes, that's what I thought, but I was bitten before when the entire pool was overwritten instead of just the one disk, so I am extra cautious. Right now I'm backing everything up to my array just in case something goes south when trying to rebuild my pool. Any explanation as to why it showed my cache as 2TB even though I only had a single 1TB disk at the time? That part I really don't understand.
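     (A likely explanation, for anyone finding this later: btrfs keeps the missing device recorded in the filesystem until it is removed or balanced away, so reported totals can still include a disk that is no longer physically present - presumably why the GUI kept showing 2TB. Two quick checks, pool path assumed:)
     # Lists every device the filesystem still references, and flags missing ones.
     btrfs filesystem show /mnt/cache
     # Per-device allocation plus an overall usage summary for the pool.
     btrfs filesystem usage /mnt/cache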
  6. So basically back everything up to the array and start over? Also, is there an explanation for what happened in my case? Why did the cache size appear twice as large as the single physical disk remaining?
  7. OK - well I did not add a 2nd disk, I started the Array to back up the disk, but now it's rebalancing AGAIN... what is going on?!
  8. OK, I rebooted and now my other drive is available to add to the cache, but I get a warning that says "All existing data on this device will be OVERWRITTEN when array is Started". If I were to add my disk back, would it rebalance back to the original state? I use two 1TB Samsung EVO SSDs. These were in RAID1, mirrored. I never did anything to change the RAID setting; whatever happened happened automatically when I rebooted the first time.
  9. Thanks - it just finished rebalancing and it did not do what I assumed it would do... Instead of balancing to a single drive, it now looks like I have a single "Cache" which is twice the size of my physical drive. If I stop the array I don't even see my other disk available for slot 2. How can my cache be twice the size of the physical drive? Why does unraid AUTOMATICALLY rebalance when the array starts in this scenario? That seems like an action that the user should have to initiate. Now when I stop the array and try to restart it, I cannot even restart; my only options are to reboot or power down. I get a warning at the bottom saying I have a stale configuration.
  10. I'm not really sure what happened... I woke up this morning and most of my dockers and VMs were disabled, and I had what looked like some I/O errors with one of my cache drives in the cache pool. I rebooted, and when the system came back up I only saw one cache drive and there was a BTRFS operation running; disk log below. I can also see on my cache disk tab that there is only one disk and the size still shows as 2TB, but it appears to be freeing up space and reducing used space... I'm hoping this is just rebalancing back to one disk or something, but I cannot clearly tell from the UI what is going on. What exactly is happening here and how do I get my cache pool back to normal? I assumed I should just let this run instead of trying to stop it. I fear I will lose all my cache data if I try to interfere.
      Oct 13 10:55:36 Tower kernel: ata7: SATA max UDMA/133 abar m512@0xecf00000 port 0xecf00100 irq 48
      Oct 13 10:55:36 Tower kernel: ata7: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
      Oct 13 10:55:36 Tower kernel: ata7.00: supports DRM functions and may not be fully accessible
      Oct 13 10:55:36 Tower kernel: ata7.00: ATA-11: Samsung SSD 860 EVO 1TB, [REDACTED], [REDACTED], max UDMA/133
      Oct 13 10:55:36 Tower kernel: ata7.00: 1953525168 sectors, multi 1: LBA48 NCQ (depth 32), AA
      Oct 13 10:55:36 Tower kernel: ata7.00: supports DRM functions and may not be fully accessible
      Oct 13 10:55:36 Tower kernel: ata7.00: configured for UDMA/133
      Oct 13 10:55:36 Tower kernel: ata7.00: Enabling discard_zeroes_data
      Oct 13 10:55:36 Tower kernel: sd 7:0:0:0: [sdr] 1953525168 512-byte logical blocks: (1.00 TB/932 GiB)
      Oct 13 10:55:36 Tower kernel: sd 7:0:0:0: [sdr] Write Protect is off
      Oct 13 10:55:36 Tower kernel: sd 7:0:0:0: [sdr] Mode Sense: 00 3a 00 00
      Oct 13 10:55:36 Tower kernel: sd 7:0:0:0: [sdr] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
      Oct 13 10:55:36 Tower kernel: ata7.00: Enabling discard_zeroes_data
      Oct 13 10:55:36 Tower kernel: sdr: sdr1
      Oct 13 10:55:36 Tower kernel: ata7.00: Enabling discard_zeroes_data
      Oct 13 10:55:36 Tower kernel: sd 7:0:0:0: [sdr] Attached SCSI disk
      Oct 13 10:55:36 Tower kernel: BTRFS: device fsid ec85aae9-18b3-4066-9749-8195b3bee6e8 devid 1 transid 14803447 /dev/sdr1
      Oct 13 10:55:52 Tower emhttpd: Samsung_SSD_860_EVO_1TB_[REDACTED] (sdr) 512 1953525168
      Oct 13 10:55:52 Tower emhttpd: import 30 cache device: (sdr) Samsung_SSD_860_EVO_1TB_[REDACTED]
      Oct 13 10:56:02 Tower kernel: BTRFS info (device sdr1): allowing degraded mounts
      Oct 13 10:56:02 Tower kernel: BTRFS info (device sdr1): disk space caching is enabled
      Oct 13 10:56:02 Tower kernel: BTRFS info (device sdr1): has skinny extents
      Oct 13 10:56:02 Tower kernel: BTRFS warning (device sdr1): devid 3 uuid 1d8f16d9-c1d4-4ba2-8a87-729545783443 is missing
      Oct 13 10:56:02 Tower kernel: BTRFS warning (device sdr1): devid 3 uuid 1d8f16d9-c1d4-4ba2-8a87-729545783443 is missing
      Oct 13 10:56:02 Tower kernel: BTRFS info (device sdr1): bdev (null) errs: wr 1, rd 2, flush 0, corrupt 0, gen 0
      Oct 13 10:56:02 Tower kernel: BTRFS info (device sdr1): bdev /dev/sdr1 errs: wr 0, rd 2, flush 0, corrupt 0, gen 0
      Oct 13 10:56:03 Tower kernel: BTRFS info (device sdr1): enabling ssd optimizations
      Oct 13 10:56:03 Tower kernel: BTRFS info (device sdr1): resizing devid 1
      Oct 13 10:56:03 Tower kernel: BTRFS info (device sdr1): new size for /dev/sdr1 is 1000204853248
      Oct 13 10:56:03 Tower kernel: BTRFS info (device sdr1): relocating block group 14118149029888 flags data
      Oct 13 10:56:03 Tower kernel: BTRFS info (device sdr1): found 1 extents
      Oct 13 10:56:03 Tower kernel: BTRFS info (device sdr1): found 1 extents
      Oct 13 10:56:03 Tower kernel: BTRFS info (device sdr1): relocating block group 14117008179200 flags data|raid1
      Oct 13 10:56:03 Tower kernel: BTRFS info (device sdr1): found 1 extents
      Oct 13 10:56:03 Tower kernel: BTRFS info (device sdr1): found 1 extents
      Oct 13 10:56:03 Tower kernel: BTRFS info (device sdr1): relocating block group 14112230211584 flags data|raid1
      etc...
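      (If anyone else is staring at the UI wondering whether this kind of relocation is still running, the command line is clearer - pool path assumed:)
      # Reports whether a balance/relocation is in progress and how far along it is.
      btrfs balance status /mnt/cache
      # Re-running this shows allocation changing as block groups are moved.
      btrfs filesystem usage /mnt/cache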
  11. Sounds good, thanks again for all your help with this. I'm glad it was actually something very simple, albeit a little tricky to find.
  12. I found a resolution to the issue I posted here and earlier in this thread: if DEBUG is set to "yes" in domain.cfg, the VMs UI will choke and appear to load forever. Changing this to "no" solved the issue. I'm still not really sure how it was ever set to "yes" in the first place, though... just thought I would share in case anyone else runs into this.
  13. I FIGURED IT OUT!!! For some reason in my domain.cfg I had DEBUG set to "yes" - I have no idea how this even gets set to "yes" since I don't see anything for that in the webUI. I manually changed this to "no" and it worked. Not sure why that debug flag would cause the WebUI to choke, but at least I figured out the issue. Maybe that's something easy to fix with the next update.
      SERVICE="enable"
      IMAGE_FILE="/mnt/user/system/libvirt/libvirt.img"
      IMAGE_SIZE="1"
      DEBUG="yes"
      DOMAINDIR="/mnt/user/domains/vm/"
      MEDIADIR="/mnt/user/domains/iso/"
      VIRTIOISO="/mnt/user/domains/iso/virtio-win-0.1.160-1.iso"
      BRNAME="virbr0"
      VMSTORAGEMODE="auto"
      DISKDIR="/mnt/"
      TIMEOUT="60"
      HOSTSHUTDOWN="shutdown"
      (Restored my flash backup because the clean install was going to be an even bigger headache...)
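      (If you would rather not edit the file by hand, a quick way to flip the flag is below. I'm assuming the stock location of domain.cfg on the flash drive; keep a backup copy and stop the VM service first.)
      # Back up the config, flip DEBUG to "no", then confirm the change took.
      cp /boot/config/domain.cfg /boot/config/domain.cfg.bak
      sed -i 's/^DEBUG="yes"/DEBUG="no"/' /boot/config/domain.cfg
      grep '^DEBUG=' /boot/config/domain.cfg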
  14. I was finally able to get it to work with the stock auto-generated .cfgs, with the exception of the disks cfg. I added that in so I could at least get to the VMs screen. Although something is wrong with my cache devices: it did not recreate my cache pool properly. I think there is a defect in the VMs portion of the webGUI code. Also, if I start fresh with just the key and super.dat, shouldn't it have assigned my cache drives properly? They are both populated as unassigned initially, which is a bit concerning.
  15. Yes, it's the same issue with all browsers across all devices I have, even in the Unraid GUI mode browser. When I removed the .cfgs, my cache drives were not included for some reason, so when I start the array the paths do not exist and I cannot even get to the point where I can start up the VM service. I thought that including just the configs would be a good "bare minimum" - I'm not sure what else I can safely strip out.