jslay

Everything posted by jslay

  1. To whoever can edit the code for this: please change the default to a reasonable interval until we can at least set it ourselves without having to edit a file whose changes are wiped out whenever any notification settings are updated. An "advanced view" would be nice, where we could define our own cron schedules instead of choosing from presets (see the sketch below).
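     For example, an advanced view could simply accept standard cron syntax. A minimal sketch of what I mean (the script path is a placeholder, not the actual file Unraid runs):
        # run the notification/update check every 6 hours instead of a fixed preset
        0 */6 * * * /usr/local/sbin/notification-check   # placeholder path, for illustration only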
  2. This is happening to me (sometimes it will just time out and I never get the tab to finish loading, leaving me with a blank table). This started after both of my cache disks died on me at the same time, wiping out my existing VMs and libvirt. I got new cache drives in, moved everything back onto the cache, and rebuilt new VMs from scratch (as I lost the img files from before). When it times out, I receive ERR_HTTP2_PROTOCOL_ERROR. I can refresh a few more times and eventually the tab will load and I can manage my VMs via the GUI again. I've made sure that everything is back on the SSD cache drives and nothing is on a spinning disk (including libvirt). It does feel like an issue with loading libvirt, as once I can eventually get the VM tab to load, it will refresh normally with no issue for some time, until the tab hasn't been accessed for a while.
     unraid-diagnostics-20220505-2312.zip
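     In case it helps narrow this down, this is roughly what I check from the CLI while the tab is hanging (the libvirt.img path below is the usual default location and is an assumption for other setups):
        # does libvirt itself answer promptly, or is the delay in the webgui layer?
        time virsh list --all
        # confirm the libvirt image really lives on the cache pool and not on a spun-down disk
        ls -lh /mnt/cache/system/libvirt/libvirt.img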
  3. Well, came back to it after letting it sit, and the cache drives are just gone now. Rebooted, still missing. Nothing under /dev for nvme. Nothing under BIOS. They ded.
  4. Welp, it's my turn for this one I guess. Looks like I lost the superblock on one of my cache drives in the cache pool (and the other one is showing signs of dying as well). I have been trying to recover some of the data (unsuccessfully), and am wondering if there are any other paths for me before formatting the pool and losing the data. I have been following the @JorgeB guide.
        Feb 15 16:13:19 unraid kernel: BTRFS info (device nvme0n1p1): turning on async discard
        Feb 15 16:13:19 unraid kernel: BTRFS info (device nvme0n1p1): using free space tree
        Feb 15 16:13:19 unraid kernel: BTRFS info (device nvme0n1p1): has skinny extents
        Feb 15 16:13:20 unraid kernel: BTRFS info (device nvme0n1p1): enabling ssd optimizations
        Feb 15 16:13:20 unraid kernel: BTRFS info (device nvme0n1p1): start tree-log replay
        Feb 15 16:13:20 unraid kernel: blk_update_request: critical medium error, dev nvme1n1, sector 2099264 op 0x1:(WRITE) flags 0x1800 phys_seg 4 prio class 0
        Feb 15 16:13:20 unraid kernel: blk_update_request: critical medium error, dev nvme1n1, sector 2099424 op 0x1:(WRITE) flags 0x1800 phys_seg 1 prio class 0
        Feb 15 16:13:20 unraid kernel: blk_update_request: critical medium error, dev nvme1n1, sector 2099488 op 0x1:(WRITE) flags 0x1800 phys_seg 2 prio class 0
        Feb 15 16:13:20 unraid kernel: blk_update_request: critical medium error, dev nvme1n1, sector 2099616 op 0x1:(WRITE) flags 0x1800 phys_seg 1 prio class 0
        Feb 15 16:13:20 unraid kernel: blk_update_request: critical medium error, dev nvme1n1, sector 2099680 op 0x1:(WRITE) flags 0x1800 phys_seg 1 prio class 0
        Feb 15 16:13:20 unraid kernel: blk_update_request: critical medium error, dev nvme1n1, sector 2099776 op 0x1:(WRITE) flags 0x1800 phys_seg 3 prio class 0
        Feb 15 16:13:20 unraid kernel: blk_update_request: critical medium error, dev nvme1n1, sector 2099904 op 0x1:(WRITE) flags 0x1800 phys_seg 1 prio class 0
        Feb 15 16:13:20 unraid kernel: blk_update_request: critical medium error, dev nvme1n1, sector 2100000 op 0x1:(WRITE) flags 0x1800 phys_seg 2 prio class 0
        Feb 15 16:13:20 unraid kernel: blk_update_request: critical medium error, dev nvme1n1, sector 2100096 op 0x1:(WRITE) flags 0x1800 phys_seg 1 prio class 0
        Feb 15 16:13:20 unraid kernel: blk_update_request: critical medium error, dev nvme1n1, sector 2100160 op 0x1:(WRITE) flags 0x1800 phys_seg 1 prio class 0
        Feb 15 16:13:20 unraid kernel: BTRFS error (device nvme0n1p1): bdev /dev/nvme1n1p1 errs: wr 0, rd 0, flush 1, corrupt 0, gen 0
        Feb 15 16:13:20 unraid kernel: BTRFS warning (device nvme0n1p1): chunk 507969011712 missing 1 devices, max tolerance is 0 for writable mount
        Feb 15 16:13:20 unraid kernel: BTRFS: error (device nvme0n1p1) in write_all_supers:3845: errno=-5 IO failure (errors while submitting device barriers.)
        Feb 15 16:13:20 unraid kernel: BTRFS warning (device nvme0n1p1): Skipping commit of aborted transaction.
        Feb 15 16:13:20 unraid kernel: BTRFS: error (device nvme0n1p1) in cleanup_transaction:1942: errno=-5 IO failure
        Feb 15 16:13:20 unraid kernel: BTRFS: error (device nvme0n1p1) in btrfs_replay_log:2279: errno=-5 IO failure (Failed to recover log tree)
        Feb 15 16:13:20 unraid root: mount: /mnt/cache: can't read superblock on /dev/nvme1n1p1.
     The only way I can get this to mount is:
        mount -o ro,notreelog,nologreplay /dev/nvme0n1p1 /x
     But subsequently, trying to copy data out of it results in all sorts of I/O errors and unreadable/incomplete files. btrfs restore is failing as well:
        Restoring /mnt/user/Backups/cache_backup/domains/vm1/vdisk1.img
        offset is 1114112 offset is 1138688 offset is 16384 offset is 81920 offset is 176128 offset is 20480 offset is 4096 offset is 3416064 offset is 3465216 offset is 3538944
        offset is 3854336 offset is 3928064 offset is 163840 offset is 6578176 offset is 12173312 offset is 12193792 offset is 4096 offset is 12288 offset is 1114112 offset is 1228800
        offset is 1478656 offset is 1585152 offset is 1703936 offset is 1769472 offset is 1916928 offset is 1982464 offset is 2027520 offset is 2056192 offset is 2076672 offset is 2121728
        offset is 2306048 offset is 2351104 offset is 2433024 offset is 2441216 offset is 2482176 offset is 2498560 offset is 2666496 offset is 2707456 offset is 2727936 offset is 2748416
        offset is 2863104 offset is 3002368 offset is 3014656 offset is 3104768 offset is 3207168 offset is 3272704 offset is 3297280 offset is 3469312 offset is 3493888 offset is 3563520
        offset is 3629056 offset is 3801088 offset is 3895296 offset is 3928064 offset is 3948544 offset is 4005888 offset is 4022272 offset is 4149248 offset is 4202496 offset is 4227072
        offset is 4243456 offset is 4321280 offset is 4345856 offset is 4452352 offset is 4472832 offset is 4575232 offset is 4603904 offset is 4636672 offset is 4648960 offset is 4915200
        offset is 4988928 offset is 5320704 offset is 5386240 offset is 6975488 offset is 7143424 offset is 8925184 offset is 9699328 offset is 9854976 offset is 9904128
        We seem to be looping a lot on /mnt/user/Backups/cache_backup/domains/vm1/vdisk1.img, do you want to keep going on ? (y/N/a):
        Restoring /mnt/user/Backups/cache_backup/appdata/server/some_file1.txt
        ERROR: exhausted mirrors trying to read (2 > 1)
        Error copying data for /mnt/user/Backups/cache_backup/appdata/server/some_file1.txt
        Restoring /mnt/user/Backups/cache_backup/appdata/server/some_file2.txt
        ERROR: exhausted mirrors trying to read (2 > 1)
        Error copying data for /mnt/user/Backups/cache_backup/appdata/server/some_file2.txt
        Restoring /mnt/user/Backups/cache_backup/appdata/server/some_file3.txt
        ERROR: exhausted mirrors trying to read (2 > 1)
        Error copying data for /mnt/user/Backups/cache_backup/appdata/server/some_file3.txt
     unraid-diagnostics-20220215-1616.zip
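     For reference, these are the other read-only recovery attempts I'm working through before giving up on the pool, using standard btrfs-progs tooling (the rescue destination path is just an example, and whether any of this helps depends on how much of nvme1n1 is still readable):
        # try mounting from a backup tree root, read-only
        mount -o ro,usebackuproot /dev/nvme0n1p1 /x
        # pull whatever btrfs restore can still reach, ignoring errors, keeping metadata and xattrs
        btrfs restore -v -i -x -m /dev/nvme0n1p1 /mnt/disk1/cache_rescue/
        # look for an intact superblock copy on the failing device
        btrfs rescue super-recover -v /dev/nvme1n1p1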
  5. Overview: Support for the Nexus OSS Docker image in the jslay repo.
     Application: Nexus OSS - https://www.sonatype.com/products/repository-oss
     Docker Hub: https://hub.docker.com/r/sonatype/nexus3/
     GitHub and Documentation: https://github.com/jslay88/unraid_apps/blob/master/templates/README/NexusOSS.md
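     If anyone wants to try the container before the template lands, the upstream image runs roughly like this (port and data volume are taken from the sonatype/nexus3 Docker Hub page; the host appdata path is just an example):
        docker run -d --name nexus \
          -p 8081:8081 \
          -v /mnt/user/appdata/nexus:/nexus-data \
          sonatype/nexus3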
  6. Facepalm. Thanks, sure enough. Is there any safe way to move this other than:
        rsync -avX /mnt/disk3/disk4/ /mnt/disk3
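     For anyone following along, the trailing slash is what bit me. Roughly (paths illustrative):
        # with a trailing slash: copies the CONTENTS of disk4 into disk3
        rsync -avX /mnt/disk4/ /mnt/disk3
        # without it: copies the disk4 directory itself, ending up as /mnt/disk3/disk4/...
        rsync -avX /mnt/disk4 /mnt/disk3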
  7. So late last night, as I was wrapping up to go to bed, I had a parity disk failure. This is a single-parity setup (stock wasn't there at the time of the original purchase to get the 2nd parity; I figured I'd wait a while and get a different batch to offset the parities anyway). Fortunately, I still have enough headroom on the array to salvage one of the data disks and turn it into a parity disk.
     I followed the wiki on replacing multiple smaller disks with a single larger one, as I was essentially doing a similar pattern, although the disk sizes aren't changing and the rsync target disk already exists and will remain. I rsync'd disk4 over to disk3, to turn disk4 into the new parity disk:
        rsync -avX /mnt/disk4 /mnt/disk3
     At the end of that, I ran it again just to ensure nothing was left behind. I checked the Main tab and noticed disk3's used space had effectively grown by the same amount of data that was on disk4. So, being late at night and trusting rsync, I went forward with stopping the array, building a new config, and moving disk4 over to parity, then started the array again to get the parity build rolling for the next day. This ended up putting me to bed around 4:30 AM.
     Come this morning, I get up, get work started, and start checking the status of parity. I then start to browse back over my data and notice that all the files copied over to disk3 via rsync are not showing up or accessible, yet disk3 is still showing that used disk space. The original files that were on disk3 are still there and accessible; just the rsync'd data from disk4 is missing. Is there something I did wrong or out of order here? Am I screwed, with phantom used space? Is there some way to have Unraid pick up the copied data, assuming it did copy successfully, since rsync showed that it did?
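     For reference, this is roughly how I'd sanity-check the copy next time before doing the New Config step (a dry-run with checksums should report nothing left to transfer if the destination really matches, and du makes it obvious where the data actually landed):
        rsync -avXc --dry-run /mnt/disk4/ /mnt/disk3/
        du -sh /mnt/disk3/*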
  8. There are a few things I can think of that would be quite beneficial to have API integrations for in unRAID, covering both Docker and VMs.
     Docker features:
       • Some sort of docker-compose.yml ingestion would be nice.
       • Start/stop/restart/edit/status/check for update/update containers.
     VM features:
       • Create/delete VMs and disks.
       • Start/stop/restart/status/edit VMs.
       • ARP-scan/ping to get interface addresses for VMs.
     This would allow us to automate VM creation and the subsequent automation of the VM configuration post-creation. Someone could easily tie a set of curl requests into an Ansible module for unRAID, which would increase exposure for unRAID as well (see the sketch below).
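     Just to illustrate the kind of calls I have in mind (these endpoints are entirely hypothetical; nothing like this exists in unRAID today):
        # hypothetical: start a VM and trigger an update check for a container over HTTP
        curl -X POST -H "Authorization: Bearer $TOKEN" https://unraid.local/api/v1/vms/win10/start
        curl -X POST -H "Authorization: Bearer $TOKEN" https://unraid.local/api/v1/docker/containers/plex/check-update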