couzin2000

Members
  • Posts: 37
  • Joined: Converted
  • Gender: Male
  • Location: Canada

couzin2000's Achievements

Noob (1/14)

Reputation: 2

Community Answers

  1. My ZFS Master plugin is stuck on a "pending" update. There isn't any way to click its icon to open a contextual menu and stop it, CA won't let me force the update, and I can't completely remove the plugin from the GUI either. I want to know what can be done to either force the update or remove the plugin and reinstall it. I haven't really used the plugin yet, so I don't have anything set up beyond running some drives as ZFS; removing it or forcing an update shouldn't cost me anything, I think. Any command-line controls I should be aware of? (A command sketch follows this list.)
  2. I'm currently running into the exact same issue. I haven't tried a reboot yet, but I'm wondering how you got it back up and running. Right now I'm on 6.12.8, trying to update from 2024.03.03.66 to 2024.03.31.76. Any ideas?
  3. I know - but Preferences.xml is not part of the container, if I'm not mistaken. It contains the GUIDs and the real settings for your specific server, and mine. Still not sure how to recover this file.
  4. Check in /config/Plex Media Server. My Preferences.xml is present, but it seems to be empty. Not sure how that could have happened; a permissions error? I'm not sure how to regenerate the file, as I do not have a backup of it (and this is the last time I won't have one). (A check sketch follows this list.)
  5. I had a power outage last night, and power was only restored this morning. I am using a UPS, so the system seems to have shut down gracefully. However, after rebooting the Unraid server, I am able to run all my Docker containers except binhex-plex. I've reinstalled the container and image, but I still can't reach the PMS container. I've run checks on my network and all the proper ports are still open; nothing seems out of place, yet I cannot access PMS. Anyone have an idea what this could be? (A troubleshooting sketch follows this list.)
  6. I have a question about Host Path 2 on the template: this is presumably where I store the Plex database files, right? I don't mean the media itself, but all of the saved state, the data that tracks where I am in a movie and which shows have been played and which haven't... is that correct? If so, my host path is set to /mnt/user. I'm guessing this goes to the array, or does it go onto the cache drive? (A sketch for checking this follows the list.)
  7. Confirmed: by setting the ZFS Master "Refresh interval" to "No Refresh", my ZFS-formatted disks stopped spinning up all the time. Thank you!
  8. The solution in this post (resetting all data in the browser) did not work at all for me. I'm also hoping we can all turn the 2-column layout back to 3. Somehow the Features video on the Unraid website shows it going from 3 to 2 to 3 columns, yet I can't make that happen. #Bummed
  9. I just had a chance to skim the surface of your post, but it looks like it would work for sure. I'll test this later today and report back. Thank you!!
  10. The only disadvantage I can see is not being able to boot after a critical failure, and therefore having to restore the backup of the boot device without ever seeing what was at fault in the log. This is why I'd rather have the log be removable and readable elsewhere.
  11. I kept searching for a few days but I can't seem to find the right topic. I'm currently on Unraid 6.11.5. It's running smoothly, but I still want to monitor the system log from time to time; however, very little is recorded in my syslog, since it lives only in RAM. I'd like to add a separate USB thumb drive and point the syslog at it. My main issue is that I can't create a pool device out of that thumb drive and then hot-swap it whenever I need to read its contents; because it would be a pool, I keep thinking it would not be readable from my main Win10 workstation. So how do I send the syslog to the USB key so that I can remove it whenever I want and read it directly on a different PC? Configuration how-to steps would be appreciated (a sketch follows this list). Thank you!
  12. OMG... I've been trying to fix my Unraid server for the last 3 months, and I thought I'd fixed it but all Dockers disappeared... and with this simple link, you solved all this for me! Thank you so much!!!
  13. Well, I tried, but somehow memtest won't load. I'm starting to get very afraid of this: I lost my Docker vDisk last night, there's no way to find it again, and it doesn't seem to exist anymore. So yes, corruption is definitely present, but I can't confirm it with memtest right now, and I may have no choice but to keep generating more of it for the moment. So in the meantime, how do I wipe this clean? Can't I just remove the btrfs disk, reformat it to xfs, and start fresh?
  14. I am encountering data corruption on a single-disk btrfs pool. I moved my shares around so that Mover could place all of them on the other pool disk. Now I am trying to re-enable my Docker containers (Plex, Crafty-4, etc.) and Unraid finds NONE; the only message I get is "No Docker containers defined". When moving data with Mover, do I have to change the Docker data-root location? Is it preferable to have it as a folder or a vDisk? I tried changing this setting to point at the vDisk where my files currently reside... yet nothing. Please help! (A sketch for checking the data-root follows this list.)
  15. So I read this thread, because I was trying to move an UNUSED hass VM. When I set Mover up to move it from my "Cache" pool to my array, this is what I got:
      Oct 21 00:50:58 sebunraid1 root: mover: started
      Oct 21 00:50:58 sebunraid1 move: file: /mnt/cache/downloads/vmstorage/haos_ova-10.0.qcow2
      Oct 21 00:51:00 sebunraid1 kernel: BTRFS warning (device sdh1): csum failed root 5 ino 789 off 3631906816 csum 0x7983a069 expected csum 0x7983a061 mirror 1
      Oct 21 00:51:00 sebunraid1 kernel: BTRFS error (device sdh1): bdev /dev/sdh1 errs: wr 0, rd 0, flush 0, corrupt 479, gen 0
      Oct 21 00:51:00 sebunraid1 kernel: BTRFS warning (device sdh1): csum failed root 5 ino 789 off 3631906816 csum 0x7983a069 expected csum 0x7983a061 mirror 1
      Oct 21 00:51:00 sebunraid1 kernel: BTRFS error (device sdh1): bdev /dev/sdh1 errs: wr 0, rd 0, flush 0, corrupt 480, gen 0
      Oct 21 00:51:00 sebunraid1 kernel: BTRFS warning (device sdh1): csum failed root 5 ino 789 off 3631906816 csum 0x7983a069 expected csum 0x7983a061 mirror 1
      Oct 21 00:51:00 sebunraid1 kernel: BTRFS error (device sdh1): bdev /dev/sdh1 errs: wr 0, rd 0, flush 0, corrupt 481, gen 0
      Oct 21 00:51:00 sebunraid1 shfs: copy_file: /mnt/cache/downloads/vmstorage/haos_ova-10.0.qcow2 /mnt/disk7/downloads/vmstorage/haos_ova-10.0.qcow2.partial (5) Input/output error
      Oct 21 00:51:00 sebunraid1 kernel: BTRFS warning (device sdh1): csum failed root 5 ino 789 off 3631906816 csum 0x7983a069 expected csum 0x7983a061 mirror 1
      Oct 21 00:51:00 sebunraid1 kernel: BTRFS error (device sdh1): bdev /dev/sdh1 errs: wr 0, rd 0, flush 0, corrupt 482, gen 0
      Oct 21 00:51:01 sebunraid1 move: move_object: /mnt/cache/downloads/vmstorage/haos_ova-10.0.qcow2 Input/output error
      Oct 21 00:51:01 sebunraid1 kernel: XFS (sdi1): Metadata corruption detected at xfs_dinode_verify+0xa0/0x732 [xfs], inode 0x41ba27f dinode
      Oct 21 00:51:01 sebunraid1 kernel: XFS (sdi1): Unmount and run xfs_repair
      Oct 21 00:51:01 sebunraid1 kernel: XFS (sdi1): First 128 bytes of corrupted metadata buffer:
      Oct 21 00:51:01 sebunraid1 kernel: 00000000: 49 4e 41 ff 03 01 00 00 00 00 00 63 00 00 00 64 INA........c...d
      Oct 21 00:51:01 sebunraid1 kernel: 00000010: 00 00 00 03 00 00 00 00 00 00 00 00 00 00 00 00 ................
      Oct 21 00:51:01 sebunraid1 kernel: 00000020: 35 27 21 3c 8b c9 b9 db 35 27 21 3c 8b c9 b9 db 5'!<....5'!<....
      Oct 21 00:51:01 sebunraid1 kernel: 00000030: 35 27 21 3c 8b c9 b9 db 00 00 00 00 00 00 00 16 5'!<............
      Oct 21 00:51:01 sebunraid1 kernel: 00000040: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
      Oct 21 00:51:01 sebunraid1 kernel: 00000050: 00 00 00 02 00 00 00 00 00 00 00 00 40 f5 24 e0 ............@.$.
      Oct 21 00:51:01 sebunraid1 kernel: 00000060: ff ff ff ff 90 cc 1a ae 00 00 00 00 00 00 00 04 ................
      Oct 21 00:51:01 sebunraid1 kernel: 00000070: 00 00 00 71 00 01 44 9e 00 00 00 00 00 00 00 08 ...q..D.........
      [the same XFS metadata-corruption block is logged a second time here]
      Oct 21 00:51:01 sebunraid1 root: find: '/mnt/docker/appdata/binhex-plex/Plex Media Server/Media/localhost/9/bb2e252632253f6ce86cf38df9c8762e9c23acd.bundle': Structure needs cleaning
      [the same XFS metadata-corruption block is logged a third time here]
      Oct 21 00:51:01 sebunraid1 move: error: move, 392: Structure needs cleaning (117): lstat: /mnt/docker/appdata/binhex-plex/Plex Media Server/Media/localhost/9/bb2e252632253f6ce86cf38df9c8762e9c23acd.bundle
      Oct 21 00:51:01 sebunraid1 root: mover: finished
      Why? How can I fix this BTRFS error? If I remove the drive altogether (which is what I'm trying to do), will this mess up everything? (A repair sketch follows this list.)
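For the stuck ZFS Master plugin in answer 1, a minimal command-line sketch. The exact .plg filename is an assumption, so list the flash drive's plugin folder first to confirm it.

      # Confirm the plugin's real filename (assumed here to be zfs.master.plg)
      ls /boot/config/plugins/*.plg
      # Ask Unraid's plugin manager to uninstall it
      plugin remove zfs.master.plg
      # If the plugin manager itself is wedged, deleting the .plg from the flash
      # drive stops it from being installed again on the next reboot
      rm /boot/config/plugins/zfs.master.plg

After a clean reboot the plugin can then be reinstalled from Community Applications.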
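For the empty Preferences.xml in answers 3 and 4, a quick check sketch. The appdata path shown is only a guess at where this user's /config mapping lives; adjust it to the actual Host Path.

      # Stop the container first so Plex isn't holding the file open
      docker stop binhex-plex
      # Check size, owner and permissions of the file (path is an assumption)
      ls -l "/mnt/user/appdata/binhex-plex/Plex Media Server/Preferences.xml"
      # If it really is zero bytes, move it aside; Plex writes a fresh one on the
      # next start, but the server gets a new identity and must be claimed again
      mv "/mnt/user/appdata/binhex-plex/Plex Media Server/Preferences.xml"{,.bad}
      docker start binhex-plex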
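For the unreachable Plex in answer 5, a short sketch for checking whether the container is actually up and listening. 32400 is Plex's default port; the container name comes from this user's setup.

      docker ps -a --filter name=binhex-plex   # is the container running or exited?
      docker logs --tail 50 binhex-plex        # last startup messages / errors
      curl -I http://<server-ip>:32400/web     # does PMS answer on the LAN at all?

If the container is running and curl answers locally, the problem is on the network/remote-access side rather than in the container itself.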
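For the Host Path 2 question in answer 6, a sketch that shows which host path backs /config and where those files physically sit. The appdata folder names follow Unraid's usual layout and are assumptions.

      docker inspect binhex-plex --format '{{json .Mounts}}'   # host path mapped to /config
      # /mnt/user is the merged view; these show which physical device holds the files
      ls -d /mnt/cache/appdata/binhex-plex 2>/dev/null
      ls -d /mnt/disk*/appdata/binhex-plex 2>/dev/null

Whether data under /mnt/user lands on the array or on the cache depends on that share's cache setting, so checking the physical paths settles it.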
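For the removable syslog drive in answer 11, a minimal sketch, assuming the stick is mounted (for example by the Unassigned Devices plugin) at /mnt/disks/SYSLOG; /dev/sdX1 is a placeholder for the actual device.

      # FAT32 keeps the stick readable on a Win10 machine
      mkfs.vfat -n SYSLOG /dev/sdX1
      # Copy the current in-RAM log off on demand
      cp /var/log/syslog "/mnt/disks/SYSLOG/syslog-$(date +%F).txt"

No pool is needed for this; alternatively, Unraid's built-in Settings -> Syslog Server can mirror the syslog to the flash drive or write it to a chosen share on a schedule of its own.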
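For the "No Docker containers defined" problem in answer 14, a quick sketch to compare where Docker is told to keep its image or folder with where the files actually ended up after Mover. The system-share path is an assumption about this setup.

      cat /boot/config/docker.cfg                # Unraid's saved Docker settings (vDisk/folder path)
      ls -lh /mnt/user/system/docker/            # common default location of docker.img
      ls -lh /mnt/*/system/docker/ 2>/dev/null   # which physical device holds it now

The container definitions themselves are just templates on the flash drive, so once the data-root points at a valid image or folder, Apps -> Previous Apps can re-add the containers with their old settings.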
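For the BTRFS and XFS errors in answer 15, a hedged repair sketch. The pool mount and disk come from the log above; the md device number is an assumption, so confirm it before running anything destructive.

      # See the error counters btrfs has recorded for the pool
      btrfs device stats /mnt/cache
      # Re-verify all checksums; on a single-device pool this detects, but cannot
      # repair, corrupt data because there is no redundant copy
      btrfs scrub start -B /mnt/cache
      # The kernel also asked for xfs_repair on the array disk. On Unraid, run it
      # in maintenance mode against the disk's md device (e.g. /dev/md7 for disk7,
      # an assumption) so parity stays in sync; start with a dry run:
      xfs_repair -n /dev/md7
      xfs_repair /dev/md7

Removing the pool device afterwards won't touch the array, but any file on it that fails checksum verification will refuse to copy off cleanly, exactly as the mover log above shows.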