Posts posted by fkick

  1. Hi all,

     

My Unraid system has been quite stable over the last few years, but this past week I've been seeing what appear to be random hard crashes where the system completely locks up, with no GUI or terminal access available, even locally from the system itself.

     

    I haven't made any recent changes to dockers or VMs, and the issue has occurred both under 6.10.3 and 6.11.1. 

     

Attached are my diags. Since the system hard-crashes completely, I've had to perform unsafe shutdowns/reboots each time, and it looks like I'm losing the log files from before the crash.

     

    The latest crash was today at about 7:40 am CST.
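In the meantime, as a stopgap to keep some logs across these crashes, I'm thinking of periodically copying the syslog to the flash drive (just a sketch, assuming the flash is mounted at /boot as usual; /boot/logs is a folder I'd create for this). I believe recent releases also have a built-in syslog-mirroring option under Settings.

    # copy the live syslog to the flash every few minutes so something
    # survives a hard crash; run from cron or the User Scripts plugin
    mkdir -p /boot/logs
    cp /var/log/syslog /boot/logs/syslog-$(date +%Y%m%d-%H%M).txt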

     

    Thank you

    tower-diagnostics-20221019-1048.zip

  2. Hi all,

     

I've been using Unraid for the last few years as a media NAS at home and wanted to spin up a VM to test some things.

     

My processor is older, but it does have VT-x support. It's an Intel i3-4330 @ 3.5 GHz and the motherboard is an ASRock Z87M Extreme4.

     

In Settings -> VM Manager, my Libvirt storage location is set to /mnt/user/system/libvirt/libvirt.img, but when I "Enable VMs" the status always reads "Stopped". Visiting the VM tab, I see the error "Libvirt Service Failed To Start". Checking with MC, it doesn't look like the libvirt.img file is being created.

     

My previous VM experience is limited to VMware and Parallels, so I'd appreciate any advice. Diags attached.
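In case it helps with diagnosis, here's a quick check that can be run from the Unraid terminal to confirm the CPU is exposing its virtualization extensions (a sketch; vmx is the Intel flag, svm the AMD equivalent, and a count of 0 usually means VT-x is disabled in the BIOS). The log path is the stock libvirt location as far as I know.

    # count CPU threads advertising hardware virtualization support
    grep -Ec '(vmx|svm)' /proc/cpuinfo

    # libvirt's own log usually says why the service failed to start
    cat /var/log/libvirt/libvirtd.log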

     

    Thank you!

    tower-diagnostics-20211118-1741.zip

All right, so I was able to get my system back up and running on 6.8 today... here's what I did.

     

1. Created a new USB key and enabled UEFI boot on it (and turned on UEFI boot in my BIOS).

2. Following the instructions at the post below, I added root=sda to my syslinux/syslinux.cfg file (see the example after these steps).

     

3. Attempted a boot without moving my old Config folder over.

4. Once that boot was successful, moved the old Config folder over to the new drive and rebooted.
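For reference, the relevant section of /boot/syslinux/syslinux.cfg ends up looking roughly like this (a sketch based on the stock Unraid boot entry; the label name and other append options may differ on your key):

    label Unraid OS
      menu default
      kernel /bzimage
      # root=sda added per step 2; everything else left stock
      append initrd=/bzroot root=sda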

     

Prior to Unraid 6.8 I was not using UEFI boot, but for some reason it appears to be required with the new kernel.

     

    Hope that helps!

     

  4. 1 hour ago, itimpi said:

The problem is that the issue looks to be hardware related rather than software, so there is no guarantee that a future release will fix your problem.

    I don't believe this is a hardware failure problem.

     

A clean flash drive with 6.8 also doesn't work for me. However, I did do some early RC testing with RC1 and RC2 and was able to boot. This morning I tracked down my flash backup with 6.8-RC2 on it, and it boots fine; if I try a 6.8-RC4 flash drive (I don't have an RC3 copy), I get the same kernel panic issues as above. So something that changed between RC2 and RC4 seems to be the issue.

It looks like I was mistaken about "Save new hashing results to flash" being the cause of my issue. I woke this morning to some drive-overheating warnings as this plugin was again walking through every file on my array, even though all my builds were showing up to date (and no media had been added to the disk it was currently walking through).

     

I did still have "Automatically protect new and modified files" turned on; however, if this is supposed to only generate and check hashes for new content, does it need to scan every item in the array?

After doing some more testing, it looks like the setting "Save new hashing results to flash" is what is causing the system to rescan every file every day, rather than just updating the previously exported hashes. Since disabling this setting, I'm no longer getting hours of reads on my drives. Looks like I'll just need to export my hashes manually if I want them on the flash key (a rough sketch of one way to do that is below).
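In case it helps anyone else, here's a rough way to generate a hash list for one disk by hand and keep it on the flash (just a sketch, not the plugin's own export format; the /boot/config/hash-exports path is only an example):

    # hash every file on disk1 with BLAKE2 and save the list to the flash
    mkdir -p /boot/config/hash-exports
    find /mnt/disk1 -type f -exec b2sum {} + > /boot/config/hash-exports/disk1.b2.hash

    # later, verify the files on the disk against that saved list
    b2sum -c /boot/config/hash-exports/disk1.b2.hash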

     

    Thanks!

  7. 1 hour ago, Benson said:

You may monitor the "bunker" process to learn more about what FIP is doing, as well as other processes such as "md5sum", "sha1sum", "b2sum", etc.

    Thanks,

     

Looking at my running processes, I'm seeing bunker scanning files and inotifywait running with the following (a simplified re-creation of the watch is included after the listing):

    root     32155 21440  0 11:30 ?        00:00:00 /bin/bash /usr/local/emhttp/plugins/dynamix.file.integrity/scripts/bunker -Aq1 -b2 -f /boot/config/plugins/dynamix.file.integrity/saved/disks.export.20180814.new.hash /mnt/disk1/TV/File1.m4v
    root     32173 32155  3 11:30 ?        00:00:01 /usr/bin/b2sum /mnt/disk1/TV/File1.m4v
    root     32392 21440  0 11:30 ?        00:00:00 /bin/bash /usr/local/emhttp/plugins/dynamix.file.integrity/scripts/bunker -Aq1 -b2 -f /boot/config/plugins/dynamix.file.integrity/saved/disks.export.20180814.new.hash /mnt/disk1/TV/File2.m4v
    root     32723 21440  0 11:30 ?        00:00:00 /bin/bash /usr/local/emhttp/plugins/dynamix.file.integrity/scripts/bunker -Aq1 -b2 -f /boot/config/plugins/dynamix.file.integrity/saved/disks.export.20180814.new.hash /mnt/disk1/TV/File2.m4v
    root     32744 32723  3 11:30 ?        00:00:01 /usr/bin/b2sum /mnt/disk1/TV/File3.m4v
    root     19889     2  0 11:31 ?        00:00:00 [kworker/u8:1]
    root     21439     1  0 Aug13 ?        00:00:05 inotifywait -dsrqo /var/run/hash.pipe -e close_write --exclude ^/mnt/disk[0-9]+/(.*docker\.img$|.*\.AppleDB/|.*\.tmp$|.*\.nfo$|.*\.temp$|.*\.itdb$|.*\.DS_Store$|\.Trash\-99/|docker/|.*\.AppleDB/|.*\.DS_Store$) --format %w%f /mnt/disk1 /mnt/disk2 /mnt/disk3 /mnt/disk4 /mnt/disk5 /mnt/disk6 /mnt/disk7 /mnt/disk8
    root     21440     1  0 Aug13 ?        00:01:25 /bin/bash /usr/local/emhttp/plugins/dynamix.file.integrity/scripts/watcher A -b2 (.*docker\.img$|.*\.AppleDB/|.*\.tmp$|.*\.nfo$|.*\.temp$|.*\.itdb$|.*\.DS_Store$|\.Trash\-99/|docker/|.*\.AppleDB/|.*\.DS_Store$) /mnt/disk1 /mnt/disk2 /mnt/disk3 /mnt/disk4 /mnt/disk5 /mnt/disk6 /mnt/disk7 /mnt/disk8
    root     22768 21440  0 11:32 ?        00:00:00 /bin/bash /usr/local/emhttp/plugins/dynamix.file.integrity/scripts/bunker -Aq1 -b2 -f /boot/config/plugins/dynamix.file.integrity/saved/disks.export.20180814.new.hash /mnt/disk1/TV/File4.m4v
    root     22789 22768  3 11:32 ?        00:00:04 /usr/bin/b2sum /mnt/disk1/TV/File4.m4v
    root     23381     2  0 11:27 ?        00:00:00 [kworker/1:2]
    root     23931     2  0 11:27 ?        00:00:00 [kworker/3:2]
    root     24814 15807  0 11:19 ?        00:00:00 [afpd] 
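For anyone following along, a stripped-down version of that watch can be run interactively to see exactly which file events trigger the hashing (a sketch; one disk only, no exclusion patterns):

    # print every file that finishes being written under /mnt/disk1;
    # these close_write events are what kick off the bunker/b2sum runs above
    inotifywait -m -r -e close_write --format '%w%f' /mnt/disk1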

And if I look at the Integrity tab, it is showing Drive 1 as up-to-date even though bunker appears to be parsing it (with drives 6 and 7 not up to date, even though they were about 30 minutes ago and no media has been added or modified). See attached.

     

    Thanks for your help!

     

Integrity.png

  8. 10 hours ago, Benson said:

     

    Just the new/modified file hash for all array disks.

To fix the disks showing "O", select them and run a build.

Thanks, Benson. It seems every day I'm getting the "O" on random drives in the array even when no new files have been added, and since enabling the "auto" option I'm getting hours of background reads on the drives.

     

Is there a way to see in the GUI what the plugin is doing when it runs automatically? I can see manually run operations, but I can't find anything in the log about an automatic run.

     

    Thanks

  9. Hi

    When "Automatically protect new and modified files" is turned on, is the entire drive that the new file is added to scanned and recalculated, or just the new/modified file? Also, I'm noticing that the "build up-to-date" on a few of my drives is showing "O" even with the auto protect option turned on.

     

    Thanks!
