Fuggin

Members
  • Posts

    273
  • Joined

  • Last visited

Posts posted by Fuggin

  1. My own fault, but I forgot to stop unbalance when shutting down the server. Now I have to wait for the entire rsync process to finish moving a particular folder of shows before the drives will unmount. I thought stopping unbalance would halt after the current file transfer finished, but it doesn't; it only stops once the whole folder has been moved.

     

    I am assuming this is normal plugin behavior?

  2. Based on what I have seen on Discord alone, there has been a rash of USB drives dying prematurely. A few users have suggested letting the Unraid OS load from a cache drive or a dedicated OS SSD, removing the read/write load from the USB drive and accessing it only to verify the license at boot. Perhaps add a randomly timed license check to verify that no one has moved it to another system.

  3. FYI: changing the CPU Scaling Governor from Performance to On Demand caused kernel panics. At first I thought it was a hardware issue, but I swapped the new hardware for older, known-working hardware and was still getting kernel panics within a day of resetting the server. I changed it back to Performance and it hasn't crashed since.

     

    Using a Supermicro mobo with E5-2600 v2 series CPUs, running 6.10-rc2.
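For anyone wanting to check or reproduce this, the governor is exposed through the standard Linux cpufreq sysfs interface. Here is a minimal Python sketch; the sysfs paths are the standard cpufreq entries, the helper names are my own, and writing requires root:

```python
from pathlib import Path

def governor_path(cpu: int = 0) -> str:
    # Standard Linux cpufreq sysfs entry for a given CPU core.
    return f"/sys/devices/system/cpu/cpu{cpu}/cpufreq/scaling_governor"

def read_governor(cpu: int = 0) -> str:
    # Returns the active governor, e.g. "performance" or "ondemand".
    return Path(governor_path(cpu)).read_text().strip()

def set_governor(gov: str, cpu: int = 0) -> None:
    # Requires root; takes effect immediately, no reboot needed.
    Path(governor_path(cpu)).write_text(gov)
```

Note the change is per-core, so reverting to Performance means writing "performance" to every core's scaling_governor entry, which is presumably what the Tips and Tweaks plugin does behind the scenes.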

  4. I wanted to shrink my array and preserve my array/cache pools (I double-checked that I had both selected). I removed the drives I wanted gone, booted into Unraid, and only my cache pool was preserved. I do have a backup of my disk assignments. If I re-assign the disks as they were originally assigned, how do I know the data will still be there?

  5. 36 minutes ago, doron said:

    Just to clarify, this appears to be unrelated to the plugin, and applies to both SATA and SAS drives. Those drives (mainly Seagates) are spun up soon after being spun down. Seems to be kernel related, though not sure.

     

    Ok...most of my drives are HGST and it still happens, so it does appear to be kernel related.

  6. On 12/7/2021 at 11:05 PM, Fuggin said:

    I keep getting connection fail in the IPMI IP address field.

     

    I can log in through the browser no problem, but it won't connect in Unraid. Please advise.

    Got it working, but I can't see any of the sensors. Using a Supermicro X9DRL-3F/iF board. I have the Supermicro X9 board selected but nothing shows up.

  7. 1 minute ago, trurl said:

    Are you trying to write to user shares or directly to the disks? If user shares, which ones?

    Just using unbalance to move files around...it keeps saying there isn't enough space even though I have 50TB of free space available.

  8. As title says, for some reason I can't write files to the following disks: 9, 14, 16, 17, 18, 19, 21, 22, 23

     

    Disks 10 and 11 are on the excluded list...gonna pull them soon.

     

    I have moved the disks to other slots in the server, checked all connections, checked global shares, etc. DiskSpeed benchmarks the drives fine, but I still can't write to them. Looking for ideas and where else to look.

     

     

    tower-diagnostics-20211117-1901.zip

  9. Yeah then I am really confused...I know I didn't assign a data drive to a parity slot...I've done this before...it just doesn't make sense what happened.

     

    All my drives showed up...I assigned my new parity drives first because their serial numbers were unique and similar enough, then the rest were the data drives already in the system...I just went down the line and re-assigned them. I started the array, it asked to rebuild the parity drives, and it went ahead and did so. It finished rebuilding overnight; I woke up and 48TB was gone.

  10. So...I know what I did wrong...and I thought I have done this many times and never had problems.

     

    I hit New Config in Tools, added the larger, new parity drives, and rearranged both the physical locations of the data disks AND their order in the array...

     

    Pretty sure that did the damage. Lesson learned...

  11. 13 hours ago, trurl said:

    Your array disks are all mountable, and each contains a lot of data. Your pools are nearly empty. It looks like sdh (sabcomplete) had some problems but was later formatted.

     

    If there are any missing files, someone or something must have deleted them. Parity rebuild can't delete files because parity doesn't know anything about files, only bits. And parity rebuild doesn't change anything on any data disk. It reads all data disks to get the result of the parity calculation(s) to write to the parity disk(s), but it isn't reading files, just bits.

     

    Even if you were rebuilding a data disk, a problem rebuilding wouldn't result in deleted files, because rebuild from parity doesn't know anything about files. You would get filesystem corruption and unmountable disk instead of missing files.

     

     

    Thank you. I am going to look into the Sonarr logs and see if something there caused the files to get deleted, since it was mostly TV media files.
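trurl's point that parity works on bits rather than files is easy to demonstrate with single parity, which is a bytewise XOR across the data disks. A toy sketch (three tiny hypothetical "disks"; Unraid's real implementation covers whole devices and dual parity differs):

```python
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    # Bytewise XOR of two equal-length buffers.
    return bytes(x ^ y for x, y in zip(a, b))

# Three made-up 4-byte data "disks".
disk1 = bytes([0x0F, 0xA0, 0x55, 0x10])
disk2 = bytes([0xFF, 0x01, 0xAA, 0x20])
disk3 = bytes([0x33, 0x9C, 0x0F, 0x40])

# Building parity only READS the data disks and WRITES the parity disk.
parity = reduce(xor_bytes, [disk1, disk2, disk3])

# Rebuilding a lost disk: XOR parity with all surviving disks.
rebuilt_disk2 = reduce(xor_bytes, [parity, disk1, disk3])
assert rebuilt_disk2 == disk2
```

Nothing in this process knows what a file is, which is why a parity rebuild can't selectively delete files: a bad rebuild produces filesystem corruption, not clean deletions.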

  12. 7 minutes ago, trurl said:

    Did all of that data exist when you booted your server Oct 28 17:57:47 ?

    yes...

     

    Spoiler

    total 16K
    drwxr-xr-x 34 root   root  680 Nov  2 13:12 ./
    drwxr-xr-x 20 root   root  440 Nov  2 13:22 ../
    drwxrwxrwx  1 nobody users  90 Nov  2 13:22 cache/
    drwxrwxrwx  9 nobody users 161 Nov  2 13:22 disk1/
    drwxrwxrwx  9 nobody users 161 Nov  2 13:22 disk10/
    drwxrwxrwx  9 nobody users 161 Nov  2 13:22 disk11/
    drwxrwxrwx  9 nobody users 129 Nov  2 13:22 disk12/
    drwxrwxrwx  9 nobody users 161 Nov  2 13:22 disk13/
    drwxrwxrwx  8 nobody users 141 Nov  2 13:22 disk14/
    drwxrwxrwx  9 nobody users 161 Nov  2 13:22 disk15/
    drwxrwxrwx  8 nobody users 141 Nov  2 13:22 disk16/
    drwxrwxrwx  8 nobody users 113 Nov  2 13:22 disk17/
    drwxrwxrwx  8 nobody users 141 Nov  2 13:22 disk18/
    drwxrwxrwx  9 nobody users 161 Nov  2 13:22 disk19/
    drwxrwxrwx  9 nobody users 161 Nov  2 13:22 disk2/
    drwxrwxrwx  8 nobody users 141 Nov  2 13:22 disk20/
    drwxrwxrwx  8 nobody users 141 Nov  2 13:22 disk21/
    drwxrwxrwx  7 nobody users 130 Nov  2 13:22 disk22/
    drwxrwxrwx  9 nobody users 158 Nov  2 13:22 disk3/
    drwxrwxrwx  9 nobody users 161 Nov  2 13:22 disk4/
    drwxrwxrwx  9 nobody users 161 Nov  2 13:22 disk5/
    drwxrwxrwx  9 nobody users 161 Nov  2 13:22 disk6/
    drwxrwxrwx  9 nobody users 161 Nov  2 13:22 disk7/
    drwxrwxrwx  9 nobody users 161 Nov  2 13:22 disk8/
    drwxrwxrwx  8 nobody users 141 Nov  2 13:22 disk9/
    drwxrwxrwt  2 nobody users  40 Oct 28 17:58 disks/
    drwxrwxr-x  2 nobody users   6 Nov  2 13:22 movies-cache/
    drwxrwxrwt  2 nobody users  40 Oct 28 17:58 remotes/
    drwxrwxrwx  2 nobody users   6 Nov  2 18:42 sabcomplete/
    drwxr-xr-x  2 root   root   40 Nov  2 13:12 sabcomplete-cache/
    drwxrwxrwx  4 nobody users  35 Nov  2 16:06 sabdownloads/
    drwxrwxrwx  3 nobody users  22 Nov  2 13:22 tv-cache/
    drwxrwxrwx  1 nobody users 161 Nov  2 18:42 user/
    drwxrwxrwx  1 nobody users 161 Nov  2 13:22 user0/

     

  13. 2 minutes ago, trurl said:

    Are you absolutely sure it didn't happen before replacing parity? As already mentioned, parity contains none of your data, and rebuilding parity changes none of the disks that have your data.

    I am absolutely certain...my array was 97% full while it was rebuilding...it was almost done rebuilding when I went to bed. I woke up and blammo...the array was only 60% full...it's just bizarre.

  14. 2 minutes ago, trurl said:

    Are you sure you didn't accidentally move them? Or maybe one of your dockers moved or deleted them?

     

    Why do you have 100G for docker.img and for libvirt.img? 20G is often more than enough for docker.img, and I don't think anyone has ever needed more than the default 1G for libvirt.img.

    I never move them manually...it's all handled by sabnzbd/radarr/sonarr. All I know is that I installed new, larger parity drives and this happened. I am not worried about the data loss, but I need help knowing where to look and seeing what happened so I don't do it again. This is the first time since I started using Unraid (2012-2013-ish) that I have lost this much data.

    As for the img sizes...I don't know...I did that so long ago...

  15. 2 hours ago, JorgeB said:

    Everything looks normal to me, all disks are mounting and none of them is empty, so it certainly didn't have anything to do with the scheduled check. Where are you missing data from?

    TV_Shows share...odd thing, though: not everything in that share was lost; only about 10-20 files remained. I looked through the files and can't find anything that would have caused their deletion...

  16. 1 minute ago, itimpi said:

    No - the idea is to identify that you have a problem that you can investigate further. If the problem is a data drive then you do NOT want it to cause invalid corrections to be made to parity, which could then prejudice recovery of that drive's contents if it is later replaced. I would only recommend having the option to correct parity set for a check which is initiated manually, after you have decided that is the most appropriate action.

     

    Still not clear how data loss can result from building parity as that is only reading from the data drives. 

    Well...crap...I didn't know that...ok...gonna leave it off from here on out...

     

    Anyway...any insight into what caused the data loss would be helpful...if the data drives are fine, can I still put my old parity drives back and rebuild the array?

  17. 26 minutes ago, itimpi said:

    I am confused as writing parity should never affect the content of the data drives :(    Perhaps if you post your system’s diagnostics zip file we might be able to get a better idea of what happened and the current state of the system.

     

    BTW:  it is recommended that scheduled parity checks are set to be non-correcting - your description makes it sound as this is not how you have yours set?

    Yes...my corrections were set to on (I have always had it on, assuming the whole point was to write corrections to the array if there was a problem)...the timing of upgrading the parity drives alongside my monthly parity check schedule messed things up, I think...I've upgraded parity drives before, but this is the first time I have ever lost data as a result.

     

    Diags attached...thanks for the help.

     

    tower-diagnostics-20211102-1238.zip
