
sparkus

Members
  • Content Count

    11
  • Joined

  • Last visited

Community Reputation

0 Neutral

About sparkus

  • Rank
    Member


  1. Ha @Tucubanito07 I apparently didn't read well. Sorry to have thrown you off the track.
  2. @Tucubanito07 This might be the same error that @fatalfurry and I reported earlier. Nobody from Unraid has really responded to it.
  3. So I guess I'm one of the few who has experienced an issue, yay. (See my earlier post, with my config attached.) I'm not familiar with the process Limetech uses to triage and respond to those of us having issues with the upgrade. Can anyone describe that process for me?
  4. Tried to update today and got what I think is the same error as @fatalfurry above. Fell back to my 6.8.0 backup and it booted without issue. Diagnostics attached: tower-diagnostics-20200113-1708.zip
  5. Hello, I'm having problems: I'm unable to update plugins, and I can't upgrade Docker containers without them crashing. I took a look at the diagnostics and I think it could be the flash drive, but if someone could help me out I'd appreciate it. I also have a cache drive that's an SSD and is not mirrored, so that could be an issue. Thanks! tower-diagnostics-20180110-1705.zip
  6. So I mounted a backup device (this time with xfs) and backed up the drive in question. I started the array in maintenance mode, then ran check --readonly (see the command sketch after this list), which produced this result:

     checking extents
     incorrect offsets 15996 43
     bad block 746107731968
     Errors found in extent allocation tree or chunk allocation
     incorrect offsets 15996 43
     Checking filesystem on /dev/md

     Then I ran check --repair, which produced the following result:

     checking extents
     incorrect offsets 15996 43
     Fixed 0 roots.
     checking free space cache
     checking fs roots
     checking csums
     checking root refs
     enabling repair mode
     Checking filesystem on /dev/md1
     UUID: ebc5a243-841f-47d6-890a-08aed9f8fd23
     Shifting item nr 7 by 15953 bytes in block 745197944832
     cache and super generation don't match, space cache will be invalidated
     found 2019719933952 bytes used
     err is 0
     total csum bytes: 1969407612
     total tree bytes: 2099068928
     total fs tree bytes: 17039360
     total extent tree bytes: 18350080
     btree space waste bytes: 71212365
     file data blocks allocated: 2017638678528
     referenced 2017602125824

     I don't know what most of this means, so I ran check --readonly again, which produced this result:

     checking extents
     checking free space cache
     checking fs roots
     checking csums
     checking root refs
     Checking filesystem on /dev/md1
     UUID: ebc5a243-841f-47d6-890a-08aed9f8fd23
     cache and super generation don't match, space cache will be invalidated
     found 2019273338880 bytes used
     err is 0
     total csum bytes: 1969407612
     total tree bytes: 2098462720
     total fs tree bytes: 17039360
     total extent tree bytes: 18350080
     btree space waste bytes: 71171756
     file data blocks allocated: 2017192771584
     referenced 2017156218880

     I took this as a good sign, and am running a parity check now. Thanks for all the help. I'm just documenting this in case someone else stumbles upon it later.
  7. Thank you all for answering my questions. If I just have a btrfs error on one drive, can I back up just that one drive's worth of data for the recovery, or should I have a total backup of all the information? An offsite total backup at my upload speed would take a couple of years, but a single drive I could probably afford to back up.
  8. Thanks for the info. What's the best way to back up an individual drive then?
  9. I have a parity drive, are you insinuating I need a better backup than that?
  10. [/dev/md1].write_io_errs     0
      [/dev/md1].read_io_errs      0
      [/dev/md1].flush_io_errs     0
      [/dev/md1].corruption_errs   0
      [/dev/md1].generation_errs   0

      Running scrub now.
  11. So I hadn't been checking the system log; I just did, and found that this error has been happening for about a month or so. I've read about some of the filesystem repair tools, but I'm not sure how to figure out which drive is md1. I gather I should do a BTRFS scrub on this (see the sketch after this list), but any other advice on how to possibly solve it would be awesome sauce.

      Apr 17 12:14:41 Tower kernel: BTRFS critical (device md1): corrupt leaf, slot offset bad: block=746107731968, root=1, slot=6

      tower-diagnostics-20170518-1404.zip
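
For anyone who stumbles on this later, here is a rough sketch of the single-drive backup and check sequence described in post 6 above. The paths and device number are assumptions for illustration: /dev/md1 happened to be the affected drive's array slot on my system, /mnt/disk1 is where Unraid normally mounts that disk, and /mnt/disks/backup stands in for wherever your backup device is mounted; adjust all of them before running anything.

    # With the array started normally, copy the affected disk to the backup device
    # (illustrative paths; substitute your own mount points)
    rsync -a /mnt/disk1/ /mnt/disks/backup/disk1/

    # Restart the array in maintenance mode, then run a read-only check first;
    # this only reports problems and does not modify the filesystem
    btrfs check --readonly /dev/md1

    # Only if the read-only pass reports errors, attempt the repair
    btrfs check --repair /dev/md1

    # Run the read-only check again to confirm it now comes back clean
    btrfs check --readonly /dev/md1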
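
And a sketch of the scrub side from posts 10 and 11. Again, /dev/md1 is an assumption; on Unraid the mdX number lines up with the array disk slot (so md1 should be Disk 1 on the Main tab), but confirm that, and make sure the array is started so the filesystem is mounted, before running these.

    # List the btrfs filesystems and their member devices, to confirm which one is md1
    btrfs filesystem show

    # Per-device error counters (the same counters quoted in post 10)
    btrfs device stats /dev/md1

    # Start a scrub on the filesystem and check on its progress
    btrfs scrub start /dev/md1
    btrfs scrub status /dev/md1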