Posts posted by johner

  1. On 1/26/2024 at 4:00 AM, ThatGuyOrlando said:

    Updated to latest version and now the container won't start

    ERROR: Current DB version 228 is newer than the current DB version 227, which means that it was created by a newer and incompatible version of Trilium. Upgrade to the latest version of Trilium to resolve this issue.

    nothing custom on this instance. Any ideas on how to resolve?

    I have a similar error. I tried removing the Docker container and image, then reinstalling, and I'm now getting:

     

    ERROR: Current DB version 228 is newer than app db version 197

  2. 25 minutes ago, JonathanM said:

    How do you handle user shares that have part of their storage go away when that pool is down? For instance, I keep some VM vdisks on the parity array, and some in a pool. They all seamlessly are accessible via /mnt/user/domains, and I move them as needed for speed or space, and they all just work no matter which disk actually holds the file.

    That’s a design and user experience question.

     

    Obviously those specific VMs will not be able to run if 'their' storage is offline.

     

    What experience do 'you' want? With this requirement enabled, it could be as simple as preventing any VM/Docker container from using the parity array (maybe an easy MVP release), or something more complex where the stop action traces back the users of that array and stops only the specific services affected (individual VMs, specific shares, etc.) first - I'd suggest that as a later/subsequent release if people want this experience.

     

    First things first: agree on the requirements, accept them, scope a release, then design it.

  3. 7 hours ago, JonathanM said:

    How do you accomplish that when the containers need the array to function? And before you say, "all my containers can be run without the array" remember that the vast majority of the containers or VM's people use with Unraid use the array for bulk or working storage.

     

    This is not a simple ask, it would require major rewrites of almost all of Unraid, and change how things work on a fundamental level.

    Array = parity array…

     

    Docker and VMs should be able to continue running on a cache pool, etc.

     

    Basically, make each array/pool independent, with its own start/stop/check buttons, etc.

     

    This might then also support multiple parity arrays…

  4. The beef people have is that it adds layers of complexity. Unraid has native VM capability.

     

    ESXi 8 has specific hardware requirements that not everyone can or wants to meet.

     

    I use Proxmox, which is a lot more forgiving in this regard. I'd still like to have one management console, and for my Docker containers to keep running when I perform work on the array. I also like the Community Apps that build VM images, like the macOS one; it's a lot more faff trying to get these running on Proxmox or ESXi.

     

    I think there is enough of an ask here for it to be a valid requirement for Lime Technology to figure out a design that works and meets their licensing requirements.

  5.  
    You can technically already do this by pulling the customized raid6 module from unraid and re-compiling md to use it - then you're just depending on whatever OS you're running to do the VM/container work. 

    Yes, exactly. For those who are curious, search the forum. There is a guy in Europe (Thomas Hellstrom) who has done this and even written his own CLI-based status/management tool. I tried to get him to publish it on GitHub, but he's yet to do it (well, he hadn't a few months back when I last checked).


    Sent from my iPhone using Tapatalk
  6. Just seeing some of the comments along the lines of 'you don't know the internals' - damn right I don't, and I don't care, as it's none of my business. It doesn't change the fact that as a consumer I have valid requirements.

    Will they get implemented? Hey, I don't know, or when (perhaps due to design choices by Lime Technology), but I do know market forces, and eventually someone will release a product that meets these requirements.

    Ultimately I think people are paying for two things:

    1) The Unraid parity solution (best efficiency with regard to disk space) - this is technically free given the open-source code is available, so really they're paying for the UI to configure it.

    2) The Docker Community Applications store, which is also community-driven/open source as I understand it.

    I’m sure if enough frustrated people don’t get their diva-ish ways one will crack and start building a module for OMV on some other open source base platform.


    Sent from my iPhone using Tapatalk

  7. Looking for some help. I have the restarting-Unraid issue of having to reset the owner etc., but that's not the real issue for me. The issue I have is that Plex sees the mount, and I can see the files when adding /data/[MySubFolder] to the Plex library, but Plex doesn't have permission to load the files to scan them and actually add them to the library (thus the library is empty).

     

    When I run ls -la, the files have:

     

    -rw-r--r--

     

    Do they need execute also?

     

    I have set the ownership to 911:911; chmod 777 has no impact.

     

    What am I doing wrong?

     

    I'm considering installing rclone 'in' the Plex Docker container at this rate, but I'm sure that'll be a whole new world of pain, and not as 'clean'.
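
    For reference, this is roughly how I'm double-checking the bits (the path below is just a placeholder for my real library folder). My understanding is that directories need the execute/search bit (x) for their contents to be reachable, while files only need read:

    # namei -l walks each path component and shows its owner and mode,
    # so a directory missing the execute (x) bit stands out
    namei -l /data/MySubFolder/SomeFile.mkv

    # if a directory is the problem: dirs get rwx/r-x, files get rw-/r--
    find /data/MySubFolder -type d -exec chmod u+rwx,go+rx {} +
    find /data/MySubFolder -type f -exec chmod u+rw,go+r {} +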

  8. On 9/11/2021 at 5:28 AM, JonathanM said:

    didn't address what you want to do differently than what is currently happening.

    Imagine if ZFS or BTRFS had designed their solutions so that taking one pool offline took them all offline.

     

    My requirement is to be able to independently start and stop a pool, be it the parity array (or arrays, if multiple arrive in the future) or a cache pool, with only the services using that particular array/pool being impacted. A warning would be presented listing which services are affected, e.g. 'Specific share 3' is set to use 'cache pool 5', so it will be remounted without cache; OR all Docker containers will be taken offline because the system image is stored on this pool; OR this list of VMs will be hibernated because their disk images are stored on this pool, while the others continue to run. And so on.

     

    Design considerations: I'm trying hard not to tell you how to design the solution, as I hate it when business teams at work focus on the solution and not the requirement 🙂

     

    thx!!!

  9. This! I have to run Unraid as a VM on ESXi for this exact constraint. I could run bare metal if the pools were separated out and could be managed independently (e.g. a setting to force VMs and Docker-related content onto a specific cache drive that can be managed independently of the other 'pools').

    Understandably, for Docker too!

    I thought about using my second Unraid license as a separate VM on ESXi to run just Docker containers (because I love the UI and App Store), but this constraint of being tied to an Unraid array prevents running Unraid for just Docker - unless I missed something!?

    Please find a way to make this happen team!


    Sent from my iPhone using Tapatalk

  10. Hi,

     

    I stumbled across this, and it looks like what I need to detect duplicates across disks under the logical share - so thanks.

     

    When I run it without options, I get no duplicates reported after about 30s - all good (I assume!).

     

    I then went to run it with -c to double-check:

     

    bash unRAIDFindDuplicates.sh -c

     

    And it immediately responds with the help text.

     

    Q1: Is this a defect or am I doing something wrong?

     

    I then tried:

     

    bash unRAIDFindDuplicates.sh -z

     

    It said no duplicates (again after about 30s), then sat there reporting nothing for a few minutes, and eventually came back with lots of errors such as these (a subset):

     

    ls: cannot access '/mnt/disk*//appdata/ESPHome/hot_water_system/.piolibdeps/hot_water_system/ESPAsyncTCP-esphome/examples/SyncClient/.esp31b.skip': No such file or directory
    ls: cannot access '/mnt/disk*//appdata/FileBot/log/nginx/error.log': No such file or directory
    ls: cannot access '/mnt/disk*//appdata/FileBot/xdg/cache/openbox/openbox.log': No such file or directory
    ls: cannot access '/mnt/disk*//appdata/FileBot/.licensed_version': No such file or directory
    ls: cannot access '/mnt/disk*//appdata/FileBot/error.log': No such file or directory
    ls: cannot access '/mnt/disk*//appdata/Grafana-Unraid-Stack/data/influxdb/wal/telegraf/autogen/250/_01402.wal': No such file or directory
    ls: cannot access '/mnt/disk*//appdata/Grafana-Unraid-Stack/data/influxdb/wal/_internal/monitor/258/_00094.wal': No such file or directory
    ls: cannot access '/mnt/disk*//appdata/Grafana-Unraid-Stack/data/influxdb/wal/home_assistant/autogen/255/_00003.wal': No such file or directory
    ls: cannot access '/mnt/disk*//appdata/Grafana-Unraid-Stack/data/loki/index/index_2573': No such file or directory
    ls: cannot access '/mnt/disk*//appdata/Grafana-Unraid-Stack/data/loki/index/index_2520': No such file or directory
    ls: cannot access '/mnt/disk*//appdata/Grafana-Unraid-Stack/data/loki/index/index_2525': No such file or directory
    ls: cannot access '/mnt/disk*//appdata/Grafana-Unraid-Stack/data/loki/index/index_2551': No such file or directory
    ls: cannot access '/mnt/disk*//appdata/Grafana-Unraid-Stack/data/loki/index/index_2552': No such file or directory
    ls: cannot access '/mnt/disk*//appdata/Grafana-Unraid-Stack/data/loki/index/index_2579': No such file or directory
    ls: cannot access '/mnt/disk*//appdata/Grafana-Unraid-Stack/data/loki/index/index_2609': No such file or directory

     

    Q2: What is it trying to do when checking for zero-length dupes that it isn't doing when running with no options?

     

    I then ran it in verbose mode out of interest:

    bash unRAIDFindDuplicates.sh -v

     

    I noticed two things:

     

    1 - This error halfway through:

    List duplicate files
    unRAIDFindDuplicates.sh: line 373: verbose_to_bpth: command not found
    checking /mnt/disk1

     

    Q3: Will this error affect the actual dupe check? - I assume not.

     

    2 - It doesn't seem to take into consideration the additional cache pools that can be defined in v6.9 (I have a second one called 'scratch').

     

    Q4: Would you be willing to add something that dynamically detects additional cache pool configurations and includes them in the no-option execution?

     

    I then tried the -D option to add the additional cache drive (/mnt/scratch) to be treated as an array drive, and it went a bit screwy!

     

    bash unRAIDFindDuplicates.sh -v -D /mnt/scratch

     

    Output (killed with Ctrl-C in the end):

     

    ============= STARTING unRAIDFIndDuplicates.sh ===================
    
    Included disks:
       /mnt/disk/mnt/scratch
       /mnt/disk1
       ...
    ...
    List duplicate files
    unRAIDFindDuplicates.sh: line 373: verbose_to_bpth: command not found
    unRAIDFindDuplicates.sh: line 404: cd: /mnt/disk/mnt/scratch: No such file or directory
    checking /mnt/disk/mnt/scratch
        [SHARENAMEREDACTED]
    	...
    Duplicate Files
    ---------------
    **Looks like it's now listing every file below here (these may be genuine - TBC)**
    ...

     

    I'm running 6.9.2 if that helps in any way.
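
    In the meantime, here's the rough one-liner I've been using to eyeball duplicate relative paths across the array disks plus my 'scratch' pool (it assumes the standard /mnt/diskN mount points and is no substitute for your script):

    (
      for d in /mnt/disk[0-9]* /mnt/scratch; do
        [ -d "$d" ] || continue
        # print each file's path relative to the disk/pool root
        (cd "$d" && find . -type f -print)
      done
    ) | sort | uniq -d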

     

    Thanks!

    John

  11. I'd rewrite this as: the ability to individually start/stop/configure pools. (In the context of the new multi-pool feature of 6.9.)

     

    It's really annoying when you want to work on, say, one cache pool or the like, and everything has to stop (especially Docker and the VMs when their storage is elsewhere).

  12. OK, I deleted the file and ran scrub from the CLI:

     

    I think the abort is only on the missing disk; the 'aggregate' status is therefore 'aborted'.

     

    root@Tower:~# btrfs scrub status -d /mnt/cache/
    UUID:             19b3ef51-b232-46af-ab0b-1bc724bd5a50
    scrub device /dev/sdad1 (id 1) history
    Scrub started:    Fri Oct 16 19:42:00 2020
    Status:           finished
    Duration:         0:08:00
    Total to scrub:   197.03GiB
    Rate:             284.85MiB/s
    Error summary:    no errors found
    scrub device /dev/sdag1 (id 2) history
    Scrub started:    Fri Oct 16 19:42:00 2020
    Status:           finished
    Duration:         0:01:14
    Total to scrub:   55.03GiB
    Rate:             271.81MiB/s
    Error summary:    no errors found
    scrub device  (id 3) history
    Scrub started:    Fri Oct 16 19:42:00 2020
    Status:           aborted
    Duration:         0:00:00
    Total to scrub:   142.00GiB
    Rate:             0.00B/s
    Error summary:    no errors found

    Now I need to figure out how to add the missing disk back in... Google, here I come...
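
    Noting the two routes I think apply before I try anything (a sketch only, not tested on this pool yet; /dev/sdX1 is a placeholder for the replacement device, and devid 3 is the missing one from the output above):

    # Option 1: replace the missing device (devid 3) in place
    btrfs replace start 3 /dev/sdX1 /mnt/cache

    # Option 2: drop the missing device and rebalance onto the remaining ones
    # (only viable if they have enough space for the redundancy profile)
    btrfs device remove missing /mnt/cache
    btrfs balance start /mnt/cache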

  13. Apologies! OK, I've run it again and I only see 1 file impacted. If it is just this one then that's a result, as that file is the old backup from when I moved from Proxmox to Unraid, so it can go.

     

    @JorgeB How do I tell if any metadata is corrupt? I note 42 csum errors, but not 42 line entries:

     

    Oct 16 18:51:09 Tower ool www[10213]: /usr/local/emhttp/plugins/dynamix/scripts/btrfs_scrub 'start' '/mnt/cache' ''
    Oct 16 18:53:25 Tower kernel: scrub_handle_errored_block: 32 callbacks suppressed
    Oct 16 18:53:25 Tower kernel: BTRFS warning (device sdad1): checksum error at logical 492250292224 on dev /dev/sdad1, physical 49713471488, root 5, inode 19007, offset 47740817408, length 4096, links 1 (path: appdata/Plex-Media-Server/plexlib.zip)
    Oct 16 18:53:25 Tower kernel: btrfs_dev_stat_print_on_error: 32 callbacks suppressed
    Oct 16 18:53:25 Tower kernel: BTRFS error (device sdad1): bdev /dev/sdad1 errs: wr 0, rd 0, flush 0, corrupt 85, gen 0
    Oct 16 18:53:25 Tower kernel: scrub_handle_errored_block: 32 callbacks suppressed
    Oct 16 18:53:25 Tower kernel: BTRFS error (device sdad1): unable to fixup (regular) error at logical 492250292224 on dev /dev/sdad1
    Oct 16 18:53:25 Tower kernel: BTRFS warning (device sdad1): checksum error at logical 492250333184 on dev /dev/sdad1, physical 49713512448, root 5, inode 19007, offset 47740858368, length 4096, links 1 (path: appdata/Plex-Media-Server/plexlib.zip)
    Oct 16 18:53:25 Tower kernel: BTRFS error (device sdad1): bdev /dev/sdad1 errs: wr 0, rd 0, flush 0, corrupt 86, gen 0
    Oct 16 18:53:25 Tower kernel: BTRFS error (device sdad1): unable to fixup (regular) error at logical 492250333184 on dev /dev/sdad1
    Oct 16 18:53:25 Tower kernel: BTRFS warning (device sdad1): checksum error at logical 492250296320 on dev /dev/sdad1, physical 49713475584, root 5, inode 19007, offset 47740821504, length 4096, links 1 (path: appdata/Plex-Media-Server/plexlib.zip)
    Oct 16 18:53:25 Tower kernel: BTRFS error (device sdad1): bdev /dev/sdad1 errs: wr 0, rd 0, flush 0, corrupt 87, gen 0
    Oct 16 18:53:25 Tower kernel: BTRFS error (device sdad1): unable to fixup (regular) error at logical 492250296320 on dev /dev/sdad1
    Oct 16 18:53:25 Tower kernel: BTRFS warning (device sdad1): checksum error at logical 492250337280 on dev /dev/sdad1, physical 49713516544, root 5, inode 19007, offset 47740862464, length 4096, links 1 (path: appdata/Plex-Media-Server/plexlib.zip)
    Oct 16 18:53:25 Tower kernel: BTRFS error (device sdad1): bdev /dev/sdad1 errs: wr 0, rd 0, flush 0, corrupt 88, gen 0
    Oct 16 18:53:25 Tower kernel: BTRFS error (device sdad1): unable to fixup (regular) error at logical 492250337280 on dev /dev/sdad1
    Oct 16 18:53:25 Tower kernel: BTRFS warning (device sdad1): checksum error at logical 492250300416 on dev /dev/sdad1, physical 49713479680, root 5, inode 19007, offset 47740825600, length 4096, links 1 (path: appdata/Plex-Media-Server/plexlib.zip)
    Oct 16 18:53:25 Tower kernel: BTRFS error (device sdad1): bdev /dev/sdad1 errs: wr 0, rd 0, flush 0, corrupt 89, gen 0
    Oct 16 18:53:25 Tower kernel: BTRFS error (device sdad1): unable to fixup (regular) error at logical 492250300416 on dev /dev/sdad1
    Oct 16 18:53:25 Tower kernel: BTRFS warning (device sdad1): checksum error at logical 492250341376 on dev /dev/sdad1, physical 49713520640, root 5, inode 19007, offset 47740866560, length 4096, links 1 (path: appdata/Plex-Media-Server/plexlib.zip)
    Oct 16 18:53:25 Tower kernel: BTRFS error (device sdad1): bdev /dev/sdad1 errs: wr 0, rd 0, flush 0, corrupt 90, gen 0
    Oct 16 18:53:25 Tower kernel: BTRFS error (device sdad1): unable to fixup (regular) error at logical 492250341376 on dev /dev/sdad1
    Oct 16 18:53:25 Tower kernel: BTRFS warning (device sdad1): checksum error at logical 492250304512 on dev /dev/sdad1, physical 49713483776, root 5, inode 19007, offset 47740829696, length 4096, links 1 (path: appdata/Plex-Media-Server/plexlib.zip)
    Oct 16 18:53:25 Tower kernel: BTRFS error (device sdad1): bdev /dev/sdad1 errs: wr 0, rd 0, flush 0, corrupt 91, gen 0
    Oct 16 18:53:25 Tower kernel: BTRFS error (device sdad1): unable to fixup (regular) error at logical 492250304512 on dev /dev/sdad1
    Oct 16 18:53:25 Tower kernel: BTRFS warning (device sdad1): checksum error at logical 492250345472 on dev /dev/sdad1, physical 49713524736, root 5, inode 19007, offset 47740870656, length 4096, links 1 (path: appdata/Plex-Media-Server/plexlib.zip)
    Oct 16 18:53:25 Tower kernel: BTRFS error (device sdad1): bdev /dev/sdad1 errs: wr 0, rd 0, flush 0, corrupt 92, gen 0
    Oct 16 18:53:25 Tower kernel: BTRFS error (device sdad1): unable to fixup (regular) error at logical 492250345472 on dev /dev/sdad1
    Oct 16 18:53:25 Tower kernel: BTRFS warning (device sdad1): checksum error at logical 492250308608 on dev /dev/sdad1, physical 49713487872, root 5, inode 19007, offset 47740833792, length 4096, links 1 (path: appdata/Plex-Media-Server/plexlib.zip)
    Oct 16 18:53:25 Tower kernel: BTRFS error (device sdad1): bdev /dev/sdad1 errs: wr 0, rd 0, flush 0, corrupt 93, gen 0
    Oct 16 18:53:25 Tower kernel: BTRFS error (device sdad1): unable to fixup (regular) error at logical 492250308608 on dev /dev/sdad1
    Oct 16 18:53:25 Tower kernel: BTRFS warning (device sdad1): checksum error at logical 492250349568 on dev /dev/sdad1, physical 49713528832, root 5, inode 19007, offset 47740874752, length 4096, links 1 (path: appdata/Plex-Media-Server/plexlib.zip)
    Oct 16 18:53:25 Tower kernel: BTRFS error (device sdad1): bdev /dev/sdad1 errs: wr 0, rd 0, flush 0, corrupt 94, gen 0
    Oct 16 18:53:25 Tower kernel: BTRFS error (device sdad1): unable to fixup (regular) error at logical 492250349568 on dev /dev/sdad1

    The result from the scrub states 'aborted' again. Does this mean it only found a subset of the 42 errors, and therefore there could be more damaged files I don't yet know about?

     

    I'll delete the above file and scrub again.

     

    UUID:             19b3ef51-b232-46af-ab0b-1bc724bd5a50
    Scrub started:    Fri Oct 16 18:51:09 2020
    Status:           aborted
    Duration:         0:09:13
    Total to scrub:   396.98GiB
    Rate:             485.58MiB/s
    Error summary:    csum=42
      Corrected:      0
      Uncorrectable:  42
      Unverified:     0
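
    For my own sanity, I'm pulling the distinct affected paths out of the syslog like this (it relies on the warnings keeping the '(path: ...)' suffix shown above, and assumes the standard /var/log/syslog location):

    grep 'checksum error' /var/log/syslog | grep -o 'path: [^)]*' | sort -u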

     

  14. So if it is the metadata that is corrupt (see previous reply), how will it manifest in Unraid, given I'm running appdata etc. on here for the Docker containers and VM (just one at the mo)? I'm running a nightly backup using the CA plugin (to the spinning-rust Unraid array), but everything seems to work; nothing 'functionally' complains with the Docker containers/VM.

     

    Therefore I'm not sure if my backup is good, as there are no issues other than the csum errors - unless I'm missing some errors in the syslog that are indirectly related to these csum issues, but I can't see anything obvious.
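
    What I'm planning to check next, happy to be corrected on the approach (a sketch only; /mnt/cache and /dev/sdad1 are the pool and device from the logs above):

    # per-device error counters (they persist until reset with -z)
    btrfs device stats /mnt/cache

    # read-only structural check of the filesystem trees; ideally run with
    # the pool unmounted (e.g. in maintenance mode) - it makes no changes
    btrfs check --readonly /dev/sdad1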

  15. OK, so here is the output from scrub 1:

    Oct 15 18:16:49 Tower kernel: BTRFS error (device sdad1): bdev /dev/sdad1 errs: wr 0, rd 0, flush 0, corrupt 2, gen 0
    Oct 15 18:16:49 Tower kernel: BTRFS error (device sdad1): unable to fixup (regular) error at logical 492250333184 on dev /dev/sdad1
    Oct 15 18:16:49 Tower kernel: BTRFS error (device sdad1): unable to fixup (regular) error at logical 492250292224 on dev /dev/sdad1
    Oct 15 18:16:49 Tower kernel: BTRFS error (device sdad1): bdev /dev/sdad1 errs: wr 0, rd 0, flush 0, corrupt 3, gen 0
    Oct 15 18:16:49 Tower kernel: BTRFS error (device sdad1): unable to fixup (regular) error at logical 492250337280 on dev /dev/sdad1
    Oct 15 18:16:49 Tower kernel: BTRFS error (device sdad1): bdev /dev/sdad1 errs: wr 0, rd 0, flush 0, corrupt 4, gen 0
    Oct 15 18:16:49 Tower kernel: BTRFS error (device sdad1): unable to fixup (regular) error at logical 492250296320 on dev /dev/sdad1
    Oct 15 18:16:49 Tower kernel: BTRFS error (device sdad1): bdev /dev/sdad1 errs: wr 0, rd 0, flush 0, corrupt 5, gen 0
    Oct 15 18:16:49 Tower kernel: BTRFS error (device sdad1): unable to fixup (regular) error at logical 492250341376 on dev /dev/sdad1
    Oct 15 18:16:49 Tower kernel: BTRFS error (device sdad1): bdev /dev/sdad1 errs: wr 0, rd 0, flush 0, corrupt 6, gen 0
    Oct 15 18:16:49 Tower kernel: BTRFS error (device sdad1): unable to fixup (regular) error at logical 492250300416 on dev /dev/sdad1
    Oct 15 18:16:49 Tower kernel: BTRFS error (device sdad1): bdev /dev/sdad1 errs: wr 0, rd 0, flush 0, corrupt 7, gen 0
    Oct 15 18:16:49 Tower kernel: BTRFS error (device sdad1): unable to fixup (regular) error at logical 492250345472 on dev /dev/sdad1
    Oct 15 18:16:49 Tower kernel: BTRFS error (device sdad1): bdev /dev/sdad1 errs: wr 0, rd 0, flush 0, corrupt 8, gen 0
    Oct 15 18:16:49 Tower kernel: BTRFS error (device sdad1): unable to fixup (regular) error at logical 492250304512 on dev /dev/sdad1
    Oct 15 18:16:49 Tower kernel: BTRFS error (device sdad1): bdev /dev/sdad1 errs: wr 0, rd 0, flush 0, corrupt 9, gen 0
    Oct 15 18:16:49 Tower kernel: BTRFS error (device sdad1): unable to fixup (regular) error at logical 492250349568 on dev /dev/sdad1
    Oct 15 18:16:49 Tower kernel: BTRFS error (device sdad1): bdev /dev/sdad1 errs: wr 0, rd 0, flush 0, corrupt 10, gen 0
    Oct 15 18:16:49 Tower kernel: BTRFS error (device sdad1): unable to fixup (regular) error at logical 492250308608 on dev /dev/sdad1
    Oct 15 18:16:49 Tower kernel: BTRFS error (device sdad1): bdev /dev/sdad1 errs: wr 0, rd 0, flush 0, corrupt 11, gen 0
    

    And scrub 2:

    Oct 15 18:27:32 Tower kernel: BTRFS error (device sdad1): bdev /dev/sdad1 errs: wr 0, rd 0, flush 0, corrupt 43, gen 0
    Oct 15 18:27:32 Tower kernel: BTRFS error (device sdad1): bdev /dev/sdad1 errs: wr 0, rd 0, flush 0, corrupt 44, gen 0
    Oct 15 18:27:32 Tower kernel: BTRFS error (device sdad1): bdev /dev/sdad1 errs: wr 0, rd 0, flush 0, corrupt 46, gen 0
    Oct 15 18:27:32 Tower kernel: BTRFS error (device sdad1): bdev /dev/sdad1 errs: wr 0, rd 0, flush 0, corrupt 46, gen 0
    Oct 15 18:27:32 Tower kernel: BTRFS error (device sdad1): bdev /dev/sdad1 errs: wr 0, rd 0, flush 0, corrupt 47, gen 0
    Oct 15 18:27:32 Tower kernel: BTRFS error (device sdad1): bdev /dev/sdad1 errs: wr 0, rd 0, flush 0, corrupt 48, gen 0
    Oct 15 18:27:32 Tower kernel: BTRFS error (device sdad1): bdev /dev/sdad1 errs: wr 0, rd 0, flush 0, corrupt 49, gen 0
    Oct 15 18:27:32 Tower kernel: BTRFS error (device sdad1): bdev /dev/sdad1 errs: wr 0, rd 0, flush 0, corrupt 50, gen 0
    Oct 15 18:27:33 Tower kernel: BTRFS error (device sdad1): bdev /dev/sdad1 errs: wr 0, rd 0, flush 0, corrupt 51, gen 0
    Oct 15 18:27:33 Tower kernel: BTRFS error (device sdad1): bdev /dev/sdad1 errs: wr 0, rd 0, flush 0, corrupt 52, gen 0

    I think the different outcome was because I didn't click the repair option the 2nd time.

     

    From a Google search, I think the above means that the issue is with the metadata, as no file names are listed - is this your understanding?

  16. Hi, ah OK, thanks.

     

    I ran scrub once and it aborted with 42 uncorrectable errors. I ran it again:

     

    UUID:             19b3ef51-b232-46af-ab0b-1bc724bd5a50
    Scrub started:    Thu Oct 15 18:25:53 2020
    Status:           aborted
    Duration:         0:07:55
    Total to scrub:   396.11GiB
    Rate:             564.68MiB/s
    Error summary:    csum=42
      Corrected:      0
      Uncorrectable:  0
      Unverified:     0

    Aborted again, but no uncorrectable errors this time. I didn't see if it actually got to the end of the run; the duration is a little shorter than it predicted.
