Arcaeus

Posts posted by Arcaeus

  1. On 6/5/2024 at 9:36 AM, binhex said:

    I will need to see a log to help further, please see the following link:- https://github.com/binhex/documentation/blob/master/docker/faq/help.md#unraid-users

    Here are the two log files you requested. I tried turning off the VPN when editing the container, and it worked, but when I turned it back on, I got the same error. The logs below are from when the VPN container variable was set to yes.

    I just tried it again and it's working?? Not sure what changed.

    DelugeVPN_command_execution.txt supervisord.log

  2. Hey everyone. Just updated to the new version of DelugeVPN and now I can't open the webGUI. I use PIA as my VPN provider. I personally haven't changed any of the settings, just clicked "Update Container" or whatever through the Docker tab. Now when I click on webUI, I just get a "Connection Refused" error.

     

    I opened the Docker logs and saw some SSL/TLS errors, but I'm not sure what they mean:

    2024-06-05 09:19:37,831 DEBG 'start-script' stdout output:
    2024-06-05 09:19:37 WARNING: file 'credentials.conf' is group or others accessible
    2024-06-05 09:19:37 OpenVPN 2.6.10 [git:makepkg/ba0f62fb950c56a0+] x86_64-pc-linux-gnu [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [PKCS11] [MH/PKTINFO] [AEAD] [DCO] built on Mar 20 2024
    2024-06-05 09:19:37 library versions: OpenSSL 3.3.0 9 Apr 2024, LZO 2.10
    2024-06-05 09:19:37 DCO version: N/A
    
    2024-06-05 09:19:38,831 DEBG 'start-script' stdout output:
    2024-06-05 09:19:38 NOTE: the current --script-security setting may allow this configuration to call user-defined scripts
    
    2024-06-05 09:19:38,831 DEBG 'start-script' stdout output:
    2024-06-05 09:19:38 OpenSSL: error:068000E9:asn1 encoding routines::utctime is too short:
    2024-06-05 09:19:38 OpenSSL: error:0688010A:asn1 encoding routines::nested asn1 error:Field=revocationDate, Type=X509_REVOKED
    2024-06-05 09:19:38 OpenSSL: error:0688010A:asn1 encoding routines::nested asn1 error:Field=revoked, Type=X509_CRL_INFO
    2024-06-05 09:19:38 OpenSSL: error:0688010A:asn1 encoding routines::nested asn1 error:Field=crl, Type=X509_CRL
    
    2024-06-05 09:19:38,832 DEBG 'start-script' stdout output:
    2024-06-05 09:19:38 OpenSSL: error:0488000D:PEM routines::ASN1 lib:
    2024-06-05 09:19:38 CRL: cannot read CRL from file [[INLINE]]
    2024-06-05 09:19:38 CRL: loaded 0 CRLs from file -----BEGIN X509 CRL-----
    MIICWDCCAUAwDQYJKoZIhvcNAQENBQAwgegxCzAJBgNVBAYTAlVTMQswCQYDVQQI
    EwJDQTETMBEGA1UEBxMKTG9zQW5nZWxlczEgMB4GA1UEChMXUHJpdmF0ZSBJbnRl
    cm5ldCBBY2Nlc3MxIDAeBgNVBAsTF1ByaXZhdGUgSW50ZXJuZXQgQWNjZXNzMSAw
    HgYDVQQDExdQcml2YXRlIEludGVybmV0IEFjY2VzczEgMB4GA1UEKRMXUHJpdmF0
    ZSBJbnRlcm5ldCBBY2Nlc3MxLzAtBgkqhkiG9w0BCQEWIHNlY3VyZUBwcml2YXRl
    aW50ZXJuZXRhY2Nlc3MuY29tFw0xNjA3MDgxOTAwNDZaFw0zNjA3MDMxOTAwNDZa
    MCYwEQIBARcMMTYwNzA4MTkwMDQ2MBECAQYXDDE2MDcwODE5MDA0NjANBgkqhkiG
    9w0BAQ0FAAOCAQEAQZo9X97ci8EcPYu/uK2HB152OZbeZCINmYyluLDOdcSvg6B5
    jI+ffKN3laDvczsG6CxmY3jNyc79XVpEYUnq4rT3FfveW1+Ralf+Vf38HdpwB8EW
    B4hZlQ205+21CALLvZvR8HcPxC9KEnev1mU46wkTiov0EKc+EdRxkj5yMgv0V2Re
    ze7AP+NQ9ykvDScH4eYCsmufNpIjBLhpLE2cuZZXBLcPhuRzVoU3l7A9lvzG9mjA
    5YijHJGHNjlWFqyrn1CfYS6koa4TGEPngBoAziWRbDGdhEgJABHrpoaFYaL61zqy
    MR6jC0K2ps9qyZAN74LEBedEfK7tBOzWMwr58A==
    -----END X509 CRL-----
    
    2024-06-05 09:19:38 TCP/UDP: Preserving recently used remote address: [AF_INET]84.247.105.223:1198
    2024-06-05 09:19:38 UDPv4 link local: (not bound)
    2024-06-05 09:19:38 UDPv4 link remote: [AF_INET]84.247.105.223:1198
    
    2024-06-05 09:19:38,923 DEBG 'start-script' stdout output:
    2024-06-05 09:19:38 VERIFY ERROR: CRL not loaded
    2024-06-05 09:19:38 OpenSSL: error:0A000086:SSL routines::certificate verify failed:
    2024-06-05 09:19:38 TLS_ERROR: BIO read tls_read_plaintext error
    2024-06-05 09:19:38 TLS Error: TLS object -> incoming plaintext read error
    2024-06-05 09:19:38 TLS Error: TLS handshake failed
    
    2024-06-05 09:19:38,923 DEBG 'start-script' stdout output:
    2024-06-05 09:19:38 SIGHUP[soft,tls-error] received, process restarting
    
    2024-06-05 09:19:38,923 DEBG 'start-script' stdout output:
    2024-06-05 09:19:38 DEPRECATED OPTION: --cipher set to 'aes-128-cbc' but missing in --data-ciphers (AES-256-GCM:AES-128-GCM:CHACHA20-POLY1305). OpenVPN ignores --cipher for cipher negotiations. 
    
    2024-06-05 09:19:38,923 DEBG 'start-script' stdout output:
    2024-06-05 09:19:38 WARNING: file 'credentials.conf' is group or others accessible
    2024-06-05 09:19:38 OpenVPN 2.6.10 [git:makepkg/ba0f62fb950c56a0+] x86_64-pc-linux-gnu [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [PKCS11] [MH/PKTINFO] [AEAD] [DCO] built on Mar 20 2024
    2024-06-05 09:19:38 library versions: OpenSSL 3.3.0 9 Apr 2024, LZO 2.10
    2024-06-05 09:19:38 DCO version: N/A
    
    2024-06-05 09:19:39,924 DEBG 'start-script' stdout output:
    2024-06-05 09:19:39 NOTE: the current --script-security setting may allow this configuration to call user-defined scripts
    
    2024-06-05 09:19:39,924 DEBG 'start-script' stdout output:
    2024-06-05 09:19:39 OpenSSL: error:068000E9:asn1 encoding routines::utctime is too short:
    2024-06-05 09:19:39 OpenSSL: error:0688010A:asn1 encoding routines::nested asn1 error:Field=revocationDate, Type=X509_REVOKED
    2024-06-05 09:19:39 OpenSSL: error:0688010A:asn1 encoding routines::nested asn1 error:Field=revoked, Type=X509_CRL_INFO
    
    2024-06-05 09:19:39,924 DEBG 'start-script' stdout output:
    2024-06-05 09:19:39 OpenSSL: error:0688010A:asn1 encoding routines::nested asn1 error:Field=crl, Type=X509_CRL
    2024-06-05 09:19:39 OpenSSL: error:0488000D:PEM routines::ASN1 lib:
    2024-06-05 09:19:39 CRL: cannot read CRL from file [[INLINE]]
    2024-06-05 09:19:39 CRL: loaded 0 CRLs from file -----BEGIN X509 CRL-----
    MIICWDCCAUAwDQYJKoZIhvcNAQENBQAwgegxCzAJBgNVBAYTAlVTMQswCQYDVQQI
    EwJDQTETMBEGA1UEBxMKTG9zQW5nZWxlczEgMB4GA1UEChMXUHJpdmF0ZSBJbnRl
    cm5ldCBBY2Nlc3MxIDAeBgNVBAsTF1ByaXZhdGUgSW50ZXJuZXQgQWNjZXNzMSAw
    HgYDVQQDExdQcml2YXRlIEludGVybmV0IEFjY2VzczEgMB4GA1UEKRMXUHJpdmF0
    ZSBJbnRlcm5ldCBBY2Nlc3MxLzAtBgkqhkiG9w0BCQEWIHNlY3VyZUBwcml2YXRl
    aW50ZXJuZXRhY2Nlc3MuY29tFw0xNjA3MDgxOTAwNDZaFw0zNjA3MDMxOTAwNDZa
    MCYwEQIBARcMMTYwNzA4MTkwMDQ2MBECAQYXDDE2MDcwODE5MDA0NjANBgkqhkiG
    9w0BAQ0FAAOCAQEAQZo9X97ci8EcPYu/uK2HB152OZbeZCINmYyluLDOdcSvg6B5
    jI+ffKN3laDvczsG6CxmY3jNyc79XVpEYUnq4rT3FfveW1+Ralf+Vf38HdpwB8EW
    B4hZlQ205+21CALLvZvR8HcPxC9KEnev1mU46wkTiov0EKc+EdRxkj5yMgv0V2Re
    ze7AP+NQ9ykvDScH4eYCsmufNpIjBLhpLE2cuZZXBLcPhuRzVoU3l7A9lvzG9mjA
    5YijHJGHNjlWFqyrn1CfYS6koa4TGEPngBoAziWRbDGdhEgJABHrpoaFYaL61zqy
    MR6jC0K2ps9qyZAN74LEBedEfK7tBOzWMwr58A==
    -----END X509 CRL-----
    
    2024-06-05 09:19:39 TCP/UDP: Preserving recently used remote address: [AF_INET]172.64.151.73:1198
    2024-06-05 09:19:39 UDPv4 link local: (not bound)
    2024-06-05 09:19:39 UDPv4 link remote: [AF_INET]172.64.151.73:1198
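
    For what it's worth, one way to inspect the inline CRL that OpenVPN is choking on is to save the PEM block above to a file and decode it with openssl (just a sketch; pia-crl.pem is a placeholder filename):

    # paste the -----BEGIN X509 CRL----- ... -----END X509 CRL----- block into pia-crl.pem, then:
    openssl crl -in pia-crl.pem -noout -text    # dumps the issuer, lastUpdate/nextUpdate, and revoked serials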

     

    The rest of my system is running fine, no issues with other containers. Diags posted in case they help.

    mediavault-diagnostics-20240605-0920.zip

  3. Ok, figured it out. Here is what worked for me:

     

    1. Shut down the VM.
    2. Increase the disk capacity on the Unraid GUI VM page.
    3. Download GParted, move the ISO to the desired location, and set the GParted ISO as the primary boot disk.
    4. Boot into the GParted ISO and extend the partition to take up the new space.
    5. Apply and shut down.
    6. Swap the original vdisk back in as the primary boot drive.
    7. Boot up the VM.
    8. SSH into the VM (I use Moba).
    9. Check filesystem sizes and usage:
      1. df -h
    10. List the block devices and partition sizes:
      1. lsblk
    11. Since this should be LVM, run lvextend:
      1. lvextend -l +100%FREE <logical volume>
      2. Mine specifically was:
        1. lvextend -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv
    12. Resize the file system so it can use all of that space:
      1. resize2fs <file system path>
      2. Mine was:
        1. resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv
    13. "df -h" now shows the correct size (screenshot of the output attached).
    14. resize2fs reports that the file system has been resized successfully, and the new size shows up in "lsblk" again. The in-VM commands are consolidated below.
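
    For reference, the in-VM part (steps 9-12) boils down to something like this (a sketch assuming an Ubuntu guest with the default LVM layout; the logical volume path may differ on other installs):

    df -h                                                      # check current filesystem usage
    lsblk                                                      # confirm the virtual disk now shows the larger size
    lvextend -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv    # grow the logical volume into the new free space
    resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv                # grow the ext4 filesystem to fill the logical volume
    df -h                                                      # verify the new size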

    adjusted disk size success.png

  4. Hello, 

     

    I have PiHole running in a small Ubuntu VM on my Unraid server (6.12.4) and have run out of space on the root vdisk. I was watching the SpaceInvaderOne video from the guides section of the forum; after showing how to increase the disk size in the Unraid GUI, he talked about extending the partition (though his example was a Windows VM).

     

    I looked up a guide on how to do it in Linux, but it says I would have to unmount the disk first. Since this is the root disk that the OS and everything else runs from, I can't unmount it, and of course I can't change anything while the VM is shut down.

     

    So, how would I go about increasing the partition size? Or clone this disk to a larger one?

    adjusted disk size.png

  5. Hello,

     

    Woke up this morning to my disk1 showing as disabled. I've had no issues with it for the last few months and haven't touched any of the cables. My server is in a rack in the basement, so it's highly unlikely that it got bumped or anything. Checking the disk log information, I'm seeing a lot of bad sectors (log attached). I stopped the array and unassigned the disk, but when I tried to re-assign it, the disk was not in the drop-down menu.

     

    Looking through some previous posts on here, I think the disk may be dying. Attempting to run a SMART report immediately fails (whether the disk is in the array or not), and IIRC it is one of the older disks in the array. The diags attached below are from when the array was stopped.

     

    I started the array with the disk removed, and the files from disk1 are emulated. Diags from the running array are also attached. It looks like the disk (sdp) in UD is unmountable or otherwise can't be added to the array? (screenshot attached)

     

    I had another disk show similar errors in the past, which ended up being data corruption. I wanted to check with you guys to see if there is anything I can do before replacing the disk, or if there is a way to get it back up and running. Even if this disk can be fixed, should I be looking into replacing it anyway since it's already on its way out?

    log.txt mediavault-diagnostics-20221212-1045.zip [array-started] mediavault-diagnostics-20221212-1045.zip

  6. Hey there, 

     

    I'm trying to get your DayZ container running on my k3s cluster. It's up and running well, and I just had a question about modding the server.

     

    Where do I put the mod files that I've downloaded from the Steam Workshop? I've seen documentation from Nitrado for setting up mods on their servers that says to put them in the main parent game folder. I see that your container already has an addons folder, so do I take all the .pbo and .bisign files for each mod and put them in there, then put the respective "keys" files in the keys folder?

     

    Do you have any documentation on how to do this?

  7. Hello,

     

    I have certain folders in the Media share that are staying on the cache drive after Mover has run. Everything else moves off of the cache disk to the array without a problem, except for these folders. I believe it is because the same file already exists on the array, so Mover is skipping it or something. If I look in /mnt/user0 vs. /mnt/user vs. /mnt/cache, I can find the same file in the same folder structure (see attached screenshots).
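
    For reference, this is roughly how I compared the copies from the CLI (a sketch; "Media/Some Show/episode.mkv" is just a placeholder path):

    ls -la "/mnt/cache/Media/Some Show/episode.mkv" "/mnt/user0/Media/Some Show/episode.mkv"    # same file exists on both cache and array
    md5sum "/mnt/cache/Media/Some Show/episode.mkv" "/mnt/user0/Media/Some Show/episode.mkv"    # confirm the two copies are identical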

     

    Media share is set to Cache:Yes

    The file permissions of the cache folders almost match the folders on the array, except that the copy under user0 is owned by 'nobody' while the rest are owned by 'hydra'.

     

    Is there any issue with me deleting the files off of the cache drive directly via CLI/Krusader? Or what is the best way to resolve this?

     

    While I'm in here, I see folders in /mnt/cache/appdata of containers that I've since deleted. Can I delete that appdata folder if I know I'm never going to use that again? Or is there a better way to clean that up?

     

    Diags attached.

    cache file.png

    user file permissions.png

    user0 file permissions.png

    mediavault-diagnostics-20220902-0930.zip

  8. Hello,

     

    I recently got a pair of 16TB WD Golds and want to use one as my parity disk (upgrading from a 4TB); once that is complete, I'll add the other 16TB as a data disk and move the old 4TB in as another data disk. I have precleared both 16TB disks and had them mounted in Unassigned Devices as a local backup while I was figuring out some file system issues and waiting for my cloud backup to finish.

     

    Searching through the forum I found this process from @Squid which states:

     

    Quote

    Stop the array 

    Assign the 12TB as parity drive

    Start the Array and let it build the parity drive

     

    After that's done, stop the array and add the original 8TB as a data drive. Unraid will clear it in the background, and when that's done (hours), the storage space is available.

     

    The standard way does give you a fall back, as the original parity drive is available for you to reinstall if something terrible goes wrong.

     

    I stopped the array and, on the Main page, clicked to select the 16TB disk to replace the existing parity drive. Once I did, the screen changed and now none of the disks are showing up:

    (screenshot attached)

     

    Refreshing the page does nothing, and switching to another page and back does nothing. Trying to open the log in the top right shows the window for a second, then it closes without displaying any info. The Unassigned Devices section on Main is just spinning with the wavy Unraid logo. I attempted to download the diagnostics file, but it just sits there after clicking download.

     

     

    I wanted to post in here before I did anything else so I don't potentially mess something up. Thanks for your help.

     

    Unraid Pro v6.10.3

     

     

     

    SOLUTION: This ended up being a DNS issue. I run PiHole as an Ubuntu VM on Unraid and use that for local DNS resolution. When I stopped the array, it shut down the VM, so Unraid would only display the cached webpage data. Once I changed the DNS info on Windows and went to the IP address of my server, the problem resolved itself.

  9. Recently I set up some Proxmox VMs that use an Unraid NFS share as their main disk. On Unraid 6.9.2, having 'Support Hard Links' set to Yes would cause stale file handle errors on Proxmox and in the VMs when Mover ran (from what I can determine), causing them to crash or need a hard reboot. Since setting it to No, they've been fine.

     

    However, I am in the process of setting up Sonarr and Radarr, and while diagnosing some issues I found that their documentation recommends hard links be enabled. Has this issue been resolved in the new 6.10.2 version? Or is it something to do with Unraid itself?

  10. Hello everyone,

    I have 5 NFS shares coming from my Unraid server into my Proxmox cluster of 4 nodes (named Prox-1 through 4 respectively). These shares are:

    • AppData
    • Archives
    • Backups
    • ISOs
    • VM-Datastore

    They connect on Proxmox fine, but the "Backups" share will error out partway through a backup operation run on Proxmox, and the Archives share will randomly disconnect. Attached are the backup operation logs for a couple of the VMs. As you can see, the write info goes to zero after a point and generally stays there. 
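
    For context, this is roughly how I've been checking the mounts from a Proxmox node when a share drops (a sketch; mediavault.local is a placeholder for my Unraid server's address):

    showmount -e mediavault.local    # list the exports the Unraid server is currently offering
    nfsstat -m                       # show the mount options and state of the NFS shares mounted on this node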

     

    I've posted over there as well, but wanted to check with you guys to confirm that it's not something on the Unraid side causing issues.

     

    Is Unraid erroring out, or is something being overloaded? Diags attached if needed.

    VMID 100 backup error log.txt VMID 201 backup error log.txt fstab and config.cfg info.txt mediavault-diagnostics-20220511-1439.zip

  11. 12 minutes ago, JorgeB said:

    Just restart the array, parity is valid.

     

    Ok, the array is started with disk 7 mounted now, thank you. It looks like there are a lot of files in the lost+found folder, and Main - Array also shows a lot of free space on the disk, much more than was there before.

     

    What's my next step? Do I need to go through lost+found on the drive and move files back to where they should be, or will Unraid put back what was there before? Or should I do it from the lost+found share? I know moving files directly on a specific disk is not recommended on Unraid, but I'm not sure if that applies here.

  12. 12 minutes ago, JorgeB said:

    Run it again without -n and without -L, if it didn't finish before and asks for -L again use it.

     

    Ok it looks like it completed, here is the output:

    root@MediaVault:~# xfs_repair -v /dev/md7
    Phase 1 - find and verify superblock...
            - block cache size set to 542384 entries
    Phase 2 - using internal log
            - zero log...
    zero_log: head block 0 tail block 0
            - scan filesystem freespace and inode maps...
            - found root inode chunk
    Phase 3 - for each AG...
            - scan and clear agi unlinked lists...
            - process known inodes and perform inode discovery...
            - agno = 0
            - agno = 1
            - agno = 2
            - agno = 3
            - process newly discovered inodes...
    Phase 4 - check for duplicate blocks...
            - setting up duplicate extent list...
            - check for inodes claiming duplicate blocks...
            - agno = 0
            - agno = 1
            - agno = 2
            - agno = 3
    Phase 5 - rebuild AG headers and trees...
            - agno = 0
            - agno = 1
            - agno = 2
            - agno = 3
            - reset superblock...
    Phase 6 - check inode connectivity...
            - resetting contents of realtime bitmap and summary inodes
            - traversing filesystem ...
            - agno = 0
            - agno = 1
            - agno = 2
            - agno = 3
            - traversal finished ...
            - moving disconnected inodes to lost+found ...
    Phase 7 - verify and correct link counts...
    
            XFS_REPAIR Summary    Tue May 10 11:15:48 2022
    
    Phase           Start           End             Duration
    Phase 1:        05/10 11:15:48  05/10 11:15:48
    Phase 2:        05/10 11:15:48  05/10 11:15:48
    Phase 3:        05/10 11:15:48  05/10 11:15:48
    Phase 4:        05/10 11:15:48  05/10 11:15:48
    Phase 5:        05/10 11:15:48  05/10 11:15:48
    Phase 6:        05/10 11:15:48  05/10 11:15:48
    Phase 7:        05/10 11:15:48  05/10 11:15:48
    
    Total run time:
    done

     

    What's next? Restart the array and rebuild parity?

  13. Just now, JorgeB said:

    No need to rebuild since the disk is enabled, but you do need to run xfs_repair without -n to fix the current corruptions.

     

    I attempted to repair the file system, as I was getting local errors on the monitor attached to the bare-metal machine. When trying to run 'xfs_repair -v /dev/md7', I received this error:

     

    root@MediaVault:~# xfs_repair -v /dev/md7
    Phase 1 - find and verify superblock...
            - block cache size set to 542376 entries
    Phase 2 - using internal log
            - zero log...
    zero_log: head block 159593 tail block 159588
    ERROR: The filesystem has valuable metadata changes in a log which needs to
    be replayed.  Mount the filesystem to replay the log, and unmount it before
    re-running xfs_repair.  If you are unable to mount the filesystem, then use
    the -L option to destroy the log and attempt a repair.
    Note that destroying the log may cause corruption -- please attempt a mount
    of the filesystem before doing this.
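
    For reference, the error is asking for one of two things, roughly like this (a sketch; /mnt/temp is a hypothetical mount point that would need to exist):

    # option 1: mount the filesystem so the log gets replayed, unmount, then repair
    mount /dev/md7 /mnt/temp && umount /mnt/temp
    xfs_repair -v /dev/md7
    # option 2: if the mount fails, zero the log (may cause corruption) and repair
    xfs_repair -vL /dev/md7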

     

    I restarted the array normally, and now disk 7 is showing 'Unmountable: not mounted' in Main - Array (attached).

     

    disk 7 unmountable.png

  14. On 5/6/2022 at 12:46 PM, JorgeB said:

    Correct.

     

    Hey Jorge, I'm trying to figure out some errors with shares that aren't showing up now. When I ran 'ls -lah /mnt', it showed question marks for disk 7 despite it showing OK in Main:

     

    root@MediaVault:~# ls -lah /mnt
    /bin/ls: cannot access '/mnt/disk7': Input/output error
    total 16K
    drwxr-xr-x 19 root   root  380 May  6 10:54 ./
    drwxr-xr-x 21 root   root  480 May 10 09:58 ../
    drwxrwxrwx  1 nobody users  80 May  6 11:04 cache/
    drwxrwxrwx  9 nobody users 138 May  6 11:04 disk1/
    drwxrwxrwx  8 nobody users 133 May  7 16:10 disk10/
    drwxrwxrwx  5 nobody users  75 May  7 16:10 disk11/
    drwxrwxrwx  6 nobody users  67 May  6 11:04 disk2/
    drwxrwxrwx  9 nobody users 148 May  6 11:04 disk3/
    drwxrwxrwx  4 nobody users  41 May  6 11:04 disk4/
    drwxrwxrwx 10 nobody users 166 May  6 11:04 disk5/
    drwxrwxrwx  7 nobody users 106 May  6 11:04 disk6/
    d?????????  ? ?      ?       ?            ? disk7/
    drwxrwxrwx  6 nobody users  73 May  7 16:10 disk8/
    drwxrwxrwx  6 nobody users  67 May  6 11:04 disk9/
    drwxrwxrwt  5 nobody users 100 May  6 13:41 disks/
    drwxrwxrwt  2 nobody users  40 May  6 10:52 remotes/
    drwxrwxrwt  2 nobody users  40 May  6 10:52 rootshare/
    drwxrwxrwx  1 nobody users 138 May  6 11:04 user/
    drwxrwxrwx  1 nobody users 138 May  6 11:04 user0/

     

    I ran the file system check on disk 7 and got this output:

    entry ".." at block 0 offset 80 in directory inode 282079658 references non-existent inode 6460593193
    entry ".." at block 0 offset 80 in directory inode 282079665 references non-existent inode 4315820253
    entry "Season 1" in shortform directory 322311173 references non-existent inode 2234721490
    would have junked entry "Season 1" in directory inode 322311173
    entry "Season 2" in shortform directory 322311173 references non-existent inode 4579464692
    would have junked entry "Season 2" in directory inode 322311173
    entry "Season 3" in shortform directory 322311173 references non-existent inode 6453739819
    would have junked entry "Season 3" in directory inode 322311173
    entry "Season 5" in shortform directory 322311173 references non-existent inode 2234721518
    would have junked entry "Season 5" in directory inode 322311173
    entry "Season 6" in shortform directory 322311173 references non-existent inode 4760222352
    would have junked entry "Season 6" in directory inode 322311173
    would have corrected i8 count in directory 322311173 from 3 to 0
    entry ".." at block 0 offset 80 in directory inode 322311206 references non-existent inode 6453779144
    entry ".." at block 0 offset 80 in directory inode 322311221 references non-existent inode 6460505325
    No modify flag set, skipping phase 5
    Inode allocation btrees are too corrupted, skipping phases 6 and 7
    No modify flag set, skipping filesystem flush and exiting.
    
            XFS_REPAIR Summary    Tue May 10 10:06:15 2022
    
    Phase		Start		End		Duration
    Phase 1:	05/10 10:06:11	05/10 10:06:11
    Phase 2:	05/10 10:06:11	05/10 10:06:11
    Phase 3:	05/10 10:06:11	05/10 10:06:15	4 seconds
    Phase 4:	05/10 10:06:15	05/10 10:06:15
    Phase 5:	Skipped
    Phase 6:	Skipped
    Phase 7:	Skipped
    
    Total run time: 4 seconds

     

    Is there any reason not to run the file system check without the -n flag now? After that completes, would I rebuild the drive like we did before, or how does that work?

     

    New diags attached if needed.

    disk 7 missing info.png

    mediavault-diagnostics-20220510-1009.zip
