JonathanM

Posts posted by JonathanM

  1. 19 hours ago, xrqp said:

Edit: I found this in Emby. I will UNcheck it in the hope that the next update does not create the port 1900 mapping for DLNA.

The application itself cannot change the container template. The template content is up to the template's publisher, and you can override update changes by editing your local copy of the template. I forget exactly what needs to change, but a quick search should turn up what you need.
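Something like the following untested sketch could do that local edit. The path, filename, and the assumption that the port shows up as a <Config Type="Port"> entry are based on the usual dockerMan template layout, not on this specific template, so verify against your own file before touching anything:

```python
# Hypothetical sketch: strip a port-1900 mapping out of a local dockerMan template.
# The folder and the <Config Type="Port"> structure are assumptions about the usual
# Unraid user-template layout; check your own XML before running anything like this.
import xml.etree.ElementTree as ET

# Example filename only; user templates normally live in this folder on the flash drive.
TEMPLATE = "/boot/config/plugins/dockerMan/templates-user/my-EmbyServer.xml"

tree = ET.parse(TEMPLATE)
root = tree.getroot()

for config in list(root.findall("Config")):
    if config.get("Type") == "Port" and config.get("Target") == "1900":
        root.remove(config)  # drop the DLNA/SSDP port mapping
        print("Removed port mapping:", config.get("Name"))

tree.write(TEMPLATE, xml_declaration=True, encoding="UTF-8")
```

Re-check the container's port mappings on the Docker tab afterwards to confirm the change took.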

I would also strongly recommend migrating fully away from ReiserFS to XFS before you move on to the next stage of getting things running. You will need to systematically copy files off the ReiserFS drives onto your XFS drives, then format the ReiserFS drives you just copied from to XFS to make room for the next batch of copies. (A bare-bones sketch of one copy pass is below.)

     

    There is a whole thread dedicated to the different methods and techniques used for this.
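As a bare-bones illustration of a single pass of that disk-to-disk copy, not a replacement for the methods covered in that thread, and with placeholder mount points:

```python
# Minimal sketch: copy one ReiserFS data disk onto an XFS disk, preserving timestamps,
# and report anything that fails so it can be retried before the source is reformatted.
# /mnt/disk3 and /mnt/disk5 are placeholder mount points; substitute your own disks.
import shutil
from pathlib import Path

SRC = Path("/mnt/disk3")   # ReiserFS disk being emptied
DST = Path("/mnt/disk5")   # XFS disk receiving the data

failures = []
for src_file in SRC.rglob("*"):
    rel = src_file.relative_to(SRC)
    dst_file = DST / rel
    if src_file.is_dir():
        dst_file.mkdir(parents=True, exist_ok=True)
        continue
    try:
        dst_file.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src_file, dst_file)   # copy2 keeps modification times
    except OSError as err:
        failures.append((rel, err))

print(f"copied with {len(failures)} failures")
for rel, err in failures:
    print(f"  FAILED {rel}: {err}")
```

Only format the source disk once you have verified the copy is complete.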

  3. 41 minutes ago, Vibranze said:

    Will it detect the 4 disks in the HDD enclosure connected via USB-C? 

    Maybe, depends on the enclosure. USB is not recommended for Unraid's main array for many reasons.

     

I recommend watching a few YouTube videos on Unraid, so you can get a feel for what it is and isn't.

     

    Also, https://wiki.unraid.net/Manual/Overview

     

The way Unraid's main array works means all the disks must be accessed simultaneously for parity builds and recovery operations, and USB is bad at that type of I/O. Some enclosures and USB controllers also have a bad habit of resetting connections, which can cause a drive to get kicked from the array. (The toy sketch at the end of this reply shows why every disk has to be read at once.)

     

    Some have managed to get USB to work somewhat, but it's definitely not ideal.
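For anyone curious why the whole array has to be read at once, here is a toy sketch of XOR-style single parity. Real parity runs sector by sector across the entire disks, and dual parity adds a second, different calculation, but the principle is the same:

```python
# Toy sketch of XOR-based single parity, the idea behind Unraid's parity disk.
# A parity build reads every data disk; a rebuild reads every surviving disk plus
# parity, which is why flaky USB links during that much simultaneous I/O are a problem.

def xor_all(chunks: list[bytes]) -> bytes:
    """XOR the same-offset byte from every chunk."""
    out = bytearray(len(chunks[0]))
    for chunk in chunks:                 # one read from EVERY disk
        for i, b in enumerate(chunk):
            out[i] ^= b
    return bytes(out)

disks = [b"\x10\x20", b"\x0f\x0f", b"\xa0\x0b"]   # three tiny pretend disks
parity = xor_all(disks)                           # parity build: read all data disks
rebuilt = xor_all(disks[1:] + [parity])           # rebuild disk 0: read the rest + parity
assert rebuilt == disks[0]
print("disk 0 recovered:", rebuilt.hex())
```

Scale that up and it is clear that losing one disk means every remaining disk plus parity must be read end to end to rebuild it.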

    • Like 1
  4. 47 minutes ago, JGKahuna said:

Now the cache is full and mover will not run to get the files onto the array

You probably have a share set to cache prefer, which moves files from the array onto the cache drive, instead of cache yes, which puts new files on the cache and later moves them to the array.

    Attach diagnostics to your next post in this thread if you can't solve it.

  5. Parity only applies to the main array, not to pools.

     

Pools can be configured with various BTRFS RAID levels, but I don't think it's wise to attempt to expand a pool that currently has errors; you will probably end up worse off than before. I think your best option is to follow the replace-cache-disk procedure, which entails sending the data to the main array by setting any cache prefer or cache only shares to cache yes and then running the mover. Typically you need to disable the Docker engine and VM services in the configuration, so their tabs are gone from the GUI, before you run the mover.

     

    Attach diagnostics to your next post in this thread for better educated opinions.

If you set a new config, assign a blank 18TB as parity, your current 18TB as disk1, and the rest of the drives with data you want to keep as disk2, disk3, etc., and then build parity, that should accomplish what I think you want.

     

You will need to copy data from disk to disk manually, and since you already started with one 18TB in UD, you may as well keep using that manual process to copy the contents of all the disks you want to remove. Then build parity with only the drives you want to keep. If you copy instead of move, you can keep the removed drives as a backup.

  7. 1 hour ago, Caennanu said:

    swapping the new disks per slot and formatting them

    Formatting is NEVER part of a recovery. The emulation includes the file system, so when you format, you tell Unraid to empty the filesystem.

     

Hopefully you can read all the individual disks and recover your data; it doesn't really sound like you had real disk failures, more like failures to read or write.

     

    Posting diagnostics may shed some light on what's currently available, but since you've rebooted multiple times since this started, the original errors that started this are lost.

  8. On 6/17/2022 at 4:24 PM, JonathanM said:
    On 6/17/2022 at 3:52 PM, sysop-gwg said:

    Should we increase the size of our Cache?  As we started this thread, we purchased a 1gb drive to do so.

In my opinion, if you truly need VM primary drives larger than 265GB, then I would add the 1TB (1gb typo?) as an additional pool dedicated to just the VMs' primary vdisks (the domains share). Then you can use your current cache drive as storage for the appdata and system shares, and as a temporary home for work in progress. Use the storage array for finished work, archives, and reference materials.

     

    • Like 1