
Posts posted by zaniix

  1. 1 hour ago, jonathanm said:

    Simplest way is to replace the cache drive using the method in the wiki. In short:

    1. disable docker and vm services in settings. That should remove those items from the GUI menu.

    2. set ALL user shares to Use cache: Yes, then invoke the mover. If you did those two steps properly, and nothing is writing to the server for the duration, the cache drive should now be empty except for the docker.img, which will get recreated when you re-enable the service in step 7.

    3. shut down and physically replace cache drive

    4. start array and format new cache drive

    5. set shares that should live on cache to cache prefer

    6. run mover

    7. enable docker and vm services

    8. done

     

    You can change /mnt/cache/appdata to /mnt/user/appdata if you wish, but I'd wait until you get the cache drive swapped. If you have any other files besides the docker.img on the root of the cache drive, those will need to be dealt with after step 2.

    I thought that if I had a path specified like /mnt/cache/appdata it would only look at the cache drive and not the array. Are you saying that /mnt/user/appdata and /mnt/cache/appdata are the same?
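
    If I understand the layout right, /mnt/user/appdata is the merged user-share view (cache plus all array disks), while /mnt/cache/appdata is the cache drive alone. A quick check like this should show where the files actually sit (just a sketch, disk numbers assumed from my setup):

        ls /mnt/cache/appdata   # what is physically on the cache drive
        ls /mnt/disk1/appdata   # what (if anything) is on array disk 1
        ls /mnt/user/appdata    # merged view of the same folder across cache and array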

     

     

  2. I made some mistakes, and now I need some help fixing them in the simplest way.

    I need to replace my cache drive with a larger one, but I realized that I used the cache drive as the location for
    /mnt/cache/appdata/, which some Dockers are using.

    I also set Docker vDisk location:    /mnt/cache/docker.img

    Can I create new shares for these on the array, stop the array, copy the files to the new shares, change the Docker location settings, and start the array?
    Then swap the cache disk, set those new shares to cache: Prefer, and invoke the mover so the files will live on the cache drive but have an array path.

     

    Would that work?
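
    Roughly, the copy step I have in mind would look something like this (just a sketch, with the Docker and VM services stopped first; the target folder names are only examples):

        mkdir -p /mnt/disk1/appdata_copy /mnt/disk1/system_copy
        rsync -avPX /mnt/cache/appdata/ /mnt/disk1/appdata_copy/    # copy appdata to a new share on an array disk
        cp /mnt/cache/docker.img /mnt/disk1/system_copy/docker.img  # copy the Docker vDisk the same way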

  3. 8 minutes ago, trurl said:

    New Config lets you change your disk assignments however you want, then optionally (by default) rebuilds parity. That is all it does. Some people seem to be afraid of it thinking it will reset everything. Just make sure you don't assign any disk with data on it to any parity slot.

    1. Go to Settings - Disk Settings and make sure Default file system is XFS.
    2. Stop the array.
    3. Go to Tools - New Config, keep all assignments. It will let you change them however you want before starting the array.
    4. Assign new disk to parity slot, old parity to new data slot, leave all other assignments the same.
    5. Start the array but DON'T check the box that says parity is valid, because you (obviously) need to build parity.
    6. When parity build completes, format "new" data disk.

    Since parity will be built with the "new" disk already in the array, everything will be in sync.

     

    Then format just writes an empty filesystem to that new disk. (That is all "format" does; many people are confused about that.) An empty filesystem is just the filesystem metadata needed, some of which represents an empty top-level folder ready for new folders and files.

     

    Format won't take very long because it doesn't have much to write. Since Unraid treats that write operation just as it does any other, by updating parity, everything is still in sync.

     

    Result is new parity and new empty XFS disk ready to hold the data from another disk so you can format it.

     

    Thank you, that clears that part up. I just know how easy it can be to forget a small detail and destroy all the data.

     

     

  4. 1 hour ago, trurl said:

    If you do these as 2 separate steps, then you will have to let Unraid clear the "new" data disk so parity remains valid.

     

    If instead you New Config, assign new parity to parity slot and old parity to new data slot, then when parity rebuilds everything will already be in sync and you can just format "new" data disk.

     

    Rest of your plan sounds good.

    So are you saying that I should move the unassigned drive to the parity slot, move the old parity drive to a data slot, and do a parity copy?

    Then start the array and format?

     

     

     

     

  5. I have searched around a bit and I think I know what I need to do, but I am hoping some of you who are more experienced could chime in if my plan seems stupid.

    Current setup
    unRaid 6.8.3
    One parity drive, 3TB
    Two data drives, 3TB each, formatted ReiserFS
    New 4TB drive, not installed yet

     

    End goal
    1) new 4TB drive as parity
    2) old parity drive as a data drive, formatted XFS
    3) all data drives reformatted to XFS

    4) end result: one new 4TB parity drive and three 3TB data drives

    I realize this is multiple processes, but I want to make sure I am doing this in the most efficient and safest manner,
    with the least risk to my data.

     

    My Plan
    1) add new drive as unassigned and preclear just to make sure it is safe to use
    2) stop all backups to unRaid, shut down all VMs and Docker containers
    3) turn off auto start, shut down, remove the old parity drive
    4) boot, add the new drive to the parity slot, start the array, wait for the parity rebuild

    5) shut down and connect the old parity drive, add it as a data drive, add an exclusion to all shares for this drive to prevent any data being written to it,
    start the array, format XFS
      following this process: https://wiki.unraid.net/File_System_Conversion#Mirroring_procedure_to_convert_drives
      in short, copying data to the empty drive, then swapping positions, formatting, and restarting the array each time (rough sketch of the copy step below)

    6) restart any VMs, Dockers, and backup jobs

    7) backup flash drive config

     

     

    I know this topic has been covered a lot, but there is always a detail or two that I find unclear, so I appreciate any help.
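
    For step 5, the copy part of the mirroring procedure comes down to something like this for each drive (sketch only; the disk numbers are placeholders, and the wiki page above has the exact commands and checks):

        # copy everything from the ReiserFS disk to the freshly formatted, empty XFS disk
        rsync -avPX /mnt/disk3/ /mnt/disk4/
        # verify the copy, then the emptied ReiserFS disk gets formatted XFS and becomes the target for the next drive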

  6. I ran a preclear on a drive I just replaced and do not trust, in an attempt to understand the preclear results so I can test all drives I use going forward.

     

    Can you help me understand these results? Is this good or bad, and what tells me that?

     

     

    ========================================================================1.15
    == invoked as: ./preclear_disk.sh -A /dev/sdd
    == WDCWD5000AAKS-00YGA0   WD-WCAS80792713
    == Disk /dev/sdd has been successfully precleared
    == with a starting sector of 64 
    == Ran 1 cycle
    ==
    == Using :Read block size = 1000448 Bytes
    == Last Cycle's Pre Read Time  : 7:20:48 (18 MB/s)
    == Last Cycle's Zeroing time   : 2:13:43 (62 MB/s)
    == Last Cycle's Post Read Time : 20:00:24 (6 MB/s)
    == Last Cycle's Total Time     : 29:35:54
    ==
    == Total Elapsed Time 29:35:54
    ==
    == Disk Start Temperature: 41C
    ==
    == Current Disk Temperature: -->41<--C, 
    ==
    ============================================================================
    ** Changed attributes in files: /tmp/smart_start_sdd  /tmp/smart_finish_sdd
                    ATTRIBUTE   NEW_VAL OLD_VAL FAILURE_THRESHOLD STATUS      RAW_VALUE
          Raw_Read_Error_Rate =   199     200           51        ok          180125
        Reallocated_Sector_Ct =   199     200          140        ok          1
               Power_On_Hours =    10      10            0        near_thresh 66111
      Reallocated_Event_Count =   199     200            0        ok          1
       Current_Pending_Sector =   200     196            0        ok          9
        Multi_Zone_Error_Rate =     1       1           51        FAILING_NOW 22423
    
    *** Failing SMART Attributes in /tmp/smart_finish_sdd *** 
    ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
    200 Multi_Zone_Error_Rate   0x0008   001   001   051    Old_age   Offline  FAILING_NOW 22423
    
    327 sectors were pending re-allocation before the start of the preclear.
    331 sectors were pending re-allocation after pre-read in cycle 1 of 1.
    0 sectors were pending re-allocation after zero of disk in cycle 1 of 1.
    9 sectors are pending re-allocation at the end of the preclear,
        a change of -318 in the number of sectors pending re-allocation.
    0 sectors had been re-allocated before the start of the preclear.
    1 sector is re-allocated at the end of the preclear,
        a change of 1 in the number of sectors re-allocated. 
    ============================================================================
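
    To keep an eye on those attributes outside of preclear, the SMART table can be pulled again with smartctl (assuming the drive is still /dev/sdd):

        smartctl -A /dev/sdd   # attribute table, including Multi_Zone_Error_Rate and Current_Pending_Sector
        smartctl -H /dev/sdd   # overall SMART health self-assessment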
    

     

     

  7. I do not want to risk messing up my existing setup until I have a stable V6 running. My current plan is to build a completely new rig with V6 and then figure out the best way to get the data there.

     

    I may have to just use Teracopy, but this is not efficient, will take a toll on my home network, and will take days.

     

    You are correct I have no plan to use the old drives in the new system.

  8. I did some searches and I do not feel like I found an answer that I understand, so that is why I am posting.

     

    Current setup is unRaid 4.7 with 3 drives. I want to boot the system with V6 and 3 new drives, and then mount the old drives one at a time to move all the data to the new unRaid. I just have no idea how to do this.

     

    I considered just upgrading but it seemed like there were too many potential issues and copying the data over the network will take too long.

     

    From V6, how can I simply mount an old unRaid drive from 4.7 and directly copy the data to one of the new unRaid drives?

     

    thanks
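
    What I picture is roughly this (sketch only; the device name and share are placeholders, and I would mount the old disk read-only):

        mkdir -p /mnt/old
        mount -r -t reiserfs /dev/sdX1 /mnt/old       # 4.7-era data disks are ReiserFS
        rsync -avPX /mnt/old/ /mnt/user/some_share/   # copy onto the new V6 array
        umount /mnt/old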

     
