Posts posted by zaniix
-
I made some mistakes and now I need some help fixing them in the simplest way.
I need to replace my cache drive with a larger one, but I realized that I used the cache drive as the location for /mnt/cache/appdata/, which some Dockers are using. I also set the Docker vDisk location to /mnt/cache/docker.img.
Can I create new shares for these on the array, stop the array, copy the files to the new shares, change the Docker location settings, and start the array?
Then swap the cache disk, set those new shares to cache "prefer", and invoke the mover so the files will live on the cache drive but have an array path. Would that work?
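If it helps, the copy step could look something like the sketch below. It uses temporary directories to stand in for the real mounts so it is safe to run anywhere; on the actual server the source would be /mnt/cache/appdata and the destination an array path such as /mnt/disk1/appdata (with Docker stopped first so no files are in use). The "someapp" folder is just a made-up example.

```shell
#!/bin/sh
# Sketch of moving appdata off the cache drive. Temp directories stand in
# for the real mounts so this demo is safe to run; on the server you would
# use /mnt/cache/appdata as the source and an array path as the destination.
DEMO=$(mktemp -d)
mkdir -p "$DEMO/cache/appdata/someapp" "$DEMO/disk1/appdata"
echo "settings" > "$DEMO/cache/appdata/someapp/config.xml"

# -a preserves permissions/ownership/timestamps; the trailing slash on the
# source copies the folder's contents rather than the folder itself.
rsync -a "$DEMO/cache/appdata/" "$DEMO/disk1/appdata/"

ls "$DEMO/disk1/appdata/someapp"   # config.xml now exists on the "array"
```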
-
8 minutes ago, trurl said:
New Config lets you change your disk assignments however you want, then optionally (by default) rebuilds parity. That is all it does. Some people seem to be afraid of it, thinking it will reset everything. Just make sure you don't assign any disk with data on it to any parity slot.
- Go to Settings - Disk Settings and make sure Default file system is XFS.
- Stop the array.
- Go to Tools - New Config, keep all assignments. It will let you change them however you want before starting the array.
- Assign new disk to parity slot, old parity to new data slot, leave all other assignments the same.
- Start the array but DON'T check the box that says parity is valid, because you (obviously) need to build parity.
- When parity build completes, format "new" data disk.
Since parity will be built with the "new" disk already in the array, everything will be in sync.
Then format just writes an empty filesystem to that new disk. (That is all "format" does, many people are very confused about that). An empty filesystem is just the filesystem metadata needed, some of which represents an empty top level folder ready for new folders and files.
Format won't take very long because it doesn't have much to write. Since Unraid treats that write operation just as it does any other, by updating parity, everything is still in sync.
Result is new parity and new empty XFS disk ready to hold the data from another disk so you can format it.
Thank you, that clears that part up. I just know how easy it can be to forget a small detail and destroy all the data.
-
1 hour ago, trurl said:
If you do these as 2 separate steps, then you will have to let Unraid clear the "new" data disk so parity remains valid.
If instead you New Config, assign new parity to parity slot and old parity to new data slot, then when parity rebuilds everything will already be in sync and you can just format "new" data disk.
Rest of your plan sounds good.
So are you saying that I should move the unassigned drive to the parity slot and the old parity drive to a data drive slot and let parity rebuild?
Then start the array and format?
-
I have searched around a bit and I think I know what I need to do, but I am hoping some of you who are more experienced could chime in if my plan seems stupid.
Current setup
unRaid 6.8.3
One parity drive, 3TB
Two data drives, 3TB each, formatted ReiserFS
New drive, 4TB, not installed yet
End goal
1) new 4TB drive as parity
2) old parity drive as a data drive, formatted XFS
3) all data drives reformatted to XFS
4) end result: one new 4TB parity drive and three 3TB data drives
I realize this is multiple processes, but I want to make sure I am doing this in the most efficient and safest manner, with the least risk to my data.
My Plan
1) add the new drive as unassigned and preclear it, just to make sure it is safe to use
2) stop all backups to unRaid, shut down all VMs and Docker containers
3) turn off auto-start, shut down, remove the old parity drive
4) boot, add the new drive to the parity slot, start the array, wait for the parity rebuild
5) shut down and connect the old parity drive, add it as a data drive, add an exclusion to all shares for this drive to prevent any data being written to it, start the array, format it XFS
following this process https://wiki.unraid.net/File_System_Conversion#Mirroring_procedure_to_convert_drives
in short: copying data to the empty drive, then swapping positions, formatting, and restarting the array each time
6) restart any VMs, Dockers, and backup jobs
7) back up the flash drive config
I know this topic has been covered a lot, but there is always a detail or two that I find unclear, so I appreciate any help.
-
Thank you. I picked this old drive, which I had removed because I suspected it was flaky, so I could see what a negative result looks like.
Planning on replacing all my drives with some new HGST 3TB NAS drives soon.
Thanks!
-
I ran a preclear on a drive I just replaced and do not trust, in an attempt to understand the preclear results so I can test all drives I use moving forward.
Can you help me understand these results? Is this good or bad, and what tells me that?
======================================================================== 1.15
== invoked as: ./preclear_disk.sh -A /dev/sdd
== WDCWD5000AAKS-00YGA0   WD-WCAS80792713
== Disk /dev/sdd has been successfully precleared
== with a starting sector of 64
== Ran 1 cycle
==
== Using :Read block size = 1000448 Bytes
== Last Cycle's Pre Read Time  : 7:20:48 (18 MB/s)
== Last Cycle's Zeroing time   : 2:13:43 (62 MB/s)
== Last Cycle's Post Read Time : 20:00:24 (6 MB/s)
== Last Cycle's Total Time     : 29:35:54
==
== Total Elapsed Time 29:35:54
==
== Disk Start Temperature: 41C
==
== Current Disk Temperature: -->41<--C,
==
============================================================================
** Changed attributes in files: /tmp/smart_start_sdd  /tmp/smart_finish_sdd
                ATTRIBUTE    NEW_VAL OLD_VAL FAILURE_THRESHOLD STATUS      RAW_VALUE
      Raw_Read_Error_Rate  =   199     200        51           ok          180125
    Reallocated_Sector_Ct  =   199     200       140           ok          1
           Power_On_Hours  =    10      10         0           near_thresh 66111
  Reallocated_Event_Count  =   199     200         0           ok          1
   Current_Pending_Sector  =   200     196         0           ok          9
    Multi_Zone_Error_Rate  =     1       1        51           FAILING_NOW 22423
*** Failing SMART Attributes in /tmp/smart_finish_sdd ***
ID# ATTRIBUTE_NAME          FLAG   VALUE WORST THRESH TYPE    UPDATED WHEN_FAILED RAW_VALUE
200 Multi_Zone_Error_Rate   0x0008 001   001   051    Old_age Offline FAILING_NOW 22423

327 sectors were pending re-allocation before the start of the preclear.
331 sectors were pending re-allocation after pre-read in cycle 1 of 1.
0 sectors were pending re-allocation after zero of disk in cycle 1 of 1.
9 sectors are pending re-allocation at the end of the preclear,
    a change of -318 in the number of sectors pending re-allocation.
0 sectors had been re-allocated before the start of the preclear.
1 sector is re-allocated at the end of the preclear,
    a change of 1 in the number of sectors re-allocated.
============================================================================
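For what it's worth, the most alarming signal in a report like this is any attribute flagged FAILING_NOW. A small awk sketch for pulling those lines out of a SMART attribute table; the sample data line is pasted from the report, and on a live disk `smartctl -A /dev/sdX` emits the same column layout:

```shell
#!/bin/sh
# Print any SMART attribute currently failing its threshold. Column 9 is
# WHEN_FAILED and column 10 is the raw value in this table layout.
awk '$9 == "FAILING_NOW" { print $2, "raw value =", $10 }' <<'EOF'
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
200 Multi_Zone_Error_Rate 0x0008 001 001 051 Old_age Offline FAILING_NOW 22423
EOF
# prints: Multi_Zone_Error_Rate raw value = 22423
```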
-
I do not want to risk messing up my existing setup until I have a stable v6 running. My current plan is to build a completely new rig with v6, and then I just have to figure out the best way to get the data there.
I may have to just use TeraCopy, but this is not efficient, will take a toll on my home network, and will take days.
You are correct, I have no plan to use the old drives in the new system.
-
I did some searches and I do not feel like I found an answer that I understand, so that is why I am posting.
My current setup is unRaid 4.7 with 3 drives. I want to boot the system with v6 and 3 new drives, then mount the old drives one at a time to move all the data to the new unRaid. I just have no idea how to do this.
I considered just upgrading, but it seemed like there were too many potential issues, and copying the data over the network would take too long.
From v6, how can I simply mount an old unRaid drive from 4.7 and directly copy the data to one of the new unRaid drives running v6?
Thanks
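In case it helps frame the question: on v6 the copy itself usually boils down to mounting the old ReiserFS partition read-only and rsyncing it onto an array disk. In this sketch the commands are printed rather than executed, so it is safe to run anywhere; /dev/sdX1 is a placeholder for the old disk's partition, not a real device name, and /mnt/disk1 stands for whichever new array disk receives the data.

```shell
#!/bin/sh
# Printed, not executed, so this sketch can run anywhere. /dev/sdX1 is a
# placeholder; on the real box, identify the old disk first (e.g. fdisk -l).
CMDS='mkdir -p /mnt/old
mount -t reiserfs -o ro /dev/sdX1 /mnt/old   # read-only protects the source
rsync -a /mnt/old/ /mnt/disk1/               # copy onto a new array disk
umount /mnt/old'
echo "$CMDS"
```

Mounting the source read-only means a typo in the rsync direction cannot damage the old data.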
Move docker locations and replace cache drive help
in General Support
Posted
I thought if I had a path specified like /mnt/cache/appdata, it would only look at the cache drive and not the array. Are you saying that /mnt/user/appdata and /mnt/cache/appdata are the same?
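One way to picture the behavior being discussed: /mnt/user/appdata is Unraid's merged (union) view of the appdata folder on every device, cache included, so a file written via /mnt/cache/appdata also shows up under /mnt/user/appdata. A tiny simulation of that merged view, with temp directories standing in for the real mounts and made-up filenames:

```shell
#!/bin/sh
# Simulate Unraid's user-share union: the same top-level folder on the
# cache and on an array disk appears merged under /mnt/user. Temp dirs
# stand in for the real mounts.
DEMO=$(mktemp -d)
mkdir -p "$DEMO/cache/appdata" "$DEMO/disk1/appdata"
touch "$DEMO/cache/appdata/on_cache.cfg" "$DEMO/disk1/appdata/on_array.cfg"

# /mnt/user/appdata would show both files; approximate that merged view:
ls "$DEMO/cache/appdata" "$DEMO/disk1/appdata" | grep '\.cfg' | sort
# prints: on_array.cfg
#         on_cache.cfg
```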