Shrink Array question


Adrian


  • 1 month later...
On 10/5/2023 at 1:47 PM, JorgeB said:

Not necessarily, but I recommend starting the array in Maintenance mode to zero an array drive; otherwise you need to first manually unmount that disk.

Shouldn't it work even if I don't, since zeroing will nuke the partition, making a write impossible? That said, I tried to unmount it, but got a 'target is busy' error.
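For the 'target is busy' part, a couple of standard tools can show what is still holding the mount before retrying the unmount (a minimal sketch, assuming fuser/lsof are available and the disk in question is disk4; substitute your own disk number):

fuser -vm /mnt/disk4       # list processes with files open anywhere on that mounted filesystem
lsof +f -- /mnt/disk4      # alternative view of the same thing via lsof
umount -l /mnt/disk4       # last resort: lazy unmount detaches the mountpoint and finishes once nothing uses it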


Running this process now on Unraid 6.12+. All looks well in the terminal, but the Web UI disappeared and is no longer reachable via browser. What would be the proper way to proceed once the actual clearing is finished? Continue with the steps in the post above after rebooting Unraid?
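For anyone re-running the clear, one way to keep the dd alive and watchable even if the GUI or a terminal session drops is to detach it and log its progress (a sketch, assuming Unraid 6.12+ and that the disk being cleared is disk 4; the log path is just an example):

nohup dd bs=1M if=/dev/zero of=/dev/md4p1 status=progress > /tmp/clear-disk4.log 2>&1 &
tail -f /tmp/clear-disk4.log    # follow the progress; Ctrl-C stops the tail, not the dd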

 

Thanks for helping out.

  • 1 month later...

After running the fixed script (with `/md4p1`), I can't stop the array; it keeps looping on this error. Any idea?

 

Dec 11 20:31:35 Srv root: umount: /mnt/disk4: not mounted.
Dec 11 20:31:35 Srv emhttpd: shcmd (219): exit status: 32
Dec 11 20:31:35 Srv emhttpd: Retry unmounting disk share(s)...
Dec 11 20:31:40 Srv emhttpd: Unmounting disks...
Dec 11 20:31:40 Srv emhttpd: shcmd (220): umount /mnt/disk4
Dec 11 20:31:40 Srv root: umount: /mnt/disk4: not mounted.
Dec 11 20:31:40 Srv emhttpd: shcmd (220): exit status: 32
Dec 11 20:31:40 Srv emhttpd: Retry unmounting disk share(s)...
Dec 11 20:31:45 Srv emhttpd: Unmounting disks...
Dec 11 20:31:45 Srv emhttpd: shcmd (221): umount /mnt/disk4
Dec 11 20:31:45 Srv root: umount: /mnt/disk4: not mounted.
Dec 11 20:31:45 Srv emhttpd: shcmd (221): exit status: 32
Dec 11 20:31:45 Srv emhttpd: Retry unmounting disk share(s)...

 

I looked for this:

 

But no process is running.

root@Srv:~# losetup
NAME       SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE        DIO LOG-SEC
/dev/loop1         0      0         1  1 /boot/bzfirmware    0     512
/dev/loop0         0      0         1  1 /boot/bzmodules    0     512
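To cross-check what the kernel itself thinks is mounted versus what emhttpd keeps retrying, something along these lines can help (a sketch, using disk4 from the log above):

grep disk4 /proc/mounts     # the kernel's view: is anything for disk4 actually mounted?
fuser -v /mnt/disk4         # any process with the directory itself open?
ls -la /mnt/disk4           # an empty directory here is consistent with "not mounted"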

 

On 12/12/2023 at 4:57 PM, JorgeB said:

That script is no longer supported. IIRC somebody found a workaround to permit the unmount, but you will need to search the forum.

Are you referring to this thread?

Or do you mean an alternative to using the umount /mnt/disk8 command?

 

I've found that command to be extremely unreliable when running dd bs=1M if=/dev/zero of=/dev/md8 status=progress.

 

What happens currently:

 

  1. Run the umount command.
  2. The disk size changes to a random number.
  3. I then run the dd command, and progress runs at 400 KB/s.
  4. I fail to kill the running dd using every method available (see the sketch at the end of this post).
  5. I stop the array, but it fails.
  6. I do a hard shutdown of the server.
  7. It comes back up.
  8. I try to start the array, but it comes up in some half-broken state where VMs can start yet the GUI still allows you to modify disks.
  9. I then do another reboot through the GUI.
  10. Everything comes back normally.
  11. The disks I'm trying to zero show "unmountable: wrong or no file system".
  12. I run dd on them again and it runs fine.

I did use the command fine previously; the only thing different this time is that I first mounted the disks over SMB to double-check they were empty. Once the current command finishes, I'll try that. Not sure how it makes a difference, but it's the only thing that changed between when it worked and now.
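On the unkillable dd: if the process is stuck in uninterruptible I/O it will ignore every signal until the pending I/O completes, and that state is visible from ps (a sketch):

# a STAT of 'D' means uninterruptible I/O wait; no signal will land until that I/O clears
ps -o pid,stat,wchan:32,cmd -C dd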

 

On 12/14/2023 at 10:06 AM, jkexbx said:

Are you referring to this thread?

Or do you mean an alternative to using the umount /mnt/disk8 command?

[…]

 

I'd mentioned that post previously. Following it still doesn't allow Unraid to stop the array. I suspect the problem is related to the dd command running at root level.

 

No matter what I do, the disk I run the command on continues to see activity at 400 KB/s.
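To pin down where that constant 400 KB/s is going, the kernel's per-device counters can be sampled directly (a sketch; 'sdX' is a placeholder for the device behind the array disk, so substitute the real one):

# Field 10 of /proc/diskstats is the cumulative sectors-written counter; if it keeps
# climbing between samples, something is still writing to that disk.
watch -n 5 'grep -w sdX /proc/diskstats'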

On 12/13/2023 at 9:06 PM, jkexbx said:

I've found that command to be extremely unreliable when running dd bs=1M if=/dev/zero of=/dev/md8 status=progress.

[…]

 

 

So I had a very similar thing happen. The script ran without issue, but when going to unmount my disk, I received the unmount error. I tried the unmount script, but no luck. I was able to shut down my system, but when it came back up, it started a parity check.

 

Should I let the parity check run with the now-zeroed drive? If I get some parity sync errors, should I write the corrections to parity?

 

I'm reluctant to perform the New Config step without running the parity sync. Let me know how to proceed.

nasgard-diagnostics-20231217-0639 (1).zip

  • 4 weeks later...

I had a set of 3 drives to remove.  I followed the instructions laid out in the previous posts exactly and it worked very well for my first drive.  I created a new config, and then as a follow-up confirmation ran a parity check and all was good.

 

Then I started on the second drive. Unfortunately, after the zeroing completed I noticed that creating a new config sets the md_write tunable back to Auto from Turbo. I'm wondering if this will break anything, or if the process will just take longer. I noticed that with Auto, only the drive being zeroed and my two parity disks are being read during the operation; with Turbo it was reading from all disks and writing to the drive being zeroed and the parity drives.

 

I'm OK with it taking longer (it's done now).  I just want to confirm I didn't irreparably break anything.
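On getting Turbo back after a New Config reset: if memory serves (please verify against your Unraid release before relying on it), the same tunable can also be flipped from the CLI, roughly like this:

mdcmd set md_write_method 1   # reportedly selects reconstruct (turbo) write
mdcmd set md_write_method 0   # reportedly returns the tunable to auto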

  • 2 months later...
On 12/3/2017 at 8:48 AM, JorgeB said:

I never used the script and it may have issues with the latest unRAID, but you can still do it manually (the array will be inaccessible during the clear):

 

1. If disabled, enable reconstruct write (aka turbo write): Settings -> Disk Settings -> Tunable (md_write_method)

2. Start the array in Maintenance mode (it will not be accessible during the clearing).

3. Identify which disk you're removing

4. For Unraid <6.12, type in the CLI:

dd bs=1M if=/dev/zero of=/dev/mdX status=progress

   For Unraid 6.12+, type in the CLI:

dd bs=1M if=/dev/zero of=/dev/mdXp1 status=progress

   replacing X with the correct disk number.

 

5. Wait, this will take a long time, about 2 to 3 hours per TB.

6. When the command completes, Stop array, go to Tools -> New Config -> Retain current configuration: All -> Apply

7. Go back to the Main page and unassign the cleared device. (With dual parity, disk order has to be maintained, including any empty slot(s).)

8. Check the "Parity is already valid." checkbox and start the array.
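For example, on Unraid 6.12+ step 4 for disk 4 would look roughly like this (a sketch; the disk number is only an example, and the dd irreversibly wipes that device):

blockdev --getsize64 /dev/md4p1                       # sanity check that the device exists and has the expected size
dd bs=1M if=/dev/zero of=/dev/md4p1 status=progress   # zero the disk while parity stays in sync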

 

 

 

Hi, I want to remove 5 disks. Can I run this for all 5 disks at the same time, or do I need to do it one disk at a time?

 

Thanks

