JonathanM Posted August 23, 2023
7 minutes ago, dboonthego said: Is "no space left on device" normal output?
Yep
JorgeB Posted August 23, 2023
8 hours ago, dboonthego said: Is "no space left on device" normal output?
Yep.
flyize Posted October 3, 2023
If all my Docker containers and VMs are on a cache drive, and I go into Global Share Settings and exclude a drive that I want to zero - can that be done online relatively safely? I really don't want to have to be without the server for two days while the drive zeroes out.
dboonthego Posted October 5, 2023
On 10/3/2023 at 11:52 AM, flyize said: can that be done online relatively safely?
You're zeroing the disk including the partition table, so excluding it probably isn't necessary, but yes, that stops new writes to the excluded disk.
flyize Posted October 5, 2023
Any reason the standard wisdom is that the Docker and VM services need to be shut down to zero out a drive?
JorgeB Posted October 5, 2023
Not necessarily, but I recommend starting the array in maintenance mode to zero an array drive; otherwise you need to manually unmount that disk first.
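If you'd rather keep the array online, the two services can also be stopped from the CLI; a minimal sketch, assuming the stock Slackware rc scripts that Unraid ships with:

```bash
# Stop the Docker and VM (libvirt) services so nothing holds files open on the array
/etc/rc.d/rc.docker stop
/etc/rc.d/rc.libvirt stop
```

This only stops the services; the array itself stays started, so anything else writing to that disk still needs to be dealt with.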
flyize Posted October 10, 2023
On 10/5/2023 at 1:47 PM, JorgeB said: Not necessarily, but I recommend starting the array in maintenance mode to zero an array drive; otherwise you need to manually unmount that disk first.
Shouldn't it work even if I don't, since zeroing will nuke the partition, making a write impossible? That said, I tried to unmount it, but got a 'target is busy' error.
JorgeB Posted October 10, 2023
2 minutes ago, flyize said: Shouldn't it work even if I don't, since zeroing will nuke the partition, making a write impossible?
It will likely cause a few sync errors due to the filesystem being mounted and not correctly unmounted.
flyize Posted October 10, 2023
36 minutes ago, JorgeB said: It will likely cause a few sync errors due to the filesystem being mounted and not correctly unmounted.
How do I unmount it then? lsof is blank.
JorgeB Posted October 10, 2023
If the disk has no open files you can use `umount /mnt/disk#`
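A minimal sketch of the check-then-unmount sequence, assuming the disk being removed is disk 4 (adjust the number to yours):

```bash
# List anything holding files open under the mount point; empty output is what you want
lsof +D /mnt/disk4

# If nothing was listed, the unmount should go through
umount /mnt/disk4
```

If umount still reports 'target is busy', something lsof can't see (an NFS export, for example) may still be pinning the mount.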
MobileDude Posted October 18, 2023
Running this process now on Unraid 6.12+. All looks well in the terminal screen. However, the Web UI disappeared and is no longer reachable via browser. What would be the proper way to proceed once the actual clearing is finished? Proceed with the steps in the above post, after rebooting Unraid? Thanks for helping out.
JorgeB Posted October 18, 2023
See if you can access the server using SSH and grab the diagnostics.
MobileDude Posted October 18, 2023
Decided to move forward. Rebooted, did the new config, etc., and finally removed the drive; all is well now. Thanks for the write-up!
Modz Posted December 11, 2023 (edited)
After running the fixed script (with `/md4p1`), I can't stop the array; it's stuck in the following error loop. Any idea?

Dec 11 20:31:35 Srv root: umount: /mnt/disk4: not mounted.
Dec 11 20:31:35 Srv emhttpd: shcmd (219): exit status: 32
Dec 11 20:31:35 Srv emhttpd: Retry unmounting disk share(s)...
Dec 11 20:31:40 Srv emhttpd: Unmounting disks...
Dec 11 20:31:40 Srv emhttpd: shcmd (220): umount /mnt/disk4
Dec 11 20:31:40 Srv root: umount: /mnt/disk4: not mounted.
Dec 11 20:31:40 Srv emhttpd: shcmd (220): exit status: 32
Dec 11 20:31:40 Srv emhttpd: Retry unmounting disk share(s)...
Dec 11 20:31:45 Srv emhttpd: Unmounting disks...
Dec 11 20:31:45 Srv emhttpd: shcmd (221): umount /mnt/disk4
Dec 11 20:31:45 Srv root: umount: /mnt/disk4: not mounted.
Dec 11 20:31:45 Srv emhttpd: shcmd (221): exit status: 32
Dec 11 20:31:45 Srv emhttpd: Retry unmounting disk share(s)...

I looked for this, but no process is running:

root@Srv:~# losetup
NAME SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE DIO LOG-SEC
/dev/loop1 0 0 1 1 /boot/bzfirmware 0 512
/dev/loop0 0 0 1 1 /boot/bzmodules 0 512

Edited December 11, 2023 by Modz
JorgeB Posted December 12, 2023
That script is no longer supported; IIRC somebody found a workaround to permit the unmount, but you will need to search the forum.
jkexbx Posted December 14, 2023
On 12/12/2023 at 4:57 PM, JorgeB said: That script is no longer supported; IIRC somebody found a workaround to permit the unmount, but you will need to search the forum.
Are you referring to this thread? Or do you mean an alternative to using the `umount /mnt/disk8` command? I've found that command is extremely unreliable when running `dd bs=1M if=/dev/zero of=/dev/md8 status=progress`. What happens currently:
1. I run the umount command
2. The disk size changes to a random number
3. I then run the dd command, and progress runs at 400 kB/s
4. I fail to kill the running dd using every method available
5. I stop the array, but it fails
6. I hard-shutdown the server
7. It comes back up
8. I try to mount the array, but it's in some half-broken state where VMs can start, yet the GUI still allows you to modify disks
9. I then do another reboot through the GUI
10. Everything comes back normally
11. The disks I'm trying to zero show "unmountable: wrong or no file system"
12. I run dd again on them and it runs fine
I did use the command fine previously, and the only thing different was that I first mounted the disks in SMB to double-check they were empty. Once the current command finishes, I'll try that. Not sure how that makes a difference, but I think that's the only thing different between when it worked previously and now.
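For anyone stuck at the same point, a hedged sketch of prodding a wedged dd, assuming the GNU coreutils dd that Unraid ships; `<pid>` is a placeholder for the PID found in the first step:

```bash
# Find the dd process and its state; 'D' means uninterruptible disk I/O
ps -o pid,stat,cmd -C dd

# Ask GNU dd to print its current I/O statistics to its stderr
kill -USR1 <pid>

# Last resort; this has no visible effect while the process sits in state D
kill -9 <pid>
```

A process blocked in state D can't be killed until the pending I/O completes, which would match the "fail to kill dd using every method" symptom.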
JorgeB Posted December 14, 2023
8 hours ago, jkexbx said: Are you referring to this thread?
Yes, there's a workaround posted there to unmount, though I don't recommend using that script; if you want to do that, do it manually with the array in maintenance mode: https://forums.lime-technology.com/topic/61614-shrink-array-question/?tab=comments#comment-606335
jkexbx Posted December 14, 2023
4 hours ago, JorgeB said: Yes, there's a workaround posted there to unmount, though I don't recommend using that script; if you want to do that, do it manually with the array in maintenance mode: https://forums.lime-technology.com/topic/61614-shrink-array-question/?tab=comments#comment-606335
Do you mean this command for unmounting?
root@Unraid-Server:~# fusermount -uz /mnt/cache/
JorgeB Posted December 14, 2023
18 minutes ago, jkexbx said: Do you mean this command for unmounting?
This post: https://forums.unraid.net/topic/145821-cant-unraid-stop-when-array-already-unmounted/?do=findComment&comment=1323316
jkexbx Posted December 15, 2023
On 12/14/2023 at 10:06 AM, jkexbx said: Are you referring to this thread? Or do you mean an alternative to using the `umount /mnt/disk8` command? …
I'd mentioned that post previously. Following it still doesn't allow Unraid to stop the array. I suspect the problem relates to the dd command running at a root level. No matter what I do, the disk I run the command on continues to see activity at 400 kB/s.
JorgeB Posted December 15, 2023
Sorry, I've never used the script and, as mentioned, I don't recommend using it.
omartian Posted December 17, 2023
On 12/13/2023 at 9:06 PM, jkexbx said: Are you referring to this thread? Or do you mean an alternative to using the `umount /mnt/disk8` command? …
So I had a very similar thing happen. The script ran without issue, but when going to unmount my disk, I received the unmount error. I tried the unmount script, but no luck. I was able to shut down my system, but when it came back up, it started a parity check. Should I let the parity check run with the now-zeroed drive? If I receive some parity sync errors, should I write the corrections to parity? I'm reluctant to perform the new-config step without running the parity sync. Let me know how to proceed.
nasgard-diagnostics-20231217-0639 (1).zip
Gizmotoy Posted January 11 (edited)
I had a set of 3 drives to remove. I followed the instructions laid out in the previous posts exactly, and it worked very well for my first drive. I created a new config, and then as a follow-up confirmation ran a parity check; all was good.
Then I started on the second drive. Unfortunately, after the zeroing completed, I noticed that creating a new config sets the md_write tunable back to Auto from Turbo. I'm wondering if this will break anything, or if the process will just take longer? I noticed that with Auto, only the drive being zeroed and my two parity disks are read during the operation, whereas with Turbo it was reading from all disks and writing to the drive being zeroed and the parity drives.
I'm OK with it taking longer (it's done now). I just want to confirm I didn't irreparably break anything.
Edited January 11 by Gizmotoy
JorgeB Posted January 11
1 hour ago, Gizmotoy said: I'm wondering if this will break anything, or if the process will just take longer?
It won't break anything, and you can still change it now.
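As an aside, the same tunable can be flipped from the CLI; a hedged sketch, assuming the stock Unraid `mdcmd` helper (as far as I know the value set this way is runtime-only, while the Disk Settings value persists across reboots):

```bash
# 1 = reconstruct write ("turbo"), 0 = read/modify/write
mdcmd set md_write_method 1
```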
gemeit Posted April 6
On 12/3/2017 at 8:48 AM, JorgeB said:
I never used the script, and it may have issues with the latest unRAID; you can still do it manually (the array will be inaccessible during the clear):
1. If desired, enable reconstruct write (aka turbo write): Settings -> Disk Settings -> Tunable (md_write_method)
2. Start the array in Maintenance mode. (The array will not be accessible during the clearing.)
3. Identify which disk you're removing.
4. For Unraid <6.12, type in the CLI: `dd bs=1M if=/dev/zero of=/dev/mdX status=progress`
For Unraid 6.12+, type in the CLI: `dd bs=1M if=/dev/zero of=/dev/mdXp1 status=progress`
(Replace X with the correct disk number.)
5. Wait; this will take a long time, about 2 to 3 hours per TB.
6. When the command completes, stop the array, then go to Tools -> New Config -> Retain current configuration: All -> Apply.
7. Go back to the Main page and unassign the cleared device. *With dual parity, disk order has to be maintained, including empty slot(s).*
8. Click the checkbox "Parity is already valid." and start the array.

Hi, I want to remove 5 disks. Can I run this for 5 disks at the same time, or do I need to do it 1 disk at a time? Thanks
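To make step 4 concrete, a minimal sketch for a single disk on Unraid 6.12+, assuming the disk being cleared is disk 4 (the thread doesn't confirm whether several clears can safely run in parallel, so this shows one at a time):

```bash
# Zero the parity-protected data device for array disk 4 (6.12+ naming)
dd bs=1M if=/dev/zero of=/dev/md4p1 status=progress
```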