(solved) what's the simplest way of remotely formatting a drive outside of the array?



thanks so much!

I had already started to move the files off the cache manually, through Krusader.


I'll see how it goes, but I am also fading fast (almost 1am here)...the next step will be to "destroy" the cache pool and re-create it with only the SSD in it...and for that the SSD will need to be formatted to be "like new"...maybe tomorrow then.

have a restful night, jb!


ok, so after confirming that all content of the cache pool had been backed up to disk1, I stopped the array, unassigned the cache drive, and started the array again.
I then ssh'd back into the server and executed the commands again.

the BLKDISCARD command failed, though:

root@unRAID:~# wipefs -a /dev/sdl1
/dev/sdl1: 8 bytes were erased at offset 0x00010040 (btrfs): 5f 42 48 52 66 53 5f 4d
root@unRAID:~# wipefs -a /dev/sdl
root@unRAID:~# blkdiscard /dev/sdl
blkdiscard: /dev/sdl: BLKDISCARD ioctl failed: Remote I/O error

what to try now?
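For anyone else who hits this: blkdiscard usually fails with a Remote I/O error like that when the SSD sits behind a controller or USB bridge that doesn't pass TRIM/discard through. A rough fallback sketch, assuming /dev/sdl is still the SSD and everything on it is disposable, is to simply zero the start of the device so the old signatures are gone:

# zero the first 10 MiB to clear any leftover partition table and filesystem signatures
dd if=/dev/zero of=/dev/sdl bs=1M count=10
# ask the kernel to re-read the (now empty) partition table
blockdev --rereadpt /dev/sdl

After that unRAID should treat the disk as blank, which lines up with the "Format" button becoming available again below.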

oh, and I just noticed that the "Format" button is available again for the SSD, so I chose XFS, typed "Yes" into the required field, and then mounted the SSD...I now see all of its space available...YES!

after assigning it to be the new cache drive I still had to format it again, this time as btrfs, but that is working now too.
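If it's useful to double-check the new single-device cache pool from the shell, something like this should show the SSD as the only member and how its space is allocated, assuming the pool is mounted at /mnt/cache:

# list the devices backing the btrfs cache and its space allocation
btrfs filesystem show /mnt/cache
btrfs filesystem df /mnt/cache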

3 minutes ago, tillkrueger said:

holy cow, just moving files like the 80GB vdisk over to the SSD is so fast I can't believe it...I yearn for the day when 2TB SSDs are cheap enough to build an unRAID system with!

Just remember that there is an inverse correlation between size and endurance for same-price SSDs.

 

The really big SSDs use a special technology to fit more data into the same number of memory cells, and that reduces the number of times each flash block can be erased. A normal figure for current larger SSDs is a maximum of around 300 total rewrites, so a 500 GB SSD can handle at most about 150 TB of writes, assuming optimal writes without what's called "write amplification". This isn't too important for storage disks, but it does matter when using an SSD as a cache drive, especially since some types of disk writes can cause a write amplification of 10 or more - which means the same 500 GB SSD would then support less than 15 TB of writes before reaching the warranty limit because of wear.
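For reference, here is the back-of-the-envelope arithmetic behind those numbers; the 500 GB / 300 cycles / 10x write amplification figures are just the examples above, not the spec of any particular drive:

# endurance ≈ capacity × rated erase cycles ÷ write amplification
capacity_gb=500; pe_cycles=300; write_amp=10
echo "$(( capacity_gb * pe_cycles )) GB of NAND writes, about $(( capacity_gb * pe_cycles / write_amp / 1000 )) TB of host writes at WA=$write_amp"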

 

When people first started moving to SSDs, the older drives could often handle 100,000 rewrites per block, so even though they were much smaller, they could often still handle a larger total amount of writes than the new SSDs being released now.


hmm, interesting albeit disconcerting information...well, I trust that technology will advance to address this issue in the next few years...although a trend toward "throw away" technology has definitely been going on, where manufacturers/engineers make sure that there isn't much more than a 2-5 year lifespan in anything we purchase that has a plug and/or battery on/in it...there was an excellent journalistic report on German television last night which confirmed how this trend has been developing since the 1980s.


I tell ya, it's one thing after another...

my Krusader Docker alerted me that I had to recreate the Docker Image due to a bug in an earlier version of unRAID v6.

after re-creating the Docker with what I thought to be the correct parameters, it starts up fine, but when I go into the "unRAID" folder in Krusader now, I only see the appdata, docker, domains, isos and system Shares, but not my disk and user shares anymore, like I did before.

what did I do wrong?

screenshot_27.png

You're mapping /UNRAID to /mnt/cache instead of /mnt/user
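In docker run terms that's the host side of the /UNRAID volume mapping; a sketch of the corrected flag, assuming the path names from the screenshot:

# bind all user shares into the container, not just the cache disk
-v /mnt/user:/UNRAID        # instead of: -v /mnt/cache:/UNRAID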

when the VM Manager's Status shows as "Stopped", does that mean the parameters I gave it can't be executed correctly and the VM module can't start, or have I overlooked a simple toggle that turns it back on? (a quick check is sketched below the syslog) I have moved all of my Docker and VM relevant files and directories over to the SSD cache and *think* that I adjusted all necessary settings throughout the Krusader Docker and VM Manager, and set all these Shares to "Cache Only", where they now reside, exclusively:

- appdata
- docker
- domains
- isos

- Krusader
- share
- system

for good measure, here is the current SysLog.

screenshot_28.png
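Here is that quick check for why the VM service refuses to start; the log and image paths are assumptions based on the usual unRAID defaults:

# is libvirtd running at all?
ps aux | grep [l]ibvirtd
# the libvirt log normally states why startup failed
tail -n 50 /var/log/libvirt/libvirtd.log
# confirm the libvirt image exists where VM Manager expects it
ls -lh /mnt/user/system/libvirt/libvirt.img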


apparently so...I deleted libvirt.img and recreated it through the unRAID GUI and now the VM Manager is running again.

the Windows 10 VM that I had created and configured over the past 2 days is nowhere to be seen, though.

it is located in the domains/Windows 10 folder (I had also backed it up to disk1 before doing the cache swap), but what's the proper procedure to make it visible in the unRAID GUI again?

I just found an instructional video about implementing existing VMs in this thread and will be watching it to hopefully gain the knowledge to make it work again.
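For the record, there seem to be two routes back: create a new VM in the GUI with the same settings and point its primary vdisk at the existing image under domains/Windows 10, or, if a copy of the VM's XML was saved anywhere, re-register it from the shell. A sketch of the latter, where the file name is made up for the example:

# re-register a previously saved VM definition with libvirt
virsh define /boot/windows10.xml
# confirm the VM now shows up
virsh list --all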


well, I *kinda* got it working again...the VM is booting (albeit very slowly), then it started to install and configure Windows 10 updates...then in Windows it started to configure a bunch of things I am pretty sure I had already configured, and for some reason the Windows 10 desktop environment still, after 15 mins, hasn't fully booted...it just seems like a different VM from the one I had before the cache swap...it's super sluggish and slow to boot...could this have anything to do with the Trim feature you mentioned, jb? the Common Problems plugin alerted me earlier that with an SSD in my system I should install the Trim plugin, which I did, but whatever is going on, it seems like everything was running much faster *before* I did this swap than it is now (except for the copying/moving of files to/from the cache that I did earlier).
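As far as TRIM goes, the plugin basically just runs a filesystem trim on a schedule; it can also be run by hand to rule it in or out, assuming the SSD pool is mounted at /mnt/cache:

# trim the cache filesystem now and report how much space was discarded
fstrim -v /mnt/cache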

pffff...running out of steam after 2 days on this...I mean, I'd long be dead in the water if it weren't for all of y'alls help, but man-oh-man, this is turning out to be a gargantuan task...and I haven't even gotten to the BIOS update yet, or the (hopefully) subsequent upgrade to 8GB of RAM.


now that all updates are installed and a reboot has cleared out unRAID's memory, the VM appears to be running much smoother...still not seeing the speed increases I expected by going to an SSD, but I was wondering:

does making all of these folders run off the SSD potentially cause any conflicts with each other? I mean, I know an SSD is very different from a spinner, where trying to access files in different places on a platter can cause the head(s) to not work efficiently anymore, but is it ok for all of them to be on the SSD? do any of them *not* benefit from being on the SSD and could just as well be moved back onto disk1 (if only to make more space on the SSD)?

- appdata
- docker
- domains
- isos
- Krusader
- share
- system

I think the biggest bottleneck right now is the fact that with 4GB RAM in total, I can only assign 2GB to the Windows 10 VM, which is probably the bottom of the barrel as far as what's acceptable, and a lot of the hesitation I see in the VM is due to memory swapping to the SSD and back, no?
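One rough way to confirm that memory pressure from the unRAID shell; the VM name here is assumed to match the folder name:

# how much RAM is left for the host and the VM combined
free -m
# the memory currently allocated to the VM
virsh dominfo "Windows 10"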

6 minutes ago, tillkrueger said:

a lot of the hesitation I see in the VM is due to memory swapping to the SSD and back, no?

Yes. 4GB is really tiny to fit unRAID, the qemu virtual management functions, and the VM itself. No slack for anything to breathe.

 

Not sure why you ended up with all that on your cache drive; I think you may have separated some stuff out to root folders that is normally in subdirectories.

 

The isos share could for sure be set to cache:yes instead of cache:prefer; for the other folders I'd need to see the content to have an idea of what happened and whether they need to be moved. Krusader should be under appdata; if you set the krusader docker to /mnt/cache/appdata/krusader and before it was set to /mnt/cache/Krusader, that would explain that folder.

 

For neatness' sake you could have the docker.img file and the libvirt.img file both in the system folder.
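In other words, something like this layout, purely as a suggestion rather than anything unRAID enforces:

/mnt/cache/system/docker/docker.img
/mnt/cache/system/libvirt/libvirt.img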

 

Did you keep the 100GB setting for libvirt.img? That is freakin' HUGE; I have 126MB used in a 1GB allocated libvirt.img file with 10 VMs currently defined.


yeah, I've been really anxious about getting the BIOS updated in the hope that it will finally accept the other two slots being populated with the additional 4GB I have...but it worries me that the current BIOS version only reports 4GB max (as seen in the unRAID System Profile)...isn't it kind of unlikely that, with the C2SEA's defining feature over its smaller sibling being the extra 2 memory slots, even the original BIOS wouldn't already allow the full 8GB to be recognised? I really want and need to assign at least 4-6GB to this VM.

I ended up with all of that on my cache drive by moving it there manually (for the most part)...I guess you have noticed that I still make a lot of mistakes and am very wet behind the ears with all of this VM stuff (and much of the inner workings of unRAID itself)...so it's very likely that I moved a few folders onto the SSD in less than correct ways...I attached a screen grab of all folders opened to reveal their sub-folders.

so, the isos should definitely reside there, which is what I've read elsewhere as well.

 

I see that the "share" folder in the root is the same as the one inside of "krusader"...can I delete it from the root, manually?

I deleted the "Krusader" folder (it was empty, save for the "share" and "config" subfolders, which were also empty) and moved "krusader" inside of appdata, all manually (or should I not have done that)?

how would I go about moving the docker image into the system folder without breaking anything?

same regarding the huge libvirt.img I have...how do I go about reducing that down to the same 1GB you have without breaking anything?

you da best!

 

screenshot_322.png


How many dockers do you have installed? It appears that you may have installed krusader a bunch of times, specifying a different config location each time. All but the current config should probably be deleted. 

 

You could also delete your current docker.img and set it to /mnt/cache/system/docker/docker.img with a size of 5GB, and delete the docker folder in the root.

 

If krusader is your only docker, maybe the best thing to do would be to remove it, then clean out all the leftover junk, then reinstall it to /mnt/cache/appdata/krusader/

 

When I said to set the isos share to cache:yes instead of cache:prefer: that setting lets mover move the isos share to the array disks, following any rules set for the isos share. There is no need for it to take up space on the SSD.

 

Since you only have 1 VM, the easiest thing to do would probably be to save the XML text from the edit XML screen, delete and recreate the libvirt.img file with the settings you want, then create a new VM and paste in the XML to recreate the VM.
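If the edit-XML screen is fiddly, the same definition can also be captured from the shell before the libvirt.img rebuild; in this sketch both the VM name and the target file are assumptions:

# dump the current VM definition to the flash drive so it survives the rebuild
virsh dumpxml "Windows 10" > /boot/windows10.xml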

 

Your tree graphic is very hard for me to follow for some reason; I can't quickly determine which folders are at the root. I think I interpreted it correctly, though.

 

If you allow mover to relocate isos to the array, then the only permanent root folders on the cache drive would be appdata, domains, and system. Any other folders would be created and moved automatically to the array by data written to shares that are set to cache:yes
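Once isos is switched to cache:yes, mover will relocate it on its normal schedule; it can also be kicked off by hand, assuming the stock script location on unRAID 6:

# run mover immediately instead of waiting for the schedule
/usr/local/sbin/mover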

