Accidentally ran rm on root and reformatted USB without config


can4d


According to the test configuration we had back on page 3:

The 500G disk, serial ending 07EH, was mountable and btrfs (it was disk3 then). We are going to make that the first disk in your first pool. clamav had mappings to a pool named cache_ssd, and its appdata is on that disk. So...

 

Stop the array. Add a pool named cache_ssd. Assign that disk as the first disk in that pool. Start the array.

 

DO NOT format anything.

 

Then post new diagnostics.
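Before starting the array, it can't hurt to double-check which device carries that serial and what filesystem it reports. A read-only sketch (the awk filter and device names are mine; only the 07EH suffix comes from this thread, and nothing here can format anything):

```shell
# Locate the disk whose serial ends in 07EH. lsblk -d lists whole disks,
# -n drops the header; columns are name, serial, size.
SERIAL_SUFFIX=07EH
lsblk -dn -o NAME,SERIAL,SIZE 2>/dev/null | awk -v s="$SERIAL_SUFFIX" \
  '$2 ~ s"$" { print "/dev/" $1 " (" $3 ")" }'
# Then confirm the filesystem on its first partition, e.g.:
# blkid /dev/sdX1     # expect TYPE="btrfs" -- sdX is whatever printed above
```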

14 minutes ago, trurl said:

"Stop the array. Add a pool named cache_ssd. Assign that disk as the first disk in that pool. Start the array."

 

Is it important to set the number of slots > 1 for the pool at creation, or can I add disks to the pool as needed?

 


Well, I was expecting it to accept that disk with its contents into that pool, but something didn't work there.

Feb 10 12:09:32 Tower  emhttpd: shcmd (58917): mkdir -p /mnt/cache_ssd
Feb 10 12:09:32 Tower  emhttpd: /mnt/cache_ssd mount error: No pool uuid
Feb 10 12:09:32 Tower  emhttpd: shcmd (58918): umount /mnt/cache_ssd
Feb 10 12:09:32 Tower root: umount: /mnt/cache_ssd: not mounted.

Let's see if @JorgeB is available and can put us on the right track.
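In the meantime, for anyone following along, the failing step can be isolated from the syslog like this. A sketch: the here-doc simply reproduces the posted lines so the filter can be tried anywhere; on the server you would grep /var/log/syslog directly.

```shell
# Reproduce the posted syslog excerpt, then filter for the mount failure.
cat <<'EOF' > /tmp/excerpt.log
Feb 10 12:09:32 Tower  emhttpd: shcmd (58917): mkdir -p /mnt/cache_ssd
Feb 10 12:09:32 Tower  emhttpd: /mnt/cache_ssd mount error: No pool uuid
Feb 10 12:09:32 Tower  emhttpd: shcmd (58918): umount /mnt/cache_ssd
Feb 10 12:09:32 Tower root: umount: /mnt/cache_ssd: not mounted.
EOF
grep 'mount error' /tmp/excerpt.log
```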


I went ahead and added the other 500G to the cache_ssd pool because I honestly remember it that way, and it appears to have accepted both into the pool as shown. The other thing I am pretty sure I did was use the NVMe as parity 2.

 

But I will wait for your thoughts. 

Tower_Main_and_Calling_All_Misfits_-_Evernote.png

2 hours ago, can4d said:

pretty sure I did was use the nvme as the parity 2

Not possible. Just like parity, parity2 has to be at least as large as any single data drive in the array. Also, parity has no filesystem, and the NVMe mounted as btrfs and had files on it.

 

Also, no good reason to waste an SSD as parity on an array of spinners, and no good reason to have dual parity when you only have a single data disk.
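The size rule is easy to sanity-check. A toy sketch (all sizes are hypothetical stand-ins, e.g. a 250G NVMe offered as parity2 against 500G data disks):

```shell
# Parity (and parity2) must be >= the largest data disk in the array.
awk 'BEGIN {
  parity = 250                      # hypothetical parity2 candidate (NVMe), GB
  n = split("500 500", data)        # array data disk sizes, GB
  max = 0
  for (i = 1; i <= n; i++) if (data[i] + 0 > max) max = data[i] + 0
  if (parity >= max)
    print "OK: parity " parity "G covers largest data disk (" max "G)"
  else
    print "NOT POSSIBLE: parity " parity "G < largest data disk (" max "G)"
}'
# -> NOT POSSIBLE: parity 250G < largest data disk (500G)
```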

 

Do you instead mean it was another pool?

 

49 minutes ago, can4d said:

old config

Before you do that, post a screenshot of Main and new diagnostics so we have a record of how you have things now.

 

Also, make a backup of the current flash drive from Main - Boot Device - Flash - Flash Backup.

 

I don't remember, are you still using the original flash drive? Or did you create a new install on a different flash drive? Just trying to understand the license situation if you go back to that old config.


BACK IN BUSINESS. HAHAH. Thank you Thank you. 

 

I have learned a lot in this little adventure. Couldn't have survived without you, trurl. I have made another backup.

 

Now I would love to evaluate my config to understand if there is a better method to the madness.

Tower_Main.png

51 minutes ago, can4d said:

VMs are enabled but they still do not show. 

The definitions for the VMs are in libvirt.img, which you have configured in the libvirt folder of the system user share (the usual place).

 

Your system share is configured to prefer a pool named cache, which doesn't exist, so it was created on disk1 instead. Possibly recreated there with nothing in it.

 

The system share has no files on any other disks, so your docker.img is the same way: in a folder named docker in the system user share on disk1.

 

Your domains share is cache:yes with pool vms. Since it is yes, some of it has been moved to the array on disk1. Normally you would set that to prefer instead of yes so it will stay on the designated pool.

 

Possibly your vdisks are in the domains share on either disk1 or the vms pool, but without the definitions in libvirt.img they will have to be set up again.

 

Also, appdata has files on disk1 and on cache_ssd. It is configured to prefer a pool named cache, which doesn't exist, so any new files for it will be created on the array.

 

To summarize, here is what you need to fix:

 

Set appdata and system shares to prefer a pool named cache_ssd, set domains share to prefer a pool named vms.
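For reference, those settings end up in the share config files on the flash drive; you normally change them from each share's settings page, not by editing files. A sketch of what the relevant keys look like (key names as in recent Unraid 6 releases; treat the exact values as illustrative):

```shell
# /boot/config/shares/appdata.cfg (excerpt -- hypothetical but typical keys)
shareUseCache="prefer"
shareCachePool="cache_ssd"

# /boot/config/shares/domains.cfg (excerpt)
shareUseCache="prefer"
shareCachePool="vms"
```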

 

Nothing can move open files, so you have to disable Docker and VM Manager in Settings and then run Mover to get these moved to their designated pools.

 

If you would rather have any of these on the other pool that can be done too, but mover ignores files that are not on the designated pool, so a little more work to get that done.
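If you want to verify where things actually landed before and after running Mover, a quick read-only sketch. On Unraid each disk and pool is mounted under /mnt, so a user share's files are the union of /mnt/*/<share> (ignore /mnt/user and /mnt/user0, which are the merged views). MNT is a variable here only so the sketch can be tried against a scratch tree:

```shell
# List which disks/pools hold files for a given share.
MNT=${MNT:-/mnt}
SHARE=appdata
for d in "$MNT"/*/; do
  if [ -d "$d$SHARE" ]; then
    echo "$SHARE present on: ${d%/}"
  fi
done
```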

 

On 2/11/2023 at 12:28 PM, trurl said:

To summarize, here is what you need to fix:

 

Set appdata and system shares to prefer a pool named cache_ssd, set domains share to prefer a pool named vms.

 

Nothing can move open files, so you have to disable Docker and VM Manager in Settings and then run Mover to get these moved to their designated pools.

 

 

All done. Everything seems like it was. One VM didn't require recreating/reinstalling; I just added it, referenced the disk image, and it picked up where it left off. The others I had to re-add with no issue.

 

one question: On my pools do I need to add a minimum of 3 drives so one can act as a backup?

 

Thank you again for all of your help. I won't post here anymore with off-topic questions. Very much appreciate it.


The default for multi-disk pools is btrfs raid1. Only 2 disks are needed in a btrfs raid1 pool to have redundancy. Other btrfs raid modes are available.

https://carfax.org.uk/btrfs-usage/
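For the two 500G SSDs in this thread, the arithmetic is simple: raid1 mirrors everything, so a 2-device pool's usable space is the size of the smaller device. A toy sketch (sizes stand in for the two SSDs; see the carfax calculator above for other profiles and device counts):

```shell
# Usable capacity of a 2-device btrfs raid1 pool = min(device sizes).
awk 'BEGIN {
  a = 500; b = 500                  # pool device sizes, GB
  usable = (a < b ? a : b)          # 2-device raid1: smaller of the two
  printf "raw=%dG usable(raid1)=%dG\n", a + b, usable
}'
# -> raw=1000G usable(raid1)=500G
```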

 

One approach (and what I do) is to have one pool for caching user share writes, and a separate pool for those things that need speed.

 

The pool for caching user share writes contains files that haven't yet been moved to the parity array. Do they need redundancy or is it good enough to just let them get moved before they get protection?

 

The pool for speed typically holds the "default" shares

https://wiki.unraid.net/Manual/Shares#Default_Shares

and probably needs backups anyway, whether it has redundancy or not. There is a plugin to back up appdata and libvirt.img (CA Backup), and a plugin to back up VMs (VM Backup).

 

The way I have mine setup is partly due to what hardware I had available.

 

My "cache" is 2x500G SSDs as btrfs raid1, and my "fast" is a 256G NVMe as xfs.

 

And these pools get used in combination at times when postprocessing downloads: I download to a cache:yes share, postprocess to a fast:only share, and let the results land in the final-destination cache:yes share, which gets moved to the array when Mover runs.

 

