noob - used space, parity sync, passthru, CPU Unsupported


Solved by JorgeB


tl;dr

New to unraid, a few questions.

 

-Why used space on formatted drive?

-Why parity sync even though drives are empty?

-How do I passthru NVMe drive (with win10 already on it) to a Win10 VM?

-CPU Unsupported?

 

---

 

New to Unraid, testing out the trial to determine how I like it/if I want to run it long-term as my storage/VM server.

Main draw is that I really...REALLY like the idea that I can mix drive sizes and still have parity.

I buy storage as I see deals, so I have a mixture of 2T, 10T, 16T, 20T drives :D thus ZFS (TrueNAS CORE or SCALE) is kinda out of the question.

I wanted to get used to adding/removing drives, parity, syncing, setting up shares, etc. in a low pressure (dev) environment before moving to Production and purchasing a license.

 

---

 

I have (in test) currently:

-WD 20TB Parity

-SD 16TB Drive

-512GB NVMe (Intel) [not added to pool]

 

Created the HDD array and formatted the 16TB drive (both drives are new from the manufacturer).

 

Q:

1)

Why does it say I have 134GB of used space, 15.9TB free on the 16TB drive?

 

2)

Both drives are empty, why is it doing a parity sync?

XFS/XOR parity or something or other?

I hate wasting a whole drive write without any data to write to it.

Seems a huge waste.

I'll only have to re-sync parity when I add more drives/copy files.

 

3)

-How do I passthru NVMe drive (with win10 already on it) to a Win10 VM?

My 512GB drive already has a bootable copy of Win10.

Is it possible to create a VM and add it as a boot drive?

It didn't seem possible from the GUI as that seemed more geared towards building a new VM from scratch.

 

4)

I think this has to do with logging. But it says "CPU is unsupported" during boot logs.

Should I be concerned?


 

Model: Custom
M/B: ASRockRack X570D4U-2L2T Version - 
BIOS: American Megatrends International, LLC. Version P1.40. Dated: 05/19/2021
BMC: 1.2
CPU: AMD Ryzen 9 3900X 12-Core @ 3800 MHz
HVM: Enabled
IOMMU: Enabled
Cache: 768 KiB, 6 MB, 64 MB
Memory: 64 GiB DDR4 Multi-bit ECC (max. installable capacity 128 GiB)
Network: bond0: fault-tolerance (active-backup), mtu 1500
 eth0: interface down
 eth1: 1000 Mbps, full duplex, mtu 1500
 eth2: interface down
 eth3: interface down
Kernel: Linux 5.15.46-Unraid x86_64
OpenSSL: 1.1.1o
Uptime: ~3hrs

 

---
  • Solution
1 minute ago, NathanR said:

1)

Why does it say I have 134GB of used space, 15.9TB free on the 16TB drive?

https://forums.unraid.net/topic/95309-empty-disk-using-56gb/

 

1 minute ago, NathanR said:

2)

Both drives are empty, why is it doing a parity sync?

Parity still needs to be synced; the disks might be empty, but unless they are new or have been cleared they won't be all zeros.

 

3 minutes ago, NathanR said:

4)

I think this has to do with logging. But it says "CPU is unsupported" during boot logs.

Should I be concerned?

The CPU is not supported by mcelog, not by Unraid; it's fine.

---
4 minutes ago, NathanR said:

3)

-How do I passthru NVMe drive (with win10 already on it) to a Win10 VM?

My 512 Drive already has a bootable copy of Win10.

It's possible: bind the NVMe device to vfio-pci first (Tools -> System devices), then reboot and select it when creating the VM; it will appear under the Other PCI Devices section.
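For the curious, the GUI binding amounts to the standard Linux vfio-pci rebind. Here's a rough sketch of the equivalent sysfs steps; the PCI address 0000:23:00.0 and ID 8086:f1a6 are the ones that appear later in this thread, and the helper below is purely illustrative, not Unraid's actual mechanism:

```python
# Illustrative only: the sysfs steps that binding a PCI device to vfio-pci
# roughly boils down to on a generic Linux host. Unraid's GUI persists the
# binding via its own config rather than running these commands directly.

def vfio_bind_steps(pci_addr: str, vendor_device: str) -> list[str]:
    """Return shell commands roughly equivalent to Tools -> System devices -> bind."""
    vendor, device = vendor_device.split(":")
    return [
        # Detach the device from its current driver (e.g. the nvme driver)
        f"echo {pci_addr} > /sys/bus/pci/devices/{pci_addr}/driver/unbind",
        # Ask vfio-pci to claim devices with this vendor:device ID
        f"echo {vendor} {device} > /sys/bus/pci/drivers/vfio-pci/new_id",
    ]

for step in vfio_bind_steps("0000:23:00.0", "8086:f1a6"):
    print(step)
```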

---

Thank you! 

 

1/2)

That makes sense.

Q: So there's a reserved set of space on each drive dedicated to XFS?
Q: Is that also synced to the parity drive, or is that something I need to be concerned about?

 

I am super new to XFS. Took me an hour of googling just to figure out that's the file system Unraid uses by default. xD

Went down the XFS/BTRFS convo rabbit hole... decided I don't know and don't care. Defaults it is! :) 

 

The drives are literally new out of the box. I did tell Unraid to format the 16T drive, but the 20T parity wasn't an option.

But I can see how Unraid would still want to make sure.

 

Q: Will Unraid need to sync the entire 16T drive's worth of zeros/junk data? Or just the 134GB XFS portion?

Is there a workaround, like pausing the array, adding drives until I'm ready for a parity sync?
I hate to 'waste' drive writes on 'nothing' (just testing).
I keep pausing it for now.

 

Q: What am I missing (I see lots of filesystem rabbit holes even in that thread you pointed to)? [point me to relevant literature]

24 minutes ago, JorgeB said:

https://forums.unraid.net/topic/95309-empty-disk-using-56gb/

 

Parity still needs to be synced; the disks might be empty, but unless they are new or have been cleared they won't be all zeros.

 

The CPU is not supported by mcelog, not by Unraid; it's fine.

 

4) Copy, thanks. I thought that was what it was, but I wanted to make sure given this was my first-ever install of Unraid.

 

3)

WOW that was easier than xcp-ng's commands lol. I love the GUI!

Unfortunately the VM throws an error. Perhaps this is due to IOMMU only finding the controller?

 

IOMMU group 21: [8086:f1a6] 23:00.0 Non-Volatile memory controller: Intel Corporation SSD Pro 7600p/760p/E 6100p Series (rev 03)
This controller is bound to vfio, connected drives are not visible.

 

VM creation error
internal error: qemu unexpectedly closed the monitor: qxl_send_events: spice-server bug: guest stopped, ignoring
2022-07-11T15:23:36.764156Z qemu-system-x86_64: -device vfio-pci,host=0000:23:00.0,id=hostdev0,bus=pci.0,addr=0x5: vfio 0000:23:00.0: failed to add PCI capability 0x11[0x50]@0xb0: table & pba overlap, or they don't fit in BARs, or don't align

 

22 minutes ago, JorgeB said:

It's possible: bind the NVMe device to vfio-pci first (Tools -> System devices), then reboot and select it when creating the VM; it will appear under the Other PCI Devices section.

 

---
9 minutes ago, NathanR said:

Q: Will Unraid need to sync the entire 16T drive's worth of zeros/junk data? Or just the 134GB XFS portion?

Is there a workaround, like pausing the array, adding drives until I'm ready for a parity sync?
I hate to 'waste' drive writes on 'nothing' (just testing).
I keep pausing it for now.

 

On the one hand it depends on how irreplaceable the data you are housing on the server is... If you are just experimenting I would say turn OFF the Parity drive until you have added drives.

 

On the other hand, If you are adding drives slowly over time and are worried about the data, then let it go ahead.

 

I would try not to fret over parity sync using drive life... My system tends toward up times in excess of 100 days, but occasionally I do something silly and it ends up re-syncing parity... You just have to go with it.

 

 

 

Arbadacarba

---

Thank you for reading 'between the lines' into my angst and refuting it thoroughly.

 

I have a lot of consternation with syncing xD

When I first got my Samsung 2T drives forever ago and did software RAID in Windows Disk Management... it would constantly re-sync.

It bothered me so much I just started having two drives and copying the data manually every so often.

 

Yeah - for now I'm just playing with importing/exporting empty drives to see how to do things without risk of data loss.

 

I'ma get a few more empty drives and try the zero format thing from JorgeB's referenced thread.

 

On 7/11/2022 at 10:38 AM, Arbadacarba said:

 

On the one hand it depends on how irreplaceable the data you are housing on the server is... If you are just experimenting I would say turn OFF the Parity drive until you have added drives.

 

On the other hand, If you are adding drives slowly over time and are worried about the data, then let it go ahead.

 

I would try not to fret over parity sync using drive life... My system tends toward up times in excess of 100 days, but occasionally I do something silly and it ends up re-syncing parity... You just have to go with it.

 

 

 

Arbadacarba

 

 

I might have found the issue/workaround for my device passthrough error.

 

 

No idea what XML files are used for or anything. Time to do some reading/watching I guess.

Might just say screw it and buy a Samsung.

The Intel SSDPEKKF512G8_NVMe_INTEL_512GB_PHHH9176002H512H - 512 GB (nvme0n1) was just something I had lying around.

 

On 7/11/2022 at 10:24 AM, NathanR said:
VM creation error
internal error: qemu unexpectedly closed the monitor: qxl_send_events: spice-server bug: guest stopped, ignoring
2022-07-11T15:23:36.764156Z qemu-system-x86_64: -device vfio-pci,host=0000:23:00.0,id=hostdev0,bus=pci.0,addr=0x5: vfio 0000:23:00.0: failed to add PCI capability 0x11[0x50]@0xb0: table & pba overlap, or they don't fit in BARs, or don't align
---

In Unraid, parity is real-time. Parity is file-system agnostic and works at the raw sector level, so if you have a formatted drive then as far as Unraid is concerned that disk is not empty: the file system's own data has to be reflected in the parity data. Once you have created the initial parity data, subsequently writing a file to the array only updates the parity sectors corresponding to that file.
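The sector-level XOR described here can be sketched in a few lines of Python. This is a toy model (real parity operates on raw device sectors, not in-memory byte strings), but it shows both why a formatted-but-"empty" disk still contributes to parity and how mixed drive sizes work: a shorter drive simply counts as zeros beyond its end.

```python
# Toy model of single-parity XOR across mixed-size "drives" (byte strings).

def compute_parity(drives: list[bytes], size: int) -> bytes:
    """XOR corresponding bytes of every data drive; positions past a short drive's end count as zero."""
    parity = bytearray(size)
    for drive in drives:
        for i, b in enumerate(drive):
            parity[i] ^= b
    return bytes(parity)

def rebuild(failed: int, drives: list[bytes], parity: bytes) -> bytes:
    """Reconstruct one failed drive by XOR-ing parity with all surviving drives."""
    out = bytearray(parity)
    for i, drive in enumerate(drives):
        if i == failed:
            continue
        for j, b in enumerate(drive):
            out[j] ^= b
    return bytes(out[: len(drives[failed])])

disk1 = b"\x10\x20\x30\x40"   # pretend 4-sector data drive
disk2 = b"\xaa\xbb"           # smaller drive: zero-padded in the parity math
parity = compute_parity([disk1, disk2], size=len(disk1))

assert rebuild(0, [disk1, disk2], parity) == disk1
assert rebuild(1, [disk1, disk2], parity) == disk2
```

This is also why later writes are cheap: changing one sector of one data drive only flips the corresponding parity sector.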

---
18 minutes ago, itimpi said:

In Unraid, parity is real-time. Parity is file-system agnostic and works at the raw sector level, so if you have a formatted drive then as far as Unraid is concerned that disk is not empty: the file system's own data has to be reflected in the parity data. Once you have created the initial parity data, subsequently writing a file to the array only updates the parity sectors corresponding to that file.

 

Perfect, the best answer to my parity questions. Thanks!

 

This just reminded me of Q5: since the parity drive(s) are completely separate from the filesystem/storage, doesn't that mean I'm losing 4TB of space (given my largest data drive in the array is 16TB) until I get another 20TB drive?

---
1 hour ago, JonathanM said:

Data drives are the only place to store data, parity doesn't hold any sensible data. So in a sense, yes you are losing the entire 20TB parity drive of capacity to gain the ability to rebuild any single drive failure, assuming all the rest of the data drives are working seamlessly across their entire capacity.

 

https://wiki.unraid.net/Parity


Well, I understand the parity drive is 'lost' space.
RAID 1 = 50% overhead
RAID 5 = one drive's worth of overhead, e.g. ~33% with 3 drives [something like that anyways]
I meant that the last 4TB will never be written to and can't be used until a 20TB data drive is added to the array to complement the 20TB parity drive(s).

 

This is one of those 'duh' answers/moments. But my brain is just a bit fried from learning all this new stuff :$

https://www.servethehome.com/raid-calculator/
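For what it's worth, the Unraid arithmetic needs no calculator: usable space is just the sum of the data drives, and parity must be at least as large as the largest data drive. A quick sanity check of the numbers in this thread, using a hypothetical helper (not an Unraid API):

```python
def unraid_usable_tb(parity_tb: int, data_tbs: list[int]) -> int:
    """Usable capacity = sum of the data drives; the parity drive stores no files."""
    assert parity_tb >= max(data_tbs), "parity must be >= the largest data drive"
    return sum(data_tbs)

# The test array from this thread: 20TB parity + one 16TB data drive
print(unraid_usable_tb(20, [16]))       # 16

# Add a 20TB data drive and the parity drive's full size is finally matched
print(unraid_usable_tb(20, [16, 20]))   # 36
```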

---

Big thank you everyone.

 

I realize this is a moment where you want to say "search the forum," "RTFM," etc.

 

But I really appreciate the time/answers instead of 'figure it out yourself'; sometimes feeling comfortable about being 'heard' is enough to satiate consternation.


I tried to be diligent about reading the manual/getting-started guide and googling, but I still had some nagging questions, and in hindsight they seem trivial.

 

Again, thank you.

I'm sure I'll be back for more! :)

---

Unraid doesn't use ANY of the options shown at that calculator link. It's UNRAID.

 

As long as all the remaining data drives are functioning 100%, Unraid can emulate and rebuild a failed drive with ANY number of data drives. It's currently limited by choice to 28 data disks, so if all your disks are the same size you lose only about 3.5% of capacity to parity (1 drive in 29). The probability of one of the remaining drives acting up during a rebuild is not negligible, so we usually recommend 2 parity drives any time you have more than 10 or so data disks, but it's all a matter of statistics, and you can choose to run all 28 disks with one parity if you like.

---
1 hour ago, JonathanM said:

Unraid doesn't use ANY of the options shown at that calculator link. It's UNRAID.

 

As long as all the remaining data drives are functioning 100%, Unraid can emulate and rebuild a failed drive with ANY number of data drives. It's currently limited by choice to 28 data disks, so if all your disks are the same size you lose only about 3.5% of capacity to parity (1 drive in 29). The probability of one of the remaining drives acting up during a rebuild is not negligible, so we usually recommend 2 parity drives any time you have more than 10 or so data disks, but it's all a matter of statistics, and you can choose to run all 28 disks with one parity if you like.

Not to be argumentative, but isn't Unraid most similar to RAID 4?

I was mostly talking out the answer to myself.

Thanks for listening.

 

That is indeeeeed an incredibly low overhead loss amount once you get the number of disks higher!

---

My own build includes 7 HDDs. I have ended up segmenting them by types of Data so that only one disk has to spin up at any given time.

Disc 1 - 10TB - Television

Disc 2 - 10TB - Anime, Audio Books, Comedy, Comics, Documents, E-Books, Graphics, Public, Software

Disc 3 - 10TB - Music, Photos, Videos

Disc 4 - 12TB - Movies

Disc 5 - 14TB - overflow

Disc 6 - 16TB - Backup

Disc 7 - Unpopulated

Parity - 18TB

 

The advantages: not all drives have to spin up when data is pulled from one category, and I only lose about 20% of my available storage to parity.

 

On the downside, I have to move the largest drive into the array every time I buy a larger drive... But I can live with that.

---
21 hours ago, Arbadacarba said:

My own build includes 7 HDDs. I have ended up segmenting them by types of Data so that only one disk has to spin up at any given time.

Disc 1 - 10TB - Television

Disc 2 - 10TB - Anime, Audio Books, Comedy, Comics, Documents, E-Books, Graphics, Public, Software

Disc 3 - 10TB - Music, Photos, Videos

Disc 4 - 12TB - Movies

Disc 5 - 14TB - overflow

Disc 6 - 16TB - Backup

Disc 7 - Unpopulated

Parity - 18TB

 

The advantages of not having all drives spin up when data is pulled from one category, and the fact that I can lose only 20% of my available storage.

 

On the down side I have to move the largest drive into the Array every time I buy a larger drive... But I can live with that.

 

 

Hmm, I like that approach.

 

I need to do some thinking about my data I thinks.

I presume you're running PLEX and that works well?

 

Each drive has separated content, but is it all one share or multiple different shares?

I'll admit having the location agnostic to the share is new territory for me.

---

It's one array but separate shares... Each share has included and excluded drives. 

 

When I launch Plex the interface works from the cache/appdata folder, but when I want to start a movie it takes a few seconds while the drives spin up. Same thing with Music and Television shows etc. But once the music drive is spinning there is never a delay. 

