Lev

Members
  • Content Count: 334
  • Joined
  • Last visited
  • Days Won: 4

Lev last won the day on April 13 2018
Lev had the most liked content!

Community Reputation: 73 Good
1 Follower

About Lev
  • Rank: Advanced Member

Recent Profile Visitors: 1646 profile views
  1. This is correct. UnRAID's starting sector is 64; it used to be 63, but it was never 2048. I know it sucks to hear, but consider it a good excuse to buy one more Easystore to shuck and make the long copy process a little less painful 😀 (A quick way to check your own drives is below.)
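     To see which layout a drive actually has, you can ask sgdisk; the device name here is just a placeholder, adjust it to your drive:

         # Print info for partition 1; an UnRAID-created disk should report "First sector: 64"
         sgdisk -i 1 /dev/sdX | grep 'First sector'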
  2. I've been puzzled for the last week why most of my XFS partitions were reporting 'sectsz=512' and a few were reporting 'sectsz=4096', and traced it to whether the drive had been partitioned and formatted by UnRAID or Unassigned Devices. I put the detailed log output at the very bottom to show both outputs. While researching whether an answer already existed, I took a trip down memory lane through old posts and recalled that UD didn't always create UnRAID-compatible partitions. I remember those days, but didn't find an answer to why the difference exists. I went deeper into the code and traced it to the difference in the following commands:

         # Create new UnRAID partition
         sgdisk -o -a 8 -n 1:32K:0 {$dev}

         # Create Unassigned Devices partition
         sgdisk -o -a 1 -n 1:32K:0 {$dev}

     What's the real impact? 🤷‍♂️ Both ways work and are functional; however, the UnRAID command aligning on "-a 8" does not trigger an alignment warning:

         # UnRAID partition creation
         # sgdisk -Z /dev/sde
         GPT data structures destroyed! You may now partition the disk using fdisk or other utilities.
         # sgdisk -o -a 8 -n 1:32K:0 /dev/sde
         Creating new GPT entries in memory.
         The operation has completed successfully.

         # Unassigned Devices partition creation
         # sgdisk -Z /dev/sde
         GPT data structures destroyed! You may now partition the disk using fdisk or other utilities.
         # sgdisk -o -a 1 -n 1:32K:0 /dev/sde
         Creating new GPT entries in memory.
         Warning: Setting alignment to a value that does not match the disk's physical block size!
         Performance degradation may result!
         Physical block size = 4096
         Logical block size = 512
         Optimal alignment = 8 or multiples thereof.
         The operation has completed successfully.

     Detailed xfs_info output from two drives for comparison.

     Partitioned and formatted by UnRAID v6.8.3:

         # xfs_info /dev/sde1
         meta-data=/dev/sde1              isize=512    agcount=11, agsize=268435455 blks
                  =                       sectsz=512   attr=2, projid32bit=1
                  =                       crc=1        finobt=1, sparse=1, rmapbt=0
                  =                       reflink=1
         data     =                       bsize=4096   blocks=2929721331, imaxpct=5
                  =                       sunit=0      swidth=0 blks
         naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
         log      =internal log           bsize=4096   blocks=521728, version=2
                  =                       sectsz=512   sunit=0 blks, lazy-count=1
         realtime =none                   extsz=4096   blocks=0, rtextents=0

     Partitioned and formatted by Unassigned Devices v2020.06.28:

         # xfs_info /dev/sdas1
         meta-data=/dev/sdas1             isize=512    agcount=8, agsize=268435455 blks
                  =                       sectsz=4096  attr=2, projid32bit=1
                  =                       crc=1        finobt=1, sparse=1, rmapbt=0
                  =                       reflink=1
         data     =                       bsize=4096   blocks=1953506633, imaxpct=5
                  =                       sunit=0      swidth=0 blks
         naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
         log      =internal log           bsize=4096   blocks=521728, version=2
                  =                       sectsz=4096  sunit=1 blks, lazy-count=1
         realtime =none                   extsz=4096   blocks=0, rtextents=0

     Any thoughts on this, @dlandon?
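     For what it's worth, sgdisk's "-a" value is expressed in logical sectors, so "-a 8" on a drive with 512-byte logical sectors means 8 × 512 = 4096 bytes, which matches the 4K physical block size; "-a 1" permits any 512-byte boundary, hence the warning. A quick way to check what a drive reports (the device name is a placeholder):

         # Block sizes as the kernel sees them
         cat /sys/block/sdX/queue/logical_block_size    # e.g. 512
         cat /sys/block/sdX/queue/physical_block_size   # e.g. 4096

         # First sector of partition 1; a multiple of 8 means it is 4K-aligned
         sgdisk -i 1 /dev/sdX | grep 'First sector'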
  3. Works awesome! No more playing hide and seek trying to guess which drive has the mount I'm looking for. Love the change. Thank you!
  4. An enhancement request... as the number of drives in my UD list in the GUI grows, I find I have to press the (+) to expand each drive to find the one that has the mount point I'm looking for. I'm not sure what the right enhancement is; perhaps a way to expand them all at once to make all the mount points visible?
  5. Not that I'm aware of.
  6. Nice update @Squid! Much appreciated 😀
  7. You mean like this? My current server
  8. @Addy90 Now that I've got more free time on my hands, I'm going to give Ceph a try. I ordered some more RAM for the 4-server cluster that I'll be setting up for it. In the meantime I'm reading and watching tutorial videos. Thanks again for your post. Using Ceph at home is a big leap from UnRAID, but in the years ahead it should prove to scale along with my yearly budget for additional hard drives 🤣
  9. I had some fun updating my user profile for this forum. There's an 'About Me' section, so I added the UnRAID servers I've had over the years. I'll try to get more actual photos.
  10. I'm presently on my 3rd generation of UnRAID server.
      • 2017 to present: SuperMicro SC847, 36 bays
      • 2014 to 2017: Norco 4224, 24 bays
      • 2011 to 2014: Dell Inspiron, I modded the case to hold 10 drives
      • Future (in test): Backblaze Storage Pod v3, 45 drives. Not sure if it'll be UnRAID. Right now I'm testing SnapRAID w/ MergerFS to overcome both the drive limit and to move to an asynchronous parity sync to gain write speed direct to the data disks (a sketch of that setup is below).
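     For anyone curious, here's a minimal sketch of the SnapRAID + MergerFS side of that test; every path, disk name, and mount point below is an example, not my actual layout:

         # /etc/snapraid.conf (minimal example)
         parity /mnt/parity1/snapraid.parity
         content /var/snapraid/snapraid.content
         content /mnt/disk1/snapraid.content
         data d1 /mnt/disk1/
         data d2 /mnt/disk2/

         # /etc/fstab entry pooling the data disks into one mount with mergerfs
         /mnt/disk* /mnt/storage fuse.mergerfs allow_other,use_ino,category.create=mfs 0 0

     Parity is only updated when you run "snapraid sync", which is what makes it asynchronous: writes go straight to the data disks at full speed, and parity catches up later.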
  11. No, it doesn't fit my requirements. I overcame the 30-drive limit; there are various methods. I run Unraid VMs within an Unraid host and then mount the SMB shares from the Unraid VMs with Unassigned Devices on the Unraid host. It took some time to get used to. Rebooting the Unraid host takes a lot of effort: I have to make sure each Unraid VM's array is stopped and each VM is powered down before the host (a sketch of that shutdown ordering is below).
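     The reboot dance can be scripted on the host with libvirt; this is just a sketch, the VM names are made up, and it assumes each guest responds to the ACPI shutdown request by stopping its array and powering down cleanly:

         # Ask each Unraid guest to shut down cleanly
         for vm in unraid-vm1 unraid-vm2; do
             virsh shutdown "$vm"
         done

         # Wait until no guests are still running, then take the host down
         while [ -n "$(virsh list --name)" ]; do
             sleep 5
         done
         poweroff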
  12. @cbr600ds2 Welcome, I can see you are new here. Happy to have a new person asking these kinds of questions; you'll fit right in. Rather than continue this thread, I invite you to join an existing one. You likely have many advanced thoughts if you're asking such things. Try searching the forums, and you're likely to discover a wealth of information from those who have asked the same question years earlier.
  13. Found it... Dumping this here so I can find it in the future... special thanks to him who will recognize his code. Don't use this unless you know WTF you're doing. This will destroy things.