Netjet1980

  1. That's actually a great point! Might give this a go and see how these shucked 18TB WDs perform as parity.
  2. Well, I would need to get a second 18TB drive (it's a dual parity system). Additionally, I am concerned about system performance using white-label internal-use WD drives for parity; in my experience they run very hot and don't perform great (it's a WD180EDGZ). 18TB Exos drives, on the other hand, are stupidly expensive at the moment. So I was hoping there was a way to limit the usable drive size from 18TB to 16TB somehow (one possible approach is sketched after this list).
  3. Hello! I am aware that your parity drives generally need to be at least as large as your largest data disk. Here is the thing: I ordered a 16TB WD Elements from Amazon, but the nice guys shipped me an 18TB. I am running dual parity with 2x 16TB Exos drives. I just tried to add the 18TB to my array, expecting it to simply be accepted (with only 16TB usable). I am used to this working on Synology NAS systems, where you can only use as much space as the parity allows, and when you later upgrade the parity disks the additional space gets allocated. Sadly, Unraid doesn't let me do this and expects a parity swap procedure instead. Before I put the 18TB to the side until I can afford DECENT 18TB parity drives (I don't fancy those "internal use" WDs as parity compared to the great-performing Exos drives): is there really no other way?
  4. I second this! We run regular large backups, which I would like to land on a cache to make full use of 10GbE, but these would need to be moved off that cache quickly due to size restrictions. At the same time, I don't want to run the mover for the main cache pool too often during peak times. Individual schedules for different cache pools would be ideal; there are countless use cases for this. I hope this can be introduced before 6.9 stable goes live. (A cron-based workaround is sketched below.)
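
Not from the original posts: below is a minimal sketch, assuming a Linux/Unraid shell with `hdparm` and `blockdev` available, of one way to clip an oversized drive with an ATA Host Protected Area so it reports the same sector count as an existing 16TB parity drive. The device paths /dev/sdX and /dev/sdY are placeholders, and Unraid does not officially support clipped drives, so treat this as an experiment rather than a supported procedure.

```python
import subprocess

def sector_count(dev: str) -> int:
    """Size of a block device in 512-byte sectors, via blockdev --getsz."""
    out = subprocess.run(["blockdev", "--getsz", dev],
                         capture_output=True, text=True, check=True)
    return int(out.stdout.strip())

# Placeholder device paths: adjust to your system before running.
REFERENCE_16TB = "/dev/sdX"  # existing 16TB Exos parity drive
OVERSIZED_18TB = "/dev/sdY"  # shucked 18TB WD to clip down

target = sector_count(REFERENCE_16TB)
print(f"reference drive: {target} sectors")
print(f"18TB drive:      {sector_count(OVERSIZED_18TB)} sectors")

# hdparm -N p<count> writes a permanent Host Protected Area so the drive
# reports only <count> sectors. Printed rather than executed on purpose:
# verify the numbers, then run the command by hand and power-cycle the drive.
print(f"to clip: hdparm -N p{target} {OVERSIZED_18TB}")
```

After a power cycle the clipped drive should enumerate at the reference size, at which point it is no larger than the existing parity drives; `hdparm -N` with the full native sector count can undo the clip later. Whether a WD180EDGZ's firmware honors HPA cleanly is an assumption, not something verified here.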
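Also an addition, not part of the thread: until per-pool mover schedules exist, one workaround is a small script run from cron (for example via the User Scripts plugin) that flushes only the backup pool with rsync on its own schedule. The pool path /mnt/backup_cache and the share name backups are hypothetical; /mnt/user0/ is Unraid's array-only view of user shares, which keeps the moved files from landing back on a cache.

```python
import subprocess
from pathlib import Path

# Hypothetical pool and share names: adjust to your setup.
BACKUP_POOL = Path("/mnt/backup_cache/backups")  # 10GbE landing zone
ARRAY_TARGET = "/mnt/user0/backups/"             # array-only user share path

def flush_backup_pool() -> None:
    """Move finished backups from the cache pool onto the array,
    removing each source file only after it transfers cleanly."""
    if not BACKUP_POOL.is_dir():
        return  # pool not mounted; skip this run silently
    subprocess.run(
        ["rsync", "-a", "--remove-source-files",
         f"{BACKUP_POOL}/", ARRAY_TARGET],
        check=True,
    )

if __name__ == "__main__":
    flush_backup_pool()
```

Scheduled hourly, this empties the backup pool independently of the main mover. One untested caveat: the backup job and this script should not overlap, or rsync may move partially written files.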