winning.run

  1. Hey Jorge, apologies — I'm a complete ZFS newbie. I tried googling what you asked for, but I can't find the command.
  2. Hi, I have a NetApp DS4246 with 24x 4 TB drives. It worked great for about a month with a NetApp PM8003 card, until an update caused havoc and now I can't access my ZFS raidz1 pool. That's a real concern, as it holds a lot of data. At the time, a zpool import showed most of the drives as unavailable, except for a couple. I'm aware of the compatibility issues between Unraid and the PM8003, so I took the opportunity to get an LSI 9300 card instead. Firing it up tonight, the file system is still unmountable and it wants me to format. Here is the output of zpool import now:

        zpool import -f
           pool: netapp
             id: 17755539231543376304
          state: UNAVAIL
         status: One or more devices contains corrupted data.
         action: The pool cannot be imported due to damaged devices or data.
            see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-5E
         config:

                netapp                      UNAVAIL  insufficient replicas
                  raidz1-0                  UNAVAIL  insufficient replicas
                    sdaa1                   UNAVAIL
                    sdab1                   ONLINE
                    sdac1                   ONLINE
                    sdad1                   ONLINE
                    sdae1                   ONLINE
                    sdh1                    ONLINE
                    sdag1                   ONLINE
                    sdah1                   ONLINE
                    sdai1                   ONLINE
                    sdaj1                   ONLINE
                    sdak1                   ONLINE
                    sdal1                   ONLINE
                    sdam1                   ONLINE
                    sdan1                   ONLINE
                    sdao1                   ONLINE
                    sdap1                   ONLINE
                    sdv1                    ONLINE
                    sdw1                    ONLINE
                    sdas1                   ONLINE
                    sdat1                   ONLINE
                    sdau1                   ONLINE
                    sdav1                   ONLINE
                    replacing-22            DEGRADED
                      10124387424495728984  UNAVAIL
                      sdab1                 ONLINE
                    sdz1                    UNAVAIL

     I notice that the lettering scheme has changed. They all used to be sda, sdb, sdc, but now there's a mix of sdaa1, sdab1, etc. I also note that when the drives are sitting in Unassigned Devices, some are mountable and some aren't. See attached screenshot. Should I try a new config? Please, any guidance would be greatly appreciated.
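     A hedged note on the lettering change described above: Linux sd* names are assigned at enumeration time and commonly shuffle after a controller swap, which ZFS can usually cope with by rescanning from stable identifiers rather than the current /dev names. A minimal sketch of that approach (the pool name `netapp` comes from the output above; whether the pool becomes importable still depends on the state of the devices):

     ```shell
     # Device letters (sda, sdaa, ...) are not stable across controller swaps.
     # Ask ZFS to scan stable by-id paths instead of the current /dev names:
     zpool import -d /dev/disk/by-id

     # If the pool is then reported as importable, import it by name,
     # still searching the by-id directory:
     zpool import -d /dev/disk/by-id netapp

     # Afterwards, check pool health:
     zpool status netapp
     ```

     This is an illustrative command fragment, not a recovery guarantee — if devices are genuinely damaged rather than just renamed, the import will still fail with the same ZFS-8000-5E status.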
  3. Hey guys, first time on the forum — I hope you can provide some guidance.

     I have a 24-disk server that I am attempting to expand due to storage constraints. I recently picked up a NetApp DS4246 with another 24 drives and have it connected via QSFP. Everything is up and running, and all the drives appeared as unassigned devices. I wanted to add them to the array, but that's not possible due to the 30-drive limit. Reading up, it's suggested I create a cache pool with these, which I did. In the cache pool settings, leaving the file system as auto defaults to BTRFS, which puts them in RAID1 and halves my capacity. I really want these protected with one parity drive to maximize storage capacity (the data is replaceable). I'm guessing I will need to use ZFS with RAIDZ? Sorry, I have no experience with ZFS.

     Then, I have a "data" share that I want to span across the 24 drives in the array and the 24 drives in the NetApp cache pool. I really don't want to split my data share unless there are absolutely no other options — it's going to be messy if I have to. I have looked into the share settings. I wanted to use the array as the primary storage and the NetApp as the secondary storage, but this doesn't appear to be possible: selecting the array as primary greys out the secondary selection. If I instead use the NetApp as the cache and the array as the secondary, I believe Mover will keep trying to move data from the NetApp to the array, causing the array to fill up. I would really like to use High-Water allocation across both the array and the NetApp. Once again, I will settle and manage if this isn't an option.

     Your guidance and patience is appreciated. :)
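     For the single-parity layout guessed at above: yes, that is what raidz1 provides — usable capacity of roughly (n-1) drives out of n, with any one drive able to fail. On Unraid this is normally configured in the pool's GUI (file system: zfs, profile: raidz), but the equivalent ZFS command line, sketched here with illustrative device names, shows the shape of it:

     ```shell
     # Illustrative sketch only — device names are placeholders, and on
     # Unraid the GUI creates the pool for you. A raidz1 vdev dedicates
     # one drive's worth of space to parity across its members:
     zpool create netapp raidz1 \
         /dev/disk/by-id/ata-DISK01 \
         /dev/disk/by-id/ata-DISK02 \
         /dev/disk/by-id/ata-DISK03 \
         /dev/disk/by-id/ata-DISK04
     ```

     One caveat worth knowing before committing: a single very wide raidz1 vdev (e.g. all 24 drives in one) has long rebuild times and only one-drive fault tolerance, so wide pools are often split into several smaller raidz vdevs instead — a trade-off between capacity and resilience.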