b10011

  1. Hi, I can't get rid of SMART health check errors. At first I had the drives in a 6-drive 5.25" enclosure, which had poor-feeling Chinese SATA cables, so I was sure the problem was the enclosure or the cables. I ordered good-quality (afaik) SATA cables and connected my 2 cache SSDs directly to the motherboard. I still keep getting those errors, but now only for the "Cache 2" disk. I run the SSDs in btrfs RAID0 (for higher write speeds). It might not be relevant, but I have Dynamix SSD TRIM scheduled daily.

     Error code: 199 (UDMA CRC error rate)
     Drives: 2x Samsung 860 EVO 2TB
     Age of the drives: 5 months

     btrfs filesystem df:
     Data, RAID0: total=384.00GiB, used=131.38GiB
     System, RAID1: total=32.00MiB, used=48.00KiB
     Metadata, RAID1: total=2.00GiB, used=272.06MiB
     GlobalReserve, single: total=26.30MiB, used=0.00B

     btrfs balance status:
     No balance found on '/mnt/cache'

     Is this a known problem with btrfs RAID0 or with the drives? What should I try next? Does the risk of data corruption increase when this keeps happening?
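     In case it helps, the counters can be checked from the console like this (a sketch; /dev/sdb and /dev/sdc are placeholders for whatever the cache SSDs enumerate as):

        # SMART attribute 199 counts CRC errors on the SATA link itself
        # (drive <-> controller), which points at cabling or the port:
        smartctl -A /dev/sdb | grep -i crc
        smartctl -A /dev/sdc | grep -i crc

        # btrfs keeps separate per-device counters; corruption_errs rising
        # here would mean actual bad data, not just retried transfers:
        btrfs device stats /mnt/cache
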
  2. Interesting! Stupid question: would unraid support SSD TRIM if I didn't use parity for the array? Anyway, I'll probably have to go for the following configuration:

     - unraid SSD array, no parity
     - a (real) hardware RAID0 HDD array (just so that all the drives appear as one)
     - automation of the copying process (roughly as sketched below)

     That way I have 1-drive redundancy, since any one drive breaking destroys either the SSD array or the HDD array but never both, and it leaves open the possibility of adding M.2 parity later.
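     The copy automation could be as simple as a nightly rsync, scheduled e.g. with the User Scripts plugin (an untested sketch; the mount points are placeholders, not real paths from my setup):

        # Mirror the SSD array share onto the HDD RAID0 volume once a night;
        # -a preserves ownership/timestamps, --delete keeps it an exact duplicate.
        rsync -a --delete /mnt/ssd_array/ /mnt/hdd_raid0/copy/
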
  3. Fast connection: AI training from datasets on the server + fast desktop backups. I can deal with losing data from the past 1 day. After all, I'm going to have daily backups to the array + off-site. Most of the work I do will be "backed up" in git, photos are also in Dropbox (easier for the phone), and anything else isn't so important that I couldn't risk it for 1 day.
  4. 200-400 MB/s is approximately 20-40% of the speed that the connection can handle, and that would only get worse over time.

     The SSDs in the cache are RAID1? And it cannot be used as RAID0? I would want to run the cache disks as one combined fast drive with zero redundancy, and every night the server would duplicate the SSD cache data to the HDDs in the array. That way I would have 1-drive redundancy at least for the data up to last night.

     The only way I see it working as an all-SSD array would be to have Samsung EVO drives in the array and a PRO M.2 drive as parity (so that it's fast enough for the 10 GbE). But M.2 isn't that easy to put into the array (it would need lots of PCIe M.2 adapters), so it would have to be large enough (say 4TB) that I could expand the array with drives of up to 4TB. Once I need to go over 4TB, I would have to throw the M.2 in the trash and get a larger M.2 + larger array drives. That sounds way more expensive than a zero-redundancy SSD cache with HDD duplication every night.
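     (It turns out the btrfs cache pool only defaults to RAID1; it can be rebalanced to RAID0. A sketch, run against the mounted pool, which drops all redundancy, exactly the trade-off described above:)

        # Convert data to striped RAID0; metadata is commonly left RAID1 so a
        # single bad sector can't take out the whole filesystem tree.
        btrfs balance start -dconvert=raid0 -mconvert=raid1 /mnt/cache

        # Verify the new profiles:
        btrfs filesystem df /mnt/cache
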
  5. I have understood that that part is supported, but I need parity, and that, on the other hand, is not supported. If I added HDD parity, everything would slow down significantly, and I wouldn't want that. The only solution I've come up with is the one I mentioned.
  6. I have found out that you cannot (or at least shouldn't) create an all-SSD array with parity. My current plan would be the following:

     - 2x 2TB SSDs as cache
     - 1x 4TB HDD in the array
     - No parity (at least for now)
     - The data will never be moved to the HDD, only duplicated

     This setup would be good for me because:

     - Really fast file access (going to have point-to-point 10 GbE between the desktop and the unraid server). I do understand that it's overkill for the time being and 2x SSD is still slower than 10 GbE in practice.
     - TRIM can be kept on for the SSDs.
     - Data is secured against corruption and a single drive failure (whether in the cache or in the array).
     - HDDs are cheap; a 4TB WD RED costs 18% of the 2x 2TB Samsung SSD price here, so data duplication to HDDs is almost free.
     - The only time the array HDD would spin up is when the data gets copied (once a day).
     - Cache SSDs and array HDDs could be added as needed, no need to rebuild a real RAID every time.
     - SSDs can differ in size; now I have 2x 2TB, in a few years I'll most likely have 2x 2TB + 2x 4TB and so on as prices go down.

     So the questions are:

     - Did I miss something important?
     - Is there a way to do this without tinkering or hacky solutions, are there plugins for this purpose?
     - If tinkering is required, what would be the most reasonable way? (A sketch of one option is below.)
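     One tinkering option would be rsync with hardlinked daily snapshots (an untested sketch; /mnt/disk1/backup is a placeholder path, /mnt/cache is where the pool mounts):

        # Duplicate the cache to the array once a night; files unchanged since
        # yesterday are hardlinked against the previous snapshot, so each extra
        # day of history costs almost no space.
        today=$(date +%F)
        rsync -a --link-dest=/mnt/disk1/backup/latest \
          /mnt/cache/ /mnt/disk1/backup/$today/

        # Repoint "latest" at the new snapshot for the next run.
        ln -sfn /mnt/disk1/backup/$today /mnt/disk1/backup/latest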