Marshalleq

Members
  • Posts

    811
  • Joined

  • Last visited

1 Follower

About Marshalleq

  • Birthday October 17

Converted

  • Gender
    Male
  • URL
    https://www.tech-knowhow.com
  • Location
    New Zealand
  • Personal Text
    TT

Recent Profile Visitors

2132 profile views

Marshalleq's Achievements

Collaborator (7/14)

Reputation: 109

  1. I wonder why that doesn't exist already. I also wonder what happens when you have multiple pools, i.e. do you need multiple cache files? I guess I gotta do some googling. I have quite a large number of pools.
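     EDIT - it looks like all pools share the one cache file by default, with a per-pool cachefile property if you ever need to split them. A rough sketch, with the pool name tank as a placeholder:

     ```
     # All pools register in one shared cache file by default.
     # Check which cache file a pool is using:
     zpool get cachefile tank

     # Point a pool at the default (or a custom) cache file:
     zpool set cachefile=/etc/zfs/zpool.cache tank

     # Import every pool recorded in a cache file in one go:
     zpool import -c /etc/zfs/zpool.cache -a
     ```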
  2. I would expect to edit the exports file manually, and to make sure it persists across reboots. I'm trying to remember whether ZFS has native NFS sharing built in like it does for SMB; if it does, I assume it will be the same, i.e. edit the existing sharing mechanism. I think the main point is that the sharing mechanisms built into the Unraid GUI currently do not work for ZFS; you've got to do it at the command line. Hope that helps. Marshalleq
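     For what it's worth, ZFS does have a native sharenfs property, parallel to sharesmb. A rough sketch of both routes, with the dataset name and subnet as placeholders:

     ```
     # Native ZFS route: let ZFS manage the export itself
     # (options are passed through to the system NFS server).
     zfs set sharenfs="rw=@192.168.1.0/24" tank/share
     zfs get sharenfs tank/share

     # Manual route: classic exports file, then reload the NFS server.
     echo '/mnt/tank/share 192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
     exportfs -ra
     ```

     Note that Unraid's root filesystem lives in RAM, so a hand-edited /etc/exports has to be reapplied at boot (e.g. from the go script) or it won't survive a reboot.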
  3. Hi all, does anyone know why the zdb command does not work? Is this something that could be fixed? I fairly regularly find it would be useful to have. Thanks.
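     EDIT - my guess is it's the missing cache file: zdb reads /etc/zfs/zpool.cache by default, and Unraid imports pools without one. A sketch of the usual workarounds, with the pool name tank as a placeholder:

     ```
     # Work directly from the on-disk labels instead of the cache file:
     zdb -e tank

     # Or register the pool in a cache file and point zdb at it:
     zpool set cachefile=/etc/zfs/zpool.cache tank
     zdb -U /etc/zfs/zpool.cache tank
     ```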
  4. I wonder if that will work for my Nextcloud docker. Hmmm
  5. Yep, understood. The Unraid downgrade was all about performance issues: running RC1, my Chia container had huge performance issues, and downgrading resolved that. I did notice the loop service at 100% as well, and trying a docker image froze the system completely. So there's still something problematic about ZFS, Docker and Unraid; maybe it's the driver issue you mention.
  6. Aha, that makes sense! Thank you! I hadn't realised Unraid was actually using ZFS anywhere yet. I've downgraded from RC1, but kept the docker folder option (created a new one though) - it didn't work for me last time, but so far the performance issues are solved - so I think the issues were RC1, though it's too soon to tell obviously. Then the question will be: what is it about RC1 that's causing issues - argh....
  7. So it turns out these are definitely not snapshots; something is creating datasets. The mount points all say they're legacy, which apparently means the mount is managed via fstab - which of course these aren't. I'm guessing it's something odd with docker folder mode, so I'm going to go back to an image and try that.
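     For anyone else hitting this: I believe Docker's zfs storage driver creates a legacy-mounted dataset per image layer, which would explain it. A quick way to check, with tank/docker as a placeholder for the real dataset:

     ```
     # List datasets and mountpoints under the docker dataset;
     # "legacy" means mounted via mount(8) rather than by ZFS itself.
     zfs list -r -o name,mountpoint tank/docker

     # Confirm no snapshots are involved:
     zfs list -r -t snapshot tank/docker
     ```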
  8. Hi all, recently I made two changes: (1) upgraded to RC1 of Unraid (which from memory has upgraded ZFS), and (2) changed from a docker image file with btrfs to a docker folder, ironically called docker image. I've been trying to fault-find some performance issues that have subsequently occurred, and I've found a bunch of random snapshots have been taken of the docker image folder. There are no automated snapshots set for this folder, and I'm wondering if anyone else has noticed anything similar? See screenshot. I'll probably just delete the dataset and its subfolders and create a new one to see if that fixes it, but just in case....
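     In case it's useful, this is roughly the reset I have in mind, with tank/docker as a placeholder (the -n flag makes the first destroy a dry run):

     ```
     # Everything (datasets and snapshots) under the docker dataset:
     zfs list -r -t all tank/docker

     # Dry-run the recursive delete first, then do it for real:
     zfs destroy -rnv tank/docker
     zfs destroy -r tank/docker

     # Recreate a fresh dataset for the docker folder:
     zfs create tank/docker
     ```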
  9. Wow, looking forward to getting my hands on this! Fantastic idea, and nice job!
  10. Hi, has anyone got any working scripts etc. for backing up Teams (I guess OneDrive would do) to local? It's a not-so-well-known fact that Azure doesn't have any backups, other than to protect itself, so I was thinking to suck this down to a ZFS array so I can use znapzend across it like I do everything else. Seems like rclone could be a great solution. Thanks, Marshalleq
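      Sketching what I have in mind with rclone - the m365 remote name and paths are placeholders, and Teams files live in SharePoint/OneDrive document libraries, which rclone's onedrive backend can reach:

      ```
      # One-time: create the Microsoft remote interactively.
      rclone config

      # Mirror it down to a local ZFS dataset on a schedule.
      rclone sync m365:/ /mnt/tank/m365-backup \
          --fast-list --transfers 8 --log-file /var/log/rclone-m365.log
      ```

      znapzend can then snapshot and replicate the local dataset like any other.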
  11. Yep, I'm aware of that. What I said still stands though: its performance is disappointing, and there's no way of getting around that. Obviously these manufacturers are banking on most people's use cases not mattering, but as soon as you want to do something serious, no dice.
  12. So yes, my original assessment stands then: its performance is abysmal. Why doesn't really matter - though I read that link and understand it's another drive that cheats with a fast cache at the start and slows once that's full. So great for minor bursts but not much else. I was using a RAM disk too with those numbers above. Yes, I got these because of the 'advertised' speed and the advertised endurance. Normally I'd buy Intel, but the store had none. I'm fairly new to Chia, but am happy it's levelled off a bit, that's for sure. Those people that go and blow 100k on drives and are only in it for the money - they deserve to leave! I do have 2 faulty drives I need to replace which will have plots added, but that's it. I should add, I'm grateful for the link, as now I understand it's definitely not me! Thanks. Marshalleq.
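      EDIT - for anyone wanting to confirm the cache behaviour themselves, a sustained write well past the SLC cache size shows it clearly. A sketch with fio, where the path and size are placeholders and the test file is disposable:

      ```
      # Sequential write far larger than any SLC cache, bypassing the
      # page cache; watch bandwidth drop once the fast cache is exhausted.
      fio --name=sustained-write --filename=/mnt/nvme/fio-test \
          --rw=write --bs=1M --size=100G \
          --ioengine=libaio --iodepth=16 --direct=1
      ```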
  13. Ah, so you have the same drives, that's interesting. I got mine due to low stock of Intel, for some Chia plotting. Their performance is actually less than some much older Intel SATA-connected drives. And when I say less, I was putting the two FireCudas into a zero-parity stripe for performance vs 4 of the SATA drives in a stripe. Given the phenomenal advertised speed difference between NVMe and SATA, I was not expecting reduced performance. Mine are also the 520s in the 500GB flavour. However I don't have PCIe 4 and have them connected via a pass-through card in 2 x PCIe x16 slots. My motherboard is an X399 with gazillions of PCIe lanes, so that's not a limiting factor. Even without PCIe 4, it should still have been a very large performance increase as far as I know. But this is my first foray into NVMe - either way, the result was disappointing. EDIT - I should add that Chia plotting is one of the few use cases that exposes good/bad drive hardware and connectivity options. I'm not sure how much you know, but it writes about 200GB of data and outputs a final 110ish GB file. This process takes between 25 minutes and hours depending on your setup, and disk is the primary factor that slows everything down. I was managing about 28 minutes on the Intels and about 50 minutes on the FireCudas. I should also add (because I can see you asking me now) that a single Intel SSD also outperformed the FireCudas, coming in at about 33 minutes. The single Intel was a D3-S4510, and the 4 Intels in the stripe were DC S3520 240GB M.2s. That should give you enough information to compare them and understand (or maybe make some other suggestion) as to why the FireCudas were so much slower. On paper, I don't think they should have been.
  14. @TheSkaz I'd tend to start from the angle @glennv has alluded to, i.e. hardware. The only place I've seen ZFS struggle is when it's externally mounted on USB. I think there is a bug logged for that, but I'm not sure as to its status. As you can imagine, ZFS needs consistent communication with the devices to work properly, so my first thought is the hardware not keeping up or being saturated somehow. I've also been surprised at just how rubbish my NVMe Seagate FireCuda drives are compared to the Intel SSDs I have. So that could either confirm your theory, or it shows the potential variability in hardware. I would be interested in what you find either way.
  15. Hi there, I'm getting "Execution Error, Bad Data" - and really, it's quite a simple container, isn't it? Anything obvious I'm missing? Thanks.