Marshalleq

Everything posted by Marshalleq

  1. Yep, I'm aware of that. What I said still stands though: its performance is disappointing. There's no way of getting around that. Obviously these manufacturers are banking on most people's use cases not mattering, but as soon as you want to do something serious, no dice.
  2. So yes, my original assessment stands then: its performance is abysmal. Why doesn't really matter - though I read that link and understand it's another drive that cheats with a fast cache at the start and slows down once that's exhausted. Great for small bursts but not much else. I was using a RAM disk too with the numbers above. Yes, I got these because of the 'advertised' speed and the advertised endurance. Normally I'd buy Intel, but the store had none. I'm fairly new to Chia, but I'm happy it's levelled off a bit, that's for sure. Those people who blow 100k on drives and are only in it for the money - they deserve to leave! I do have 2 faulty drives I need to replace which will have plots added, but that's it. I should add, I'm grateful for the link, as now I understand it's definitely not me! Thanks. Marshalleq.
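For anyone wanting to check for this kind of cache cliff themselves, a sustained sequential write big enough to blow past the pseudo-SLC cache will show it. A rough sketch with fio - the scratch path, size and job settings are just assumptions, adjust for your drive:

```
# Write far more than the drive's fast cache can absorb, so the
# steady-state (post-cache) sequential write speed becomes visible.
fio --name=sustained-write \
    --directory=/mnt/scratch \
    --rw=write --bs=1M --size=100G \
    --ioengine=libaio --iodepth=16 --direct=1 \
    --numjobs=1
```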
  3. Ah, so you have the same drives - that's interesting. I got mine due to low stock of Intel, for some Chia plotting. Their performance is actually less than some much older Intel SATA-connected drives. And when I say less, I was putting the two FireCudas into a zero-parity stripe for performance vs four of the SATA drives in a stripe. Given the phenomenal advertised speed difference between NVMe and SATA, I was not expecting reduced performance. Mine are also the 520s in the 500GB flavour. However I don't have PCIe 4 and have them connected via a pass-through card in two PCIe x16 slots. My motherboard is an X399 with gazillions of PCIe lanes, so that's not a limiting factor. Even though I don't have PCIe 4 it should still be a very large performance increase as far as I know. But this is my first foray into NVMe - either way, the result was disappointing. EDIT - I should add that Chia plotting is one of the few use cases that exposes good/bad drive hardware and connectivity options - I'm not sure how much you know, but it writes about 200GB of data and outputs a final 110ish GB file. This process takes between 25 minutes and hours depending on your setup, and disk is the primary factor that slows everything down. I was managing about 28 minutes on the Intels and about 50 minutes on the FireCudas. I should also add (because I can see you asking me now) that a single Intel SSD outperformed the FireCudas too, coming in at about 33 minutes. The single Intel was a D3-S4510 and the 4 Intels in the stripe were DC S3520 240G M.2s. That should give you enough information to compare them and understand (or maybe make some other suggestion) why the FireCudas were so much slower. On paper, I don't think they should have been.
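For reference, the stripes were just plain no-redundancy pools, created roughly like the sketch below (shown as ZFS since that's what I run; the pool names and device IDs are placeholders, and ashift=12 is just the usual choice for 4K-sector flash):

```
# Two-way NVMe stripe used purely as plotting scratch space
zpool create -o ashift=12 plotscratch \
    /dev/disk/by-id/nvme-FIRECUDA-520-A \
    /dev/disk/by-id/nvme-FIRECUDA-520-B

# Four-way SATA SSD stripe used for the comparison runs
zpool create -o ashift=12 plotscratch-sata \
    /dev/disk/by-id/ata-S3520-A /dev/disk/by-id/ata-S3520-B \
    /dev/disk/by-id/ata-S3520-C /dev/disk/by-id/ata-S3520-D
```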
  4. @TheSkaz I'd tend to start from the angle @glennv has alluded to, i.e. hardware. The only place I've seen ZFS struggle is when it's externally mounted on USB. I think there is a bug logged for that, but I'm not sure of its status. As you can imagine, it needs consistent communication with the devices to work properly, so hardware not keeping up or being saturated somehow is my first thought. I've also been surprised at just how rubbish my NVMe Seagate FireCuda drives are compared to the Intel SSDs I have. So that could either confirm your theory or show the potential variability in hardware. I would be interested in what you find either way.
  5. Hi there, I'm getting "Execution Error, Bad Data" - and really it's quite a simple container, isn't it? Anything obvious I'm missing? Thanks.
  6. If that happens now, that's great - it didn't always. I so seldom reboot my prod box now that I have no idea. Having a dev box and a prod box was a really good move - it definitely improved the uptime in the house - I can't stop fiddling sometimes!
  7. As far as I know, a DMZ is actually not meant to be a forward-everything zone; it just happens to be implemented that way on the cheap routers you get from an ISP. So the advice is sound for that segment. If however you have a proper firewall, like OPNsense/pfSense and many others, putting something in the DMZ doesn't automatically forward all ports to it. It's just meant to be a place that protects your internal network: the DMZ hosts are limited in where they can connect internally, and the public is limited in what it can reach. These days networks are so complicated that the DMZ branding has, I assume, mostly gone out the window, but the concept continues to be used, and these cheap routers keep it as a free-for-all to get things going when people don't fully understand what they're doing. That's my 2c anyway - just wanted to throw a bit of education along with the 'don't do' statement.
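To put that in concrete terms, a 'real' DMZ only forwards the ports you actually serve and blocks the DMZ from reaching the LAN. A rough sketch in iptables terms - the interfaces, addresses and port are all made up for illustration:

```
# Hypothetical layout: WAN on eth0, DMZ on eth2 (192.168.100.0/24),
# internal LAN on 192.168.1.0/24, web server at 192.168.100.10.

# Forward only the service you actually expose, not every port
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 \
    -j DNAT --to-destination 192.168.100.10:443
iptables -A FORWARD -i eth0 -o eth2 -p tcp -d 192.168.100.10 --dport 443 -j ACCEPT

# And stop a compromised DMZ host from wandering into the internal LAN
iptables -A FORWARD -i eth2 -d 192.168.1.0/24 -j DROP
```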
  8. Hi, yeah I did try that - and I just did it again for good measure, with the same result. (BTW, I've joined the Discord group now too, thanks.) Anyway, I found the culprit - I had the below in my config; note the first # which needed to be removed, and now it starts. Though I'm not convinced that's right yet - should there be something after keep_alive_monitor? Like enabled or true?
#keep_alive_monitor:
  enable_remote_ping: true
  ping_url: 'www.domain.com'
Also, do you have any idea if the rpc_server traceback below is a problem, or just related to bad hosts? It's in my logs a few times, but I suspect it's happening each time I restart. I'm not sure if it's Chia related or Machinaris.
2021-09-05T13:20:10.323 full_node chia.full_node.full_node: INFO peer disconnected {'host': '73.254.242.75', 'port': 8444}
2021-09-05T13:20:11.576 wallet chia.rpc.rpc_server : WARNING Error while handling message:
Traceback (most recent call last):
  File "/chia-blockchain/chia/rpc/rpc_server.py", line 83, in inner
    res_object = await f(request_data)
  File "/chia-blockchain/chia/rpc/wallet_rpc_api.py", line 1138, in get_farmed_amount
    tx_records: List[TransactionRecord] = await self.service.wallet_state_manager.tx_store.get_farming_rewards()
AttributeError: 'NoneType' object has no attribute 'tx_store'
2021-09-05T13:20:12.487 full_node full_node_server : INFO Connected with full_node {'host': '176.241.136.51', 'port': 8444}
2021-09-05T13:20:12.491 full_node full_node_server : INFO Connection closed: 176.241.136.51, node id: fd30bde4674f602d160b235ade905862112390d8a704d658c50c544318e234fa
2021-09-05T13:20:12.492 full_node chia.full_node.full_node: INFO peer disconnected {'host': '176.241.136.51', 'port': 8444}
Thanks.
  9. Hi all, just switched over from the standard Chia docker container so that I could get alerting. I was wondering, should the alerting be saying it is stopped? It's been running for a few hours now - but just thought I'd check, thanks.
  10. In my experience you can no longer use Unraid's sharing menu to share files via SMB that are stored on ZFS. Instead, use the /boot/config/smb-extra.conf that @jortan mentions below. Unraid has added other automation to their sharing system, including automatically creating the folder, which does not fit well with a folder that already exists. Of course this functionality is likely to change when they finally add ZFS support to Unraid natively - we are all waiting for that day for various reasons! Oh, and you do have to stop and restart the array (or reboot) for these shares to become active, so it pays to plan them out in advance.
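For anyone new to it, adding a share this way is just a standard Samba share block in smb-extra.conf pointing at the ZFS mount point. A minimal sketch - the share name, dataset path and user are placeholders for your own setup:

```
# Append a share definition for an existing ZFS dataset mounted at /mnt/tank/media
cat >> /boot/config/smb-extra.conf << 'EOF'
[media]
    path = /mnt/tank/media
    browseable = yes
    writeable = yes
    valid users = myuser
EOF
# Remember: the share only appears after stopping/starting the array or rebooting.
```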
  11. Thanks, looks like it might be OK then - I'll just have to try it. That thread, if I recall, was multiple people with issues, not just me. And mine 'went away', for lack of a better explanation, but one person's didn't, so it was quite inconclusive really. They recently posted back on the ZFS side that it's still an issue for them too. Anyway, thanks for the info - I'll hold my breath until I upgrade!
  12. Now that you mention it, I think I recall someone using ZFS 2.0 who was also having this problem, and that's what was confusing me. But 'oh crap', because if this is still a problem, it means I can't run native ZFS any more, which is a major problem preventing me from upgrading. I don't even have any disks for Unraid arrays other than a USB stick being used for a dummy array - and I definitely don't want to run docker from that. If I recall, even running docker in a folder rather than an image presented the same issue. So here goes a revival of the bug thread, I guess. The solution might be to shift to TrueNAS Scale - and as awesome as that is, there are a few challenges to overcome with it - e.g. I'm on the fence about shifting to Kubernetes - that's the major one, to be honest.
  13. Can I just say that I don't have this issue and I do store docker.img on a ZFS drive. I have no idea why - I did have the issue for a while, but one of the updates fixed it. I don't even run an unraid cache drive or array (I'm entirely ZFS) so couldn't anyway. So it may not apply to everyone.
  14. I agree that multiple arrays is important. And I do take your logic that there's a plugin for ZFS - it's a really good point TBH. That said, I yearn for ZFS releases aligned to the beta updates so we can test them properly (I can't until the plugin is updated), and for some alignment with other things in the GUI. It's not just listing the ZFS stuff, it's all the spin-downs and other things that make it complicated. Also, 150Mbps is less than the performance of a single drive. Unraid arrays are awesome, but speed is not part of that awesome. With basically any other array layout you get roughly (n - 1) times single-drive speed depending on your setup, i.e. around 250 x (n - 1). When I bailed on Unraid arrays (I still use Unraid, but with proper ZFS), the performance of the whole system, including unexpected areas like the responsiveness of the GUI, was night and day. I feel like my computer is running properly now, and ZFS isn't even the fastest RAID - it's just the safest.
  15. I don't think ZFS wins with snapshots of running machines, because technically you are meant to stop the machine before taking an image. Practically, though, many people don't, and the machine will recover in 90% of cases. What you can do is script the whole thing. This should be possible on both BTRFS and ZFS, but my experience is with ZFS because it's more reliable, so I can't say I've done it on BTRFS.
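The scripted version can be as small as pause, snapshot, resume. A rough sketch under ZFS - the VM name and dataset are placeholders, and the pause is optional but gives you a cleaner point-in-time:

```
#!/bin/bash
# Briefly pause the VM so the snapshot is closer to consistent,
# snapshot the dataset holding its disk image, then resume it.
VM=Win10                    # libvirt domain name (placeholder)
DATASET=tank/vm/win10       # dataset holding the vdisk (placeholder)
STAMP=$(date +%Y%m%d-%H%M%S)

virsh suspend "$VM"
zfs snapshot "${DATASET}@${STAMP}"
virsh resume "$VM"
```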
  16. Actually, in my experience I'd agree, but only in reverse: ZFS doesn't fix the cause, but not having BTRFS does remove it. If you search these forums (and the internet) you'll find plenty of examples of BTRFS failing randomly. In fairness, that was a year or so ago and I'm not up to speed on whether they've fixed it yet. What you do get from ZFS is rock-solid stability and early detection (and healing) of these kinds of issues. So no, it doesn't fix the cause, but it likely removes the culprit, or at least repairs things properly while you figure it out - which BTRFS didn't do in the 3-4 times it completely killed my cache, at which point I decided to bail on it altogether.
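The detection and healing side is mostly just regular scrubs plus checking the status output. A quick sketch (the pool name 'tank' is a placeholder):

```
# Read every block in the pool and repair anything whose checksum fails,
# where redundancy exists to repair from
zpool scrub tank

# Check scrub progress and whether any read/write/checksum errors turned up
zpool status -v tank
```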
  17. ZFS doesn't actually need a lot of RAM; that's a common misconception. What happens is that the primary ZFS cache (the ARC, which sits in RAM) defaults to quite a high limit in some distributions / implementations, which makes it seem like ZFS needs a lot of RAM. This can be manually set to a lower value. You can also get ZFS to cache at a reduced level, i.e. tell it to cache metadata rather than whole files, so that it doesn't use as much RAM. This works well for large files that are not read often. As for ECC, it would be great if RAM manufacturers gave it to us for the one extra chip it requires (kinda like RAID parity on a RAM stick) at a non-exorbitant price, yes. I do now use ECC on my production system, but I've also used non-ECC RAM on many other systems for many years without issue. It's a pretty big topic once you get into exactly when you might lose data.
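Both knobs are one-liners on ZFS on Linux. A sketch - the 8GiB cap and the dataset name are just example values:

```
# Cap the ARC at 8GiB (value in bytes); can be changed at runtime without a reboot
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max

# Or, per dataset, cache only metadata rather than file contents -
# handy for large media files that are rarely re-read
zfs set primarycache=metadata tank/media
```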
  18. Not sure if you've seen this? https://arstechnica.com/gadgets/2021/06/raidz-expansion-code-lands-in-openzfs-master/
  19. ZFS lets you take snapshots of VMs. It has additional snapshot features too, such as being able to make a duplicate of a VM with zero space requirement (i.e. it only stores the differences). I'm doing this now with Unraid on unofficial ZFS and it's fantastic.
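The zero-space duplicate is just a snapshot plus a clone. A minimal sketch with placeholder dataset names:

```
# Snapshot the VM's dataset, then clone it into a second VM that initially
# consumes no extra space - only blocks that later diverge are stored
zfs snapshot tank/vm/win10@golden
zfs clone tank/vm/win10@golden tank/vm/win10-test
```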
  20. 'Most reliable' but not actually reliable is how I would describe it. Scan these forums and there are countless examples of failed RAID 1 BTRFS. Good call on the priorities, and let's hope they bring ZFS in at number 1! Thanks for posting.
  21. Done, and voted for ZFS. I hope Unraid comes to the party. I'm a little tired of the consumer end and am stalking TrueNAS Scale because I miss native functionality such as backups, domains, ACLs and such like. Not trying to be negative about a great product, because they do listen to their customers well - but let's face it, the target market of Unraid probably doesn't fully appreciate what it's missing. Sometimes a bit of tech experience does help. ZFS is one thing, but there are a few basics that are making me look elsewhere - quite reluctantly, I might add. Good thing I have prod and dev boxes. I was surprised to see this poll; really, I thought this was a foregone conclusion - I guess they're still on the 'BTRFS is awesome' bandwagon - but while that may have moved on and be true now (not convinced), it wasn't before, and we haven't forgotten! ZFS for the win!
  22. @theonewithin Unraid doesn't work like FreeNAS because ZFS isn't supported (yet). I assume that you're trying to connect to datasets. Personally I've only used image files on ZFS on Unraid, which work extremely well. If you want to try a dataset because that's how all of yours are set up, I suspect you will have to look into the whole disk-by-id thing or something like that. Otherwise it is possible to convert them to an image using qemu-img convert.
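The conversion route is straightforward if your VM disks are zvols or raw files. A sketch with placeholder paths:

```
# Convert a raw zvol (or raw disk file) into a qcow2 image that a
# standard Unraid VM template can point at
qemu-img convert -p -f raw -O qcow2 \
    /dev/zvol/tank/vm/freenas-disk0 \
    /mnt/user/domains/freenas/vdisk1.qcow2
```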
  23. I'm pretty sure I was able to run about 30 dockers in less than 5G of space - but I'd have to check that. What you can do is just stop the docker service, rename the image file / directory and restart the service. If configured as you say, the image file / directory should be much smaller. I have seen some docker images not playing nicely with appdata by default, so with this you can see which folder is growing and investigate from there. If one folder keeps growing, you will likely find you'll have to increase your image file size or allow more space for the folder indefinitely.
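Roughly what that stop/rename/restart looks like from the command line - a sketch only, since the rc.docker script and the docker.img location below reflect a typical Unraid install and may differ on yours:

```
# Stop docker, set the old image aside, start docker so a fresh image is created
/etc/rc.d/rc.docker stop
mv /mnt/user/system/docker/docker.img /mnt/user/system/docker/docker.img.old
/etc/rc.d/rc.docker start
```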
  24. One of your dockers almost certainly has cache / temporary / cancelled files or similar contained within it. My guess would be qBittorrent VPN. Since you have it in a folder, you should be able to figure that out with du -sh /folderpath. Also, I do believe you would save space - because the difference between the 20GB image file and the actual docker space used (in your case a 3GB difference) is still a difference. The main exception is when you have compression on the filesystem hosting the image, such as ZFS - it will recognise the image file's blank space and compress it effectively to zero.
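To narrow down which container is doing the growing, sorting the per-folder sizes makes it obvious; the path below is a placeholder for wherever your docker folder lives:

```
# Per-container folder usage, largest last - adjust the path to your docker folder
du -sh /path/to/docker-folder/* | sort -h
```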
  25. The kernel build is more flexible and you can compile things in, as you know. The plugin is less flexible in that regard, but simpler when kernels change. Yeah, each has its pros and cons. But yes, as far as ZFS goes, I think there is no difference other than occasional version mismatches, which are usually inconsequential.