Marshalleq

Everything posted by Marshalleq

  1. Not in the current version. You must use a dummy USB stick for the array. That may or may not change depending on how ZFS is natively implemented.
  2. Hi all, I just noted this morning (as others seem to have previously) that I can no longer connect to Privoxy. I'm using Surfshark in custom mode, which I have been for some time (other VPN providers previously). Looking at the logs it seems the VPN is connecting, it gets an IP and I think it's doing its ping-google check OK. However, testing with Firefox set to proxy to 8118 it fails over and over. I did just try adding 8118 to the VPN input and output ports, but no dice. Actually I'm not sure what those are for - couldn't find any documentation. So I understand iptables protections have been added - can anyone link to any docs on what has changed? Because this isn't working for me, and if it's not solved I'll have to try a different container. Been a binhex fan for ages so keen to stick around if I can. Cheers. Marshalleq.
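     As a quick sanity check outside Firefox, something like the below is what I test with (a rough sketch - 192.168.1.10 is just a placeholder for the Unraid host's IP):

        # ask an external service for the IP it sees, going via the Privoxy port
        curl -x http://192.168.1.10:8118 https://ifconfig.io

     If that returns the VPN provider's IP rather than my WAN IP, the proxy side is working and the problem is elsewhere.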
  3. Hi all, is it just me or is the Docker Hub integration no longer working? It seems to just take me to the Docker Hub page, but doesn't offer any automated way to install a non-Unraid package like it used to. I assume I could adjust an existing package and change the URL or something, but that's not very nice. Specifically, when I click on the Apps page and search for z-push, I get no results. I used to be able to click 'click here to get more results from docker hub' and it would include results from there, within the Community Applications page. Now it instead just opens a window at Docker Hub, which does not allow me to install a package as far as I know. Thanks.
  4. Anyway, I think the addition of ZFS officially is very exciting and I am looking forward to seeing where it leads. Though I suspect, like everything else in Unraid, we're not going to get enterprise features in the GUI. But who knows, this might just start a journey, given that from a file system perspective this stuff (like send/receive for backups) is just built in. It's funny to think I jumped into this forum some time back with little knowledge and lots of questions, and now I have quite a full-on implementation and have completely got rid of Unraid's array. I had tried ZFS once on Proxmox and it was incredibly slow, which I can only assume was due to a poor default ARC setting or something. It goes to show the good job @steini84 has done here, and everyone else who has contributed. Also @ich777 I found to be super awesome to work with too - seeing that they're on board and Unraid is aligned with them, I think the future is rosy for ZFS.
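     For anyone wondering what I mean by send/receive being built in, a minimal sketch (pool, dataset and snapshot names here are just placeholders):

        # snapshot a dataset, then replicate it to a backup pool
        zfs snapshot tank/data@2023-06-01
        zfs send tank/data@2023-06-01 | zfs receive backup/data
        # later runs only need to send the changes since the previous snapshot
        zfs send -i tank/data@2023-06-01 tank/data@2023-06-02 | zfs receive backup/data

     Tools like znapzend just automate this same mechanism on a schedule.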
  5. From what I understand about the way the Unraid array works, I don't think it matters what's actually on the drives; the parity just calculates at the block level and that is that. I've run an Unraid array before that had a mix of btrfs and xfs within the single array. Also, I'd say there is zero chance that the Unraid folk are going to include ZFS for the people that want it and then tell those same people that they can't use their existing pools - that'd just be stupid. Where I think there will be some ambiguity is exactly what and how the GUI manages native ZFS pools. Whether Unraid actually makes a whole GUI for something that is essentially not their core (and particularly whether they do this right in the first version) remains to be seen. What will be nice is just a little native integration so that ZFS is not a second-class citizen for Unraid features like cache, docker images, disk status, thermal reporting and such. Heck, even a scheduled scrub option would be cool (something like the sketch below). That's my 2c anyway, I'll even take a simple 'we include the binary now' as a first step.
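     Until that exists in the GUI, a scheduled scrub is easy enough to do yourself, e.g. with the User Scripts plugin or cron. A rough sketch (pool name is a placeholder):

        #!/bin/bash
        # start a scrub of the pool; schedule this to run monthly
        zpool scrub tank

     Progress can then be checked with zpool status.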
  6. Great question - yes, it is possible to have features in a newer version of ZFS that don't work in an older version, so what you describe is possible. However, if you stick to stable versions I'd say there's little chance of this being an issue given the timeframes involved, and maybe someone on here can explain how to pin the plugin to a specific stable version. It's very unlikely that when Unraid DOES add ZFS it will be older than the current stable version, given their attentiveness to remaining current. And if that scenario were to occur, you can bet they're working with the makers of this plugin to cover all the scenarios. I think stick to stable and you'll be good. Someone else may chip in with something I haven't thought of though.
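     If you want to be extra careful, I believe newer OpenZFS releases (2.1 onwards, as far as I know) let you restrict a pool to a known feature set at creation time, so an older ZFS build can still import it. A hedged sketch - pool and device names are placeholders:

        # only enable features from the openzfs-2.0-linux compatibility set
        zpool create -o compatibility=openzfs-2.0-linux tank mirror sdb sdc
        # see which feature flags a pool currently has enabled
        zpool get all tank | grep feature@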
  7. I wonder why that doesn't exist already. I also wonder what happens when you have multiple pools, i.e. do you need multiple cache files? I guess I gotta do some googling. I have quite a large number of pools.
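     From what I can tell, multiple pools can all share the one cache file by default, and you can point a pool at it (or at a custom file) with the cachefile property. A sketch - pool name and path are examples:

        # record this pool in the standard cache file
        zpool set cachefile=/etc/zfs/zpool.cache tank
        # check where a pool's cache file currently points
        zpool get cachefile tank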
  8. I would expect to edit the exports file manually, and to make sure it persists across reboots. I'm trying to remember if ZFS has native NFS sharing built in like it does for SMB. If it does, I assume it will be the same, i.e. edit the existing sharing mechanism. I think the main point is that currently the sharing mechanisms built into the Unraid GUI do not work for ZFS; you've got to do it at the command line. Hope that helps. Marshalleq
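     For reference, both approaches look roughly like this (dataset paths and subnets are just placeholders):

        # plain NFS export: add to /etc/exports, then run: exportfs -ra
        /mnt/tank/media 192.168.1.0/24(rw,sync,no_subtree_check)

        # or ZFS's built-in NFS sharing, which re-shares automatically when the pool imports
        zfs set sharenfs="rw=@192.168.1.0/24" tank/media

     On Unraid the /etc/exports route would need to be re-applied after a reboot (e.g. from the go file or a user script), since the root filesystem lives in RAM.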
  9. Hi all, does anyone know why the zdb command does not work? Is this something that could be fixed? I fairly regularly find it would be useful to have. Thanks.
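     In case it's relevant: I believe zdb reads /etc/zfs/zpool.cache by default, and if that file doesn't exist (which I gather it may not on Unraid) it fails. The -e flag is supposed to make it work from the on-disk labels instead - a sketch, pool name is a placeholder:

        # examine the pool configuration without relying on the cache file
        zdb -e -C tank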
  10. I wonder if that will work for my Nextcloud docker. Hmmm
  11. Yep, understand. The Unraid downgrade was all about performance issues. Running RC1, my Chia container had huge performance issues; downgrading resolved that. I did notice the loop service at 100% also, and trying a docker image froze the system completely. So there's still something problematic about ZFS, docker and Unraid. Maybe it's the driver issue you mention.
  12. Aha, that makes sense! Thank you! I hadn't realised Unraid was actually using ZFS anywhere yet. I've downgraded from RC1, but left the docker folder option (created a new one though) - it didn't work for me last time, but so far the performance issues are solved - so I think the issues were with RC1, but it's too soon to tell obviously. Then the question will be, what is it about RC1 causing issues - argh....
  13. So it turns out these are definitely not snapshots - something is creating datasets. The mount points are all saying they're legacy. Apparently that normally means the mount point is managed via fstab, which of course these aren't. I'm guessing it's something odd with docker folder mode, so I'm going to go back to an image and try that.
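     If it helps anyone else, I believe Docker's zfs storage driver creates a dataset per image layer and sets them to legacy mount points (it handles the mounting itself), which would explain what I'm seeing. They can be listed with something like this (dataset path is an example):

        # show every dataset and snapshot under the docker dataset
        zfs list -r -t all -o name,mountpoint,used tank/docker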
  14. Hi all, recently I made two changes: 1) upgraded to RC1 of Unraid (which from memory has upgraded ZFS), and 2) changed from a docker image file with btrfs to a docker folder, ironically called docker image. I've been trying to fault-find some performance issues that have subsequently occurred, and I find a bunch of random snapshots have been taken of the docker image folder. There are no automated snapshots set for this folder and I'm wondering if anyone else has noticed anything similar? See screenshot. I'll probably just delete the dataset and its subfolders and create a new one to see if that fixes it, but just in case....
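     For the record, the clean-up I have in mind is just this (dataset name is a placeholder - and zfs destroy -r removes everything underneath it, so be careful where you point it):

        # remove the docker dataset and everything under it, then recreate it empty
        zfs destroy -r tank/docker
        zfs create tank/docker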
  15. Wow, looking forward to getting my hands on this! Fantastic idea, and nice job!
  16. Hi, has anyone got any working scripts etc. for backing up Teams (I guess OneDrive would do) to local? It's a not-so-well-known fact that Azure doesn't have any backups, other than to protect itself, so I was thinking to suck this down to a ZFS array so I can use znapzend across it like I do everything else. Seems like rclone could be a great solution. Thanks, Marshalleq
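     Something along these lines is what I had in mind (a sketch only - the remote name and paths are placeholders, and the remote itself would be set up first with rclone config):

        # mirror the OneDrive/SharePoint content down to a local ZFS dataset
        rclone sync onedrive: /mnt/tank/backups/onedrive --fast-list --transfers 8

     Then znapzend can snapshot and replicate that dataset like any other.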
  17. Yep, I'm aware of that. What I said still stands though. Its performance is disappointing. There ain't no way of getting around that. Obviously these manufacturers are banking on most people's use cases not mattering. But as soon as you want to do something serious, no dice.
  18. So yes, my original assessment stands then: its performance is abysmal. Why doesn't really matter - though I read that link and understand it's another drive that cheats with a fast cache at the start and slows down at the end. So great for short bursts but not much else. I was using a RAM disk too with those numbers above. Yes, I got these because of the 'advertised' speed and the advertised endurance. Normally I'd buy Intel, but the store had none. I'm fairly new to Chia, but am happy it's levelled off a bit, that's for sure. Those people that go and blow 100k on drives and are only in it for the money - they deserve to leave! I do have 2 faulty drives I need to replace which will have plots added, but that's it. I should add, I'm grateful for the link as now I understand it's definitely not me! Thanks. Marshalleq.
  19. Ah, so you have the same drives, that's interesting. I got mine due to low stock of Intel for some Chia plotting. Their performance is actually less than some much older Intel SATA-connected drives. And when I say less, I was putting the two FireCudas into a zero-parity stripe for performance vs four of the SATA drives in a stripe. Given the phenomenal advertised speed difference between NVMe and SATA, I was not expecting reduced performance. Mine are also the 520s in the 500GB flavour. However I don't have PCIe 4 and have them connected via a pass-through card in two PCIe x16 slots. My motherboard is an X399 with gazillions of PCIe lanes, so that's not a limiting factor. Even though I don't have PCIe 4 it should still be a very large performance increase as far as I know. But this is my first foray into NVMe - either way, the result was disappointing. EDIT - I should add that Chia plotting is one of the few use cases that exposes good/bad drive hardware and connectivity options. I'm not sure how much you know, but it writes about 200G of data and outputs a final 110-ish gig file. This process takes between 25 minutes and hours depending on your setup, and disk is the primary factor that slows everything down. I was managing about 28 minutes on the Intels and about 50 minutes on the FireCudas. I should also add (because I can see you asking me now) that a single Intel SSD drive outperformed the FireCudas too, coming in at about 33 minutes. The single Intel was a D3-S4510 and the 4 Intels in the stripe were DC S3520 240G M.2's. That should give you enough information to compare them and understand (or maybe make some other suggestion) as to why the FireCudas were so much slower. On paper, I don't think they should have been.
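     If anyone wants to compare raw sustained-write behaviour outside of Chia, a rough fio run along these lines is what I'd use as a loose stand-in (plotting's real I/O mix is messier; the directory is a placeholder and it writes a 100G test file, so point it at a scratch area):

        # sustained large-block sequential write, roughly the kind of load plotting sustains
        fio --name=plotsim --directory=/mnt/nvme_scratch --size=100G \
            --rw=write --bs=1M --ioengine=libaio --iodepth=32 --direct=1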
  20. @TheSkaz I'd tend to start from the angle @glennv has alluded to, i.e. hardware. The only place I've seen ZFS struggle is when it's externally mounted on USB. I think there is a bug logged for that, but I'm not sure as to its status. As you can imagine, it needs consistent communication with the devices to work properly, so the hardware not keeping up or being saturated somehow is my first thought. I've also been surprised at just how rubbish my NVMe Seagate FireCuda drives are compared to the Intel SSDs I have. So either that could confirm your theory, or it shows the potential variability in hardware. I would be interested in what you find either way.
  21. Hi there, I'm getting 'Execution Error, Bad Data', and really it's quite a simple container, isn't it? Anything obvious I'm missing? Thanks.
  22. If that happens now, that's great - it didn't always. I so seldom reboot my prod box now that I have no idea. It was a really good thing having a dev and a prod - it definitely improved the uptime in the house - I can't stop fiddling sometimes!
  23. As far as I know, a DMZ is actually not meant to be a forward-everything thing; it just happens to be implemented that way on the cheap routers you'd get from an ISP. So the advice is sound for that segment. If however you had a proper firewall, like OPNsense/pfSense and many others, putting something in the DMZ doesn't automatically forward all ports there. It's just meant to be a segment that protects your internal network by limiting where DMZ hosts can connect inwards, and likewise limiting what the public can reach. These days networks are so complicated that the DMZ branding has, I assume, mostly gone out the window, but the concept continues to be used, and these cheap routers keep it as a free-for-all to get things going when people don't fully understand what they're doing. That's my 2c anyway - just wanted to throw a bit of education in along with the 'don't do it' statement.
  24. Hi, yeah I did try that - and I just did it again for good measure with the same result. (BTW I've joined the Discord group now too, thanks.) Anyway, I found the culprit - I had the below in my config; note the first # which needed to be removed, and now it starts. Though I'm not convinced that's right yet - should there be something after keep_alive_monitor? Like enabled or true?

        #keep_alive_monitor:
          enable_remote_ping: true
          ping_url: 'www.domain.com'

      Also, do you have any idea if the below rpc.server traceback is a problem or just related to bad hosts? It's in my logs a few times, but I suspect it's happening each time I restart. I'm not sure if it's Chia-related or Machinaris.

        2021-09-05T13:20:10.323 full_node chia.full_node.full_node: INFO peer disconnected {'host': '73.254.242.75', 'port': 8444}
        2021-09-05T13:20:11.576 wallet chia.rpc.rpc_server : WARNING Error while handling message: Traceback (most recent call last):
          File "/chia-blockchain/chia/rpc/rpc_server.py", line 83, in inner
            res_object = await f(request_data)
          File "/chia-blockchain/chia/rpc/wallet_rpc_api.py", line 1138, in get_farmed_amount
            tx_records: List[TransactionRecord] = await self.service.wallet_state_manager.tx_store.get_farming_rewards()
        AttributeError: 'NoneType' object has no attribute 'tx_store'
        2021-09-05T13:20:12.487 full_node full_node_server : INFO Connected with full_node {'host': '176.241.136.51', 'port': 8444}
        2021-09-05T13:20:12.491 full_node full_node_server : INFO Connection closed: 176.241.136.51, node id: fd30bde4674f602d160b235ade905862112390d8a704d658c50c544318e234fa
        2021-09-05T13:20:12.492 full_node chia.full_node.full_node: INFO peer disconnected {'host': '176.241.136.51', 'port': 8444}

      Thanks.
  25. Hi all, just switched over from the standard Chia docker container so that I could get alerting. I was wondering, should the alerting be saying it is stopped? It's been running for a few hours now - but just thought I'd check, thanks.