Report Comments posted by -Daedalus
-
From the console, running "diagnostics" will create a ZIP in /logs on the boot USB.
You can also do it from the GUI, somewhere in the Tools menu, if memory serves.
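For reference, a minimal console sketch of the above (the ZIP filename pattern and "tower" hostname are assumptions; yours will differ):

```shell
# Run from the Unraid console or an SSH session:
diagnostics        # collects logs and writes a ZIP
ls /boot/logs/     # /logs on the boot USB, e.g. tower-diagnostics-<date>.zip
```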
-
Just spit-balling here, but I seem to remember an issue with Samsung drives (mostly 850s at the time); something to do with a non-standard starting sector for the partition.
I don't suppose anyone with this issue is using non-Samsung disks?
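If anyone wants to check their own drives, the question boils down to whether the partition's first sector lands on a 1 MiB boundary. A quick sketch (sector numbers are illustrative; compare against `fdisk -l` output for your own disk):

```python
def starts_on_mib_boundary(start_sector: int, sector_size: int = 512) -> bool:
    """True if a partition starting at start_sector is 1 MiB-aligned."""
    return (start_sector * sector_size) % (1024 * 1024) == 0

print(starts_on_mib_boundary(2048))  # common modern default -> True
print(starts_on_mib_boundary(63))    # legacy CHS-style start -> False
```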
-
3 minutes ago, S1dney said:
I think Johnnie is very closely affiliated with Limetech, so I'm sure this is coming first-hand. Remember that the fact that they're not actively responding doesn't mean they're not working on it.
Although I do somewhat agree with you: not hearing anything tends to let users believe that no work is being done. It might be a quick win for the devs to give some info once in a while. I've read some ideas somewhere on the forum about creating some kind of blog (or a newsletter by mail, or something) about "what we're doing", which sounded appealing to me.
Nevertheless, you have to acknowledge that they do a lot, and we all profit from it.
Keep up the good work guys 🍻
Absolutely. I in no way meant for that to come across as complaining (it may have, I apologise), more "passionate suggestion", shall we say. If anyone from the dev team ever decides to visit Ireland, I'll happily buy them a round. 🍻
-
5 hours ago, johnnie.black said:
Last unofficial information I have is that this issue is hopefully fixed and the fix will be in v6.9-rc1, which is expected soon™. No idea if a patch/update for v6.8 will be available, but I very much doubt it, as LT doesn't usually patch older releases once a newer one is in active development; maybe if it were a critical security issue.
Thanks for the info, and for that work-around. I'm already back to an XFS cache, and spent a couple of days setting up backups and the like, so I'm not really bothered about moving back to BTRFS at this point, but it's wonderful if this works for more people.
However, we shouldn't have to hear this from you. I'm sure Limetech are working on this, I'm sure there's some sort of fix coming at some point, but radio silence for something this severe really shouldn't be the norm, especially if Limetech is shooting for a more official, polished vibe, as a company.
Even something simple, like:
Quote
Hi everyone. We realise this is an issue for a lot of you, and we're very sorry it's happened. We're not entirely sure where the bug originated at this point, but rest assured we are working on a fix for the next release. This should enter the RC phase within the next few weeks, though obviously this is subject to change. We won't be porting the fix to 6.8.x, as too much has changed already. We advise anyone affected by this to switch to XFS and take appropriate measures regarding backups, or at the very least to be aware of the increased SSD wear due to the extra writes and plan accordingly.
This actually tells us very little, other than not to expect a patch for 6.8, and that the release is only "soon", but at least it's something reassuring. I'm usually not the guy to advocate being seen to be doing something, rather than actually just doing the thing, but in this case I think a little more communication would have been warranted.
-
Cheers, figured as much.
I'm starting to copy my cache over to convert it to XFS. The writes have got to the point where they're saturating my SSDs' write buffers, causing massive performance issues for anything on the cache.
I'll be honest: I'll have a hard time going back to BTRFS after this. I think it'll be XFS and an hourly rsync or something until such a time as ZFS (hopefully) arrives to replace it.
Edit:
Moved from an unencrypted RAID1 pool (1TB + 2x500GB 850 Evos) to a single unencrypted 1TB drive, and writes to loop2 have gone from over 100GB in an hour to just over 100MB. All my containers and VMs are performing as expected now that the SSDs aren't choking on writes.
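For anyone considering the same fallback, a minimal sketch of the hourly rsync idea (share and backup paths are examples, not Unraid defaults; run it from cron or the User Scripts plugin):

```shell
# Mirror the cache's appdata to an array disk once an hour.
# -a: archive mode, -H: preserve hard links, --delete: keep the mirror exact.
rsync -aH --delete /mnt/cache/appdata/ /mnt/disk1/backups/appdata/
```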
-
Out of curiosity, has anyone seen this behaviour on 6.9b1?
-
It's write amplification, which means any container that was writing a little will still (relatively speaking) be writing a little, and any container that was writing a lot will still be writing a lot. It's not the fault of any one container, but of Docker (or something else) itself.
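To put a number on what people are measuring here, the amplification factor is just device-level writes (what iotop attributes to loop2) divided by the logical writes the containers actually issue. The figures below are made up to mirror the earlier 100GB-per-hour observation:

```python
def write_amplification(device_bytes: float, logical_bytes: float) -> float:
    """Amplification factor; 1.0 means the filesystem adds no write overhead."""
    return device_bytes / logical_bytes

# ~100 GB hitting the pool for ~100 MB of actual container writes:
print(write_amplification(100e9, 100e6))  # -> 1000.0
```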
-
One interesting quirk I've noticed here (also running BTRFS cache pool, encrypted, on 6.8.3):
I leave "iotop -ao" open, and after a minute or so, I have maybe 30MB or so written.
I stop a Docker container, and I have 120-150MB written.
I start it, and it jumps another 100-150MB.
I start and stop a VM, and this doesn't happen.
I've no idea if this is expected behaviour, if it means anything, or if it helps at all, but I thought I'd mention it.
-
Just for giggles: Is it worth making a disk share, and backing up to that, just to see if there's a difference?
-
Perfect!
If needed I can raise a specific feature request for it, but I think it would be something useful to have in the help text under vdisk when creating a VM.
-
Thanks very much for the explanation. I didn't realise unRAID was smart enough to de-reference files like that. I actually just changed all my vdisk paths to /mnt/cache rather than /mnt/user because of some reports of FUSE slowing certain things down.
So to be clear: When creating vdisks in a cache-only directory, we can use /mnt/user, and the vdisks will be mounted under /mnt/cache?
-
Fair enough. I never explicitly type 'exit' anyway; just pointing it out in case I was an edge case or something.
-
I've seen the same thing on 6.7.2. I was hoping it got fixed in 6.8.
In my case, I have VMs with vdisks on user shares and on an unassigned drive. I have the vdisks manually mapped, and each time I edit a VM from the GUI, the vdisk location is back on 'auto'. Once I select 'manual' the path populates correctly, but it's a little tiresome to have to do it each time.
-
On 9/21/2019 at 4:23 PM, limetech said:
Unraid OS 6.8 uses FUSE 3.6.2 however I don't think FUSE has anything to do with this issue.
I get the feeling you might be ditching 6.7.2, and moving straight to 6.8, given all the changes you've talked about are for the latter and not the former.
Probably not something you like answering, but are we days/weeks/months away from RCs starting for 6.8?
-
On 9/5/2019 at 8:50 AM, TheBuz said:
+1
+2
-
I'm assuming this database corruption presents as a borked container? Because if that's the case, I'm on 6.7.2, with a cache pool, and zero issues. I've never experienced this (at least, so far as I can tell).
-
3 hours ago, bonienl said:
By popular demand, I changed it back to blue ...
(I'm not trying to be awkward here, I swear, but:) wouldn't it make more sense, given the new theme, to change the colour to orange, or similar? Seems that's the accent colour you're going for. I think the complaint wasn't that it wasn't blue, but simply that it wasn't a different colour to the "off" state.
-
Would a kind soul care to post a screenshot of the new dashboard for those of us not running RC versions?
Edit: Never mind, someone on Reddit posted some. Looks really nice!
Off the bat, the only thing that sticks out to me is that the top-left tile (server description) seems to take up a lot of space, and the information is largely duplicated on the right side of the banner. Might be worth thinking about condensing/removing some of this to free up space for the main items of interest.
-
Silly one, but since you quote 50% and 25% reductions in benchmark scores, have you looked to see what CCXs your cores are mapped to? If they're on one of the secondary CCXs (that don't have direct access to the IMC) then that might be part of the issue.
(Unless of course you get much better scores changing nothing but the unRAID version back to 6.5.3. If that's the case, then feel free to ignore the above.)
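A quick way to eyeball the mapping, assuming a Linux shell on the host or guest (on Zen parts, the last field of lscpu's CACHE column groups CPUs by shared L3, which roughly corresponds to the CCX):

```shell
# Extended CPU view: which logical CPUs share a core, node, and L3 cache.
lscpu -e=CPU,CORE,SOCKET,NODE,CACHE
```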
-
Agreed.
QLGE Driver missing in 6.9.0-BETA22
-
Posted in Prereleases
Not to hijack, but as someone thinking about moving to 10GbE, is there a go-to recommendation for a no-fuss RJ45 card? The Intel ones seem to jump between in-tree and out-of-tree drivers a bit.