Report Comments posted by 1812
-
1 hour ago, limetech said:
Samba has an absolute max of 65535.
Then I'm not sure why this is currently working. I mean, I know what value I entered on purpose for max open files (just to see what would happen), but it seems like I'm chugging along beyond it. Maybe it'll all come to a screeching halt soon?
¯\_(ツ)_/¯
-
On 11/10/2022 at 6:28 AM, dlandon said:
A fix will be in a future release so you don’t have to add this to the smb-extra.conf.
Just a heads up: a Mac photo library of any substantial size (like 175GB, for example) blows past the 40964 open file limit when transferring to the server, and the limit has to be increased way beyond that. [Learned from experience and several failures today before finding this thread and increasing to a ludicrous number to try and get this moved over.] Hopefully it will be more easily user-adjustable in the future release.
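For anyone landing here before that release: the workaround mentioned above is a Samba override in smb-extra.conf. A minimal sketch of what that fragment might look like (the value shown is illustrative, not a recommendation; per limetech above, Samba's hard ceiling for this setting is 65535):

```ini
# Illustrative smb-extra.conf fragment -- raises the per-connection
# open-file limit for all shares. 65535 is Samba's absolute maximum.
[global]
   max open files = 65535
```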
-
Also experiencing the same behavior
-
I DIDN'T BREAK YOUR STUFF AND YOU HAVE NO PROOF.
-
HP ML350p: kernel panic after boot. Booting into safe mode with no plugins allows it to boot. Not sure which plugin was causing the issue, as I manually nuked them all and only reinstalled a few that I know I use regularly. I know this isn't that helpful, but reporting nonetheless.
-
14 hours ago, jonathanm said:
Soon™
Seriously, there is no official timeline, asking isn't going to make one appear. When it's done it will be released, no sooner, no later.
But if it was later, how could we really know?
-
1 hour ago, TexasUnraid said:
I am thinking I will upgrade my server to beta25 in the coming days, the multiple cache pools is just too enticing to wait lol.
This server is active, so before I do 2 questions:
1: Any outstanding issues I should be aware of?
2: Any planned changes in the RC / Final 6.9 release that would be better off waiting to upgrade until then? (don't want to have to reformat the cache twice for example)
The beta is only advised to run on test servers.
To answer your questions:
1. Read the entire thread to learn about potential issues.
2. Read the entire thread to learn about some potential changes, and then wait and see what they are when the RC and final come out.
-
bump ¯\_(ツ)_/¯
-
13 minutes ago, johnnie.black said:
It's a known issue; I already made a request for this to be corrected. For now you need to subtract the parity size from the total free space displayed. It's why I'm still using UD for my pools, since I have multiple raid 5 pools, most with different disk sizes, and it's not practical to always be doing mental calculations.
That's another known issue, I also reported the same before.
It appears I'm just late to the party on everything today… ¯\_(ツ)_/¯
-
I'm not sure raid5 in pools is working properly. I gave it the following disk sizes: 4,4,4,3,4,3,4 = 26 TB. Ran a raid 5 balance, which shows:
Data, RAID5: total=6.00GiB, used=1.75MiB
System, RAID1: total=32.00MiB, used=16.00KiB
Metadata, RAID1: total=1.00GiB, used=112.00KiB
GlobalReserve, single: total=3.25MiB, used=0.00B
But the web gui on the main tab shows 26TB free…
raid 0, 1, and 10 all end up with usable space as expected.
---
edit
Also, clicking spin down underneath the pool doesn't seem to work. This new pool has nothing using it. Same issue with another single-disk pool: no spin down, even when using the spin down button at the bottom of the web gui.
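For a sanity check on the 26TB figure: a single-parity (raid5-style) layout should lose roughly one disk's worth of capacity, so free space should never show the full raw total. A rough back-of-the-envelope estimate, as a sketch (this is a hypothetical helper, not Unraid or btrfs code; btrfs raid5 with mixed disk sizes allocates chunk by chunk, so real numbers can differ):

```python
def raid5_usable_tb(disk_sizes_tb):
    """Rough single-parity capacity estimate: raw total minus one
    disk's worth of parity (bounded by the largest disk)."""
    return sum(disk_sizes_tb) - max(disk_sizes_tb)

# The pool above: 4,4,4,3,4,3,4 TB = 26 TB raw
print(raid5_usable_tb([4, 4, 4, 3, 4, 3, 4]))  # 22
```

So something around 22 TB usable would be expected here, not the 26 TB the GUI reports.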
-
So, I broke it.
I attempted to add 2 pools (I had previously added 2 simple single- and double-device pools and removed them with no issue). I made a new pool called backup with mixed file formats on the disks, assuming it would format them. The server hung on mounting disks.
Diags attached: server-diagnostics-20200627-1509.zip
---
edit
I was able to get it to shut down via the webgui. Upon reboot I was greeted with the option to format the disks in the pool.
-
Just now, johnnie.black said:
Yes.
Every pool is an independent filesystem, and xfs "pools" can only be single device, if a share exists on different pools data will be merged together by shfs when accessing that share.
So, to make sure I understand: if I want to use 3 disks using xfs, I'll need to create 3 xfs pools, each with a single disk, and then create a share that specifies those pools, and Unraid will do the rest, correct?
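The merging described above can be pictured as a union of same-named top-level folders across the pools. A toy sketch of the idea (not Unraid's actual shfs implementation, which is a FUSE filesystem; the pool paths in the usage note are hypothetical):

```python
from pathlib import Path

def merged_listing(share: str, pool_roots: list) -> list:
    """Merged, de-duplicated file names for one share across pools.

    Each pool that contains a top-level folder named `share`
    contributes its entries to a single combined view.
    """
    names = set()
    for root in pool_roots:
        d = Path(root) / share
        if d.is_dir():
            names.update(p.name for p in d.iterdir())
    return sorted(names)
```

Usage would look something like `merged_listing("backups", ["/mnt/pool1", "/mnt/pool2", "/mnt/pool3"])`: three single-disk xfs pools, one merged share view.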
-
Maybe I missed this, but can different cache pools be mixed formats, as in main cache btrfs, second pool xfs? And if so, when pooled with xfs, does it span data or stripe? Looking at using a pool for a backup copy of data on a few drives, with the accessibility xfs provides for recovery.
thanks
-
WHOA BUNDY!!
-
Changed Status to Solved
-
Changed the computer out from a z400 to a z440 and the problem persists, even after upgrading to 6.8.0 stable. The problem does not occur on the other 2 servers in my house, including a z420.
-
3 hours ago, limetech said:
"hey you said multi-pool support would be in 6.9" but that is unavoidable.
Hey!.... I can wait. You all are awesome to the max.
-
Update: I can't duplicate this today and VMs operate as expected. Maybe my server was just having a massive brain fart or something....
-
13 minutes ago, limetech said:
We're actively working on this for 6.9
-
Upgraded a z400 without issue, except for needing to remove some pcie specifications from vm xml that were used as a workaround for something that is now patched.
Pleasantly surprised by a faster web GUI and Safari support for launching webgui windows for dockers.
but still no multiple cache pools.....
SUPER SAD FACE
(but thanks for everything else!)
-
8 hours ago, eschultz said:
We'll include QEMU 4.x in Unraid 6.8
along with multiple cache pools, right???!!!?!!? 🙏
-
updated from rc5, no issues found
-
SMB Shares crashed and no longer in network after 6.11.2 update (in Stable Releases)
And I broke the open files plugin by running it out of memory.