Report Comments posted by itimpi
-
You would not have to make any changes when you migrate to 6.12.
The basic functionality around user shares has not changed - it is just the presentation that has changed in preparation for new features in future releases. Your existing settings will automatically adopt the new presentation when you upgrade.
You can largely ignore the bind-mount feature. It is just a performance optimisation that is automatically applied when the 6.12 release detects that all the files for a share are on the same pool.
A small downside to having an SSD-only array is that at the moment the TRIM operation is not supported, so with some brands of SSD this can lead to performance degradation over time. I think it is likely that this restriction will be removed in a future release, but as not all HBA cards support TRIM, some people will find this still applies to them.
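As a side note, whether a given device (and the controller in front of it) advertises TRIM support can be checked from the command line. This is a hedged sketch, not an official Unraid tool: Linux exposes discard support in sysfs, and a discard_granularity of 0 means TRIM is not available via that path. The device name is an example, and the second parameter exists only so the function can be exercised against a fake sysfs tree.

```shell
# Sketch: report whether a block device advertises TRIM (discard) support.
# A discard_granularity of 0 means the device, or the HBA it sits behind,
# does not support TRIM. The device name (e.g. sdb) is an example.
supports_trim() {
    local dev="$1" root="${2:-/sys/block}"   # root override is for testing only
    local f="$root/$dev/queue/discard_granularity"
    [ -r "$f" ] && [ "$(cat "$f")" -gt 0 ]
}

# Usage on a real server:
#   supports_trim sdb && echo "sdb: TRIM supported" || echo "sdb: no TRIM"
```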
-
Not sure if it is related to your error, but there are a lot of
May 6 00:21:58 vs-tower shfs: share cache full
in the syslog. You do not have a Minimum Free Space value set for your ‘cache’ pool - it should be set to more than the largest file you expect to write, as file systems getting too full can have unpredictable effects.
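As a rough illustration of why that setting matters, here is a hedged sketch (not Unraid's actual implementation) of a check that warns when a pool's free space falls below a minimum; the /mnt/cache path and 50 GiB figure are example values only.

```shell
# Sketch: warn when a mount point's free space drops below a minimum (in KB).
# /mnt/cache is the usual mount point for a pool named 'cache'; the threshold
# should be larger than the biggest file you expect to write.
check_min_free() {
    local path="$1" min_kb="$2"
    local avail_kb
    avail_kb=$(df -Pk "$path" | awk 'NR==2 {print $4}')
    if [ "$avail_kb" -lt "$min_kb" ]; then
        echo "WARNING: $path has ${avail_kb}KB free (minimum ${min_kb}KB)"
        return 1
    fi
    echo "OK: $path has ${avail_kb}KB free"
}

# Example: check_min_free /mnt/cache $((50 * 1024 * 1024))   # 50 GiB in KB
```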
There are also a lot of lines of the form:
May 6 03:05:19 vs-tower kernel: CIFS: __readahead_batch() returned 870/1024
Not sure what they mean, but they are not normal.
Neither of these, though, explains why corruption should be detected when you run a scrub, since the pool is formatted as btrfs and should therefore automatically have checksums associated with it to check integrity.
-
4 hours ago, isvein said:
But what I still dont get is what it does in practice. I think it has something to do how the share gets mounted, but that is as far as I get
Think of Exclusive as being equivalent to what used to be Use Cache=Only, but with a new performance optimisation that allows access to bypass the overhead of the FUSE layer that implements User Shares for the other modes.
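A hedged way to see the difference in practice (this is an illustration, not official documentation, and "Media" is a hypothetical share name) is to ask which filesystem type backs a path: a FUSE type typically means the access is going through shfs, while the pool's native type means direct (exclusive) access.

```shell
# Sketch: report the filesystem type backing a path. On Unraid, /mnt/user
# paths served by shfs typically report a fuse type, whereas an exclusive
# share (or a direct pool path) reports the pool's own filesystem.
fs_type() {
    stat -f -c %T "$1"
}

# On a real 6.12 server (share name is hypothetical):
#   fs_type /mnt/user/Media    # a fuse type => going through shfs
#   fs_type /mnt/cache/Media   # the pool's native type, e.g. btrfs/xfs/zfs
```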
-
1 minute ago, Zonediver said:
Have you seen this "RED MARKED" Text on the right side?
You would be surprised how many people have never even noticed that text
-
33 minutes ago, bonienl said:
or as a percentage value.
Is that a percentage of the size of any drive (which would be ideal) so not all drives in a share have the same value or something else?
Guess I could try that out and see
-
16 minutes ago, jaimbo said:
Is there a current estimate of when we might see that be availably publically (either full release or RC?)
Limetech never gives estimates other than 'when it is ready'. Since 6.12 is not yet released as a stable release I would expect we are talking about quite a few months at best.
-
21 minutes ago, CallOneTech said:
Also, the prospect of making EVERYTHING lowercase makes my soul hurt.
It does NOT do this. It just means that case is preserved rather than all filenames being treated as case-independent within Samba.
-
@CallOneTech have you tried switching to using case sensitive file names? This made a big difference for me in folders with lots of files in them.
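For anyone wanting to see what that amounts to at the Samba level, these are the standard smb.conf options involved (the share name below is a hypothetical example; on Unraid the per-share 'Case-sensitive names' setting in the GUI is the normal way to control this rather than editing config by hand):

```ini
[MyShare]
    case sensitive = yes
    preserve case = yes
    short preserve case = yes
```

With case sensitive = yes, Samba skips the case-insensitive name lookups that get expensive in directories containing many files.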
-
The instructions here in the online documentation accessible via the ‘Manual’ link at the bottom of the GUI or the DOCS link at the top of each forum page will work, although as was mentioned with 6.12 it needs to be done on another machine.
-
1 hour ago, Bizquick said:
But currently I would like to see a little bit more like being able to make the main array a Raid z1 or z2 pool.
This is not going to happen in the 6.12 release.
-
5 minutes ago, binaryrefinery said:
@itimpi - thanks. I was referring to/quoted the inline documentation / help in the Web UI, rather than the online docs.
Documentation aside, it's a little non-intuitive for the free space setting on the share to affect the behavior of the cache. The share itself is nowhere near being full so I wouldn't have thought about the minimum free space as a fix. I'm also a bit curious to know if the split level would affect this too.
The pool (cache) also has a Minimum Free Space setting.
At the moment, if the setting on a share is higher than that on the cache pool, the share's setting takes precedence. I personally think this is wrong: only the setting on the cache pool should be taken into account when writing to the cache, and the setting on the share should apply when writing to the array drives.
-
Just a FYI the documentation says
Quote: Yes: Write new files to the cache as long as the free space on the cache is above the Minimum free space value. If the free space is below that then by-pass the cache and write the files directly to the main array.
When mover runs it will attempt to move files to the main array as long as they are not currently open. Which array drive will get the file is controlled by the combination of the Allocation method, Split level, and Minimum Free Space setting for the share.
which DOES mention the need to set the Minimum Free Space value. Maybe the Help text needs that extra bit added as well.
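The quoted behaviour boils down to a simple comparison. As a hedged sketch (an illustration only, not Unraid's actual code - whether the internal comparison is strict or inclusive is an assumption here):

```shell
# Sketch of the Use Cache=Yes decision the documentation describes:
# write to the cache while its free space stays above the Minimum free
# space value, otherwise bypass it and write directly to the array.
choose_write_target() {
    local cache_free_kb="$1" min_free_kb="$2"
    if [ "$cache_free_kb" -gt "$min_free_kb" ]; then
        echo "cache"
    else
        echo "array"
    fi
}
```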
-
18 minutes ago, hydkrash said:
But I didn't dawn on me that the ZFS folder would be deemed as the cache pool.
When you use the Use Cache setting = Only then you are not really using it as a cache.
Perhaps it would be clearer if the text were changed to something like "Use Cache/Pool", or maybe simply "Use Pool"? The current text dates from the days when only a single pool was possible and the option was whether to use it as a cache or not.
-
31 minutes ago, enJOyIT said:
then rebuild the parity (two drive parity)
If you have parity valid at the start then it will be maintained during the copies.
-
23 minutes ago, enJOyIT said:
Is it now possible to use zfs for an array drive?
Yes, if you are running Unraid 6.12-rc2.
24 minutes ago, enJOyIT said:
Can I have multiple filesystems mixed up within one array? Lets say... two disks with xfs and three disks with zfs?
Yes. Each disk is a self-contained file system and can be any of the types supported by Unraid.
24 minutes ago, enJOyIT said:
Is there any benefit for using zfs for an array drive instead of xfs?
I would think that the main benefit is data corruption being detected in real time. You get similar detection if using btrfs.
-
I see where you are coming from, but I was assuming that only the top part of the Release Notes would initially be visible in the dialog box? My concern is that the moment anyone has to click elsewhere to get the detail they need, they are less likely to do it - but I guess the alert could contain the link to the notes?
I did not get an Alert when I installed 6.12 rc1 - should I have? This would at least have given a feel for how it might work.
BTW: any idea about the other part - whether vfio ids change with the new kernel?
-
13 minutes ago, bonienl said:
I don't think showing the complete release notes upfront will bring anything.
It is like these disclaimer notices, people just click 'accept' and continue.
I guess we disagree on this. I think that users would at least read the initial stages of the release notes although they would probably not read the later sections on detailed fixes or package updates.
I wonder if it is worth running a poll on this to see what other people think would happen?
-
OK - I guess I never really looked at this feature in enough detail.
It would be nice if Unraid could somehow insert the text of the Release Notes into the dialog following that preamble, to avoid the user having to look elsewhere (or is this already planned?).
-
2 hours ago, bonienl said:
There is a "ALERT" system in place since Unraid version 6.11, which allows to display any warning or other information before upgrading. The user has to explicitly acknowledge this message before proceeding.
None of the Unraid releases have yet make use of this system.
Ps. This ALERT system can also be used for plugins.
I know about the alert system - I make use of this.
I still think displaying the release notes before allowing the OS upgrade to proceed is a good idea - I believe this is more than the Alert system can provide.
-
3 minutes ago, bonienl said:
This bug happens when temperatures are displayed in Fahrenheit, and will be fixed in the next release.
I guess that explains why many (most) people do not see it, as their temperatures are displayed in Centigrade.
-
Not sure why they are not displaying for you, but they display fine for me.
-
With only 2GB of RAM the only way to upgrade (or downgrade) is to use the manual method as documented here in the online documentation accessible via the ‘Manual’ link at the bottom of the GUI or the DOCS link at the top of each forum page.
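For anyone following the manual method, the key step after unpacking the release zip on another machine is copying the bz* files onto the flash drive. A hedged sketch with example paths (the zip name and mount point are illustrations only - follow the linked documentation for the exact steps):

```shell
# Sketch: copy the Unraid boot files (bzimage, bzroot, etc.) from an
# extracted release zip onto the USB flash drive. Paths are examples.
manual_upgrade_copy() {
    local extracted="$1" flash="$2"   # e.g. /tmp/unraid-extract and the mounted flash drive
    cp "$extracted"/bz* "$flash"/ && sync
}

# e.g. unzip unRAIDServer-6.12.0-x86_64.zip -d /tmp/unraid-extract
#      manual_upgrade_copy /tmp/unraid-extract /path/to/flash
```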
-
Have you tried rebooting to see if that corrects the issue?
I agree it should not be necessary, but thought it was worth checking since I believe that during normal startup Unraid tries to expand any file system to fill the whole drive as part of mounting the drives while bringing the array online. It would therefore be an interesting data point to know whether that works.
-
What version of Unraid are you using? There is a known issue in the 6.11.2 release with partitioning and formatting drives larger than 2TB, and this is corrected in the 6.11.3 release.
-
Posted in "Unraid OS version 6.12.0-rc6 available" (Prereleases):
What share have you set to hold the iso file (the normal default is ‘isos’)? What is the Use Cache setting for the share holding the iso files?
We may be able to give better informed feedback if you attach your system’s diagnostics zip file to your next post in this thread.