Report Comments posted by itimpi
-
-
6 hours ago, tjb_altf4 said:
I seem to have an issue where mover is not honoring excluded disks when it's set to Prefer, and is moving files from an excluded disk onto the cache.
I believe that the disk include/exclude settings are only applied when writing new files to a User Share and are ignored when reading files, which automatically includes all drives.
-
1 minute ago, djhunter67 said:
Are you saying this is not a 6.0.0-rc2 issue?
Yes, this is a hardware issue.
-
18 minutes ago, djhunter67 said:
That almost invariably means that the disk dropped offline, and then reconnected with a different /dev/sdX type identifier. Unraid is not hot plug aware and does not handle this. We would need the system diagnostics to confirm this. You might want to check the sata and power cabling to the drive.
-
It looks like you are running a trial license, and one of the requirements of the trial license is that you have an active internet connection before the array can be started.
-
There have been occasional reports of slow speeds, but the vast majority of people do not see this and get very fast speeds. This suggests it could be something in the network link between you and the Amazon servers that is limiting the speed.
-
Not sure how easy it is to simulate the physical unplug/plug of USB in software. Are you saying that stopping the USB daemon, waiting a short while for the USB to reset, and then restarting the USB daemon is not sufficient? That feels much simpler than trying to emulate a USB unplug/plug sequence.
-
The files are actually stored on the Amazon AWS servers, not on the Unraid web site so any speed issues are there.
-
Yes, you need to use copy/delete if you want the file to end up on a different drive. It is a quirk of the Linux implementation of ‘mv’: if source and target appear to be on the same mount point (/mnt/user), a rename is issued (which leaves the file on the same drive) rather than a copy/delete action.
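A quick way to see this rename behaviour for yourself (a minimal sketch using a throwaway temp directory; the /mnt paths in the comments are only illustrative of an Unraid layout):

```shell
# Within one filesystem, mv keeps the same inode number, i.e. the kernel
# performed a rename(2) and no data actually moved between drives.
tmp=$(mktemp -d)
echo data > "$tmp/file"
before=$(stat -c %i "$tmp/file")     # inode before the move
mv "$tmp/file" "$tmp/renamed"
after=$(stat -c %i "$tmp/renamed")   # inode after the move
[ "$before" = "$after" ] && echo "same inode: mv was a rename"
# To force the file onto a different drive you need an explicit copy/delete,
# e.g. (illustrative Unraid paths):
#   cp /mnt/disk1/share/file /mnt/disk2/share/ && rm /mnt/disk1/share/file
rm -rf "$tmp"
```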
-
This is intended behaviour. There are some use cases that exploit this behaviour.
-
6 hours ago, Struck said:
The problem seemed to be unmounting disk shares, of which i have none i think
It would have been unmounting disk drives (not shares), which is a standard step in stopping the array.
-
1 hour ago, yyc321 said:
Firefox appears to behave the same as safari and chrome on iOS.
I think that Apple mandates that all browsers on iOS use their rendering engine, so this is probably an issue (bug?) at that level.
-
2 hours ago, trurl said:
Even if you retain all you can still make any changes including changing slots so some default settings is probably the only thing that makes sense.
Not sure I agree here. Changing slots still does not necessarily mean you want the shares to be set up any differently.
Perhaps the best way forward would be to add a new check box to the New Config dialog specifying whether share settings should be left unchanged or reset to defaults? At least that would give visibility to the fact that share settings might be affected. Making the current behaviour the default would mean users only get the share settings retained if they explicitly ask for them.
Do you think this should explicitly be raised as a feature request?
-
I suspect that technically this is not a bug in that the New Config is working as designed.
Having said that, I agree it might make a lot of sense to leave all share settings unchanged when using New Config - especially if using the option to retain current disk assignments. I personally would find that more convenient than the current behaviour.
I do not like your second option as that would cause problems for users who have their shares exported and set to Public.
-
Do you have any docker containers that, if you look at the Docker tab and switch on the Advanced view, show as ‘healthy’? It has been noticed that such containers have been set up by their authors to write to the docker image every few seconds as part of a health check, and although the writes are small, the write amplification inherent in using BTRFS means this can add up. I believe an issue has been raised against docker in case this is a bug in the Docker engine rather than the container authors simply mis-using the health check capability.
In addition there are other options available in the Docker settings that can reduce this load such as using an XFS formatted image instead of BTRFS, or not using an image at all and storing directly into the file system. The array has to be stopped to see these options. Have you tried any of these?
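If the health check itself turns out to be the culprit, docker's standard run flags can also tame it. A sketch (the container/image names are placeholders, and whether disabling the check is acceptable depends on the container):

```shell
# Illustrative only: 'mycontainer' and 'someimage' are placeholders.
# Disable the image's built-in health check entirely:
docker run --no-healthcheck --name mycontainer someimage
# ...or keep it, but run it far less often than the image's default:
docker run --health-interval=30m --name mycontainer someimage
```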
-
9 hours ago, jdiggity81 said:
I take it that means with the work around you built in that the lsi 9200-16e will work until they make the corrections?
Strange that the 'e' model needs this whereas the 'i' model (which I have) does not. You would have thought they would be identical except for the connector being external rather than internal.
-
6 minutes ago, mgutt said:
Besides of that. Are these JSON files really part of an usual docker installation or is this a special Unraid thing? I wonder why only Unraid users are complaining those permanent writes. Ok... maybe it has the most transparent traffic monitoring
This is nothing to do with Unraid - it is internal to the docker container. It may have been more obvious to Unraid users because a docker image is used, and there was an issue in 6.8.3 and earlier that could cause significant write amplification when writing to that image stored on a SSD. If the docker container files were stored directly in the file system (as the latest Unraid betas allow) then this would probably be far less noticeable, particularly if using a HDD with a file system like XFS that has far less inherent write amplification than BTRFS does.
11 minutes ago, mgutt said:
If these writes are related to Docker only, we should open an issue here. Because only updating a timestamp (or nothing) inside a config file, does not really sound like it's working as it meant to be.
This could not do any harm although they may simply reply that the feature is being mis-used and the fix should come from the container maintainer.
-
It is not obvious what the best way to handle this is, as technically it is an issue with specific docker containers rather than an Unraid issue. There may be good reasons why the maintainer of a particular container has set this up, so over-riding it could be a bad idea. I wonder if the Fix Common Problems plugin could be enhanced to detect this and suggest the fix? Alternatively, make it a configuration option in the docker templates?
-
What do you have for the Disk Shares option under Settings->Global Share Settings? (If you had posted the system's diagnostics zip file I could have seen for myself.)
If you click on the Shares tab does it show the cache under the Disk Shares? If so you should be able to set the security level from there. However whether it should be showing up under the Disk Shares section by default becomes an interesting question.
-
4 minutes ago, jowi said:
Depends on how many dockers you have listed. If they don’t fit the screen, the interface gets stuck. It won’t scroll. Display font size also plays a role, the bigger the font, the less dockers you can list, and the gui gets stuck and wont scroll.
But then again, this is not specific for this version, its an issue for as long as there are dockers in unraid. It wont get fixed for some reason.
I have far more Dockers than fit on the screen and they scroll OK for me on my iPad.
I think the root cause has to be some sort of bug at the WebKit level, which can therefore affect all iOS/iPadOS browsers as Apple mandates they have to use WebKit for rendering. I would be interested to know if anyone using Safari on MacOS ever experiences such problems.
-
I would assume this means the 6.9.0 beta25 release.
-
12 minutes ago, MothyTim said:
It's only the docker page, everything else is fine! It's the same in Safari and Chrome.
As I said, it is working fine for me. I have had problems in the past but it is OK now. It may be relevant that I am using the iOS14 beta, so possibly a web engine (which is used by both those browsers) problem has been fixed.
-
I have no problem using the Unraid interface on my iPad.
-
30 minutes ago, Dephcon said:
That's very interesting. Say or example I have a share that's 'cache only' and i change which pool device i want it to use, unraid will move the data from one pool device to the other? That would be highly useful for me in my IO testing.
No, Unraid will not move the data to the new pool. It will just start using that pool for any new files belonging to the share. Note that for read purposes ALL drives are checked for files under a top-level folder named for the share (and thus logically belonging to the share). The files on the previous pool will therefore still be visible under that share even though all new files are going to the new pool.
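To illustrate with a throwaway directory standing in for /mnt (all names are made up): the share's read view is simply the union of every top-level folder with that name across the drives and pools, so files left on the old pool stay visible:

```shell
mnt=$(mktemp -d)                        # stand-in for /mnt (illustrative)
mkdir -p "$mnt/oldpool/media" "$mnt/newpool/media"
echo a > "$mnt/oldpool/media/old.mkv"   # written before the pool change
echo b > "$mnt/newpool/media/new.mkv"   # new file lands on the new pool
# The 'media' user share view is the union across all drives/pools:
share=$(cd "$mnt" && find . -mindepth 3 -type f | sort)
echo "$share"                           # both files appear under the share
rm -rf "$mnt"
```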
-
Actually, thinking about it, where can one now set the Minimum Free Space value for a cache pool? It used to be in the global settings. I thought the setting on the Shares tab applied to array disks?
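For anyone unfamiliar with the setting, the idea behind Minimum Free Space can be sketched like this (the target path and threshold are made-up illustrative values, not Unraid's implementation): before placing a new file, compare available space against the floor and overflow elsewhere if it is too low.

```shell
min_free_kib=1024                       # illustrative floor: 1 MiB
# Available space (in KiB) on the filesystem holding the target:
avail_kib=$(df --output=avail -k /tmp | tail -n 1 | tr -d ' ')
if [ "$avail_kib" -ge "$min_free_kib" ]; then
  echo "enough room: write the file here"
else
  echo "below the floor: pick another drive/pool"
fi
```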
Unraid OS version 6.9.0-rc1 available
-
in Prereleases
Posted
I must admit I cannot work out from your description what the problem you are trying to report actually is. I think you need to provide a series of steps that would allow other users to try to replicate your problem (and to confirm it is a genuine bug).