Quickest might be to get a cable tester. I would think you would only need the cheap sort that cost something like £7-£10 from Amazon. Of course that only tests the cabling - I do not think it can detect a bad port on one of the routers.
The only other way I can think of is to take a portable device and gradually work away from the router to which Unraid is attached, plugging it into each port in turn to see where the speed drops off.
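If the portable device can run iperf3, that gives you a hard throughput number at each port rather than a subjective feel. This is a sketch that assumes iperf3 is installed on both machines (on Unraid it is commonly added via a plugin such as NerdTools) and that the IP address shown is replaced with your server's actual address:

```shell
# On the Unraid server, start iperf3 in server mode:
iperf3 -s

# On the portable device, plugged into the port under test
# (192.168.1.10 is a placeholder - use your server's real IP):
iperf3 -c 192.168.1.10 -t 10

# A healthy gigabit link typically reports around 900+ Mbits/sec.
# A big drop at a particular port points at that cable run or port.
```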
You are likely to get better informed feedback if you attach your system’s diagnostics zip file to your next post in this thread. It is quite possible that the behavior is due to settings you have used on the Unraid end.
It could. The mover will not overwrite existing files so if you have duplicate files from previous actions you need to tidy this up manually and decide which copy to keep. You can use the Dynamix File Manager plugin to examine the drives and manually carry out the desired actions.
Up to you. You can have up to 30 pools and each pool can have up to 60 devices. As long as the disk shelf is connected (typically via SAS or SATA) to the server then the drives just show up as normal.
You could run the New Permissions tool against that file ('newperms' from the command line). The permissions shown will not let it be visible across the network.
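If you prefer to see what is going on, this sketch shows the permissions that the New Permissions tool effectively applies (Unraid's standard share permissions). A temporary file is used here so the commands are safe to try; on the server you would point them at the real file under /mnt/user/... instead:

```shell
# Stand-in for the problem file - substitute the real path on Unraid.
FILE=$(mktemp)

# rw-rw-rw- so SMB clients can see and modify the file:
chmod 666 "$FILE"

# On Unraid you would also set the standard owner with:
#   chown nobody:users "$FILE"
# (needs root; the Unraid console runs as root anyway)

ls -l "$FILE"    # verify the new mode
```

Running newperms against the containing directory does the equivalent recursively, which is usually easier than fixing files one at a time.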
You should post your system's diagnostics zip file in your next post in this thread to get more informed feedback. It is always a good idea to post this if your question might involve us seeing how you have things set up or to look at recent logs.
You should post your system's diagnostics zip file in your next post in this thread to get more informed feedback. It is always a good idea to post this if your question might involve us seeing how you have things set up or to look at recent logs. Take the diagnostics after you have tried to repeat the problem so we have an entry in the logs to see what happened.
According to the syslog in the diagnostics you appear to have file system level corruption on disk1, and this will be what is stopping the shares from showing. You should run a file system check on this drive.
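As a rough outline of the usual procedure, assuming disk1 is formatted XFS (adjust the command for BTRFS or ReiserFS): stop the array, restart it in Maintenance mode from the webGUI, and run a read-only check first from the console. Note the use of the md device, which keeps parity in sync; the exact device name varies by Unraid release:

```shell
# Read-only check first (-n makes no changes). disk1 is /dev/md1 on
# older releases, /dev/md1p1 on newer ones - check Main in the webGUI.
xfs_repair -n /dev/md1

# If problems are reported, run the actual repair:
#   xfs_repair /dev/md1
```

This can also be driven from the webGUI by clicking the drive on the Main tab while in Maintenance mode and using the Check Filesystem Status section.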
By connected I mean by something like SATA, SAS or USB.
I do not think Unraid would have the drivers to handle a SAN, but if it did I suspect they would count as directly connected.
The limit is the number of drives physically attached to the server running Unraid - not the max in the main Unraid array.
To be honest, having only 2 parity drives protecting 28 data drives is getting a bit risky anyway.
You can have many pools in addition, so the theoretical limit is well over 1000 drives (although whether a server could physically be built to drive that many I have no idea). We are also told that in a future release the current Unraid main array will become a pool type, so you will then be able to have multiples of that as well.
we are expecting
The color coding just looks for certain strings in the log entries. I am guessing that ‘checksum’ and ‘error’ are among the strings being looked for. The lines themselves look like perfectly normal informational messages.
I notice that you are using macvlan networking for Docker. If you experience instability you might want to consider switching to ipvlan as mentioned in the Release Notes.
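To see what driver your custom Docker networks are currently using before switching, the standard Docker CLI can be queried from the console (the network name `br0` below is an assumption; use whatever yours is called):

```shell
# List all Docker networks and their drivers:
docker network ls

# Show the driver for a specific custom network (often 'br0' on Unraid):
docker network inspect br0 --format '{{.Driver}}'
```

The actual switch is made in the webGUI (Settings -> Docker, with the Docker service stopped) by changing the custom network type from macvlan to ipvlan.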
I think that is unlikely as that has lots of dependencies and is quite likely to destabilise the system. There are also all the associated development tools.
The question is why you need that. It is highly likely that the requirement could be met more safely by using a Docker or LXC container, or a VM.