itimpi (Moderator) - 20,789 posts, 57 days won
Everything posted by itimpi

  1. Quickest might be to get a cable tester. I would think you would only need the cheap sort, something like £7-£10 from Amazon. Of course that only tests the cabling - I do not think it can detect a bad port on one of the routers. The only other way I can think of is to take a portable device, start at the router to which Unraid is attached, and gradually work outwards, plugging it into each port in turn to see where the speed drops off.
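For the portable-device approach, one concrete way to measure each port is iperf3 (my suggestion, not mentioned in the post; it is commonly installed on Unraid via a plugin or container). The sketch below only prints the client commands you would run from the laptop at each port; the server IP is a placeholder.

```shell
# Sketch only: iperf3 and the IP below are assumptions, not from the post.
# On the Unraid server, start a throughput server once:
#   iperf3 -s
# Then from a laptop plugged into each router/switch port in turn, run the
# client command and compare results; a port reporting well below the others
# points at that cable or port.
SERVER_IP=192.168.1.10   # placeholder: replace with your Unraid server's IP
for port in 1 2 3 4; do
  echo "port $port: iperf3 -c $SERVER_IP -t 5"
done
```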
  2. It seems unlikely that both sets of RAM sticks have failed. At this point it feels more like a motherboard or CPU issue.
  3. You are likely to get better informed feedback if you attach your system’s diagnostics zip file to your next post in this thread. It is quite possible that the behavior is due to settings you have used on the Unraid end.
  4. It could. The mover will not overwrite existing files, so if you have duplicate files left over from previous actions you need to tidy them up manually and decide which copy to keep. You can use the Dynamix File Manager plugin to examine the drives and manually carry out the desired actions.
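To see which files are duplicated (and therefore blocking mover), you can compare the pool and array copies of a share. This sketch uses temporary directories to stand in for the real /mnt/cache/&lt;share&gt; and /mnt/disk1/&lt;share&gt; paths on an Unraid box; the share and file names are invented for illustration.

```shell
# Demo with temp dirs standing in for /mnt/cache/<share> and /mnt/disk1/<share>.
cache=$(mktemp -d)   # stand-in for the pool copy of the share
array=$(mktemp -d)   # stand-in for the array copy of the share
mkdir -p "$cache/movies" "$array/movies"
echo a > "$cache/movies/film.mkv"        # present in both trees: a duplicate
echo b > "$array/movies/film.mkv"
echo c > "$cache/movies/cache-only.mkv"  # only on the pool: mover can move it
# comm -12 prints paths present in BOTH sorted lists - the duplicates
# that mover will refuse to overwrite.
comm -12 \
  <(cd "$cache" && find . -type f | sort) \
  <(cd "$array" && find . -type f | sort)
rm -rf "$cache" "$array"
```

Here the output is `./movies/film.mkv`, the one file you would need to resolve by hand before mover can finish.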
  5. Shares such as ‘appdata’ and ‘system’ still have files on the array so that still needs resolving.
  6. You can also eliminate the fuse overhead by making the share an Exclusive share.
  7. Up to you. You can have up to 30 pools and each pool can have up to 60 devices. As long as the disk shelf is connected (typically via SAS or SATA) to the server then the drives just show up as normal.
  8. You should now run the repair without -n, and add -L since a mount has already failed by this point.
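As a sketch of that sequence (the device name is an assumption: on Unraid 6.12+ the array device for disk1 is typically /dev/md1p1, on older releases /dev/md1, and the array must be started in Maintenance mode), the repair commands are shown commented out so nothing destructive runs by accident:

```shell
DEV=/dev/md1p1   # assumed device for disk1 on Unraid 6.12+; older: /dev/md1

# Step 1 (the check already done at this point): read-only, changes nothing.
#   xfs_repair -n "$DEV"
# Step 2: the actual repair - drop -n. Because a mount attempt has already
# failed, add -L to zero the metadata log so the repair can proceed:
#   xfs_repair -L "$DEV"
echo "would run: xfs_repair -L $DEV"
```

Note that -L discards any pending metadata log entries, so a few recent changes can be lost; that is the trade-off for getting an unmountable filesystem repaired.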
  9. @JorgeB tends to be the expert on BTRFS related errors so probably worth waiting for him to chime in.
  10. You could run the New Permissions tool against that file ('newperms' from the command line). The permissions shown will not let it be visible across the network.
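What newperms does is roughly the following permission change (my approximation; the exact script on an Unraid system may differ, and it also runs a recursive `chown nobody:users`, omitted here because it needs root). Demonstrated safely on a temporary file rather than a real share:

```shell
f=$(mktemp)
chmod 700 "$f"   # simulate a file written with owner-only permissions,
                 # which is invisible to other users across the network
# Approximation of the newperms permission fix: strip execute from files,
# then copy the owner's read/write bits to group and other.
chmod u-x,go-rwx,go+u,ugo+X "$f"
stat -c '%a' "$f"   # prints 666: readable and writable by everyone
rm -f "$f"
```

Directories get 777 under the same rule because the trailing `ugo+X` re-adds execute (search) permission only where something is already executable or is a directory.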
  11. You should post your system's diagnostics zip file in your next post in this thread to get more informed feedback. It is always a good idea to post this if your question might involve us seeing how you have things set up or to look at recent logs.
  12. You should post your system's diagnostics zip file in your next post in this thread to get more informed feedback. It is always a good idea to post this if your question might involve us seeing how you have things set up or to look at recent logs.
  13. You should post your system's diagnostics zip file in your next post in this thread to get more informed feedback. It is always a good idea to post this if your question might involve us seeing how you have things set up or to look at recent logs. Take the diagnostics after you have tried to repeat the problem so we have an entry in the logs to see what happened.
  14. According to the syslog in the diagnostics you seem to have file system level corruption on disk1 and this will be what is stopping the shares from showing. You should check filesystem on this drive.
  15. Not sure what you mean? Nginx is built into Unraid as it runs the webGUI.
  16. Why do you expect anything to change with those settings? Mover ignores any shares which only have Primary Storage set.
  17. By connected I mean by something like SATA, SAS or USB. I do not think Unraid would have the drivers to handle a SAN, but if it did I suspect they would count as directly connected.
  18. I do not believe File Activity shows reads or writes to already open files. As to why it did not work previously I have no idea.
  19. The limit is the number of drives physically attached to the server running Unraid - not the maximum in the main Unraid array. To be honest, having only 2 parity drives protecting 28 data drives is getting a bit risky anyway. You can have many pools in addition, so the limit is then over 1000 drives (although whether a server could physically be built that could drive that many, I have no idea). We are also told that in a future release the current Unraid main array will become a pool type, so you can then also have multiples of that.
  20. If you start getting macvlan crashes they will eventually crash the whole server.
  21. Yes. I would think the most likely candidate is Nextcloud as that has files on disk2, but that is just a guess as it also has files on other drives.
  22. I suspect this is because a container is regularly accessing the drive. You could try stopping individual containers to see if that changes behavior.
  23. The color coding just looks for certain strings in the log entries. I am guessing that ‘checksum’ and ‘error’ are strings being looked for. The lines look like perfectly normal informational messages. I notice that you are using macvlan networking for Docker. If you experience instability you might want to consider switching to ipvlan as mentioned in the Release Notes.
  24. I think that is unlikely, as that has lots of dependencies and is quite likely to destabilise the system. There are also all the associated development tools. The question is why you need that: it is highly likely that the requirement could be met more safely by using a Docker or LXC container, or a VM.
  25. An unexpected reboot is nearly always hardware. The commonest cause would be inadequate power, with CPU overheating probably being next on the list.