itimpi · Moderators · Posts: 20,707 · Days Won: 56
Everything posted by itimpi

  1. In principle the parity check is not required, although some people like to carry it out as a confidence check! If you do run it there should be 0 errors reported.
  2. It is probably because XFS has relatively recently been upgraded to support more features going forward.
  3. You might want to check that none of them are 0 bytes in size? Typically most of them are about 1K.
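If it helps, this check can be done from the command line. The path below is an assumption on my part (/boot/config/ssh is where Unraid normally keeps these files on the flash), so adjust it if yours differ:

```shell
# List any zero-length files in the folder; a healthy key file is ~1K,
# so anything printed here is a candidate for the problem.
dir="/boot/config/ssh"
if [ -d "$dir" ]; then
    find "$dir" -maxdepth 1 -type f -size 0 -print
fi
```

No output means none of the files are empty.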
  4. Hmm. What about looking at the config/ssh folder on the flash to check that its contents look OK?
  5. What happens if you try to SSH in without the plugin installed? That works fine for me using PuTTY, as SSH is enabled by default. It is always possible that the plugin is having some issue with the 6.8 release?
  6. The commands should be the same in both cases. Are all commands failing? If not, give some concrete examples.
  7. Not sure what is going wrong, but have you watched the SpaceInvaderOne videos on installing Windows into a VM on Unraid? One of them might give you a clue as to what you did that is different from the process he uses in his videos.
  8. It is probably safe to do so, as the parity build has gotten past the 4TB point, so even if a 4TB drive now failed you would have sufficient parity to rebuild it correctly. The array will work correctly even though the parity sync is still running. However, note that any write operations will run much slower than normal, as the parity drives have to continually move their heads between the locations on the data drives being written with new files and the point the parity sync has reached. The sync will also run much slower while new files are being written to the array. Reading should work at normal speed, as the parity drives are not involved in typical read operations.
  9. It could be that the docker container is not set up to create files with the correct permissions to allow network access to function as expected. If you run Tools->New Permissions on the share containing the file, it will reset permissions to what Unraid expects. The long-term solution is to get the docker container to create correct permissions in the first place! What docker container was this?
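For reference, my understanding is that the New Permissions tool applies roughly the following (a sketch of the newperms script's effect, not the exact script; the share path is a placeholder):

```shell
# Reset a share to the ownership and permissions Unraid expects:
# owner nobody:users, directories 777, files 666.
share="/mnt/user/ShareName"   # placeholder -- substitute the affected share
if [ -d "$share" ]; then
    chown -R nobody:users "$share"
    chmod -R u-x,go-rwx,go+u,ugo+X "$share"
fi
```

The symbolic mode copies the user bits to group/other and then adds execute only to directories (the capital X), which is how files end up 666 and directories 777.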
  10. Your router should be able to show you the IP addresses assigned to the various clients. Alternatively, applications like Fing can do it on iOS and Android by scanning the network.
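If you have console access to the server, a rough alternative is to ask the kernel which LAN addresses it has recently exchanged traffic with:

```shell
# Show the neighbour (ARP) table: IP and MAC pairs the server has seen.
# Only devices that have recently talked to the server will appear.
if command -v ip >/dev/null 2>&1; then
    ip neigh show
fi
```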
  11. It is quite normal for array operations to drop in speed as they reach the inner tracks of the drives, where the raw transfer speed is lower.
  12. The WireGuard service and drivers are now built into Unraid at release 6.8. What is still a plugin (so it can be easily tweaked in the short term) is the GUI component for configuring the service. You can in theory still use the built-in service by configuring it by hand but I suspect that this is not something that most users would want to contemplate.
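For anyone determined to try the hand-configuration route, it follows the standard WireGuard pattern of a wg0.conf file brought up with wg-quick. A minimal sketch (all keys, addresses and ports below are placeholders, not Unraid defaults):

```ini
[Interface]
PrivateKey = <server-private-key>
Address = 10.253.0.1/24
ListenPort = 51820

[Peer]
PublicKey = <client-public-key>
AllowedIPs = 10.253.0.2/32
```

With that in place, `wg-quick up wg0` starts the tunnel and `wg show` reports its state. The plugin GUI does all of this for you, which is why most users will want to wait for it.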
  13. That is assuming that users are not running old versions of devices/software that require the older variants of TLS? I would think there will be a difference between servers requiring the new version and what the clients support?
  14. While I agree with the sentiment I would think that the first step would be to make it optional (with the default being disabled) so that the real-world impact (if any) on Unraid users can be determined?
  15. Especially since, if the temperature is getting above a critical level, the CPU may forcibly shut down the server, explaining your 'crashes'.
  16. That is the wrong question. It is the container that will be exposed, so you need to determine from the container developer how hardened the container is. To some extent you will be protected if you only give the container limited access to your server in the path mappings.
  17. Good to hear that it tends to work fine. I suspect that is because the rebuild process tends to access roughly the same sector on both disks at the same time, so no head movement gets involved? Have you ever tried rebuilding 2 disks at the same time that have significantly different performance characteristics, to see if this remains the case?
  18. You might as well take advantage of a feature that tends to only be found on server-class motherboards. When you think of it, servers hosted in data centres are a classic case where the administrators might be remote from the server, so it is very advantageous to be able to do remotely functions that would otherwise require you to physically go to the server and use an attached keyboard/monitor.
  19. With IPMI you run a second ethernet cable to the router, and the IPMI connection will have its own IP address. You can then use any client that can make an ethernet connection via the router (in my case via a WiFi connection from my iPhone/iPad). Looking up IPMI on Wikipedia may be a good place to start if Google is not giving pertinent results.
  20. The syslog server was a feature that first became available on the 6.7 series of Unraid releases (current stable release is 6.8) and your diagnostics show you are on a much older release (6.4.1).
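Once you are on a release with the syslog server, one quick way to confirm it is receiving messages is to send a test entry from any Linux machine using util-linux's logger (the address below is a placeholder for wherever your syslog server is listening):

```shell
# Send a one-off test message over UDP to a remote syslog server.
# Replace 192.168.1.10 with your syslog server's IP address.
logger --server 192.168.1.10 --port 514 --udp "Unraid syslog test"
```

The message should then show up in the log file the syslog server is writing.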
  21. That sounds promising then and sounds like there is little (if any) contention. I would be interested to hear how replacing the data disks goes. Normally the key driver for having dual parity is to handle the case of a second drive failing while recovering a failure of the first one. However in your particular case of replacing good drives you have the original (good) drive still intact so the risk should be minimal.
  22. IPMI is nothing to do with Linux; it is purely a motherboard feature. What it allows you to do is simulate a directly attached keyboard+monitor over a network link (as well as do other things). This is commonly used on servers that are running headless to avoid the need to physically access them. You could try googling IPMI for more detail. As to whether your Chromebook is capable of running an IPMI client I have no idea, but I would think there is a good chance it can. I know, for instance, that I could do so on both my iPhone and iPad.
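From any Linux command line, ipmitool is one common IPMI client; the BMC address and credentials below are placeholders for whatever is configured on the board's BMC:

```shell
# Query the power state of a remote server over IPMI.
# 192.168.1.50 / admin / secret are placeholders -- use your BMC's values.
if command -v ipmitool >/dev/null 2>&1; then
    ipmitool -I lanplus -H 192.168.1.50 -U admin -P secret chassis status
fi
```

Many boards also expose a web interface on the BMC's IP address, which may be the easier route from a Chromebook.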
  23. This is possible in theory (with the increased risk you mention) but you may find that 2 rebuilds running in parallel actually take longer than doing them sequentially due to disk contention between the two operations. It would all depend on whether that contention caused additional head movements on the drives or whether the fact they started at the same time means you get away with it. If not then doing them sequentially may end up being faster. Have you tried it on the parity drives yet? If they could both be replaced at the same time without adversely affecting the speed of the parity build then that might be a good indication of whether rebuilding two data drives at the same time would not end up being slower than doing them sequentially.
  24. Those messages mean that the first 3 trim operations worked, and whatever is mounted at /tmp/overlay (which is not a location that exists on a default Unraid install) could not be trimmed. Having said that, the figures for /etc/libvirt look a bit strange, as the libvirt.img file mounted at that location is normally only 1GB in size. Is yours different for some reason? Even the figure for the docker.img file is more than I would normally expect, unless you increased the size above the default of 20GB for some reason.
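To check the actual sizes, you can look at the image files directly. The paths below are what I believe to be the Unraid defaults, so adjust them if you have moved the system share:

```shell
# Report the size of the libvirt and docker image files, if present.
for img in /mnt/user/system/libvirt/libvirt.img /mnt/user/system/docker/docker.img; do
    if [ -f "$img" ]; then
        ls -lh "$img"
    fi
done
```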
  25. I do not think you mentioned that your server had IPMI support? With that it is possible to see the boot sequence (and access BIOS settings) without an attached keyboard or monitor. If IPMI stops working then, barring catastrophic failure of the motherboard, not having the network cable plugged into the IPMI port would be the obvious thing to check first. Having said that, do you normally even have an ethernet cable plugged into the IPMI port AND one plugged into the normal network port? Your comment makes it sound as if this may not be the case? It would be a shame to have an IPMI-capable motherboard and not be exploiting this feature.