gubbgnutten

Everything posted by gubbgnutten

  1. Have you enabled turbo write (reconstruct write)?
  2. You are trying to log in as root on the unRAID server, right? Other users will get disconnected immediately in a default environment.
  3. Just suggesting to check the ssh configuration of the client first before starting to make random modifications to the server's configuration...
  4. How about just not making a request for X11 forwarding from the client?
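     Item 4 in practice: the client can simply stop requesting X11 forwarding, either per invocation or in its own configuration. A minimal sketch, assuming an OpenSSH client and a hypothetical server name `tower`:

     ```text
     # One-off: disable X11 forwarding for a single connection
     #   ssh -x root@tower
     #
     # Or permanently, in ~/.ssh/config on the client:
     Host tower            # hypothetical unRAID server name
         ForwardX11 no
     ```

     Either way the server's sshd configuration is left alone, which is the point of the suggestion.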
  5. I would probably just recreate the USB stick, but that's me. Otherwise I would probably, in addition to fsck, also check the contents of the FSCK*.REC files and compare the rest of the files to the corresponding ones from a recent backup.
  6. Could you post diagnostics from after the first aid attempt? Let's see if that got rid of the "Volume was not properly unmounted. Some data may be corrupt. Please run fsck." message. The files FSCK0000.REC, FSCK0001.REC and FSCK0002.REC suggest previous corruption.
  7. One common step is to connect the USB stick to a Windows computer and check it there.
  8. On mobile so I can't check the diagnostics, but my first, second, and third suspects would be the USB stick: either file system corruption, or something wonky preventing it from being mounted properly, possibly causing a default config to be used that is only stored in volatile memory. Have you tried the Fix Common Problems plugin?
  9. High average temperature: there is likely a problem. Investigate! Individual disk(s) notably hotter than the average (especially during parity checks): worth looking into the specific cage/part of the server.
  10. Well, "never" is a pretty broad claim when speaking for everyone. From what I can tell, the average is not occupying space that could be used by something else, so I don't really see how having it would be a problem. I actually find it rather convenient to have. Still no idea what LCM has to do with the average temperature.
  11. Can't think of any reason right now why that would be useful, there are simply too many primes in the typical temperature span for LCM to be of any interest imho.
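      The rule of thumb from item 9 (compare each disk against the array average, rather than against some fixed number) is easy to sketch. The 8 degree margin below is an arbitrary illustration, not an unRAID setting:

      ```python
      # Flag disks notably hotter than the array average.
      # The margin is an arbitrary example value, not an unRAID default.
      def hot_disks(temps, margin=8):
          """temps: {disk_name: temp_C}; returns the names of disks more
          than `margin` degrees C above the array average."""
          avg = sum(temps.values()) / len(temps)
          return sorted(d for d, t in temps.items() if t - avg > margin)

      temps = {"disk1": 34, "disk2": 36, "disk3": 35, "disk4": 47}
      # average is 38.0 C; disk4 sits 9 degrees above it
      print(hot_disks(temps))  # ['disk4']
      ```

      An outlier like `disk4` during a parity check is the "look into that specific cage" case from item 9.
      
      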
  12. Moving the files locally instead can in many cases be virtually instantaneous. Some pitfalls, though. While your current method may be slower, it is also pretty foolproof.
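      Why a local move can be virtually instantaneous: within a single filesystem it is just a metadata rename, independent of file size; only across filesystems does it degrade to a copy plus delete. A sketch of the same-device check, with illustrative paths:

      ```python
      import os
      import shutil
      import tempfile

      def same_filesystem(src, dst_dir):
          # st_dev identifies the filesystem a path lives on.
          return os.stat(src).st_dev == os.stat(dst_dir).st_dev

      with tempfile.TemporaryDirectory() as tmp:
          src = os.path.join(tmp, "big.bin")
          with open(src, "wb") as f:
              f.write(b"\0" * 1024)
          dst_dir = os.path.join(tmp, "media")
          os.mkdir(dst_dir)

          if same_filesystem(src, dst_dir):
              # Pure rename: near-instant regardless of file size.
              os.rename(src, os.path.join(dst_dir, "big.bin"))
          else:
              # Falls back to copy + delete across filesystems.
              shutil.move(src, dst_dir)
      ```

      One of the pitfalls alluded to above: on unRAID, mixing disk paths and user-share paths in the same move is a known way to lose data, which is why the slower-but-foolproof method has its merits.
      
      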
  13. That is not a case of allocation for SSD vs HDD, it is just a case of Windows assuming an incorrect allocation size over the network... 1MiB IIRC.
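      To see how much that wrong guess distorts the reported "size on disk": the reported figure is just the file size rounded up to a whole number of allocation units, so a 1 MiB assumption inflates small files enormously compared to a real 4 KiB cluster size. An illustration (the sizes are made up):

      ```python
      def size_on_disk(file_size, alloc_unit):
          # Round up to a whole number of allocation units.
          return -(-file_size // alloc_unit) * alloc_unit

      actual = size_on_disk(10_000, 4096)          # real 4 KiB clusters
      assumed = size_on_disk(10_000, 1024 * 1024)  # the 1 MiB guess over the network
      print(actual, assumed)  # 12288 1048576
      ```

      Same file, wildly different "size on disk": an artifact of the assumed allocation size, not of SSD vs HDD.
      
      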
  14. The cursor is not supposed to move there. Have you tried just entering the password and then pressing Enter? Why are you "unable to access the web login"? Can't you reach the server with a web browser, or won't it accept your login?
  15. TestDisk has saved one of my disks in a somewhat similar situation. Just don't panic, and don't write anything to the disk unless you are absolutely sure about the consequences. There are still multiple approaches to data recovery available.
  16. It makes perfect sense! Mover is doing exactly what it should from the sound of it. In this case (yes), it moves data for a share from the cache drive to (the same share on) the array. Moving stuff from the downloads share to the media share is not something mover is supposed to do.
  17. Try the exact path suggested earlier and post log from using that path.
  18. Listen to @bonienl, not to Amcrest.
  19. Rebuilding a disk won't fix file system problems on it, if that's what you're asking.
  20. Just to confirm, it is the DATAZ share that contains those 500G, right? Edit: Oh, 6.4.0-rc6?
  21. Manually moving stuff is risky, not going into details since it has already been mentioned. Could the initial problem be that you had the share's cache setting set to Prefer? That would cause mover to move files from the array to the cache, not the other way round.
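      The mover behavior described in items 16 and 21 boils down to the share's "Use cache" setting. Summarized as a simple lookup (my reading of the unRAID 6 semantics, simplified):

      ```python
      # Mover direction by share "Use cache" setting (unRAID 6 semantics,
      # as described in the posts above; simplified sketch).
      MOVER_DIRECTION = {
          "Yes":    "cache -> array",  # new writes land on cache, mover flushes to array
          "Prefer": "array -> cache",  # mover pulls the share's files onto the cache
          "Only":   None,              # files live on cache only; mover ignores them
          "No":     None,              # files live on array only; mover ignores them
      }

      print(MOVER_DIRECTION["Prefer"])  # array -> cache
      ```

      So a share stuck on Prefer filling the cache, or mover "only" moving downloads to the array, are both the setting working as designed, not a malfunction.
      
      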
  22. Did you try safe mode as suggested by the FAQ entry?
  23. Zeroing a new drive before it is actually added keeps the parity correct at all times and is extremely straightforward to implement. Adding a drive by reading from it and updating the parity on the other hand, yikes. Not only would the parity be invalid (or at best require special handling) during the update, it would also in virtually all cases be way slower than filling a drive outside of the array with zeros.
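      The "parity stays correct at all times" claim follows directly from how single (XOR) parity works: XOR-ing in an all-zero drive changes nothing, so a pre-zeroed drive can join the array without touching parity. A tiny demonstration with made-up two-byte "disks":

      ```python
      from functools import reduce

      def parity(disks):
          # Single (XOR) parity across byte strings of equal length.
          return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*disks))

      disks = [b"\x0f\xa0", b"\x33\x11"]
      before = parity(disks)
      after = parity(disks + [b"\x00\x00"])  # add a pre-zeroed drive
      print(before == after)  # True
      ```

      Adding a drive with existing data instead would change every parity byte where that drive is nonzero, which is exactly the "parity invalid during the update" problem described above.
      
      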