ventrue

Everything posted by ventrue

  1. Yes, Memtest didn't find anything at least. Could always be some freak hardware bug, I suppose. Other USB ports don't show different behaviour. It always works fine at first, then segfaults show up (seemingly in relation to a file transfer, haven't yet seen it before that) and only after that the blacklist errors also appear. I've had the storage running for over a day now with constant access and even though these errors keep popping up in the log, the storage itself seems to work fine. If the GUI wasn't affected, I wouldn't even notice that something's wrong. I've also sorted out the speed issue; it's definitely limited by the network. Had to reposition some Wi-Fi stuff.
  2. It already is in a USB2 port. I could try USB1, if your idea is to just use any other type of port? It's fairly old hardware, so I don't have a whole lot of options here. The log doesn't show any disconnects though. The drive (or rather the share) also seems accessible all the time, although this is certainly not a bombproof way to find intermittent problems.
  3. Looks fairly random to me on both drives I've tried (one PNY, one SanDisk). I agree that after a while, a pattern seems to develop where each segfault is accompanied by a blacklist error. This is different in the beginning, however. As I said, the segfaults appeared first and were also significantly more numerous at first. And in any case, why would I be issued a trial key for a blocked GUID in the first place, and why would it not stay permanently unusable if it was indeed blocked, rather than immediately reverting to a usable state every single time?
  4. Any more ideas, anyone? Any further details I can provide to help solve this?
  5. Yeah, I did notice those blacklist errors as well. Not sure what happened there. They seemed pretty inconsequential, but to make sure, I got out another flash drive and literally started from scratch with a fresh install, a new trial key and no backup import whatsoever. Same issues as before: the segfaults pop up basically with every click in the half-working GUI as long as a file is being sent to the NAS. Cancelling the transfer seems to solve the issue.
     I'd also like to point out this line in the log: I believe this is the moment that blacklist error starts appearing on this new flash drive as well. It would surely be interesting to know what this means, but the other issue appeared first, so maybe they aren't related.
     Edit: Checking back after a while, the errors actually do continue even without accessing the NAS. It's still parity-syncing; maybe it'll get better when that is finished.
     Edit 2: No, it doesn't.
     tower-diagnostics-20210627-2201.zip
  6. Just for the record: still new here, still trialling Unraid 🙂 Can someone help me solve this? I'm having some issues that are probably related:
     • Broken Web GUI: parts not loading, Bad Gateway, or PHP errors like "Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 4294967568 bytes) in [varying scripts]". This gets increasingly worse until the GUI becomes unusable. (Some quick arithmetic on those numbers at the end of this post.)
     • Lots of segfaults in the log, for multiple applications (bash, php, nginx, libc)
     • Bad transfer performance. This might be unrelated and simply a hardware or network limitation though, not sure. The other day I got 10 MB/s writing to the NAS; today I reformatted, used XFS instead of btrfs this time, and got 30 MB/s.
     What I've tried so far:
     • Closing the console (with htop in it) and the log seems to at least temporarily restore GUI functionality
     • Problems can reoccur right after rebooting, so they're not necessarily runtime related
     • Problems seem to suddenly appear or get much worse when files are transferred to the NAS
     • Two passes of Memtest haven't shown any errors. I'm running more now.
     • Completely reinstalling Unraid
     • Docker and VMs are disabled
     • User Applications, unBalance and Dynamix sleep are installed
     tower-diagnostics-20210627-1800.zip
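     Since those numbers looked odd to me, here's a quick back-of-the-envelope check (just my own arithmetic, nothing taken from the diagnostics):

     ```python
     # My own sanity check on the numbers in that PHP error (illustrative only).
     memory_limit = 134217728   # the limit PHP reports
     failed_alloc = 4294967568  # the size PHP tried to allocate

     print(memory_limit / 1024**2)  # 128.0 -> the limit is the usual 128 MiB
     print(failed_alloc - 2**32)    # 272   -> the request is 4 GiB + 272 bytes, just past 2**32
     ```

     An allocation request of a few bytes over 4 GiB looks to me more like a corrupted size value than a genuine need for that much memory, which would at least be consistent with all the segfaults.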
  7. Right now, there's no redundancy at all. Building that up and then needing it a little bit more than before is a net gain, I'd say. I do have real time backups as well, should the worst happen. I don't trust storage devices.
  8. Well, believe it or not, but the only significant reason for me to use a NAS is that everything else in my PC is silent and I want to throw the single, noisy HDD that I still need for those large files out of the room. I try to move the files I'm working on onto SSDs for more speed anyway, but if they're too large that's not possible and then the HDD has to keep running. That annoys me. Everything else is just an afterthought (i.e. moving personal files to the NAS, using parity, abusing parity to give drives with bad sectors another chance, etc.).
     Unfortunately, if I want to use parity, I will have to use that one large drive as the parity drive, which will waste 4TB of storage and require me to throw literally everything else I have lying around into the array in order to not lose too much space. I should probably just buy some larger drives, but I'm not willing to pay the inflated prices we're seeing right now. Luckily Chia isn't farmed on actual soil, or we'd all be starving in a year!
     The easiest way to go right now would probably be to just use the one large HDD I've got as the only disk, if Unraid can't do any organising in the background. I don't really want to deal with that myself too much.
     But are you sure the mover doesn't care about file sizes? I mean, once something's in the cache, the file size should be known, and surely the mover considers that? If so, I could order the HDDs from small to large and have Unraid fill them up in that order. That would pretty much do what I want then, wouldn't it? (Rough sketch of what I mean below.)
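     Here's roughly what I mean, as a sketch (my own illustration in Python, not how Unraid's allocator actually works; sizes and thresholds are made up):

     ```python
     # "Fill the disks up from smallest to largest" idea, purely illustrative.

     def pick_disk(disks, file_size, min_free):
         """Return the first disk (smallest first) that can take the file
         while keeping at least min_free bytes free, or None."""
         for disk in sorted(disks, key=lambda d: d["size"]):
             if disk["free"] - file_size >= min_free:
                 return disk
         return None

     disks = [
         {"name": "disk1", "size": 500e9, "free": 120e9},
         {"name": "disk2", "size": 1e12,  "free": 800e9},
         {"name": "disk3", "size": 4e12,  "free": 4e12},   # the big drive, used last
     ]

     print(pick_disk(disks, 50e9,  min_free=10e9)["name"])  # disk1 - small files fill the small disks first
     print(pick_disk(disks, 900e9, min_free=10e9)["name"])  # disk3 - only the big drive has room
     ```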
  9. Hi! I'm new here 🙂 I understand how I'm supposed to handle large files (i.e. set minimum free space larger than the largest file). However, my file sizes vary from bytes to (possibly) terabytes. Definitely similar to or even larger than some disks in the array, which would not only waste space, but entire disks. Is there a way to use the largest disk as cache and have Unraid move the files from there to the disk with the lowest amount of free space on which they will fit? Storing them on the cache disk first makes the file size known to Unraid, and the cache size in turn is a limitation known to me. This would allow me to completely fill even small disks, while retaining the largest possible chunks of free space on the larger disks for files that actually are that big. (See the sketch below for what I mean.)
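     Here's the rule I have in mind, as a sketch (my own illustration, not an existing Unraid/mover feature; the numbers are invented):

     ```python
     # "Move each file from the cache to the fullest data disk it still fits on" - purely illustrative.

     def best_fit_disk(disks, file_size, min_free=0):
         candidates = [d for d in disks if d["free"] - file_size >= min_free]
         if not candidates:
             return None  # the file doesn't fit anywhere
         # Least remaining free space first: small disks get filled completely,
         # while the big disks keep large contiguous chunks free for huge files.
         return min(candidates, key=lambda d: d["free"])

     disks = [
         {"name": "disk1", "free": 60e9},
         {"name": "disk2", "free": 900e9},
         {"name": "disk3", "free": 3.5e12},
     ]

     print(best_fit_disk(disks, 50e9)["name"])   # disk1 - tops up the nearly full disk
     print(best_fit_disk(disks, 2e12)["name"])   # disk3 - only the big disk can take it
     ```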