SvbZ3r0

Members
  • Posts: 18
  • Joined
  • Last visited

SvbZ3r0's Achievements

Noob (1/14)

Reputation: 3

  1. To anybody else who has this problem: it was because I had FIREFLY_III_LAYOUT set to v2. The v2 layout has since been merged into the main branch, so this setting is unnecessary. Just leave the variable empty in the template.
  2. I have had Firefly III installed for quite a while, but haven't used it in over a year. I just got back and see this: Is this normal? The image used is fireflyiii/core:latest, and unRaid assures me there are no pending updates. There's placeholder text everywhere, in case it's not apparent. And not just on this page, either.
  3. Got it. I was worried it was a failing disk. Glad that's not the case. Thanks for your help.
  4. Thanks Jorge. That works. Is there a reason this happened? How can I prevent it from happening again? Or is it something that just happens once in a while?
  5. Hi, Fix Common Problems detected an issue with my Docker overnight saying "Unable to write to Docker Image. Docker Image either full or corrupted". My Docker image is almost one-third free, so it is probably corrupted. I tried restarting Docker but no luck. I ran a short SMART test on the disk that has the docker image, but it found no issues. I've attached the diagnostics reports: orthanc-diagnostics-20231110-0837-afterDockerRestart.zip and orthanc-diagnostics-20231110-0826-beforeDockerRestart.zip. I would like to know what caused this, and how to avoid this in the future. Any help is appreciated. Edit: Is the link to the diagnostics wiki page automatic?
  6. Sorry for necroing an old post. I have exactly the same issue. At least, I used to. The web terminal would 502 on opening, but an F5 would get it working. For a couple of days now, though, refreshing the web terminal doesn't work. I'm stuck with SSHing into the system every time I want to check anything. Same error you had. Unlike other posts mentioning this issue, I have this on all browsers and from all network connections. Docker consoles do not have this problem, and neither do logs. I'm honestly at my wit's end.
  7. Can jdupes be updated? jdupes on NerdTools is on v1.23.0. Later versions have a nifty little setting to store hashes in a text file so the hashes don't have to be created every time. The latest version is v1.27.3.
  8. This issue still exists. There's no way to add a parent or sibling.
  9. +1 from me. If we were to get multiple arrays (and that's a big if), how would it be implemented? Personally, I don't see a use case for multiple arrays where you have files and directories spread across them. Therefore, arrays would be independent of each other. Given that is the case, taking individual arrays offline shouldn't be a problem. So, I guess this request boils down to having multiple independent arrays? Ofc, I might be fundamentally misunderstanding how unRaid works. 😅
  10. Hi, I ended up copying all the data to my array and formatting my cache. So.. Solved. I guess? Sorry for the trouble. Thanks for your time.
  11. Follow-up: Rebooted. xfs drive works again. Automagically. Btrfs still has problems. I'm still interested in knowing how and why the xfs drive had problems, and what I can do to not repeat that in the future.
  12. Ohh.. There's something in the logs about a duplicate UUID for the unmountable xfs drive. I tried xfs_admin -U generate to get a new UUID, but it gave the same error about valuable metadata in a log.
  13. To start with, I've been using unRaid for a while now with 6 disks and no parity. I recently bought 2 new 4TB disks. I swapped two of my old disks out by copying data to one of my new disks, created a new config (had to shuffle the disk order to fit the new disks), verified that everything worked, and then installed the second disk as parity. As soon as it started building the parity disk, one of my older xfs disks and my btrfs cache disk glitched. I immediately paused building the parity. Now the xfs disk says "unmountable: no filesystem", and the btrfs disk says "unsupported partition layout". The btrfs disk is mountable as an unassigned disk; the xfs disk is not. For the xfs disk, I tried running xfs_repair and got this as a result:

     Phase 1 - find and verify superblock...
     Phase 2 - using internal log
             - zero log...
     ERROR: The filesystem has valuable metadata changes in a log which needs to be replayed. Mount the filesystem to replay the log, and unmount it before re-running xfs_repair. If you are unable to mount the filesystem, then use the -L option to destroy the log and attempt a repair. Note that destroying the log may cause corruption -- please attempt a mount of the filesystem before doing this.

     How do I proceed? Any help is appreciated. Attached diagnostics: orthanc-diagnostics-20211013-0503.zip
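The FIREFLY_III_LAYOUT fix in post 1, expressed as a hypothetical compose-style fragment: only the variable name and image come from the posts above; the surrounding service layout is an assumption, and in the unRaid template the same thing is done by leaving the variable's value field blank.

```yaml
# Hypothetical docker-compose fragment; only FIREFLY_III_LAYOUT and
# the image name come from the posts above.
services:
  firefly:
    image: fireflyiii/core:latest
    environment:
      # Leave empty: the v2 layout is now part of the main branch.
      - FIREFLY_III_LAYOUT=
```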
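For anyone who lands on the xfs_repair error quoted in post 13 above, the order of operations its message asks for can be sketched as follows. The device path and mount point are placeholders, not taken from the diagnostics; substitute your own.

```shell
# Sketch of the recovery order xfs_repair's message describes.
# /dev/sdX1 and /mnt/recovery are placeholders; substitute your
# actual partition and an existing empty directory.
mount -t xfs /dev/sdX1 /mnt/recovery   # mounting replays the journal
umount /mnt/recovery                   # unmount before repairing
xfs_repair /dev/sdX1                   # repair runs against a clean log

# Last resort, only if the mount itself fails: -L zeroes the log and
# can discard the metadata changes it holds, so try the mount first.
# xfs_repair -L /dev/sdX1
```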
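On the jdupes request in post 7: the hash-storing feature it describes would be used roughly like this. The flag spelling is from my reading of the newer jdupes docs, so treat it as an assumption and check `jdupes --help` on your build; the paths are hypothetical.

```shell
# Assumed flag from newer jdupes releases (verify with `jdupes --help`):
# --hash-db stores file hashes in a database file so repeat runs
# can skip re-hashing unchanged files. Paths are placeholders.
jdupes -r --hash-db=/mnt/user/appdata/jdupes/hashes.db /mnt/user/data
```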