About ogi


  1. Figured it was something like that, but you know what they say about assumptions. Thanks for explaining 👍
  2. Turns out DevPack was the culprit! I didn't find the link you posted, but found a post on reddit with the same error. Thanks for posting
  3. I just updated from 6.5.1 (via Tools -> Update OS) and am unable to load the web GUI (I get a 502 Bad Gateway error). I am able to SSH into the system. Troubleshooting steps welcome. EDIT: SOLVED. DevPack was the culprit.
  4. +1ing here as well... Looking at the CPUs that support Quick Sync Video, and at the features I would like in a server CPU, the options aren't that great. Also, I'm curious why everyone is so concerned with NVIDIA-based decoding on the server. Unless I'm misunderstanding something, the server doesn't really decode video for its own sake; it transcodes it to formats that are friendly to the devices doing the playback (typically H.264), no?
  5. After talking this over with some friends, I got help creating a custom Dockerfile:

     ```
     FROM alpine
     RUN apk add --update git build-base openssh
     WORKDIR /app/
     RUN git clone https://github.com/Matroska-Org/foundation-source.git
     WORKDIR /app/foundation-source/
     RUN gcc corec/tools/coremake/coremake.c -o coremake
     RUN ./coremake gcc_linux_x64
     RUN make -C spectool
     RUN make -C mkvalidator
     RUN make -C mkclean

     FROM linuxserver/nzbget
     COPY --from=0 /app/foundation-source/release/gcc_linux_x64/mkclean /usr/bin/
     ```

     I created a Docker Hub repo in case anyone is curious or wants to try it out: https://hub.docker.com/r/j9ac9k/nzbget_mktools/
  6. So I'm not sure if this is the correct place for this post; apologies in advance if I should have posted elsewhere. My question in short: how would I go about compiling mkclean with static libraries? Longer explanation with some context below.

     mkclean is one of several tools that can be used to clean up and optimize Matroska-based video containers. Some other handy utilities are included as well: https://github.com/Matroska-Org/foundation-source

     I thought I would attempt to use the mkclean binary specifically as part of a post-processing script within a docker container. With the assistance of the DevPack plugin I was able to compile the Matroska utilities from source. The problem is that I cannot execute the binaries from within a docker container. I suspect the reason is dynamically linked libraries:

     ```
     root@4bb49dcb91f6:/utils$ ldd mkclean
             /lib64/ld-linux-x86-64.so.2 (0x14d447b80000)
             libstdc++.so.6 => /usr/lib/libstdc++.so.6 (0x14d44782e000)
             libm.so.6 => /lib64/ld-linux-x86-64.so.2 (0x14d447b80000)
             libgcc_s.so.1 => /usr/lib/libgcc_s.so.1 (0x14d44761c000)
             libc.so.6 => /lib64/ld-linux-x86-64.so.2 (0x14d447b80000)
     root@4bb49dcb91f6:/utils$ ls /lib64
     ls: cannot access '/lib64': No such file or directory
     root@4bb49dcb91f6:/utils$ ls /usr/lib64/
     ls: cannot access '/usr/lib64/': No such file or directory
     ```

     Of course, from within unRAID, the binary executes just fine. So I suppose this means I need to build mkclean (and the other binaries I want to use) from source, but with statically linked libraries; how would I go about that? Thanks!
  7. I can't stop any of my containers; the only thing I can think of doing is manually unmounting all my disks and then powering off the system.
  8. Oh, definitely won't be stopping the array during a preclear or shrink! In the first example, the clear-disk script finished, then I attempted to stop the array (per the instructions) and it hung on 'syncing filesystem' for an hour before I power cycled (I had already made sure the drives were unmounted). This time around, I'm just trying to stop a docker container (the clear-disk script is still running); I'll wait until the script finishes before I try anything more drastic (unmounting disks, power cycling, etc.).
  9. I'm noticing that my sync commands occasionally run for what feels like forever since I enabled turbo write. For example, when I have attempted to stop the array, 'syncing filesystem' has lasted an hour (before I finally verified the disks were unmounted and powered off the system manually). Currently, I'm attempting to restart a Plex docker container, and it's stuck on [s6-finish] syncing disks.

     It should be noted that I'm also currently attempting to shrink my array using the 'clear an array drive' script, following the shrink-array procedure here: https://wiki.lime-technology.com/Shrink_array

     Anyway, I'm puzzled as to how I'm supposed to handle these hangs; I can't seem to kill the s6 sync commands through the console via the kill command. Suggestions would be appreciated.
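As an aside to the hang above (generic Linux, nothing unRAID-specific): one way to tell whether a stuck 'syncing filesystem' is actually making progress is to watch the kernel's dirty-page counters. If Dirty and Writeback keep shrinking, sync is slowly working; if they sit still, a device is more likely wedged. A small sketch:

```shell
# Amount of written-but-unflushed data the kernel still has to push out;
# re-run (or `watch`) this while sync appears hung to see if it's moving.
grep -E '^(Dirty|Writeback):' /proc/meminfo
```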
  10. I'm not sure if it's a separate issue, but while removing a second disk I'm having problems with 'syncing disks'. I attempted to restart a docker container (Plex Media Server) while the clear-disk script was running. The container won't stop; it appears to be stuck on [s6-finish] syncing disks ... I've been trying to kill the container with no luck. The current plan is to let the clear-disk script finish on the current disk, then force a shutdown/restart (I will likely have to manually unmount all my disks). Anyway, I don't think this issue is directly due to the script so much as the suggested setting of md_write_method to reconstruct write (from auto or read/modify/write). Given that that setting is only suggested, I'm going to try running through this script next time with the read/modify/write setting instead (knowing full well it will take longer) and see whether I have 'syncing filesystem' or 'syncing disks' issues then.
  11. After a little over an hour of syncing, I powered the machine off and let it come back up. It's doing a parity check now, but the disk I was running this on before is labeled as unmountable, so as soon as the parity check completes I'll resume the shrink-array procedure from step 9: https://wiki.lime-technology.com/Shrink_array
  12. I just ran this script, which completed without issue, but when I went to stop the array, it is taking a long time to sync filesystems... is this normal? (I'm currently past 20 minutes.)
  13. After going through more steps in the mailing list above, I was unable to recover the files that were removed, so in the interest of time I've decided to restore from backup. Sorry I can't provide a better test bed for testing file recovery; the only thing of value I can suggest now is for unRAID to include an older version of the btrfs-find-root utility for now :-/
  14. That's clever, changing the filesystem type to prevent mounting... The server is in a parity check right now, from the unexpected crash when I was running btrfs-find-root with the drive mounted (yeah, I was an idiot for that). I will attempt this after it finishes.

      I did take a catalog of the data loss, and I have a backup available for this data, so I'm not too concerned should btrfs restore not work (mostly I want to get some experience with the restore procedure, so that if I ever lack a backup in the future, I'm not freaking out).

      I had the same issue this poster did regarding the duration of the btrfs-find-root command: https://www.spinics.net/lists/linux-btrfs/msg61340.html . In my case, I let it run for ~26 hours before I killed it. Strangely enough, when I ran btrfs-find-root /dev/md18 I immediately got the following printed out (but the program did not terminate and went on to run for a while longer):

      ```
      root@Tower:~# btrfs-find-root /dev/md18
      Superblock thinks the generation is 12237
      Superblock thinks the level is 1
      ```

      Ogi
  15. So that didn't work; the command ran well in excess of 24 hours, generated no output, used no memory, just one thread at 100% CPU utilization. The good news is I think I can recover my data by just running btrfs restore /dev/md18. The problem is I need to restore the data to another folder, which means I need to mount a different unRAID disk to move the restored files to. I haven't been able to mount another of my drives while in maintenance mode; the commands I've run are:

      ```
      mkdir /mnt/disk20
      mount /dev/sdo1 /mnt/disk20
      ```

      I get a 'filesystem mounted, but mount ( failed: No space left on device' error. Suggestions as to how to proceed would be appreciated.

      Sent from my Pixel XL using Tapatalk