Flatbed0563

  1. Hi all, I recently restarted my server without realizing the mover was still running (a clean reboot via the webGUI). After boot I noticed higher-than-usual cache drive usage, so I ran the mover manually... which completed immediately. Adding up the share storage usage on each disk, I should have around 250GB on the cache, whereas Unraid reports 320GB used on my 1TB pool. I read somewhere that rebooting while the mover is running can leave some files stuck on the cache forever, so I have some questions: Is there a chance my files were corrupted mid-move because of the reboot? And how can I make sure stuck files get unstuck?
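
     A quick way to check for leftovers from an interrupted move (a minimal sketch; the share name "media" is just an example, and the mount points are the usual Unraid ones, so verify them on your system):

         # Compare what the share still occupies on the pool vs. the array:
         du -sh /mnt/cache/media        # space still on the cache pool
         du -sh /mnt/disk*/media        # space already on the array disks
         # List files that exist on BOTH sides -- duplicate paths are the classic
         # "stuck" symptom, since the mover will not overwrite an existing file.
         # /mnt/user0 is the array-only view (deprecated on recent releases but
         # still present):
         cd /mnt/cache/media && find . -type f -exec test -e /mnt/user0/media/{} \; -print
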
  2. Hmmm, I started from 15, and switching to tensorchord/pgvecto-rs:pg15-v0.1.11 actually seems to have worked seamlessly for me: no errors during indexing at least, and the image counts in the stats look the same as before the switch...
  3. Hi All, I currently have a cloud server where I use Duplicati to back up all its contents to an external hard drive every night (mounted via Unassigned Devices). To have a true 3-2-1 backup strategy, I swap this external hard drive with another one that I store at a second location. Because the external hard drives are now full, I want to switch to a drive-pool setup that can be scaled dynamically and can be parity protected. I was thinking of buying a QNAP TR-004U, putting it in the individual-drives configuration, and setting up 2 ZFS pools in Unraid (1 pool for each backup drive set). 1 pool would be active at any given time and swapped for the other every once in a while. Some questions I have about this setup:
     • Would this even work? Will Unraid complain a lot while a drive pool is disconnected?
     • Can the capacity of the vdev be increased similarly to current Unraid arrays? From what I understand, vdevs cannot grow in number of drives (that's planned for a future update), but if I were to swap out and rebuild 1 drive at a time with larger ones, would that increase the capacity? (See the sketch below.)
     Thx in advance
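
     On that last question: yes, a raidz vdev grows once every disk in it has been replaced with a larger one. A minimal command-line sketch (the pool and device names are made up, and on Unraid you would normally do the replacement through the GUI rather than by hand):

         # Let the pool grow automatically after the last disk is replaced:
         zpool set autoexpand=on backups1
         # Replace one disk at a time, waiting for each resilver to finish:
         zpool replace backups1 sdb sdf
         zpool status backups1            # watch resilver progress
         # If autoexpand was off, expand the new device explicitly afterwards:
         zpool online -e backups1 sdf
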
  4. Hi all, I currently have an Immich container, which uses a separate PostgreSQL database in the backend. Due to a recent update, the server does not boot up because it tries to use the vectors extension in PostgreSQL, which is not installed. When looking up how to install extensions, the results mostly say I should console into the Docker container, clone the git repo, and install it from source, but I have some questions about whether this is the clean way of doing it.

     I always thought that everything done on the Docker console command line happens in a sandboxed environment created from the Docker image, so if I update my Docker container in the future, will that also remove the extension I installed from the command line? I guess the extension also needs to be updated, so if I install it via the container console, I would probably have to update it manually as well? Is there a way to embed this in the container config so that the extension is installed and updated at startup of the container?

     This is the step plan I found:
     1. Console into the PostgreSQL container and install the tools needed to build the extension:
        apk add --no-cache git build-base clang llvm15-dev llvm15
     2. Clone the git repo of the extension into a temp folder:
        cd /tmp
        git clone --branch v0.5.1 https://github.com/pgvector/pgvector.git
        cd pgvector
     3. Compile and install the extension:
        make
        make install

     After this, the extension should be installed and ready to use in Immich, as that container enables the extension on the database itself at startup. Is this correct?
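
     On the "embed this in the container config" question, one common pattern is to bake those build steps into a derived image, so the extension survives container updates as long as the image is rebuilt. A rough sketch only, assuming the database container is Alpine-based; the base image, tag, and resulting image name are examples, not the actual Immich setup:

         # Dockerfile: extend the Postgres image and compile the extension into it.
         FROM postgres:15-alpine
         RUN apk add --no-cache git build-base clang llvm15-dev llvm15 \
          && git clone --branch v0.5.1 https://github.com/pgvector/pgvector.git /tmp/pgvector \
          && cd /tmp/pgvector \
          && make \
          && make install \
          && rm -rf /tmp/pgvector
         # Build with:  docker build -t postgres-pgvector:15 .
         # then point the container template at postgres-pgvector:15. Updating the
         # extension means bumping the --branch tag and rebuilding the image.
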