mich2k

Members
  • Posts

    35

  1. Hello, I already know all the limits of having one ZFS pool per drive, but since I have a cloud backup I only care about snapshots (due to fat fingers and wrong deletes), compression, and corruption warnings. The issue is that I can't understand how to do this at the share level with datasets. If I have a folder that gets replicated across several disks, should I manually create a dataset on each disk? And if I gather the files onto one disk, each disk's snapshot will not delete what we moved away, right? I saw many scripts about auto-snapshots, but they take as source only one pool and one dataset, in a pool/dataset fashion. Am I missing something? Thank you all
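The auto-snapshot scripts mentioned above do usually take a single pool/dataset argument, but with one pool per drive they can simply be looped. A minimal sketch, assuming placeholder pool names (`disk1`, `disk2`, `disk3`); the `zfs` calls are guarded so the script is a no-op where ZFS is not installed:

```shell
#!/bin/sh
# Placeholder pool names: one ZFS pool per drive, as described above.
POOLS="disk1 disk2 disk3"

# One timestamped snapshot name shared by this run.
STAMP="auto-$(date +%Y%m%d-%H%M)"

if command -v zfs >/dev/null 2>&1; then
  for pool in $POOLS; do
    # `zfs list -r` walks every dataset under the pool; snapshot each one.
    zfs list -H -o name -r "$pool" | while read -r ds; do
      zfs snapshot "${ds}@${STAMP}"
    done
  done
fi
```

Alternatively, `zfs snapshot -r disk1@name` takes a recursive snapshot of a whole pool (and all its datasets) in one call, which makes the inner loop unnecessary.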
  2. Ok, I feel really foolish for having missed that folder during the mappings! I followed the PR which split /library (or /photos in my case) into 3 volumes, and it did not mention the upload one. Thank you again, for real
  3. Hello, thank you for the hints. 1) By checking here and here I do not understand which folder you refer to as the "upload" folder; I see "/photos", so I must have missed this folder, but I can't even find docs about it. 2) It is a well-known approach, solved in a PR on GitHub, that allows you, for example, to keep thumbnails on an SSD and thus avoid spinning up the drives. 3) I know, right; I had some external libraries to move in with the Immich CLI, so I will remove it afterwards, for now it is read-only. Thank you
  4. Hello, I've been having this issue for 1-2 months now. My Docker image keeps growing; last time this happened I erased all my containers and increased the image size to about 40 GB. I understood the issue is somehow with the Immich container, so yesterday I removed the Unraid app for Immich and went with the compose file below, since some people rightly said you have full control and it is better supported this way, yet I still reached 100% usage. I'm desperate now, since reaching 100% kills the Immich jobs and is really frustrating. Any help would be appreciated, thank you all.

```yaml
version: "2.1"
services:
  immich:
    image: ghcr.io/imagegenius/immich:latest
    container_name: immich
    environment:
      - PUID=99
      - PGID=100
      - UMASK=022
      - TZ=Etc/UTC
      - DB_HOSTNAME=postgres_db
      - DB_USERNAME=postgres
      - DB_PASSWORD=postgres
      - DB_DATABASE_NAME=immich
      - REDIS_HOSTNAME=redis
      - DB_PORT=5432 #optional
      - REDIS_PORT=6379 #optional
      - REDIS_PASSWORD= #optional
      - MACHINE_LEARNING_GPU_ACCELERATION= #optional
      - MACHINE_LEARNING_WORKERS=1 #optional
      - MACHINE_LEARNING_WORKER_TIMEOUT=120 #optional
    volumes:
      - /mnt/user/appdata/immich:/config
      - /mnt/user/immich_library:/photos/library
      - /mnt/user/immich_thumbs:/photos/thumbs
      - /mnt/user/immich_encoded_video:/photos/encoded-video
      - /mnt/user:/import:ro #optional
    ports:
      - 8080:8080
    restart: unless-stopped
    devices:
      - "/dev/dri/card0:/dev/dri/card0"
      - "/dev/dri/renderD128:/dev/dri/renderD128"
    networks:
      br0:
        ipv4_address: 10.5.1.14

  # This container requires the external applications below to be run separately.
  # By default, ports for the databases are opened; be careful when deploying.
  redis:
    image: redis
    ports:
      - 6379:6379
    container_name: redis
    networks:
      br0:
        ipv4_address: 10.5.1.13

  # PostgreSQL 14:
  postgres_db:
    image: tensorchord/pgvecto-rs:pg14-v0.2.0
    ports:
      - 5432:5432
    container_name: postgres14
    environment:
      - PUID=99
      - PGID=100
      - UMASK=022
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=immich
    volumes:
      - /mnt/user/appdata/postgres_immich:/var/lib/postgresql/data
    networks:
      br0:
        ipv4_address: 10.5.1.12

networks:
  br0:
    external: true
    name: br0
```

(attached: homedata-diagnostics-20240422-0835.zip)
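When a Docker image file keeps growing like this, the usual culprit is a container writing into its own writable layer rather than a mapped volume. A quick way to check which container is responsible; the commands are guarded so the sketch is a no-op where Docker is absent:

```shell
#!/bin/sh
# Per-container sizes: the first figure in SIZE is data written inside the
# container's writable layer (this is what fills the docker image file);
# the "virtual" figure includes the read-only image layers.
if command -v docker >/dev/null 2>&1; then
  docker ps -a --size --format '{{.Names}}\t{{.Size}}'

  # Overall breakdown: images, containers, local volumes, build cache.
  docker system df -v
fi
```

A container showing gigabytes in its writable layer points at a path (often logs, a cache, or an upload folder) that should be volume-mapped but is not.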
  5. Hello, I was almost done and kept receiving warnings about my Docker image filling up. I already checked with 10 scripts and tools that I'm not writing inside it and that there is no misconfiguration. I went to 35 GB and now this (I can't update or install containers). What is happening? Thanks :) (attached: homedata-diagnostics-20240318-1058.zip)
  6. Hello, after a week of research on cloud providers I found a fitting S3-compatible cloud storage. Now, I know this topic has already been opened hundreds of times, but I still did not manage to find a complete answer. Premise: as of today I'm moving all drives and pools to ZFS, but I'm not going with a ZFS-aware backup (so no zfs send/receive like rsync.net). This means I'm sticking with the good old rsync/rclone, so I'm looking for:
-> no need for incremental backups; for me it is OK if everything gets sent again once a day
-> client-side encryption
-> versioning; my provider offers versioning, but I guess it won't work with encrypted binary files (I would like a "keep the last 3 versions" approach, for instance)
-> ideally I would like the benefit of ZFS, so that if a scrub flags some errors I can just restore that single file; as of now I do not know how, or whether, it is possible to schedule a scrub with ZFS in Unraid, and whether it would be possible to restore a single file if the remote is encrypted.
Plus, as I understand it, if I instead went for zfs send, the backup would be of /mnt/diskX, and that in my opinion would break the Unraid philosophy, since you would have to be very careful about which drive is saving what; with an rsync-based approach I should instead be able to back up the user-share view, /mnt/user/share, if I understood correctly. Now, I saw some users using scripts and others using middleware containers that can encrypt and still manage the versioning on their own. That is all I am looking for, yet I can't find a path: does something already exist in "Apps", or do I have to go with scripts? Thank you all
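For the requirements above, plain rclone already covers client-side encryption (a `crypt` remote layered over the S3 remote) and simple versioning (`--backup-dir` moves changed/deleted files aside instead of overwriting them). A hedged sketch; the remote names `s3` and `s3crypt` are assumptions and must first be defined with `rclone config`:

```shell
#!/bin/sh
# Assumed remotes (created beforehand with `rclone config`):
#   s3      - the S3-compatible provider
#   s3crypt - a crypt remote wrapping s3, so files are encrypted client-side
SRC="/mnt/user/share"       # backing up the user-share view, not /mnt/diskX
STAMP="$(date +%Y-%m-%d)"

if command -v rclone >/dev/null 2>&1; then
  # Files changed or deleted since the last run are moved into a dated
  # folder on the remote instead of being lost, giving simple versioning.
  rclone sync "$SRC" "s3crypt:current" \
    --backup-dir "s3crypt:archive/$STAMP" \
    --transfers 8 --fast-list
fi
```

Pruning down to "keep the last 3 versions" would then just be a matter of deleting the oldest `archive/` folders, e.g. from a scheduled User Script.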
  7. Hello, how did you all manage to install powertop on the latest Unraid version? Thanks
  8. Oh thanks, I already did it via the CLI; it needed destructive mode on. Now that I have the data back, do I destroy this dataset, create a new one, and just move the files in? I guess having the appdata folder as a dataset is a nice-to-have, no? Thanks
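The create-a-new-dataset-and-move-the-files-in step above can be scripted. A sketch assuming a pool named `cache` (a placeholder) and enough free space for a full copy; verify the copy before removing anything:

```shell
#!/bin/sh
# Placeholder pool name; adjust to the real one.
POOL="cache"

if command -v zfs >/dev/null 2>&1; then
  # Create the dataset that will replace the plain folder.
  zfs create "${POOL}/appdata_new"

  # Copy the data in, preserving permissions/ownership (trailing slashes matter).
  rsync -a "/mnt/${POOL}/appdata/" "/mnt/${POOL}/appdata_new/"

  # Only after verifying the copy: remove the old folder, rename the dataset.
  rm -rf "/mnt/${POOL}/appdata"
  zfs rename "${POOL}/appdata_new" "${POOL}/appdata"
fi
```

The rename at the end means the final mountpoint matches the original path, so containers keep working unchanged.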
  9. How can I do this? I mean, is it possible via the GUI?
  10. Funny enough, I just lost my appdata (I still had to set up the backup plugin): since I had the appdata folder but not an appdata dataset, I went to Create Dataset -> typed appdata, and it straight away wiped the appdata folder. In my opinion there should be a follow-up warning for this when the directory already exists
  11. Hello, first of all thanks for the plugin. I was wondering if there is an option to enable auto-refresh for SATA SSD/NVMe pools and disable it for spinner-based pools, or a way to hand-pick which pools get refreshed
  12. Thank you for reaching out. Actually I was having two issues: templates not using the official image, AND I had to enable the low-power HuC/GuC due to my processor type. Thank you for the template :)
  13. I did enable mode 3 and now it seems to work. The CPU is still under heavy load and the GPU only runs at 200-300 MHz out of 800 MHz (max capacity); from the GPU stats I also see a ton of interrupts/sec, and I wonder if that is normal ^^" I have yet to fully understand the difference between mode 2 and mode 3; the Arch wiki is quite technical about this. I just hope the system is using the iGPU as much as possible, since the CPU is a very weak one. Thank you for your help. Edit: this topic is among the first results when googling for GuC/HuC in LP mode; it is actually the only reference for Unraid and it did help a lot. (attached: homedata-diagnostics-20240113-1109.zip)
  14. Hello, are there any new indications on how to enable GuC/HuC on Unraid 6.12? I am running a J6413 processor and am facing this issue right now (or does any plugin exist for this?). Thanks
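On recent Unraid versions, i915 module options can be set from the flash drive: files under `/boot/config/modprobe.d/` are applied at boot. Assuming the goal is GuC submission plus HuC loading (what post 13 above calls "mode 3"), a config fragment like this is the usual approach; whether 3 is the right value for a given CPU is an assumption to verify against the i915 documentation:

```
# /boot/config/modprobe.d/i915.conf (survives reboots via the flash drive)
# enable_guc: bit 0 = GuC submission, bit 1 = HuC loading; 3 enables both.
options i915 enable_guc=3
```

After adding the file, a reboot is needed, and `dmesg | grep -i guc` should then show whether the firmware loaded.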
  15. I guess the issue is my specific CPU, which only allows low-power acceleration; I met other users with the same CPU on unRAID. Would it be safe to execute this directly on the Unraid host?

```shell
cd ~/
git clone --depth=1 https://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
sudo mkdir -p /usr/lib/firmware
sudo cp -r linux-firmware/i915 /usr/lib/firmware
```

https://jellyfin.org/docs/general/administration/hardware-acceleration/intel/#configure-and-verify-lp-mode-on-linux