All Activity


  1. Past hour
  2. @Frank1940 Thank you, sir, you are a genius. This worked. I removed that last line, rebooted the PC, and the shares came back. A huge thank you. Looks like some of my Dockers need a remap, but that's not hard. I never added that line in the first place; not sure what caused that text to appear, or maybe it was there all along and a recent MS change made it problematic. Cheers.
  3. If your host is set up correctly, all you need to do is pull the roflcoopter/amd64-cuda-viseron image and it will use your GPU automagically if it is detected properly. I fully agree that the config should be editable in the GUI, and I plan to do that at some point, but it is a pretty huge amount of work, so sadly I can't give a timeline on that. Viseron v3 is in beta right now, however, which brings a lot of improvements, like 24/7 recordings.
  4. That's what I did. I've been exporting the syslogs to my desktop using a win-syslog server running there, but I'll set it to export to the flash drive instead and see what comes up. log_1233317640.log
  5. I run Frigate as a container on Unraid, but actually use it from Home Assistant (a VM on Unraid). Now I noticed something by chance: Frigate seems to grab every "free" graphics card. As a result, VMs that were not auto-started but had a dedicated GPU assigned refused to start, because Frigate had snatched their GPU. In the Frigate Docker settings I only assigned 1 GPU, yet it claims up to 3 GPUs.
  6. Actually, I have a question about this, as Frigate in my case seems to "grab" every available GPU, and this is what I do NOT want. I have some GPUs dedicated to VMs, and if Frigate uses those GPUs the VMs won't start, so I want to limit Frigate to just one GPU. But somehow this seems impossible to me. Here is my Frigate Docker config:

```
docker run -d --name='frigate' --net='host' --privileged=true \
  -e TZ="Europe/Berlin" -e HOST_OS="Unraid" -e HOST_HOSTNAME="Server" \
  -e HOST_CONTAINERNAME="frigate" \
  -e 'TCP_PORT_5000'='5000' -e 'TCP_PORT_8554'='8554' \
  -e 'FRIGATE_RTSP_PASSWORD'='***' \
  -e 'NVIDIA_VISIBLE_DEVICES'='GPU-bd161e6e-9b15-d71a-ec49-8c085ea02ad1' \
  -e 'NVIDIA_DRIVER_CAPABILITIES'='compute,utility,video' \
  -e 'TCP_PORT_8555'='8555' -e 'UDP_PORT_8555'='8555' -e 'TCP_PORT_1984'='1984' \
  -l net.unraid.docker.managed=dockerman \
  -l net.unraid.docker.webui='http://[IP]:[PORT:5000]' \
  -l net.unraid.docker.icon='https://raw.githubusercontent.com/yayitazale/unraid-templates/main/frigate.png' \
  -v '/mnt/user/appdata/frigate':'/config':'rw' \
  -v '/mnt/user/Media/frigate':'/media/frigate':'rw' \
  -v '/etc/localtime':'/etc/localtime':'rw' \
  --device='/dev/bus/usb/006/002' --runtime=nvidia --shm-size=256mb \
  --mount type=tmpfs,target=/tmp/cache,tmpfs-size=1000000000 \
  --restart unless-stopped 'ghcr.io/blakeblackshear/frigate:stable'
440515ff723a15052068c74656fef26f189832b762fe93eb1766155f8130e599
The command finished successfully!
```

The only workaround is to start the VMs before I start Frigate, and that is not an option for me. With this config and one VM started before Frigate, Frigate still shows 2 GPUs. How can I remove the RTX 3060? Or in other words, how can I limit Frigate to the 1030? The 1030 has the GPU ID mentioned above in the Frigate docker config. Thank you for your advice!
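For what it's worth, one likely culprit in the command above is `--privileged=true`: a privileged container is handed every host device node, so the NVIDIA runtime's `NVIDIA_VISIBLE_DEVICES` filtering no longer isolates GPUs. A minimal sketch of the idea, reusing the 1030's UUID from the post above (this is an assumption about the cause, not a confirmed fix; verify your UUIDs with `nvidia-smi -L`):

```shell
#!/bin/bash
# Sketch: pin a container to one GPU by UUID and drop --privileged,
# so the NVIDIA runtime injects only the single named device.
# The UUID below is the 1030's ID from the post above; list yours with:
#   nvidia-smi -L
GPU_UUID="GPU-bd161e6e-9b15-d71a-ec49-8c085ea02ad1"

# Build the run command WITHOUT --privileged (other options omitted for brevity).
CMD="docker run -d --name=frigate --net=host \
  --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES=${GPU_UUID} \
  -e NVIDIA_DRIVER_CAPABILITIES=compute,utility,video \
  ghcr.io/blakeblackshear/frigate:stable"

echo "$CMD"   # note: no --privileged=true in the argument list
```

If dropping privileged mode breaks a device passthrough (e.g. the Coral USB device), re-add only the specific `--device=` entries instead of full privileged mode.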
  7. Thanks for the quick reply. Well, as is usual with these Chinese boards, there is no exact model designation. Here is the link, though it probably won't help: https://www.alibaba.com/product-detail/12th-Gen-i3-N305-N100-NAS_1601006122956.html And no, it is not a controller in an M.2 port. If only one disk were ever detected because of identical IDs, that would defeat the purpose of the controller entirely. And if I read correctly, the same chip is also used in the M.2 adapters, so I'd be surprised if this were a quirk of the JMB585. I just ran a test: I disconnected the parity disk and left only the data disk attached. Even then, the data disk is not detected. Where exactly in the diagnostics did you look for the disk information? There is a lot of data in there. That way I could check what is shown now, though it probably won't be a surprise: presumably only the NVMes will be listed. Still, which file do you check for what you described above? Best regards
  8. Well, what board is it, then? And looking at the logs, only one HDD is detected. What you describe more or less means the controller only ever passes through one disk with the same ID:

```
[0:0:0:0]  disk     Intenso Micro Line 8.07        /dev/sda     /dev/sg0
  state=running queue_depth=1 scsi_level=5 type=0 device_blocked=0 timeout=30
  dir: /sys/bus/scsi/devices/0:0:0:0 [/sys/devices/pci0000:00/0000:00:14.0/usb3/3-6/3-6:1.0/host0/target0:0:0/0:0:0:0]
[5:0:0:0]  disk     ATA WDC WUH721818AL W870       /dev/sdb     /dev/sg1
  state=running queue_depth=32 scsi_level=6 type=0 device_blocked=0 timeout=30
  dir: /sys/bus/scsi/devices/5:0:0:0 [/sys/devices/pci0000:00/0000:00:1d.2/0000:07:00.0/ata5/host5/target5:0:0/5:0:0:0]
[N:0:7:1]  dsk/nvm  SAMSUNG MZAL4256HBJD-00BL2__1  /dev/nvme0n1 -
  capability=0 ext_range=256 hidden=0 nsid=1 range=0 removable=0
  dir: /sys/class/nvme/nvme0/nvme0n1 [/sys/devices/pci0000:00/0000:00:1c.0/0000:01:00.0/nvme/nvme0/nvme0n1]
[N:1:0:1]  dsk/nvm  KINGSTON SNV2S500G__1          /dev/nvme1n1 -
  capability=0 ext_range=256 hidden=0 nsid=1 range=0 removable=0
  dir: /sys/class/nvme/nvme1/nvme1n1 [/sys/devices/pci0000:00/0000:00:1d.1/0000:06:00.0/nvme/nvme1/nvme1n1]
```

About the onboard controller, just asking: that is not an add-on controller sitting in an M.2 port?
  9. That's exactly what I couldn't do with the commands, because for me the only option was to handle it via unbalanced, and that couldn't solve my problem. @hawihoney saw through it and helped me enough to get moving again. Many beginners will be in the same position, and I hope this thread helps them find their way in too. Thanks to both sides. I'm currently facing very similar entry hurdles with Linux Mint. After working with Windows for almost 30 years, that's probably normal. I don't know whether this belongs here, but as a fundamental point: having to memorize terms like sudo, rsync, top, cp and so on is not just hard for me, I actually find it needlessly complicated. Graphical functions stick in memory better. For example, I still know exactly how to defragment a drive, even under Win95/98, although I haven't done it in over 15 years. How would that work out with commands I haven't used for that long?
  10. Cannot `ls` any dirs:

```
root@vortex:/mnt/user# ls
/bin/ls: reading directory '.': Structure needs cleaning
```

but with direct paths I seem to be able to.
  11. Yes, but it probably also makes the drive quite warm. I don't know how much you played with it to make it work, but maybe you can try to disable only the APST power states deeper than PS3. You can see the exlat parameters of PS3 and PS4 at the end of:

nvme id-ctrl /dev/nvme0

If you set nvme_core.default_ps_max_latency_us between the two values, you disable only PS4. You can check the current power state with:

nvme get-feature -f 2 -H /dev/nvme0
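To make the rule above concrete, here is a sketch with hypothetical exlat values (read the real ones from the "ps 3" and "ps 4" entries at the end of `nvme id-ctrl /dev/nvme0`). Any threshold at or above PS3's exit latency but below PS4's keeps PS3 available while blocking PS4:

```shell
#!/bin/bash
# Hypothetical exit latencies in microseconds, as reported by `nvme id-ctrl`
# (the real numbers depend on your drive).
PS3_EXLAT_US=2000
PS4_EXLAT_US=50000

# Any value >= PS3's exlat and < PS4's exlat works; the midpoint is a safe pick.
MAX_LAT=$(( (PS3_EXLAT_US + PS4_EXLAT_US) / 2 ))

# Append this to the kernel command line (e.g. in syslinux.cfg on Unraid):
echo "nvme_core.default_ps_max_latency_us=${MAX_LAT}"
```

After a reboot, `nvme get-feature -f 2 -H /dev/nvme0` should no longer show the drive dropping into PS4.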
  12. USB ... in short, sorry, that will be no fun with an Unraid system ... certainly not with a USB 2.0 device like that.
  13. Latest Unraid version installed; moved from a stable system on a 10th gen Intel to a new 13th gen Intel build. New CPU/MB/RAM, and Memtest86 passed without issues. I'm having trouble keeping the system online for more than a few hours. Everything works while it's up, but then it just dies. I have the syslog server activated with logs attached, and diagnostics as well. Looking for where to start... thanks! 192.168.1.10 syslog.txt tower-diagnostics-20240414-1119.zip
  14. I understand now; for some reason I thought it would be more complicated. I started the copy and have a little while to go, 6 TB to copy. Thank you for all your help @itimpi and @Gragorg. I will let you know how everything goes once it's completed.
  15. Please read the second post of this thread. All things that need to be changed are described in there.
  16. Update to v6.12.10 ⚡ 3.5 to 4.5 W idle on the AC side (measured by a Shelly 1PM)
  17. It sounds like you didn't stop your Docker and VM services before you ran the mover, so it left the in-use files on your old SSD. You seem to be sorted now, but for future reference: mover won't move in-use files, so make sure you've disabled the VM and Docker services before you use mover to shift data off any cache drive you're planning to replace.
  18. Hello everyone, after taking my first steps in Unraid with hardware I had lying around, I wanted to build something bigger. I'm using an ITX motherboard with an Intel N305 and Kingston DDR5. It has two M.2 slots (which work fine) and six SATA ports via an onboard JMB585. After a reboot, only the parity disk is detected, not the (only) data disk, regardless of which disk I connect with which cable to which of the six ports. It is always only the parity disk that is detected. If I attach the undetected data disk to another PC, it is detected without any problem. I have also swapped the SATA power connectors. I'd be very grateful for any tips and ideas. Attached, in case it's needed, is the diagnostics file. Thanks. nimora-diagnostics-20240414-2151.zip
  19. @dave234ee From a bit of quick googling it sounds like your database was either corrupted or deleted while in use. Are you able to restore from backup?
  20. thanks, I will report back once it's done (1 day?)
  21. I have been trying to use ChatGPT to create an Unraid user script that backs up my files to another Unraid server. It won't exclude my folders, though. Any idea how to fix it?

```shell
#!/bin/bash

# Source and Destination Directories
declare -a SOURCE_DIRS=(
  "/mnt/user/APK"
  "/mnt/user/Computer_Apps"
  "/mnt/user/Documents"
  "/mnt/user/data"
  "/mnt/user/Photos"
  "/mnt/user/ss-recordings"
)
DEST_DIR="/mnt/remotes/192.168.1.12_networkbackup/"

# Folders to Exclude
declare -a EXCLUDE_FOLDERS=(
  "/mnt/user/data/media/movies-4k/"
  "/mnt/user/data/media/tv-4k/"
)

# Rsync Options
RSYNC_OPTIONS="-avz --delete --progress"

# Loop through each source directory
for SOURCE_DIR in "${SOURCE_DIRS[@]}"; do
  # Exclude folders
  EXCLUDE_ARGS=""
  for EXCLUDE_FOLDER in "${EXCLUDE_FOLDERS[@]}"; do
    EXCLUDE_ARGS+=" --exclude=${EXCLUDE_FOLDER}"
  done
  # Run rsync
  rsync ${RSYNC_OPTIONS} ${EXCLUDE_ARGS} "${SOURCE_DIR}" "${DEST_DIR}"
done
```
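A plausible cause (an assumption, not verified against this exact setup): rsync matches `--exclude` patterns against paths relative to the transfer root, not against absolute host paths, so `--exclude=/mnt/user/data/media/movies-4k/` never matches anything in the `/mnt/user/data` transfer. A sketch of a fix that strips the part of the path rsync never sees, using the directories from the script above:

```shell
#!/bin/bash
# With SRC=/mnt/user/data (no trailing slash), rsync transfers paths like
# "data/media/movies-4k/...", so an anchored exclude must start at "/data/...".
SOURCE_DIR="/mnt/user/data"
EXCLUDE_FOLDERS=(
  "/mnt/user/data/media/movies-4k/"
  "/mnt/user/data/media/tv-4k/"
)

EXCLUDE_ARGS=()
for f in "${EXCLUDE_FOLDERS[@]}"; do
  # Strip the parent of SOURCE_DIR ("/mnt/user") from the front of each path.
  EXCLUDE_ARGS+=( "--exclude=${f#"$(dirname "$SOURCE_DIR")"}" )
done

printf '%s\n' "${EXCLUDE_ARGS[@]}"
# Then: rsync -avz --delete --progress "${EXCLUDE_ARGS[@]}" "$SOURCE_DIR" "$DEST_DIR"
```

Building the excludes as a bash array, rather than one big string, also avoids word-splitting surprises when the options are passed to rsync.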
  22. Today
  23. I just posted a guide for this here. Hopefully it can be helpful for setting this up
  24. I've seen some people ask for help setting up Pixelfed. I spent a few days getting it working. This is my first guide, so please let me know if something needs to be changed or clarified. You will need a database (mysql or mariadb), redis, a reverse proxy (swag in this case), and the compose.manager plugin.

-------------------------------------------------------
Database Setup
-------------------------------------------------------
Pixelfed only supports mysql. I am using mariadb, which seems to be working, but it's not supported, so your mileage may vary. If you feel more comfortable using mysql, install it instead of mariadb and adjust the .env changes we'll make later to match.
• Search CA for "mariadb" and install the container from linuxserver's repository.
• In the setup, create a mysql root password, mysql database, mysql user, and mysql password. You will need the database name, user, and mysql password later.
-------------------------------------------------------
redis Setup
-------------------------------------------------------
• Search CA for "redis" and install the container from jj9887's repository.
• Set a port number that is not being used by another service and click apply to install.
-------------------------------------------------------
pixelfed container setup
-------------------------------------------------------
• Search CA for "Docker Compose Manager" and install it from dcflachs's repository.
• Return to the docker tab. At the bottom, click the new "ADD NEW STACK" button.
• Create a name for your new docker compose stack; I used "pixelfed". Click the advanced dropdown arrow and set a directory for the compose stack. This is where the "appdata" will be downloaded; I used "/mnt/user/appdata/pixelfed".
• Click the small gear icon next to the new stack, then click "edit stack" and "compose file".
• Paste the following docker compose file into the text box that appears and click "save changes":

```yaml
---
# Require 3.8 to ensure people use a recent version of Docker + Compose
version: "3.8"

###############################################################
# Please see docker/README.md for usage information
###############################################################
services:
  web:
    image: "murazaki/pixelfed:edge-apache"
    container_name: "${DOCKER_ALL_CONTAINER_NAME_PREFIX}-web"
    restart: unless-stopped
    profiles:
      - ${DOCKER_WEB_PROFILE:-}
    environment:
      # Used by Pixelfed Docker init script
      DOCKER_SERVICE_NAME: "web"
      DOCKER_APP_ENTRYPOINT_DEBUG: ${DOCKER_APP_ENTRYPOINT_DEBUG:-0}
      ENTRYPOINT_SKIP_SCRIPTS: ${ENTRYPOINT_SKIP_SCRIPTS:-}
    volumes:
      - "./.env:/var/www/.env"
      - "${DOCKER_ALL_HOST_CONFIG_ROOT_PATH}/proxy/conf.d:/shared/proxy/conf.d"
      - "${DOCKER_APP_HOST_CACHE_PATH}:/var/www/bootstrap/cache"
      - "${DOCKER_APP_HOST_OVERRIDES_PATH}:/docker/overrides:ro"
      - "${DOCKER_APP_HOST_STORAGE_PATH}:/var/www/storage"
    ports:
      - "${DOCKER_WEB_PORT_EXTERNAL_HTTP}:80"
    healthcheck:
      test: 'curl --header "Host: ${APP_DOMAIN}" --fail http://localhost/api/service/health-check'
      interval: "${DOCKER_WEB_HEALTHCHECK_INTERVAL}"
      retries: 2
      timeout: 5s

  worker:
    image: "murazaki/pixelfed:edge-apache"
    container_name: "${DOCKER_ALL_CONTAINER_NAME_PREFIX}-worker"
    command: gosu www-data php artisan horizon
    restart: unless-stopped
    stop_signal: SIGTERM
    profiles:
      - ${DOCKER_WORKER_PROFILE:-}
    environment:
      # Used by Pixelfed Docker init script
      DOCKER_SERVICE_NAME: "worker"
      DOCKER_APP_ENTRYPOINT_DEBUG: ${DOCKER_APP_ENTRYPOINT_DEBUG:-0}
      ENTRYPOINT_SKIP_SCRIPTS: ${ENTRYPOINT_SKIP_SCRIPTS:-}
    volumes:
      - "./.env:/var/www/.env"
      - "${DOCKER_APP_HOST_CACHE_PATH}:/var/www/bootstrap/cache"
      - "${DOCKER_APP_HOST_OVERRIDES_PATH}:/docker/overrides:ro"
      - "${DOCKER_APP_HOST_STORAGE_PATH}:/var/www/storage"
    healthcheck:
      test: gosu www-data php artisan horizon:status | grep running
      interval: "${DOCKER_WORKER_HEALTHCHECK_INTERVAL:?error}"
      timeout: 5s
      retries: 2
```

• Navigate to the directory you set for the pixelfed "appdata" and create a .env file. Paste the following into the file:

```ini
APP_NAME="pixelfed"
#
# !!! Domain cannot be changed after instance is started !!!
#
APP_DOMAIN="pixelfed.domain"
APP_URL="https://${APP_DOMAIN}"
ADMIN_DOMAIN="${APP_DOMAIN}"
APP_ENV="production"
APP_DEBUG="false"
ENABLE_CONFIG_CACHE="true"
OPEN_REGISTRATION="false"
ENFORCE_EMAIL_VERIFICATION="false"
#
# !!! APP_TIMEZONE cannot be changed after the instance is running !!!
#
APP_TIMEZONE="UST"
APP_LOCALE="en"
INSTANCE_CONTACT_EMAIL="admincontact@email"
CACHE_DRIVER="redis"
BROADCAST_DRIVER="redis"

################################################################################
# database
################################################################################
DB_VERSION="11.2"
DB_CONNECTION="mysql"
DB_HOST="MARIADBCONTAINERADDRESS"
DB_USERNAME="DBUSER"
DB_PASSWORD='DBPASSWORD'
DB_DATABASE="DBNAME"
DB_PORT="3306"
DB_APPLY_NEW_MIGRATIONS_AUTOMATICALLY="false"

################################################################################
# redis
################################################################################
REDIS_CLIENT="phpredis"
REDIS_SCHEME="tcp"
REDIS_HOST="REDISCONTAINERADDRESS"
REDIS_PORT="REDITPORT"

################################################################################
# ActivityPub
################################################################################
ACTIVITY_PUB="true"
AP_REMOTE_FOLLOW="true"
AP_SHAREDINBOX="true"
AP_OUTBOX="true"

################################################################################
# Federation
################################################################################
ATOM_FEEDS="true"
NODEINFO="true"
WEBFINGER="true"

################################################################################
# logging
################################################################################
LOG_CHANNEL="stderr"

################################################################################
# session
################################################################################
SESSION_DRIVER="redis"

################################################################################
# docker shared
################################################################################
#
# !!! Do not fill in the APP_KEY line. It will be generated during instance setup !!!
#
APP_KEY=
DOCKER_ALL_CONTAINER_NAME_PREFIX="${APP_NAME}"
DOCKER_ALL_DEFAULT_HEALTHCHECK_INTERVAL="10s"
DOCKER_ALL_HOST_ROOT_PATH="./docker-compose-state"
DOCKER_ALL_HOST_DATA_ROOT_PATH="${DOCKER_ALL_HOST_ROOT_PATH:?error}/data"
DOCKER_ALL_HOST_CONFIG_ROOT_PATH="${DOCKER_ALL_HOST_ROOT_PATH:?error}/config"
DOCKER_APP_HOST_OVERRIDES_PATH="${DOCKER_ALL_HOST_ROOT_PATH:?error}/overrides"
TZ="${APP_TIMEZONE}"

################################################################################
# docker app
################################################################################
DOCKER_APP_RELEASE="branch-jippi-fork"
DOCKER_APP_PHP_VERSION="8.2"
DOCKER_APP_RUNTIME="apache"
DOCKER_APP_DEBIAN_RELEASE="bullseye"
DOCKER_APP_BASE_TYPE="apache"
DOCKER_APP_IMAGE="ghcr.io/jippi/pixelfed"
DOCKER_APP_TAG="${DOCKER_APP_RELEASE:?error}-${DOCKER_APP_RUNTIME:?error}-${DOCKER_APP_PHP_VERSION:?error}"
DOCKER_APP_HOST_STORAGE_PATH="${DOCKER_ALL_HOST_DATA_ROOT_PATH:?error}/pixelfed/storage"
DOCKER_APP_HOST_CACHE_PATH="${DOCKER_ALL_HOST_DATA_ROOT_PATH:?error}/pixelfed/cache"

################################################################################
# docker web
################################################################################
DOCKER_WEB_PORT_EXTERNAL_HTTP="8080"
DOCKER_WEB_HEALTHCHECK_INTERVAL="${DOCKER_ALL_DEFAULT_HEALTHCHECK_INTERVAL:?error}"
```

• Modify the .env file to suit the needs of your instance.
Note that the domain and timezone cannot be changed after the instance is started.
• In the docker tab of Unraid, click the new "compose up" button. This will download and start the docker containers from the murazaki/pixelfed:edge-apache repository.
-------------------------------------------------------
pixelfed instance setup
-------------------------------------------------------
The following steps set up the instance now that the containers are running.
• In the docker tab of the Unraid UI, click on the new pixelfed-web container and click ">_ console". This opens a new window containing a console shell inside the docker container.
• Paste "php artisan key:generate" into the console window and press enter. This generates an appkey and inserts it into your .env file.
• In the Unraid docker tab, click the "compose down" button to the right of your pixelfed stack and then, when that is complete, "compose up". This restarts your containers.
• Enter the console for pixelfed-web again and paste "php artisan config:cache". This caches the settings from your .env into pixelfed. If you later change your .env file, run "php artisan cache:clear" and then "php artisan config:cache" to clear the old settings and cache the new ones.
• In the console, paste "php artisan migrate" and press enter. You will be asked if you want to proceed; arrow left and press enter to select "yes". This sets up your database.
• When this is complete, compose down and then up again to restart the containers.
• Enter the console again and paste "php artisan user:create". Follow the prompts to create your first admin account.
The email account is used for login, so please use a valid email address format.
-------------------------------------------------------
reverse proxy setup (swag)
-------------------------------------------------------
• Navigate to the proxy-conf directory of your swag appdata and edit the pixelfed.subdomain.conf.sample file to match the following config, with the upstream app and upstream port edited to match your setup:

```nginx
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    server_name pixelfed.*;

    include /config/nginx/ssl.conf;

    client_max_body_size 0;

    location / {
        include /config/nginx/proxy.conf;
        include /config/nginx/resolver.conf;
        set $upstream_app PIXELFED-WEBADDRESS;
        set $upstream_port PIXELFED-WEBPORT;
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;
    }
}
```

-------------------------------------------------------
DNS and Beyond
-------------------------------------------------------
At this point you should have a working pixelfed instance behind a reverse proxy. What is left is to create a CNAME DNS record; there are likely guides for how to do this for your DNS provider. If you are using cloudflare, my understanding is that you will have to disable the cloudflare proxy for ActivityPub federation to work properly.

On the subject of federation: if you would like to enable ActivityPub federation, enter the pixelfed-web console again and run "php artisan instance:actor".

After updates you may need to run "php artisan migrate". I haven't had to update yet, so I haven't tested this.

There are many settings that can be changed or added via the .env file. The one I posted is trimmed down to make it clearer and easier to follow.
You can check out the full .env file in the .env.docker file in the Pixelfed GitHub repo.
----------------------------------------------------------------------------------------------------------------------------------------------------
Hopefully by now you have a working instance that can be reached from outside your network. I am sure there are better or more efficient ways to do this, but it suits my needs for a small instance for a few friends and myself. Please understand that I am an amateur and new to server/system administration, so I cannot guarantee that this is secure enough for production use. Please review these methods along with the official Pixelfed documentation before deciding whether this is right for you and your users' security.
  25. Is it possible to use the GPU for transcoding in Handbrake and MKVToolNix? For example, in Handbrake, when I select the hardware H.265 NVENC 1080p option, my CPU does the work while the GPU sits idle. I don't know if I need to change a preference in the GUI or something via the docker edit function. Thanks