Leaderboard

Popular Content

Showing content with the highest reputation on 03/01/24 in all areas

  1. (Chuckling at the irony of some of these people who illegally download movies they didn't pay for, complaining that software they already own isn't changing, but that new purchases will incur potential recurring fees to support further development of the platform they love, use, and, again, already own and will continue to receive the benefits of using at no additional cost.) Seriously folks, it's a standard, common-sense practice for mature software to have recurring fees to sustain continued development. If it didn't, once you hit a market saturation point for your product, you can essentially only cover maintenance but no other development (or get into the business of harvesting your existing user base's personal information to sell to third parties). Some of you complaining would be shocked to learn that there are many of us who pay hundreds every year for updates to other software companies we use and need, from audio and video editing to network licensing. I do. I just did 2 weeks ago! But I evaluate whether the new features in the next version are worth it, or skip a generation. Now, with all that said, I will honestly admit that the little cheapskate in me doesn't like any increase on anything, ever! Even if I can rationally justify it. And that's because nobody really wants to gleefully pay more for anything. I hear you. I feel you. I started out like some folks, cobbling together hardware and sketchy drives to make my first server. Many of you are probably still in this stage. But 8 years later, and after numerous server iterations, I run 4 licenses now on solid hardware with solid drives [knock on wood]. If their newly announced model had been in place when I first looked around 8 years ago, it would have given me the same pause to consider it versus alternatives. I would have still tried all the other free operating systems like I did. 
But I think in the end I would have still picked Unraid for its ease of use, ability to run on a wide range of hardware, and community support. I think it's ridiculously generous that Unraid has stated they will grandfather previously sold licenses to have continued updates. Some of my licenses are 7 or 8 years old, and I'm still getting new features, new patches, and more. There is no other software I own that has done that beyond a few years. This is why I have recommended this OS, and will continue to do so. I'll just tell people to suck it up and buy the lifetime licence upfront, as it'll pay for itself over time and give the devs the ability to do more sooner. --- As a postscript, don't reply to me with nonsensical arguments or "it costs a month's worth of food" replies. I'll just ignore them. This software is a luxury, not a necessity. If you are having to decide between eating and storing more data than the average PC can hold, then the solution is simple: go use a completely free OS and stop making irrelevant arguments.
    6 points
  2. I spent a few hours in Blender; here's a rough draft. I can add Unraid text pretty easily, but I just wanted to get a proof of concept going. It's a very tight fit. Printed in PETG, so stringing galore across all the tiny holes. The first draft had no ventilation, just the two shells, which printed perfectly in PETG, but I wanted airflow possibilities. Have you checked temps after prolonged running yet?
    2 points
  3. You are in the wrong thread then. 1/ Plugin installed, Plex can see it: job here done. 2/ Plex issues: rather look at the Plex lsio thread for help. For example, the most common fix is to delete the Codecs folder and restart Plex. And just as a note, the recommended container here is the official plexinc one.
    2 points
  4. @ChatNoir I validated that the USB key or boot device is not counted when using the USB to mSATA adapter. You can go to Tools --> Registration and it will show you the number of drives counted against your license. On my main unRAID server that's 41 drives, but I have 42 if I include the USB to mSATA SSD adapter.
    1 point
  5. Highly recommend finding someone with a working setup that's doing what you want so you can copy both hardware and software. This is not a trivial setup; you may end up with a bunch of trial and error, mostly error and a lot of trial.
    1 point
  6. All clear, that's GB for you ... Thanks for the clarification; I'll have to remember that with them.
    1 point
  7. The board is available in both RAM variants, and the DDR4 variant's name differs only slightly at the end, so the question is not entirely unjustified.
GIGABYTE B760M DS3H https://geizhals.de/gigabyte-b760m-ds3h-a2872212.html
Form factor: µATX
Socket: Intel 1700
Chipset: Intel B760
CPU compatibility: Core i-14000, Core i-13000, Core i-12000, Pentium Gold G7000, Celeron G6000
RAM: 4x DDR5 DIMM, dual, PC5-60800U/DDR5-7600 (OC), max. 192GB (UDIMM)
Expansion slots: 1x PCIe 4.0 x16, 2x PCIe 3.0 x1, 2x M.2/M-Key (PCIe 4.0 x4, 2280)
GIGABYTE B760M DS3H DDR4 https://geizhals.de/gigabyte-b760m-ds3h-ddr4-a2872190.html
Form factor: µATX
Socket: Intel 1700
Chipset: Intel B760
CPU compatibility: Core i-14000, Core i-13000, Core i-12000, Pentium Gold G7000, Celeron G6000
RAM: 4x DDR4 DIMM, dual, PC4-42666U/DDR4-5333 (OC), max. 128GB (UDIMM)
Expansion slots: 1x PCIe 4.0 x16, 2x PCIe 3.0 x1, 2x M.2/M-Key (PCIe 4.0 x4, 2280)
    1 point
  8. Ok. I cancelled the one pending and started it from scratch. I will report back if anything new shows up
    1 point
  9. I apologize for wasting your time. I found an old rule in my firewall blocking traffic to the internet from certain IPs, including those used for lancache. Disabling the rule fixed the problem. I appreciate your support; I wouldn't have figured it out if you hadn't pointed me in the right direction. I'm using quite a few of your containers and they are great. Keep up the good work! Thank you!
    1 point
  10. I'm with @Sissy on this. For a long time I used an adapter to run the flash drive off a USB header on the motherboard, so it was of course inside the case. I was quite happy until one day the flash drive died. Then I had to open up the server to replace the drive. Much as I love my Fractal Design R5, for me the glass side panel is incredibly difficult to align (too much flex) and closing it up while it's vertical involves a bit of non-techie thumping. (I could place the server on its side, where replacing the panel is a bit easier, but my SATA cables seem to be quite sensitive, so I didn't want to do that. Also, with 10 hard drives, it's not a trivial matter to move the server around.) I had to test a few times to find out whether the problem was with the flash drive or the adapter - I think in the end it was indeed a dying flash drive. But after that I thought I might as well just stick the flash drive into one of the USB ports on top of the case. At the very least there would be no worrying about whether the adapter was working. No one else comes near the server, and in my (untidy) situation, there are so many other things that are more likely to be dislodged or knocked over (ethernet cables, UPS cables, network switches and their power cords, etc.) One possible additional advantage to mounting the flash drive externally, admittedly not tested yet, is that it might be useful if I have a dual-boot server. On my other server, I have Win11 installed on an NVMe drive that is passed through to a VM. Usually unRAID runs on the server and I access Win11 via the VM. The BIOS boot order is unRAID first, then the Win11 drive. A week or so ago, I managed to mess up the VM (I think when I tried to run WSL2) and could only access Windows by booting into the NVMe drive directly. 
This meant fiddling with the BIOS to change the boot device order so that I could boot directly into Windows, trying a Windows fix, and then resetting the boot order to go back to unRAID to test whether the VM was working. This had to be repeated every time the fix didn't work, and I ended up having to do this a few times. I think that if I had had the flash drive mounted externally, I could simply have removed it, so that when the BIOS couldn't find it, it would boot into the next item, the Windows NVMe drive. After that I could just plug in the flash drive again to boot into unRAID. I haven't tested this out because there hasn't been any reason to open up the case (another R5) to move the internally mounted flash drive. I'm sure there's a better alternative boot process involving GRUB or something similar, but I haven't looked into that.
    1 point
  11. Thank you, worked out very well
    1 point
  12. Is the cache device detected in the board BIOS?
    1 point
  13. The raw error values on high-capacity hard drives are, logically, considerably higher. It's something of a miracle that, with the shrinking of the technology, manufacturers manage at all to recover anything of the original data fairly reliably from the analogue signals they read back. A fascinating but also veeery extensive topic. It's not for nothing that hard drives even run their own internal error correction, because the manufacturers know the raw signal really is pretty "poor". If you didn't want to accept that, return the drive. That is one of the uncertainties with recertified drives from third parties (regardless of manufacturer). You have no manufacturer warranty (or only if you're lucky); you buy what is offered, and you can either invoke your right of withdrawal (possibly losing the shipping costs) or argue about real defects under statutory warranty law (with reversed burden of proof). But counters that are merely normally elevated are not a defect. For your next purchase, though, I'd recommend reading the descriptions and accompanying information beforehand.
    1 point
  14. Just as an example, I looked up a few of my ST18000NM000J drives (I test every drive for DOA on arrival and try to log that, and when I test them in between I also keep screenshots). Each a different drive:
Power on hours: 0 / Power cycles: 1 / Raw read error: 50
Power on hours: 0 / Power cycles: 1 / Raw read error: 34
Power on hours: 6986 / Power cycles: 55 / Raw read error: 10976522
Power on hours: 15372 / Power cycles: 88 / Raw read error: 60615811
Those are just 4 of my >20 Seagate 18TB Exos drives. These are not unusual values and do not indicate damage. They all run without problems. Only when the CRC errors (interface problems) or the pending or reallocated sector counts rise is it cause for concern.
    1 point
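As an aside: the attributes the post above says actually matter can be pulled out of smartctl's usual `-A` table. This is a minimal sketch; the sample output below is a made-up excerpt (values mirroring the third drive quoted above) so it can be run self-contained, rather than real output from any device.

```shell
#!/bin/sh
# Hypothetical excerpt of `smartctl -A /dev/sdX` output; not from a real drive.
sample='  1 Raw_Read_Error_Rate     0x000f   083   064   044    Pre-fail  Always       -       10976522
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       0'

# Per the post, only these three counters rising is cause for concern;
# a huge Raw_Read_Error_Rate on its own is normal for Seagate drives.
for attr in Reallocated_Sector_Ct Current_Pending_Sector UDMA_CRC_Error_Count; do
  raw=$(printf '%s\n' "$sample" | awk -v a="$attr" '$2 == a {print $NF}')
  printf '%s=%s\n' "$attr" "$raw"
done
```

On a live system you would feed `smartctl -A /dev/sdX` into the same awk filter instead of the embedded sample.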
  15. Because of an Epson MFP I have a Win11 VM running permanently, which otherwise just sits bored. (The Epson can't scan to SMB, but it can reach its own background application under Windows, and that can then save the scans somewhere.) The system is my 2nd system, Shipon (see signature). I just shut the VM down to see the difference. With the Win11 VM on: 45.99W idle/spun down. Win11 stopped: apparently this first woke up a bit more of Unraid, because after around 2 minutes the draw is around 50.92W. I waited another 20 minutes or so and checked again: it is now still 50.35W. Measured with an AVM Fritz!DECT 200. So an idle Windows VM basically doesn't affect my consumption at all.
    1 point
  16. 🥰 YES! Under the button there are 2 solder points, see pictures. Just tried it out: it works. Connecting the 2 wires together switches the MB on and off. Same principle as the push button.
    1 point
  17. Ok, that is news to me too. The last Seagate drives I had didn't show it like that, although that was a few years ago; they were 2-3TB models. I still have misgivings about "recertified" drives. One comment puts it like this: imagine buying a used car where all the old records were simply deleted and the odometer reset to 0, for 20-30€ less than new.
    1 point
  18. The NVMe devices. I think you'll find they are rated up to 1200MB/s, and once the small pseudo-SLC cache is full they will be much slower than that.
    1 point
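To make the point above concrete, here is a back-of-envelope sketch of why a drive "rated up to 1200MB/s" averages far less on a large transfer. All the numbers (cache size, post-cache speed, transfer size) are hypothetical illustrations, not specs for any particular drive.

```shell
#!/bin/sh
# Hypothetical sustained-write estimate for a DRAM-less NVMe drive.
total_mb=102400   # a 100GB copy
cache_mb=25600    # assume a 25GB pseudo-SLC cache absorbs the first chunk
fast=1200         # MB/s while the SLC cache has room (the headline figure)
slow=300          # MB/s once the cache is full (direct-to-TLC/QLC, assumed)

fast_s=$((cache_mb / fast))                # seconds spent at full speed
slow_s=$(((total_mb - cache_mb) / slow))   # seconds for the remainder
avg=$((total_mb / (fast_s + slow_s)))      # effective average MB/s
echo "average ${avg}MB/s over ${total_mb}MB"
```

With these made-up numbers the headline 1200MB/s collapses to a few hundred MB/s averaged over the whole copy, which is why the post warns about prolonged writes.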
  19. A drive being disabled simply means that a write to it has failed, so it is no longer in sync with parity. More often than not this is not the drive itself but an external factor such as cabling and/or power. The instructions for re-enabling the drive are covered in the online documentation, accessible via the Manual link at the bottom of the Unraid GUI. In addition, every forum page has a DOCS link at the top and a Documentation link at the bottom. The Unraid OS -> Manual section covers most aspects of the current Unraid release.
    1 point
  20. Kdenlive, and sometimes Handbrake. Sent from my iPhone using Tapatalk
    1 point
  21. The Codec solution worked!!! Thanks for the suggestion!!! I will update my original post for those who face the same issue.
    1 point
  22. SSDs in the array cannot be trimmed, and can only be written at the speed of parity.
    1 point
  23. Sounds good. At the moment I'm using a manual fan controller with a dial I turn, but with your designed software & hardware I could automate this with some scripts. An awesome project for me to get ready for the warmer summer months.
    1 point
  24. Thank you so much, this worked! Now we have mod support. The only thing now is to find a way to add auto-download, but this is great: you will see all 80 mods loaded. I'm posting this also to have a reference for anyone else who needs it: {workshopid = "1695671502", path = " /serverdata/serverfiles/steamapps/workshop/content/445220/1695671502"}
    1 point
  25. It's been more than one week with 11 containers running. I don't know why ipvlan didn't work before, but after making the change, the system is stable.
root@Tower:/mnt/user/appdata# dnetworks
NETWORK ID     NAME                DRIVER    SCOPE
9dd32ab44a95   br0                 ipvlan    local
7bcb1926472d   bridge              bridge    local
53da3ff4205d   host                host      local
92ef905a9981   mosquitto_default   bridge    local
0f34678ea99b   none                null      local
root@Tower:/mnt/user/appdata# docker system df
TYPE            TOTAL     ACTIVE    SIZE      RECLAIMABLE
Images          11        11        4.929GB   7.335MB (0%)
Containers      11        11        103.8MB   0B (0%)
Local Volumes   3         1         0B        0B
Build Cache     0         0         0B        0B
    1 point
  26. If I hear the word "subscription" used to describe the changes for Unraid one more time, I'm gonna scream!😣
    1 point
  27. Hard restarts are, as a rule, always hardware faults. When Unraid crashes, the system "freezes" and you have to restart manually, just as info.
sleep plugin > hopefully not active yet and accidentally misconfigured
cache dirs > why at startup? It only puts real load on the server during a mass fill
system stats > currently not really being developed further, maybe wait until everything is there
cache drive > am I seeing this right? A single-drive cache in btrfs? Not recommended ...
appdata share cache yes > watch out
system share cache no > not a good idea ...
What else likes to cause hard restarts: power-saving mechanisms ... or OC like XMP and co ... use the BIOS defaults, and the same goes for powertop and co ... Primarily, check your hardware ... even while you are busy filling it, temps and co ... Finally, from my side: even more frustrating is reading comments like that which "feel" like pushing to get someone to answer ... just my personal remark.
    1 point
  28. I am diving into this conversation. I noticed that TheBird956 is running Unraid 6.12.8. I have 7 Unraid servers that I manage. I recently upgraded 2 of them from 6.12.6 to 6.12.8 and now both of the upgraded servers are randomly shutting down. I have Virtual Syslog Server running on a Windows machine in the office of one of the upgraded Unraid servers to hopefully catch some log files of what is going on. I am posting something here so that others can add if they are experiencing similar issues.
    1 point
  29. There has never been a promise that prices would remain fixed forever. The prices for nearly everything have gone up; we are not immune to that. But you do have a chance to buy now and be immune from these price changes.
    1 point
  30. There are many unexplained stories about Realtek cards and ASPM ... so I did a little investigation. The kernel in-tree r8169 driver is, despite its name, a generic driver for all Realtek network cards. It contains lots of workarounds for different Realtek chipsets. It also has a very strict condition for using ASPM (this condition was removed for a few weeks last year, then reverted...). It enables ASPM only for the RTL8125A and RTL8125B chipsets, and only if a bit in a register has been set in the chip by the system vendor, which signals to the driver that the vendor successfully tested ASPM 1.2 on this configuration. (Probably not many motherboard vendors bother to do this...) If this condition is not met, it tries to disable ASPM; if that is denied by the BIOS, it uses ASPM whatever the chipset is... This behaviour cannot be overridden. That basically means a low chance of having ASPM with the in-tree driver.
I also checked the r8125 driver (from Realtek) used by this plugin. It contains CONFIG_ASPM = n in the Makefile, but this just sets the default value of ASPM; it is not a real compile-time option. After installing this plugin you can easily enable ASPM if you create a file r8125.conf in /boot/config/modprobe.d with the following line:
options r8125 aspm=1
It enables ASPM the correct way. With this my server reaches the C10 package state...
@jinlife please try to enable firmware load support with ENABLE_USE_FIRMWARE_FILE = y. The firmware files are already in /lib/firmware/rtl_nic/ as the in-tree driver uses them. Probably most of these cards ship with outdated firmware, so enabling this option would help. Correction: the r8125 source code contains the firmware file in binary form. No need to enable ENABLE_USE_FIRMWARE_FILE.
The speed issues can be caused by EEE (Energy Efficient Ethernet), which is enabled by default by this driver. You can check the status with
ethtool --show-eee eth0
or disable it with an extra line in r8125.conf:
options r8125 eee=0
    1 point
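The modprobe options file described in the post above can be sketched as follows. To keep the sketch harmless to run, it writes to a temporary directory; on an actual Unraid box the file belongs at /boot/config/modprobe.d/r8125.conf, exactly as the post says.

```shell
#!/bin/sh
# Sketch: build the r8125.conf from the post (ASPM on, EEE off).
# Writes to a scratch dir here instead of /boot/config/modprobe.d.
dir=$(mktemp -d)
conf="$dir/r8125.conf"

cat > "$conf" <<'EOF'
options r8125 aspm=1
options r8125 eee=0
EOF

cat "$conf"
# After rebooting (or reloading the r8125 module) the EEE state can be
# checked with:  ethtool --show-eee eth0
```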
  31. This is the reason for this recommendation:
    1 point
  32. Single-rail power is all well and good, but it's more important to limit the number of drives per connector. And don't use molded-style connector splitters. Use only quality individually wired and pinned connectors.
    1 point
  33. I like using a single-rail power supply. I think my Unraid PSU is a beQuiet brand. Also partial to FSP, but not sure they are even making them anymore.
    1 point
  34. Were these SATA->SATA or Molex->SATA, and how many ways were you splitting them? Nothing wrong with splitters per se, but you do need to make sure that you are not trying to split a SATA connection in particular more than 2 ways. In my experience Molex->SATA splitters are more reliable, as the Molex connector end is more robust and can take a heavier current without voltage sag happening.
    1 point
  35. Veto'ing files slows all access down, as samba then has to compare every single file against the veto list: https://www.samba.org/samba/docs/current/man-html/smb.conf.5.html#VETOFILES
fruit:metadata being set to stream is technically faster in conjunction with everything else listed there, but pre-existing metadata may get lost by changing it from the default of netatalk. My testing on a directory with ~1000 folders has stream being ~1 second faster to open and populate; the default of netatalk was ~2-3 seconds. 6.12.8 has, for the settings with Mac Interoperability enabled, what LT recommends. Not to say that there may not be better / faster settings, but the possibility of losing metadata - especially if you ever used AFP on the server - dictates a more conservative setting for the OS defaults.
    1 point
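For illustration, here is what the trade-off above looks like as an smb.conf fragment. The share name and veto list are invented for the example, and `fruit:metadata = stream` is the faster-but-riskier choice discussed in the post; the fragment is written to a scratch file so it can be inspected without touching a live config.

```shell
#!/bin/sh
# Hypothetical smb.conf fragment; on Unraid such lines would typically go
# under Settings > SMB > SMB Extras rather than a file like this.
frag=$(mktemp)
cat > "$frag" <<'EOF'
[media]
   # every path component gets compared against this list, hence the slowdown
   veto files = /._*/.DS_Store/
   delete veto files = yes
   # faster than the default "netatalk", but pre-existing metadata may be lost
   fruit:metadata = stream
EOF
grep -c 'veto' "$frag"
```

Running the fragment through `testparm` before applying it is a cheap way to catch syntax mistakes.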
  36. Then you are doing it wrong. You must not pass through the iGPU itself; you have to pass through one of its VFs.
    1 point
  37. If you are at the point that you need to swap your flash drive, the few moments it takes to open a cover to retrieve it are inconsequential. The benefits are preventing accidental damage, theft, loss, it being removed while the server is running, ....
    1 point
  38. There was a change made to the Realtek stock kernel driver that disables ASPM. This was done by the Linux kernel maintainers, since apparently it was causing the NIC to drop out for some users, so it is unlikely it will be re-enabled by default in the kernel soon. In the meantime you can try this: https://forums.unraid.net/topic/153787-unraid-os-version-6128-available/?do=findComment&comment=1373749
    1 point
  39. For anyone looking for a guide on how I did this, here it is.
Log in to pgAdmin4. On the left expand Server > Matrix > Databases, then right-click Databases and go Create > Database... In the Database field type "syncv3", then click "Save".
Install Compose.Manager from CA. Navigate to the Docker tab, and then to "Add New Stack". Put "Sliding_Sync" in the stack_name field, then click "Advanced", and in the stack directory put /mnt/user/appdata/matrix/sliding-sync, then click "OK".
Now scroll down, and below your dockers you should see Sliding_Sync under Compose. Click the COG > Edit Stack > Compose File. Scroll down again and there should be a text editor. Copy and paste the following into it.

version: '3.8'
services:
  slidingsync-proxy:
    container_name: slidingsync-proxy
    image: 'ghcr.io/matrix-org/sliding-sync:latest'
    restart: unless-stopped
    environment:
      - 'SYNCV3_SERVER=https://chat.yourdomain.com'
      - 'SYNCV3_SECRET=KEY'
      - 'SYNCV3_BINDADDR=:8009'
      - 'SYNCV3_DB=user=postgres-username dbname=syncv3 sslmode=disable host=slidingsync-db password=postgres-password'
    ports:
      - '8009:8009'
    depends_on:
      - slidingsync-db
  slidingsync-db:
    container_name: slidingsync-db
    image: postgres:15
    restart: unless-stopped
    volumes:
      - /mnt/user/appdata/matrix/sliding-sync/database:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=postgres-password
      - POSTGRES_USER=postgres-username
      - POSTGRES_DB=syncv3

Adjust `SYNCV3_SERVER`, `SYNCV3_DB`, `POSTGRES_PASSWORD`, `POSTGRES_USER`, and `POSTGRES_DB` to match your setup. You can use this command in the Unraid console to generate a random string to put in `SYNCV3_SECRET`:
echo -n $(openssl rand -hex 32)
Create a file called `client.json` with the following content. You can use any text editor for this.

{
  "m.homeserver": {
    "base_url": "https://matrix.yourdomain.com"
  },
  "org.matrix.msc3575.proxy": {
    "url": "https://chat1.yourdomain.com"
  }
}

Place this file in the Nginx directory on your Unraid server. I put it in /mnt/user/appdata/Nginx-Proxy-Manager-Official/data/nginx/.well-known/matrix/client.json
Open the Nginx Proxy Manager UI in your browser. Add a new Proxy Host for Sliding Sync connections; I made mine chat1.yourdomain.com. Set it up like normal and have it point to your Sliding Sync proxy. I have my Matrix server on port 8008 and the Sliding Sync proxy on 8009. Before you hit save, go to the "Advanced" tab and add this to the Custom Nginx Configuration:

location /.well-known/matrix/client {
    root /data/nginx/;
    try_files /.well-known/matrix/client.json =404;
    default_type application/json;
    add_header Access-Control-Allow-Origin *;
}

Verify the setup: go to https://chat1.yourdomain.com/.well-known/matrix/client
It should return the contents of the .json file you made earlier.
And that's it, it should all be up and running.
    1 point
  40. 1. Yes 2. You can give the licence files different names to help you tell them apart. When transferring licences you put the licence file for the one you want to transfer into the ‘config’ folder on the new USB drive so it knows which one is being transferred.
    1 point
  41. Just thought I would share my appdata backup script that I have been using for a couple of years now. It can be modified for any other accessible directory. I have used the Appdata Backup plugin on Community Apps as well, and I do highly recommend it. The way my brain works, and even though it is less feature-filled, I decided to just write a script to do exactly what I wanted, streamlined for my particular workflow. Hopefully this is the place to share, and I do so in the off-chance someone may find some use for it or inspiration for their own scripts. How to use the script is given in the code comments. In short, it is designed to simplify and automate rolling, incremental, and permanent backups of the appdata folder, with customisable settings like the number of rolling backups to retain, the frequency of incremental backups, and intervals for permanent backups. The script automatically excludes user-specified directories and filenames using regex patterns. Note that the script uses hard-linking to reduce storage utilisation, something I wanted that wasn't available in the Appdata Backup plugin. Please use your due diligence and run a test case for your needs to see if it works. I have included cursory error checks, but nothing exhaustive.

#!/bin/bash
######################################################################
# MyAppDataBackup v1
# ·•˚°○.●|HyperWorx - 2023|●.○°˚•·
#
# Description
# ------------
# This script automates rolling, incremental, and permanent backups
# for the specified appdata folder. It keeps a user-specified number
# of rolling backups, performs incremental rolling backups and
# permanent backups at defined intervals.
#
# Usage for automated backups:
# Modify the user-defined variables to configure the script behavior.
# Ensure the excluded_directories array includes directories to exclude
# from the backup.
# Run this script using the user scripts plugin or as a cron job.
# Ensure proper execution permissions for the script.
#
# Explanation of User-defined Variables
# --------------------------------------
# appdata_folder - Path to the source AppData folder to be backed up.
# backup_parent - Parent directory for storing backups.
# max_rolling_backups - The most recent number of rolling backups to retain.
# incremental_backup_freq - Frequency of script execution for incremental backups.
#                           Set to 0 for not utilising this functionality.
# max_incremental_backups - The most recent number of incremental backups to retain.
# permanent_backup_freq - Frequency of script execution for permanent backups.
#                         Set to 0 for not utilising this functionality.
# excluded_directories - List of directories to exclude from the backup.
#                        If undefined, all appdata folders will be included.
# excluded_file_patterns - List of regex compatible patterns to match filenames for
#                          excluding from the backup. If undefined, all files included.
#
# Default Settings Example:
# -------------------------
# Executing the script every month and keeping 2 rolling backups (the last 2 months),
# keeping incremental backups at every 3 script executions (or 3 months) and storing
# the 4 most recent incremental backups, and keeping a permanent backup snapshotted
# every 12 script executions (or every 12 months).
#
# NOTE:
# IT IS THE USER'S RESPONSIBILITY TO ENSURE THE SCRIPT WORKS FOR THEIR INTENDED USE CASE.
######################################################################

################ USER DEFINED VARIABLES ###################
appdata_folder="/mnt/user/appdata/"
backup_parent="/mnt/user/backups/unraid/MyAppDataBackup/"
max_rolling_backups=2      # Adjust the number of rolling backups to keep
incremental_backup_freq=3  # Adjust the script execution frequency for incremental backups
max_incremental_backups=4  # Adjust the number of incremental backups to keep
permanent_backup_freq=12   # Adjust the script execution frequency for permanent backups

# Define excluded directory basenames - e.g.
# excluded_directories=("mariadb" "jackett" "plex")
# If undefined or an empty list, no folders will be excluded.
excluded_directories=

# Define excluded filename patterns - e.g., excluded_file_patterns=("pattern1" "pattern2")
# If undefined or an empty list, no filenames will be excluded.
# E.g. Exclude all files with '.tmp' extension and exclude files
# starting with 'backup_' and ending with '.bak':
# excluded_file_patterns=("*.tmp" "backup_.*\.bak")
excluded_file_patterns=
##########################################################

# Check if required command-line tools are installed
command -v rsync >/dev/null 2>&1 || {
    echo "Error: rsync is not installed. Install it and try again."
    exit 1
}

# Ensure the appdata folder exists
if [ ! -d "${appdata_folder}" ]; then
    echo "Error: AppData folder does not exist."
    exit 1
fi

# Path declaration
incremental_folder="${backup_parent}incremental/"
permanent_folder="${backup_parent}permanent/"

# Name of the appdata backup folder (backup_name is reused by the
# incremental and permanent sections below)
backup_name="appdata_$(date +'%y%m%d')"
backup_folder="${backup_parent}${backup_name}"
counter=1

# Create if needed and give write permission to the backup parent directory
mkdir -p "${backup_parent}" || {
    echo "Error: Failed to create backup parent directory."
    exit 1
}
chmod +w "${backup_parent}"

# Manage rolling backups
cd "${backup_parent}" || exit
mapfile -t backups < <(find . -maxdepth 1 -type d -regex './appdata_[0-9]\{6\}\(_[0-9]\+\)?' -printf '%T@ %p\n' | sort -nr | awk 'NR>1 {print $2}')
for ((i = max_rolling_backups; i < ${#backups[@]}; i++)); do
    rm -rf "${backups[i]}"
    echo -e "\nRemoved the rolling backup:\n${backups[i]}"
done

# Add a time suffix if the folder already exists
while [ -e "${backup_folder}" ]; do
    backup_name=appdata_$(date +'%y%m%d_%H%M')
    backup_folder="${backup_parent}${backup_name}"
done

# Check if the mkdir command was successful
mkdir -p "${backup_folder}" || {
    echo "Error: Failed to create backup folder."
    exit 1
}

# Create rolling backup and exclude directories and filenames based on regex patterns
if [ -d "${backup_folder}" ]; then
    # Prepare --exclude options for each directory
    exclude_dir_options=()
    for dir in "${excluded_directories[@]}"; do
        exclude_dir_options+=("--exclude=$dir")
    done
    # Prepare --exclude options for each regex pattern
    exclude_pattern_options=()
    for pattern in "${excluded_file_patterns[@]}"; do
        exclude_pattern_options+=("--exclude=$pattern")
    done
    rsync -a --link-dest="${backup_parent}$(ls -t "${backup_parent}" | grep -E 'appdata_[0-9]{6}' | grep -Eo 'appdata_[0-9]{6}_?[0-9]*' | head -n 1)" \
        "${appdata_folder}" "${exclude_dir_options[@]}" "${exclude_pattern_options[@]}" "${backup_folder}/"
else
    rsync -a "${appdata_folder}" "${exclude_dir_options[@]}" "${exclude_pattern_options[@]}" "${backup_folder}/"
fi
echo -e "\nCreated rolling backup:\n${backup_folder}"

# Counter to check if it's time for incremental or permanent backups
if [ ! -f "${backup_parent}.counter" ]; then
    echo 1 >"${backup_parent}.counter"
else
    counter=$(<"${backup_parent}.counter")
fi

## Manage incremental backups
if [ $((incremental_backup_freq > 0)) -eq 1 ] && [ $((counter % incremental_backup_freq)) -eq 0 ]; then
    mkdir -p "${incremental_folder}" || {
        echo "Error: Failed to create incremental backup directory."
        exit 1
    }
    # Rotate incremental backups
    cd "${incremental_folder}" || exit
    mapfile -t inc_backups < <(find . -maxdepth 1 -type d -regex './appdata_[0-9]\{6\}\(_[0-9]\+\)?' -printf '%T@ %p\n' | sort -nr | awk 'NR>1 {print $2}')
    for ((i = max_incremental_backups; i < ${#inc_backups[@]}; i++)); do
        rm -rf "${incremental_folder:?}/${inc_backups[i]:?}"
        echo -e "Removed the incremental backup:\n${inc_backups[i]}"
    done
    # Create incremental backup folder
    incremental_backup_folder="${incremental_folder}${backup_name}"
    # Add a time suffix if the folder already exists
    count=1
    while [ -e "${incremental_backup_folder}" ]; do
        incremental_backup_folder="${incremental_folder}${backup_name}_${count}"
        count=$((count + 1))
    done
    rsync -a --link-dest="${backup_parent}$(ls -t "${backup_parent}" | grep -E 'appdata_[0-9]{6}' | grep -Eo 'appdata_[0-9]{6}_?[0-9]*' | head -n 1)" "${appdata_folder}" "${incremental_backup_folder}/"
    echo -e "Created incremental backup:\n${incremental_backup_folder}"
fi

# Manage permanent backups
if [ $((permanent_backup_freq > 0)) -eq 1 ] && [ $((counter % permanent_backup_freq)) -eq 0 ]; then
    mkdir -p "${permanent_folder}" || {
        echo "Error: Failed to create permanent backup directory."
        exit 1
    }
    # Create permanent backup folder
    permanent_backup_folder="${permanent_folder}${backup_name}"
    # Add a time suffix if the folder already exists, just for the sake of sanity
    count=1
    while [ -e "${permanent_backup_folder}" ]; do
        permanent_backup_folder="${permanent_folder}${backup_name}_${count}"
        count=$((count + 1))
    done
    rsync -a --link-dest="${backup_folder}" "${backup_folder}/" "${permanent_backup_folder}/"
    echo -e "Created permanent backup:\n${permanent_backup_folder}"
fi

# Increment the script execution counter
echo $((counter + 1)) >"${backup_parent}.counter"

# Display the directory structure
echo -e "\nDirectory Structure:"
tree --dirsfirst "${backup_parent}"
    1 point
  42. 1 point
  43. It wouldn't, because while it may have the same end result, it's not the same problem: the log tree is damaged. If that is the only issue, this may help:

```shell
btrfs rescue zero-log /dev/sdj1
```

Then re-start the array.
    1 point
  44. It's possible with the docker version, with `- /var/run/docker.sock:/var/run/docker.sock` mounted.

Promtail config:

```yaml
server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /positions/positions.yaml

clients:
  - url: http://10.10.40.251:3100/loki/api/v1/push

scrape_configs:
  - job_name: system
    static_configs:
      - targets:
          - localhost
        labels:
          job: varlogs
          __path__: /host/log/*log

  - job_name: docker
    # use docker.sock to filter containers
    docker_sd_configs:
      - host: unix:///var/run/docker.sock
        refresh_interval: 15s
        #filters:
        #  - name: label
        #    values: ["logging=promtail"]
    # use container name to create a loki label
    relabel_configs:
      - source_labels: ['__meta_docker_container_name']
        regex: '/(.*)'
        target_label: 'container'
      - source_labels: ['__meta_docker_container_log_stream']
        target_label: 'logstream'
      - source_labels: ['__meta_docker_container_label_logging_jobname']
        target_label: 'job'
```

docker-compose service:

```yaml
  promtail:
    # run as root, update to rootless mode later
    user: "0:0"
    container_name: Mon-Promtail
    image: grafana/promtail:main
    command: -config.file=/etc/promtail/docker-config.yaml
    depends_on:
      - loki
    restart: unless-stopped
    networks:
      mon-netsocketproxy:
      mon-netgrafana:
      br1:
        ipv4_address: 10.10.40.252
    dns: 10.10.50.5
    ports:
      - 9800:9800
    volumes:
      # logs for linux host only
      - /var/log:/host/log
      #- /var/lib/docker/containers:/var/lib/docker/containers:ro
      - /mnt/user/Docker/Monitoring/Promtail/promtail-config.yaml:/etc/promtail/docker-config.yaml
      - /mnt/user/Docker/Monitoring/Promtail/positions:/positions
      - /var/run/docker.sock:/var/run/docker.sock
    labels:
      - "com.centurylinklabs.watchtower.enable=true"
```
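One detail worth calling out in the relabel_configs above: Docker's API reports container names with a leading slash (e.g. `/Mon-Promtail`), and the regex `'/(.*)'` captures everything after that slash for the `container` label. The same transformation in shell, just to illustrate what the rule does:

```shell
# Container name as reported by the Docker API
name="/Mon-Promtail"

# Equivalent of the relabel rule's regex '/(.*)': drop the leading slash
echo "${name#/}"
# prints: Mon-Promtail
```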
    1 point
  45. I have just set this up, and thought I should summarize this fairly wordy and bitty thread into a step-by-step guide: setting up Unraid as an rsync server, for use as an rsync destination for compatible clients such as Synology Hyper Backup.

1. (Optional) Open the Unraid web interface and set up any new share(s) that you want to use with rsync.

2. Open the Unraid web interface and open a new web terminal window by clicking the 6th icon from the right, at the top right of the interface (or SSH into your Unraid box).

3. Type or copy and paste the following (SHIFT + CTRL + V to paste into the Unraid web terminal):

```shell
mkdir -p /boot/custom/etc/rc.d
nano /boot/custom/etc/rsyncd.conf
```

4. Type your rsync config. As a guide, use the example below, modified from @WeeboTech:

```
uid = root
gid = root
use chroot = no
max connections = 4
pid file = /var/run/rsyncd.pid
timeout = 600

# rsync module name (basically the rsync share name).
# Synology Hyper Backup calls this the "Backup Module".
[backups]
# Unraid share location: /mnt/user/YOURSHARENAME,
# could also be a subdirectory of a share
path = /mnt/user/backups
# module description
comment = Backups
read only = FALSE

# Add multiple rsync modules as required
[vmware]
path = /mnt/user/backups/vmware
comment = VMWare Backups
read only = FALSE
```

5. Press CTRL + X, then Y, then ENTER to save the config.

6. Type or copy and paste the following:

```shell
nano /boot/custom/etc/rc.d/S20-init.rsyncd
```

7. Type or copy and paste the following:

```shell
#!/bin/bash
if ! grep ^rsync /etc/inetd.conf > /dev/null ; then
cat <<-EOF >> /etc/inetd.conf
rsync stream tcp nowait root /usr/sbin/tcpd /usr/bin/rsync --daemon
EOF
read PID < /var/run/inetd.pid
kill -1 ${PID}
fi
cp /boot/custom/etc/rsyncd.conf /etc/rsyncd.conf
```

8. Press CTRL + X, then Y, then ENTER to save the script.

9. To add your script to the go file, it's quickest to use echo to append a line to the end of the file. Type or copy and paste:

```shell
echo "bash /boot/custom/etc/rc.d/S20-init.rsyncd" >> /boot/config/go
```

10. Type or copy and paste the following (I am not sure if the chmod is needed; however, it's something I did while trying to get this to work):

```shell
chmod +x /boot/custom/etc/rc.d/S20-init.rsyncd
bash /boot/custom/etc/rc.d/S20-init.rsyncd
rsync rsync://127.0.0.1
```

11. The last command above checks that rsync is working locally on your Unraid server. It should return the rsync modules and comments from your rsyncd.conf, like below:

```
root@YOURUNRAIDSERVERNAME:/# rsync rsync://127.0.0.1
backups        Backups
vmware         VMWare Backups
```

12. If the last command displays your rsync modules, you may want to quickly check that rsync can also be reached via your Unraid server's IP or hostname:

```shell
rsync rsync://192.168.0.100         # replace with your Unraid server's IP
rsync rsync://UNRAIDSERVERNAME.local  # obviously, replace with your server name ;)
```

End. Now check that your rsync client connects to Unraid. I used Synology Hyper Backup: created a new data backup, under file server selected rsync > next, changed server type to "rsync-compatible server", then filled in:
- server: your Unraid server IP or domain name
- transfer encryption: "off" (not sure how to get encryption to work; please post below if you know how)
- port: "873"
- username: "root" (I guess you could set up a second account and grant appropriate privileges using the CLI on Unraid?)
- password: "YOURUNRAIDROOTPASSWORD"
- backup module: your rsync module from rsyncd.conf
- directory: a subdirectory inside your rsync module / Unraid share

Hope this helps someone. (Edited, thanks Dr_Frankenstein)
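As a quick sanity check before involving a client, you can list which modules a config defines by pulling out the bracketed section headers. A small sketch run against a scratch copy of the example config from step 4 (the temp file here is throwaway, not your real /boot/custom/etc/rsyncd.conf):

```shell
conf=$(mktemp)
cat > "$conf" <<'EOF'
[backups]
path = /mnt/user/backups
comment = Backups
read only = FALSE

[vmware]
path = /mnt/user/backups/vmware
comment = VMWare Backups
read only = FALSE
EOF

# Print just the module names: lines of the form [name]
sed -n 's/^\[\(.*\)\]$/\1/p' "$conf"
# prints:
# backups
# vmware

rm -f "$conf"
```

These are the same names that `rsync rsync://127.0.0.1` should report once the daemon is running.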
    1 point
  46. How do I limit the memory usage of a docker application? Personally, on my system I limit the memory of most of my docker applications so that there is always (hopefully) memory available for other applications / unRaid if the need arises. For example, if you watch CA's resource monitor / cAdvisor carefully when an application like NZBGet is unpacking / par-checking, you will see that its memory use skyrockets, but the same operation can take place in far less memory (albeit at a slightly slower speed). The memory used will not be available to another application such as Plex until after the unpack / par check is completed. To limit the memory usage of a particular app, add this to the Extra Parameters section of the app when you edit / add it:

```
--memory=4G
```

This will limit the memory of the application to a maximum of 4G.
    1 point
  47. Check /etc/profile; you'll find that the contents are not what you expect. From what I can see, you are redirecting the alias output to /etc/profile rather than writing the alias line to it. Consider the difference between the following two commands:

```shell
alias size='du -sh --time'>>/etc/profile
echo "alias size='du -sh --time'">>/etc/profile
```

Edit: Really, check /etc/profile after making changes to it. Make sure you got your quotes/escaping correct if you are appending to it! Another option is to write the aliases in a separate, properly encoded text file with appropriate line endings stored on the flash drive, and just append the contents of that file instead:

```shell
cat /boot/aliases.txt >> /etc/profile
```
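To see the difference concretely, here is the same pair of commands run against a scratch file instead of /etc/profile:

```shell
f=$(mktemp)

# Wrong: defines the alias in the current shell and redirects the
# builtin's (empty) output to the file -- the file stays empty
alias size='du -sh --time' >> "$f"

# Right: writes the alias definition text into the file
echo "alias size='du -sh --time'" >> "$f"

# Only the echoed line made it into the file
cat "$f"
# prints: alias size='du -sh --time'

rm -f "$f"
```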
    1 point