Leaderboard

Popular Content

Showing content with the highest reputation on 04/02/21 in all areas

  1. This is the support thread for multiple plugins, including: AMD Vendor Reset Plugin, Coral TPU Driver Plugin, hpsahba Driver Plugin. Please always state which plugin you need help with and include the Diagnostics from your server, plus a screenshot of your container template if your issue is related to a container. If you like my work, please consider making a donation
    1 point
  2. DVB-Driver (only Unraid 6.9.0beta35 and up) This plugin adds DVB drivers to Unraid. Please note that this plugin is community driven: when a newer version of Unraid is released the drivers/modules have to be updated (please make a short post here, or see the second post to check whether the drivers/modules are already updated; if you update to a newer Unraid version before the new drivers/modules are built, this could break your DVB support in Unraid)!
Installation of the plugin (this is only necessary for the first installation of the plugin):
Step 1: Go to the Community Applications app, search for 'DVB-Drivers' and click on the Download button (you have to be at least on Unraid 6.9.0beta35 to see the plugin in the CA App), or download it directly from here: https://raw.githubusercontent.com/ich777/unraid-dvb-driver/master/dvb-driver.plg
Step 2: Wait for the plugin to install successfully (don't close the window; wait for the 'DONE' button to appear. The installation can take some time depending on your internet connection, since the plugin downloads a custom bzimage with the necessary DVB kernel modules and the DVB driver itself, and then installs them on your Unraid server).
Step 3: Click on 'DONE', read the alert message that appears in the top right hand corner and close it with the 'X'.
Step 4: You can skip this step if you want to use the LibreELEC driver package (selected by default). If you want to choose another driver package, go to the plugin itself (PLUGINS -> DVB-Driver), choose which version you want to install and click on 'UPDATE' (currently LibreELEC, TBS-OpenSource, DigitalDevices and Xbox One USB DVB Adapter drivers are available).
Step 5: Reboot your server (MAIN -> REBOOT).
Step 6: After the reboot, go back to the plugin page (PLUGINS -> DVB-Driver) and check that the cards are properly recognized (if your card(s) aren't recognized, please see the Troubleshooting section or make a post in this thread, but please be sure to read the Reporting Problems section in this post first).
Utilize the DVB card(s) in a Docker container: To utilize your DVB card(s) in a Docker container, in this example Tvheadend, add '--device=/dev/dvb/' to the 'Extra Parameters' in your Docker template (you have to enable 'Advanced view' in the template to see this option). You should then see the card(s) in the Docker container.
IMPORTANT: If you switch between driver packages a reboot is always necessary!
DigitalDevices Notes: (This applies only if you selected the DigitalDevices drivers in the plugin.) If you are experiencing I²C timeouts in your syslog, please append 'ddbridge.msi=0' to your syslinux configuration (example below). You can also switch the operating modes for the Max S8/SX8/SX8 Basic with the following options:
'ddbridge.fmode=0' 4-tuner mode (internal multi-switch deactivated)
'ddbridge.fmode=1' Quad-LNB / normal outputs of the multiswitch
'ddbridge.fmode=2' Quattro-LNB / cascade outputs of the multiswitch
'ddbridge.fmode=3' Unicable or JESS LNB / Unicable output of the multiswitch
Link to source
You can also combine 'ddbridge.msi=0' (though you don't have to if you don't experience I²C timeouts) with, for example, 'ddbridge.fmode=0'. Here is how to do it: go to the 'Main' tab and click on the blue text 'Flash', scroll down a little and append the commands mentioned above to the syslinux configuration (as stated above, you don't need to append 'ddbridge.msi=0' if you don't experience I²C timeouts). A short sketch of the resulting syslinux entry is shown below. Click on 'Apply' at the bottom and reboot your server!
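A minimal sketch of what the edited syslinux entry could end up looking like (the label name and the rest of the append line depend on your existing flash configuration; 'ddbridge.fmode=0' is just an example value):

label Unraid OS
  kernel /bzimage
  append ddbridge.msi=0 ddbridge.fmode=0 initrd=/bzroot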
TBS-OpenSource Notes: You can also switch the operating modes of the TBS cards, in this example the TBS-6909 or TBS-6903-x, by appending one of the following commands to your syslinux configuration (how-to is above):
'mxl58x.mode=0' Mode 0 -> see picture below
'mxl58x.mode=1' Mode 1 -> see picture below
'mxl58x.mode=2' Mode 2 -> see picture below
Modes: Link to source
Troubleshooting: (This section will be updated as soon as someone reports a common issue and will grow over time.)
Reporting Problems: If you have a problem, please always include a screenshot of the plugin page, a text file or a pastebin link with the output of 'lspci -v' or 'lsusb -v' - depending on whether your card is PCIe or USB (simply open an Unraid terminal with the button at the top right of Unraid and type in one of the two commands without quotes) - and also the output of 'dmesg' as a text file or pastebin link (simply so as not to spam the thread with the output).
    1 point
  3. Note: this thread is locked. Any questions, please post in the unRaid FAQ Feedback Thread.
Community Applications - Application Policies
Community Applications has one fundamental goal: ensuring that the end-user experience with the various add-ons to your server is consistent and trouble free. The application lists contained within CA are moderated and vetted. Every attempt is made to ensure that only safe and compatible applications are present. As the unRaid community gets larger, and more applications become available within Community Applications, the following should be noted:
- All applications are subject to approval for inclusion.
- Closed source plugins are not accepted into CA. (Note that a plugin may include closed-source binaries which in certain circumstances do not violate this rule - moderator's discretion.) In certain exceptional cases an exemption to this rule may be granted.
- Closed source applications within a docker application are generally not accepted within CA unless they are from a reputable source or are a well known application (eg: Crashplan, Plex et al). In other words, an application created by the template maintainer MUST be open source and subject to code examination.
- All GitHub / dockerHub repositories must have 2 Factor Authentication enabled, and an acknowledgement of this must be given to the authors/maintainers of Community Applications or Limetech.
- Plugins which are better suited as a docker application are not eligible for inclusion in CA.
- "Proof of Concept" applications are generally not accepted into CA. If such an application is accepted into CA, it must include an appropriate notice within its description.
- Any application that contains malicious software or intent is subject to immediate removal with no notification being given. This also includes any other software included within the application, such as crypto mining, unless the application itself is for crypto mining. No exceptions.
- Bugs within applications can (and do) happen. This is outside of the control of the moderators of CA. Depending upon the circumstances, the application may be subject to moderation due to the bug. This moderation may be mild or, in cases where the bug could cause data loss, severe, possibly resulting in the blacklisting of the application. In most cases, the author is given time to rectify the bug before moderation happens. Minor issues with any application will tend to not have any moderation applied. As a general rule, it is recommended to always keep your applications (especially plugins and unRaid itself) up to date. In the case of egregious software errors, the moderators of CA will err on the side of the user instead of the side of the author.
- Plugins may on occasion (this is an exception, rather than the rule) have problems / bugs when run on a release candidate of unRaid. More leeway is given to authors of the plugin in this situation than if the issue occurs on a stable release of unRaid.
- Any application listed within CA is subject at any time to various means of moderation. This includes but is not limited to fixing template errors, assigning minimum / maximum versions of unRaid the application is compatible with, notifying users of any issues with their installed applications via the Fix Common Problems plugin, deprecating an abandoned application, etc. Notification to the template maintainers may or may not be given.
- So-called abandoned applications (where the author / maintainer has completely abandoned support for the application) may or may not be removed from CA. This primarily depends upon whether or not the application works for its designed purpose. However, should another template be published within CA that supersedes the abandoned template, then the abandoned one may be removed with no notice being given.
- Any application template not meeting certain minimum standards results in automatic removal of that application until such time as the template is revised to meet those standards. (As an example, all applications must include a reasonable description.)
- In certain circumstances, it may be more appropriate to utilize "branches" in templates than to submit multiple templates. This is discretionary of the moderators. See here.
- Any violation of the security policies enforced by CA and the associated application feed results in automatic blacklisting of an author's entire template repository. No warnings and no exceptions.
- Any submitted application which refers to the exact same dockerHub repository as an existing application will not be accepted. In certain circumstances, though, the pre-existing application template may be removed and the new one accepted in its place - moderator's discretion.
- All templates within a specific repository must have different application names. In case of a conflict within the same repository, one of the templates is automatically removed.
- Donation links are allowed and encouraged, but only show up for installed apps and in the "Repository" section of Community Applications.
- All descriptions, icons, etc. must not be "offensive" and should adhere to "good taste". Furthermore, animated icons are not allowed.
- In situations where there is already a multitude of a certain application available (ie: Plex, nzbGet, Radarr, etc.), new submissions of those applications will not be accepted. An exception may however be made if the new submission brings something unique to the application. This is at the discretion of the moderators of CA.
A further explanation of the last point is in order (in this example, I am referring to Plex Media Center itself, not the various add-ons available for Plex, eg Plex Connect, plpp, gaps, etc). Utilizing Plex as an example, there are already applications within CA from Binhex and LinuxServer, along with the official Plex container. All of these are extremely well supported and maintained, and fundamentally there is absolutely no difference between any of them. It is extremely unlikely that any new submission of a Plex application will bring any tangible benefit to the unRaid community, and it will more than likely only cause confusion for the end-user as to "which one do I install?" The end-user experience is of utmost importance to the authors of Community Applications, its moderators, and Limetech themselves. This however does NOT mean that no new Plex will be accepted. If a new Plex application is submitted and it does bring something new / unique to the application / container, it may be accepted at the moderator's discretion. Should any user wish to run a version of Plex that is not available within CA, there are multiple options available (performing a dockerHub search for the application, having CA manage so-called "private applications", or utilizing the template repository system of unRaid itself). See here.
The intent here is not to stifle innovation from any given author, but rather to ensure that the end-user experience remains consistently high. If the circumstances regarding an already present Plex application change (no longer maintained / supported, or deemed to be extraneous and not benefiting the unRaid community, etc.), then that existing application may be removed and new submissions for Plex may be accepted.
- CA does allow installations of deprecated / incompatible applications by visiting its Settings page (although this is not recommended).
- Any plugin or docker application which is classified as being beta by its author is identified as such within CA. This classification does not, however, mean that there will be problems with the application.
- The ability to install applications that are outside of CA's control (plugin or docker) will never be impeded (although it isn't recommended to install any plugin that is not available within CA).
- All actions taken by a moderator of CA (or via the associated application feed) are publicly viewable, either within CA under its Statistics section or via a GitHub repository. In the rare case of a controversial decision taken by the moderators of CA, the decision is reviewed by a larger select group of trusted unRaid users and the staff members of Limetech.
- If as a maintainer / author you disagree with any actions taken by the moderators of CA, you should bring your concerns in a PM to @Squid. If the decision made by Squid does not satisfy you, then the final decision will be made by @SpencerJ. (Note: the moderators of CA, @pluginCop and @dockerPolice, do not read or reply to any PM.) On the other side of the coin, if as a user you feel that some application should be moderated in some way, then feel free to PM @Squid, who will then delegate appropriately to one of the moderators of CA.
Note: this document may be amended at any time, and any new policies added (or policies changed) will be retroactive to any / all applications within CA.
Further note: all download counts listed within CA are based upon dockerHub. In the case of ghcr containers, the stats will be gathered from the dockerHub version instead. If an equivalent dockerHub container does not exist, then no download stats will be listed. Also, not all dockerHub containers have the ability to discern the download counts.
To get your apps added to CA, see HERE.
    1 point
  4. Hello, I'm making a Czech localization of Unraid. It's still very much a work in progress, but I think it's ready for public testing. The translation currently resides in my GitHub repo at https://github.com/Pavuucek/unraid_lang-cs_CZ. Just download a master branch snapshot https://github.com/Pavuucek/unraid_lang-cs_CZ/archive/refs/heads/master.zip and install it via Tools/Language. You need to enable developer mode first. Please report any issues you find either here or as an issue on GitHub (link above).
    1 point
  5. Turn off SSL for the web GUI to stop that redirect.
    1 point
  6. You might even consider not caching some shares. And don't think you can fix things by running Mover more often. Mover is intended for idle time. It is impossible to move from cache to array as fast as you can write to cache. If you need to write more than cache can hold, don't cache.
    1 point
  7. For the upcoming 6.9.2 we updated the Realtek r8125 driver to version 9.005.01 (from 9.003.05). Might fix this, but since it's Realtek, it might not.
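If you want to verify which r8125 driver version actually ends up loaded after updating, two quick checks from the Unraid terminal (the interface name eth0 is only an example) are:

modinfo r8125 | grep -i '^version'   # version of the r8125 module on disk
ethtool -i eth0                      # driver name/version bound to a specific interface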
    1 point
  8. @ChillZwix Why have you given 50G to docker.img? Have you had problems filling it? 20G is usually more than enough, and making it larger won't fix the problem causing it to fill, it will only make it take longer to fill. The usual cause of filling docker.img is an application writing to a path that isn't mapped. Linux is case-sensitive. Also, your cache was completely full in those diagnostics and that is probably what corrupted docker.img. You need to configure things so you don't fill cache or you will corrupt docker.img again and probably corrupt cache and have to reformat it. Go to Main - Array Operations and Stop the array. Go to Main - Pool Devices and click on Cache. Set Minimum free space to larger than the largest file you expect to write to a cached share. If cache has less than that amount free new writes will overflow to the array. Go to Main - Array Operations and Start the array.
    1 point
  9. All sorted now, up and running on both E5-2683 v4 CPUs, and I now have 96GB of RAM and 32 cores to play with, sweet! Did have a few bad RAM sticks because in total I should have 160GB, but hey-ho... can't win them all with second-hand hardware, I suppose. Thanks to all who helped me along... uldise, SimonF and others.
    1 point
  10. @twisteddemon I think a lot of us are down too.
    1 point
  11. I ended up discovering that the Windows Photos app was attempting to access an old share that I had recently deleted. Unfortunately there wasn't anything that pointed me toward that conclusion, just a general sense that there was SOMETHING on my main computer that was trying to access an Unraid share that it didn't have access to. In any case, I deleted the Windows Photos app and the errors stopped. I hope this info is helpful to somebody!
    1 point
  12. So this started working fine yesterday and then I updated the plugin. Well now it’s borked again. Even my avatar and name have removed themselves from my web GUI.
    1 point
  13. Isn't this an InfiniBand card or something like that? Unraid doesn't support InfiniBand in general, I think... You can always compile the drivers on your own with the Custom Build Mode. EDIT: Which card do you have exactly?
    1 point
  14. Thank you @Squid! Basically, it is what it is. After writing the post, I asked myself if it could be the GPU, so I searched for and installed the drivers for the GPU, and that was it. With the GPU drivers, the consumption went to 80 watts. But then I figured out that the GPU stopped working in the VM. Seeing the video you shared, I understand why (and I won't lose hours looking for answers xD). It was nice to see the GPU statistics in Unraid, but unfortunately I will need to uninstall the drivers. Leaving the VM paused will do the job. Again, thank you for the info! (It was my first post in the forum.)
    1 point
  15. OK. So the console via Unraid doesn't work in the first place, because it opens the connection with "Shell" and not "Bash". You can change that by editing the container and switching this setting: The message you get in Portainer is one I've had before, namely here: https://github.com/buanet/ioBroker.docker/issues/167 The solution then was this: The cause was presumably that the ioBroker instance in question was migrated to Docker at some point. I don't know whether that was the case for you too. In any case, you should check the file "iobroker" (in /opt/iobroker) for its shebang. You can do that with nano or vi, for example. If those aren't installed in the container, you will of course have to install them first. If you know what you're doing, you can also access your ioBroker directory via SFTP, for example, and open the file in an editor - there are a few pitfalls there though, especially if you're on Windows.... Regards, André
    1 point
  16. You should be editing /serverdata/serverfiles/ShooterGame/Saved/Config/LinuxServer/GameUserSettings.ini, but the ShooterGame server does not respect the file permission mask when it writes to the config files. So the safest method I have found is to stop the server container, then edit /mnt/cache/appdata/ark-se/ShooterGame/Saved/Config/LinuxServer/GameUserSettings.ini from the Unraid console, then restart the server container.
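As a rough sketch of that stop/edit/start workflow from the Unraid console (assuming the container is actually named 'ark-se' - check your own Docker tab for the real name):

docker stop ark-se      # stop the server so it can't overwrite the file while you edit
nano /mnt/cache/appdata/ark-se/ShooterGame/Saved/Config/LinuxServer/GameUserSettings.ini
docker start ark-se     # start the server again; it should pick up the edited settings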
    1 point
  17. You could make use of the "wasted" space by adding more devices to make it a 3- or 4-device pool. Take a look at the btrfs calculator at https://carfax.org.uk/btrfs-usage/
    1 point
  18. Unraid doesn't support more than one partition per disk. The only way to access multiple partitions is via the Unassigned Devices plugin, in which case the whole of the device would have to remain outside the array and cache pool.
    1 point
  19. This, theoretically. I've seen instances where that didn't work well, but it should, so try it. Just make sure the only drives listed for formatting are the ones you expect to format, as it's an all or nothing operation. Any disks listed as unmountable will be formatted.
    1 point
  20. Honestly, what you're seeing makes perfect sense and fits with the general capabilities. unRAID doesn't seem to use system memory as a data cache as aggressively as Windows does on similar hardware. That 58MB/s data rate seems reasonable if you're reading/writing to the same disks and have parity active; there's a lot of expensive work going on when doing that. Reconstruct/Turbo write might help, but as @vr2gb mentioned, you'd get the highest rates doing a segregated from-to move task, though it might not be the shortest in terms of time. If you have a LOT of data that needs to be moved between disks, you might be better off disabling parity during the move(s), then rebuilding it.
    1 point
  21. It doesn't work like that. I don't know of any way to specify redundancy and still use the wasted capacity with BTRFS. BTRFS either treats each member as part of a redundant set (RAID1), as an individual volume adding up all members with no redundancy or striping (Single), or as a striped set with no redundancy (RAID0). https://btrfs.wiki.kernel.org/index.php/Using_Btrfs_with_Multiple_Devices
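If it helps to see the mechanics, these are standard btrfs commands for inspecting and converting a pool's profile (the mount point /mnt/cache is just an example; converting profiles rewrites data, so only do it with backups in place):

btrfs filesystem usage /mnt/cache                                  # shows how space is allocated under the current profile
btrfs balance start -dconvert=single -mconvert=raid1 /mnt/cache    # example: data as 'single', metadata mirrored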
    1 point
  22. That looks good. You're on the right track. However, you now have a different error message - nothing about iobroker not found... I've seen the message you're getting somewhere before. I'll have a look at what I can find later, when I'm at the PC... Regards, André
    1 point
  23. You should try setting Tunable (md_write_method) (in Disk Settings) from auto to reconstruct write, but that doesn't help in every case. Writing to a cache SSD and then moving back to the array with the Mover is a common solution. It's true that the Unraid array is not high-performance storage, but you shouldn't compare its performance to Windows on a single disk. Most people will use the Unassigned Devices plugin, an SSD/RAID cache pool, a large RAM cache, etc. to improve performance for different needs. Lastly, lots of CPU cores at a low frequency can also be a problem, and the array is also limited by its slowest member disks. Different hardware/configs give very different performance results, so you will find some people have decent results and some just fair ones. For example, a 1-data + 1-parity disk setup (Ryzen 1700 + 10Gb NIC) will get 130MB/s (power-save mode @ 2GHz) to 250MB/s SMB transfer speed (high performance @ 3.4GHz), but the same CPU with a large RAM-cache approach can transfer large files at 1GB/s.
    1 point
  24. Please delete the container and the 'openttd' directory located in your 'appdata' folder, pull a fresh copy from the CA App, and update the container - it should then be working again.
    1 point
  25. Collabora as a separate container works great inside nextcloud using the app store connector.
    1 point
  26. I have just released a version of the plugin that makes it easy to create a log file on the flash drive when the logging level is set to Testing. I would be grateful if you could set that to be active; recreate your issue; and then send me the resulting log file.
    1 point
  27. Some Emby users suggested locking down the firewall to only accept tcp/443 connections from the Cloudflare IP ranges, and also using the fail2ban Cloudflare integration with monitoring of the Emby logs to ban anyone who tries to brute-force a login. I have also hidden all accounts from the login screen. More info in the thread attached. https://emby.media/community/index.php?/topic/69779-remote-security-i-just-dont-get-it/
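A rough sketch of what such a fail2ban jail could look like; the 'emby' filter and the log path are placeholders you would have to create/adjust yourself, and the cloudflare action shipped with fail2ban needs your Cloudflare credentials configured in its action.d/cloudflare.conf:

[emby]
enabled  = true
filter   = emby                                           # hypothetical filter matching failed Emby logins
logpath  = /mnt/user/appdata/emby/logs/embyserver.txt     # example path, adjust to your setup
maxretry = 5
action   = cloudflare                                     # bans the offending IP at Cloudflare instead of locally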
    1 point
  28. That shows that at the moment the pause IS scheduled for 8:30 at the cron level, which explains why the pause happened at 8:30. unRaid assumes any .cron file present in a plugin's folder contains entries to be added to those for the root user. This file is regenerated if you make any change to the plugin's settings, so if you want a different time, make a nominal change and hit Apply. If you now examine the file you should see that the times for pause and resume match your current settings. My suspicion is that the last time you changed the plugin settings was when there was a syntax error in the underlying script file that stopped this file from being regenerated to match the new settings.
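For context, the entries in such a plugin .cron file use ordinary crontab syntax, so a pause scheduled for 8:30 would look roughly like the line below (the script path is only illustrative - the real one depends on the plugin):

30 8 * * * /usr/local/emhttp/plugins/<plugin-name>/script.sh pause   # minute 30, hour 8, every day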
    1 point
  29. About 52hrs uptime... Marking this as solved.
    1 point
  30. For that reason it isn't possible to say why the disk was disabled but bad connections are the most common cause. However, the disk seems ok from its SMART report, if rather too hot. I'd replace the SATA cable, check the power cable and then rebuild onto the same disk. With the array stopped, unassign Disk 2, start the array, stop the array, reassign Disk 2 and let it rebuild. And give some thought to improving the cooling.
    1 point
  31. Impressive build. In my main PC (not the server) I use one of these... https://aquacomputer.de/newsreader/items/aquastream-ultimate.html (they're not cheap though). It has an integrated coolant temperature sensor, fan controller, pump controller and so on. The control curves can be set up using Windows software with whatever parameters you need, and it then stores the configuration in non-volatile storage within the pump/fan controller hardware. Such a device could remove the need to monitor and control from the server if so desired. Just a thought - although I would imagine you've already invested significantly in your current implementation.
    1 point
  32. Nice docker that xmrig, many thanks Now is there a way to connect to that docker and see the console and the miner?
    1 point
  33. The performance decrease seems to be in the RX direction. r8169 has interrupt coalescing disabled by default, which may contribute to this. You can enable it via ethtool, or even better check the following:
echo 20000 > /sys/class/net/<if>/gro_flush_timeout
echo 1 > /sys/class/net/<if>/napi_defer_hard_irqs
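If you'd rather try the ethtool route mentioned above, a sketch of enabling RX interrupt coalescing might look like this (eth0 and the values are only examples; not every NIC/driver accepts every parameter):

ethtool -c eth0                                # show current coalescing settings
ethtool -C eth0 rx-usecs 200 rx-frames 64      # delay RX interrupts up to 200µs or 64 frames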
    1 point
  34. Ever wanted more storage in your Unraid server? See what 2 petabytes of storage looks like using Nimbus Data 100TB SSDs. (Check today's date.)
    1 point
  35. Hey, I'm getting this error in PavlovVR:
Connecting anonymously to Steam Public...Logged in OK
Waiting for user info...OK
Success! App '622970' already up to date.
---Prepare Server---
---Checking if 'Game.ini' exists---
---'Game.ini' found---
---Server ready---
---Start Server---
ln: failed to create hard link '/serverdata/.steam/sdk64/steamclient.so' => '../serverfiles/linux64/steamclient.so': Invalid cross-device link
ln: failed to create hard link '/serverdata/serverfiles/Pavlov/Binaries/Linux/steamclient.so': File exists
/serverdata/serverfiles/Pavlov/Binaries/Linux/PavlovServer: error while loading shared libraries: libc++.so.1: cannot open shared object file: No such file or directory
All the other game servers have worked fine for me, and I can't find any Pavlov or hardlink issues in this thread, any ideas?
    1 point
  36. Changed Status to Closed. Changed Priority to Other.
    1 point
  37. Yeah looks to be sorted now, very strange.
    1 point
  38. Apologies all for the lack of updates on this project. I'll be picking it back up and working on it again very soon. I shall update you all here once I have some news.
    1 point
  39. If you recreated docker.img it will be on cache and you can just delete the one on disk1.
    1 point
  40. Ich777's kernel builder fixed it for me. I have an RX 5700XT. Let the container do its thing, then simply copy the output to /boot and reboot. I can reboot my VM as much as I want now.
    1 point
  41. Thanks to @ich777 for the nvidia driver download script for complete driver version support.
    1 point
  42. Any plans to add 2FA to the Secure Remote Access feature? I really like this feature.
    1 point
  43. Built a decent system with this board. The IPMI fan stuff doesn't work yet; everything else is fine.
    1 point
  44. Might be late to the party, but everything has been working fine on this board with Unraid apart from all the IPMI fan stuff, for which I hacked together a solution that you can see in that post. There is actually a bug on this board that you may not have noticed yet: RX traffic on the network adapters, with a specific version of the firmware/BIOS, is actually degraded to 1/3 of the expected network performance. Again, only on RX. I have already raised this issue with ASRock Rack and they have confirmed the issue with a specific AMD driver combination after many, many emails and actually sending my board back. A friend of mine has the X570D4U-2L2T and doesn't have this issue, so it's only the X570D4U whose firmware has been affected. Other than that, this board is killer. I have not found another board on the market that is anything like this for this price tag. Happy to answer anything else.
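If you want to check whether your own board is affected, a simple directional throughput test with iperf3 is one way to compare TX vs RX (assuming iperf3 is installed on both ends; the address is an example):

iperf3 -s                     # on another machine on the LAN, run the server
iperf3 -c 192.168.1.10        # on the Unraid box: transmit test (TX from Unraid's point of view)
iperf3 -c 192.168.1.10 -R     # reverse mode: Unraid receives, the direction reported as degraded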
    1 point
  45. All, I encountered this problem for the first time today across most of my PCs, and the fixes in this thread did not work. What did work was adding a line to the [global] section of my unRAID smb extra configuration settings: client min protocol = SMB2 To be clear for those who are less versed in smb settings, this is in addition to a line I already had in there for the server, which is simply min protocol = SMB2. Note, I assume this change will break any SMB1 clients trying to connect, but hopefully those are increasingly few and far between. That said, I have not turned off the SMB1 client in my Windows features, though I assume it will no longer be necessary for accessing unRAID. I might test that later this week and report back with an update.
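For anyone unsure what the end result looks like, the Samba extra configuration described above (the SMB Extras / Samba extra configuration box under Settings -> SMB in the webGUI) would end up roughly like this sketch:

[global]
    min protocol = SMB2            # server side: refuse SMB1 clients
    client min protocol = SMB2     # client side: never negotiate down to SMB1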
    1 point
  46. Thanks for sharing. Here's my modified script that also includes Unassigned Devices mounting/unmounting the USB drive.
#!/bin/sh
LOGFILE="/mnt/user/logs/b0rgLOG.txt"
LOGFILE2="/mnt/user/logs/b0rg-RClone-DETAILS.txt"

# Close if rclone/borg running
if pgrep "borg" || pgrep "rclone" > /dev/null
then
    echo "$(date "+%m-%d-%Y %T") : Backup already running, exiting" 2>&1 | tee -a $LOGFILE
    exit
fi

# Close if parity sync running
#PARITYCHK=$(/root/mdcmd status | egrep STARTED)
#if [[ $PARITYCHK == *"STARTED"* ]]; then
#    echo "Parity check running, exiting"
#    exit
#fi

UD_DRIVE=$(ls -l /dev/disk/by-id/*My_Passport* | grep -v part1 | grep -o '...$')
echo "The My Passport HDD is located at " $UD_DRIVE
echo "Mounting My Passport HDD"
echo "....."
/usr/local/sbin/rc.unassigned mount /dev/$UD_DRIVE
echo "My Passport HDD Mounted"
echo "....."

#This is the location your Borg program will store the backup data to
export BORG_REPO='/mnt/disks/My_Passport/b0rg'

#This is the location you want Rclone to send the BORG_REPO to
#export CLOUDDEST='GDrive:/Backups/borg/TDS-Repo-V2/'

#Setting this, so you won't be asked for your repository passphrase:
export BORG_PASSPHRASE='XXXXXXXXXXXXXXXXXXXXXXXXXXXX'
#or this to ask an external program to supply the passphrase: (I leave this blank)
#export BORG_PASSCOMMAND=''

#I store the cache on the cache instead of tmp so Borg has persistent records after a reboot.
export BORG_CACHE_DIR='/mnt/user/appdata/borg/cache/'
export BORG_BASE_DIR='/mnt/user/appdata/borg/'

#Backup the most important directories into an archive (I keep a list of excluded directories in the excluded.txt file)
SECONDS=0
BORG_OPTS="--verbose --info --list --filter AMEx --files-cache=mtime,size --stats --show-rc --compression lz4 --exclude-caches"

echo "$(date "+%m-%d-%Y %T") : Borg backup has started" 2>&1 | tee -a $LOGFILE
borg create $BORG_OPTS $BORG_REPO::'{hostname}-{now}' /mnt/user/backup/testB0RG/ >> $LOGFILE2 2>&1
backup_exit=$?

# Use the `prune` subcommand to maintain 7 daily, 4 weekly and 6 monthly
# archives of THIS machine. The '{hostname}-' prefix is very important to
# limit prune's operation to this machine's archives and not apply to
# other machines' archives also:
echo "$(date "+%m-%d-%Y %T") : Borg pruning has started" 2>&1 | tee -a $LOGFILE
borg prune --list --prefix '{hostname}-' --show-rc --keep-daily 7 --keep-weekly 4 --keep-monthly 6 >> $LOGFILE2 2>&1
prune_exit=$?
#echo "$(date "+%m-%d-%Y %T") : Borg pruning has completed" 2>&1 | tee -a $LOGFILE

# use highest exit code as global exit code
global_exit=$(( backup_exit > prune_exit ? backup_exit : prune_exit ))

# Execute if no errors
if [ ${global_exit} -eq 0 ]; then
    borgstart=$SECONDS
    echo "$(date "+%m-%d-%Y %T") : Borg backup completed in $(($borgstart/ 3600))h:$(($borgstart% 3600/60))m:$(($borgstart% 60))s" 2>&1 | tee -a $LOGFILE
    #Reset timer
#    SECONDS=0
#    echo "$(date "+%m-%d-%Y %T") : Rclone Borg sync has started" >> $LOGFILE
#    rclone sync $BORG_REPO $CLOUDDEST -P --stats 1s -v 2>&1 | tee -a $LOGFILE2
#    rclonestart=$SECONDS
#    echo "$(date "+%m-%d-%Y %T") : Rclone Borg sync completed in $(($rclonestart/ 3600))h:$(($rclonestart% 3600/60))m:$(($rclonestart% 60))s" 2>&1 | tee -a $LOGFILE
# All other errors
else
    echo "$(date "+%m-%d-%Y %T") : Borg has errors code:" $global_exit 2>&1 | tee -a $LOGFILE
fi

echo "....."
echo "Unmounting My Passport HDD"
/usr/local/sbin/rc.unassigned umount /dev/$UD_DRIVE
echo "My Passport HDD Unmounted"
echo "....."

exit ${global_exit}
    1 point
  47. Yeah, I can provide my borg script here. If you need help with it, let me know. Borg makes a local backup and rclone clones it off-site. This gives you 3 copies of your data, 2 of them local. Also, the script will not re-run if rclone hasn't finished its last operation (slow internet) or if a parity sync is running. The key factor in not having everything constantly re-checked by Borg is --files-cache=mtime,size. I was noticing that every time I ran Borg it would index files that hadn't changed; this option fixed that, which has to do with unRAID's constantly changing inode values. The borg docs are very good (https://borgbackup.readthedocs.io/en/stable/). Let me know if you get stuck. Obviously this script won't work until you set up your repository.
#!/bin/sh
LOGFILE="/boot/logs/TDS-Log.txt"
LOGFILE2="/boot/logs/Borg-RClone-Log.txt"

# Close if rclone/borg running
if pgrep "borg" || pgrep "rclone" > /dev/null
then
    echo "$(date "+%m-%d-%Y %T") : Backup already running, exiting" 2>&1 | tee -a $LOGFILE
    exit
fi

# Close if parity sync running
#PARITYCHK=$(/root/mdcmd status | egrep STARTED)
#if [[ $PARITYCHK == *"STARTED"* ]]; then
#    echo "Parity check running, exiting"
#    exit
#fi

#This is the location your Borg program will store the backup data to
export BORG_REPO='/mnt/disks/Backups/Borg/'

#This is the location you want Rclone to send the BORG_REPO to
export CLOUDDEST='GDrive:/Backups/borg/TDS-Repo-V2/'

#Setting this, so you won't be asked for your repository passphrase:
export BORG_PASSPHRASE='<MYENCRYPTIONKEYPASSWORD>'
#or this to ask an external program to supply the passphrase: (I leave this blank)
#export BORG_PASSCOMMAND=''

#I store the cache on the cache instead of tmp so Borg has persistent records after a reboot.
export BORG_CACHE_DIR='/mnt/user/appdata/borg/cache/'
export BORG_BASE_DIR='/mnt/user/appdata/borg/'

#Backup the most important directories into an archive (I keep a list of excluded directories in the excluded.txt file)
SECONDS=0
echo "$(date "+%m-%d-%Y %T") : Borg backup has started" 2>&1 | tee -a $LOGFILE

borg create \
    --verbose \
    --info \
    --list \
    --filter AMEx \
    --files-cache=mtime,size \
    --stats \
    --show-rc \
    --compression lz4 \
    --exclude-caches \
    --exclude-from /mnt/disks/Backups/Borg/Excluded.txt \
    \
    $BORG_REPO::'{hostname}-{now}' \
    \
    /mnt/user/Archive \
    /mnt/disks/Backups/unRAID-Auto-Backup \
    /mnt/user/Backups \
    /mnt/user/Nextcloud \
    /mnt/user/system/ \
    >> $LOGFILE2 2>&1

backup_exit=$?

# Use the `prune` subcommand to maintain 7 daily, 4 weekly and 6 monthly
# archives of THIS machine. The '{hostname}-' prefix is very important to
# limit prune's operation to this machine's archives and not apply to
# other machines' archives also:
#echo "$(date "+%m-%d-%Y %T") : Borg pruning has started" 2>&1 | tee -a $LOGFILE

borg prune \
    --list \
    --prefix '{hostname}-' \
    --show-rc \
    --keep-daily 7 \
    --keep-weekly 4 \
    --keep-monthly 6 \
    >> $LOGFILE2 2>&1

prune_exit=$?
#echo "$(date "+%m-%d-%Y %T") : Borg pruning has completed" 2>&1 | tee -a $LOGFILE

# use highest exit code as global exit code
global_exit=$(( backup_exit > prune_exit ? backup_exit : prune_exit ))

# Execute if no errors
if [ ${global_exit} -eq 0 ]; then
    borgstart=$SECONDS
    echo "$(date "+%m-%d-%Y %T") : Borg backup completed in $(($borgstart/ 3600))h:$(($borgstart% 3600/60))m:$(($borgstart% 60))s" 2>&1 | tee -a $LOGFILE

    #Reset timer
    SECONDS=0
    echo "$(date "+%m-%d-%Y %T") : Rclone Borg sync has started" >> $LOGFILE
    rclone sync $BORG_REPO $CLOUDDEST -P --stats 1s -v 2>&1 | tee -a $LOGFILE2
    rclonestart=$SECONDS
    echo "$(date "+%m-%d-%Y %T") : Rclone Borg sync completed in $(($rclonestart/ 3600))h:$(($rclonestart% 3600/60))m:$(($rclonestart% 60))s" 2>&1 | tee -a $LOGFILE

# All other errors
else
    echo "$(date "+%m-%d-%Y %T") : Borg has errors code:" $global_exit 2>&1 | tee -a $LOGFILE
fi

exit ${global_exit}
    1 point
  48. I had this problem as well in the past. When I upgraded my RAM from 8 to 16GB, the settings vm.dirty_background_ratio = 1 and vm.dirty_ratio = 2 fixed the problem, so they should be the default for systems with more than 8GB of RAM.
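For reference, a quick sketch of applying those two values on a running system (values as described above; since Unraid's root filesystem is not persistent, they would also need to be re-applied after a reboot, e.g. from the go file):

sysctl -w vm.dirty_background_ratio=1   # start background writeback once 1% of RAM is dirty
sysctl -w vm.dirty_ratio=2              # block writers once 2% of RAM is dirty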
    1 point