Leaderboard

Popular Content

Showing content with the highest reputation on 02/12/20 in Posts

  1. Having a plain webhook notifier would be nice, one that is highly customisable (preferably able to build a custom JSON payload). A plain webhook method lets a user plug in pretty much any push notification service of their choice. This may or may not be a tangent of my wish to send notifications to Discord without using the "slack hack", as expressed in the 2020 poll. (A rough sketch of what such a call could look like follows below.)
    2 points
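For illustration only (not part of the post above): a sketch of the kind of call such a plain webhook notifier could make, posting a user-defined JSON payload. The URL, server name and event text are placeholders, and Discord is just one example target, since its webhooks accept a simple JSON body with a "content" field.

```bash
#!/bin/bash
# Hypothetical sketch: post a custom JSON payload to a webhook endpoint.
WEBHOOK_URL="https://discord.com/api/webhooks/<id>/<token>"   # placeholder URL
SERVER="Tower"                                                # placeholder server name
EVENT="Parity check finished"                                 # placeholder event text

curl -s -X POST "$WEBHOOK_URL" \
  -H "Content-Type: application/json" \
  -d "{\"content\": \"[$SERVER] $EVENT\"}"
```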
  2. OK, updates finished: https://github.com/BinsonBuzz/unraid_rclone_mount. Tidy-ups to the mount and cleanup scripts. The upload script has some good changes: configurable --min-age as part of the config section; configurable --exclusion entries as part of the config section (I've added 8, which should be plenty); service account counters work 100% now (I think!); and the ability to do backup jobs. For 99% of users this should mean the main script doesn't need touching. I added #4 to reduce the amount of edits I have to do to support my own jobs, including my backup job. Now that the main body supports my backup job, future updates will be much faster and there'll be fewer errors. (A sketch of the relevant config variables follows below.)
    2 points
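For orientation only: a sketch of the config-section variables the post above describes, reusing the --min-age and Command1-8 slots that appear in the linked script; the values shown are placeholders, not recommendations.

```bash
# Hypothetical excerpt of the upload script's config section
MinimumAge="15m"                        # passed to rclone as --min-age
Command1="--exclude downloads/**"       # extra filter/command slot 1
Command2=""                             # unused slots can now be left empty
Command3=""
# ...Command4 to Command8 follow the same pattern
```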
  3. The script below will do it. You just need to change the 3 parameters at the top to suit, then chmod +x it to make it executable. If all the SHA256 sums match, copy the downloaded files across to your flash drive.

     #!/bin/bash
     # Set your Unraid version here in the form 6-7-3
     UNRAID_VERSION="6-8-2"
     # Set the type of build you want here - nvidia or stock
     BUILD_TYPE="nvidia"
     # Set the download location here
     DOWNLOAD_LOCATION="/mnt/cache/downloads/nvidia"

     echo "Downloading v$UNRAID_VERSION of the $BUILD_TYPE build to the $DOWNLOAD_LOCATION folder"

     # Make target directory
     [[ ! -d ${DOWNLOAD_LOCATION} ]] && mkdir -p ${DOWNLOAD_LOCATION}

     # Download files
     wget https://lsio.ams3.digitaloceanspaces.com/unraid-nvidia/${UNRAID_VERSION}/${BUILD_TYPE}/bzimage -O ${DOWNLOAD_LOCATION}/bzimage
     wget https://lsio.ams3.digitaloceanspaces.com/unraid-nvidia/${UNRAID_VERSION}/${BUILD_TYPE}/bzroot -O ${DOWNLOAD_LOCATION}/bzroot
     wget https://lsio.ams3.digitaloceanspaces.com/unraid-nvidia/${UNRAID_VERSION}/${BUILD_TYPE}/bzroot-gui -O ${DOWNLOAD_LOCATION}/bzroot-gui
     wget https://lsio.ams3.digitaloceanspaces.com/unraid-nvidia/${UNRAID_VERSION}/${BUILD_TYPE}/bzfirmware -O ${DOWNLOAD_LOCATION}/bzfirmware
     wget https://lsio.ams3.digitaloceanspaces.com/unraid-nvidia/${UNRAID_VERSION}/${BUILD_TYPE}/bzmodules -O ${DOWNLOAD_LOCATION}/bzmodules

     # Download sha256 files
     wget https://lsio.ams3.digitaloceanspaces.com/unraid-nvidia/${UNRAID_VERSION}/${BUILD_TYPE}/bzimage.sha256 -O ${DOWNLOAD_LOCATION}/bzimage.sha256
     wget https://lsio.ams3.digitaloceanspaces.com/unraid-nvidia/${UNRAID_VERSION}/${BUILD_TYPE}/bzroot.sha256 -O ${DOWNLOAD_LOCATION}/bzroot.sha256
     wget https://lsio.ams3.digitaloceanspaces.com/unraid-nvidia/${UNRAID_VERSION}/${BUILD_TYPE}/bzroot-gui.sha256 -O ${DOWNLOAD_LOCATION}/bzroot-gui.sha256
     wget https://lsio.ams3.digitaloceanspaces.com/unraid-nvidia/${UNRAID_VERSION}/${BUILD_TYPE}/bzfirmware.sha256 -O ${DOWNLOAD_LOCATION}/bzfirmware.sha256
     wget https://lsio.ams3.digitaloceanspaces.com/unraid-nvidia/${UNRAID_VERSION}/${BUILD_TYPE}/bzmodules.sha256 -O ${DOWNLOAD_LOCATION}/bzmodules.sha256

     # Read expected sha256 values
     BZIMAGESHA256=$(cat ${DOWNLOAD_LOCATION}/bzimage.sha256 | cut -c1-64)
     BZROOTSHA256=$(cat ${DOWNLOAD_LOCATION}/bzroot.sha256 | cut -c1-64)
     BZROOTGUISHA256=$(cat ${DOWNLOAD_LOCATION}/bzroot-gui.sha256 | cut -c1-64)
     BZFIRMWARESHA256=$(cat ${DOWNLOAD_LOCATION}/bzfirmware.sha256 | cut -c1-64)
     BZMODULESSHA256=$(cat ${DOWNLOAD_LOCATION}/bzmodules.sha256 | cut -c1-64)

     # Calculate sha256 of the downloaded files
     BZIMAGE=$(sha256sum $DOWNLOAD_LOCATION/bzimage | cut -c1-64)
     BZROOT=$(sha256sum $DOWNLOAD_LOCATION/bzroot | cut -c1-64)
     BZROOTGUI=$(sha256sum $DOWNLOAD_LOCATION/bzroot-gui | cut -c1-64)
     BZFIRMWARE=$(sha256sum $DOWNLOAD_LOCATION/bzfirmware | cut -c1-64)
     BZMODULES=$(sha256sum $DOWNLOAD_LOCATION/bzmodules | cut -c1-64)

     # Compare expected with actual downloaded files
     [[ $BZIMAGESHA256 == $BZIMAGE ]] && echo "bzimage passed sha256 verification" || echo "bzimage FAILED sha256 verification"
     [[ $BZROOTSHA256 == $BZROOT ]] && echo "bzroot passed sha256 verification" || echo "bzroot FAILED sha256 verification"
     [[ $BZROOTGUISHA256 == $BZROOTGUI ]] && echo "bzroot-gui passed sha256 verification" || echo "bzroot-gui FAILED sha256 verification"
     [[ $BZFIRMWARESHA256 == $BZFIRMWARE ]] && echo "bzfirmware passed sha256 verification" || echo "bzfirmware FAILED sha256 verification"
     [[ $BZMODULESSHA256 == $BZMODULES ]] && echo "bzmodules passed sha256 verification" || echo "bzmodules FAILED sha256 verification"
    2 points
  4. I don't really mind fixing it the way Squid proposes (as soon as I can spare some cycles), although I do need to wrap my head around what I actually need to do. I'm trying to figure out whether that requires a new GitHub repo for the replacement, or perhaps I can just change the name of the tgz artifact to something different? And then on Unraid, instead of `/boot/config/plugins/unbalance` and `/usr/local/emhttp/plugins/unbalance`, I would need to change them to `/boot/config/plugins/unbalanced` and `/usr/local/emhttp/plugins/unbalanced` (for example)? What about the plugin name, would that need to change?
    1 point
  5. Unraid is an appliance. There is only one user: root. We could rename it to "admin" but it's still root. There are no traditional user logins; users are only used to validate SMB connections. Running as non-root would not have prevented this vulnerability, which, btw, was a couple of 1-line bugs. Re: the request, we have a blog post that talks about this: https://unraid.net/blog/unraid-os-6-8-2-and-general-security-tips. Sure, I can go reply in there...
    1 point
  6. Anybody who lives and dies by the ethics of soccer hooliganism is all right in my books
    1 point
  7. Let's start with "reliability" and tinkering. Usually, once you've set things up (and gone through all the hoops to get there), you EITHER are happy with it and really don't have to do any more tinkering, OR are constantly tinkering because there's something that keeps on bugging you. The most common complaint with gaming in a VM on Threadripper is high fps variance. You can get close to bare-metal average fps, but the minimum can be quite a bit lower, depending on the game and config. Personally I have never found it to be a big deal, but I have seen plenty of complaints about it. If gaming is very important to you, I highly, highly recommend you NOT consider doing any VM stuff. A VM just cannot beat bare-metal performance. Now with regards to your intended PCIe use: there's no need for a USB card. Ryzen / TR motherboards have at least one onboard USB controller that can be passed through to a VM. Most TR motherboards come with 3 M.2 slots; sTR4 boards can come with 4 M.2 slots. Failing that, you can even get something like the Asus Hyper M.2 to break out a x16 slot into 4 x4 M.2 slots for many more NVMe drives. Your GT 710 will by itself occupy a x16 slot (even if it's running at x8 or even PCIe 2.0 x4 speed). That assumes you buy a Gigabyte motherboard, which allows you to pick which physical x16 slot is the initial display, i.e. what Unraid boots with. Other motherboards are likely to require you to waste the 1st PCIe slot on the GT 710 should you want Unraid to boot with it.
    1 point
  8. If you change the Use Cache setting, nothing gets moved until mover next runs. If you change that setting to No or Only, then mover will never move any files anyway.
    1 point
  9. Not exactly user error, but doing it in CA in a way it wasn't designed for. Or, put another way, a bug in CA that's been there for a few years and that no one else noticed, because the procedure those users were using differs from the instructions.
    1 point
  10. Correct. What the vid would be referring to is that plugin.
    1 point
  11. Welcome to the team @zspearmint, I can't wait to see your contributions.
    1 point
  12. As for your actual question, if I'm understanding it correctly: if you disable the cache for that share, i.e. set it to cache="no", then any new files written to that share will go to the array immediately after the change, but it won't do anything to existing files, i.e. any existing vdisk will remain in use on the cache. (A note on where this setting is stored follows below.)
    1 point
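Editor's note, hedged: the per-share cache mode is normally changed in the web UI share settings, but as far as I recall it is also persisted in the share's config file on the flash drive; the path, share name, and key below are assumptions to verify on your own system.

```bash
# Assumed location and key for the per-share cache setting (verify before relying on it)
cat /boot/config/shares/domains.cfg
# ...
# shareUseCache="no"    # other values: "yes", "only", "prefer"
```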
  13. I'm currently running a similar setup to yours (3900X) with a Minecraft server, a gaming VM (1070) and 2 smaller VMs (Pi-hole, Storj), and I do kind of wish I had considered a Threadripper to get the extra PCIe lanes, but for CPU power the 3900X has been plenty. I'm not trying to talk you out of spending more money, but if you are looking at doing this mostly for extra power for Plex, maybe consider getting a P2000/P4000 to do the transcoding. The 3950X should have plenty of horsepower for your needs, and even the 3900X might be enough, but throwing in a good board and offloading the transcoding to a GPU might be a better/cheaper option. This is the route I plan to take. I only have one video card in my system and pass it through to my VM for gaming, so you could get away without needing that 710. Look at the X570 boards and see if any of them fit your needs. If you still want to go with Threadripper, go for it, but I'm not sure the extra cost is worth what you gain.
    1 point
  14. Don't get too excited as I was... It has nothing to do with the reset we need in virtio. https://forum.level1techs.com/t/navi-reset-kernel-patch/147547/103
    1 point
  15. If all the custom exclusions weren't filled in and used, it errored out. I've just fixed this so they can be empty, and now you can enter custom commands rather than just exclusions - see v0.95.1: https://github.com/BinsonBuzz/unraid_rclone_mount/blob/latest---mergerfs-support/rclone_upload

      # Add extra commands or filters
      Command1="--exclude downloads/**"
      Command2=""
      Command3=""
      Command4=""
      Command5=""
      Command6=""
      Command7=""
      Command8=""

      # process files
      rclone $RcloneCommand $LocalFilesLocation $RcloneUploadRemoteName:$BackupRemoteLocation $BackupDir \
        --user-agent="$RcloneUploadRemoteName" \
        -vv \
        --buffer-size 512M \
        --drive-chunk-size 512M \
        --tpslimit 8 \
        --checkers 8 \
        --transfers 4 \
        --order-by modtime,ascending \
        --min-age $MinimumAge \
        $Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
        --exclude *fuse_hidden* \
        --exclude *_HIDDEN \
        --exclude .recycle** \
        --exclude .Recycle.Bin/** \
        --exclude *.backup~* \
        --exclude *.partial~* \
        --drive-stop-on-upload-limit \
        --bwlimit "${BWLimit1Time},${BWLimit1} ${BWLimit2Time},${BWLimit2} ${BWLimit3Time},${BWLimit3}" \
        --bind=$RCloneMountIP $DeleteEmpty
    1 point
  16. What we need is to put all our servers to good use and create a p2p download network specific to unraid :D
    1 point
  17. Has what been fixed? There was an issue where, if you had VMs and docker apps with a fixed IP, the log would get spammed with some errors; that happened during the RC phase of 6.8, but it has been fixed in the stable versions.
    1 point
  18. OK, here is everything you need to do to get this working.

      First edit the bitwarden container, click on "advanced", and add to Extra Parameters:

      -e LOG_FILE=/log/bitwarden.log -e LOG_LEVEL=warn -e EXTENDED_LOGGING=true

      Then add a path:

      container path: /log
      host path: /mnt/user/syslog   (unraid share you want bitwarden to log to)
      access mode: read/write

      Apply/done. Next edit the letsencrypt container and add the same path:

      container path: /log
      host path: /mnt/user/syslog   (unraid share you want bitwarden to log to)
      access mode: read/write

      Apply/done. Now edit ../appdata/letsencrypt/fail2ban/jail.local and at the BOTTOM of the file add:

      [bitwarden]
      enabled = true
      port = http,https
      filter = bitwarden
      action = iptables-allports[name=bitwarden]
      logpath = /log/bitwarden.log
      maxretry = 3
      bantime = 14400
      findtime = 14400

      Save/close. Then create and edit ../appdata/letsencrypt/fail2ban/filter.d/bitwarden.conf and add:

      [INCLUDES]
      before = common.conf

      [Definition]
      failregex = ^.*Username or password is incorrect\. Try again\. IP: <ADDR>\. Username:.*$
      ignoreregex =

      Save and close, then restart the letsencrypt container.

      ***Testing: use your phone or something else outside your LAN, and once you fail 3 logins you will be banned. To show banned IPs and unban, enter the letsencrypt console from the docker window:

      iptables -n -L --line-numbers                           (lists banned ips)
      fail2ban-client set bitwarden unbanip 107.224.235.134   (unbans ip)
      exit

      -End
    1 point
  19. @Fiservedpi You don't need to worry about generating images in the sub-folders or anything; Tautulli can provide image URLs. The image URL has to be publicly accessible and is automatically grabbed from Tautulli when the notification is pushed. To use this, enable image hosting in Tautulli: go to Settings > 3rd Party APIs and select an option from the Image Host drop-down. I self-host my Tautulli now, but in the past I have used Imgur with no problems.
    1 point
  20. I would have no idea what version a plugin requires. Some plugins may not update their packages frequently or to the latest, and sometimes I may not either. It could also just be a different build of the same package/version, which was the case here. I just check the plg files for packages and don't uninstall those packages, so the plugins will continue to work.
    1 point
  21. Well, it's very simple - I have one Sonarr that looks after everything from 1980-1989, another for 1990-1999, another for 2000-2009, etc. Same for movies. Those are then structured the same way in teamdrives, e.g. `teamdrive_name:tv/1980s`, and then it all comes together in a merger under `/mergerdir/tv`. (A rough sketch of such a merge follows below.)
    1 point
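For illustration only (an assumption-laden sketch, not the poster's actual commands): one way several rclone team drive mounts could be pulled together into a single directory with mergerfs; the mount points and options here are placeholders.

```bash
# Hypothetical example: each team drive is already mounted somewhere via rclone, e.g.
#   rclone mount teamdrive_1980s:tv/1980s /mnt/remotes/tv_1980s --daemon
#   rclone mount teamdrive_1990s:tv/1990s /mnt/remotes/tv_1990s --daemon

# Merge the individual mounts into one view for Sonarr/Plex to browse
mergerfs /mnt/remotes/tv_1980s:/mnt/remotes/tv_1990s:/mnt/remotes/tv_2000s \
  /mergerdir/tv \
  -o rw,allow_other,use_ino,func.getattr=newest
```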
  22. I don't see how any of these will work, because the disk isn't initializing correctly; it's very unlikely you can mount it with any OS. Yes, at some point it appeared under UD, but if you check, no partition was detected, so it's not possible to mount.
    1 point
  23. Hey everyone! Stoked to have officially joined the team 🙌
    1 point
  24. just did what you said and it is working flawlessly, thank you so much for your help
    1 point
  25. A client asked me to build this frame, which can hold 12x 5.25-inch drives.
    1 point
  26. Hopefully this won't confuse anybody more, but I asked about the setup on the Bazarr Discord and got some info from @morpheus65535 on how to get things mapped/pathed correctly. In the initial setup of the Bazarr container, you choose the path that Radarr/Sonarr are currently mapped to. This sets the internal mapping of the subtitle paths to /movies and /tv (NOTE THE cAsE - it is different than the docker template makes it appear to be during initial setup); check the configuration page for those Host Paths to see how it actually maps. That's fair enough, just a mapping-description discrepancy, but here is where the confusion is going to lie for most of us: when you integrate Radarr/Sonarr during the first run of Bazarr, it's going to "import" the mapping of your Radarr/Sonarr containers - in my case, '/media'. THIS is the mapping you want to change in the Bazarr setup: point your '/media' to '/tv' and '/movies', and again, MIND THE CASES. (A rough sketch of the resulting mappings follows below.)
    1 point
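For illustration only (an editor's sketch with placeholder host paths, not the poster's actual configuration): how the container paths described above might line up once the imported '/media' mappings are pointed at Bazarr's own '/movies' and '/tv' paths.

```bash
# Hypothetical docker path mappings (host paths are placeholders)
#
#   Radarr container:  /media   -> /mnt/user/media          (host)
#   Sonarr container:  /media   -> /mnt/user/media          (host)
#   Bazarr container:  /movies  -> /mnt/user/media/movies   (host)
#                      /tv      -> /mnt/user/media/tv       (host)
#
# In Bazarr's Radarr/Sonarr settings, remap the imported paths, minding the case:
#   Radarr's /media/movies  ->  Bazarr's /movies
#   Sonarr's /media/tv      ->  Bazarr's /tv
```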
  27. Although the cache drive concept was originally implemented, as its name suggests, to cache writes to the array on a per share basis, it is also very commonly used now as the home for the appdata share which is where all the docker configurations live. Depending upon the nature of the docker, performance of dockers is much better if the cache drive is an SSD, particularly for those dockers that do file downloading, editing, encoding, etc. I have some dockers that download files, but, I also use the Handbrake docker for encoding movies and its watch folder and output folders are on the cache drive as well. As you mentioned, you could also host VMs on the cache drive, although, for heavily-used VMs that require a lot of storage and/or disk operations, some opt for a separate SSD per VM or one for all VMs for lighter VMs. These are commonly configured as unassigned devices. You can now create cache pools of multiple drives as well that offer some redundancy for appdata or if you are caching a large amount of files or a smaller amount of large files until such time as the mover moves them to the parity-protected array. The bottom line is that the "cache" drive is now used for more than just file caching to speed up initial writes. Personally, I have two SSDs in my main system; one 256GB for appdata/dockers (the "cache" drive) and another 480GB for VMs (an unassigned device). I don't cache any writes to the array.
    1 point
  28. What you are proposing, if it does work, would be extremely slow, and unsafe for your data. Whether or not they would all be detected would be determined by the specific enclosure, but I don't know of any way to know beforehand, you will have to test. It may work, but it's definitely NOT recommended.
    1 point
  29. Look on the Tools tab. Many people with unRAID arrays become somewhat anal about having parity protection. A second without it is a second when you are at risk of data loss if a drive fails. Rebuilding parity is a relatively slow process, and until it is done you are not protected. For that reason, many people prefer not to remove a disk if they can help it. After all, eventually you'll need more space. But it is pretty easy to do. You just tell unRAID to forget about your current array (New Config) and then redefine the array without the disk you wanted to remove. Just make sure to assign the parity disk back to the parity slot. If you have dual parity, you'd have to rebuild the second parity (probably best to leave it out, then add it back after the new array has been started and stopped once, and it will be rebuilt). There is a trickier way that takes advantage of the fact that a drive full of binary zeros is "invisible" to parity. You can fill a disk with zeroes while it is in the array, then do the New Config and define the array without that disk, telling the New Config to trust parity. Because you zeroed out the disk, parity will still be accurate, and you will have silently removed the disk (destroying any use of the disk as a backup of the data it contained); a hedged sketch of this zeroing step follows below. You can do a fancier trick to pull out two drives - just clone a disk, sector by sector; two identical disks cancel each other out parity-wise (at least for single parity, maybe not dual). But anyway, I digress... I used to be the kind of person that looked for these "maintain parity at all costs" methods, but not any more. Instead, I recommend running a parity check before removing the disk, and looking at the SMART attributes for every disk, before and after. By comparing the before and after values, you can see if any of the disks is showing signs of impending failure. Values like reallocated sectors and pending sectors on the rise are danger signs. If you see these problems, it is not a good time to remove a disk (except THAT one)! But if the before and after SMART values for these are both 0, and nothing else looks concerning, doing the New Config and rebuilding parity is very low risk. Enjoy your array!
    1 point
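For illustration only: a heavily hedged sketch of the "zero the disk while it's in the array" trick described above. The slot number is a placeholder, the command destroys everything on that disk, and the community scripts that exist for this also unmount the disk and add safety checks; this is only the core idea.

```bash
# Hypothetical sketch - disk1 (/dev/md1) is a placeholder; copy any data you
# still need off the disk first, because this wipes it irreversibly.
# The zeros must be written through Unraid's md device (not the raw /dev/sdX)
# so that parity is updated while the disk is being cleared.
dd if=/dev/zero of=/dev/md1 bs=1M status=progress

# When it finishes: Tools -> New Config, re-assign the remaining disks to their
# old slots (parity stays as parity), leave the zeroed disk out, and mark parity
# as already valid before starting the array.
```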