Leaderboard

Popular Content

Showing content with the highest reputation on 09/20/20 in Posts

  1. None. A name is a name is a name. I respond to Andrew, Squid (and my wife's favourite: Asshole). It doesn't change who I am. The whole point is to change the repository from linuxserver/letsencrypt to linuxserver/swag. The only place this would cause an issue is if you're routing your traffic from other containers through "Letsencrypt" vs. "Swag", which you're probably not. (You tend to only do that with containers that connect to a VPN, i.e. Binhex's, and not this one, which simply forwards requests to a different port.)
    2 points
  2. We have indeed made a lot of progress in this thread. I now have a temporary stopgap solution running on my system that seems to work very well (SAS drives spin down in sync with Unraid's schedule, no sporadic / unexpected spin-ups). Since quite a few people expressed interest in this, I thought I'd share this stopgap, so I packaged it into a single run-and-forget script. We can use it until Limetech puts the permanent solution into standard Unraid code.

     To use, simply place the attached script somewhere on your flash drive (e.g. /boot/extra) and run it like so:

        bash /boot/extra/unraid-sas-spindown-pack

     It should be effective immediately. Assuming it works well for you, you can add a line to your "go" script to run it upon system boot. Essentially, it does the following:

     1. Installs a script that spins down a SAS drive. The script is triggered by the Unraid syslog message reporting this drive's (intended) spin-down, and actually spins it down.
     2. Installs an rsyslog filter that mobilizes the script in #1 (a rough sketch of this mechanism is included after this post).
     3. Installs a wrapper for "smartctl", which works around smartctl's deficiency of not supporting the "-n standby" flag for non-ATA devices. When this flag is detected and the target device is SAS, smartctl is bypassed.

     As always, no warranty, use at your own risk. It works for me. With that said, please report any issues. Thanks and credit points go to this great community, with special mention to @SimonF and @Cilusse.

     EDIT: Just uploaded an updated version. Please use this one instead; the previous one had a small but nasty bug that sneaked in during final packing. Apologies. unraid-sas-spindown-pack
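     For the curious, here is a rough sketch of the filter/script mechanism described in points 1 and 2 above -- NOT the packaged script itself. The syslog match string, script path, and message parsing are assumptions for illustration, and it assumes the sg_start utility from sg3_utils is available:

        #!/bin/bash
        # 1) A tiny helper that puts a SAS drive into standby. rsyslog passes the
        #    matching log line as $1, so pull a device name out of it (the exact
        #    format of Unraid's spin-down message is an assumption here).
        cat > /usr/local/bin/sas-spindown.sh <<'EOF'
        #!/bin/bash
        dev=$(echo "$1" | grep -o '/dev/sd[a-z]*' | head -n1)
        [ -n "$dev" ] && sg_start --pc=3 "$dev"   # power condition 3 = standby
        EOF
        chmod +x /usr/local/bin/sas-spindown.sh

        # 2) An rsyslog rule (legacy "execute program" syntax) that fires the
        #    helper whenever a spin-down message appears in the syslog.
        echo ':msg, contains, "spindown" ^/usr/local/bin/sas-spindown.sh' \
          > /etc/rsyslog.d/99-sas-spindown.conf
        /etc/rc.d/rc.rsyslogd restart

     The real pack is more careful than this (per-device handling, the smartctl wrapper), so prefer the attached script; this only shows the moving parts.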
    2 points
  3. Hi, I have a feature request for some form of Unraid API. I have been using Unraid for a long time now and was using it with ESXi. Since v6 it has taken over as my host machine and I am doing more with it than ever. I did notice there was a previous request for this in December 2014, Link, for an API, but this was highlighted as not a priority at the time, and given that Unraid v6 was still in early beta stages this makes sense. At the moment almost all control requires SSH/telnet or web console access. This to me poses a potential security risk, and I think an API might be able to provide access to some of the commonly used functions without providing access to all the system functions of Unraid. Some of the features that I would like to see in this API are below (a hypothetical example call follows the list):

     - Access tokens to bypass the basic authentication for the API. Possibly multiple tokens with different access; example tokens: Global, Docker only, VM only, System only, Docker and VMs only, Plugins... This could be managed via a webpage with automatically generated tokens, but with the option to set or regenerate them manually.
     - Some form of system/docker/VM overview that is easily obtained. I use Zeron's jsonvars plugin to provide me with information about the server for some of my home automation applications and monitoring. I also use SNMP, and each has its own advantages. Potentially make the API available for plugins to publish information.
     - Docker remote control. The ability to start/stop/restart containers would be the starting point. I am sure there would be other things that people might like to see.
     - VM remote control. The ability to start/stop/restart VMs would be the starting point. This might be a workaround for the physical power button problem that has been requested; for example an Arduino, a home automation system or even IFTTT could be used to send commands.
     - Unraid system control. Starting, stopping, powerdown, restart, start parity check, spin-up, spin-down, etc. This should probably be limited to non-destructive system functions so as to limit malicious/accidental control. This may make it easier for third parties to develop monitoring plugins or integrations into automation systems.
     - There could be an option to provide some form of share control. I have recently become more ransomware-aware and have put in processes to try and limit damage in the event of an attack. Some of the suggestions on this forum are to change a user share from read-only to write when needed. An API could potentially change the read/write status of a share. The control of this could be handled by the user in the form of a script or webpage, etc.
     - The possibility to run command-line commands/scripts. This goes against my non-destructive comment above but could be useful for anything not covered by the API. It should probably have its own API token.

     There is the possibility that this may be able to be done as a plugin. It might be a good starting location, but I still think an API should be part of the core Unraid functionality. I have noticed a few businesses/enterprises these days embracing API strategies (for example Telstra Australia), and they usually cite the collaboration, efficiency and development benefits. I have seen first-hand what having everybody developing their own backdoor/link to systems can do, especially the inability to control system load and the difficulty of upgrading/changing systems. The development and documentation of an API is no small task, but I think it might unlock some key developments/features for the future of Unraid.
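     To make the token idea concrete, here is a purely hypothetical example of what a scoped call could look like. No such endpoint exists in stock Unraid; the URL, path, and token variable are invented for illustration:

        # Hypothetical only -- stock Unraid has no such REST API today.
        # A "Docker only" token may restart a container but do nothing else:
        curl -X POST \
             -H "Authorization: Bearer ${DOCKER_ONLY_TOKEN}" \
             "https://tower.local/api/v1/docker/containers/plex/restart"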
    1 point
  4. Had a local business need to expose their CRM server to the public net today, and the owner did not want to open any ports. Cloudflare's Argo Tunnel came to mind. They had an existing Unraid server handling file shares and backups, so I started looking at ways to leverage this (actually underutilised) server. Thought I'd share the steps I took to get the tunnel working. The steps below assume understanding of / experience with reverse proxy setups and User Scripts. The setup consists of two broad steps:

     A. Install any reverse proxy as a Docker image (I used Nginx Proxy Manager) and take note of the exposed port / IP. In this example, I will be setting up only the HTTP proxy on port 1880. This reverse proxy is the entry point of the tunnel. Configure this proxy to connect to whichever other services you have.

     B. Install cloudflared and run it on startup:

     1. SSH into your server and download the cloudflared binary:

        wget https://bin.equinox.io/c/VdrWdbjqyF/cloudflared-stable-linux-amd64.tgz

     2. Unzip the tgz:

        tar -xvzf cloudflared-stable-linux-amd64.tgz

     3. Log in to Cloudflare (this will produce a URL; open that URL in your browser):

        ./cloudflared tunnel login

     4. Once authenticated, verify that the tunnel works (change your.hostname.com to your hostname):

        ./cloudflared tunnel --hostname your.hostname.com --url http://localhost:1880

        Then visit your.hostname.com; you should see a Cloudflare welcome page. If DNS hasn't propagated, try setting your DNS resolver to 1.1.1.1.

     5. Save your configuration as a YAML-formatted file in ~/.cloudflared/config.yml. The contents should look like this:

        hostname: your.hostname.com
        url: http://localhost:1880

     6. Copy the contents of ~/.cloudflared into /etc/cloudflared:

        mkdir -p /etc/cloudflared
        cp ~/.cloudflared/config.yml /etc/cloudflared/
        cp ~/.cloudflared/cert.pem /etc/cloudflared/

     7. Install the User Scripts plugin if you haven't already, and create a new script. I named mine cloudflared. Remove the default description file and copy in the contents of the script below:

        #!/bin/bash
        #description=Launches cloudflared with config and cert loaded in /etc/cloudflared
        #backgroundOnly=true
        #arrayStarted=true
        # The above lines set the script info, read: https://forums.unraid.net/topic/48286-plugin-ca-user-scripts/page/7/?tab=comments#comment-512697

        # Set path to the cloudflared config
        configpath=/etc/cloudflared

        echo "Starting Cloudflared Binary with config and cert in $configpath"
        /root/cloudflared --config $configpath/config.yml --origincert $configpath/cert.pem
        echo "Exiting Cloudflared Binary"
        exit

     8. Refresh the User Scripts page and set the script to run on startup of the array.

     9. View the logs to ensure that your routes are secured and established. You should see something like this:

        Starting Cloudflared Binary with config and cert in /etc/cloudflared
        time="2019-07-24T01:36:27+08:00" level=info msg="Version 2019.7.0"
        time="2019-07-24T01:36:27+08:00" level=info msg="GOOS: linux, GOVersion: go1.11.5, GoArch: amd64"
        time="2019-07-24T01:36:27+08:00" level=info msg=Flags config=/etc/cloudflared/config.yml hostname=your.hostname.com logfile=/var/log/cloudflared.log origincert=/etc/cloudflared/cert.pem proxy-dns-upstream="https://1.1.1.1/dns-query, https://1.0.0.1/dns-query" url="http://localhost:1880"
        time="2019-07-24T01:36:27+08:00" level=info msg="Starting metrics server" addr="127.0.0.1:38457"
        time="2019-07-24T01:36:27+08:00" level=info msg="Autoupdate frequency is set to 24h0m0s"
        time="2019-07-24T01:36:27+08:00" level=info msg="Proxying tunnel requests to http://localhost:1880"
        time="2019-07-24T01:36:30+08:00" level=info msg="Connected to HKG"
        time="2019-07-24T01:36:30+08:00" level=info msg="Each HA connection's tunnel IDs: map[<REDACTED>]"
        time="2019-07-24T01:36:30+08:00" level=info msg="Route propagating, it may take up to 1 minute for your new route to become functional"
        time="2019-07-24T01:36:32+08:00" level=info msg="Connected to SIN"
        time="2019-07-24T01:36:32+08:00" level=info msg="Each HA connection's tunnel IDs: map[<REDACTED>]"
        time="2019-07-24T01:36:32+08:00" level=info msg="Route propagating, it may take up to 1 minute for your new route to become functional"
        time="2019-07-24T01:36:33+08:00" level=info msg="Connected to HKG"
        time="2019-07-24T01:36:33+08:00" level=info msg="Each HA connection's tunnel IDs: map[<REDACTED>]"
        time="2019-07-24T01:36:33+08:00" level=info msg="Route propagating, it may take up to 1 minute for your new route to become functional"
        time="2019-07-24T01:36:34+08:00" level=info msg="Connected to SIN"
        time="2019-07-24T01:36:34+08:00" level=info msg="Each HA connection's tunnel IDs: map[<REDACTED>]"
        time="2019-07-24T01:36:34+08:00" level=info msg="Route propagating, it may take up to 1 minute for your new route to become functional"

     Voila!
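     Once the routes are established, a quick sanity check from any machine outside your LAN confirms the tunnel answers before you point users at it (your.hostname.com is the placeholder from the steps above):

        # Expect HTTP response headers served through the tunnel, not a timeout.
        curl -I https://your.hostname.com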
    1 point
  5. Just a heads up with Eco: there is now a native Linux server, which should be way better for a container. Latest Patch Notes:
    1 point
  6. Understood. It's just a different approach from what I am used to seeing on other containers with VPNs, where the container simply refuses to run without a VPN link. As a novice user, it scared me to see the container running without it. I do not fully understand all the back-end work that goes into this; I simply reported it because it was something I did not expect and thought it may have been an issue. Thank you for the explanation and for updating the container.
    1 point
  7. I have a new version of smartctl, built on 7.2 at revision r5083 as of yesterday. Use at your own risk. I am talking to the devs to get my changes added; I will post once the ticket is raised.

        smartctl 7.2 2020-09-19 r5083

     smartctl
    1 point
  8. 1 point
  9. I already know Mopidy and am trying to set up this container; it is a little bit harder than usual. But I saw this comment and noticed I had the same "aadbbd6bbef9" in my config. This comes from GitHub if you copy the mopidy.conf, and it is easy to overlook. Can this be fixed for other users? https://github.com/maschhoff/docker/blob/master/mopidy/mopidy.conf
    1 point
  10. If VPN_ENABLED is set to yes but there is no valid VPN connection/configuration, then the web GUI will not open.
    1 point
  11. Temporarily set VPN_ENABLED to no and see if that fixes your access issue.
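     For anyone managing the container from the command line rather than the Unraid template editor, a hedged sketch of flipping that flag (container and image names are placeholders for whichever VPN-enabled container you run):

        # Placeholders throughout -- substitute your actual container/image names.
        # In the Unraid GUI, the equivalent is editing the VPN_ENABLED variable
        # on the container's template and re-applying it.
        docker stop myvpnapp && docker rm myvpnapp
        docker run -d --name=myvpnapp -e VPN_ENABLED=no binhex/some-vpn-image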
    1 point
  12. THANKS!! I couldn't get anywhere with PIA Third Generation ... Thought I needed to update to NextGen. DE Berlin is working a treat. Decent speeds too, around 10mb/s 🙂 I hadn't tried DE Berlin server before searching on here & posting, previously it was terrible for me. I was getting best speeds from Canada. Thanks again!!
    1 point
  13. I was able to figure this out. I looked into my motherboard's PCIe slots and found out I was using a Gen 2 slot instead of a Gen 3. Appreciate you trying to help!
    1 point
  14. Yep, this is exactly the same for my Xeon... If I use /tower/share I get ~500 MB/s, and if I use /tower/cache/share I get ~980 MB/s... On direct share access a single core sits at 100% usage. However, as I have 128 GB of RAM and my write cache is 10% of that, not even the NVMe cache is used during a file copy until the copy is finished; after that it gets flushed to the cache at 3-4 GB/s. So RAM should always be fast, but it isn't, because of the single-threaded direct share writing. Greetings, Dark
    1 point
  15. Just installed the OpenVPN HyDeSa container. The kill switch does not seem to work correctly. I just installed it, and the container is running and reporting my real IP address (I used curl ifconfig.io to verify). I have not set up any OVPN config files yet, so I would assume the container should refuse to start, or at least refuse connections to the internet, shouldn't it?
    1 point
  16. This doesn't have anything to do with Grafana, but if you guys like disk stats, check out the Scrutiny container in CA.
    1 point
  17. update to 5.9RC5
     - updated corefreq kernel module (added utils for the corefreq module; run corefreq-cli-run)
     - updated paragon ntfs3 v5
     - updated dax virtio improvements
     - corefreq-cli app
    1 point
  18. Update (12/09/2020): Added Pi-Hole DoT DoH. This docker supersedes my previous Pi-Hole with DoH and Pi-Hole with DoT dockers. Pondering if I should write a mod module to add VPN to LSIO dockers. Hey, my latest Pi-Hole DoT DoH docker has exposed config files so you can add additional services (and remove Cloudflare). Just edit cloudflared.yml in your Unraid appdata folder. Note: cloudflared is the app that enables DoH. Cloudflare (no "d") is the DNS service.
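     As a hedged illustration of that edit: the proxy-dns keys below are standard cloudflared options, but the exact layout of this docker's cloudflared.yml may differ, and the Quad9 upstream is just an example of an additional service.

        # Example only -- verify against the cloudflared.yml in your appdata folder.
        proxy-dns: true
        proxy-dns-upstream:
          - https://1.1.1.1/dns-query        # Cloudflare (remove this line to drop Cloudflare)
          - https://dns.quad9.net/dns-query  # an example additional DoH service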
    1 point
  19. I have just set this up and thought I should summarize this fairly wordy and bitty thread into a step-by-step guide: how to set up Unraid as an rsync server, for use as an rsync destination for compatible clients such as Synology Hyper Backup.

     1. (optional) Open the Unraid web interface and set up a new share (or shares) that you want to use with rsync.

     2. Open the Unraid web interface and open a new web terminal window by clicking the 6th icon from the right, at the top right of the interface (or SSH into your Unraid box).

     3. Type or copy and paste the following, one line at a time (SHIFT + CTRL + V to paste into the Unraid web terminal):

        mkdir /boot/custom
        mkdir /boot/custom/etc
        mkdir /boot/custom/etc/rc.d
        nano /boot/custom/etc/rsyncd.conf

     4. Type your rsync config. As a guide, use the example below, modified from @WeeboTech:

        uid = root
        gid = root
        use chroot = no
        max connections = 4
        pid file = /var/run/rsyncd.pid
        timeout = 600

        # Each [section] is an rsync module (basically an rsync share name);
        # Synology Hyper Backup calls this the "Backup Module".
        [backups]
        # path is the Unraid share location, /mnt/user/YOURSHARENAME.
        # It could also be a subdirectory of a share.
        path = /mnt/user/backups
        # comment is the module description
        comment = Backups
        read only = FALSE

        # Add multiple rsync modules as required
        [vmware]
        path = /mnt/user/backups/vmware
        comment = VMWare Backups
        read only = FALSE

     5. Press CTRL + X, then press Y, and then ENTER to save the config.

     6. Type or copy and paste the following:

        nano /boot/custom/etc/rc.d/S20-init.rsyncd

     7. Type or copy and paste the following:

        #!/bin/bash
        if ! grep ^rsync /etc/inetd.conf > /dev/null ; then
          cat <<-EOF >> /etc/inetd.conf
        rsync stream tcp nowait root /usr/sbin/tcpd /usr/bin/rsync --daemon
        EOF
          read PID < /var/run/inetd.pid
          kill -1 ${PID}
        fi
        cp /boot/custom/etc/rsyncd.conf /etc/rsyncd.conf

     8. Press CTRL + X, then press Y, and then ENTER to save the script.

     9. To add your script to the go file, it's quickest to use echo to append a line to the end of the file. Type or copy and paste:

        echo "bash /boot/custom/etc/rc.d/S20-init.rsyncd" >> /boot/config/go

     10. Type or copy and paste the following (I am not sure if the chmod is needed; it's something I did while trying to get this to work):

        chmod +x /boot/custom/etc/rc.d/S20-init.rsyncd
        bash /boot/custom/etc/rc.d/S20-init.rsyncd
        rsync rsync://127.0.0.1

     11. The last command above checks that rsync is working locally on your Unraid server. It should return the rsync modules and comments from your rsyncd.conf, like below:

        root@YOURUNRAIDSERVERNAME:/# rsync rsync://127.0.0.1
        backups        Backups
        vmware         VMWare Backups

     12. If the last command displays your rsync modules, you may want to quickly check that rsync can also be reached via your Unraid server's domain name or network interface IP:

        rsync rsync://192.168.0.100   # replace with your Unraid server's IP

        or

        rsync rsync://UNRAIDSERVERNAME.local   # obviously, replace with your server name ;)

     End. Now check that your rsync client connects to Unraid. I used Synology Hyper Backup: created a new data backup, under file server selected rsync > next, changed the server type to "rsync-compatible server", and filled in:

     - your Unraid server IP or domain name
     - transfer encryption "off" (not sure how to get encryption to work; please post below if you know how)
     - port "873"
     - username "root" (I guess you could set up a second account and grant appropriate privileges using the CLI on Unraid?)
     - password "YOURUNRAIDROOTPASSWORD"
     - Backup module: your rsync module from rsyncd.conf
     - directory: a subdirectory inside your rsync module / Unraid share

     A quick client-side test is sketched below. Hope this helps someone. (Edited, thanks Dr_Frankenstein)
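     As promised, the quick client-side test: before wiring up Hyper Backup, you can push a file from any Linux/macOS machine to confirm the module accepts writes (the IP and module name are the placeholders used in the guide above):

        # Push a test file into the "backups" module defined in rsyncd.conf above.
        echo test > /tmp/rsync-test.txt
        rsync -av /tmp/rsync-test.txt rsync://192.168.0.100/backups/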
    1 point
  20. So - I'm seriously thinking that there's a setting I'm missing, but maybe I'm going crazy. Nothing is being uploaded, but I am getting downloads. What should I look at to fix this?
    1 point
  21. Why is the listening port always closed, no matter what I try on my router? I opened different ports for TCP. I tried bridge mode, then host mode. Nothing helps. Other Docker apps work correctly with a forwarded port.
    1 point
  22. Beautiful job. This looks like a nice backup NAS. Please post photos from the inside.
    1 point
  23. Hey folks, I'm an unRAID newb, but thought I'd share my little project. I picked up a 2nd hand Coolermaster Elite 130 Mini-ITX case that originally had just one external 5.25" drive bay and a couple of internal mounts for drives. I took to it with a pair of tin snips and a hacksaw and was able to squeeze in a pair of 3-bay hot-plug drive cages and made up my own internal brackets to mount them. I can take some internal photos if anyone's interested, but with the depth of the drive cages, there is not a millimeter to spare between the back of the drive and the motherboard. I wanted something small, quiet, low-power and with external drive access. Internally, I also have a pair of SSDs for cache - one m.2 and one 2.5". The motherboard is a ROG Strix X470-i, populated with a Ryzen 2200G and 16 GB of RAM. The solitary PCIe slot is populated with a 2-port (soon to be replaced with a 4-port) SATA board. I have a pair of 3 TB 7200 rpm drives installed and two more arriving tomorrow. The box is just going to be a home storage & plex server to start with and we'll see where it goes from there...
    1 point
  24. I followed these instructions to the letter on my main and backup servers. After running the indicated commands, modifying the go file and executing these commands on each server, moving files between the two servers is now possible without a password prompt. After rebooting my backup server, all the appropriate SSH files persist and the backup from main to backup server runs without a password prompt. Success! Thanks to @ken-ji and @tr0910 for this information. Combining Ken-Ji's instructions above and the intermediate tests in the original post and tr0910's sample script, automating rsync backup via ssh works great. Now it's on to refining my backup script and automating it via the User Scripts plugin. The unRAID community, as always, comes through again.
    1 point
  25. 1 point
  26. My take on your problem:

     1) You updated NZBGet, which now comes with unrar 5.4x+. These versions of unrar also restore file permissions from the archive if you do not pass the "-ai" option.
     2) Without the "-ai" option most archives will still unpack with correct permissions, as normally the rw attributes are set correctly. Only a few archives contain (at least for me) unusual permissions/attributes, and these lead to your problems.

     If you do not want to use the -ai option, you can also use the umask option, as pointed out by another user. This explains:

     1) why the problem only occurred recently and only on some files;
     2) why resetting permissions from Unraid -> Tools fixes the issue for existing files.

     While the NZBGet settings page points out the "-ai" option, it is not the default. The reasoning might be that it is a useful feature to include attributes in an archive, and that the expected behaviour is that they are restored intact. (A sketch of both settings follows below.)
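     A hedged sketch of the two approaches in nzbget.conf terms (option names as they appear in NZBGet's stock configuration; double-check the spelling on your Settings page):

        # Option A: tell unrar to ignore the attributes stored in the archive
        UnrarCmd=unrar -ai

        # Option B: loosen the umask on everything NZBGet creates
        # (NZBGet's default of 1000 means "use the system default"; 000 yields 777/666)
        UMask=000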
    1 point
  27. Run the newperms script over the host side of the downloads folder (see the example below), change the umask setting like I showed you to cover future downloads, and if you still can't delete files, then it is an error rooted in shares and SMB.
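     For reference, running the bundled newperms script against the host-side path looks like this (the Downloads share name is assumed from this thread; adjust to yours):

        # newperms ships with Unraid (it is what Tools -> New Permissions runs);
        # point it at the share's path on the server, not at the SMB mount.
        newperms /mnt/user/Downloads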
    1 point
  28. Go to the Settings tab in NZBGet, find this section, scroll to the bottom, and set the umask like this. Enjoy, lol
    1 point
  29. I have a share on my unRAID server called Downloads; it's used mainly by NZBGet, which deposits the things it downloads into this folder. I have SMB share settings set up for the only user on this server, a user called Ashman, who has full read/write permissions to this share. Just yesterday I noticed that, all of a sudden, I am unable to delete anything in this share. I use a Mac and regularly connect to this share; now I get an error that I don't have permission to delete files. If I connect to this share from a Windows 10 PC, I get a similar error that mentions User0. I changed the permissions on the share, which are usually set to Secure, to Private: no change in behaviour. I changed it to Public and was able to delete files. I created a new user, gave them full read/write permissions to the share, and changed the SMB permissions back to Secure: still unable to delete files as the new user. What should my next steps be to resolve this?
    1 point