
Leaderboard


Popular Content

Showing content with the highest reputation since 08/22/20 in all areas

  1. 6 points
    I code for fun and my dockers are mostly about adding niche features to stuff. DockerHub: https://hub.docker.com/u/testdasi If you like my work, a donation to my burger fund is very much appreciated.

    List: Grafana Unraid Stack, OpenVPN HyRoSa, OpenVPN HyDeSa, OpenVPN AIO Client, Pi-Hole DoT DoH, DNS DoH companion

    Grafana Unraid Stack
    Meet Gus! He has everything you need to start monitoring Unraid (Grafana - Influxdb - Telegraf - Loki - Promtail). Also comes with a sleek made-for-Unraid dashboard pre-installed, ready for your customisation. Choice of HDDTemp or S.M.A.R.T for HDD/SSD monitoring. Ability to view the Unraid syslog in a Grafana panel with Loki and Promtail. NOTE: defaults to Host network. If you want to run it with a Bridge network, remember to map port 3006 to access Grafana. Don't change the port ENV variables unless you are comfortable changing the various config files, as things are very tightly integrated.
    Docker Hub: https://hub.docker.com/r/testdasi/grafana-unraid-stack
    Github: https://github.com/testdasi/grafana-unraid-stack

    OpenVPN HyRoSa
    (NZB)Hydra2 - rTorrent (Flood GUI) - Sabnzbd. Same as OpenVPN HyDeSa except with rTorrent instead of Deluge. I personally prefer rTorrent + Flood over the alternatives. Port-forwarding is unfortunately not supported at the moment (and it also requires your VPN service to provide a way to do it). Torrent still works fine without port-forwarding, but if it's critical to you, I recommend binhex/arch-rtorrentvpn or my derived ruTorrentVPN Plus Plus docker (see further below). NOTE: You must create an openvpn subfolder under your appdata folder and place the OpenVPN configuration there (must include openvpn.ovpn + credentials + certs). For more detailed instructions, please refer to the Docker Hub / Github links below.
    Docker Hub: https://hub.docker.com/r/testdasi/openvpn-hyrosa
    Github: https://github.com/testdasi/openvpn-hyrosa

    OpenVPN HyDeSa
    (NZB)Hydra2 - Deluge - Sabnzbd. Now your torrent and usenet are protected behind an OpenVPN client (with kill switch) and DNS-over-TLS. Socks5 + HTTP proxies are also included for your convenience, e.g. to also send Sonarr and Radarr web traffic through the VPN. Port-forwarding is unfortunately not supported at the moment (and it also requires your VPN service to provide a way to do it). Torrent still works fine without port-forwarding, but if it's critical to you, I recommend binhex/arch-rtorrentvpn or my derived ruTorrentVPN Plus Plus docker (see further below). NOTE: You must create an openvpn subfolder under your appdata folder and place the OpenVPN configuration there (must include openvpn.ovpn + credentials + certs). For more detailed instructions, please refer to the Docker Hub / Github links below.
    Docker Hub: https://hub.docker.com/r/testdasi/openvpn-hydesa
    Github: https://github.com/testdasi/openvpn-hydesa

    OpenVPN AIO Client
    An "all-in-one" docker for all your private browsing needs, including an OpenVPN client with nftables kill switch, a DNS server forwarding to DoT (DNS-over-TLS) services, and Socks5 + HTTP proxies to both VPN and TOR with an (additional) piping kill switch for the proxies.
    Default repository with VPN + TOR: testdasi/openvpn-client-aio:stable-amd64
    Optional repository with only VPN: testdasi/openvpn-client-aio:stable-torless-amd64
    NOTE: you must place your own OpenVPN configuration in the host path that is mapped to /etc/openvpn (the ovpn file must be named openvpn.ovpn; credentials + certs can be in the same file or split out into other files - the flexibility is yours). For more detailed instructions, please refer to the Docker Hub / Github links below.
    Docker Hub: https://hub.docker.com/r/testdasi/openvpn-client-aio
    Github: https://github.com/testdasi/openvpn-client-aio

    Pi-Hole DoT DoH
    Official pihole docker with added DNS-over-TLS (DoT) and DNS-over-HTTPS (DoH). DoH uses Cloudflare (1.1.1.1/1.0.0.1) and DoT uses Google (8.8.8.8/8.8.4.4). Config files are exposed so you can modify them as you wish, e.g. to add more services. This docker supersedes my previous Pi-Hole with DoH and Pi-Hole with DoT dockers. For more detailed instructions, please refer to the Docker Hub / Github links below.
    Docker Hub: https://hub.docker.com/r/testdasi/pihole-dot-doh
    Github: https://github.com/testdasi/pihole-dot-doh

    DNS DoH companion
    Simple DNS server that connects to DNS-over-HTTPS. An easy, fast way to add DNS functionality to an OpenVPN docker (using --network=container:) and/or enable DNS encryption for your local network / devices. Emphasis on simplicity (hence a "companion"). If you want bells and whistles, I recommend ICH777's DoH Client.

    Update (21/09/2020): OpenVPN-based dockers will now crash out if the user doesn't provide an ovpn file as per instructions. Deprecated rutorrent-plus-plus as binhex has implemented multi-remote functionality - please use his docker instead. Added Grafana Unraid Stack.
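    For anyone new to these containers, here is a minimal command-line sketch of starting the OpenVPN AIO Client. The image tag, the /etc/openvpn mapping and the openvpn.ovpn naming come from the description above; the host path and the NET_ADMIN/tun options are assumptions, so follow the Docker Hub instructions for the actual template.

        # Sketch only: host path and the NET_ADMIN/tun options are assumptions.
        mkdir -p /mnt/user/appdata/openvpn-client-aio
        # Put openvpn.ovpn (plus credentials/certs, unless embedded) into that folder first, then:
        docker run -d \
          --name=openvpn-client-aio \
          --cap-add=NET_ADMIN \
          --device=/dev/net/tun \
          -v /mnt/user/appdata/openvpn-client-aio:/etc/openvpn \
          testdasi/openvpn-client-aio:stable-amd64
        # Other containers can then route through it with --network=container:openvpn-client-aio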
  2. 6 points
    Well, this is what I sent them. Maybe the possibility of losing customers is a higher priority to them. "I am using the legacy OVPN connections with port forwarding in a docker on my unRAID server. The community seems to be having major issues with connections to endpoints that support port forwarding. It seems these issues have started happening in the past few weeks. I would like to know if there is a plan to support OVPN connections on your next generation servers? There is a section of your customer base using these connections that I fear you will lose if you do not address this concern."
  3. 5 points
    OK guys, multi remote endpoint support is now in for this image, please pull down the new image (this change will be rolled out to all my vpn images shortly). What this means is that the image will now loop through the entire list of, for example, pia port forward enabled endpoints. All you need to do is edit your ovpn config file and add the remote endpoints at the top, sorted into the order you want them to be tried. An example pia ovpn file is below (mine):-

        remote ca-toronto.privateinternetaccess.com 1198 udp
        remote ca-montreal.privateinternetaccess.com 1198 udp
        remote ca-vancouver.privateinternetaccess.com 1198 udp
        remote de-berlin.privateinternetaccess.com 1198 udp
        remote de-frankfurt.privateinternetaccess.com 1198 udp
        remote france.privateinternetaccess.com 1198 udp
        remote czech.privateinternetaccess.com 1198 udp
        remote spain.privateinternetaccess.com 1198 udp
        remote ro.privateinternetaccess.com 1198 udp
        client
        dev tun
        resolv-retry infinite
        nobind
        persist-key
        # -----faster GCM-----
        cipher aes-128-gcm
        auth sha256
        ncp-disable
        # -----faster GCM-----
        tls-client
        remote-cert-tls server
        auth-user-pass credentials.conf
        comp-lzo
        verb 1
        crl-verify crl.rsa.2048.pem
        ca ca.rsa.2048.crt
        disable-occ

    I did look at multi ovpn file support, but this is easier to do, and as openvpn supports multiple remote lines it felt like the most logical approach. Note:- due to the ns lookup for all remote lines, and the potential failure and subsequent try of the next remote line, time to initialisation of the app may take longer. p.s. I don't want to talk about how difficult this was to shoehorn in, I need to lie down in a dark room now and not think about bash for a while :-), any issues let me know!
  4. 5 points
  5. 5 points
    got kids? Then you need them to read this 🙂 https://ia902600.us.archive.org/15/items/mommybook/mommybook.pdf
  6. 5 points
    ok, progress with PIA! They now at least accept there is an issue, email just received:-
  7. 4 points
    Ultimate UNRAID Dashboard (UUD)
    Current Release: Version 1.3

    Overview: This is my attempt to develop the Ultimate Grafana/Telegraf/InfluxDB dashboard. This entire endeavor started when one of our fellow users posed a simple, but complex, question in the below forum topic. I decided to give it a shot, as I am an IT professional, specifically in enterprise data warehousing / SQL Server. If you are a Grafana developer, or have had experience building dashboards/panels for UNRAID, please let me know. I would love to collaborate.

    Version 1.3 Screenshots - Serial Numbers Redacted (Click the Images as They are Very High Resolution):

    Disclaimer: This is based on my 30-drive UNRAID array, so this shows an example of a fully maxed out UNRAID setup with max drives, dual CPUs, dual NICs, etc. You will/may need to adjust panels & queries to accommodate your individual UNRAID architecture. This is a heavily modified and customized version of GilbN's original off of his tutorial website, with new and original code. As such, he is a co-developer on this version. I have spent many hours custom coding new functionality and features based on that original template. Much has been learned and I am excited to see how far this can go in the future. GilbN has been gracious enough to help support my modded version here as he wrote the back-end. Thanks again!

    Developers:
    @falconexe - New Panels/DB Queries/Look & Feel
    @GilbN - Original Template/Backend/Dynamics

    Contributors: @Roxedus

    Thanks: @hermy65 - For putting me on this path...

    Dependencies (see the sketch after this list):
    Docker - InfluxDB
    Docker - Telegraf
      Network Type: HOST (Otherwise You May Not Get All Server Metrics)
      Telegraf.conf
      Telegraf Plugin - [[inputs.ipmi_sensor]]
        Bash Into Telegraf Docker and Run "apk add lm_sensors"
      Telegraf Plugin - [[inputs.smart]]
        Enable in telegraf.conf
        Also Enable "attributes = true"
        Bash Into Telegraf Docker and Run "apk add smartmontools"
      Telegraf Plugin - [[inputs.ipmitool]]
        Enable in telegraf.conf
        Bash Into Telegraf Docker and Run "apk add ipmitool"
      Telegraf Plugin - [[inputs.apcupsd]]
        Enable in telegraf.conf
      Docker Config
        Add New Path (NOTE: This path has now been merged into Atribe's Telegraf Docker Image. Thanks @GilbN)
        Then Edit telegraf.conf > [[inputs.diskio]] > Add device_tags = ["ID_SERIAL"] > Use ID_SERIAL Flag in Grafana
        This Means That Upon Booting, You Don't Have to Worry About SD* Mounts Changing
        You Could Override the Serial Number With "DISK01" etc. So the Serials Would Never Show Unless You Want Them To
      Post Arguments: "/bin/sh -c 'apk update && apk upgrade && apk add ipmitool && apk add smartmontools && telegraf'"
    Docker - Grafana
    CA Plugin: IPMI Tools

    Dashboard Variables (Update These For Your Server):

    Let me know if you have any questions or are having any issues getting this up and running if you are interested. I am happy to help. I haven't been this geeked out about my UNRAID server in a very long time. This is the cherry on top for my UNRAID experience going back to 2014 when I built my first server. Thanks everyone!

    VERSION 1.3 (Latest)
    Ultimate UNRAID Dashboard - Version 1.3 - 2020-09-21 (falconexe).json

    VERSION 1.2 (Deprecated)
    Ultimate UNRAID Dashboard - Version 1.2 - falconexe.json
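    As a quick reference, here is a rough shell sketch of the Telegraf prep steps above. The container name "telegraf" and the host-side location of telegraf.conf are assumptions; adjust them to match your own template.

        # Sketch only - container name and config location are assumptions.
        # Install the tools the sensor/SMART/IPMI plugins call (add -u root if the image needs it):
        docker exec telegraf apk add lm_sensors smartmontools ipmitool

        # Then enable the relevant sections in the host-side telegraf.conf, for example:
        #   [[inputs.smart]]
        #     attributes = true
        #   [[inputs.ipmi_sensor]]
        #   [[inputs.apcupsd]]
        #   [[inputs.diskio]]
        #     device_tags = ["ID_SERIAL"]
        # and restart the container so the changes take effect:
        docker restart telegraf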
  8. 4 points
    VERSION 1.3 IS FINALLY HERE! After many, many hours, little sleep, and just short of a metric crap-ton of edits/revisions/code changes/code merges, I have finally finished development on UUD version 1.3. This is a HUGE update and should make everyone's lives much easier when adapting this to their own UNRAID server. The dashboard is now extremely dynamic. I have removed hard coding throughout and implemented REGEX where possible. If it couldn't work for EVERYONE's server, I rewrote the code so it could. I wanted to give a HUGE shout-out to @GilbN for his continued help, support, and coding on this dashboard. He has been extremely helpful, especially with the dynamics (REGEX and Global Dashboard Variables). Without further ado...

    Core Changes:
    - This Release is Related to Bug Fixes, Code Cleanup, Adding More Dynamic Ability For a Vast Range of Users/Configurations, and Continued Fine Tuning
    - Added/Modified Dashboard Variables (See Below) to Support a Wide Range of Users/Architecture/Use Cases
    - Implemented REGEX Throughout Dashboard When and Where Possible
    - Re-Wrote All Code Using Serial Numbers Where Possible (DiskIO, S.M.A.R.T. Device Temperatures, etc.)
      - You Can Now Set Value Mappings on These Serial Numbers to Forever Label Drives by Drive Number! See Panel Descriptions For Usage
      - Requires Additional Setup: There is a Way to Make Drive Order/Mapping Permanent by Using the Serial Number of the Drive
        - Add the Following Path to the Telegraf Docker
        - Then Edit telegraf.conf > [[inputs.diskio]] > Add device_tags = ["ID_SERIAL"] > Use ID_SERIAL Flag in Grafana
        - This Means That Upon Booting, You Don't Have to Worry About SD* Mounts Changing
        - You Could Override the Serial Number With "DISK01" etc. So the Serials Would Never Show Unless You Want Them To
    - Fan Speed Gauges: Added REGEX to Extract Fans From IPMI Sensors List
    - Updated Drive Temperatures (Celsius) Panel to Display All Drive Temps Including Flash (Boot)
    - Updated Docker CPU to Use New Variable Name "cputhreads" Instead of "cpucores" (Deprecated)
    - Changed All Array Share Queries to Use Path = "/mnt/user0" Instead of Device = "sfhs" (Provides More Accurate/Consistent Results)
    - Changed/Ensured All Panel Reference Query IDs to Be In Order by Appearance/Alphabetical (Query 1 = A, Query 2 = B, etc.)
    - Changed All References and Code Involving Dual Assets to 1/2 As Opposed to 0/1
      - Not Treating as an Array - Example: CPU 00/CPU 01 Becomes CPU 01/CPU 02
    - Updated Code In Drive S.M.A.R.T. Health Summary Panel to Use Current Standards/Nomenclature

    Bug Fixes:
    - UPS - Cost This Year: Resolved Calculation Error Where Proper Daily Growth Was Not Enumerating
    - Grammatical Errors Throughout Dashboard

    New Panels/Graphs:
    - Array Storage Utilized %: Displays Percentage of Array Usage
    - Cache Storage Utilized: Displays Usage in GB of Cache Drive(s)
    - Cache Storage Utilized %: Displays Percentage of Cache Drive(s) Usage
    - Network Interfaces (TX): Uses REGEX | Includes All Network Interfaces (NICs, Bridges, Docker, & Virtual)
    - Network Interfaces (RX): Uses REGEX | Includes All Network Interfaces (NICs, Bridges, Docker, & Virtual)

    UI Changes:
    - Moved Uptime Clock Panel Above "Overwatch" Section
    - Changed Uptime Clock Panel to Transparent Style
    - Moved Array/Disk Stat Panels to Very Top of Dashboard
    - Renamed Panels:
      - Fan Speeds > Fan Speed Gauges
      - Disk Storage > Array Disk Storage
      - Drive S.M.A.R.T. Health > Drive S.M.A.R.T. Health Overview
      - Drive Temperatures > Drive Temperatures (Celsius)
      - CPU 00 > CPU 01
      - CPU 01 > CPU 02
      - UPS Load vs Time Left > UPS Load Vs. Time Left
      - This Year's Cost > Cost This Year

    Dashboard Variables:
    - Modified CPU Cores Variable Name to Match Label: cpucores > cputhreads
    - Modified UPS Variables to Have "UPS" Prefix in Name
      - kwhprice > upskwhprice
      - maxwatt > upsmaxwatt
    - Added Flash (Boot) Drive Variable
      - Uses Serial Numbers. This way it will NEVER CHANGE. Set it once and forget it! (Requires New Dependency - See Above)
      - Usage: Select Single Flash (Boot) Drive
    - Added Parity Drive(s) Variable
      - Uses Serial Numbers. This way they will NEVER CHANGE. Set it once and forget it! (Requires New Dependency - See Above)
      - Usage: Select 1 or More Parity Drives
    - Added Cache Drive(s) Variable
      - Uses Serial Numbers. This way they will NEVER CHANGE. Set it once and forget it! (Requires New Dependency - See Above)
      - Usage: Select 1 or More Cache Drives
    - Added Array Drive(s) Variable
      - Uses Serial Numbers. This way they will NEVER CHANGE. Set it once and forget it! (Requires New Dependency - See Above)
      - Usage: Select 1 or More Array Drives (Not Flash, Not Parity, & Not Cache)

    Added Descriptions To the Following Panels:
    - Fan Speed Gauges - Note: Uses REGEX to Parse Fan Names From IPMI Sensor List!
    - Array Growth (Week) - Note: Query Options > Min Interval - Must Match on Week/Month/Year (Set to 2h [2 Hours] by Default For Performance Reasons)
    - Array Growth (Month) - Note: Query Options > Min Interval - Must Match on Week/Month/Year (Set to 2h [2 Hours] by Default For Performance Reasons)
    - Array Growth (Annual) - Note: Query Options > Min Interval - Must Match on Week/Month/Year (Set to 2h [2 Hours] by Default For Performance Reasons)
    - Docker CPU - Note: Uses Variable
    - Flash I/O (Read & Write) - Note: Uses Variable
    - Cache I/O (Read & Write) - Note: Uses Variable
    - Array I/O (Read) - Note: Uses Variables | Use Overrides > Display Name to Dynamically Name Serial Number (Field Name) to Drive Number
    - Array I/O (Write) - Note: Uses Variables | Use Overrides > Display Name to Dynamically Name Serial Number (Field Name) to Drive Number
    - Drive S.M.A.R.T. Health Overview - Note: Uses REGEX
    - Drive Temperatures (Celsius) - Note: Use Overrides > Display Name to Dynamically Name Serial Number (Field Name) to Drive Number
    - Network Interfaces (RX) - Note: Uses REGEX | Includes All Network Interfaces (NICs, Bridges, Docker, & Virtual)
    - Network Interfaces (TX) - Note: Uses REGEX | Includes All Network Interfaces (NICs, Bridges, Docker, & Virtual)
    - CPU 01 - Note: Uses REGEX To Find Cores in CPU 01. Change REGEX According to Your Number of Cores! Example: CPU 01 Has 8 Cores (16 With HyperThreading) - The REGEX For Cores 0-15 is "/cpu(1[6-9]|2[0-9]|3[01])|cpu-total/"
    - CPU 02 - Note: Uses REGEX To Find Cores in CPU 02. Change REGEX According to Your Number of Cores! Example: CPU 02 Has 8 Cores (16 With HyperThreading) - The REGEX For Cores 16-31 is "/cpu(1[6-9]|2[0-9]|3[01])/"
    - CPU 01 Core Load - Note: Uses REGEX To Find Cores in CPU 01. Change REGEX According to Your Number of Cores! Example: CPU 01 Has 8 Cores (16 With HyperThreading) - The REGEX For Cores 0-15 is "/cpu(1[6-9]|2[0-9]|3[01])|cpu-total/"
    - CPU 02 Core Load - Note: Uses REGEX To Find Cores in CPU 02. Change REGEX According to Your Number of Cores! Example: CPU 02 Has 8 Cores (16 With HyperThreading) - The REGEX For Cores 16-31 is "/cpu(1[6-9]|2[0-9]|3[01])/"
    - IPMI Fan Speeds - Note: Uses REGEX to Parse Fan Names From IPMI Sensor List!
    - UPS Load % - Note: Uses Variables
    - Current Load kWh - Note: Uses Variables
    - Average UPS Load - Note: Uses Variables
    - Current UPS Load - Note: Uses Variables
    - UPS Load Vs. Time Left - Note: Uses Variables
    - Estimated Yearly Cost - Note: Uses Variables | Adjust Field Unit For Your Set UPS Currency Variable As Required!
    - Actual Cost This Year - Note: Uses Variables | Adjust Field Unit For Your Set UPS Currency Variable As Required!
    - Average Daily Cost - Note: Uses Variables | Adjust Field Unit For Your Set UPS Currency Variable As Required!

    See Post Number 1 For the New JSON File!

    Alright, I'm finally heading to bed. Let me know if you run into any issues. Thanks guys. I hope you ENJOY! 😁
  9. 4 points
    Hey guys, just wanted to say a huge thanks for putting @GilbN and me in the top 3 yesterday on the entire forum. Never thought that would happen. We really appreciate it! (wipes away small tears...) 😅
  10. 4 points
  11. 4 points
    Ok, let me be a little more clear. There is no publicly accessible official timeline. What limetech does with their internal development is kept private, for many reasons. My speculation of the main reason is that the wrath of users over no timeline is tiny compared to multiple missed deadlines. In the distant past there were loose timelines issued, and the flak that ensued was rather spectacular, IIRC. Rather than getting beaten up over progress reports, it's easier for the team to stay focused internally and release when ready, rather than try to justify delays. When you have a very small team, every man hour is precious. Keeping the masses up to date with every little setback doesn't move the project forward, it just demoralizes with all the negative comments. Even "constructive" requests for updates take time to answer, and it's not up to us to say "well, it's only a small amount of time, surely you can spare it". The team makes choices on time management, it's best just to accept that and be happy when the updates come.
  12. 3 points
    https://github.com/atribe/unRAID-docker/pull/7 Merged. Template should be updated in around 1 hour.
  13. 3 points
    We have indeed made a lot of progress in this thread. I now have a temporary stopgap solution running on my system that seems to work very well (SAS drives spin down in sync with Unraid's schedule, no sporadic / unexpected spin-ups). Since quite a few people expressed interest in this, I thought I'd share the stopgap, so I packaged it into a single run-and-forget script. We can use it until Limetech puts the permanent solution into standard Unraid code.

    To use, simply place the attached script somewhere on your flash drive (e.g. /boot/extra) and run it like so:

        bash /boot/extra/unraid-sas-spindown-pack

    It should be effective immediately. Assuming it works well for you, you can add a line in your "go" script to run it upon system boot. Essentially, it does the following:
    1. Installs a script that spins down a SAS drive. The script is triggered by the Unraid syslog message reporting this drive's (intended) spin down, and actually spins it down.
    2. Installs an rsyslog filter that mobilizes the script in #1.
    3. Installs a wrapper for "smartctl", which works around smartctl's deficiency of not supporting the "-n standby" flag for non-ATA devices. When this flag is detected and the target device is SAS, smartctl is bypassed.

    As always, no warranty, use at your own risk. It works for me. With that said, please report any issues. Thanks and credit points go to this great community, with special mention to @SimonF and @Cilusse.

    EDIT: Just uploaded an updated version. Please use this one instead; the previous one had a small but nasty bug that sneaked in during final packing. Apologies.

    unraid-sas-spindown-pack
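    To illustrate the third piece, here is a rough sketch of what a smartctl wrapper along those lines could look like. It is not the attached pack, just the idea: the smartctl.real path, the assumption that the device is the last argument, and the SAS detection via the inquiry output are all assumptions.

        #!/bin/bash
        # Hypothetical wrapper installed as /usr/sbin/smartctl (sketch, not the attached pack).
        # Assumes the real binary was moved aside to /usr/sbin/smartctl.real and that the
        # device path is the last argument.
        REAL=/usr/sbin/smartctl.real
        dev="${@: -1}"
        case " $* " in
          *" -n standby "*)
            # SAS detection here is an assumption: look for the SAS transport line in the inquiry output.
            if "$REAL" -i "$dev" 2>/dev/null | grep -qi 'Transport protocol:.*SAS'; then
              # Report "check skipped, drive in standby" using smartctl's default -n exit code.
              exit 2
            fi
            ;;
        esac
        exec "$REAL" "$@"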
  14. 3 points
    Would it be possible to implement SR-IOV in Unraid? I specifically want it for SMB Direct, but there are other applications for it, like using a single GPU for multiple VMs.
  15. 3 points
    You had doubts that data collecting junkies love data? 🤣
  16. 3 points
    Are you really too lazy to read a few of the latest posts in a thread before posting the exact same issue that's been discussed for the past week?
  17. 3 points
    @binhex Does the container only use a single .ovpn file from the appdata directory for configuration? Can I put all of the PIA port-forwarding capable server .ovpn files in there so that it can try them until it gets a working API? Just this week I had to change from Montreal to Vancouver to Spain. I didn't know if you have it scripted to try the next ovpn in a sequence.
  18. 3 points
    As you are all probably aware, PIA has issues with regard to port forwarding on their legacy network. I have raised a support ticket, and I have just gone through the pain of checking each endpoint, so I thought I would share my email as it details the results. Things are slightly better than I thought, but still not great!:- I will let you know what the response is from PIA tech support, I am not holding my breath!
  19. 3 points
    @binhex Would it be possible to have the script run through a list of .ovpn files after failing to port forward with one? Use case would be, CA-Montreal is having port-forward API issues right now, and as it is the first in the list alphabetically (I assume), the script only chooses it to retry with. Could it instead take a list of all .ovpn files in the config/openvpn directory and try them each in succession until a successful API call is made? As always, thanks for your work!
  20. 3 points
    Been running it for a few months now and love it; it's been super stable (disabled C-states and set Power Supply Idle Control to Typical in the BIOS). Both Intel NICs work out of the box and register as 10Gb, and the IOMMU groupings are nice as well. Details of my system are in my signature. I am on the latest 6.9.0 Beta25, using a recompiled kernel to add Nvidia drivers from @ich777's nice docker tool - Here

    For BIOS and BMC:
    BMC Version - 1.30.00 - Download
    BIOS Version - L1.37 (BETA BIOS I requested from Support) - Download

    The BETA BIOS has been rock solid. The difference with this one is that not only does it let you set bifurcation (the current release BIOS doesn't even have bifurcation), it does so on a per-PCIe-slot basis. The prior BETA BIOS had the PCIe slots set up in groups, so if you wanted to set one slot to x4x4x4x4 you got two slots as x4x4x4x4; I didn't like that. I am posting it in my OneDrive for you since I find it a requirement.
  21. 3 points
    Hello, have a look under Settings > Network Settings. There you should find both cards and also be able to enter the gateway. DNS servers (up to 3 of them) I only have on eth0. Further down, under "Interface Rules", you can assign the network cards (by MAC address) to the eth0 / eth1 ports. If your router (DSL/cable) does DNS (even if it is only forwarding), you should enter it as the gateway - that might already be enough. Otherwise, try changing the assignment under "Interface Rules". Obviously... make sure both network ports are connected by cable to the router/switch (in the same VLAN, if present).....
  22. 3 points
    UNRAID Fanless & Silent

    Case: Streacom DB4 Fanless Chassis
    Power: Streacom ST-NANO120 Pico
    Mobo: Supermicro X11SCL-IF
    CPU: Intel Xeon E-2134, 4 Cores | 8 Threads @ 3.4 GHz
    RAM: 2x Kingston 32GB DDR4 ECC
    NET: 2x Gigabit (bond; fault tolerance), 1x IPMI
    USB-Drive: 16GB Verbatim Store'n'Go USB 3.0
    NVME-SSD (VMs & Cache): Samsung 970 Pro 512GB
    HDDs: 3x Seagate ST12000VN0007 12TB

    VMs: Windows 10 2004, Windows Server 2019 & Exchange 2019, Proxmox Mail Gateway
    DOCKERs: letsencrypt, piHole, mariaDB, Nextcloud, bitwarden, photoprism, airsonic, jellyfin, dokuwiki, calibre, quake 3 server, unifi controller, jitsi (in planning ...)

    Built this little beauty at the beginning of last year and I am still loving it. The chassis is a bit expensive but the best case I have ever had my hands on: well designed and thought out, simply fantastic! A huge shoutout to "spaceinvader one" and everybody in this community for introducing me to docker (which I had completely neglected since its implementation in Unraid) - it makes this build more versatile than I could have ever dreamed of.
  23. 3 points
    ca-montreal is working fine (I'm using it), others may also work, but as you mentioned, until the whole next-gen work is done things could be up and down.
  24. 3 points
    "Time is an illusion. Lunchtime doubly so."
  25. 3 points
    In a case where the current release does not have support for the user's hardware, it is not an unreasonable recommendation. You can argue this both ways. Many users are using the current beta successfully on live systems without any major issues. Also, as well as providing support for newer hardware, the beta fixes some significant known issues that are present in the 6.8.3 release. It is likely the next 6.9.0 release will be an RC, so it is not as though we are early in the beta release cycle.
  26. 3 points
    I'll revisit this topic for 6.10 release - why? Because we need to make some other non-trivial changes in md/unraid driver to support 5.8 kernel. Here's my pattern in dealing with driver changes: Realize any non-trivial driver coding is intricate and perilous. That is, one tiny bug can cause quite a bit of damage, as in losing data kind of damage. Therefore whenever I embark on driver changes, I have to "clear the deck" of all other coding distractions and concentrate just on this, and then test, and then test some more. First, however, is making a business case for the changes in the first place. For example, is coding time better spent dealing with identifying a possible specific failed device in a P+Q array, and then what to do about it, or say, adding ZFS support (or pick any other feature)? In your particular example, it would help to demonstrate a realistic use case for this feature that could benefit a lot of users, considering also that not everyone is even running a P+Q array.
  27. 3 points
    Plex sucks. If you're only doing local streaming use Kodi
  28. 3 points
    Because LT has a small team and other features are way higher on the priority list. You will have to out-scream the [Include graphics card driver] crowd, the [Include Wifi] crowd and the [ZFS] crowd.
  29. 2 points
    Nice! Can you post a screenshot or two of the updated Dashboard? I'll be sure to share this on our social media/monthly newsletter.
  30. 2 points
    I was using rm with the 'find' command to purge aged files in the recycle bin. I've changed it now because it seems that 'rm -rf' removes too much, including files that haven't aged out. I switched to using the -delete switch with 'find', which doesn't exhibit the same behaviour. He was just explaining how the use of 'rm -rf' with 'find' was potentially causing some problems. I was able to duplicate the issue he was explaining.
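    To illustrate the difference (the recycle-bin path and the 30-day age threshold below are just examples):

        # Risky: if a matching directory is handed to rm -rf, everything inside it is removed, aged or not.
        find /mnt/user/.Recycle.Bin -mtime +30 -exec rm -rf {} \;

        # Safer: -delete only removes the individual files that matched the age test themselves.
        find /mnt/user/.Recycle.Bin -type f -mtime +30 -delete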
  31. 2 points
    Hi guys, so I've finally started using rclone mount again after several years of only doing weekly scripted backups. This prompted me to give the plugin a bit of an overhaul to fix some of my own long-standing nuisances with it. Gone are the default scripts and the script editor (you should really use User Scripts for all your script needs). The config file can still be edited. The settings page has been overhauled, and I intend to deprecate the beta branch of the plugin in favour of this new unified model. You are now able to change branch at will in the settings page, as well as update rclone if a new version is available. The current version as well as the newest version will now be shown in the settings page. If you want to check it out, you can install/update the plugin manually by inserting THIS URL in the plugins / install plugin settings menu. It's a drop-in replacement for the stable branch. Whenever this version has proven itself without any major bugs, I will merge it with the master branch and deprecate the beta branch. @Stupifier @DZMM if you get the chance, please test out the new version and see if there are any glaring bugs I've missed. Edit: A picture to show the new settings page
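    With the bundled scripts gone, a mount would now typically live in a User Scripts job. A minimal sketch (the remote name, mount point and flags are examples, not plugin defaults):

        #!/bin/bash
        # Hypothetical User Scripts job: mount an rclone remote at array start.
        # "gdrive" and the mount point are placeholders - use your own remote and path.
        mkdir -p /mnt/user/mount_rclone/gdrive
        rclone mount gdrive: /mnt/user/mount_rclone/gdrive \
          --allow-other \
          --dir-cache-time 72h \
          --daemon
        # Note: --allow-other needs user_allow_other enabled in /etc/fuse.conf on some systems.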
  32. 2 points
    Hey guys. I am hard at work on version 1.3 (bug fixes mostly). But I did come across something pretty interesting. There is a way to make drive order/mapping permanent by using the serial number of the drive. You have to add the following path to the Telegraf docker. Then Edit telegraf.conf > [[inputs.diskio]] > Add device_tags = ["ID_SERIAL"] > Use ID_SERIAL Flag in Grafana. This means that upon booting, you don't have to worry about SD* mounts changing, and you could override the serial number with "DISK01" etc., so the serials would never show unless you want them to. I will test and report back.
  33. 2 points
    I'm connected to Toronto without issue (with strict port forwarding set to off) - torrents happily transferring, and have been for a couple of days now.
  34. 2 points
  35. 2 points
    Glad you got it sorted. 👍 A recurring theme for ASRock board owners when issues occur.
  36. 2 points
    Feature Request: Like many others I use port forwarding. Periodically the Canadian locations will fail for several days and I have to go in and manually reconfigure for a new location. Can you instead adjust the docker to accept a list of locations and, in a failure event, just try the next location on the list in round-robin fashion?
  37. 2 points
    Maybe the message is finally getting through that they have a problem; it's only taken them two weeks! 🙄
  38. 2 points
    The situation for PIA is as follows (at present):-

    1. Port forwarding on their legacy network is near enough dead; there are only certain servers for endpoint ca-vancouver that still work right now.
    2. Port forwarding using openvpn on their next-gen network is currently not possible; the api is not accessible.
    3. Native wireguard support on their next-gen network is currently not possible (hacks are available but they are fragile!).

    So where does that leave us? Well, not quite up sh*t creek without a paddle :-). You can set 'STRICT_PORT_FORWARDING' to 'no' and this will then allow you to connect to any legacy endpoint. However, this will mean you won't have a working incoming port, so speeds will be lower than usual - it's not ideal I know, but it's the best we have got right now until PIA sort their sh*t out (not happy).

    If you want to help out then PLEASE raise a support ticket via the PIA web portal (https://www.privateinternetaccess.com/helpdesk/new-ticket) and bitch about the above; the more people that complain, the more pressure they have to do something about it!

    If you are completely pissed off with PIA (and I would completely understand this!) and want to switch, then I recommend Mullvad at this time. They are more pricey, but they are solid, privacy focused, and support port forwarding.
  39. 2 points
    Sounds like you've got a plan that should work. One thing you might reconsider: since these are already known to be good, you might as well just go ahead and put them in with the New Config. If you put them in during New Config there is no need for them to be clear, since parity is being rebuilt anyway, and Unraid will let you format them after you start the array and begin the parity sync. You can even format them during the parity sync if you want.
  40. 2 points
    I had a similar issue: Windows 10 VM boot-looping with a blue screen, Ryzen 3950X. Solved it by adding the following to the end of the VM XML, as suggested elsewhere on this forum:

        <qemu:commandline>
          <qemu:arg value='-cpu'/>
          <qemu:arg value='host,topoext=on,invtsc=on,hv-time,hv-relaxed,hv-vapic,hv-spinlocks=0x1fff,hv-vpindex,hv-synic,hv-stimer,hv-reset,hv-frequencies,host-cache-info=on,l3-cache=off,-amd-stibp'/>
        </qemu:commandline>
  41. 2 points
    I have verified the port; the cable works. I tried resetting my equipment and it didn't work. I will definitely try the 6.9 beta today. I really, really appreciate all the help. I will post the results of using the beta version today. EDIT - UPDATE: Switching to the 6.9 beta made all the difference. After dealing with a few more issues (resetting the CMOS, tweaking a few more things in the BIOS, creating a new USB boot device with 6.9, and disabling Network Stack), everything worked. I am able to boot into GUI mode if I want to work from the machine, and it showed up on my network with all drives showing up properly. I am excited!!! I am certain I will run into other problems along the way, but the main issues of getting started are at least resolved.
  42. 2 points
    I just updated my hardware a month ago from an Intel E6600 with 2 GB RAM (DDR2 baby!). 😎 Ran fine for a decade with Unraid, but I just used it as a NAS. Cache_dirs was literally the only program I ran (well, and pre_clear). Now I'm with the cool kids like you with 32 Gig.
  43. 2 points
    @lzrdking71 @theGrok I have identified the issue. Not exactly sure how it was related to my previous change, but it caused a race condition. I have now corrected this and the image is building; please pull it down in around 1 hour from now.
  44. 2 points
    Nice Unraid blog by @nachbelichtet! https://nachbelichtet.com/unraid-die-omv-und-freenas-alternative-fuer-heimserver/
  45. 2 points
  46. 2 points
    The first release candidate of OpenZFS 2.0 has been released: https://github.com/openzfs/zfs/releases/tag/zfs-2.0.0-rc1

    I have built it for unRAID 6.9.0 beta 25. For those already running ZFS 0.8.4-1 on unRAID 6.9.0 beta 25 who want to update, you can just un-install this plugin and re-install it again (don't worry, you won't have any ZFS downtime), or run this command and reboot:

        rm /boot/config/plugins/unRAID6-ZFS/packages/zfs-0.8.4-unRAID-6.9.0-beta25.x86_64.tgz

    Either way you should see this:

        # Before
        root@Tower:~# modinfo zfs | grep version
        version:        0.8.4-1
        srcversion:     E9712003D310D2B54A51C97

        # After
        root@Tower:~# modinfo zfs | grep version
        version:        2.0.0-rc1
        srcversion:     6A6B870B7C76FB81D4FEFB4
  47. 2 points
    Never have done this myself, but from what I read on those forums you could probably even skip the reconfigure step. Everything important is on the USB key, and disks are identified by their S/N. The steps would be:
    1. backup flash drive config and appdata just in case (see the sketch below)
    2. power off
    3. remove old HW
    4. install new HW
    5. make sure every drive is well plugged in
    6. boot on the flash drive
    7. enjoy
    Damn, trurl was faster.
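    A minimal sketch of the "just in case" backup step from the console (the destination paths are examples; a flash backup can also be taken from the webGUI):

        # Destination paths are examples only; stop the Docker service first for a consistent appdata copy.
        rsync -a /boot/ /mnt/user/backups/flash-$(date +%F)/
        rsync -a /mnt/user/appdata/ /mnt/user/backups/appdata-$(date +%F)/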
  48. 2 points
    Devices that drop and come back with a new ID won't be automatically re-added; you need to reboot and then run a scrub. Also see here for better pool monitoring.
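    For reference, the post-reboot commands might look like this ("tank" is a placeholder pool name):

        # Check that the pool sees the re-added device, then verify the data.
        zpool status -v tank
        zpool scrub tank
        # Check the result once the scrub has finished.
        zpool status -v tank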
  49. 2 points
    I'm currently searching for some users to help test my custom build with iSCSI built into Unraid (v6.9.0 beta25). EDIT: Also made a build for Unraid v6.8.3. Currently the creation of the iSCSI target is command line only (I will write a plugin for that, but for now it should also work this way - only a few commands in targetcli). The configuration is stored on the boot drive and loaded/unloaded with the array start/stop. If somebody is willing to test the build, please contact me. As always, I will release the complete source code and also implement it in my 'Unraid-Kernel-Helper Docker Container' so that everyone can build their own version with other features like Nvidia, ZFS, DVB also built in.
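    For anyone wondering what "a few commands in targetcli" might look like, here is a rough sketch of exporting a file-backed LUN. The IQNs, backing file path and size are made-up examples, and the exact workflow for this custom build may differ:

        # Sketch only: create a file-backed backstore and export it as an iSCSI LUN.
        targetcli /backstores/fileio create name=disk1 file_or_dev=/mnt/user/iscsi/disk1.img size=100G
        targetcli /iscsi create iqn.2020-09.local.tower:disk1
        targetcli /iscsi/iqn.2020-09.local.tower:disk1/tpg1/luns create /backstores/fileio/disk1
        # Allow a specific initiator (replace with your client's IQN).
        targetcli /iscsi/iqn.2020-09.local.tower:disk1/tpg1/acls create iqn.1991-05.com.microsoft:my-desktop
        targetcli saveconfig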
  50. 2 points
    I have the RX 570 passed through. Had to install the drivers through the device management window, as every time I installed via .exe it would lock up with a black screen. Don't force a shutdown, as if it's like mine the GPU won't restart and you would have to restart the server. May help, may not - hope so.