Leaderboard
Popular Content
Showing content with the highest reputation on 04/14/21 in Posts
-
I wanted to show my current setup here. I built my server into a 10-inch rack myself; the HDDs and motherboard are mounted on simple rack shelves.
Hardware:
MB: Gigabyte C246N-WU2
CPU: Xeon E-2146G with the boxed cooler from the i3-9100 (which was installed before)
RAM: 64GB ECC
PSU: Corsair SF450 Platinum
LAN: 10G QNAP card
HDD: 126TB, consisting of 1x 18TB Ultrastar (parity) and 7x 18TB WD Elements (Ultrastar white label)
Cache: 1TB WD 750N NVMe M.2 SSD
UPS: AEG Protect NAS, lying sideways on rubber feet
3 points
-
Today's blog follows a couple of students' educational journey with Unraid in their classroom: https://unraid.net/blog/unraid-in-the-classroom If you are an educator and would like to teach with Unraid in the classroom, please reach out to me directly, as we would love to support this educational program at your place of instruction!
3 points
-
I am using flash drives that I have had for 5 years plus with no problems. My experience is that if you stick with USB2 drives and avoid the tiny form factor ones, they DO tend to be reliable.
2 points
-
Hopefully @giganode doesn't have too much to do, so he can answer here soon, since he is the guy who can help with the AMD cards.
2 points
-
So we finally got a couple of people back on the server playing at the same time. There was minimal building, but mainly a long-lasting boss fight. And weirdly enough, the server recorded inverse RAM spikes during the time this was happening (from 20:00 to 2:00, with a small break at around 23:00)... Other than that, the RAM usage continued the same pattern between 3.5 GB and 4 GB. At this point it really just seems like a Valheim server software 'feature', so I probably won't continue with updates here, as there's nothing ich777 can do about that! Thanks
1 point
-
This release contains bug fixes and minor improvements.
To upgrade:
First create a backup of your USB flash boot device: Main/Flash/Flash Backup.
If you are running any 6.4 or later release, click 'Check for Updates' on the Tools/Update OS page.
If you are running a pre-6.4 release, click 'Check for Updates' on the Plugins page.
If the above doesn't work, navigate to Plugins/Install Plugin, select/copy/paste this plugin URL and click Install: https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer.plg
Bugs: If you discover a bug…
1 point
-
I have had flash drives in use since the unRAID 4.7 days... more than 10 years ago. One is a USB 1.0 drive and the other is a USB 2.0 drive. Both are running fine and haven't given me any problems. If you are losing USB drives at a rate of one per year, that seems quite excessive for the amount of writes that should normally be going to a flash drive.
1 point
-
The screenshots indicate that both video decoding and video encoding are hardware accelerated. However, the GPU can only transcode the video stream. What it can't do is re-wrap from MKV to MP4, transcode the audio, or burn in subtitles; those all use the CPU, and the CPU also has to feed the GPU with data. I don't know what you were expecting to see, but it looks about right to me. Try turning off the subtitles and see how much of a difference it makes.
1 point
-
I was able to rescue the most important data. Thank you for your help, guys!!
1 point
-
1) Make sure your current config is NOT set to auto-start the array.
2) Note the position of each drive, as listed in unRAID. Cross-reference with the serial number on each disk if needed. TAKE SPECIAL NOTE OF PARITY!
3) Move the disks and flash drive to the new machine and boot.
4) Your array will most likely be all wrong. Do a new config, assign the disks as they were before, and ESPECIALLY THE PARITY!
5) If you did it right, you can tag the parity as VALID and go on with your life. Otherwise, you can rebuild parity, but only AFTER you verify all your data drives are present.
1 point
-
Hi, I also seem to be having an issue at the moment. The logs are showing some errors; can you offer any advice, please? I'm unable to access the GUI. The PC is on the same network as unraid/deluge (192.168.10.0/24). Getting this:
2021-04-14 14:45:41,730 DEBG 'start-script' stdout output:
[warn] Unable to successfully download PIA json to generate token from URL 'https://10.0.0.1/authv3/generateToken'
[info] 4 retries left
[info] Retrying in 10 secs...
Thanks
supervisord.log
1 point
-
Very unlikely to happen, as Apple itself has dropped AFP from the latest macOS, Big Sur. You should be able to set up Time Machine to point to an SMB share.
1 point
-
That will be why you cannot access it now: LAN_NETWORK is defined as '10.10.20.0/24', and your IP will now be in a different range and thus blocked. This might have been fixed by switching endpoint; it definitely looks to be working now. So I think your issue is that you are blocked on your VPN range because LAN_NETWORK does not include it, see my previous comment above. Wait until you get home and try it on your LAN, or alternatively add your VPN range to LAN_NETWORK (use a comma to separate the networks), restart the container, and try accessing the web UI again.
1 point
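Why an out-of-range client gets blocked can be sketched with Python's ipaddress module. This is only an illustration of the CIDR check the container's whitelist effectively performs; the '10.10.20.0/24' value is from the post above, but the two client addresses are made-up examples:

```python
import ipaddress

# LAN_NETWORK from the post; only source IPs inside it are allowed.
lan_network = ipaddress.ip_network("10.10.20.0/24")

home_client = ipaddress.ip_address("10.10.20.42")   # hypothetical LAN client
vpn_client = ipaddress.ip_address("192.168.10.5")   # hypothetical VPN client

print(home_client in lan_network)  # True  -> web UI reachable
print(vpn_client in lan_network)   # False -> request blocked
```

Adding a second network to LAN_NETWORK (comma-separated, as suggested above) simply means the check passes if the client IP is inside any of the listed ranges.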
-
I think some of the PIA servers are having issues with Wireguard? I usually use Toronto CA, but had to change to Ontario CA to get it to work.
1 point
-
It's fine, honestly. I didn't want it to come across narky; I just wanted to prevent any further 'I just upgraded Unraid and now it's broke' type posts, that's all 🙂
1 point
-
The problem is eth117 / br117, suspected to be caused by a dirty setting in the network config file. A simple fix is to delete the whole config file, or edit it yourself.
1 point
-
I managed to get my FiveM server up and running using the info in this thread to get txAdmin set up. I was having issues getting the recipe to run for the base ESX default recipe; I had to add the mariadb and phpMyAdmin docker containers. It appears to be saving and working currently, but it's 6AM and I'm calling it a night. If anyone else has set up an ESX roleplaying server, I'd like to know how you did it and what issues you encountered. I currently can't figure out how to update txAdmin; it says I'm using an outdated version, but it works, so I guess the annoying red te…
1 point
-
No idea... but start reading from here: it seems @giganode solved the same issue by changing the refresh rate.
1 point
-
Can you try to do a force update of the container and try it again if it stops working after some time? I have updated the container and it should hopefully work now.
1 point
-
You are a goddamn hero. Thank you! I seem to be fully up and running again.
1 point
-
This is it, yes; it's marked as beta for the application, not for the packaging as a docker image. Sent from my CLT-L09 using Tapatalk
1 point
-
So the drives finished formatting overnight, and now smartctl -i /dev/sdX shows no more Type 2 formatting. Will now proceed with the parity build and see if errors come up again.
1 point
-
Success! The array started without problem. The faulty disk is being emulated and the data appears to be completely intact. The replacement drive goes in this morning, along with a second parity drive. Many thanks for your assistance! Very much appreciated!
1 point
-
You can do this yourself with a 'user.sh' somewhere on your server (but I don't recommend putting it into the root of your CSGO server directory), and the container will check whether the library is installed on every start/restart of the container. To do that:
Create a file somewhere on your server named 'csgo.sh'.
Put the following in the file (please note the '-y' on the third line, so that this library is installed automatically):
#!/bin/bash
apt-get update
apt-get -y install lib32z1
Mount the script in the Docker template from your server to the…
1 point
-
As I recently added a (12tb) disk, I was interested in this thread. My journey: before getting Unraid, I had a 24tb RAID5 (4x6tb = 18tb usable, 1 parity). I now have an Unraid server with 3x12tb and 1x6tb. I "decided" to lose 12tb to parity and have 30tb usable. The "old" RAID5 I use for backup now. I had a drive fail in the RAID5 and it rebuilt fine, but it wasn't pleasant. It takes a LONG time to transfer/save/rebuild data. Backup is everything; parity is making it "easy." I say "easy" because I spent weeks transferring data to Unraid. Test how…
1 point
-
As far as I know, you're not at RAID anything. Why do you think you're at RAID 0? Your 2 data disks are independent and provide no redundancy. And I'm not suggesting RAID anything. Unraid IS NOT RAID. Even after you add parity you won't be at RAID anything, at least not technically. Unraid allows a parity disk that lets you rebuild a missing disk from parity and all the other disks, but the implementation is different from any traditional RAID. Unlike RAID, Unraid allows you to mix different sized disks in the array, and you can easily add disks without rebui…
1 point
-
I am suggesting that parity is never a substitute for backups. But parity is still a good idea. Don't cache the initial data load. Never try to cache more than the cache can hold; it is impossible to move from cache to array as fast as you can write to cache. If you keep all the source data until after you get it all copied to your Unraid server, you could wait until then to add parity, since the copy would go faster without parity. Then, after making sure you have another copy of anything important and irreplaceable, you could reuse the disks with th…
1 point
-
Most remote desktop software isn't optimised for gaming, but software like Steam Link is. Steam recently released a Linux version, so you could install a Linux distro, install Steam Link, and stream from your more powerful server (i.e. a gaming VM with a GPU etc.). All peripherals are managed on the client side (your old PC) through Steam Link. I play games on my lounge TV using the Android TV version of this app and it's a pretty good experience. You can also set this up with a Raspberry Pi if you want to sell off your old PC.
1 point
-
I found the solution to this particular problem: I had to enable "Host access to custom networks" in the Advanced Docker settings and enable the desired subnet.
1 point
-
A little update for the server. If you're looking for a good and inexpensive UPS for a 19" rack: PowerWalker VI 1200 RLE. Verdict:
- Does what it should
- Integrates superbly with Unraid
- Inexpensive
- Looks good, and a power consumption display is built right in
The connection to Unraid worked out of the box. However, I also installed the "NUT Network UPS Tools" plugin to pass the data on to my iobroker NUT adapter for analysis etc. PS: The 36W is idle operati…
1 point
-
It's in the beta. Switch your repository to emby/embyserver:beta or wait for the next release.
1 point
-
Don't you care about power consumption? Because that will be an exceptional power hog. For Docker and 1G LAN, that CPU's performance is the lower limit; practically all current CPUs exceed it by far. Do you need an iGPU for Plex? Because that CPU wouldn't even have one. You presumably don't want to spend €500? That's roughly the range of my recommendation for you: https://geizhals.de/?cat=WL-1881351 Or the budget version for €330: https://geizhals.de/?cat=WL-1928373 The hardware offers many upgrade opti…
1 point
-
...what motherboard is that? 2x SATA3 and 2x SATA2, only one M.2 of unknown specification... at least if the richest man in the world is to be trusted. Do you have a good smoke detector and insurance? Quite honestly: a used brand-name Dell, Fujitsu, Lenovo or HP, even a DDR3 one if need be, will also do the job, if nothing better can be found. What is your budget, excluding drives? Edit: or rather, what exactly do you already have on the table?
1 point
-
I got an HBA last week from this guy: https://www.ebay.fr/usr/bd-xl?_trksid=p2047675.l2559 It got here pretty fast from the Netherlands (to France). It's just a sample of one, but that is better than nothing.
1 point
-
Woah, Nelly! Just hit 65MB/s over Wireguard before even configuring an ideal endpoint. That dog'll hunt. Thank you, boys.
1 point
-
No. See Q28: https://github.com/binhex/documentation/blob/master/docker/faq/vpn.md
1 point
-
Change the endpoint; see Q28 for how to do this: https://github.com/binhex/documentation/blob/master/docker/faq/vpn.md
1 point
-
DEVELOPER UPDATE: 😂 But for real, guys, I'm going to be stepping away from the UUD for the foreseeable future. I have a lot going on in my personal life (divorce, among other stuff) and I just need a break. This thing is getting too large to support by myself. And it is getting BIG, maybe too big for one dash. I have plenty of ideas for 1.7, but I'm not even sure if you guys will want/use them, not to mention the updates that would be required to support InfluxDB 2.X. At this point, it is big enough to have most of what people need, but adaptable enough for people to create custom panels…
1 point
-
How to Setup Nextcloud on unRAID for your Own Personal Cloud Storage https://www.youtube.com/watch?v=fUPmVZ9CgtM
How to Setup and Configure a Reverse Proxy on unRAID with LetsEncrypt & NGINX (use Swag) https://www.youtube.com/watch?v=I0lhZc25Sro
If you use Swag, you no longer need to follow the last video: How to Migrate from Letsencrypt to the New Swag Container https://www.youtube.com/watch?v=qnEuHKdf7N0
1 point
-
I would suggest backing up the database, or rather copying it into a new database, and then playing with that. So: install the other Nextcloud container, stop it, and then copy the web files and shares into the new container's directory. As the last step, adjust the database settings in the new container's config.php so that it connects to the copy, then start it and see what happens. As long as you only work with copies of the data, nothing can go wrong. PS: I use Beyond Compare for comparing directories and files.
1 point
-
Unraid only disables a disk when a write to it fails. The failed write makes the disk out-of-sync with the array, so it has to be rebuilt. Read errors won't disable a disk, because a failed read doesn't make the disk out-of-sync with the array. But it is possible that a failed read will cause Unraid to get the data from the parity calculation and try to write it back to the disk. If that write fails, the disk is disabled.
1 point
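The "get the data from the parity calculation" step above relies on single parity being, conceptually, a byte-wise XOR across all data disks. A toy Python sketch of the idea; the 'disks' here are invented byte strings, not Unraid's actual on-disk format:

```python
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length buffers."""
    return bytes(x ^ y for x, y in zip(a, b))

# Three made-up 'data disks' of equal size (a real array pads to the
# size of the largest disk).
disks = [b"\x01\x02\x03", b"\x10\x20\x30", b"\xaa\xbb\xcc"]

# Parity is the XOR of every data disk.
parity = reduce(xor_bytes, disks)

# If one disk is missing, XOR of parity with the survivors recovers it.
lost = disks[1]
survivors = [disks[0], disks[2]]
rebuilt = reduce(xor_bytes, survivors, parity)

print(rebuilt == lost)  # True
```

This is also why a rebuild needs every other disk to be present and readable: each recovered byte depends on the corresponding byte of parity and of all surviving data disks.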
-
Don't know if this has been mentioned yet, but quicksync stops working after version 1.22.0.4163. I tested this across multiple containers, each with the same results.
1 point
-
Maybe add a feature: a button to mark the topic as solved automatically. Would be nice.
1 point
-
Note to self: info here: https://packaging.python.org/tutorials/installing-packages/#ensure-you-can-run-pip-from-the-command-line
pip does not work from scratch; first run:
python -m ensurepip --default-pip
Then optionally upgrade pip using:
pip install --upgrade pip
And then just install the package from https://pypi.org/project/requests/ using:
pip install requests
1 point
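If you want to check from within Python whether pip (or any package) is already importable before running the bootstrap step, something like this works; the module names below are just examples:

```python
import importlib.util

def has_module(name: str) -> bool:
    """Return True if the named top-level module can be imported here."""
    return importlib.util.find_spec(name) is not None

print(has_module("pip"))       # usually True once ensurepip has run
print(has_module("requests"))  # only True after 'pip install requests'
```

find_spec only locates the module; it does not import it, so this check is cheap and side-effect free.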
-
If your problem is solved, or was a simple misconception on your part, then please do NOT delete the thread. Other users may still find it interesting, and there may also be links within other threads to your posts.
1 point