Leaderboard

Popular Content

Showing content with the highest reputation on 07/25/21 in all areas

  1. Yes, this one is a bit strange; I'll need to dig deeper to figure out what the issue is
    2 points
  2. How to set up linuxserver's Transmission on a VPN without using a different docker image with VPN support. This guide assumes you have already set up Transmission; it is only meant to get Mullvad VPN working with it.

     Mullvad Account & Configuration
     1. Generate an account with Mullvad VPN: https://mullvad.net/en/account/#/ Save the account number so you can access your account later on; you will not be able to recover this information later and there is no password. Make sure you make a payment, or you will not be able to create port forwards or generate config files in the coming steps.
     2. Generate config files: https://mullvad.net/en/account/#/wireguard-config/ Select Linux, click Generate Key, select Country/City/Server, and download the file. Unzip it and save the files for later use.
     3. Create a port forward: https://mullvad.net/en/account/#/ports Select the same city & key that you selected in step 2 and click Add Port. In this example, the randomly generated port was '55214'; we'll need this for Transmission's incoming port. You will not be able to select the port number to forward; a random one will be assigned to you!

     Privoxyvpn Configuration
     1. Install binhex-privoxyvpn.
     2. Set a VPN user and password (not sure this is really needed, but I set it anyway).
     3. Set Provider to 'custom'.
     4. Set Client to 'wireguard'.
     5. Set ENABLE_PRIVOXY to 'yes'.
     6. Click Add Another Path, Port, Var... and add the port that was generated during the Mullvad setup, '55214' in our example. We need to add two of these, one for UDP and one for TCP.
     7. Add the port for the Transmission web GUI, which is 9091. We are done with configuration: click Apply/Done and let it install and start.
     8. Once Privoxyvpn has started, it is going to complain about not having a WireGuard config (look at the log).
     9. Once it complains about the missing WireGuard config, stop Privoxyvpn if it hasn't stopped already.
     10. Copy one of the .conf files from the Mullvad zip you downloaded to '/mnt/user/appdata/binhex-privoxyvpn/wireguard' (see note below). Each config file is a different server if you selected "all servers" during config creation; only copy one file. At a later date, if you want to switch servers, delete everything in the wireguard folder, copy in a new config, and restart Privoxyvpn.
     ** Pick your favorite way to get access to the '/mnt/user/appdata/' folder. I like to share the folder so I can use Windows networking to access it. I know this isn't the best way, but it's how I do it. Some like to use the binhex Krusader app.
     11. Start Privoxyvpn and log into its console.
     12. Type: curl ifconfig.io This will show your WAN IP address; hopefully it's showing the exit point of your Mullvad VPN! Check your normal WAN IP by going to http://ifconfig.io/ in your normal web browser.

     That's it for the Privoxyvpn configuration; if the WAN IP was the VPN exit point, you are good to go. Let's move on to configuring Transmission.

     Transmission Configuration
     1. Stop Transmission and edit the docker.
     2. Click Advanced View.
     3. Set Extra Parameters to '--net=container:binhex-privoxyvpn'.
     4. Set Network Type to 'None'.
     5. Set 'Host Port 2' to the port that was generated during the Mullvad setup, '55214' in our example. I like to click Edit here and update the default value and description; just my preference.
     6. Click Apply and start Transmission.
     7. Open the console for Transmission.
     8. Type: curl ifconfig.io This should show the same VPN WAN exit IP that we saw during step 12 of the Privoxyvpn setup.

     At this point everything will be working. However, you may have noticed you no longer have access to the web interface, or that Transmission Remote GUI isn't connecting anymore. Well, read on, friend; we'll get that straightened out.

     Transmission Web GUI Access
     Transmission is now behind a proxy, so to get access we also need to be behind the proxy. I am primarily a Chrome user, so in my case I'm going to set up Firefox as a permanent browser that is behind the VPN at all times, while Chrome stays my normal non-VPN browser.
     1. Download and install Firefox: https://www.mozilla.org/
     2. Click the top-right menu and go to 'Options'.
     3. Scroll all the way down and click 'Settings...' under 'Network Settings'.
     4. Enter your Unraid server's LAN IP address and port 8118 in 'HTTP Proxy', and check the box 'Also use this proxy for FTP and HTTPS'.
     5. Go to the 'Docker' page on your Unraid server and look at the IP:9091 port mapping line. My example is 172.17.0.7:9091, which tells us that while on the VPN we can reach the Transmission web GUI via: http://172.17.0.7:9091

     Transmission Remote GUI Access
     If you have not heard of Transmission Remote GUI, I highly recommend it; it's really nice for managing your Transmission server. You can install it from: https://github.com/transmission-remote-gui/transgui/releases
     1. Let's start by editing our connection. Go to the 'Docker' page on your Unraid server and look at the IP:9091 port mapping line; my example is 172.17.0.7:9091.
     2. Enter the IP and port from the previous step into Transmission Remote GUI. In my example they were 172.17.0.7 & 9091.
     3. Go to the 'Proxy' tab and enter your Unraid server's IP address and port 8118, as well as the username/password if you provided one in step 2 of the Privoxyvpn setup.
     And that's it; it should be working!
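The whole chain can be sanity-checked from the Unraid host in one step. A minimal sketch, assuming the container names 'binhex-privoxyvpn' and 'transmission' used in this guide:

```shell
# Both containers share one network namespace ('--net=container:...'), so
# both should report the same VPN exit address. Container names are
# assumptions taken from this guide; adjust them to match your setup.
docker exec binhex-privoxyvpn curl -s ifconfig.io
docker exec transmission curl -s ifconfig.io
# If the two addresses differ, Transmission is not going through the VPN.
```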
    1 point
  3. Big thanks to Head to Community Apps and Search or look under "Language". Thank you to all who participated in the translation of this.
    1 point
  4. The start is fine so far, and that is also normal behavior. The BIOS is one approach when nothing works; to begin with, I would first select the Nvidia for the Audio Device as well, please, not the Intel ...
    1 point
  5. I use the tuner plugin, and I only run the check every other month. It runs from 10 PM on the first Sunday of the month to 6 AM the next morning, for either 6 or 7 nights; sometimes after 6 nights it has a couple of hours left, or a couple of minutes. However, for some reason, with the tuner plugin it seems to take longer than if I were to just let it run straight through. My parity checks also slowed down when I went from two HBAs for the array to one, but I really wanted to install a GPU, so I needed to free up a slot. Now that I have switched to 40GbE networking, I can probably remove one of the 10GbE cards, install another HBA, and split the array again. HBA prices (like just about everything else) have gone through the roof; I only paid about $240 for the 9400-16i.
    1 point
  6. Pool device replacement is currently broken. If the pool is raid1 you can add the new device and then remove the other ones one by one; alternatively you can use this to back up the pool to the array, install the new device, and move the data back.
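The add-then-remove sequence can be sketched with the standard btrfs tools. A rough illustration, assuming the pool is mounted at /mnt/cache; the device names are placeholders, not taken from any specific system:

```shell
# Add the replacement device to the raid1 pool first...
btrfs device add /dev/sdX1 /mnt/cache
# ...then remove the old device; btrfs migrates its data during removal.
btrfs device remove /dev/sdY1 /mnt/cache
# Confirm the resulting pool membership.
btrfs filesystem show /mnt/cache
```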
    1 point
  7. I haven't looked closely at which generation that is, but GVT-g works from Broadwell onward. If either the Intel-GPU-TOP or Intel-GVT-g plugin is installed, then no, since the plugins take care of that; but in principle, yes.
    1 point
  8. V2 uses socket 1155 and V4 socket 1150, so they are entirely different generations, which makes Intel's naming choice a poor one. Is v4 actually a requirement for GVT-g? As I said, I would think about a generation change anyway. Something along these lines: https://www.ebay-kleinanzeigen.de/s-anzeige/gaming-pc-gtx-1050ti-4g/1796960802-228-760 Then sell the old gear and you might come out at around €200?! With a stronger / more efficient GPU included right away.
    1 point
  9. You should move the data off your cache. Maybe your issue has to do with your video share? Is it possible that videos are downloaded to the cache and then moved to "video"? Since the mover does not consider files with cache set to NO, they might pile up. In any case, your diagnostics might help us provide better advice. Go to Tools / Diagnostics and attach the full zip to your next post.
    1 point
  10. PCIe 4.0 x8 = PCIe 3.0 x16 (approximately). You will need to spend more on a riser to ensure it runs at PCIe 4.0 speeds, though; many will drop down to PCIe 3.0. I think you'd barely notice a difference in performance, if any. One area to watch: I believe the bottom slot goes through the chipset, so make sure you don't use that slot for gaming.
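The equivalence above is easy to check on paper: effective per-lane throughput is roughly 0.985 GB/s for PCIe 3.0 and 1.969 GB/s for PCIe 4.0 (after 128b/130b encoding). A quick back-of-envelope sketch:

```shell
# PCIe 4.0 doubles the per-lane rate, so 8 lanes of 4.0 match 16 lanes of 3.0.
awk 'BEGIN {
  printf "PCIe 4.0 x8  : %.1f GB/s\n", 1.969 * 8   # 15.8 GB/s
  printf "PCIe 3.0 x16 : %.1f GB/s\n", 0.985 * 16  # 15.8 GB/s
}'
```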
    1 point
  11. A 3090 TUF (2.5-slot) on a Dark Hero leaves half a slot of clearance, which is OK, but the Strix is 2.75 slots wide, so you may want to consider buying a different card to help with thermals. You can also consider mounting a fan in front of this space to help with airflow. If noise isn't an issue, you can use MSI Afterburner to ramp up the fans to keep the area cooler. Ultimately you will need to move a lot of air in and out of the case to keep up. One scenario that might help with spacing: 3070 on top, P2200 on the bottom, and the 3080 on a riser somewhere it can expel heat more easily, i.e. a vertical mount. I've watercooled my CPU, and will eventually do the same for the GPU to get the space back and get better thermals.
    1 point
  12. This is what I used:

     mkdir /tmp/ramdisk
     mount -t tmpfs -o size=115G tmpfs /tmp/ramdisk

     If Machinaris is running before you create the RAM disk, you need to restart the Machinaris container to make sure it picks it up correctly. You can console into the container and type "df -h" to verify paths, drive size, and how much room is available:

     Filesystem  Size  Used  Avail  Use%  Mounted on
     tmpfs       120G     0   120G    0%  /plotting2
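A tmpfs mount disappears on reboot, so on Unraid you would typically recreate it at boot. A hedged sketch, assuming the standard /boot/config/go boot script and the 115G size from the post:

```shell
# Append the ramdisk setup to Unraid's boot script (this path is the usual
# default, but verify it on your system before relying on it).
cat >> /boot/config/go <<'EOF'
mkdir -p /tmp/ramdisk
mount -t tmpfs -o size=115G tmpfs /tmp/ramdisk
EOF
```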
    1 point
  13. This solution works perfectly for running multiple instances of Factorio on unRAID. Thank you.
    1 point
  14. I use HandBrake with a custom preset to convert to x265 MKV files. I then set up a watch folder so any new media gets converted before going to my library. For files you currently have, you can toss them in the watch folder or manually queue them up to be converted.
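The same conversion can be scripted headless with HandBrakeCLI, which is one way to drive a watch folder. A minimal sketch; the file names and quality value are illustrative, not the author's custom preset:

```shell
# Re-encode one file to x265 in an MKV container (RF 22 is a placeholder).
HandBrakeCLI --input input.mp4 --output output.mkv \
  --encoder x265 --quality 22 --format av_mkv
```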
    1 point
  15. No, that wouldn't work, because the driver versions are tied to the unRAID versions, or more precisely to the kernel versions that unRAID runs on. You can always check which versions are available here, for example for unRAID version 6.9.2: Click The main problem here is that I only list the 8 newest driver versions on the plugin page, to keep the page tidy. The next thing is that I won't compile this specific driver version for newer unRAID versions. I think you have two options here: stick with 6.9.2 and install the driver manually, or ask the developer of nsfminerOC to update it or find out why it's not working on newer Nvidia driver versions. You can of course install the driver manually, but that would involve some file editing.
    1 point
  16. You are amazing! I've been waiting for something like this for a while. So easy to set up. Great work!
    1 point
  17. BIOS setting: restore power after AC loss.
    1 point
  18. Absolutely! You really want to be creating the new portable plots now, not the original solo plots anymore. As you say, with portable plots (new pool_contract_address, not the old pool_pk) you can either self-pool, in which case you earn all the XCH rewards when you win a block, OR join a pool and get consistent XCH farming rewards. The average returns are the same, but pooling is much less volatile, and you can easily switch pools without having to re-plot. With portable plots you can switch between the two options above, if you change your mind, using the same plots.
    1 point
  19. @SquidOx I confirm that it works; you can read my post HERE 😁
    1 point
  20. The proper solution is the update; if necessary, the workaround can be found there:
    1 point
  21. That is a 6.8.3 issue due to a change at the remote end. It is fixed in 6.9.2 (although if for some reason you cannot upgrade to that release there is a workaround posted in the forums).
    1 point
  22. Yeah, I'm having a lot of super weird occurrences after the most recent update also. The UI/Dashboard for Unmanic becomes pretty much unresponsive, with "set_mempolicy: Operation not permitted" in my logs. And I have one file that just keeps failing:

     [h264 @ 0x5605116f0680] SEI type 195 size 888 truncated at 48
     [h264 @ 0x5605116f0680] SEI type 170 size 2032 truncated at 928
     [h264 @ 0x5605116f0680] SEI type 81 size 1920 truncated at 32
     [h264 @ 0x5605116f0680] SEI type 195 size 888 truncated at 47
     [h264 @ 0x5605116f0680] SEI type 170 size 2032 truncated at 927
     [h264 @ 0x5605116f0680] SEI type 81 size 1920 truncated at 31
     [h264 @ 0x5605116f0680] A non-intra slice in an IDR NAL unit.
     [h264 @ 0x5605116f0680] decode_slice_header error
     [h264 @ 0x5605116f0680] no frame!
     [h264 @ 0x5605116f0680] SEI type 163 size 248 truncated at 32
     [h264 @ 0x5605116f0680] non-existing PPS 2 referenced
     [h264 @ 0x5605116f0680] SEI type 163 size 248 truncated at 31
     [h264 @ 0x5605116f0680] non-existing PPS 2 referenced
     [h264 @ 0x5605116f0680] decode_slice_header error
     [h264 @ 0x5605116f0680] no frame!
     [h264 @ 0x5605116f0680] SEI type 195 size 1448 truncated at 32
     [h264 @ 0x5605116f0680] SEI type 195 size 1448 truncated at 30
     [h264 @ 0x5605116f0680] top block unavailable for requested intra mode -1
     [h264 @ 0x5605116f0680] error while decoding MB 0 0, bytestream 24
     [h264 @ 0x5605116f0680] concealing 3600 DC, 3600 AC, 3600 MV errors in I frame
     [h264 @ 0x5605116f0680] SEI type 33 size 2024 truncated at 16
     [h264 @ 0x5605116f0680] non-existing PPS 2 referenced
     Guessed Channel Layout for Input Stream #0.1 : 5.1

     That's literally all it says in the log, and it takes a good 10-20 seconds for that to even appear. The most recent Unmanic push... is weird.
    1 point
  23. Can someone get this Youtube frontend working on Unraid? https://hub.docker.com/r/omarroth/invidious
    1 point
  24. I've been troubleshooting erroneous disk utilization warning emails and found there is no way to visually determine any particular disk's utilization status. I would like to propose adding a column to the display grid (far right, between Free & View) on the Main tab (Array Devices & Cache Devices sub-tabs) with simply "%" for the heading and the calculated percent full for each disk. Maybe go out a couple of decimal places so you can spot a disk about to go over the threshold before it actually happens. A total at the bottom too, but frankly that's already covered by the "Show array utilization indicator" doohickey.
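The proposed column is simple arithmetic over values the Main tab already shows. A small sketch of the calculation, using a hypothetical helper ('pct' is an illustrative name, not part of Unraid):

```shell
# Percent full, two decimal places, from used and total sizes in the same
# unit. 'pct' is a made-up helper for illustration only.
pct() { awk -v u="$1" -v s="$2" 'BEGIN { printf "%.2f%%", u * 100 / s }'; }
pct 750 1000   # prints 75.00%
```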
    1 point