Leaderboard

Popular Content

Showing content with the highest reputation on 05/10/21 in all areas

  1. Out of the box this docker isn't configured optimally for logging. After some research, these are the two changes I've made to get better logging. Both are made to the config.yaml file in chia's appdata folder: change log_level to INFO (from WARNING) and change log_stdout to true (from false). Change one adds more useful info to the logs; change two pushes the logs to the standard Docker logging mechanism, which means they are visible from the GUI's log button (a sketch of the edit is shown after this post). Happy hunting!
    3 points
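     A minimal sketch of those two edits, assuming the stock layout of chia's config.yaml (the exact surrounding keys can differ between versions; the path below is the usual Unraid appdata location, adjust to yours):
         # /mnt/user/appdata/chia/mainnet/config/config.yaml
         logging:
           log_level: INFO     # was WARNING; adds more useful detail to the log
           log_stdout: true    # was false; sends logs to Docker's stdout so the GUI log button shows them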
  2. Overview: Support thread for Partition Pixel/Chia in CA.
     Application: Chia - https://github.com/Chia-Network/chia-blockchain
     Docker Hub: https://github.com/orgs/chia-network/packages/container/package/chia
     GitHub: https://github.com/Chia-Network/chia-docker
     This is not my docker, nor my blockchain, and I'm not a developer for them either. I simply made an Unraid template for the already existing docker so that it is easier for me and others to install it on an existing Unraid server. I can support any changes required to the xml template and provide assistance on how to use the parameters or how to use the docker itself. Please read up on SSD endurance if you don't know about Chia and you plan on farming it: https://github.com/Chia-Network/chia-blockchain/wiki/SSD-Endurance
     Instructions:
     1. Install Partition Pixel's Chia via CA.
     2. Create a 'chia' directory inside of your appdata folder.
     3. Skip to step 4 if you do not have an existing chia wallet. Inside this new folder, create a new file called 'mnemonic.txt' and copy and paste the 24-word mnemonic from your wallet inside (every word one after another on the same line with 1 space in between, like this sentence).
     4. Back on the docker template, choose a location for your plotting if you plan on plotting on your server (preferably a fast SSD here).
     5. Choose a location for storing your plots (this is where they will be used to 'farm'; preferably an HDD here).
     6. Feel free to click on "show more settings" and change any other variable or path you would like.
     7. Save changes, pull down the container and enjoy!
     If you have some unassigned or external HDDs that you want to use for farming, edit /mnt/user/appdata/chia/mainnet/config/config.yaml and add more plot directories like so:
         plot_directories:
         - /plots
         - /plots2
     Then create a new path in the docker template like so:
         config type: Path
         container path: /plots2
         host path: /mnt/an_unassigned_hdd/plots/
     Here are some often used command lines to get you started. Open a console in the docker container, then type:
         venv/bin/chia farm summary
         venv/bin/chia wallet show
         venv/bin/chia show -s -c
         venv/bin/chia plots check
     Command to start plotting:
         venv/bin/chia plots create -b 5000 -r 2 -n 1 -t /plotting/plot1 -d /plots
     -b is the amount of RAM you want to give, -r is the amount of threads, -n is the number of plots you want to queue, -t is the temp dir and -d is the completed directory. (The same commands can also be run from the host; see the sketch after this post.)
     From user ropes: If you only want to harvest on this docker, then you don't need to create a mnemonic file with your passphrase. Instead you can do the following (more secure imo):
         chia plots create [other plot options] -f <farmer key> -p <pool key>
     If you want to run in parallel, just run the command in another terminal window as many times as your rig will allow.
     Here are all the available CLI commands for chia: https://github.com/Chia-Network/chia-blockchain/wiki/CLI-Commands-Reference
     From user tjb_altf4:
    2 points
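     The same chia CLI can also be driven from the Unraid terminal without opening the container console; a minimal sketch, assuming the container is named chia (match whatever name your template uses):
         # status commands via docker exec
         docker exec -it chia venv/bin/chia farm summary
         docker exec -it chia venv/bin/chia wallet show
         # start one plot: ~5000 MiB RAM, 2 threads, temp dir /plotting/plot1, final dir /plots
         docker exec -it chia venv/bin/chia plots create -b 5000 -r 2 -n 1 -t /plotting/plot1 -d /plots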
  3. You do not have to enable remote mode. This is what the My Servers settings page looks like: The link to the flash backup information is here: https://wiki.unraid.net/My_Servers#Flash_Backups_are_Not_Encrypted That Wiki page will probably answer any other questions you might have.
    2 points
  4. This is how I monitor my farm. If you install the Nerd Pack from Community Applications, then navigate to Nerd Pack under Settings and install tmux. Now launch the Unraid terminal & run tmux; you can then have multiple (tabbed) terminal windows in one window. I have 1 tab per plot & can jump around them as needed. You can also close the terminal window & relaunch your session by running tmux again from the terminal. It also works across different devices or applications, so you can ssh in from your phone or PuTTY etc., run tmux again & you will be able to see those other terminal windows (a few of the relevant keystrokes are sketched after this post). Hope this helps.
    2 points
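     A short cheat sheet for that workflow, using tmux's default key bindings (nothing here is specific to Unraid):
         tmux                     # start a new session from the Unraid terminal
         # Ctrl-b c               open a new window (e.g. one per plot job)
         # Ctrl-b n / Ctrl-b p    switch to the next / previous window
         # Ctrl-b d               detach and leave everything running
         tmux attach              # reattach to the running session later, e.g. over SSH from a phone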
  5. Hi @mathgoy, I've been using Frigate with an NVIDIA GPU. To set it up, go to the docker template and add --rm --runtime=nvidia to the "Extra Parameters". You will also need to add two new variables: "NVIDIA_VISIBLE_DEVICES" and "NVIDIA_DRIVER_CAPABILITIES". After that you should be able to run "nvidia-smi" in the docker console. Note that you won't see anything under Processes here, even after you get GPU decoding to work. After that you should adjust your config to use HW decoding; see here: https://blakeblackshear.github.io/frigate/configuration/nvdec These are my changes (all my cameras are 1080p h264); a sketch of this kind of config is shown after this post. When you're done you should be able to see the GPU being used by running "nvidia-smi" from the Unraid console. I have 8 cameras, so 8 processes (and another one is Deepstack). Hope it helps.
    2 points
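     The screenshots from the post above are not reproduced here; this is only a hedged sketch of the kind of change the linked nvdec page describes for h264 streams, placed in Frigate's config file (the template variables stay as described above):
         # Frigate config.yml - global ffmpeg section, per the linked nvdec docs for h264 cameras
         ffmpeg:
           hwaccel_args:
             - -c:v
             - h264_cuvid    # NVDEC decode; the docs list a different decoder (e.g. hevc_cuvid) for h265 streams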
  6. Hi guys, this week I have made a tutorial about how to add a cache drive to your server. It also shows how to upgrade or replace an existing cache drive without losing data, and how to create a raided btrfs cache pool. Hope you find it useful! How to add a cache drive, replace a cache drive or create a cache pool
    1 point
  7. @Partition Pixel it should be: ghcr.io/chia-network/chia:latest I updated yesterday, and I'm currently on 1.1.6.dev0 according to chia version. You should only need to press the check for updates button on the docker page if you want to force a check (or pull it by hand, as sketched below).
    1 point
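     If you'd rather force the pull from the command line instead of the Docker page, a one-line sketch using the image path from the post above:
         docker pull ghcr.io/chia-network/chia:latest    # manually fetch the newest image from GitHub Container Registry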
  8. Nice! Glad you found something to meet your needs. This community never surprises me ;-). As far as a backup solution goes, it's something we've talked about for years, but backup is an entire business in and of itself. There is a plethora of companies/solutions out there for this, ranging from the basic like rsync to the enterprise like CommVault, and of course a million little variants and specialities in between, such as Veeam for VM backups. On top of that, if we want to make use of features like btrfs snapshots or send/receive, that can further complicate things. So while we may eventually bring in a backup solution, that isn't a promise, and in the meantime you can probably find a variety of ways to back up your system with some basic Googling, but here's a nice write-up from our friend over at @spxlabs: https://www.spxlabs.com/blog/2020/10/2/unraid-to-remote-unraid-backup-server-with-wireguard-and-rsync
    1 point
  9. Just from a logic standpoint, the left side doesn't make any sense. Good find. I hadn't had a chance to dig through it. I also have minimal PHP experience, but that's just standard logic: $tmp>1 would just return true or false, and you can't get the count of a boolean, so I have no idea how this actually works for anyone. When I get a few minutes, I'll see what is on my working machine. edit: if(count($tmp>1) && (trim($tmp[1])!='')) It's the same on my working machine; how the hell does this work?
    1 point
  10. Thanks for the info, I've managed to sort it out myself now. It wasn't loading the network driver for the retrofitted LAN port. It did boot and loaded everything else correctly; I hooked up a monitor and saw a driver error, nct-something. The 2.5G port, which didn't work with the beta, now works without problems. So the end of the story: the 2.5G port works now and the 1G port doesn't. Exactly the opposite of the beta 😂😂
    1 point
  11. Thank you for that long and detailed answer. I think I'll be able to fix it, and I will do that tomorrow after my Exam. Best, Kippenhof
    1 point
  12. It dropped again. Try swapping NVMe slots and see if the problem stays with the slot or follows the device.
    1 point
  13. So I got it to save my value. Line 38 of "/usr/share/webapps/rutorrent/plugins/cookies/cookies.php" looks like this:
          if(count($tmp>1) && (trim($tmp[1])!=''))
      If you change it to
          if( (count($tmp)>1) && (trim($tmp[1])!=''))
      it works; you just need to move the comparison outside of the "count" function. This is all I could find that MIGHT be relevant (see the short demo after this post): https://www.php.net/manual/en/function.count.php#refsect1-function.count-changelog
      Edit to add: the source file hasn't been changed in 9 years, so I dunno why this is suddenly a problem for us. I don't know enough PHP to guess. https://github.com/Novik/ruTorrent/blob/master/plugins/cookies/cookies.php
    1 point
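     A tiny PHP demo of why that line started failing, which matches the count() changelog linked above: passing a non-array/non-Countable value to count() was quietly tolerated by old PHP, raises a warning as of PHP 7.2, and throws a TypeError in PHP 8, so the operand order suddenly matters.
         <?php
         $tmp = explode('=', 'name=value');   // ['name', 'value']
         // Original order: $tmp > 1 is evaluated first, so count() receives a boolean.
         // Old PHP returned 1 here; PHP 7.2+ warns; PHP 8 throws a TypeError.
         var_dump(count($tmp > 1));
         // Fixed order: count the array first, then compare.
         var_dump(count($tmp) > 1 && trim($tmp[1]) !== '');   // bool(true)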
  14. It's all sorted now. The server really had no connection to the internet. Now it works. Somehow there was an error with the network during the installation.
    1 point
  15. 1 point
  16. Hi @IpDo Thanks for the tips. I did exactly what you advised and it worked. I was missing the GPU ID (I was using 'all' as mentioned in the documentation) as well as the extra arguments. After it worked with the regular template, I moved to the new frigate-nvidia template (big thanks to @yayitazale) and it was even easier since all the variables were already created. Just one thing: in the YAML config file, the documentation tells us to add a certain snippet to enable hardware decoding. It didn't work for me and it crashed the docker (even deleting it for some reason, several times!). Instead, I followed your recommendations and added your version, which worked like a charm. Thanks again mate
    1 point
  17. Thanks both. That was simple enough! 🙈 Parity building is now humming along nicely. It'll probably be done by midnight. Cheers for the swift response!
    1 point
  18. edit: misread the issue, never mind
    1 point
  19. Just saw your post in the unbalance thread, and was about to suggest you check here. No need now
    1 point
  20. This is normally caused by the container still having an active network connection, usually because the container wasn't stopped before removal.
          docker network disconnect <network name> <container name>
      That command should kill any active connections (a slightly longer walkthrough is sketched after this post). Sent from my GM1913 using Tapatalk
    1 point
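     A short sketch of finding the stuck endpoint before running the command above; the network and container names here are placeholders:
         docker network ls                                          # find the network that refuses to go away
         docker network inspect my_custom_net                       # the "Containers" section shows what is still attached
         docker network disconnect -f my_custom_net my_container    # -f forces the disconnect even if the container is already gone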
  21. I have FIREWALL = true, wouldn't that do effectively the same?
    1 point
  22. Ok thanks - will try and open up my mITX then and extract its beloved RTX 2070, which I got as a retrospective steal in April 2020! Re Plex, yes I have Plex Pass and set it up according to your guide, and was running a 4k video on my phone set to max 1080p quality - the UI showed this accurately but the video froze. Will keep you posted - looks like I will need to find a stopgap cheap basic nvidia gpu in the interim (just for the windows vm hosting Blue Iris)... something like a GT 710 I guess?? Blue Iris won't be hardware accelerated on that basis but it's just to make the vm function. Thanks again UPDATE: So I ran memtestG80, which tests GPU VRAM - it works in 128 MB iterations and starts off fine and then suddenly throws off a ton of errors... got the same result from the OCCT testing software. Interestingly enough, the ebay seller immediately accepted my return... pretty horrible situation for many whereby it's impossible to buy anything but the most basic GPUs new! UPDATE2: Am pleased to say I returned that card and got a refund... then "splashed out" on a 1660, just about squeezed it into the case, and passthrough works immediately with no crashes on demos at all. So much time wasted thanks to a lovely dodgy ebay seller!!
    1 point
  23. Haven't even started; with this microchip shortage causing PC parts to have ridiculous prices I'm still waiting. Hope things get better.
    1 point
  24. Looks like a new image has been built; it should appear shortly. It is recommended to update as soon as you can.
    1 point
  25. UPDATE: The problem seems to be fixed. I first lowered the RAM to 2667 MT/s; the server still crashed after about a day. I then disabled c-states in the go file and that seemed to do it (running a Ryzen 3900X). It's been up for over 4 days without an issue, which is the longest it has gone without a crash. Thank you for the help!
    1 point
  26. First you have to add the desired path to your Nextcloud container. To do that, go into the Docker template via "Edit" and add a new path there:
          Config Type = Path
          Name = Host Path %
          Container Path = /mnt/Unraid/Musik
          Host Path = /mnt/user/Musik/  >> enter your desired share here
          Access Mode = Read/Write
      Then, in Nextcloud with an admin account, add it in the settings under "External storage" as follows: under "Configuration" the "Container Path" from above has to be used.
    1 point
  27. Same reason why you drive on a parkway, park on a driveway and get ticketed if you do the opposite. It's simply a misnomer commonly in use here (same thing as referring to the container itself as a "docker")
    1 point
  28. OK. I did not know if reboot shut off power briefly. I guess not. Thanks. I remember the South Park episode, where the internet went off all around the world, causing porn to be unavailable and many other problems, until they power cycled the router.
    1 point
  29. Your vdisk share (presumably where you've got the vdisk mounted for the VM) exists on the cache drive and on disks 1 / 2. On which drive is the vdisk for the VM actually stored? Having it on the array will have a huge performance impact.
    1 point
  30. Yes, I think they released 1.1.5 for this (https://github.com/Chia-Network/chia-blockchain/releases/tag/1.1.5); the docker version should come along soon enough. Some people need to open the port on their router, some don't; if you needed it to make things work, it's possible and fine (imo).
    1 point
  31. WOW! Thank you for this thread. I have been scratching my head about this all week. I posted here. Instead of tweaking the drives, I am just going to disable the spin down delay for any drives in the enclosure and hope for an update to the driver.
    1 point
  32. You could in theory use your GPU in the way you describe, using the amdgpu driver module, but I'm not aware of any Handbrake container that can actually make use of it. (If you find one please let me know, as I could make good use of it too.) Two containers that can take advantage of AMD GPU hardware acceleration, however, are @ich777's Jellyfin container and mauimauer's fork of the linuxserver Plex container. You might also be interested in the Radeontop plugin and the GPU Statistics plugin. All (except mauimauer's Plex) are available via Community Applications.
    1 point
  33. Did you figure this out? I have the same issue. The IP in the log belongs to my laptop. Restarting my laptop stops it, then after a day or so it fills the log again with the same error.
    1 point
  34. Sorry, my mistake. You first have to install the "Nerd Pack" via the CA app. Then install "kbd-1.15.3-x86_64-2.txz" via Nerd Pack. Loadkeys should work now.
    1 point
  35. Read the first post; the how-to is there. You don't have to fill in the IP of the Plex server. The server URL you have to insert is something like https://app.plex.tv/desktop#!/server/e46.....
    1 point
  36. Can you post the output please? Also from memory, I think that the wallet needs to be fully synced with the chia network before it displays anything. Sent from my GM1913 using Tapatalk
    1 point
  37. Needs to be the other way round: plotting is your temporary storage, which should be your SSD, and plots is where the final files are stored, which is for your HDD. This command will show you your wallet:
          docker exec -it chia venv/bin/chia wallet show
      Sent from my GM1913 using Tapatalk
    1 point
  38. That depends. You can of course try, for example, PhoenixMiner from @lnxd, or you could try Jellyfin if you have some video files you can transcode (please read the second post of this thread on how to do that, for example with Jellyfin, and add all the necessary parameters to the container).
    1 point
  39. It depends on what you want to do and how much data you have. Considering my own use case for Unraid, I would set it up as follows:
      All HDDs in the array.
      NVMes in a cache pool, giving a btrfs RAID -> I would put Nextcloud on that, for example.
      All remaining SSDs in another cache pool -> put the Docker containers and VMs on that.
      (P.S.: I haven't checked whether that is feasible with this hardware.) Sorry for posting the same video everywhere, but it simply explains everything: All about Using Multiple Cache Pools and Shares in Unraid 6.9
    1 point
  40. That is the default path for the log file. The first hit on Google tells you how you can change the path.
    1 point
  41. You seem to have the log level set too verbose and it's logging a lot of info that is meant for debugging. On the linked page you can see how to change that, and there is also info on how to change the location of the log file (a sketch of the relevant config.php settings follows this post). But remember where you set it to be, because if you have another problem in the future and someone asks for the logs, you have to know where they are: https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/logging_configuration.html
    1 point
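     A minimal sketch of what those two settings look like in Nextcloud's config/config.php, per the linked logging documentation (the file path is only an example):
         <?php
         $CONFIG = array (
           // ...existing settings stay as they are...
           'loglevel' => 2,                      // 0=debug ... 4=fatal; 2 (warnings) silences the debug chatter
           'logfile' => '/data/nextcloud.log',   // optional custom location; note it down for future support requests
         );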
  42. I don't use GUS, but that looks like you are correct.
    1 point
  43. I would suggest joining the discord. I'm not even sure if the IRC is still maintained.
    1 point
  44. At this time, just using the CPU with your docker is pretty sweet for me, so don't worry. I don't have a beefy server, just an i7 3770 with 16GB RAM and a 1050 Ti at the moment, so for just playing around it's fine. Right now I'm testing the T-Rex miner with the 1050 Ti on the unMineable etchash pool and it seems to work perfectly, about 12.68 MH/s. At the same time I have your docker running on the RandomX pool. And as GPU use with the xmrig miner doesn't seem to be great, maybe it's a waste of time trying to implement the symbolic links solution. But if you achieve it, I would love to try it 😁 Thanks!
    1 point
  45. Thanks for the thorough response. Me and the 10479 people that will ask after me VERY MUCH appreciate it :-)
    1 point
  46. Well, the passthrough already works perfectly: I've passed my RTX 2060 through to my Windows 10 VM and that works perfectly... I'm now trying again with the 1050 Ti. I had also read somewhere that you have to pass the sound device through as well, which I had forgotten in my attempt... I'll test it again and, if necessary, fall back to the 5770; maybe it will work with that one. I've already obtained the matching clean BIOS files. I'll report back.
    1 point
  47. The configuration is often not entirely trivial. I also did a lot of "trial and error" before my RX580 was running. First you have to cleanly separate the IOMMU groups so that passthrough works. Then I also had to add various boot arguments. Here are mine:
          label Unraid OS Test
            kernel /bzimage
            append isolcpus=2-7,10-15 iommu=pt pcie_acs_override=downstream,multifunction nomodeset modprobe.blacklist=i2c_i801,i2c_smbus,snd_hda_intel intel_pstate=disable initrd=/bzroot
      Not all of them are relevant for you, but "nomodeset" and "modprobe.blacklist=i2c_i801,i2c_smbus,snd_hda_intel" could be. That depends on your board, your IOMMU groups, etc. You'll have to fight through it yourself with a lot of reading. Regards, MPC561
    1 point
  48. Posting this just in case anybody else comes here looking for a solution to this. I ran into this issue this evening when trying to pass through a PCIe USB card. The reason this happened was that I previously had the items plugged into the USB hub passed through, and when you reboot the server after altering your system file, they are still in the XML. I fixed this by doing the following:
      1. Remove the USB items from the card and place them into another slot.
      2. Reload the VM template.
      3. Uncheck the previously loaded items.
      4. Save the template.
      5. Re-enable your newly configured card.
      6. Save the template again.
    1 point
  49. Hey man, Just bumped into the same thing setting up Bitwarden on my unRaid build. I had it running on a Debian build which I'm looking to phase out now that unRaid will be the Docker host. Seems like we're hitting that exact same thing. Microsoft wrote an article about this: check it out! Comparing the two settings mentioned in the article between my unRaid build and my Debian host:
      Debian
          [%%%%%%%%%] :/# sysctl vm.legacy_va_layout
          vm.legacy_va_layout = 0
          [%%%%%%%%%] :/# ulimit -s
          8192
      unRaid
          [%%%%%%%%%] # sysctl vm.legacy_va_layout
          vm.legacy_va_layout = 0
          [%%%%%%%%%] # ulimit -s
          unlimited
      Apparently the ulimit for the stack size is set differently on my unRaid system. Now I'm not really a fan of modifying these kinds of things system-wide, since unRaid might have had a good reason to set it to unlimited in the first place, so I was looking for other ways to do it. Fortunately Docker seems to have a switch to modify these on a per-container basis: Docker docs
      Now in my case Bitwarden is using Docker Compose to build the Bitwarden environment, so I had to tweak the yml code. The default docker-compose.yml (located in bwdata/docker) lists the following lines for the mssql container:
          version: '3'
          services:
            mssql:
              image: bitwarden/mssql:1.32.0
              container_name: bitwarden-mssql
              restart: always
              volumes:
                - ../mssql/data:/var/opt/mssql/data
                - ../logs/mssql:/var/opt/mssql/log
                - ../mssql/backups:/etc/bitwarden/mssql/backups
              env_file:
                - mssql.env
                - ../env/uid.env
                - ../env/mssql.override.env
      Now if we create an override file named docker-compose.override.yml and also place it in the same bwdata/docker directory, we only have to put these lines in:
          #
          # Override file for the auto-generated docker-compose.yml file provided
          # by Bitwarden
          # This file sets ulimits on the mssql container because with the ulimit
          # stack size set to unlimited system-wide (as is the case on unRaid),
          # the container refuses to start
          #
          #########################################################################
          version: '3'
          services:
            mssql:
              ulimits:
                stack:
                  soft: "8192000"
                  hard: "8192000"
      After this we can safely start up the container, as it merges the settings together with the mssql service, effectively setting the stack size limits for our mssql container environment the same as they were on Debian. Be aware though that the soft and hard limits use bytes instead of kilobytes (which the ulimit -s command takes), so you'd have to convert the values as I did. Hope this helps! Cheers, Sidney
    1 point
  50. I couldn't find any documentation about this. Does it matter if this file is stored on the cache drive only (for speed) or not? Also, how would I know when to increase the default 1GB size?
    1 point