Leaderboard

Popular Content

Showing content with the highest reputation on 10/20/21 in all areas

  1. OK. Update just released which allows descriptions to be on the cards (defaults to no). Enable it in CA Settings. Minor performance increase in certain cases. Rearranged debugging. If you've got issues with CA not loading / the spinner never disappearing, then: go to Settings (the CA settings page is now in there also, under the User Utilities section), enable debugging and apply; go to the Apps tab and wait at least 120 seconds; go back to Settings, CA Settings and hit Download Log. Upload the file here. (Also re-added 6.8.0+ compatibility - NOT TESTED)
    4 points
  2. Just remove it. Just delete it; script changes are reset at boot. Yes.
    2 points
  3. Yes, just undo everything. As said, a reactivation of Windows may be necessary. Sent from my C64
    2 points
  4. I made the same mistake with flora. I was able to get mine working again by stopping the main machinaris container, removing the machinaris and stats dbs files under /mnt/user/appdata/machinaris/machinaris/dbs/ and starting it back up again and letting it repopulate those.
    2 points
  5. Just got an email from Roon about some major updates to Linux cores. I guess they're switching from Mono to .NET, which requires libicu to be installed ahead of the update. Any thoughts as to whether anything needs to be done on our end for this to work properly? https://help.roonlabs.com/portal/en/kb/articles/linux-performance-improvements
    2 points
  6. Tons of posts related to Windows 10 and SMB as the root cause of the inability to connect to unRaid were fruitless, so I'm recording this easy fix for my future self. If you cannot access your unRaid shares via DNS name ( \\tower ) and/or via IP address ( \\192.168.x.y ), then try this. These steps do NOT require you to enable SMB 1.0, which is insecure. Directions: Press the Windows key + R shortcut to open the Run command window. Type in gpedit.msc and press OK. Select Computer Configuration -> Administrative Templates -> Network -> Lanman Workstation, double click Enable insecure guest logons and set it to Enabled. Now attempt to access \\tower. Related errors: Windows cannot access \\tower. Windows cannot access \\192.168.1.102. You can't access this shared folder because your organization's security policies block unauthenticated guest access. These policies help protect your PC from unsafe or malicious devices on the network.
    1 point
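For Windows editions that ship without gpedit.msc, the same change can reportedly be made through the registry. This is a sketch of the equivalent configuration change, not from the post; the key and value below are the standard LanmanWorkstation client settings, applied from an elevated command prompt:

```
:: Registry equivalent of enabling "Enable insecure guest logons" (sketch; run elevated).
:: AllowInsecureGuestAuth = 1 lets the SMB client use unauthenticated guest access.
reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters" /v AllowInsecureGuestAuth /t REG_DWORD /d 1 /f
```

A sign-out or reboot may be needed before the share becomes reachable.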
  7. By default unRAID, the VMs and Docker containers all run within the same network. This is a straightforward solution; it does not require any special network setup, and for most users it is a suitable solution. Sometimes more isolation is required, for example letting VMs and Docker containers run in their own network environment completely separated from the unRAID server. Setting up such an environment needs changes in the unRAID network settings, but also requires your switch and router to offer additional network capabilities to support this environment. The example here makes use of VLANs. This is an approach which allows you to split your physical cable into two or more logical connections, which can run fully isolated from each other. If your switch does not support VLANs then the same can be achieved by connecting multiple physical ports (however, this requires more ports on the unRAID server). The following assignments are made: network 10.0.101.0/24 = unRAID management connection; it runs on the default link (untagged). Network 10.0.104.0/24 = isolated network for VMs; it runs on VLAN 4 (tagged). Network 10.0.105.0/24 = isolated network for Docker containers; it runs on VLAN 5 (tagged). UNRAID NETWORK SETTINGS We start with the main interface. Make sure the bridge function is enabled (this is required for VMs and Docker). In this example both IPv4 and IPv6 are used, but this is not mandatory; e.g. IPv4 only is a good starting choice. Here a static IPv4 address is used, but automatic assignment can be used too. In this case it is recommended that your router (DHCP server) always hands out the same IP address to the unRAID server. Lastly, enable VLANs for this interface. VM NETWORK SETTINGS VMs will operate on VLAN 4, which corresponds to interface br0.4. Here again IPv4 and IPv6 are enabled, but it may be limited to IPv4 only, without any IP assignment for unRAID itself. On the router, DHCP can be configured, which allows VMs to obtain an IP address automatically.
DOCKER NETWORK SETTINGS Docker containers operate on VLAN 5, which corresponds to interface br0.5. We need to assign IP addresses on this interface to ensure that Docker "sees" this interface and makes it a choice in the network selection of a container. Assignment can be automatic if you have a DHCP server running on this interface, or static otherwise. VM CONFIGURATION We can set interface br0.4 as the default interface for the VMs we are going to create (existing VMs you'll need to change individually). Here a new VM gets interface br0.4 assigned. DOCKER CONFIGURATION Docker will use its own built-in DHCP server to assign addresses to containers operating on interface br0.5. This DHCP server however isn't aware of any other DHCP servers (your router). Therefore it is recommended to set an IP range for the Docker DHCP server which is outside the range used by your router (if any) to avoid conflicts. This is done in the Docker settings while the service is stopped. When a Docker container is created, the network type br0.5 is selected. This lets the container run on the isolated network. IP addresses can be assigned automatically out of the DHCP pool defined earlier; leave the field "Fixed IP address" empty in this case. Or containers can use a static address; fill in the field "Fixed IP address" in this case. This completes the configuration on the unRAID server. Next we have to set up the switch and router to support the new networks we just created on the server. SWITCH CONFIGURATION The switch must be able to assign VLANs to its different ports. Below is a picture of a TP-LINK switch; other brands should have something similar. ROUTER CONFIGURATION The final piece is the router. Remember all connections eventually terminate on the router, and this device makes communication between the different networks possible. If you want to allow or deny certain traffic between the networks, firewall rules need to be created on the router.
This is however out of scope for this tutorial. Below is an example of a Ubiquiti USG router, again other brands should offer something similar. That's it. All components are configured and able to handle the different communications. Now you need to create VMs and containers which make use of them. Good luck.
    1 point
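As a rough illustration of what the Unraid VLAN settings in the tutorial above amount to under the hood, the sub-interfaces can be expressed with standard Linux `ip link` commands. This is a configuration sketch only (it needs root, and Unraid manages these interfaces for you via the network settings page); the interface and VLAN numbers are taken from the post:

```
# Sketch only: VLAN 4 (VM network, 10.0.104.0/24) and VLAN 5
# (Docker network, 10.0.105.0/24) created on top of bridge br0.
ip link add link br0 name br0.4 type vlan id 4
ip link add link br0 name br0.5 type vlan id 5
ip link set br0.4 up
ip link set br0.5 up
```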
  8. This sounds like it could be exactly what I posted about a few posts back. If that Chrome plugin relies on using Deluge's watch folder (aka AutoAdd) to watch for magnet links, I'd look at my post and try that fix.
    1 point
  9. Definitely, but it cost €125. I couldn't say no to that. About 2 W (1 to 2 pages earlier you can also see my measurement of the ITX board): https://www.hardwareluxx.de/community/threads/die-sparsamsten-systeme-30w-idle.1007101/post-28446459 @Anym001 has it in use and got around 8 W with 4 drives spun down. Can't find the photo right now.
    1 point
  10.
    1 point
  11. Ok, preclear on those 2x 14TB completed successfully last night. Not a single error, very stable. As a reminder, those were connected outside of the cage, directly to the PSU with the SATA 15-pin cables and to the LSI card. Next step in my testing: connect only one cage without a Y splitter and preclear another couple of disks in there to see if the Y splitters are the issue. (Waiting to receive by mail the components to build my own MOLEX cable to power the cages.) Thx again for the time you spent helping me.
    1 point
  12. "Denne språkpakken er et arbeid i gang" ("This language pack is a work in progress") <- 😆
    1 point
  13. After the check with correction (207 errors found), I ran a check without correction to verify: 0 errors found. To summarize, in case of a hard server shutdown: On reboot, unraid performs a parity check without correction. If no errors are found, all is well, we move on... If there are errors, check the disks (SMART check) and run a parity check with correction. At the end of the correcting check, run a check without correction to verify that everything is OK. If errors persist, there is a problem to find and fix on the machine (disk, memory, ...). Tell me if this method seems coherent to you, in short whether I've understood the system's logic. I run a parity check every month; I've set it up in the scheduler without correction so I can do the necessary verifications in case of errors. Thanks ChatNoir for your help, it gave me a clearer picture of this sensitive subject.
    1 point
  14. Hi, Chives is a neat blockchain fork, but is different from all the others. In particular, Chives requires its own plots of size k29, k30, or k31. Please see the wiki for details. Let me know if you have further questions.
    1 point
  15. Please accept my apologies for this error encountered when choosing Import Mnemonic from the initial Setup screen of v0.6.0. Very dumb oversight on my part. I have fixed it in the next version. Easiest workaround is to manually create a file on the Unraid filesystem, just a single line with your 24-word mnemonic, at /mnt/user/appdata/machinaris/mnemonic.txt Then restart the Machinaris container and you should be taken right past the setup screen, to the main Summary page. Hope this helps!
    1 point
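The workaround above can be sketched as a couple of shell commands. The 24 words below are obviously a placeholder, the file is written to the current directory here so the sketch is safe to try (on Unraid it goes to /mnt/user/appdata/machinaris/mnemonic.txt), and the container name in the final comment is assumed:

```shell
# Write the 24-word mnemonic as a single line (placeholder words shown).
# On Unraid the real path is /mnt/user/appdata/machinaris/mnemonic.txt
printf '%s\n' "word01 word02 word03 ... word24" > mnemonic.txt
grep -c '' mnemonic.txt    # line count; should be a single line
# then restart the container, e.g.: docker restart machinaris
```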
  16. ...the rules cfg should work. Maybe the 10G link just needs more/too much time to become active. What if you swap players? Make unraid passive side and CRS active?? Gesendet von meinem SM-G780G mit Tapatalk
    1 point
  17. ...as said, it must/should be the UDEV rules in network config...you should be able to tie a name (eth0) to a MAC.
    1 point
  18. ...just saying that you're only able to set L2-MTU parameter on a physical Interface..a bonding interface is not a physical one. The active side (unraid) should only advertise the MAC (the unraid bond/bridge) on the active link to the CRS, so it does not get confused (as the bond will have a single MAC, that of the first NIC in it, I think). this is what MT help states: https://help.mikrotik.com/docs/display/ROS/Bonding#Bonding-active-backup active-backup This mode uses only one active slave to transmit packets. The additional slave only becomes active if the primary slave fails. The MAC address of the bonding interface is presented onto the active port to avoid confusing the switch. Active-backup is the best choice in high availability setups with multiple switches that are interconnected. Hmmm.... AFAIK the first NIC in the bond will "lend" its MAC to the bond. Normally, in unraid this is eth0. You should check and re-arrange NIC numbering in the network settings of unraid, so that the 10G is eth0. So when booting for the first time/ after reboot, the 10G should be connected in order to activate the NIC for eth0.
    1 point
  19. A container is more than just the application; think of it as a miniature virtual machine. It has an entire operating system, albeit only the pieces essential to supporting the specific application. The tag means the application itself will not be updated, only the supporting OS files inside the container. As an aside, one of the reasons docker containers can be so efficient is that they share common pieces between them. So if you have a bunch of LSIO containers using the same internal OS, it's not duplicated no matter how many different containers use those same basic pieces. Running multiples of the same container with different appdata uses almost zero additional resources in the docker image.
    1 point
  20. Source: I confirm both methods are working, i.e. use qemu 6.1 with machine type 6.0, or use qemu 6.1, machine type 6.1 and global custom argument in libvirt.
    1 point
  21. Nothing, but almost all your loads on the NAS are inductive, i.e. that's exactly where it may be inaccurate. For a UPS, the apparent power is of course what matters in a fault situation. Classic AC UPSes have too much standby consumption for my taste. I'll hang the NAS off the FritzBox's DC UPS. It handles 120 W and is supported by a 50 W PV module when the sun is out.
    1 point
  22. As @itimpi said it will be part of 6.10.0-RC2 and I only can recommend to wait a few more days. Sent from my C64
    1 point
  23. Thanks so much for the reassurance on that! And for everything! I couldn't have confidently done this without your guidance and feedback! And thanks to everybody else also that chipped in and helped me on this issue. I greatly appreciate it! You're all awesome!
    1 point
  24. It is going to be part of 6.10.0 rc2 so hopefully that is not far off.
    1 point
  25. Check if the drive shows up in the BIOS. If it doesn't, reconnect it with a different SATA cable and check in the BIOS again. If the drive is still not recognized, try a different SATA port on the motherboard.
    1 point
  26. In my case it was AliExpress, but right now they are very expensive...
    1 point
  27. I checked before and the parity swap procedure completed successfully without any errors, so all should be fine.
    1 point
  28. Getting a new error message that the server is outdated. Two of the players on the newer updated Minecraft version are having the issue whereas two players on the older (matching) version are fine. Is there a way to check the docker for updates? I'm on UnRAID 6.9.2 and docker shows the docker was last created 2 days ago (assuming, updated 2 days ago). EDIT: Sigh, should have checked. Docker > Advanced View (top right) > Force Update for docker container. This fixed it; thanks
    1 point
  29. You install everything, add the HDD(s) to the array and the pool from the SSDs, start it, and you're done. The default shares already have the correct cache settings. E.g. appdata is set to Prefer (stays on the SSD).
    1 point
  30. It is likely a Seagate disk with LSI HBA wakeup/power-save issue; most read errors occur during disk wakeup.
    1 point
  31. OK, well if you need us you know where we are. Best regards.
    1 point
  32. Perfect, everything solved then. Best regards.
    1 point
  33. OK, thanks! Edited in "Advanced View" and added "--user 99:100" under "Extra Parameters:"; that did it for me!
    1 point
  34. You may try this; it should solve most issues...
    1 point
  35. I really wish you hadn't removed the Docker Hub integration. I realize the templates were pretty bare-bones, but it at least filled out the name, description, repository, etc., making it a lot faster than going to the Docker page, manually adding a container and starting with a completely blank template.
    1 point
  36. Good day. Machinaris v0.6.0 is now available with support for many more blockchain forks: - NChain- cross-farming support for this blockchain fork. - HDDCoin - cross-farming support for this blockchain fork. - Chives - support for this blockchain fork. - Flax - now farmed in a separate Docker container. Core enhancements to underlying components include: - Plotman - enhancement to support plotting for Chives via Madmax. - Chiadog - enhancement to support monitoring of other blockchain forks. Really big thanks to all those that helped review and provide feedback on this big step forward for Machinaris! Unraid users of Machinaris, please follow the upgrade guide: https://github.com/guydavis/machinaris/wiki/Unraid#how-do-i-update-from-v05x-to-v060-with-fork-support
    1 point
  37. You should be able to achieve it by configuring your existing switch, but you cannot do that using the Unifi Controller. You need to do it via the means offered by your switch. The EdgeRouter X cannot be configured using the Unifi Controller either. BR, R
    1 point
  38. One of the design considerations was "How to relieve the support requests on the maintainers". This is because most support requests in the forum here (but definitely not all) have to do with the application itself (how do I do this?) vs problems with the container (why won't this install?) Since they're all venues of support, they're all consolidated within the same dropdown, unless there is only a single option available. The order is ReadMe, Project, Discord, Support Thread. On the info screen there's also another option available (Registry). Or put another way, one of the historical "complaints" about CA through the various UI's (this is the 3rd major UI for CA) is "too many buttons". This version should make it simpler. And IMO it's simple enough to use that all of the help text has been removed.
    1 point
  39. AutoAdd Issue in Deluge First, let me say that I've benefited greatly from this container and the help in this thread, so thank you all. And although I'm running the container on a Synology unit, I thought I'd finally give something back here for anyone who may be having a similar issue. Background The container was running great for me up to binhex/arch-delugevpn:2.0.3-2-01, but any time I upgraded past that I had problems with Deluge's AutoAdd functionality (basically its "watch" directory capability that was formerly an external plugin and is now baked into Deluge itself). Certain elements worked, like Radarr/Sonarr integration, but other elements broke, like when I used manual search inside Jackett, which relies on a blackhole watch folder. I ended up just rolling back the container, it worked fine again, and I stopped worrying about it for a while. However, with the new (rare) potential for IP leakage, it's been nagging at me to move to the new container versions. Initially, I wasn't sure if it was the container, the VPN, or Deluge itself, but it always kind of felt like Deluge given the VPN was up, I could download torrents, and Radarr/Sonarr integration worked -- it was only the AutoAdd turning itself off and not behaving itself when using Jackett manual searches that wasn't working. I'm actually surprised I haven't seen more comments about this here because of how AWESOME using Jackett this way is! (Y'all are missing out, LOL). The Fix I finally put my mind to figuring this out once and for all yesterday, and I believe I tracked it down. Turns out the baked-into-Deluge AutoAdd is currently broken for certain applications (like watching for magnets), and that version is in the current BinHex containers.
Even though the fix hasn't been promoted into Deluge yet (so of course not in the container yet either), there is a manual fix available, and it's easy (just drop an updated AutoAdd egg into the Deluge PlugIns folder and it will take precedence over the baked-in version). I will say that I literally just implemented and tested, so it's possible I'll still run into problems, but it's looking promising at the moment. Thanks again for this container and this thread, enjoy! The temporary fix can be found here
    1 point
  40. Used to dump my RTX A4000 vbios, attached in case someone else needs it. RTX A4000 - 16gb.rom
    1 point
  41. You need to move the downloaded files to a new location before deleting them from the seedbox. Think of the setting -- receive only -- if you delete a file from the seedbox, that change is received as well. You should think of Syncthing as an intermediate handler and not as a final destination. Download to a temporary location and then move the files out to their permanent residence. This can be handled by Sonarr/Radarr; not sure about Plex.
    1 point
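The download-to-temp-then-move flow described above can be sketched as follows (the directory and file names are illustrative, not from the post):

```shell
# Receive-only sync folder vs. permanent library (illustrative names).
SYNC_DIR="./sync-downloads"      # Syncthing receive-only folder
LIBRARY="./media-library"        # permanent destination
mkdir -p "$SYNC_DIR" "$LIBRARY"
touch "$SYNC_DIR/episode.mkv"    # pretend a file arrived from the seedbox
# Move (not copy) completed files out of the sync folder, so deleting them
# on the seedbox later cannot also remove your local copy.
mv "$SYNC_DIR/episode.mkv" "$LIBRARY/"
```

Sonarr/Radarr do this move step for you when they import a completed download.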
  42. Well, let's analyze this a bit, shall we, since you're taking such a strong interest in this scenario and my position on this matter. I really don't follow the "free" statement, honestly. I don't expect free, and that's not what I am saying. But anyway, let's say I buy a license and use it for 3 months; I am unhappy with the product or it just no longer fits my requirements, so I turn around and sell my system, which of course comes with the license. So yes, to answer your question, it is a waste of my money if I can't turn around and recover some cost after just 3 months of use. What you're hearing from me is what you think you want to hear, or your interpretation of this matter. I never spoke of or mentioned a periodic fee or anything like that. That's a can of worms you don't want to open. Milking the cow is not my favorite type of licensing model, and usually I walk away from that -- not just walk away but actually RUN. It is my money, and it is my choice which licensing models I support and give my money to. It is just business, nothing personal. This is my last statement on this matter, as I feel that I am taking too much valuable time and real estate from support topics. Again, agree to disagree, respectfully.
    1 point
  43. For transcoding only, there is no benefit to buying the more expensive P620. I picked up a PNY Quadro P400 for my system for ~$125; it works great for my family's PLEX server.
    1 point
  44. What have you tried so far? Have you excluded the GPU from Unraid, e.g. with the VFIO PCI Config plugin? Hyper-V disabled? Are you using OVMF as the BIOS? Dumped and modified the GPU's ROM? Here is a video as well: https://youtu.be/sOifIPJxUrM
    1 point
  45. There was an update for the container at midnight and my server auto-updated at 6am without problems, and it looks like my dbengine database files are persistent now. So it looks like the "delete obsolete charts files = no" setting solved the problem.
    1 point
  46. Ok, I looked through the global section of the netdata config file and found this: delete obsolete charts files = yes I set this to "no" and restarted the docker. Maybe this prevents the system from deleting the cache files??? Now testing with this until the next update...
    1 point
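For reference, the setting discussed above lives in the [global] section of netdata.conf; the fragment as described in the post looks like this:

```
[global]
    delete obsolete charts files = no
```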
  47. @primeval_god After a day of testing this, it is not working as it should. I now have graphs going back about 5 hours. But I configured the dbengine cache size to 4GB and it only runs from about 23 hours ago; the cache file size is now only about 18 MB. I added a new path to the docker container: /var/cache/netdata (container) -> /mnt/user/appdata/netdata/cache/ (host). It is working correctly, because I can see the generated dbengine files in my appdata/netdata/cache/dbengine folder. But in the meantime there was an update to the container (I set docker auto-update to run at 6am) and my graphs only persist back exactly to that same time! So I think when the docker update ran, it cleared my dbengine cache completely. How can I resolve this issue? Can I configure the docker somehow to not touch the dbengine cache folder when an update is invoked?
    1 point
  48. This is just to close this out and add a little fix for other people searching this topic. Install this in docker: ELK. Go through the wiki linked on the docker page, and make sure the variables are correct. Add a variable: MAX_OPEN_FILES set to 65536. To get this to stick you need to set the ELK image as privileged (need to toggle advanced). Download the Community Apps script manager. Add the script below to run at start of array: sysctl -w vm.max_map_count=262144 After this the ELK stack is fully running; you will still need to set it up with an index and all that to parse data.
    1 point
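The array-start script mentioned above can be saved as a script-manager entry; a configuration sketch (it needs root, so it is shown here rather than meant to be run as-is):

```
#!/bin/bash
# Run at array start: raise the mmap count limit that Elasticsearch
# requires (value taken from the post above).
sysctl -w vm.max_map_count=262144
```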