Leaderboard

Popular Content

Showing content with the highest reputation on 09/21/22 in all areas

  1. Thank you, got it through by opening the ports as per your edit on the 1st page! My favourite docker on Unraid! Roon rules, now on the go!
    2 points
  2. Usually mover runs on a schedule, but sometimes we run it manually. When we run it manually, it would be nice to know how long mover will take to copy all the files. I feel safer not working with files while mover is running (sure, it's paranoia, but I feel safe). Would it be possible to add some kind of % bar showing information about the mover process? At least I would know how long it will take. Thank you, Gus
    1 point
  3. The ability to have Matrix as a notification agent would be great. Also having a section for custom agents. An example script to send a message to a Matrix room (inc. E2EE):

        #!/bin/bash
        # https://gist.github.com/RickCogley/69f430d4418ae5498e8febab44d241c9
        # https://gist.github.com/travnewmatic/769cd9532504e0f85983d69acd4a7d29
        #
        # Get Access Token
        # curl -XPOST -d '{"type":"m.login.password", "user":"***your_username***", "password":"***your_password***"}' "https://matrix.org/_matrix/client/r0/login"

        msgtype=m.text
        homeserver=<homeserver>
        room=<room>
        accesstoken=<accesstoken>

        # Piped Version
        #curl -XPOST -d "$( jq -Rsc --arg msgtype "$msgtype" '{$msgtype, body:.}' )" -H "Authorization: Bearer $accesstoken" "https://$homeserver/_matrix/client/r0/rooms/$room/send/m.room.message"

        # Arg Version
        curl -XPOST -d "{\"msgtype\":\"$msgtype\", \"body\":\"$*\"}" -H "Authorization: Bearer $accesstoken" "https://$homeserver/_matrix/client/r0/rooms/$room/send/m.room.message"

        # RichText Version
        #curl -XPOST -d '{"msgtype":"m.text", "body":"**Hello**","format":"org.matrix.custom.html","formatted_body":"<strong>hello</strong>"}' -H "Authorization: Bearer $accesstoken" "https://$homeserver/_matrix/client/r0/rooms/$room/send/m.room.message"
    1 point
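The "Arg Version" in the script above hand-builds its JSON payload inside nested quotes, which is exactly where these one-liners usually break. A minimal sketch of just the payload-construction step, with illustrative values and no network call (the homeserver/room/token handling from the post is unchanged):

```shell
#!/bin/sh
# Build the message payload the Matrix send endpoint expects.
# NOTE: naive quoting -- a body containing '"' or backslashes would need
# real JSON escaping (e.g. via jq, as the "Piped Version" does).
msgtype=m.text
body="hello world"
payload=$(printf '{"msgtype":"%s", "body":"%s"}' "$msgtype" "$body")
echo "$payload"
```

The payload then goes to `curl -XPOST -d "$payload" ...` as in the script; keeping construction and sending as separate steps makes the quoting far easier to debug.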
  4. I am head of community at CrowdSec (https://crowdsec.net), and although I'm a bit biased (this is also based on users requesting it on our Discord), I'll suggest support for CrowdSec on Unraid. In practice it would mean making Unraid containers out of the existing ones. For those unfamiliar with CrowdSec, it consists of two parts: an agent, which does log parsing and attack detection and manages the local stack, and a bouncer, the IPS part that does the actual threat mitigation. The simplest bouncer to use is the iptables/nftables bouncer (we have both), but there's no Docker container for it (not entirely true: we have a Home Assistant add-on, which is also Docker, but I don't know how much of it can be reused). Here's the link to our Docker repo. As you can see, there's also a bunch of other bouncers available as Docker containers that could probably be converted easily, is my guess. The firewall bouncer obviously needs to run as root on the Unraid host, which is in itself not a big deal and pretty easy to do, so I don't think there's too much work in this. We'll be happy to collaborate and do what we can to help out. Please join our Discord at https://discord.gg/crowdsec and ping me there if you're interested; I'll be happy to put you in contact with our dev team. Let me know what you think.
    1 point
  5. What do you get from the command line with this? ls -la /mnt/user
    1 point
  6. GASP. That was it! Thank you, thank you, thank you both for stepping through this with me, and thank you for your great catch and instructions, JorgeB! Went from this:

        root@bryNAS:~# ls -la /mnt/user
        total 40
        drwx------  1 nobody users 206 Sep 21 16:46 ./
        drwxr-xr-x 13 root   root  260 Sep 21 16:36 ../

    to this:

        root@bryNAS:~# ls -la /mnt/user
        total 40
        drwxrwxrwx  1 nobody users 206 Sep 21 16:46 ./
        drwxr-xr-x 13 root   root  260 Sep 21 17:19 ../

    I too am mystified by it working in 6.9 but not 6.10. Again, thank you both. I'll of course mark the correct solution and leave it up for anyone else who might be wandering through the internet with the same issue, so they can hopefully find their solution. Now it's time to explore what this 6.10.3 unRAID server can do!
    1 point
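For anyone reading along, the before/after in post 6 is simply the mode bits on the share root flipping from 700 to 777. A sketch of that same change on a scratch directory, so the mapping is visible without touching a real /mnt/user:

```shell
#!/bin/sh
# Demonstrate the permissions change from the post on a throwaway directory.
d=$(mktemp -d)
chmod 700 "$d"            # drwx------ : the broken state shown first
chmod 777 "$d"            # drwxrwxrwx : the fixed state shown second
stat -c '%A' "$d"         # prints the symbolic mode, e.g. drwxrwxrwx
rmdir "$d"
```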
  7. With great troubleshooting help from @ljm42 we narrowed the issue down to an incorrect DNS setting which was likely causing a timeout. Once the DNS servers were all responding, the issue was resolved.
    1 point
  8. Your server doesn't have a NIC that uses the tg3 driver, so it wouldn't be affected. Also, since v6.10.3 that issue is resolved for all servers. Update to v6.10.3, enable the syslog server, and post the log after a crash.
    1 point
  9. Hey, thanks for the response. So I figured it out: I was using a different appData folder. I thought that's how I was supposed to do it. Once I pointed it to the same appData, it seems to have fixed the issues I was having. Both servers now start up and I am able to connect to both.
    1 point
  10. Hey everyone, we've released My Servers plugin version 2022.09.21.0905! Go to the Plugins tab and check for updates. This release improves the installation routine: if you are on a flaky network and a download fails, you can simply try the upgrade again; nothing on your system is modified until all files have been downloaded and verified. We continue to track down flash backup issues. The system will now automatically try to recover from issues after being in an error state for three hours, rather than waiting to be triggered by another file getting updated. And we've got another round of changes to help the Unraid API make and maintain connections to Mothership. Also, a gentle reminder that we'll soon be dropping support for older versions of Unraid; please upgrade to 6.10.3 or 6.11.0 ASAP: https://forums.unraid.net/topic/128328-my-servers-dropping-support-for-older-versions-of-unraid/

    ## 2022.09.21

    ### This version resolves:
    - Installation issues related to failed downloads. Now no changes are made until everything has been downloaded and verified.
    - Unraid API connection issues. It should now reconnect more reliably if it gets disconnected from Mothership.
    - Some instances of the UPC reporting "unexpected owner".

    ### This version adds:
    - Flash backup status is submitted by UpdateDNS to help track down issues.
    - Flash backup will try recovering from an error state automatically every three hours.
    1 point
  11. Thank you sir. I will set this up and make a new post when I get the data!
    1 point
  12. I need to find some time to look at these, but life has just been too hectic. Changes in the server releases sometimes require changes on my end to accommodate them. No two servers install the same.
    1 point
  13. Thank you, @ich777. Do you mean the PS part, that changing the DVMT pre-allocation settings makes no difference?
    1 point
  14. Please look at the post from @Andrea3000 over here and the conclusion at the bottom:
    1 point
  15. Well, just as a check: in the past it happened that some of them (after a reboot) initialized at only 480 speed. I'll delete the plugin then, thanks. With lsusb -t I can check that perfectly well anyway. Thank you!!!
    1 point
  16. Why do you need that, if I may ask? Sure, that works; open a terminal and enter this: lsusb -t Yes.
    1 point
  17. I recreated the VM + image with SATA for drives, and it seems to work properly.
    1 point
  18. Thank you, @JorgeB. I removed the CPU heatsink, fully cleaned and reseated it with good quality paste, and it seems to be working again. So it appears to have been CPU overheating, which makes sense. Thanks again.
    1 point
  19. Yes, update went smoothly and your example for the port assignment helped. Took almost no time. Thanks for an excellent container!
    1 point
  20. This is being dealt with. https://forums.unraid.net/topic/127949-support-for-adding-optional-usb-devices-in-vm-manager-form-view/?do=findComment&comment=1170902
    1 point
  21. @Twinkie0101 Just like we've been doing the whole time, via Roon Client. You will have to add some port forwarding details to your docker template. I modified post #1 to reflect those changes. Do a little digging on roonlabs.com, and their forums, then post here if you have any issues. Related to Roon ARC.
    1 point
  22. I had only disabled NFS on the individual shares... but I've now disabled it in the Network Settings as well. Now the config also says "no". Thanks in advance; I hope that has already fixed the problem.
    1 point
  23. Unraid does not mind if you leave gaps in the assigned drives. Many people find it triggers their OCD, but as long as it does not worry you can leave the gaps in the assignments. When you start the array only the disk slots that have drives assigned will be shown. Just in case it is relevant it is worth pointing out that Unraid does not care how/where the drive is connected as drives are identified by their serial number. There is therefore no reason that the assignments to disk slots have to match the physical layout if that is not convenient.
    1 point
  24. Oh OK, that file is in the docker image and needs to be patched with your changes every time you update the container. I was thinking of scripting the change via "extra parameters", but after some research it appears that is not available. See this thread for background and a potential workaround using user scripts: /topic/58700-passing-commandsargs-to-docker-containers-request/?do=findComment&comment=670979 The Deluge daemon needs to be started with the --logrotate option for it to work, and it's started by one of binhex's scripts that is part of the image. So you're in the same situation as with your log modifications: either binhex updates the image to support logrotate, or you need to patch that script yourself. For persistent logs, I think logrotate would be the better option, but there are other ways. Here are some random thoughts, in no particular order of suitability or ease of implementation:
    - A user script parses the logs on a schedule and writes the required data into a persistent file outside the container.
    - A user script simply copies the whole log file into persistent storage (you'll end up with lots of duplication though).
    - Write your own Deluge plug-in to export the data to a persistent file.
    - Identify another trigger to script your own log file, e.g. are the torrents added by Radarr, which might have better script support?
    1 point
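As a concrete sketch of the "user script simply copies the whole log file into persistent storage" option above, here is a short script that could run on a schedule from the User Scripts plugin. Both paths are assumptions for illustration; point them at your actual container log and backup share:

```shell
#!/bin/sh
# Copy the container's deluge log to persistent storage with a timestamped
# name, so repeated runs don't overwrite each other (hence the duplication
# caveat mentioned in the post).
src=/mnt/user/appdata/binhex-delugevpn/deluged.log   # assumed log path
dst=/mnt/user/backups/deluge-logs                    # assumed backup share
mkdir -p "$dst"
cp "$src" "$dst/deluged-$(date +%Y%m%d-%H%M%S).log"
```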
  25. Currently there is a set of methods that govern how file placement/movement happens in relation to the cache. I would like to request 2 additions:
    - > size: files greater than the indicated size are placed on the specified cache pool
    - < size: files smaller than the indicated size are placed on the specified cache pool
    Use case scenario: like most of you, having a variety of files of different sizes, it makes a lot of sense to leverage the super-fast access times and availability of flash memory. Keeping all files of 2 MB or less exclusively on flash memory would serve to speed up our arrays significantly. As the cache is outside the array, duplication could be involved too, alongside hardlinking, but that's additional complexity and I would be happy if just the basic implementation could be looked at. I'm reasonably certain it isn't possible to do this currently; is that the case?
    1 point
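Until something like the requested "< size" rule exists, the behaviour can be roughly approximated with a user script built on find's size test. A sketch under stated assumptions (both paths are placeholders, and note it flattens directory structure into the destination, so a real version would use rsync or recreate paths):

```shell
#!/bin/sh
# Approximate the requested "< size" placement rule: move files under
# 2 MB from an array disk to the cache pool. Paths are illustrative.
src=/mnt/disk1/share    # assumed array location
dst=/mnt/cache/share    # assumed cache location
mkdir -p "$dst"
# find's "-size -2M" matches files whose size rounds to less than 2 MiB
find "$src" -type f -size -2M -exec mv -n {} "$dst"/ \;
```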
  26. My thoughts...
    1) Figure out whether the parity error is in parity or data. Don't leave users to work out if they should restore the data from parity, repair parity, or recover from a backup. There are tools today to automate this.
    2) Snapshot RAID (like SnapRAID) is phenomenal. I'm really liking that I can simply snapshot a folder, not an entire drive. I can also do snapshots at different intervals. With snapshots, I can also undelete a file. This detects checksum errors, does a "backup" via parity calculation, saves storage for multiple drives (via parity calculation, not compression), and can restore a file, even at a point in time depending on your snapshot date. You can also store your parity offline, externally, or in the cloud. I'm surprised this technology is not more popular or a plugin. <--- This really NEEDS to become a plugin!
    3) Silent error detection (bitrot or whatever causes files to change).
    4) I get frequent false positives with the Dynamix File Integrity plugin when doing manual checks. So now I'll use the plugin to give me the error, but then manually check my other checksums for verification. I get enough false parity errors and false DFI errors that I no longer consider them credible. How do I know? Because my external BLAKE3 checksums validate, and so do BTRFS scrubs.
    5) BTRFS scrubs only detect a problem, but do not correct it.
    6) Don't like that Unraid is weak at correcting problems.
    7) Want something at the file level, not the block level. Unraid + SnapRAID would be a great integrated platform.
    1 point
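The external checksum validation described in point 4 above takes only a couple of commands. A sketch using sha256sum as a stand-in for the BLAKE3 tooling mentioned (swap in b3sum if it's installed), with placeholder paths:

```shell
#!/bin/sh
# Build a checksum manifest for a share once, then re-verify it later;
# a mismatch on files you haven't changed is the silent-corruption signal.
data=/mnt/user/photos                        # assumed share to protect
manifest=/mnt/user/checksums/photos.sha256   # keep this OUTSIDE the share
mkdir -p "$(dirname "$manifest")"
(cd "$data" && find . -type f -exec sha256sum {} + > "$manifest")
# Later, verify (exits non-zero and names any file that changed):
(cd "$data" && sha256sum --quiet -c "$manifest")
```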
  27. It's in CA. The original won't show because it's now marked as incompatible; the fork (with GuildNet/Squid as the author) shows when running 6.10+.
    1 point
  28. For anyone looking for this in the future, I had to modify the formatting of the script to the following:

        #!/bin/bash
        nvidia-smi --persistence-mode=1
        nvidia-persistenced
        fuser -v /dev/nvidia*

    @SimpleDino and @SpaceInvaderOne Can you confirm this on a system on 6.10.0 RC4?
    1 point
  29. Quick update, and thanks to everyone who replied. Since everything possibly connecting to Unraid was shut down, I figured it must have been Unraid itself. I did some checking/tinkering with Unraid's Network Settings and disabled both Bonding and Bridging. I think both of these settings were enabled by default after install, as I kept most settings as they were. I only have 1 network adapter on this machine, so figured they were not necessary. After disabling both Bonding and Bridging, Inbound went down to 19 Kbps and Outbound to 4 Kbps. I'm assuming these non-zero values could be attributed to the Web UI being open; please correct me if I'm wrong here and if Inbound & Outbound should be zero or close to it. Posting my Network Settings below for reference; please let me know if anything else should be changed/optimized. I'm using a Mellanox ConnectX-2 PCIe 10G SFP+ network card.
    1 point