Leaderboard

Popular Content

Showing content with the highest reputation on 09/19/22 in all areas

  1. Tone mapping is still missing. I still have it enabled; for my setup it will use the CPU to tone map HDR content and the iGPU for the rest. Correct. Run the upgrade compatibility check first. I agree; no need to make it unnecessarily mysterious. Unraid: install Intel GPU TOP and GPU Statistics. GPU stats will show up on the Dashboard page; click on the gears and choose Intel. Plex container: add "--device=/dev/dri:/dev/dri" to the Extra Parameters section if you're using the linuxserver Plex container. Test out some items that regularly transcode. You will see (hw) on the Plex dashboard and GPU activity on the Unraid dashboard.
    3 points
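     A minimal sketch of the checks around the steps above, assuming the linuxserver Plex template; the container name "plex" is an assumption, and the extra parameter is the one quoted in the post:
        # On the Unraid host: confirm the iGPU render device exists
        ls -l /dev/dri
        # Extra Parameters value for the container, as quoted above:
        #   --device=/dev/dri:/dev/dri
        # Optional: confirm the device is visible inside the running container
        # (container name "plex" is a placeholder - use your template's name)
        docker exec plex ls -l /dev/dri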
  2. I am head of community at CrowdSec (https://crowdsec.net) and, although a bit biased (but also based on users requesting this on our Discord), I'll suggest support for CrowdSec on Unraid. In practice it would mean making Unraid containers out of the existing ones. For those unfamiliar with CrowdSec, it consists of two parts: an agent, which does log parsing and attack detection and manages the local stack, and the bouncer, which is the IPS part that does the actual threat mitigation. The simplest bouncer to use is the iptables/nftables bouncer (we have both), but there's no Docker container of that (not entirely true; we have a Home Assistant add-on, which is also Docker, but I don't know how much of it can be reused). Here's the link to our Docker repo. As you can see, there's also a bunch of other bouncers available as Docker containers that could probably be converted easily, is my guess. Regarding the firewall bouncer, it obviously needs to run as root on the Unraid host, which is in itself not a big deal and pretty easy to do, so I don't think there's too much work in this. We'll be happy to collaborate and do what we can to help out. Please join our Discord at https://discord.gg/crowdsec and ping me there if you're interested. I'll be happy to put you in contact with our dev team. Let me know what you think.
    2 points
  3. I followed the instructions given by SimonF. I use USB Manager but did not know that a new feature had been added to pass a device to a VM in serial mode. So instead of modifying the XML file, all you need to do is: 1) install the USB Manager plugin; 2) do not attach any device in the VM properties - instead, go to the USB Manager menu, locate the device you want to pass to the VM, and click on device settings here: 3) enable "connect as serial only" (you can also select auto connect if you want to). Side note: if you did pass the device via USB Manager before, then you probably did not use serial-only mode; by checking this, the mode will change. 4) Shut down your VM and start it again; it should attach the device to your selected VM in serial mode. Once that is done, all you need to do is change the path to the device. I use Z2M - in the Z2M configuration, I just changed the path from /dev/serial/by-id/conbeeXYZ to /dev/serial/by-id/qemuXYZ. That is it. My Zigbee network works without any additional intervention needed, and I successfully upgraded to 6.10.3. More info can be found here as well: https://community.home-assistant.io/t/solved-conbee-2-fails-to-connect-to-zha-ha-in-unraid-vm/431276/8 One user reported that if you are using ZHA instead of Z2M, you need to edit the config (within the VM terminal) at ~/config/.storage/core.config_entries and provide the new device path.
    2 points
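     A small sketch of how the new device path could be found inside the VM after switching to serial-only passthrough; the qemu/conbee identifiers are placeholders, exactly as in the post:
        # Inside the Home Assistant / Zigbee2MQTT VM: list serial devices by stable id
        ls -l /dev/serial/by-id
        # The passed-through device should now appear with a qemu-style name,
        # e.g. /dev/serial/by-id/qemuXYZ (placeholder); use that path in the Z2M (or ZHA) serial port setting.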
  4. Thanks, I think the space is the issue. I was also setting it to large values just in case the -1 wasn't working as expected.
    2 points
  5. So far my only "smart home" touchpoints have been my Fritz!Box and the DECT 200 plugs. Unfortunately you can only use 10 plugs there, and master/slave rules can't be set up at all. For that reason I have now bought the new SONOFF Zigbee 3.0 USB Dongle Plus-E and an Aqara Smart Plug. The goal is to switch off my laser printer after x minutes of standby (so I need a plug with power-consumption detection), and to switch it back on I still need something that everyone in the house can operate - no idea yet whether that will be a wall-mounted tablet, a smart switch, or something like that. Anyway, I: plugged the stick into the Unraid server; then searched for the stick's path with "ls -go /dev/serial/by-id"; then added the device to the Home Assistant container with the following value: /dev/serial/by-id/usb-ITEAD_SONOFF_Zigbee_3.0_USB_Dongle_Plus_V2_20220xxxxxxx-if00:/dev/ttyACM0; started the container; and was amazed that it simply worked ^^ Seriously, it's pretty cool that it immediately detects both the stick and Sonos etc. and shows them on a dashboard. Pairing the plug was also super simple: just one long press and it was paired. The plug's response time really impresses me, too. With the Fritz DECT 200 I'm used to a moment's delay, but when I flip the switch in the GUI it toggles absolutely instantly. Unfortunately it does not show the printer's power consumption. I now have to research whether that fundamentally doesn't work because it's Aqara-exclusive, or whether I need to adjust something. In any case I have already ordered some Tuya plugs from Ali; with those it should definitely work. If that works out, I'll start thinking about everything that can be switched off when it's not needed 😁
    1 point
  6. PS: this assumes the BTRFS file system. First, the problems: adding a parity disk degrades performance severely; without a parity disk, data safety cannot be guaranteed; and you cannot choose which data needs protection, so redundancy for unimportant data wastes space. Solution: stop the array, then build an important-data disk (the original disk is sda1, the mirror disk is sdb1):
mkdir /raid
mount /dev/sda1 /raid
btrfs device add /dev/sdb1 /raid
btrfs balance start -dconvert=raid1 -mconvert=raid1 /raid
Then go to Tools > New Config and create a new array configuration without a parity disk. Apart from not adding a parity disk, keep everything the same as before (do not add the disk that used to be the parity disk). Start the array, and on the Shares page include sda for the shares with important data and exclude sda for the unimportant data. With that the scheme is complete and there is no data loss. Performance is back to normal, the important data is protected by the safest redundancy (RAID1), and the space is basically sufficient. No additional changes are required, and it does not prevent adding an SSD cache later to further improve performance.
    1 point
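     A hedged follow-up, not part of the post, showing standard btrfs commands that could confirm the conversion to RAID1 on the same /raid mount point used above:
        # Check balance progress and verify the data/metadata profiles
        btrfs balance status /raid
        btrfs filesystem df /raid      # expect "Data, RAID1" and "Metadata, RAID1" once the conversion finishes
        btrfs filesystem usage /raid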
  7. Thanks for making this plugin. I now have CPU temps and also fan speeds working on an MSI MAG B560M MORTAR WIFI motherboard.
    1 point
  8. I have the same problem and tried the fixes suggested. My upload script deletes empty folders, my mount script is set to run every 10 minutes, and when it recreates the folders they are owned by root. It's not very elegant, but I just run a user script daily to change those permissions on the local folder, set to run a couple of hours after the upload script has done its thing.
    1 point
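     A minimal sketch of the kind of daily user script described above; the path and ownership are assumptions and would need to match your own local mount folder:
        # Reset ownership/permissions on the locally mounted folder recreated by the mount script
        # (path and owner are placeholders - adjust to your setup)
        chown -R nobody:users /mnt/user/local
        chmod -R u=rwX,g=rwX,o=rX /mnt/user/local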
  9. I have no access to my Unraid system right now, but I can do that in a few days. But at least in my case there was nothing in Unraid logs and I tried with standard configuration. (I am a new Unraid user, already had that problem right after setting up the server, after the initial Timemachine backup.)
    1 point
  10. I plan on pushing an update soon™ with the latest version of Lucee which will resolve this.
    1 point
  11. What kind of information do you need? (There is another thread here in the forum with many logs…)
    1 point
  12. You can still have multiple servers; however, as you just have one port 80 available (externally), you can only have one NPM running on that port. Either run the other NPM on another port, or have just one NPM and have that also proxy the traffic for the other servers.
    1 point
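     A hedged sketch of the "run the other NPM on another port" option; the jc21/nginx-proxy-manager image and the host ports chosen here are assumptions, not from the post:
        # Second NPM instance mapped to alternative host ports so it does not clash with the first on 80/81/443
        docker run -d --name=npm2 \
          -p 8080:80 \
          -p 4443:443 \
          -p 8181:81 \
          -v /mnt/user/appdata/npm2:/data \
          jc21/nginx-proxy-manager:latest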
  13. That suggests you forgot to disable the VM service before running mover. Mover will not move open files, and the libvirt.img file would be kept open by the VM service if it is running.
    1 point
  14. Or (if you just have 2 Unraid servers) run NPM on one and add your hosts for Unraid #2 in there, so:
service hosted on Unraid 1: example.com -> localhost:1234
service hosted on Unraid 2: otherexample.com -> ip.of.other.unraid:2345
    1 point
  15. Yes and no. I am getting the idea that you want to create some kind of redundancy. The only way to accomplish this, as far as I am aware, is to set up some kind of load balancer that can be configured to redirect traffic when one source is down. I am not aware of how pfSense works, I'm a Unifi guy, but there may be something built into pfSense that can provide this functionality (https://www.howtoforge.com/how-to-use-pfsense-to-load-balance-your-web-servers). If you are going for redundancy, you then get into the challenges of making sure that box1 and box2 containers/VMs are synced in real time in some way. If they are not, then you won't have redundancy. If the two servers don't have the exact same services configured, then it won't matter about having a redundant NPM, as the services on the server that is down won't respond anyway. Also, in a load-balance situation, each NPM is going to have different domain/IP mappings, so you can't just duplicate the second NPM with the settings from the first NPM. Getting load balancing running is not a simple feat and requires a lot of planning. Why do you need this kind of setup?
    1 point
  16. Sorry for the late reply, I was busy at work. I tried to set it up again using host mode and changing the port to something else. Maybe it doesn't like 808 XD. Anyway, it looks like it's working now. I'll do more tests, and if anything strange comes up I'll report back. To solve this, just change the view from Basic to Advanced and modify the value "WebUI: http://[IP]:[PORT:80]/" - change 80 to the port you use, in your example 9080. Thanks again for your time and help. Have a nice day!
    1 point
  17. Sigh.... PEBKAC. That'll teach me to actually test in future. Thanks for replying!
    1 point
  18. It's not hanging; it just started. If it's not restarting, it is working. You will only get error messages if one happens.
    1 point
  19. You can't forward the same port to two different IPs on your LAN. I'm surprised your router allowed you to even enter this config. Just do all the NPM forwarding on box1 to all the services that are on box2, with the appropriate IPs/ports. So what I read from this is that you are double-NATed. That's a nightmare. There should be a way you can configure your provider's modem/router to operate in bridge mode. That essentially disables the built-in router and allows your pfSense to act as the primary (and only) firewall/router. This should simplify managing the system and clear up a lot of port forward/conflict issues.
    1 point
  20. IBRACORP already made a template for Unraid; see here: https://docs.ibracorp.io/crowdsec/crowdsec/unraid @Sycotix, I'm tagging you here too. I would also recommend that you reach out to the developers via the contact form over here: Click
    1 point
  21. Click on Settings, then Docker and VM Manager settings, and disable both; then run the mover, found below the Stop array button.
    1 point
  22. I mean that the Unraid GUI freezes when trying to download the diagnostics. Having said that, I went to another machine and applied the GPU settings... and it worked? I have no idea how or why and am quite confused. But it works now.
    1 point
  23. It's an option in the Unraid boot menu.
    1 point
  24. You should really get rid of the old Nvidia card and use the iGPU. The current card doesn't support modern codecs and the iGPU should run circles around it for that purpose.
    1 point
  25. Post read can fail because of bad RAM, run memtest.
    1 point
  26. I found the following on the Emby forum https://emby.media/community/index.php?/topic/79091-make-sure-i-understand-size-requirements-for-the-transcoding-directory/#comment-804641 In a post from July 8 the admin said the following: "If the server detects low disk space then it will clean up transcoding segments on the fly as you go in order to conserve space." So if it works as intended, the script should no longer be necessary right?
    1 point
  27. Maybe take a look at your variable again; it looks like you have an extra space there, and also it's not set to -1.
    1 point
  28. I updated from 6.11.0-rc3 to 6.11.0-rc5. I use a number of PERL modules. After the install of 6.11.0-rc5 I tried updating the PERL modules and was getting the following error: fatal error: sys/types.h: No such file or directory. I finally chased the problem down to glibc. I know that it was working before: while on 6.11.0-rc2 I had reloaded all the PERL modules from scratch and did not get this error. The reason I had reloaded all the PERL modules from scratch was as part of figuring out which packages I needed to put in /boot/extra that were being provided by NerdPack and DevPack; glibc was not one of the needed packages. Since you told me where to find the installed packages, I removed glibc-2.36-x86_64-3.txz from /boot/extra and rebooted to see if glibc is there. I see that glibc-2.36-x86_64-3_LT is installed. I went ahead and reloaded my PERL modules from scratch and I get the same error: fatal error: sys/types.h: No such file or directory. So I guess glibc-2.36-x86_64-3_LT is not providing sys/types.h correctly. I put glibc-2.36-x86_64-3.txz back into /boot/extra, rebooted, and now everything is working fine again.
I have been running the following script from the go file for years with no problems. One of the things it does is append to .bash_profile. After logging into Unraid using SSH, I tried to run an alias I set in .bash_profile called bin and it said command not found. I typed alias and saw it wasn't set. I checked .bash_profile and it had been overwritten. I finally figured out that sometime during the boot process .bash_profile is now being overwritten.
#!/bin/bash
if [ -f /root/.bash_profile.orig ]
then
    cp /root/.bash_profile.orig /root/.bash_profile
else
    cp /root/.bash_profile /root/.bash_profile.orig
fi
( cat <<"EOF"
# alias
alias bin="/usr/bin/ls -al /root/bin"
# Perl
PERLHOME="/root/perl5"; export PERLHOME;
PERL5LIB="$PERLHOME/lib/perl5"; export PERL5LIB;
PERL_MB_OPT="--install_base $PERLHOME"; export PERL_MB_OPT;
PERL_MM_OPT="INSTALL_BASE=$PERLHOME"; export PERL_MM_OPT;
PERL_LOCAL_LIB_ROOT="$PERLHOME"; export PERL_LOCAL_LIB_ROOT;
# Path
PATH=.:/root/bin:$PERLHOME/bin:$PATH
EOF
) >> /root/.bash_profile
ln -s /boot/config/perl5 /root/perl5
rsync -hrltgoD --delete --chown=root:root --chmod=ugo=rwx "/boot/config/bin/" /root/bin
modprobe i915
chmod -R 777 /dev/dri
I fixed it by adding a sleep 10 to the beginning of the script. It would have been nice to know about this change from the release notes.
    1 point
  29. Thank you JorgeB. I ran extended tests on disk 13 again and it started failing, disk 5 also started throwing read errors and was disabled. Disk 5 also failed extended tests. I ended up replacing both disk 5 and 13. The data rebuild completed on the new 5 and 13 disks. I am up and running with no issues! Thank you for the help!!
    1 point
  30. When you add extra settings under SETTINGS/SMB/SMB Extras, are these saved to /etc/samba/smb.conf or /boot/config/smb-extra.conf? What settings should be used as a starting point? Is the list below a good starting point?
#unassigned_devices_start
#Unassigned devices share includes
   include = /tmp/unassigned.devices/smb-settings.conf
#unassigned_devices_end
[global]
   vfs objects = catia fruit streams_xattr
   fruit:nfs_aces = no
   fruit:zero_file_id = yes
   fruit:metadata = stream
   fruit:encoding = native
[share_name]
   path = /mnt/user/"share_name"
   spotlight = on
------------------------------
*Please note: do not use spaces in the share name or mover will not move the files from the array back to the cache pool.
    1 point
  31.
    1 point
  32. Docker > Add Container and fill out the form yourself. See also:
    1 point
  33. Since this is a server case, it is designed with server hardware in mind. A server motherboard or RAID card would have SAS/Mini-SAS connectors, so in this case you would have 2 data cables and 2 Molex power cables, versus the 8+8 cables you would have in a consumer computer with 8 drives. A Mini-SAS to 4x SATA cable is common, but I do not know if it would work going the other way, from 4 SATA ports to 1 Mini-SAS connector. My best guess is it doesn't, and it defeats the purpose of how this case is designed - to reduce cabling and the number of required connections.
    1 point
  34. Yes, firmware is the same as the 9207-8i, you have both the IT and IR firmware on Broadcom's site
    1 point
  35. Changing modes requires different firmware, which means cross-flashing to the required firmware - something other than what is on the chip/card. This information from the unRAID Wiki might be helpful and has a link to the larger discussion in the forums.
    1 point
  36. IT mode is ideal, but IR mode should also work; you'd just also need to create the array in the BIOS of the card (as JBOD, or as multiple arrays with only a single drive each).
    1 point
  37. There is a plugin in CA for docker compose as well. I use it and the docker folder plugin. They work well together as you can put all the docker images created by compose into a folder to keep things tidy.
    1 point
  38. Just to let you guys know, you can now fully run distributions which use systemd (Ubuntu, Debian Bookworm+, ...) on Unraid since v6.11.0-rc4. cgroup v2: distributions which use systemd (Ubuntu, Debian Bookworm+, ...) will not work unless you upgrade to Unraid v6.11.0-rc4+ and append this to your syslinux.conf: unraidcgroup2 (Unraid supports cgroup2 since version v6.11.0-rc4, and you have to upgrade if you want to use this feature). Please be aware that at the time of writing this is still an experimental feature, but I have been running it without any issues on my main server for about 2 months now. With this, you will now also be able to run Docker with all its features in, for example, Debian-based LXC containers. Simply append this:
    1 point
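     An illustration, not from the post, of where the unraidcgroup2 flag mentioned above would be appended; the rest of the boot entry follows the stock Unraid syslinux layout and may differ on your flash drive:
        label Unraid OS
          menu default
          kernel /bzimage
          append unraidcgroup2 initrd=/bzroot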
  39. As mentioned by @Holmesware, this solution conflicts with local user permissions, so I kindly ask that you modify your solution. I don't know which ranges are best - this may vary per user - and I don't know exactly which ranges Unraid uses, but I think the following values are safe. So your config would look something like this: When you create local Unraid users, it maps those users starting from 1000; that I know for certain. You can also check the ID for any user (Active Directory or local) with the id command in the Unraid terminal. Example: "id [username/groupname]" Edit: I only now realized you corrected yourself. I mistakenly read your solution as ready and therefore implemented it myself, without reading any after-marks. Should've read it completely the first time around.
    1 point
  40. So are you saying all sessions in that file can safely be deleted on startup? Are there no cases where previous session info in web.conf needs to exist? Just asking, as I don't actively use Deluge any more, so I need somebody with first-hand experience of this. If this is the case, then I can put in some code to clear session info down on startup.
    1 point
  41. I've just seen exactly this behaviour (takes many minutes to progress from "[info] Writing changes to Deluge config file '/config/core.conf'...", WebUI nonresponsive, and deluge-web pinning one CPU core at 100%), and have found what was causing it for me. '\appdata\binhex-delugevpn\web.conf' had grown to over 40MB, causing both the config-parser script and deluge-web to choke trying to parse it. Looking at the file, this was because there were thousands of 'session' entries; I'm guessing that it caches active sessions to the WebUI in web.conf but for some reason never clears them, causing the file to grow out of control. Stopping the image, deleting all of these entries (bringing the file down to a much more reasonable 620 bytes), and restarting the image made everything start up quickly with the WebUI working as normal. Hopefully that's helpful to someone!
    1 point
  42. It appears I'm having this exact same issue. It hangs at the same point and the WebUI doesn't load.
2022-05-07 11:09:10,133 DEBG 'watchdog-script' stdout output:
[info] Deluge key 'listen_interface' currently has a value of '10.67.228.49'
[info] Deluge key 'listen_interface' will have a new value '10.67.228.49'
[info] Writing changes to Deluge config file '/config/core.conf'...
2022-05-07 11:09:10,211 DEBG 'watchdog-script' stdout output:
[info] Deluge key 'outgoing_interface' currently has a value of 'wg0'
[info] Deluge key 'outgoing_interface' will have a new value 'wg0'
[info] Writing changes to Deluge config file '/config/core.conf'...
Sometimes, after leaving it for a few hours, it seems to move to the next step:
2022-05-07 11:09:10,133 DEBG 'watchdog-script' stdout output:
[info] Deluge key 'listen_interface' currently has a value of '10.67.228.49'
[info] Deluge key 'listen_interface' will have a new value '10.67.228.49'
[info] Writing changes to Deluge config file '/config/core.conf'...
2022-05-07 11:09:10,211 DEBG 'watchdog-script' stdout output:
[info] Deluge key 'outgoing_interface' currently has a value of 'wg0'
[info] Deluge key 'outgoing_interface' will have a new value 'wg0'
[info] Writing changes to Deluge config file '/config/core.conf'...
2022-05-07 14:06:11,257 DEBG 'watchdog-script' stdout output:
[warn] Deluge config file /config/web.conf does not contain valid data, exiting Python script config_deluge.py...
2022-05-07 14:06:11,592 DEBG 'watchdog-script' stdout output:
[info] Deluge process started
[info] Waiting for Deluge process to start listening on port 58846...
2022-05-07 14:06:11,802 DEBG 'watchdog-script' stdout output:
[info] Deluge process listening on port 58846
2022-05-07 14:06:13,029 DEBG 'watchdog-script' stdout output:
[info] No torrents with state 'Error' found
2022-05-07 14:06:13,029 DEBG 'watchdog-script' stdout output:
[info] Starting Deluge Web UI...
[info] Deluge Web UI started
However, the WebUI is not responsive and won't load. I've made no changes to my configuration and it was working for many months before yesterday, when this issue popped up. Edit: I was planning to follow the steps listed here today, but it seems to have started on its own overnight and is running fine now. I'll update if that changes.
    1 point
  43. How relevant is Post #1 in 2022 on 6.9.2, though? I'm just curious, as I have all cores/threads checked for my 5950X and don't see any kind of performance issues.
    1 point
  44. Leaving this for posterity. Per the norm, as soon as I actually post this I find the issue. @Squid did in fact, on two occasions, call out folder caching and fix a similar issue. He was right: I disabled folder caching and the rhythmic usage immediately disappeared. /facepalm
    1 point
  45. Has the plan for VM snapshots gone away?
    1 point
  46. If, as you try to access an unsecured unRAID server, you see this panel, enter a backslash (\) for the user ID and click OK - you're in.
    1 point
  47. Edit the mover script and add the --progress option to each rsync command, then display the output in the webGUI by tailing a file? Then you could see specifically which file it's moving, how fast, and the ETA to completion. That would give better detail than a progress bar, and be easier than trying to build a progress bar around each rsync command.
    1 point
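     A rough sketch of the idea, not an actual patch to the mover; the log path is a hypothetical placeholder, while --progress and tail -f are standard rsync/coreutils usage:
        # Inside the mover script, each rsync call would gain --progress and log to a file (illustrative paths):
        rsync -a --progress /mnt/cache/Share/somefile /mnt/disk1/Share/ >> /var/log/mover_progress.log 2>&1
        # The webGUI (or a terminal) could then tail that file to show the current file, speed, and ETA:
        tail -f /var/log/mover_progress.log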