casperse

Everything posted by casperse

  1. Thanks, this post really helped me! Disabling it while transferring between servers stopped this. I'm using the guide with luckyBackup to move TB of data.
  2. Okay, I went back to the PC mappings and found that it used the Windows user account matching an old Unraid one (from the old local-VM PC setup), but after using this account instead of the administrator account, it connected. I also made sure the two local root shares were renamed. Since server p2 is a clone of the old Unraid server, it had the same old accounts. I did find that the root share pool-shares did NOT get listed on either server; I just had to type it manually into the UAD UI. Looking at the logs I now get this: But it's working: I haven't found the difference between the two accounts? Both have access to the same directories. The only difference is that the old one was created when I first started using Unraid, so I can't be sure I didn't do something to that account a long time ago. Thanks again for your help, this was a strange one 🙂
  3. I'm not sure I follow you? Yes, the share is enabled in the settings of the mount, and root access is possible from a Windows PC. Is there some other place to enable it?
  4. You said you could map the rootshare in Unraid? But the rootshare path is never listed in UAD, only all the normal shares. I can confirm that UAD will list all the individual shares one by one, but what eludes me is the rootshare mapping between servers inside Unraid; it isn't listed as an option for a mapped share! UAD will never list the root access share, will it? I have created a separate user (administrator) for every shared folder, like you suggested, but I don't know how to give this new user access to the rootshare mapping? (Some command in a terminal?) Again, thanks so much for helping out; having two servers is a lot of work when you can't move things between them more easily 🙂
  5. So the only difference then is that if I install a new docker in the future and forget to change the path from /mnt/user/appdata, the exclusive share option will make sure it still runs without FUSE?
  6. Hi all, I have read the nice write-up on the new "exclusive shares" feature here: https://reddthat.com/post/224445 I have sometimes forgotten to change the path when installing a new docker, so I would actually like to set up an exclusive share for my appdata cache. (I have plenty of spare cache space, and my cache pool is mirrored and set up to snapshot with ZFS to the array.) What I am missing is: do I have to change all my dockers back from /mnt/[poolname]/appdata/... to /mnt/user/appdata/...? And is there any other difference between enabling exclusive shares and having changed all the paths to /mnt/[poolname]/appdata/? Hope someone can help answer my question; I didn't really get this from reading around the forum.
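For anyone checking the same thing later: one way to see whether a given path is still going through the FUSE layer or hitting the pool directly is to look at the filesystem type serving it. A minimal sketch, assuming standard Unraid paths (substitute your own share and pool names):

```shell
#!/bin/sh
# Print the filesystem type serving a path (default: /).
# On Unraid, /mnt/user/<share> normally reports a FUSE type such as
# "fuse.shfs"; with an exclusive share the path is a bind mount of the
# pool itself, so it reports the pool's filesystem (e.g. "zfs", "btrfs").
path="${1:-/}"
findmnt -n -o FSTYPE -T "$path"
```

Running it against /mnt/user/appdata and /mnt/cache/appdata should show the difference: once the share is exclusive, both report the pool's own filesystem instead of shfs.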
  7. I might be doing it wrong, then? But doing the same on Unraid does not work. Do I need the full path? //192.168.0.14/mnt/rootshare/Shares-Pools I get an error when trying to mount the share. Diagnostics attached. diagnostics-20240326-1245.zip
  8. Thanks! I did this and all my shares are listed. But I can't map the SMB rootshare; I still get an error when trying the path Servername\Shares-Pools, and I don't want to map all 22 shares one by one. Is it only from Windows SMB that you can map the rootshare, and not between Unraid servers? Maybe there is some Linux magic you can do to enable a mount between them? I am also experimenting with 2 x 10Gbit LAN cards with a direct connection between eth2 - eth2, but for some reason I am also running into errors when trying this.
  9. Hi, I really need some input on how to accomplish this using UAD. I have already enabled a rootshare on each of my Unraid servers, and I really want a rootshare mapped between my two Unraid servers in UAD (so far I keep getting errors). I would really like this to use the direct 10Gb LAN connection between them, 192.168.11.6 & 192.168.11.14, if possible. SMB or NFS? So far I haven't been able to accomplish this using UAD; do I need some Linux terminal commands to accomplish this? I am planning to use the "luckybackup" docker to move a large amount of data between them, but having a rootshare mount between them would be really helpful. UPDATE: Mapping the two rootshares on Windows works! But trying to mount an SMB rootshare between the Unraid servers does NOT work? Same SMB path \\SERVERNAME\Shares-Pools or \\IP\Shares-Pools; Unraid errors.
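On the "Linux terminal commands" question: outside of UAD, one thing that should work is a plain manual CIFS mount from the terminal of one server. This is only a sketch of my understanding; the IP, share name, and credentials below are placeholders, and a mount done this way is not managed by UAD and will not survive a reboot:

```shell
# Mount point under /mnt/remotes, where UAD also places remote shares.
mkdir -p /mnt/remotes/p2-rootshare

# Mount the remote rootshare over the direct 10Gb link.
# Username/password are placeholders; vers=3.0 forces SMB3.
mount -t cifs -o username=backupuser,password='secret',vers=3.0 \
  //192.168.11.14/Shares-Pools /mnt/remotes/p2-rootshare

# When finished:
umount /mnt/remotes/p2-rootshare
```

The user given here has to be an SMB user that the remote server's rootshare actually grants access to, which may be the same account issue described in the posts above.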
  10. So my troubleshooting has located my problem. The BIOS is always set to use the internal iGPU (correct), and I get all the startup output during boot on my monitor, but when it should start the GUI I get a prompt in the upper left corner. Removing my Nvidia card and placing my HBA controller in the first PCIe slot WORKED! (Removing all other cards.) After some time I finally got the GUI on my monitor. JUHU! I then tried placing my Nvidia GPU (NVIDIA GeForce RTX 3060) in the third PCIe slot (8x), and after boot I am back at the prompt with no UI? Something breaks during boot (the GPU has no output to my monitor). Any suggestion on what I should do next? UPDATE: I found a new BIOS from 2024 (my board is from 2019). I updated the BIOS and set everything up from scratch. Same thing: cursor in the top left corner of the monitor, no UI.
  11. SOLVED: I am an IDIOT... Checking the network settings on the Ubuntu VM: sudo nano /etc/netplan/00-installer-config.yaml Somehow the gateway was wrong? I also installed the guest service:
Step 1: Log in using SSH. You must be logged in via SSH as a sudo or root user. Please read this article for instructions if you don't know how to connect.
Step 2: Install the QEMU guest agent: apt update && apt -y install qemu-guest-agent
Step 3: Enable and start the QEMU guest agent:
systemctl enable qemu-guest-agent
systemctl start qemu-guest-agent
Step 4: Verify that the QEMU guest agent is running: systemctl status qemu-guest-agent
  12. Hi all, all my VMs work except the AMP VM for my gaming server? I can see it doesn't get any IP? (This configuration is the same as my Windows VM, and that one worked perfectly after moving?) Configuration XML: <?xml version='1.0' encoding='UTF-8'?> <domain type='kvm'> <name>AMP_Game_server</name> <uuid>106257ad-bf64-1305-df79-880b565808af</uuid> <description>Ubuntu server 20.04LTS</description> <metadata> <vmtemplate xmlns="unraid" name="Ubuntu" icon="/mnt/user/domains/AMP/amp.png" os="ubuntu"/> </metadata> <memory unit='KiB'>16777216</memory> <currentMemory unit='KiB'>16777216</currentMemory> <memoryBacking> <nosharepages/> </memoryBacking> <vcpu placement='static'>10</vcpu> <cputune> <vcpupin vcpu='0' cpuset='12'/> <vcpupin vcpu='1' cpuset='13'/> <vcpupin vcpu='2' cpuset='14'/> <vcpupin vcpu='3' cpuset='15'/> <vcpupin vcpu='4' cpuset='16'/> <vcpupin vcpu='5' cpuset='17'/> <vcpupin vcpu='6' cpuset='18'/> <vcpupin vcpu='7' cpuset='19'/> <vcpupin vcpu='8' cpuset='20'/> <vcpupin vcpu='9' cpuset='21'/> </cputune> <os> <type arch='x86_64' machine='pc-q35-5.1'>hvm</type> <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader> <nvram>/etc/libvirt/qemu/nvram/106257ad-bf64-1305-df79-880b565808af_VARS-pure-efi.fd</nvram> </os> <features> <acpi/> <apic/> </features> <cpu mode='host-passthrough' check='none' migratable='on'> <topology sockets='1' dies='1' cores='5' threads='2'/> <cache mode='passthrough'/> </cpu> <clock offset='utc'> <timer name='rtc' tickpolicy='catchup'/> <timer name='pit' tickpolicy='delay'/> <timer name='hpet' present='no'/> </clock> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>restart</on_crash> <devices> <emulator>/usr/local/sbin/qemu</emulator> <disk type='file' device='disk'> <driver name='qemu' type='raw' cache='writeback'/> <source file='/mnt/user/domains/AMP/vdisk1.img'/> <target dev='hdc' bus='virtio'/> <boot order='1'/> <address type='pci' domain='0x0000' bus='0x03' 
slot='0x00' function='0x0'/> </disk> <controller type='usb' index='0' model='ich9-ehci1'> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/> </controller> <controller type='usb' index='0' model='ich9-uhci1'> <master startport='0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/> </controller> <controller type='usb' index='0' model='ich9-uhci2'> <master startport='2'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/> </controller> <controller type='usb' index='0' model='ich9-uhci3'> <master startport='4'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/> </controller> <controller type='pci' index='0' model='pcie-root'/> <controller type='pci' index='1' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='1' port='0x10'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/> </controller> <controller type='pci' index='2' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='2' port='0x11'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/> </controller> <controller type='pci' index='3' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='3' port='0x12'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/> </controller> <controller type='pci' index='4' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='4' port='0x13'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/> </controller> <controller type='pci' index='5' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='5' port='0x14'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/> </controller> <controller type='virtio-serial' index='0'> <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/> </controller> <controller type='sata' 
index='0'> <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/> </controller> <interface type='bridge'> <mac address='52:54:00:84:10:41'/> <source bridge='br0'/> <model type='virtio'/> <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/> </interface> <serial type='pty'> <target type='isa-serial' port='0'> <model name='isa-serial'/> </target> </serial> <console type='pty'> <target type='serial' port='0'/> </console> <channel type='unix'> <target type='virtio' name='org.qemu.guest_agent.0'/> <address type='virtio-serial' controller='0' bus='0' port='1'/> </channel> <input type='tablet' bus='usb'> <address type='usb' bus='0' port='1'/> </input> <input type='mouse' bus='ps2'/> <input type='keyboard' bus='ps2'/> <graphics type='vnc' port='-1' autoport='yes' websocket='-1' listen='0.0.0.0' keymap='da'> <listen type='address' address='0.0.0.0'/> </graphics> <audio id='1' type='none'/> <video> <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/> </video> <memballoon model='virtio'> <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/> </memballoon> </devices> </domain> I can connect to the web server UI, but it has no internet connection. So strange.
  13. This is actually strange: I cloned my Unraid flash for a second server (new license). On the new server it now works (with the settings above); I get GUI output on the iGPU HDMI port (MB). BUT! The same cloned USB on my old server gives me a prompt with a blinking "_" in the top left corner of the monitor? On this MB it's a DisplayPort, again with iGPU output, and I get the full boot on the screen right up to the end? Both servers have the iGPU as the primary and only output!
  14. My problem is that it occurs every 10-12 hours, so with the number of dockers I have this would be very hard to do. Update: So this could be caused by a single docker with a memory limit that breaks it? Any way to identify the docker from the error message?
  15. Please, can anyone help me? I have installed the swapfile plugin, and I have set a memory limit of 1G on all my dockers (if all dockers obey the limit then I shouldn't see any more errors?). I have tried stopping all dockers, and only some dockers, but I still get the memory error? Is there any way to find out what is causing this? Would syslog be able to show it? This happened again today: The systems are still running, but the error is resulting in Unraid killing random processes?
  16. I followed the guide for the i7 (I believe), and the only difference is that the efficiency cores are all on auto. Running Passmark I can see that I get pretty much 50% of the score with these settings, which is fine (it's a beast!).
  17. After Plex removed the Sync feature and replaced it with the new Download feature, it's not so bad, and the RamScratch empties pretty quickly. Some restraint on the number and size of files should be applied, but my initial tests have been okay. But I am not using this now; I am focusing on eliminating the memory error, so it has no impact on the errors I currently get. I have now set a memory limit for all my dockers and hope to see a difference. No more logs since 16:00; what does this mean?
  18. Yes, from the above error I can see a docker ID starting with c9e4ebfe; searching for this I get the culprit?: But I can see that this docker already has a limit of 1G: --memory=1G --no-healthcheck --log-opt max-size=50m /dev/shm will always be set to 50% of the available memory, right? So any input on what to do next? Any other settings to limit memory for specific dockers?
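To spell out the ID matching above: the kernel's OOM line carries the container's cgroup path, and the hex ID in it can be matched against `docker ps --no-trunc`. A minimal sketch; the log line below is illustrative, not copied from my actual syslog:

```shell
#!/bin/sh
# Illustrative OOM-killer line; the real one comes from syslog.
line='Memory cgroup out of memory: task_memcg=/docker/c9e4ebfe0d2a,task=ffmpeg'

# Extract the first 12 hex characters after "/docker/" (the short container ID).
id=$(printf '%s\n' "$line" | sed -n 's#.*/docker/\([0-9a-f]\{12\}\).*#\1#p')
echo "$id"   # → c9e4ebfe0d2a

# Then match the prefix against running containers (full 64-char IDs):
# docker ps --no-trunc --format '{{.ID}} {{.Names}}' | grep "^$id"
```

The commented `docker ps` line is the step that finally names the container; `docker inspect -f '{{.HostConfig.Memory}}' <name>` then shows the memory limit actually applied, in bytes (0 means unlimited).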
  19. I can confirm this works! I just had to update the format of my old cache drive before starting the server, because I had converted it to ZFS after cloning the USB for the backup server! Worked great! Appdata, Domains, System, and the Docker VM all worked and started up without errors. I just wish I had reformatted all the older drives before adding them to the new array, but I just used the file manager to delete the old files. And I am now adding parity drives, so this is great! EVERYONE talks about how easily TrueNAS is moved between servers, but Unraid is better! Here I rebuilt my array and kept every app & setting on a "new" server with ease!
  20. Ok, so my troubleshooting continues 🙂 I have installed the swapfile plugin (you recommended it above) successfully (the standard size setting is around 2G). I have moved the RamScratch settings into each of the dockers and also set memory limits:
(PLEX) --no-healthcheck --log-opt max-size=50m --log-opt max-file=1 --restart unless-stopped --mount type=tmpfs,destination=/tmp/PlexRamScratch,tmpfs-size=68719476736 --memory=64G
(EMBY) --no-healthcheck --log-opt max-size=50m --log-opt max-file=1 --restart unless-stopped --mount type=tmpfs,destination=/tmp/EmbyRamScratch,tmpfs-size=8589934592 --memory=8G
(JELLYFIN) --no-healthcheck --log-opt max-size=50m --log-opt max-file=1 --restart unless-stopped --mount type=tmpfs,destination=/tmp/JellyRamScratch,tmpfs-size=8589934592 --memory=8G
After doing this I get a docker warning: "Your kernel does not support swap limit capabilities"? (Running the RamScratch as a script I never saw any warnings like the below; did I do something wrong?) I can see that the memory limit is implemented on the docker web page: But I still see this: And again today: If these are to be just ignored, it would be nice not to have them in RED letters 🙂
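As a sanity check on the tmpfs-size values above (they are plain byte counts), converting them back to GiB shows they line up with the --memory limits set alongside them:

```shell
# tmpfs-size is given in bytes; divide by 1024^3 to get GiB.
echo "$((68719476736 / 1024 / 1024 / 1024))G"   # PlexRamScratch  → 64G
echo "$((8589934592  / 1024 / 1024 / 1024))G"   # Emby/Jelly      → 8G
```

So each container's tmpfs can in principle fill its entire --memory allowance, since tmpfs pages count against the container's memory limit.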
  21. Hi all, What I am trying to do: build a backup server and keep my old settings (shares, users, config) & old cache drives with my Appdata/Domains/System/Dockers/Plugins, BUT I want to build a new array with all new drives. SO FAR: I have successfully cloned my old Unraid USB and bought a new Pro license, and changed the IP and server name in the config. I can boot, and I have all the old drives listed as missing. I would like to keep my old cache drives (Appdata/Domains/System/Dockers) and all my shared folder settings, but I would like to build a completely new array with new drives. (I already moved everything to my new server.) Is this at all possible, and how would I go about it? Currently I can boot up, and it remembers all the old drives and sees all my new drives. The "New config" tool under settings looks like the right way to do this, but will I then lose all my old cache drives? Or can Unraid "see" the old formatted cache drives and the names of the original cache pools if I just plug them in? (NVMe drives.) Sorry if this is a stupid Q, but I want to be sure before pressing the "New config" button 🙂
  22. I just discovered I have both the script and the advanced settings for RamScratch: --no-healthcheck --log-opt max-size=50m --log-opt max-file=1 --restart unless-stopped --mount type=tmpfs,destination=/tmp,tmpfs-size=8589934592 and the above script to create the RamScratch 🙂 Any recommendation to use one over the other? (I can't remember if one solution cleans up RAM better than the other?) And did you want me to add "--memory=8G" for the Plex docker? Any size recommendation for the swapfile? Sorry, I have read many of the (really old) posts and I am curious whether this has any effect on my memory problems? (I also now have ZFS drives, and I can see they also allocate more RAM.)
  23. Thanks JorgeB, I can see my Plex is one update behind. Will update right away! mgutt helped me a long time ago setting up a RamScratch folder for Plex at boot (script), but I guess you are talking about the docker advanced settings memory limit? I was told that it would be best to remove them, but that was in 2022 🙂
#!/bin/bash
mkdir /tmp/PlexRamScratch
chmod -R 777 /tmp/PlexRamScratch
mount -t tmpfs -o size=40g tmpfs /tmp/PlexRamScratch
(The 40g size is to accommodate the download feature in Plex.) I did install the swapfile plugin and created it on a single U3 cache drive with btrfs; any recommendation on the size? I went with the default values (size: 20G). I still think it's strange that after upgrading from 64G to 128G of newer and faster RAM I have these low-on-RAM problems? Is it fragmentation, or is this some kernel memory limit causing the OOM on the Docker host?
  24. The btrfs question was related to the plugin you suggested for the swapfile; it looks like it needs btrfs-formatted drives? I also got a new error I haven't seen before, resulting in the same memory error message? I have not installed any new dockers. (I just moved everything to the new server, now with ZFS cache?)
  25. Yes, of course:
# /usr/sbin/ipmi-sensors --output-sensor-thresholds --comma-separated-output --output-sensor-state --no-header-output --interpret-oem-data
3,CPU Temperature,Temperature,Nominal,31.00,C,N/A,N/A,N/A,98.00,99.00,100.00,'OK'
5,MB Temperature,Temperature,Nominal,24.00,C,N/A,N/A,N/A,60.00,70.00,95.00,'OK'
6,TR1 Temperature,Temperature,N/A,N/A,C,N/A,N/A,N/A,100.00,100.00,100.00,N/A
7,TR2 Temperature,Temperature,N/A,N/A,C,N/A,N/A,N/A,100.00,100.00,100.00,N/A
8,TR3 Temperature,Temperature,N/A,N/A,C,N/A,N/A,N/A,100.00,100.00,100.00,N/A
9,CPU Package Temp,Temperature,Nominal,35.00,C,N/A,N/A,N/A,95.00,100.00,105.00,'OK'
10,VRM Temperature,Temperature,Nominal,41.00,C,N/A,N/A,N/A,110.00,110.00,110.00,'OK'
11,PSU1 Temperature,Temperature,Nominal,26.00,C,N/A,N/A,N/A,50.00,63.00,63.00,'OK'
12,PSU2 Temperature,Temperature,Nominal,37.00,C,N/A,N/A,N/A,50.00,63.00,63.00,'OK'
15,12V Voltage,Voltage,Nominal,12.12,V,9.60,10.20,10.80,13.20,13.80,14.40,'OK'
16,3.3V Voltage,Voltage,Nominal,3.34,V,2.64,2.80,2.98,3.63,3.79,3.97,'OK'
17,3.3VSB Voltage,Voltage,Nominal,3.36,V,2.64,2.80,2.98,3.63,3.79,3.97,'OK'
18,5VSB Voltage,Voltage,Nominal,5.11,V,4.01,4.25,4.51,5.50,5.76,6.00,'OK'
19,5V Voltage,Voltage,Nominal,5.04,V,4.01,4.25,4.51,5.50,5.76,6.00,'OK'
20,CPU Core Voltage,Voltage,Nominal,1.00,V,0.00,0.00,0.00,2.11,2.21,2.30,'OK'
21,DRAM VDDQ Volt.,Voltage,Nominal,1.33,V,0.80,0.85,0.90,2.20,2.30,2.40,'OK'
23,CPU Input Volt.,Voltage,Nominal,1.79,V,1.20,1.26,1.34,2.32,2.42,2.53,'OK'
24,PSU1 Voltage,Voltage,Nominal,227.00,V,N/A,N/A,N/A,N/A,N/A,N/A,'OK'
25,PSU2 Voltage,Voltage,Nominal,227.00,V,N/A,N/A,N/A,N/A,N/A,N/A,'OK'
28,PSU1 Current,Current,Nominal,1.20,A,N/A,N/A,N/A,N/A,N/A,N/A,'OK'
29,PSU2 Current,Current,Nominal,0.00,A,N/A,N/A,N/A,N/A,N/A,N/A,'OK'
32,CPU_FAN,Fan,Nominal,2040.00,RPM,0.00,360.00,360.00,N/A,N/A,N/A,'OK'
33,CPU_OPT_FAN,Fan,Nominal,2040.00,RPM,0.00,360.00,360.00,N/A,N/A,N/A,'OK'
34,MB_CHA_FAN1,Fan,N/A,N/A,RPM,0.00,360.00,360.00,N/A,N/A,N/A,N/A
35,MB_CHA_FAN2,Fan,Nominal,1800.00,RPM,0.00,360.00,360.00,N/A,N/A,N/A,'OK'
36,MB_CHA_FAN3,Fan,N/A,N/A,RPM,0.00,360.00,360.00,N/A,N/A,N/A,N/A
37,MB_CHA_FAN4,Fan,N/A,N/A,RPM,0.00,360.00,360.00,N/A,N/A,N/A,N/A
38,MB_CHA_FAN5,Fan,N/A,N/A,RPM,0.00,360.00,360.00,N/A,N/A,N/A,N/A
43,MB_AIO_PUMP,Fan,N/A,N/A,RPM,0.00,360.00,360.00,N/A,N/A,N/A,N/A
44,CHA_FAN1,Fan,Nominal,2400.00,RPM,0.00,360.00,360.00,N/A,N/A,N/A,'OK'
45,CHA_FAN2,Fan,Nominal,2160.00,RPM,0.00,360.00,360.00,N/A,N/A,N/A,'OK'
46,CHA_FAN3,Fan,Nominal,2040.00,RPM,0.00,360.00,360.00,N/A,N/A,N/A,'OK'
47,CHA_FAN4,Fan,Nominal,1920.00,RPM,0.00,360.00,360.00,N/A,N/A,N/A,'OK'
48,CHA_FAN5,Fan,Nominal,1920.00,RPM,0.00,360.00,360.00,N/A,N/A,N/A,'OK'
49,CHA_FAN6,Fan,Nominal,1680.00,RPM,0.00,360.00,360.00,N/A,N/A,N/A,'OK'
50,CHA_FAN7,Fan,Nominal,1680.00,RPM,0.00,360.00,360.00,N/A,N/A,N/A,'OK'
51,CHA_FAN8,Fan,Nominal,1920.00,RPM,0.00,360.00,360.00,N/A,N/A,N/A,'OK'
52,PSU1 FAN,Fan,Nominal,6840.00,RPM,N/A,N/A,N/A,N/A,N/A,N/A,'OK'
53,PSU2 FAN,Fan,Nominal,0.00,RPM,N/A,N/A,N/A,N/A,N/A,N/A,'OK'
56,PSU1 Power In,Power Supply,Nominal,264.00,W,N/A,N/A,N/A,N/A,N/A,N/A,'OK'
57,PSU2 Power In,Power Supply,Nominal,8.00,W,N/A,N/A,N/A,N/A,N/A,N/A,'OK'
60,PSU1 Power Out,Power Supply,Nominal,240.00,W,N/A,N/A,N/A,1896.00,2000.00,2000.00,'OK'
61,PSU2 Power Out,Power Supply,Nominal,0.00,W,N/A,N/A,N/A,1896.00,2000.00,48.00,'OK'
64,PSU1 Over Temp,Temperature,Nominal,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,'transition to OK'
65,PSU2 Over Temp,Temperature,Nominal,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,'transition to OK'
68,PSU1 AC Lost,Power Supply,Nominal,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,'Presence detected'
69,PSU2 AC Lost,Power Supply,Nominal,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,'Presence detected'
72,PSU1 Slow FAN1,Fan,Nominal,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,'transition to OK'
73,PSU2 Slow FAN1,Fan,Nominal,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,'transition to OK'
76,PSU1 PWR Detect,Power Supply,Nominal,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,'Presence detected'
77,PSU2 PWR Detect,Power Supply,Nominal,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,'Presence detected'
80,PSU1 Over Curr,Current,Nominal,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,'transition to OK'
81,PSU2 Over Curr,Current,Nominal,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,'transition to OK'
85,VERSION_ERR,Version Change,Nominal,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,'OK'
87,Watchdog2,Watchdog 2,Nominal,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,'OK'
The email notification in IPMI would require me to have my own e-mail server (Google has too many special steps for the IPMI to work). Sorry, I thought there was an integration between this IPMI and the Unraid notification settings. I can see that it's an event notification (my mistake). UPDATE: After running the command I now have it green?