alexhalbi

  1. You could try to buy a cooler for your M.2 drive. I have an Icy Box passive cooler attached to my Samsung 970 Evo Plus and it stays around 40°C.
  2. I did not find anything about that when researching my hardware. I mainly chose them to have more SATA ports available for storage upgrades.
  3. The log I posted is from the drive that is still alive, obviously. It was installed as the replacement in March, so it was not in use for the initial transfer of all my files to the server (~10TB) like the two dead drives were. But I think the usage of the last 1.5 months is representative of the usage in the time before, apart from the initial setup. If I can get a refund for the dead drive instead of a swap, I will buy another brand for sure. Do you have any specific settings on your unraid that are optimized for NVMe/SSD drives? (A TRIM example is sketched after this list.)
  4. Hello,

     I bought a completely new server in December for unraid use, with some dockers and mainly as network storage. I included two Adata XPG 256GB drives as cache drives. Due to a shipment mix-up I got two different models, one ADATA ASX8200PNP-256GT-C XPG SX8200 and one Adata XPG Gammix, but since the heatspreader is the main difference, I just installed them rather than complaining to get the correct model. I finished building it at the beginning of January.

     In the first week of March, the first drive failed. I returned it via the seller and got a new one to install, no problems there. I thought it was just a bad unit and I had bad luck. Today, the other drive failed! So I am assuming unraid is doing something the drives probably do not like? Which configurations would be important to diagnose that? Could you please help me diagnose this issue, so I do not have to swap drives out that often. Especially once the warranty is over, they should last a bit longer.

     Hardware-wise, one of the drives is installed on the mainboard with the included heatspreader. It was running at about 40°C in idle and 50-55°C on large transfers. This one died today. The other drive is installed on a riser card and runs at about 30°C idle and 40°C when used. This one died in March and is now replaced. Attached is the SMART report of that disk (a command-line sketch for pulling the same report follows this list). I tried to get a report from the other one as well, by swapping it to a different slot to get it running for some minutes again. That did not work, sadly...

     According to that, I do not think heat is what is killing the drives? So I am assuming software is the problem here?

     Thanks for your help,
     Alex

     ADATA_SX8200PNP_2J0420041579-20190427-1036.txt
  5. I have got a problem with this docker container. For days now, on one of my unraid servers, I have been unable to reach the web UI with Chrome (or any other browser) on my PC, but I can reach it from my phone via WiFi without problems. It just redirects to /gui, asks for the password, and then loads indefinitely.

     I do not know if it would be very helpful to upload my docker log, since it is spammed with info messages from the resilio client. (It created multi-GB logfiles over two days, so I now rotate it to 10MB. This should be fixed too, since the unraid resilio container does not do this; I am using that one on another server. A sketch of Docker's built-in log rotation follows this list.) Could someone please tell me what information you would need to diagnose the problem?

     This is a part of my docker log file, from the time I tried to access the page. I do not know if it is relevant at all, since it writes several hundred lines of information about my folders per minute... I removed the hex codes in the brackets, since I did not know whether they are personal data or not.

     [20190215 21:25:50.106] PD[5BDB] [DDBA] (a:5): checking tunnel[0x1] connection to 192.168.1.20:27757/TCP
     [20190215 21:25:51.106] 21TunnelCheckConnection[0x2]: raised error SE_NET_TIMEOUT TunnelCheckConnection: timeout
     [20190215 21:25:51.106] PD[E773] [B249] (a:2): failed to open tunnel[0x2] to 172.17.0.3:55555/TCP - error: SE_NET_TIMEOUT TunnelCheckConnection: timeout enc: SRP, tunnels: 1
     [20190215 21:25:51.106] 21TunnelCheckConnection[0x2][<NULL>] [B249]: destroyed
     [20190215 21:25:51.106] 21TunnelCheckConnection[0x3]: raised error SE_NET_TIMEOUT TunnelCheckConnection: timeout
     [20190215 21:25:51.106] PD[590B] [B249] (a:2): failed to open tunnel[0x3] to 172.17.0.3:55555/TCP - error: SE_NET_TIMEOUT TunnelCheckConnection: timeout enc: SRP, tunnels: 1
     [20190215 21:25:51.106] 21TunnelCheckConnection[0x3][<NULL>] [B249]: destroyed
     [20190215 21:25:51.106] 21TunnelCheckConnection[0x4]: raised error SE_NET_TIMEOUT TunnelCheckConnection: timeout
     [20190215 21:25:51.106] PD[2481] [B249] (a:2): failed to open tunnel[0x4] to 172.17.0.3:55555/TCP - error: SE_NET_TIMEOUT TunnelCheckConnection: timeout enc: SRP, tunnels: 1
     [20190215 21:25:51.106] 21TunnelCheckConnection[0x4][<NULL>] [B249]: destroyed
     [20190215 21:25:51.106] 21TunnelCheckConnection[0x5][<NULL>] [B249]: created outgoing
     [20190215 21:25:51.106] 21TunnelCheckConnection[0x5][<NULL>] [B249]: Connect to 172.17.0.3:55555 via TCP
     [20190215 21:25:51.106] PD[E773] [B249] (a:2): checking tunnel[0x5] connection to 172.17.0.3:55555/TCP
     [20190215 21:25:51.106] 21TunnelCheckConnection[0x6][<NULL>] [B249]: created outgoing
     [20190215 21:25:51.106] 21TunnelCheckConnection[0x6][<NULL>] [B249]: Connect to 172.17.0.3:55555 via TCP
     [20190215 21:25:51.106] PD[590B] [B249] (a:2): checking tunnel[0x6] connection to 172.17.0.3:55555/TCP
     [20190215 21:25:51.106] 21TunnelCheckConnection[0x7][<NULL>] [B249]: created outgoing
     [20190215 21:25:51.106] 21TunnelCheckConnection[0x7][<NULL>] [B249]: Connect to 172.17.0.3:55555 via TCP
     [20190215 21:25:51.106] PD[2481] [B249] (a:2): checking tunnel[0x7] connection to 172.17.0.3:55555/TCP
     [20190215 21:25:54.613] 16TunnelConnection[0x8]: received ping
     [20190215 21:25:54.614] 16TunnelConnection[0x9]: received ping

     This is my config:
  6. Sure. By "not working" I mean I am unable to start my VM with the sound card attached to it. I get the following error (with and without the syslinux configuration):

     Execution error
     internal error: process exited while connecting to monitor: 2019-02-14T08:18:27.585203Z qemu-system-x86_64: -device vfio-pci,host=00:1f.3,id=hostdev0,bus=pci.0,addr=0x6: vfio error: 0000:00:1f.3: group 13 is not viable
     Please ensure all devices within the iommu_group are bound to their vfio bus driver.

     But I cannot add all devices in that IOMMU group to the VM, afaik. (A sketch for checking the driver bindings follows this list.) Right now I have a workaround running with a USB sound card on a USB controller [1b21:2142] bound to my VM, but I would really like to use the onboard audio, since it is surely better. I also added this to the syslinux file and it works fine.

     BTW: Is it normal that the PCI device to add to the VM does not show up (in the selection box in the VM options in the screenshot above) after adding the vfio-pci.ids statement to the syslinux file and rebooting? Or is that an unraid bug? After I added it manually in the XML file, it is now showing up...

     syslinux file.txt
     vm_config.xml
  7. I am currently also trying to remap my onboard audio to a Windows VM, but the way you described does not seem to work in my case. I have an "ASRock - Z370 Professional Gaming i7" with Creative onboard audio and an i7-8700K. I tried PCIe ACS Override "Downstream" and "Both".

     My append line is:

     append pcie_acs_override=downstream,multifunction vfio-pci.ids=8086:a2f0 modprobe.blacklist=i2c_i801,i2c_smbus initrd=/bzroot

     And the corresponding IOMMU group is:

     IOMMU group 13:
     [8086:a2c9] 00:1f.0 ISA bridge: Intel Corporation Z370 Chipset LPC/eSPI Controller
     [8086:a2a1] 00:1f.2 Memory controller: Intel Corporation 200 Series/Z370 Chipset Family Power Management Controller
     [8086:a2f0] 00:1f.3 Audio device: Intel Corporation 200 Series PCH HD Audio
     [8086:a2a3] 00:1f.4 SMBus: Intel Corporation 200 Series/Z370 Chipset Family SMBus Controller

     I hope someone is able to help me with this, if it is even possible. (A loop for dumping all IOMMU groups follows this list.) Thanks to everyone who can help.

     IOMMU.txt
  8. In the meantime, the problem seems to have fixed itself. I was just able to install a docker update and could add a container via the UI. The main difference was that I did it from my phone via WiFi, but I had already tried different browsers when the issue first arose. Thanks for the help, this topic can be closed.
  9. It still happens in safe mode. I also deleted the docker.img file (/mnt/user/system/docker/docker.img), which removed all the docker containers I already had. But the web UI still crashes when adding a new docker container, and no image gets downloaded.
  10. Hello,

     I am currently trying to set up my second unraid server and sync the two with each other (via the linuxserver/resilio docker). After some issues with the cache drive (I changed the setting of some big shares to Prefer instead of Yes), my server is running fine again. Furthermore, I installed "CA Auto Turbo Write Mode" and "CA Mover Tuning" today when experiencing issues with the mover.

     I realized that my resilio docker had become unresponsive in the web GUI, but on the other server it was still transferring data. So I waited until it finished, but the web UI was still not responding (even after a docker and system restart). After removing the docker via the command line, I realized that I am unable to add ANY docker container to this server via the web UI. I tried multiple restarts of the server and killed all other dockers. Furthermore, it is also not possible to change settings when clicking on a container and adjusting values there. The behavior is always the same: as soon as I press Apply in the configuration menu, the webpage loads for a very long time and then shows that the connection was reset.

     I tried to track down the process which should get started from the command line, but was unsuccessful. I also was not able to find anything in the diagnostics zip, but I attached it anyway. I also did not see the image being downloaded by docker, so the command doesn't even get executed! When I add docker containers via the command line, the image gets pulled and the docker gets set up (see the CLI sketch after this list). The image I tried: https://hub.docker.com/r/tobilg/mini-webserver/

     I hope you can help me get my docker back up and running.

     Kind regards,
     Alex

     kueh-diagnostics-20190208-1937.zip
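
On the SSD settings question in post 3: one setting that commonly matters for cache-pool SSDs is periodic TRIM. A minimal sketch, assuming the cache pool is mounted at /mnt/cache (the mount point is an assumption; the Dynamix SSD TRIM plugin schedules the same call):

    # Manually TRIM the cache filesystem; -v reports how many bytes were trimmed.
    fstrim -v /mnt/cache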
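
For the failed-drive diagnosis in post 4: the same SMART report as the attached .txt can be pulled straight from the unraid console with smartmontools. A minimal sketch, assuming the drive appears as /dev/nvme0n1 (check with ls /dev/nvme*):

    # Full SMART/health report for an NVMe drive: temperature, percentage used,
    # media and data integrity errors, and the error log.
    smartctl -a /dev/nvme0n1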
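
On the runaway logs in post 5: if the noisy output lands on the container's stdout/stderr, Docker's json-file logging driver can rotate it by itself instead of rotating by hand. A minimal sketch, assuming the linuxserver/resilio-sync image with otherwise default options (not the poster's exact run command):

    # Re-create the container with built-in log rotation:
    # at most three log files of 10 MB each.
    docker run -d --name=resilio-sync \
        --log-driver=json-file \
        --log-opt max-size=10m \
        --log-opt max-file=3 \
        linuxserver/resilio-sync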
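
On the "group 13 is not viable" error in post 6: qemu is saying that at least one endpoint in the IOMMU group is still bound to a host driver. A quick check, using the group members listed in post 7 (the device addresses are taken from there):

    # Show which kernel driver each device in IOMMU group 13 is bound to;
    # endpoints passed to the VM must report "Kernel driver in use: vfio-pci".
    for dev in 00:1f.0 00:1f.2 00:1f.3 00:1f.4; do
        lspci -nnk -s "$dev"
    done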
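
For the IOMMU layout in post 7 (the attached IOMMU.txt): the full group listing can be regenerated on any console with a loop over sysfs. A minimal sketch, assuming the kernel booted with the IOMMU enabled:

    # Print every IOMMU group and the PCI devices it contains.
    for g in /sys/kernel/iommu_groups/*; do
        echo "IOMMU group ${g##*/}:"
        for d in "$g"/devices/*; do
            echo -e "\t$(lspci -nns "${d##*/}")"
        done
    done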
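
On the web UI failure in post 10: pulling and starting the same test image from the console separates the Docker engine from the unraid web UI. A minimal sketch; the container name and port mapping are assumptions, not taken from the image's documentation:

    # If this succeeds while the web UI still resets the connection,
    # the Docker engine itself is healthy and the problem is in the UI layer.
    docker pull tobilg/mini-webserver
    docker run -d --name=mini-webserver -p 8080:8080 tobilg/mini-webserver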