tjsyl

Members

  • Posts: 26
  • Joined
  • Last visited

About tjsyl

  • Birthday November 13


tjsyl's Achievements

Noob (1/14)

Reputation: 0

Community Answers (2)

  1. So far I've gone down a rabbit hole and found the Plex DB had some errors. I did a full rebuild of the Plex DB via this helpful guide: https://www.reddit.com/r/PleX/comments/z7i4va/repair_a_corrupted_database_on_unraid_updated/. I couldn't get ChuckPaPlex's script to play nice with UR; I think I may have been executing it from the wrong directory. After a manual full repair I was 80% through the parity check with no issues.

     Just kidding. I was checking on it as I was typing this and I see something is running amok on the RAM (98% of 160GB). I had Unmanic set to use the ramdisk long ago (/tmp/xxxxxx), but I am 99% sure that when I added the four 1TB SSDs (4-6 months ago?) I changed it to one of the two 2TB cache pools I have set up. I had noticed Unmanic running out of space on very large Linux ISOs. I had the syslog writing to my other UR server and it looked useless; after working on some stuff for work I came back, noticed the server was responsive, and saw the syslog had jumped from 14kb to 80+kb. Apparently Unmanic was running amok. It would seem I don't have enough patience to wait it out; the last few times this happened I took action before UR fixed itself. For now I've shut down Unmanic and disabled autostart, and I'll try to see what's going on with that container, or maybe move it to another server.

     Anything else helpful in this log? I know exactly what the "smb_panic" was about. I also omitted a few things *****. Not sure why the tab open in Chrome on my phone feels the need to log back in every 15 minutes but, meh. I think all is well now, but please let me know if anything looks strange. I will disable writing the syslog to the flash drive for now but keep it mirrored to my other UR server. syslog-10.xxx.xxx.xxx.log
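     For reference, a minimal sketch of the dump-and-rebuild repair that guide walks through, run with Plex stopped. The appdata and "Plex SQLite" paths are assumptions for a typical install (the binary normally lives inside the Plex install itself), so adjust them to yours:

        # Work inside the Plex databases folder (path assumed; check your appdata share)
        cd "/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Plug-in Support/Databases"
        # Always keep a backup of the broken DB before touching it
        cp com.plexapp.plugins.library.db com.plexapp.plugins.library.db.bak
        # Use Plex's bundled "Plex SQLite" binary (path assumed), not plain sqlite3,
        # so Plex's custom schema extensions survive the round trip
        "/usr/lib/plexmediaserver/Plex SQLite" com.plexapp.plugins.library.db .dump > dump.sql
        mv com.plexapp.plugins.library.db com.plexapp.plugins.library.db.broken
        # Rebuild a fresh database from the dump
        "/usr/lib/plexmediaserver/Plex SQLite" com.plexapp.plugins.library.db < dump.sql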
  2. X9DRi-F, 2x E5-2690 v2, 160GB DDR3, 15 drives with 0 errors. I was running 6.12.6 and lost my "Main" display; other tabs would load, but "Main" wouldn't show the drives. After 20+ days since the last reboot, and seeing a couple of posts about 6.12.8 fixing that issue, I figured why not; I had already updated my X11DPH-T server and it has been smooth for a few weeks. This issue seems to have started a few days after updating.

     The web UI and SMB shares become unresponsive, but I can still pull up my console via IPMI or directly; I can type the username but it times out before it ever shows the password prompt. After trying that a few times it becomes unresponsive, and it may or may not drop a line when hitting return. I was in the middle of a Plex stream when it happened this time, and the array was in the process of a parity check after the last occurrence (12TB x2 parity). I can also see via the LEDs on my LSI SAS controllers that the drives look to still be busy with what I assume to be the parity check. I have yet to try pinging, but if it happens again I will see if I get a reply. The system seems to take input when instructed to shut down gracefully (from IPMI), but it complains about the hung process ID 4597 (see screenshot). Is there anything in the diagnostics that can tell me what process that was/is? I haven't made any changes to my BIOS settings, but I am attaching some screenshots in an effort to verify whether something is not playing nice with 6.12.8. ur0-diagnostics-20240316-0118.zip ur0-syslog-20240316-0816.zip
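     If it happens again before a reboot, a few generic console checks I'll try from IPMI to identify that PID (4597 comes from the shutdown complaint; substitute whatever it reports next time):

        # Name and command line of the process the shutdown is stuck on
        ps -fp 4597
        # "D" state means uninterruptible I/O wait, the usual culprit in hangs like this
        grep '^State' /proc/4597/status
        # Kernel hung-task warnings, if any were logged
        dmesg | grep -i "blocked for more than"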
  3. I was able to remove the container from the folder, move it below that folder (so it changes the Unraid start order), then add it back to the folder.
  4. Mixes up the cache disks with the data disks.
  5. Found a bug with the Main page mixing up the drives. It shows correctly under all the other tabs. See screenshots.
  6. So I found one of the 4TB drives has an increasing UDMA CRC error count, even after moving it to a different slot. I am going to stop the array and replace that drive, even though CRC errors shouldn't really be the drive's fault. It's 4.5 years old, and of my 4TB drives it's one of the newer ones... Oh well, that's why I've been buying spares for cheap when I find them.
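     For anyone following along, this is the counter I've been watching; /dev/sdX is a placeholder for the suspect drive:

        # Raw value of SMART attribute 199 (UDMA CRC error count); a climbing count
        # usually points at the SATA link (cable, slot, backplane) rather than the drive
        smartctl -A /dev/sdX | grep -i crc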
  7. I rebooted and Docker came back up, but now I'm getting an intermittent timeout/lag issue. I've ruled out the switch and Ethernet cables, and am now trying to determine what's dragging the whole system down. The only other thing different from the last few months, during which this system has been solid, is the four brand-new (TeamGroup 3D NAND SLC) 1TB SSD cache drives connected directly to the motherboard SATA ports. I tried using the integrated SAS (mini-SAS to SATA) but Unraid would not see the drives. After months of running on only the array (cacheless), I got a chance to shut down and plug in the 120-128GB drives I had in there, and one basically needed a diaper; Amazon next-morninged the four 1TB drives. The original issue was traced down to a bad or not cleanly connected (crap) Ethernet cable, but now I'm seeing intermittent timeouts on both the constant ping I have running and the web GUI. If I should start a new topic for this, please let me know. The reboot did fix the initial issue (not that the damn Quadro K620 will do me any good; I found it only does H.264, so there is a Quadro P400 on the way). ur0-diagnostics-20230911-1224.zip
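     To pin down the drop windows, I'm thinking of leaving a timestamped ping running so I can line the gaps up against the syslog. A rough sketch (the target IP and log path are placeholders):

        # Prefix every ping reply/timeout with a timestamp and keep a copy on disk
        ping -i 1 10.0.0.1 | while read -r line; do
            echo "$(date '+%F %T') $line"
        done | tee /mnt/user/system/ping.log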
  8. After throwing a Quadro K620 in this machine for Unmanic I booted up and everything started normally; I have auto-start on for the array. I had a bad Ethernet cable causing issues and did an unclean shutdown yesterday, but I canceled that parity check, so it kicked off again after the array started; I don't know if that is somehow not helping the situation. I installed the Nvidia driver, went into Plugins and got the GPU UID from the Nvidia Driver plugin, went to Docker and stopped all the containers, set one of them to not auto-start, went to Settings > Docker, disabled and applied, then enabled and applied. Now it's stopped and I don't see anything in the system log stating why. I'm attaching the diagnostics; can someone please give me a hand? If I just need to reboot, it's going to be a while for the 12TB parity check to finish, and I don't want to stop it this time unless someone says that won't do any harm. I've never had any issues with these disks. Last week I replaced the two 8TB parity drives (one had thousands of errors) with two 12TB drives and added the good 8TB into the array. ur0-diagnostics-20230910-2200.zip
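     While waiting on the parity check, a couple of things I can check from the console when the Docker service shows stopped with nothing in the main system log (paths assume the stock Unraid layout; they're guesses for other setups):

        # The Docker engine logs outside the main syslog on Unraid
        tail -n 50 /var/log/docker.log
        # Confirm the docker.img vdisk exists and isn't full or zero-length
        ls -lh /mnt/user/system/docker/docker.img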
  9. Run a VM and install a server. https://medevel.com/10-open-source-pacs-dicom/
  10. Following the rootisgod.com guide, I got ESXi 7.0 U1 running on UR 6.12.3 without issue. Here is my VM XML:

     <?xml version='1.0' encoding='UTF-8'?>
     <domain type='kvm' id='6'>
       <name>ESXi-7.0.1</name>
       <uuid>da949910-4a06-9bdc-2af9-0f9f29fe7cb3</uuid>
       <metadata>
         <vmtemplate xmlns="unraid" name="Linux" icon="linux.png" os="linux"/>
       </metadata>
       <memory unit='KiB'>33554432</memory>
       <currentMemory unit='KiB'>33554432</currentMemory>
       <memoryBacking>
         <nosharepages/>
       </memoryBacking>
       <vcpu placement='static'>16</vcpu>
       <cputune>
         <vcpupin vcpu='0' cpuset='8'/>
         <vcpupin vcpu='1' cpuset='24'/>
         <vcpupin vcpu='2' cpuset='9'/>
         <vcpupin vcpu='3' cpuset='25'/>
         <vcpupin vcpu='4' cpuset='10'/>
         <vcpupin vcpu='5' cpuset='26'/>
         <vcpupin vcpu='6' cpuset='11'/>
         <vcpupin vcpu='7' cpuset='27'/>
         <vcpupin vcpu='8' cpuset='12'/>
         <vcpupin vcpu='9' cpuset='28'/>
         <vcpupin vcpu='10' cpuset='13'/>
         <vcpupin vcpu='11' cpuset='29'/>
         <vcpupin vcpu='12' cpuset='14'/>
         <vcpupin vcpu='13' cpuset='30'/>
         <vcpupin vcpu='14' cpuset='15'/>
         <vcpupin vcpu='15' cpuset='31'/>
       </cputune>
       <resource>
         <partition>/machine</partition>
       </resource>
       <os>
         <type arch='x86_64' machine='pc-i440fx-5.1'>hvm</type>
       </os>
       <features>
         <acpi/>
         <apic/>
       </features>
       <cpu mode='host-passthrough' check='none' migratable='on'>
         <topology sockets='1' dies='1' cores='8' threads='2'/>
         <cache mode='passthrough'/>
         <feature policy='require' name='topoext'/>
         <feature policy='require' name='vmx'/>
       </cpu>
       <clock offset='utc'>
         <timer name='rtc' tickpolicy='catchup'/>
         <timer name='pit' tickpolicy='delay'/>
         <timer name='hpet' present='no'/>
       </clock>
       <on_poweroff>destroy</on_poweroff>
       <on_reboot>restart</on_reboot>
       <on_crash>restart</on_crash>
       <devices>
         <emulator>/usr/local/sbin/qemu</emulator>
         <disk type='file' device='disk'>
           <driver name='qemu' type='qcow2' cache='writeback'/>
           <source file='/mnt/user/domains/Linux/vdisk1.img' index='3'/>
           <backingStore/>
           <target dev='hdc' bus='usb'/>
           <serial>vdisk1</serial>
           <boot order='1'/>
           <alias name='usb-disk2'/>
           <address type='usb' bus='0' port='1'/>
         </disk>
         <disk type='file' device='disk'>
           <driver name='qemu' type='qcow2' cache='writeback'/>
           <source file='/mnt/user/domains/Linux/vdisk2.img' index='2'/>
           <backingStore/>
           <target dev='hdd' bus='sata'/>
           <serial>vdisk2</serial>
           <alias name='sata0-0-3'/>
           <address type='drive' controller='0' bus='0' target='0' unit='3'/>
         </disk>
         <disk type='file' device='cdrom'>
           <driver name='qemu' type='raw'/>
           <source file='/mnt/user/isos/VMware-VMvisor-Installer-7.0U1-16850804.x86_64.iso' index='1'/>
           <backingStore/>
           <target dev='hda' bus='ide'/>
           <readonly/>
           <boot order='2'/>
           <alias name='ide0-0-0'/>
           <address type='drive' controller='0' bus='0' target='0' unit='0'/>
         </disk>
         <controller type='usb' index='0' model='ich9-ehci1'>
           <alias name='usb'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci1'>
           <alias name='usb'/>
           <master startport='0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci2'>
           <alias name='usb'/>
           <master startport='2'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci3'>
           <alias name='usb'/>
           <master startport='4'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
         </controller>
         <controller type='pci' index='0' model='pci-root'>
           <alias name='pci.0'/>
         </controller>
         <controller type='ide' index='0'>
           <alias name='ide'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
         </controller>
         <controller type='sata' index='0'>
           <alias name='sata0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
         </controller>
         <controller type='virtio-serial' index='0'>
           <alias name='virtio-serial0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
         </controller>
         <interface type='bridge'>
           <mac address='52:54:00:da:f9:0e'/>
           <source bridge='br0'/>
           <target dev='vnet5'/>
           <model type='vmxnet3'/>
           <alias name='net0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
         </interface>
         <serial type='pty'>
           <source path='/dev/pts/3'/>
           <target type='isa-serial' port='0'>
             <model name='isa-serial'/>
           </target>
           <alias name='serial0'/>
         </serial>
         <console type='pty' tty='/dev/pts/3'>
           <source path='/dev/pts/3'/>
           <target type='serial' port='0'/>
           <alias name='serial0'/>
         </console>
         <channel type='unix'>
           <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-6-Linux/org.qemu.guest_agent.0'/>
           <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
           <alias name='channel0'/>
           <address type='virtio-serial' controller='0' bus='0' port='1'/>
         </channel>
         <input type='tablet' bus='usb'>
           <alias name='input0'/>
           <address type='usb' bus='0' port='2'/>
         </input>
         <input type='mouse' bus='ps2'>
           <alias name='input1'/>
         </input>
         <input type='keyboard' bus='ps2'>
           <alias name='input2'/>
         </input>
         <graphics type='vnc' port='5900' autoport='yes' websocket='5700' listen='0.0.0.0' keymap='en-us'>
           <listen type='address' address='0.0.0.0'/>
         </graphics>
         <audio id='1' type='none'/>
         <video>
           <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
           <alias name='video0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
         </video>
         <memballoon model='virtio'>
           <alias name='balloon0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
         </memballoon>
       </devices>
       <seclabel type='dynamic' model='dac' relabel='yes'>
         <label>+0:+100</label>
         <imagelabel>+0:+100</imagelabel>
       </seclabel>
     </domain>
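     If anyone prefers loading the XML from the console instead of pasting it into the GUI XML editor, virsh takes it directly (the file path is a placeholder for wherever you saved it):

        # Register the VM definition with libvirt, then boot it
        virsh define /tmp/ESXi-7.0.1.xml
        virsh start ESXi-7.0.1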
  11. Well, found out one of the 16GB RAM sticks was indeed bad. $14.13 later and it's been solid for well over 24 hours. One parity sync down and 2.2 of 5.5TB moved so far.
  12. Got the diag from the most recent incident. ur0-diagnostics-20230707-1136.zip