Everything posted by tjsyl

  1. So far I've gone down a rabbit hole and found the Plex DB had some errors. I did a full rebuild of the Plex DB via this helpful guide (https://www.reddit.com/r/PleX/comments/z7i4va/repair_a_corrupted_database_on_unraid_updated/); I couldn't get ChuckPaPlex's script to play nice with UR, I think I may have been executing it from the wrong directory. After the manual full repair I was 80% through the parity check with no issues. Just kidding: I was checking on it as I was typing this and I see something is running amok on the RAM (98% of 160GB). I had Unmanic set to use the ramdisk long ago (/tmp/xxxxxx), but I am 99% sure that when I added the 4 1TB SSDs (4-6 months ago?) I changed it to one of the two 2TB cache pools I have set up. I had noticed Unmanic running out of space on very large Linux ISOs. I had the syslog writing to my other UR server and it looked useless; after working on some stuff for work I came back and noticed the server was responsive again and the syslog had jumped from 14kB to 80+kB. Apparently Unmanic was running amok. It would seem I didn't have enough patience to wait it out the last few times this happened, and I took action before UR fixed itself. For now I shut down Unmanic and disabled autostart; I will try to see what's going on with that container, or maybe move it to another server (quick ramdisk check sketch below). Anything else helpful in this log? I know exactly what the "smb_panic" was about. I also redacted a few things with *****. Not sure why the tab open in Chrome on my phone feels the need to log back in every 15 minutes but, meh. I think all is well now, but let me know if anything looks strange please. I will disable writing the syslog to the flash drive for now but keep it mirrored to my other UR server. syslog-10.xxx.xxx.xxx.log
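
     A minimal sketch (Python, assuming /tmp is tmpfs as on stock Unraid, so anything written there lives in RAM) of how I'd check whether something is filling the ramdisk and what the biggest files are:

        #!/usr/bin/env python3
        # Report tmpfs usage for /tmp and the ten largest files under it,
        # to confirm whether a container (Unmanic in my case) is still
        # writing transcode output to RAM.
        import os
        import shutil

        TMP = "/tmp"  # tmpfs on Unraid, so this space is RAM

        total, used, _free = shutil.disk_usage(TMP)
        print(f"{TMP}: {used / 2**30:.1f} GiB used of {total / 2**30:.1f} GiB")

        sizes = []
        for root, _dirs, files in os.walk(TMP):
            for name in files:
                path = os.path.join(root, name)
                try:
                    sizes.append((os.path.getsize(path), path))
                except OSError:
                    pass  # files can vanish mid-walk

        for size, path in sorted(sizes, reverse=True)[:10]:
            print(f"{size / 2**20:10.1f} MiB  {path}")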
  2. X9DRi-F, 2x E5-2690v2, 160GB DDR3, 15 drives with 0 errors. I was running 6.12.6 and lost my "MAIN" display; other tabs would load but "MAIN" wouldn't show the drives. After 20+ days since the last reboot, and seeing a couple of posts about 6.12.8 fixing that issue, I figured why not; I had already updated my X11DPH-T server and it has been smooth for a few weeks. This issue seems to have started a few days after updating. The web UI and SMB shares become unresponsive, but I can still pull up my console via IPMI or directly; I can type the username but it times out before it ever shows the password prompt. After trying that a few times it becomes unresponsive, and it may or may not drop a line when hitting return. I was in the middle of a Plex stream when it happened this time, and the array was in the middle of a parity check after the last occurrence (2x 12TB parity). I can also see via the LEDs on my LSI SAS controllers that the drives still look busy with what I assume to be the parity check. I have yet to try pinging, but if it happens again I will see if I get a reply. The system seems to take input when instructed to shut down gracefully (from IPMI), but it complains about the hung process ID 4597 (see screenshot). Is there anything in the diagnostics that can tell me what process that was/is? (A quick /proc check sketch is below.) I haven't made any changes to my BIOS settings, but I am attaching some screenshots in an effort to verify whether something is not playing nice with 6.12.8. ur0-diagnostics-20240316-0118.zip ur0-syslog-20240316-0816.zip
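
     For the hung-process question, a minimal sketch (Python; PID 4597 is just the one from my screenshot) that reads /proc to show what the process is and whether it is stuck in uninterruptible "D" sleep, the usual reason a graceful shutdown hangs:

        #!/usr/bin/env python3
        # Look up a PID in /proc: its name and scheduler state. A state of
        # "D (disk sleep)" means uninterruptible I/O wait, the classic
        # culprit when shutdown stalls on one process.
        import sys

        pid = sys.argv[1] if len(sys.argv) > 1 else "4597"
        try:
            with open(f"/proc/{pid}/comm") as f:
                print("name :", f.read().strip())
            with open(f"/proc/{pid}/status") as f:
                for line in f:
                    if line.startswith(("State:", "PPid:")):
                        print(line.strip())
        except FileNotFoundError:
            print(f"PID {pid} is gone (no /proc/{pid})")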
  3. I was able to remove the container from the folder, move it below that folder (so it changes the Unraid start order), then add it back to the folder.
  4. Mixes up the cache disks with the data disks.
  5. Found a bug with the main page mixing up the drives. They show correctly under all the other tabs. See screenshots.
  6. So I found one of the 4TB drives has an increasing UDMA CRC error count, even after moving it to a different slot. I am going to stop the array and replace that drive, even though CRC errors usually point at the cabling rather than the drive itself (sketch below for keeping an eye on the counter). It's 4.5 years old, and of my 4TB drives it's one of the newer ones... Oh well, that's why I've been buying spares for cheap when I find them.
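
     A minimal sketch (Python, assuming smartmontools is installed; /dev/sdX is a placeholder) for watching that counter, since SMART attribute 199 (UDMA_CRC_Error_Count) is the one that climbs on bad cables or backplanes:

        #!/usr/bin/env python3
        # Print SMART attribute 199 (UDMA_CRC_Error_Count) for a drive.
        # A raw value that keeps rising usually means cabling/backplane
        # trouble rather than a failing disk.
        import subprocess
        import sys

        dev = sys.argv[1] if len(sys.argv) > 1 else "/dev/sdX"  # placeholder
        out = subprocess.run(["smartctl", "-A", dev],
                             capture_output=True, text=True).stdout
        for line in out.splitlines():
            if line.lstrip().startswith("199"):
                print(line.strip())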
  7. I rebooted and Docker came back up, but now I'm getting an intermittent timeout/lag issue. I've ruled out the switch and the ethernet cables, and am now trying to determine what's dragging the whole system down. The only other thing different from the last few months, during which this system has been solid, is the 4 brand new (TeamGroup 3D NAND SLC) 1TB SSD cache drives connected directly to the mobo SATA. I tried using the integrated SAS (mini-SAS to SATA) but Unraid would not see the drives. After months of running on only the array (cacheless) I got a chance to shut down and plug in the 120-128GB drives I had in there, and one basically needed a diaper, so Amazon next-morning'd the 4 1TB drives. The initial issue was traced down to a bad or not cleanly connected (crap) ethernet cable. Now it's intermittent timeouts on both the constant running ping and the web GUI (the ping loop I'm using is sketched below). If I should start a new topic for this please let me know. The reboot did fix the initial issue (not that the damn Quadro K620 will do me any good; I found it only does H.264, so there is a Quadro P400 on the way). ur0-diagnostics-20230911-1224.zip
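
     The "constant ping" is just a timestamped loop so the timeouts can be lined up against the syslog; a minimal sketch (Python; the target IP is a placeholder for the server):

        #!/usr/bin/env python3
        # One ping per second, logging only the failures with a wall-clock
        # timestamp that can be matched against the server's syslog.
        import subprocess
        import time
        from datetime import datetime

        TARGET = "10.0.0.10"  # placeholder: the Unraid box's LAN IP

        while True:
            ok = subprocess.run(
                ["ping", "-c", "1", "-W", "1", TARGET],
                stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
            ).returncode == 0
            if not ok:
                stamp = datetime.now().isoformat(timespec="seconds")
                print(f"{stamp}  TIMEOUT to {TARGET}", flush=True)
            time.sleep(1)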
  8. After throwing a Quadro K620 in this machine for Unmanic I booted up and everything started normally; I have auto-start on for the array. I had a bad ethernet cable causing issues and did an unclean shutdown yesterday but canceled the parity check, so one kicked off after the array started; I don't know if that is somehow not helping the situation. I installed the Nvidia driver, went into Plugins and got the GPU UUID from the Nvidia Driver plugin (sketch below), went to Docker and stopped all the containers, set one of them to not auto-start, then went to Settings > Docker, disabled and applied, then enabled and applied. Now it's stopped and I don't see anything in the system log stating why. I'm attaching the diagnostics if someone can please give me a hand. If I just need to reboot, it's going to be a while for the 12TB parity check to finish, and I don't want to stop it this time unless someone says it's not going to do any harm. I've never had any issues with these disks. Last week I replaced the two 8TB parity drives (one had thousands of errors) with two 12TB drives and added the good 8TB into the array. ur0-diagnostics-20230910-2200.zip
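
     For anyone following along, "got the GPU UUID from the Nvidia Driver plugin" boils down to what nvidia-smi reports; a minimal sketch (Python, assuming the driver plugin is installed so nvidia-smi is on the PATH; NVIDIA_VISIBLE_DEVICES is the variable the container template takes):

        #!/usr/bin/env python3
        # List GPUs via `nvidia-smi -L` and pull out the UUID to paste into
        # a container template's NVIDIA_VISIBLE_DEVICES variable.
        import subprocess

        out = subprocess.run(["nvidia-smi", "-L"],
                             capture_output=True, text=True).stdout
        for line in out.splitlines():
            print(line)
            if "UUID:" in line:
                uuid = line.split("UUID:")[1].strip(" )")
                print("  -> NVIDIA_VISIBLE_DEVICES =", uuid)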
  9. Run a VM and install a PACS server in it. https://medevel.com/10-open-source-pacs-dicom/
  10. Following the rootisgod.com guide I got ESXi 7.0U1 running on UR 6.12.3 without issue. Here is my VM XML:

     <?xml version='1.0' encoding='UTF-8'?>
     <domain type='kvm' id='6'>
       <name>ESXi-7.0.1</name>
       <uuid>da949910-4a06-9bdc-2af9-0f9f29fe7cb3</uuid>
       <metadata>
         <vmtemplate xmlns="unraid" name="Linux" icon="linux.png" os="linux"/>
       </metadata>
       <memory unit='KiB'>33554432</memory>
       <currentMemory unit='KiB'>33554432</currentMemory>
       <memoryBacking>
         <nosharepages/>
       </memoryBacking>
       <vcpu placement='static'>16</vcpu>
       <cputune>
         <vcpupin vcpu='0' cpuset='8'/>
         <vcpupin vcpu='1' cpuset='24'/>
         <vcpupin vcpu='2' cpuset='9'/>
         <vcpupin vcpu='3' cpuset='25'/>
         <vcpupin vcpu='4' cpuset='10'/>
         <vcpupin vcpu='5' cpuset='26'/>
         <vcpupin vcpu='6' cpuset='11'/>
         <vcpupin vcpu='7' cpuset='27'/>
         <vcpupin vcpu='8' cpuset='12'/>
         <vcpupin vcpu='9' cpuset='28'/>
         <vcpupin vcpu='10' cpuset='13'/>
         <vcpupin vcpu='11' cpuset='29'/>
         <vcpupin vcpu='12' cpuset='14'/>
         <vcpupin vcpu='13' cpuset='30'/>
         <vcpupin vcpu='14' cpuset='15'/>
         <vcpupin vcpu='15' cpuset='31'/>
       </cputune>
       <resource>
         <partition>/machine</partition>
       </resource>
       <os>
         <type arch='x86_64' machine='pc-i440fx-5.1'>hvm</type>
       </os>
       <features>
         <acpi/>
         <apic/>
       </features>
       <cpu mode='host-passthrough' check='none' migratable='on'>
         <topology sockets='1' dies='1' cores='8' threads='2'/>
         <cache mode='passthrough'/>
         <feature policy='require' name='topoext'/>
         <feature policy='require' name='vmx'/>
       </cpu>
       <clock offset='utc'>
         <timer name='rtc' tickpolicy='catchup'/>
         <timer name='pit' tickpolicy='delay'/>
         <timer name='hpet' present='no'/>
       </clock>
       <on_poweroff>destroy</on_poweroff>
       <on_reboot>restart</on_reboot>
       <on_crash>restart</on_crash>
       <devices>
         <emulator>/usr/local/sbin/qemu</emulator>
         <disk type='file' device='disk'>
           <driver name='qemu' type='qcow2' cache='writeback'/>
           <source file='/mnt/user/domains/Linux/vdisk1.img' index='3'/>
           <backingStore/>
           <target dev='hdc' bus='usb'/>
           <serial>vdisk1</serial>
           <boot order='1'/>
           <alias name='usb-disk2'/>
           <address type='usb' bus='0' port='1'/>
         </disk>
         <disk type='file' device='disk'>
           <driver name='qemu' type='qcow2' cache='writeback'/>
           <source file='/mnt/user/domains/Linux/vdisk2.img' index='2'/>
           <backingStore/>
           <target dev='hdd' bus='sata'/>
           <serial>vdisk2</serial>
           <alias name='sata0-0-3'/>
           <address type='drive' controller='0' bus='0' target='0' unit='3'/>
         </disk>
         <disk type='file' device='cdrom'>
           <driver name='qemu' type='raw'/>
           <source file='/mnt/user/isos/VMware-VMvisor-Installer-7.0U1-16850804.x86_64.iso' index='1'/>
           <backingStore/>
           <target dev='hda' bus='ide'/>
           <readonly/>
           <boot order='2'/>
           <alias name='ide0-0-0'/>
           <address type='drive' controller='0' bus='0' target='0' unit='0'/>
         </disk>
         <controller type='usb' index='0' model='ich9-ehci1'>
           <alias name='usb'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci1'>
           <alias name='usb'/>
           <master startport='0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci2'>
           <alias name='usb'/>
           <master startport='2'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci3'>
           <alias name='usb'/>
           <master startport='4'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
         </controller>
         <controller type='pci' index='0' model='pci-root'>
           <alias name='pci.0'/>
         </controller>
         <controller type='ide' index='0'>
           <alias name='ide'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
         </controller>
         <controller type='sata' index='0'>
           <alias name='sata0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
         </controller>
         <controller type='virtio-serial' index='0'>
           <alias name='virtio-serial0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
         </controller>
         <interface type='bridge'>
           <mac address='52:54:00:da:f9:0e'/>
           <source bridge='br0'/>
           <target dev='vnet5'/>
           <model type='vmxnet3'/>
           <alias name='net0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
         </interface>
         <serial type='pty'>
           <source path='/dev/pts/3'/>
           <target type='isa-serial' port='0'>
             <model name='isa-serial'/>
           </target>
           <alias name='serial0'/>
         </serial>
         <console type='pty' tty='/dev/pts/3'>
           <source path='/dev/pts/3'/>
           <target type='serial' port='0'/>
           <alias name='serial0'/>
         </console>
         <channel type='unix'>
           <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-6-Linux/org.qemu.guest_agent.0'/>
           <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
           <alias name='channel0'/>
           <address type='virtio-serial' controller='0' bus='0' port='1'/>
         </channel>
         <input type='tablet' bus='usb'>
           <alias name='input0'/>
           <address type='usb' bus='0' port='2'/>
         </input>
         <input type='mouse' bus='ps2'>
           <alias name='input1'/>
         </input>
         <input type='keyboard' bus='ps2'>
           <alias name='input2'/>
         </input>
         <graphics type='vnc' port='5900' autoport='yes' websocket='5700' listen='0.0.0.0' keymap='en-us'>
           <listen type='address' address='0.0.0.0'/>
         </graphics>
         <audio id='1' type='none'/>
         <video>
           <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
           <alias name='video0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
         </video>
         <memballoon model='virtio'>
           <alias name='balloon0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
         </memballoon>
       </devices>
       <seclabel type='dynamic' model='dac' relabel='yes'>
         <label>+0:+100</label>
         <imagelabel>+0:+100</imagelabel>
       </seclabel>
     </domain>
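
     The bits of that XML that seem to matter for nested ESXi (going off the guide plus my own trial and error, so treat this as my read rather than gospel): <cpu mode='host-passthrough'> with <feature policy='require' name='vmx'/> so the guest can run its own hypervisor, and the vmxnet3 NIC model, since the ESXi installer ships a driver for vmxnet3 but not for virtio.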
  11. Well, found out one of the 16GB RAM sticks was indeed bad. $14.13 later and it's been solid for well over 24 hours. One parity sync down, and 2.2 of 5.5TB moved so far.
  12. Got the diag from the most recent incident. ur0-diagnostics-20230707-1136.zip
  13. So, I've been going at this for 3 weeks now and my poor disks are on their (IDK for sure) 10th parity check... After doing all I can gather from the forums, checking hardware, and changing from macvlan to ipvlan, the issue still persists. I've moved to new hardware (HP Z820 to a Supermicro X9DRi-F) and replaced the flash drive. The new flash drive started with 6.12.0 and I upgraded to .1 and .2 to see if it helps resolve this issue, but to no avail. The original config was 9 drives: 7 4TB drives in the array and 2 8TB drives in a mirror. I've replaced the 2 parity drives with 8TB drives and 2 of the 4TB drives with 8TB drives (one at a time, letting it rebuild each drive). So as it sits I've got 11 drives in a 44TB array and the 2 original 8TB drives still in the mirror. I am trying to use unBALANCE (with all Docker containers stopped) to move the data from the mirror to the array, then plan on adding those 2 drives to the array. It keeps locking up at some point during the move and I am forced to hard reboot. I am still able to access the machine via IPMI, but at the physical console or IPMI console Unraid is unresponsive. I would love to see this issue get figured out, but I am considering copying the config folder and reloading the flash drive with 6.11.5. I just enabled syslog with mirror to flash and am now going to start the copy again. Yesterday I came back from a lockup and hard reboot, and about 45% into the parity sync it happened again. I've got the diagnostics from one of the instances but they don't have shit; I dug through each file and found nothing while I was waiting for someone to chime in on another topic I created, and now found this one to tag along in. If and when it happens again I'll post the diagnostics. The only Docker container I have running is the Twingate connector. It's really useful with this because I get a notification when the server goes offline.
  14. So, I've been going at this for 3 weeks now and my poor disks are on their (IDK for sure) 10th parity check... After doing all I can gather from the forums, checking hardware, and changing from macvlan to ipvlan, the issue still persists. I've moved to new hardware (HP Z820 to a Supermicro X9DRi-F) and replaced the flash drive. The new flash drive started with 6.12.0 and I upgraded to .1 and .2 to see if it helps resolve this issue, but to no avail. The original config was 9 drives: 7 4TB drives in the array and 2 8TB drives in a mirror. I've replaced the 2 parity drives with 8TB drives and 2 of the 4TB drives with 8TB drives (one at a time, letting it rebuild each drive). So as it sits I've got 11 drives in a 44TB array and the 2 original 8TB drives still in the mirror. I am trying to use unBALANCE (with all Docker containers stopped) to move the data from the mirror to the array, then plan on adding those 2 drives to the array. It keeps locking up at some point during the move and I am forced to hard reboot. I am still able to access the machine via IPMI, but at the physical console or IPMI console Unraid is unresponsive. I don't know if it would help, but I am considering copying the config folder and reloading the flash drive with 6.11.5. I've attached anonymized diagnostics. ur0-diagnostics-20230705-1104.zip
  15. Ty, I guess I knew that; I just want to do both at once to cut down on the rebuild time.
  16. Found and read a 2020 topic about this, but my situation is a bit different so I figured I would start a new topic. 2 4TB parity drives, 5 4TB in the array, 2 8TB in a mirror for all my media. I have 4 "new" 8TB drives that I am preclearing on my test UR server at the moment. I am physically moving the drives from a Z820 to an entirely different PC but plan on just moving my flash drive over. I want to replace the parity drives (2x 4TB) with 2 of the 8TB drives. After booting up on the new hardware, do I need to do both parity drives at the same time, or one at a time? After the parity rebuild is done I will transfer the media data off the mirror and rope those 2 8TB drives into the Unraid array. I am going to add the remaining 2 8TB drives to the "in case shit" box with the other spare 4TB drives I have.
  17. I would like to see multiple arrays and cache pools. I would like to keep my 5 3TB drives for media, my 7 4TB drives for main storage, and my 3 8TB drives for backups and temp storage. I currently have the 5 3TB drives in another machine running openSUSE, but I just put together a system with a much larger case and would like to consolidate it all onto one box running Unraid.
  18. Didn't notice the user shares setting; enabled it and started back up with success! Thank you.
  19. Supermicro X8DTH-6F. When I first booted 6.9.2 it wouldn't recognize either NIC; I tried 6.10-rc1 and she comes up just fine, but... After assigning the drives (all SAS: 1 1TB parity, 2 1TB, 4 600GB) it went through and formatted them and they show correctly under MAIN, but under Shares I have disk1, disk2, disk3 through disk6 individually; it does not appear to be creating an array, just mounting the drives and doing whatever with the parity drive. The Docker and VM (libvirt) services fail to start, and Settings shows the "/mnt/user/xxxx" paths do not exist. I have gone through and started a "New Config" but the end result is the same. Any help would be greatly appreciated. If this is an rc1 bug and anyone has advice on getting the SMC NICs recognized by 6.9.2, I am more than willing to reinstall. This is a second Unraid server with a trial license at this point; I have 6.9.2 running beautifully on my Z820, so I'm wanting to get this running more as a test box and maybe move away from ESXi running on my Z440. I have another 1TB SATA drive that's showing up as 3GB, but that's the damn integrated SAS controller; I'm trying to figure out how to update that firmware, or put in a PCIe SAS controller I already have laying around. supermicro-nas-diagnostics-20210909-2144.zip