mrdally204 Posted February 10, 2017

For the second day in a row my shares and web UI have stopped responding. Both times I was copying some files over SMB to an external HDD on my laptop running Windows 10. After SSHing into the box I ran top and noticed smbd is pegged at 100%. I am going to attach the system log, but I don't see anything useful in there. What can I do to capture this error, as I'm sure it will happen again?

Unraid 6.3.0

I looked for a tools/diagnostics section in the forum header but came up empty; I assume it's something within the web UI, which is unreachable.

http://pastebin.com/QCDt1ZrQ
gubbgnutten Posted February 10, 2017

Since you can SSH to the box, you can try to grab diagnostics from the command line by executing:

diagnostics

The diagnostics zip should then be saved to your flash drive.
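A rough sketch of that session (the hostname and log path here are assumptions, not from the post; on Unraid the flash drive is mounted at /boot):

```
ssh root@tower      # 'tower' stands in for your server's name or IP
diagnostics         # writes a zip named like tower-diagnostics-YYYYMMDD-HHMM.zip
ls /boot/logs/      # check here (or /boot itself) for the zip, then copy it off
```

The zip bundles the syslog, SMART reports, and share/disk configuration, which is why it is more useful to attach than the syslog alone.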
mrdally204 Posted February 10, 2017

Thank you for the information. I have since rebooted, but I'm sure this will occur again. I was unsuccessful in using a "shutdown now" command from SSH as well; I needed to use the reset button.
trurl Posted February 10, 2017

> Thank you for the information. I have since rebooted, but I'm sure this will occur again. I was unsuccessful in using a "shutdown now" command from SSH as well; I needed to use the reset button.

poweroff or reboot are the correct commands with recent versions. Straight from the horse's mouth.
JonathanM Posted February 10, 2017

Are your unraid array drives ReiserFS?
gubbgnutten Posted February 10, 2017

> Are your unraid array drives ReiserFS?

From syslog in the first post, yes.
JonathanM Posted February 10, 2017

> Are your unraid array drives ReiserFS?
>
> From syslog in the first post, yes.

I didn't look in the syslog, just made an educated guess based on the symptoms. It's looking more and more like ReiserFS is going to be a problem for some people from now on; maybe it's time for limetech to make an official statement about migrating off of ReiserFS sooner rather than later.
gubbgnutten Posted February 10, 2017

> I didn't look in the syslog, just made an educated guess based on the symptoms. It's looking more and more like ReiserFS is going to be a problem for some people from now on; maybe it's time for limetech to make an official statement about migrating off of ReiserFS sooner rather than later.

I think you are right. It used to be that I looked for the combination of ReiserFS and a nearly full disk as a potential problem, but nowadays I cringe just seeing ReiserFS disks present. Simply too many reported cases of symptoms magically disappearing after migrating to another FS...
trurl Posted February 10, 2017

> It used to be that I looked for the combination of ReiserFS and a nearly full disk as a potential problem, but nowadays I cringe just seeing ReiserFS disks present. Simply too many reported cases of symptoms magically disappearing after migrating to another FS...

Probably some people are still on ReiserFS because changing is just too scary. We will have support problems whether they stay on ReiserFS or decide to convert.
mrdally204 Posted February 10, 2017

I am not opposed to migrating to a different file system. I use ReiserFS because, well, unraid made me use it at one time. Is there an official way to accomplish this? Will I need extra storage while the migration takes place?
trurl Posted February 10, 2017

> I am not opposed to migrating to a different file system. Is there an official way to accomplish this? Will I need extra storage while the migration takes place?

There is a sticky at the top of this very subforum which discusses this. It's kind of long, so I suggest you start at the end and work back.
SnickySnacks Posted February 10, 2017

My main concern about migrating off of ReiserFS is that, traditionally, when people have had filesystem issues, the response on the forums has been that ReiserFS's recovery tools were better than, say, XFS's. It's anecdotal, but...

http://lime-technology.com/forum/index.php?topic=55845.msg533055#msg533055
http://lime-technology.com/forum/index.php?topic=53553.msg514049#msg514049
http://lime-technology.com/forum/index.php?topic=49774.msg477405#msg477405

Sure, it's been mentioned that as of 6.2 xfs_repair is a lot better, but many of us have a lot of inertia and hate to take the chance of changing something that's always worked.
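For what it's worth, the XFS side of that comparison is a one-command dry run. A sketch only: /dev/md1 is a hypothetical device name (on Unraid the array data disks appear as /dev/md1, /dev/md2, ..., and the filesystem must be unmounted, e.g. array started in maintenance mode, before checking):

```
# -n = no-modify mode: scan and report problems without writing to the disk.
xfs_repair -n /dev/md1
```

Running with -n first lets you see what a real repair would touch before committing to it.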
mrdally204 Posted February 12, 2017

Locked up again; this time shfs was pegged. I got a diagnostics zip this time. I really need some help stabilizing this mess.

unraid-diagnostics-20170212-0008.zip
trurl Posted February 12, 2017

Nothing in your diagnostics. Your disks aren't even very full.
mrdally204 Posted February 13, 2017

Another day, another server hang. This time it is smbd pegged again. Diagnostics attached. This really feels like 6.3 is not for me. Looking for direction from the almighty users on this forum to get me stable again. Really hating this experience right now.

unraid-diagnostics-20170213-1821.zip
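When a hang like this is in progress, a quick snapshot over SSH of exactly which process is pegged (and at what percentage) is more useful in a report than "top showed 100%". A minimal sketch using standard procps options:

```shell
# List the five busiest processes by CPU, highest first:
# pid = process ID, comm = command name, pcpu = CPU usage percentage.
ps -eo pid,comm,pcpu --sort=-pcpu | head -n 5
```

Capturing this output each time the hang occurs makes it easy to see whether the culprit is consistently smbd, shfs, or something else.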
mrdally204 Posted February 17, 2017

Bumping, anyone have any thoughts? I'm considering making the file system switch when I get the chance to buy another drive.
1812 Posted February 17, 2017

Upgrade to 6.3.2. I mean, yeah, it could get worse even though it probably won't get better. But there are merits to being fresh and clean...
mrdally204 Posted February 22, 2017 Author Share Posted February 22, 2017 (edited) While I continue to expierence issues with my server, I figured it would be a good time to post me VM xml. I created this way back in 6.0RC days and it has functioned as I would hope. Keep in mind I have no clue what I was doing, piecing things together from this site as I went along. Is all of this still accurate? I also noticed in my array a new folder called "system/libvitr" with libvirt.img file in it (1gb) that was created in September and last modified yesterday. Not sure what that is about. Anyway, I'm searching for my instability and might be way off track. Thanks for looking VM has 1 ssd drive and 1tb drive (ata-Hitachi_HDT721010SLA360_STF610MH3DUM6K) my go script # Start the Management Utility /usr/local/sbin/emhttp & #mount VM HDD mkdir /mnt/vm mount /dev/disk/by-id/ata-Corsair_Force_LS_SSD_14428168000101670551 /mnt/vm smbcontrol smbd reload-config My VM xml <domain type='kvm' id='1'> <name>Windows 8</name> <uuid>a73bc503-28bb-7805-4a67-ef262dab5818</uuid> <metadata> <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/> </metadata> <memory unit='KiB'>5242880</memory> <currentMemory unit='KiB'>5242880</currentMemory> <memoryBacking> <nosharepages/> </memoryBacking> <vcpu placement='static'>3</vcpu> <cputune> <vcpupin vcpu='0' cpuset='1'/> <vcpupin vcpu='1' cpuset='2'/> <vcpupin vcpu='2' cpuset='3'/> </cputune> <resource> <partition>/machine</partition> </resource> <os> <type arch='x86_64' machine='pc-i440fx-2.1'>hvm</type> </os> <features> <acpi/> <apic/> </features> <cpu> <topology sockets='1' cores='3' threads='1'/> </cpu> <clock offset='localtime'> <timer name='rtc' tickpolicy='catchup'/> <timer name='pit' tickpolicy='delay'/> <timer name='hpet' present='no'/> </clock> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>restart</on_crash> <devices> <emulator>/usr/local/sbin/qemu</emulator> <disk type='file' 
device='disk'> <driver name='qemu' type='qcow2' cache='writeback'/> <source file='/mnt/vm/Windows 8/Windows 8.qcow2'/> <backingStore/> <target dev='hdc' bus='virtio'/> <boot order='1'/> <alias name='virtio-disk2'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/> </disk> <disk type='block' device='disk'> <driver name='qemu' type='raw' cache='writeback'/> <source dev='/dev/disk/by-id/ata-Hitachi_HDT721010SLA360_STF610MH3DUM6K'/> <backingStore/> <target dev='hdd' bus='virtio'/> <alias name='virtio-disk3'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/> </disk> <controller type='usb' index='0' model='ich9-ehci1'> <alias name='usb'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/> </controller> <controller type='usb' index='0' model='ich9-uhci1'> <alias name='usb'/> <master startport='0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/> </controller> <controller type='usb' index='0' model='ich9-uhci2'> <alias name='usb'/> <master startport='2'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/> </controller> <controller type='usb' index='0' model='ich9-uhci3'> <alias name='usb'/> <master startport='4'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/> </controller> <controller type='pci' index='0' model='pci-root'> <alias name='pci.0'/> </controller> <controller type='ide' index='0'> <alias name='ide'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/> </controller> <controller type='virtio-serial' index='0'> <alias name='virtio-serial0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> </controller> <interface type='bridge'> <mac address='52:54:00:b2:c4:af'/> <source bridge='br0'/> <target dev='vnet0'/> <model type='virtio'/> <alias name='net0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> </interface> 
<serial type='pty'> <source path='/dev/pts/0'/> <target port='0'/> <alias name='serial0'/> </serial> <console type='pty' tty='/dev/pts/0'> <source path='/dev/pts/0'/> <target type='serial' port='0'/> <alias name='serial0'/> </console> <channel type='unix'> <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-1-Windows 8/org.qemu.guest_agent.0'/> <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/> <alias name='channel0'/> <address type='virtio-serial' controller='0' bus='0' port='1'/> </channel> <input type='mouse' bus='ps2'> <alias name='input0'/> </input> <input type='keyboard' bus='ps2'> <alias name='input1'/> </input> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> </source> <alias name='hostdev0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> </hostdev> <memballoon model='virtio'> <alias name='balloon0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/> </memballoon> </devices> <seclabel type='none' model='none'/> <seclabel type='dynamic' model='dac' relabel='yes'> <label>+0:+100</label> <imagelabel>+0:+100</imagelabel> </seclabel> </domain> Edited February 22, 2017 by mrdally204 Quote Link to comment
mrdally204 Posted May 30, 2017

BUMP. My server is nearly unusable at this point. If I interact with my shares there is a great chance that my server is going to hang. I need support, sooner rather than later. Being a paid user, what are my support options? The forums, while great, are not solving my underlying issue.
bonienl Posted May 30, 2017

3 minutes ago, mrdally204 said:
> BUMP. My server is nearly unusable at this point. If I interact with my shares there is a great chance that my server is going to hang. I need support, sooner rather than later. Being a paid user, what are my support options? The forums, while great, are not solving my underlying issue.

Limetech offers support services, see here.
JorgeB Posted May 30, 2017

Did you convert to XFS as suggested above? That usually fixes this.
mrdally204 Posted May 30, 2017

Asking for more $$ to get it to work the way it should does not sound like a good solution. I'll try here again in the hope that someone can help me solve this before I have to move on to another solution.

Attached is the log file from my currently hanging server. Any thoughts? smbd is hanging at 100% right now (can I just restart smbd?), making my web UI and shares unreachable. I'm able to SSH in, but the poweroff command never works, so I must reset the system manually, which makes me cringe every other day I have to do it.

https://dl.orangedox.com/TDVufYbfyLnvcfF8yW
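On the "can I just restart smbd" question: Unraid is Slackware-based, so Samba ships with an rc script. A hedged sketch, not guaranteed to help (if smbd is stuck in uninterruptible I/O wait the restart will hang too), but worth trying before a hard reset:

```
# Restart the Samba daemons without rebooting the server.
/etc/rc.d/rc.samba restart
```

If the restart hangs as well, that points at the underlying filesystem/shfs layer rather than Samba itself.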
saarg Posted May 30, 2017

6 minutes ago, mrdally204 said:
> Asking for more $$ to get it to work the way it should does not sound like a good solution. I'll try here again in the hope that someone can help me solve this before I have to move on to another solution. Attached is the log file from my currently hanging server. Any thoughts? smbd is hanging at 100% right now (can I just restart smbd?), making my web UI and shares unreachable. I'm able to SSH in, but the poweroff command never works, so I must reset the system manually, which makes me cringe every other day I have to do it.
> https://dl.orangedox.com/TDVufYbfyLnvcfF8yW

Did you do what @johnnie.black suggested? Have you tried sending an email to limetech? [email protected]
mrdally204 Posted May 30, 2017

Converting all disks to XFS was not done because of the comment that followed...

> Your disks aren't even very full.

If this will solve the problem I would go buy another HDD today and get moving in that direction...
JorgeB Posted May 30, 2017

21 minutes ago, mrdally204 said:
> If this will solve the problem I would go buy another HDD today and get moving in that direction...

We can't say it will fix it for sure, but it's the #1 cause of this problem, and IMO you should convert anyway, as reiserfs is badly maintained and has terrible performance in some situations, so there's really nothing to lose.