rwickra

Everything posted by rwickra

  1. I have an unRaid 6.11.5 server running on an i5-12600K with 64GB of DDR3 RAM, and the system is periodically giving me a kernel panic and crashing. This happens whenever there is a CPU/disk-intensive task. I'm at my wits' end trying to figure out whether a piece of software is causing this. I only have Plex installed and this still happens; I tried removing Plex and it still happens. There are no VMs. I've attached the output from the log -- any idea where the issue could be? Bad memory vs. a bad CPU or motherboard?
     Feb 5 18:27:08 Bovitiya kernel: <TASK>
     Feb 5 18:27:08 Bovitiya kernel: __schedule+0x596/0x5f6
     Feb 5 18:27:08 Bovitiya kernel: schedule+0x8e/0xc3
     Feb 5 18:27:08 Bovitiya kernel: __down_write_common+0x45a/0x4e9
     Feb 5 18:27:08 Bovitiya kernel: ? writeback_single_inode+0x130/0x13e
     Feb 5 18:27:08 Bovitiya kernel: fuse_flush+0xa6/0x199
     Feb 5 18:27:08 Bovitiya kernel: filp_close+0x39/0x6d
     Feb 5 18:27:08 Bovitiya kernel: put_files_struct+0x63/0xa4
     Feb 5 18:27:08 Bovitiya kernel: do_exit+0x37b/0x8e5
     Feb 5 18:27:08 Bovitiya kernel: ? ksys_pwrite64+0x64/0x84
     Feb 5 18:27:08 Bovitiya kernel: make_task_dead+0xba/0xba
     Feb 5 18:27:08 Bovitiya kernel: rewind_stack_and_make_dead+0x17/0x17
     Feb 5 18:27:08 Bovitiya kernel: RIP: 0033:0x1512793596c3
     Feb 5 18:27:08 Bovitiya kernel: RSP: 002b:00007ffe487a2c48 EFLAGS: 00000202 ORIG_RAX: 0000000000000012
     Feb 5 18:27:08 Bovitiya kernel: RAX: ffffffffffffffda RBX: 0000000000100000 RCX: 00001512793596c3
     Feb 5 18:27:08 Bovitiya kernel: RDX: 0000000000100000 RSI: 000056348eda6100 RDI: 0000000000000027
     Feb 5 18:27:08 Bovitiya kernel: RBP: 00000004b6600000 R08: 00000004b6600000 R09: 0000000000000000
     Feb 5 18:27:08 Bovitiya kernel: R10: 00000004b6600000 R11: 0000000000000202 R12: 0000000000100000
     Feb 5 18:27:08 Bovitiya kernel: R13: 000056348eda6100 R14: 0000000000000027 R15: 00007ffe487a2ce0
     Feb 5 18:27:08 Bovitiya kernel: </TASK>
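     A rough way to separate "bad memory/CPU" from a software problem (a sketch, not a definitive diagnosis -- these are standard Linux commands, and the grep may simply come back empty):
        # look for machine-check / hardware-error indicators in the kernel log
        dmesg | grep -iE 'mce|machine check|hardware error|corrected error'
        # then, if the box boots in legacy mode, reboot and run Memtest86+ from the unRaid boot menu for several passes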
  2. @JorgeB Thank you for the reply. The only issue is that I am not using qBittorrent, or any torrent client for that matter. On the server where this happens there is only one docker container (tdarr), nothing else. No VMs. The rest is just three hard drives (including parity), since it's meant to be a backup server. I'm looking through the log and I'm not finding any reference to libtorrent anywhere. The crash was at the end of the diagnostics. I wanted to know if you (or anyone here) had any ideas about what I can do. Could this be a hardware issue (memory or CPU)? The reason I ask is that I built two servers at the same time -- with very similar hardware. The other unRaid server has had absolutely no crashes, and it's running 4-5 docker containers; the one that is crashing only has one.
  3. Hi guys, I put together a second unRaid server with an i5-12600K CPU, a Gigabyte Z690 Gaming X motherboard, and a few spinner drives. I was using it as a node for my tdarr transcoding, and I've noticed that this unRaid system periodically just hangs and becomes unresponsive, needing a hard reboot. I've disabled XMP and tried every other trick I can think of, but it still happens. Finally, I set up a remote syslog server on my primary unRaid server to capture the output, and I've attached the diagnostics here. I'm seeing a lot of the following:
     Dec 15 23:35:10 Bovitiya kernel: rcu: INFO: rcu_preempt detected expedited stalls on CPUs/tasks: { 0-... } 477721 jiffies s: 309 root: 0x1/.
     Dec 15 23:37:12 Bovitiya kernel: rcu: INFO: rcu_preempt self-detected stall on CPU
     Dec 15 23:37:12 Bovitiya kernel: rcu:     0-....: (1 GPs behind) idle=d6d/1/0x4000000000000000 softirq=92164/92165 fqs=146887
     I can't tell if this is something to do with tdarr or if there is something wrong with my new CPU or motherboard. The reason I ask is that I'm still within the return period if there is something wrong. I'm a bit of a newbie when it comes to understanding these errors, so I'm hopeful one of you can help me understand what may be happening. The full syslog captured on the remote syslog server for this system is attached. syslog-192.168.1.50.log
  4. I've had an unRAID server running on a SuperMicro X10SL7-F motherboard with an Intel Xeon E3-1230v3 processor for close to 10 years now. It's worked fine, but the server has begun to show its age, slowing down more than I would like. I'm thinking of rebuilding the server and minimizing the number of drives (perhaps an 18 TB parity drive and 10-11 other 18 TB data drives), because I would really like to reduce the power consumption. I'm looking for a suggestion of a good motherboard/CPU combo that could accomplish that. I don't use VMs, but I do plan to run a few docker containers for various apps. The main use will be as a NAS with great expandability. I am planning on installing an NVIDIA GPU for hardware transcoding of 4K HDR via Plex. The focus is going to be maximizing space and speed while updating to newer hardware with reduced power consumption. My current setup: Intel Xeon E3-1230v3, LGA 1150 socket, 3.3 GHz processor; SuperMicro X10SL7-F motherboard (6x SATA, 8x SAS ports); SuperMicro AOC-SAS2LP-MV8 8-port SAS/SATA controller. Any recommendations will be appreciated.
  5. @Frank1940 I don't think I can, because I'm trying to get this to work so that it connects with the backplane of a Norco 4220 server chassis. My guess is that I have to go the Kapton tape route.
  6. OMG, you are amazing. Thank you. I've already ordered the Kapton tape. Weirdly, I searched various permutations of this question on the forum assuming that someone would have run into this before, but never came across the post you've referenced. Thank you again!
  7. Hey guys, I'm having a really weird problem. I recently purchased 2x 10 TB WD hard drives. They were on sale as 10 TB EasyStore external drives, which I shucked and used in my server. I have a 10 TB white-label drive as my parity drive and it was recognized immediately without an issue, but when I try to add these as data drives, the WebUI doesn't show them as existing -- not even as unassigned drives. I rebooted the system several times and upgraded the OS to 6.6.7, but still no bueno. Any help someone can give to point me in the right direction would be much appreciated. For what it's worth, I did power up these drives externally to ensure that they both work -- and they do. They were preformatted as NTFS. I also put them in different drive slots (I have a Norco 4220 20-bay chassis) and still the same: they don't get recognized. ***** SOLUTION ***** Just wanted to let everyone know that this is resolved. The solution, as mentioned below, is to use Kapton tape to cover up pin #3 of the power connector. I purchased Kapton tape from Amazon, made this change, and voila, everything works. A YouTube tutorial on how to do this is linked below (mods, please delete if not allowed).
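     For anyone else chasing this, a quick way to confirm the drive is actually being enumerated after taping the pin (standard commands, nothing unRaid-specific):
        dmesg | grep -iE 'ata[0-9]|sd[a-z]' | tail -n 30   # watch for the new disk being attached
        lsblk -o NAME,SIZE,MODEL,SERIAL                    # the 10 TB drives should now appear here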
  8. I'm getting errors stating that "call traces have been found on your server". I'm not sure what that means, but I have noticed that the server has gotten slower. I'm wondering whether this is an outside hack attempt or whether something is going on internally. The only port open to the outside is for Plex, and I use SSH access via a specific port from time to time. I also use a VPN for remote access. Thanks so much. araliya-diagnostics-20180120-1547.zip
  9. Thanks itimpi, but if the docker was misconfigured, it shouldn't affect my ability to access the array, right? Unless the docker is locking up the array. By the way, I used the common instructions here to configure the docker. I placed the .img file on the cache drive, which in hindsight I wonder was a mistake. So where do you guys recommend that the docker.img file be placed? I've placed that file on the cache, and I think I had the application data written into one of the protected shares in the array (/mnt/user/appdata). Is there a source here where I can get info on proper docker configuration?
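     In case it helps someone later, a quick way to sanity-check where the image actually lives and how full it is (the paths below match my setup and the common cache-drive layout -- adjust to wherever Settings > Docker points on your box):
        ls -lh /mnt/cache/docker.img        # the vDisk itself, if it was placed on the cache drive
        df -h /var/lib/docker               # usage of the loop-mounted image while the Docker service is running
        du -sh /mnt/user/appdata/*          # per-container application data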
  10. I'm beginning to have problems with my server. It's been a faithful ally and has served me well, but I'm starting to notice SMART errors on Disk 1 and the cache drive. I didn't think I needed to replace them right away, until suddenly something went wrong. All of my dockers (which are located on my cache drive) acted up and shut down on me (when I try to restart them I get the "docker.img full" error). I could not get them to start from the console. Then after a few days, I could not even access my shares. When I SSH into the box and try to browse the shares, I get an input/output error. So I stopped the array, took the cache drive out of the system, and restarted. So far things look OK. When I look at Fix Common Problems, I have an error stating that /var/log is getting full -- yet the array was restarted just 3 days ago. I'm now trying to figure out whether I need to replace my data drives, whether the problem is entirely in my cache drive, or whether it is in the USB drive running the OS (which is also a few years old). I've attached my diagnostics here. Can someone please take a look and let me know where they think my problem is? I would be ever so grateful. araliya-diagnostics-20171118-0829.zip
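     For anyone else who hits the /var/log warning: /var/log on unRaid is a small RAM-backed filesystem, so a single chatty error can fill it within days. A few standard commands show what is actually filling it:
        df -h /var/log                                     # how full the log filesystem is
        du -sh /var/log/* 2>/dev/null | sort -h | tail     # which files are the biggest
        tail -n 50 /var/log/syslog                         # see what is being written over and over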
  11. I've got CouchPotato (CP)/SickRage dockers (Needo) installed on unRaid v6 and a separate Ubuntu 14.04 VM running as a server. I wanted to run an incron-based script that monitors a folder (blackhole) for any .torrent files added by CouchPotato, so that as soon as a .torrent file is downloaded by CP, the incron daemon triggers the script, which essentially uses rsync to upload the .torrent file to my seedbox, which can then download the actual media file. However, I've hit a major problem. When I did multiple test runs with CP running on an Ubuntu VM and executing the script, the whole thing worked very well without a hitch: CP downloaded the .torrent files to the blackhole folder, incron quickly picked up the addition of the file, and that triggered the bash script I wrote, which uploads the file to my seedbox. The problem is that when I run CP in a docker, with the script/incron on my Ubuntu VM, this doesn't work. After multiple tests, I've narrowed down the problem: incron tables are user-specific. If I create an incrontab for my user (rwickra), then when CouchPotato, running as a docker, adds a file, the Ubuntu VM sees it as created by user 99 (nobody), and that does not trigger the incron daemon. I've tested this exhaustively. The script runs beautifully on its own; the problem is that the incron daemon does not trigger it, because, from what I can gather, the .torrent file added by CP is seen as a file added by another user and incron is not initiated. Am I missing something here, or is there a better way to do this?
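     For reference, this is roughly the setup I'm describing, with placeholder paths and hostnames. One workaround I'm considering is a system-wide incron table under /etc/incron.d (which runs as root, so the owner of the new file shouldn't matter) instead of my per-user incrontab:
        # /etc/incron.d/blackhole-upload -- format: <watched dir> <events> <command>
        /mnt/blackhole  IN_CLOSE_WRITE,IN_MOVED_TO  /usr/local/bin/upload-torrent.sh $@/$#

        #!/bin/bash
        # /usr/local/bin/upload-torrent.sh -- seedbox host and remote path are placeholders
        FILE="$1"
        case "$FILE" in *.torrent) ;; *) exit 0 ;; esac    # ignore anything that isn't a .torrent
        rsync -av --remove-source-files "$FILE" user@seedbox.example.com:watch/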
  12. Thanks so much! I am embarrassed that it was so simple. I had turned off DHCP but never entered the gateway and dns-nameservers into /etc/network/interfaces. I am such an idiot. I did what you suggested and immediately had a static IP address with no problems. Thank you again!
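     For anyone who finds this later, the working configuration ended up looking roughly like this (the addresses are examples -- substitute your own network; dns-nameservers assumes the resolvconf package that Ubuntu Server ships with):
        # /etc/network/interfaces
        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
            address 192.168.1.100
            netmask 255.255.255.0
            gateway 192.168.1.1
            dns-nameservers 192.168.1.1 8.8.8.8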
  13. Hey Luca2, PLEASE PLEASE PLEASE let me know if you figure this out. I've been wracking my brains over this all day. I have an Ubuntu Server on KVM on an unRaid host, with networking set to bridge mode. On the Ubuntu VM, I don't see a "br0" device; all I see are loopback (lo) and eth0. By default, in my /etc/network/interfaces, eth0 is set to DHCP. If I change this to static, I get a static IP address that I can SSH into, but everything else is broken. DNS grinds to a halt -- I can't even ping www.google.com. When I try to check my Apache2 web interface, I get a Gateway Timeout error. Sometimes I can't even SSH into it. I would love to know if someone figured out how to do this.
  14. No, I didn't even touch the temperature settings. The only change I made immediately after installing the new unRaid was setting the NTP servers to the North American ones.
  15. Hi Rob, I figured it out, thank you. I went to Settings --> Display Settings --> Temperature unit and changed the setting from [CELSIUS] to [FAHRENHEIT]. So far I have not gotten any further error messages, so I think that may have done the trick, although it's a very, very simple fix... Hopefully.
  16. Here is the error log (/var/log/syslog) from when the overheating notices are sent. Is this a problem with BTRFS?
     /etc/rc.d/rc.docker start
     Apr 17 23:20:12 Araliya php: Creating new image file for Docker: /mnt/disk/vm/dockers/docker.img size: 10G
     Apr 17 23:20:12 Araliya php: Btrfs v3.18.2#012See http://btrfs.wiki.kernel.org for more information.#012#012Turning ON incompat feature 'extref': increased hardlink limit per file to 65536#012Turning ON incompat feature 'skinny-metadata': reduced-size metadata extent refs#012fs created label (null) on /mnt/disk/vm/dockers/docker.img#012#011nodesize 16384 leafsize 16384 sectorsize 4096 size 10.00GiB
     Apr 17 23:20:12 Araliya kernel: BTRFS: device fsid 7319d692-fa11-4138-af0d-4b0ab0208dbd devid 1 transid 4 /dev/loop1
     Apr 17 23:20:12 Araliya kernel: BTRFS info (device loop1): disk space caching is enabled
     Apr 17 23:20:12 Araliya kernel: BTRFS: has skinny extents
     Apr 17 23:20:12 Araliya kernel: BTRFS: flagging fs with big metadata feature
     Apr 17 23:20:12 Araliya kernel: BTRFS: creating UUID tree
     Apr 17 23:20:12 Araliya php: Resize '/var/lib/docker' of 'max'
     Apr 17 23:20:12 Araliya php: starting docker ...
     Apr 17 23:20:12 Araliya kernel: BTRFS: new size for /dev/loop1 is 10737418240
     Apr 17 23:22:01 Araliya sSMTP[4561]: Creating SSL connection to host
     Apr 17 23:22:01 Araliya sSMTP[4561]: SSL connection using ECDHE-RSA-AES128-GCM-SHA256
     Apr 17 23:22:03 Araliya sSMTP[4561]: Sent mail for [email protected] (221 2.0.0 closing connection 104sm7454667qgj.43 - gsmtp) uid=0 username=root outbytes=749
     Apr 17 23:22:03 Araliya sSMTP[4579]: Creating SSL connection to host
     Apr 17 23:22:04 Araliya sSMTP[4579]: SSL connection using ECDHE-RSA-AES128-GCM-SHA256
     Apr 17 23:22:05 Araliya sSMTP[4579]: Sent mail for [email protected] (221 2.0.0 closing connection 187sm9531193qhr.24 - gsmtp) uid=0 username=root outbytes=725
     Apr 17 23:22:05 Araliya sSMTP[4594]: Creating SSL connection to host
     Apr 17 23:22:05 Araliya sSMTP[4594]: SSL connection using ECDHE-RSA-AES128-GCM-SHA256
     Apr 17 23:22:07 Araliya sSMTP[4594]: Sent mail for [email protected] (221 2.0.0 closing connection 187sm9531245qhr.24 - gsmtp) uid=0 username=root outbytes=725
     Apr 17 23:22:07 Araliya sSMTP[4609]: Creating SSL connection to host
     Apr 17 23:22:07 Araliya sSMTP[4609]: SSL connection using ECDHE-RSA-AES128-GCM-SHA256
     Apr 17 23:22:09 Araliya sSMTP[4609]: Sent mail for [email protected] (221 2.0.0 closing connection p73sm5178710qha.20 - gsmtp) uid=0 username=root outbytes=734
     Apr 17 23:22:09 Araliya sSMTP[4625]: Creating SSL connection to host
     Apr 17 23:22:09 Araliya sSMTP[4625]: SSL connection using ECDHE-RSA-AES128-GCM-SHA256
     Apr 17 23:22:11 Araliya sSMTP[4625]: Sent mail for [email protected] (221 2.0.0 closing connection y18sm9570486qgd.24 - gsmtp) uid=0 username=root outbytes=725
     Apr 17 23:22:11 Araliya sSMTP[4656]: Creating SSL connection to host
     Apr 17 23:22:11 Araliya sSMTP[4656]: SSL connection using ECDHE-RSA-AES128-GCM-SHA256
     Apr 17 23:22:13 Araliya sSMTP[4656]: Sent mail for [email protected] (221 2.0.0 closing connection q74sm9541116qha.4 - gsmtp) uid=0 username=root outbytes=725
     Apr 17 23:22:13 Araliya sSMTP[4674]: Creating SSL connection to host
     Apr 17 23:22:13 Araliya sSMTP[4674]: SSL connection using ECDHE-RSA-AES128-GCM-SHA256
     Apr 17 23:22:15 Araliya sSMTP[4674]: Sent mail for [email protected] (221 2.0.0 closing connection k71sm554322qhc.42 - gsmtp) uid=0 username=root outbytes=734
     Apr 17 23:22:15 Araliya sSMTP[4699]: Creating SSL connection to host
     Apr 17 23:22:15 Araliya sSMTP[4699]: SSL connection using ECDHE-RSA-AES128-GCM-SHA256
     Apr 17 23:22:17 Araliya sSMTP[4699]: Sent mail for [email protected] (221 2.0.0 closing connection 72sm9527804qhx.32 - gsmtp) uid=0 username=root outbytes=734
     Apr 17 23:22:17 Araliya sSMTP[4712]: Creating SSL connection to host
     Apr 17 23:22:17 Araliya sSMTP[4712]: SSL connection using ECDHE-RSA-AES128-GCM-SHA256
     Apr 17 23:22:19 Araliya sSMTP[4712]: Sent mail for [email protected] (221 2.0.0 closing connection c4sm2025276qge.32 - gsmtp) uid=0 username=root outbytes=734
  17. I just did a fresh install of unRaid v6 on my server, and it obviously started a parity sync, which I manually stopped. However, in the midst of the process, I started getting "DISK OVERHEATING" notifications (Temp > 77F) for every one of the disks in the array. However, on the console (emhttp) I am not seeing any such overheating. I took a look at my server, and all I had done was install new RAM and a new SSD to use as a SNAP-mounted disk for my VMs. Has anyone seen this behavior, and do you know what is causing it?
  18. I don't know if you saw this thread: http://lime-technology.com/forum/index.php?topic=32590.0 I had a similar issue, but had to abandon that approach since it requires installing LFTP and FileBot on unRaid, which was a PITA for me. I ended up getting it to work by running Ubuntu Server on a small Raspberry Pi and having it mount the unRaid shares over CIFS via /etc/fstab. It works reasonably well. However, the good news is that if you are running unRaid v6.0+, you can simply use something like dmacias' KVM VM manager plugin to install an Ubuntu Server VM and then use 9p sharing to make your unRaid host shares accessible to the VM guest. I've tried it and it works great, but I haven't actually had the time to fully implement it yet. Let me know if you need some help.
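     The /etc/fstab entries I mean are along these lines (share name, IP, mount points, credentials file, and the 9p mount_tag are placeholders -- they have to match your own setup):
        # CIFS mount of an unRaid share from a separate box (e.g. the Raspberry Pi)
        //192.168.1.10/Media  /mnt/media  cifs  credentials=/root/.smbcred,iocharset=utf8,uid=1000,gid=1000  0  0
        # 9p mount inside a KVM guest, assuming the host share is exported with mount_tag "media"
        media  /mnt/media  9p  trans=virtio,version=9p2000.L,rw  0  0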
  19. Never mind, I figured it out myself. I just had to pick the VirtIO driver CD after a rescan, then browse to the folder for the version of Windows 8 I was installing, and it recognized the drivers.
  20. So I've used the wonderful KVM VM manager plugin and managed to successfully install an Ubuntu VM. I then stepped up to a Windows 8 ISO and have hit a real snag. I can get through several steps of the VM install, but when I reach the screen where I choose an install location, I get a message that I have no drives! At first I thought this might be a problem with not passing the VirtIO drivers, but I have the correct/latest drivers attached, as you can see from my XML. Can anyone offer insight?
     <domain type='kvm'>
       <name>win8</name>
       <uuid>1604f46b-81c9-2451-90ed-e0b015f67675</uuid>
       <memory unit='KiB'>4194304</memory>
       <currentMemory unit='KiB'>4194304</currentMemory>
       <vcpu placement='static'>2</vcpu>
       <resource>
         <partition>/machine</partition>
       </resource>
       <os>
         <type arch='x86_64' machine='pc-q35-2.1'>hvm</type>
         <loader type='rom'>/usr/share/qemu/bios-256k.bin</loader>
         <boot dev='cdrom'/>
       </os>
       <features>
         <acpi/>
         <apic/>
         <pae/>
         <hyperv>
           <relaxed state='on'/>
           <vapic state='on'/>
           <spinlocks state='on' retries='8191'/>
         </hyperv>
       </features>
       <clock offset='localtime'>
         <timer name='hypervclock' present='yes'/>
         <timer name='hpet' present='no'/>
       </clock>
       <on_poweroff>destroy</on_poweroff>
       <on_reboot>restart</on_reboot>
       <on_crash>restart</on_crash>
       <pm>
         <suspend-to-mem enabled='no'/>
         <suspend-to-disk enabled='no'/>
       </pm>
       <devices>
         <emulator>/usr/bin/qemu-system-x86_64</emulator>
         <disk type='file' device='disk'>
           <driver name='qemu' type='qcow2'/>
           <source file='/mnt/cache/win8/win8.qcow2'/>
           <target dev='hdb' bus='virtio'/>
           <address type='pci' domain='0x0000' bus='0x02' slot='0x03' function='0x0'/>
         </disk>
         <disk type='file' device='cdrom'>
           <driver name='qemu' type='raw'/>
           <source file='/mnt/cache/vm/windows8/windows8.iso'/>
           <target dev='sdc' bus='sata'/>
           <readonly/>
           <address type='drive' controller='0' bus='0' target='0' unit='2'/>
         </disk>
         <disk type='file' device='cdrom'>
           <driver name='qemu' type='raw'/>
           <source file='/mnt/cache/vm/windows8/virtio.iso'/>
           <target dev='sdd' bus='sata'/>
           <readonly/>
           <address type='drive' controller='0' bus='0' target='0' unit='3'/>
         </disk>
         <controller type='usb' index='0' model='ich9-ehci1'>
           <address type='pci' domain='0x0000' bus='0x02' slot='0x02' function='0x7'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci1'>
           <master startport='0'/>
           <address type='pci' domain='0x0000' bus='0x02' slot='0x02' function='0x0' multifunction='on'/>
         </controller>
         <controller type='sata' index='0'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
         </controller>
         <controller type='pci' index='0' model='pcie-root'/>
         <controller type='pci' index='1' model='dmi-to-pci-bridge'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x1e' function='0x0'/>
         </controller>
         <controller type='pci' index='2' model='pci-bridge'>
           <address type='pci' domain='0x0000' bus='0x01' slot='0x01' function='0x0'/>
         </controller>
         <interface type='bridge'>
           <mac address='52:54:00:26:8d:21'/>
           <source bridge='br0'/>
           <model type='virtio'/>
           <address type='pci' domain='0x0000' bus='0x02' slot='0x01' function='0x0'/>
         </interface>
         <input type='mouse' bus='ps2'/>
         <input type='keyboard' bus='ps2'/>
         <graphics type='vnc' port='-1' autoport='yes' websocket='5701' listen='0.0.0.0'>
           <listen type='address' address='0.0.0.0'/>
         </graphics>
         <video>
           <model type='vmvga' vram='16384' heads='1'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
         </video>
         <memballoon model='virtio'>
           <address type='pci' domain='0x0000' bus='0x02' slot='0x04' function='0x0'/>
         </memballoon>
       </devices>
     </domain>
  21. I realize this is a newbie question. A while ago, I used unRaid's SNAP plugin to set up a drive outside the array. I've got a similar requirement now, and I'm wondering whether there might be an easier way to do this. Basically, I've got unRaid v6b14 running several Xen VMs. I'm trying to run all of my VMs from an SSD, but I don't want the SSD to be used as a cache drive. How would I go about doing this? Also, how do I get the SSD formatted and mounted under the /mnt directory so that unRaid can use it to store VMs? Thank you.
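     From what I gather, the manual version would look roughly like this (the device name is a placeholder -- identify the SSD first with fdisk -l, and note that mkfs wipes it):
        mkfs.xfs /dev/sdX1            # one-time format of the SSD partition
        mkdir -p /mnt/ssd
        mount /dev/sdX1 /mnt/ssd      # now usable as a VM store outside the array
        # to make this survive a reboot, append the mkdir/mount lines to /boot/config/go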
  22. Thanks so much, that clarifies it.
  23. I've got unRaid 6b14 running on a test box with multiple dockers set up and running (as far as I can tell) relatively well. The dockers are installed on my cache drive, which is formatted as XFS. I'm reading in multiple posts that to implement Docker, the cache drive (or whichever drive houses the dockers) should use the BTRFS filesystem. Can someone tell me whether this was only true in an old version of unRaid? Or am I going to get a nasty surprise somewhere down the road with my XFS-formatted cache drive and dockers installed on it?
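     From what I can tell (and someone please correct me if I have this wrong), the docker.img file is itself a BTRFS image mounted through a loop device, regardless of which filesystem the drive holding it uses -- which is why I'm unsure the underlying cache filesystem matters at all. With the Docker service running, this shows it:
        mount | grep /var/lib/docker
        # expect something like: /dev/loop0 on /var/lib/docker type btrfs (rw,...)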
  24. I currently have a sweet 17 TB unRAID setup running 5.0.5. I'm not a computer guy by training, but I have picked up nuggets of helpful information along the way. Right now, while unRAID itself is rock solid, I am using a lot of plugins -- Sab, Sickbeard, Couchpotato, APCUPSd, CrashPlan, Plex -- and these plugins sometimes 'break' the stability of unRAID and cause the server to crash. Not always, but sometimes. I've thought for a long time about testing the stability by moving these plugins to a different environment, so I got myself a Dell PowerEdge 2950 off Craigslist, ran Ubuntu 14.04 Server on it, disabled all these plugins on unRAID, and ran them on the Ubuntu server with the unRAID shares mounted at boot via /etc/fstab. The thing works beautifully. Unfortunately, when I got this month's electricity bill, I realized that while the Dell server is elegant (if loud), it is an extremely power-hungry machine. I checked consumption with a Kill A Watt and was seeing 350 W at idle on the PowerEdge alone! So I've been considering one of two options: 1. Use a Raspberry Pi to do exactly what I've been doing with the PowerEdge (I probably can't run Plex on it, but I should be able to run the rest of the web services) with minimal power consumption. 2. Virtualize unRAID -- I should say that I am COMPLETELY new to virtualization. Beyond running different VMs in VirtualBox, I have no experience with setting up XenServer, KVM, or ESXi, and I am unable to find a comprehensive tutorial on the forum explaining how to set up unRAID this way (I wish someone would write one out from beginning to end); obviously, any such setup would need pass-through capability. Can someone here who is much more experienced suggest whether this makes sense, or whether there are other solutions I may not have thought through? Really appreciate any input.