unRAID Server Release 6.0-beta3-x86_64 Available


limetech


I'm having trouble with VMs getting hung after a while. I'm running some video encoding, so that might have something to do with it. For example, my Windows VM is wedged right now:

 

# xl list
Name                                        ID   Mem VCPUs	State	Time(s)
Domain-0                                     0  2048     8     r-----  457471.1
ubuntu.13-04.xfce.x86-64.20130424           52   512     1     -b----      80.1
windows7                                    53  4093     4     ------   59582.8

 

The state looks... odd. I have no idea whether this is an unRAID 6.0-beta3 problem or some problem with my VMs. Can someone point me in the right direction for debugging this? The dom0 /var/log/xen/*.log files don't seem to show much, and neither does /var/log/syslog.

 

Do I need to be careful about oversubscribing the CPUs or memory? The Linux VM has:

 

vcpus = 1
memory = 512

 

and the Windows one has:

 

vcpus = 4
memory = 4096

 

I have 16GB of system memory and a Xeon E3-1245 V2.
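For anyone in the same boat, a few dom0 commands that might help narrow down whether the guest or the toolstack is wedged, and whether memory is actually oversubscribed (a rough sketch; "windows7" is just the domain name from the listing above):

# How much memory Xen still has free for guests (values in MB)
xl info | grep -E "total_memory|free_memory"
# Per-vCPU state of the stuck guest, plus live per-domain CPU usage
xl vcpu-list windows7
xentop
# Hypervisor log - sometimes shows faults that never reach the dom0 syslog
xl dmesg | tail -n 50
# QEMU log for the HVM guest (exact file name may differ on this build)
tail -n 50 /var/log/xen/qemu-dm-windows7.log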


Yet another crash... This time with no cache_dir and while using swap. It ran stable for about 25 hours.

 

20140206_192335.jpg

 

Here is my syslinux config:

label Xen/unRAID OS
  menu default
  kernel /syslinux/mboot.c32
  append /xen xsave=1 cpufreq=xen:performance dom0_max_vcpus=9 dom0_vcpus_pin iommu=1 --- /bzimage console=tty0 xen-pciback.hide=(00:09.0) --- /bzroot
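As a side note, a quick way to double-check from dom0 that the xen-pciback.hide entry actually took effect (00:09.0 being the device from the append line above):

# Should list 0000:00:09.0 if pciback grabbed the device at boot
xl pci-assignable-list
# Shows which kernel driver is currently bound to it
lspci -k -s 00:09.0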

 

Logs: Attached to post

 

Configs: Attached to post

 

Running:

unRAID - Plex, APC UPS daemon, swap file (4 GB)

Windows - 2 vCPUs and 1792 MB - uTorrent, SB, CP, and VPN

Arch - 1 vCPU and 512 MB - Mumble server

 

Server specs:

  -CPU:  AMD Opteron 2419 EE six-core 1.8GHz (x2) = 12 cores

  -MOBO: H8DME-2

  -RAM: 8GB

 

PS: I will continue to investigate but I really have no clue... I'm now running just unRAID and the Arch VM, no Windows VM. Should I run Linux prime95 on it to try to reproduce a load and see if it only happens during heavy transcoding? Google searches basically say it can be anything...

 

---Cry  :'( :'( :'( :'( :'( :'(

syslog.zip

syslog-20140205-053255.zip

arch.cfg

windows.cfg

PS: I will continue to investigate but I really have no clue... I'm now running just unRAID and the Arch VM, no Windows VM. Should I run Linux prime95 on it to try to reproduce a load and see if it only happens during heavy transcoding? Google searches basically say it can be anything...

 

---Cry  :'( :'( :'( :'( :'( :'(

 

I'd try to stress components to see if it happens - but first, run memtest.
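If memtest comes back clean, something like this run inside the Arch VM might approximate a transcoding-style load (a rough sketch; it assumes the stress package is installed there, and the worker counts should match the VM's single vCPU):

# One CPU worker plus two memory workers for an hour
stress --cpu 1 --vm 2 --vm-bytes 256M --timeout 3600s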


Yet another crash... This time with no cache_dir and while using swap. It ran stable for about 25 hours.

...

PS: I will continue to investigate but I really have no clue... I'm now running just unRAID and the Arch VM, no Windows VM. Should I run Linux prime95 on it to try to reproduce a load and see if it only happens during heavy transcoding? Google searches basically say it can be anything...

 

---Cry  :'( :'( :'( :'( :'( :'(

 

 

If it were me, I would probably put the Plex Media Server in its own VM, perhaps the Arch VM.

 

 


Feb  5 00:08:50 Tower kernel: mdcmd (22): spindown 1
Feb  5 00:08:51 Tower kernel: mdcmd (23): spindown 3
Feb  5 00:29:47 Tower kernel: Plex Media Serv invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
Feb  5 00:29:47 Tower kernel: Plex Media Serv cpuset=/ mems_allowed=0
Feb  5 00:29:47 Tower kernel: CPU: 7 PID: 2589 Comm: Plex Media Serv Not tainted 3.10.24p-unRAID #13
Feb  5 00:29:47 Tower kernel: Hardware name: Supermicro H8DM8-2/H8DM8-2, BIOS 080014  10/22/2009
Feb  5 00:29:47 Tower kernel:  00000000000126d8 ffff88004bba1ab8 ffffffff8149830e ffff88004bba1b38
Feb  5 00:29:47 Tower kernel:  ffffffff81494152 ffff88004bba1b08 ffffffff810805c3 0000000000000000
Feb  5 00:29:47 Tower kernel:  000000000002e3db 0000000000000047 ffff88004bba1d20 ffffffff8149ba72
Feb  5 00:29:47 Tower kernel: Call Trace:

 

 

Even when I looked at it earlier in this thread, I thought there were way too many memory-hogging Plex processes running.
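A quick way to see what is eating dom0 memory before the OOM killer fires again (a rough sketch, run from the unRAID console):

# Overall memory picture
free -m
# Top 15 processes by resident memory
ps aux --sort=-rss | head -n 15
# Any OOM kills already recorded in the syslog
grep -i "oom-killer" /var/log/syslog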


So my CPUs and mobo support VT-d. Passthrough works in XenServer and VMware, but when I try to pass something through on unRAID it says VT-d is not enabled.

 

Please create your own thread in the virtualization forum. This isn't relevant to the release.

 

Post your exact hardware configs and the above info as a minimum. Also post the output of this command:

 

grep -E "(vmx|svm)" --color=always /proc/cpuinfo
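It can also be worth checking whether the hypervisor and the dom0 kernel actually see the IOMMU; roughly something like this (the exact wording of the log lines varies between versions):

# Xen's view - look for "I/O virtualisation enabled" or "disabled"
xl dmesg | grep -i "i/o virt"
# Dom0 kernel's view of the DMAR/IOMMU tables
dmesg | grep -iE "dmar|iommu"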


I am looking at adding a 3TB disk to my 6.0-beta 3 server. I had pre-cleared the disk, then stopped the array, added the disk, and restarted the array. When I did this in 5.0 it would do a quick format of the new disk and the array would start. In 6.0 it is clearing the disk, which has taken 2+ hours so far for 37%.

 

So I am looking at a 6-hour clearing process before my array restarts.

 

Is this normal/expected? Or did I mess something up?


I am looking at adding a 3TB disk to my 6.0-beta 3 server. I had pre-cleared the disk, then stopped the array, added the disk, and restarted the array. When I did this in 5.0 it would do a quick format of the new disk and the array would start. In 6.0 it is clearing the disk, which has taken 2+ hours so far for 37%.

 

So I am looking at a 6-hour clearing process before my array restarts.

 

Is this normal/expected? Or did I mess something up?

 

FYI, I also just had the same issue, but I'm still on 5.0.4 (so probably doesn't belong in this thread  :o).  When I assigned the new drive there was no option to format or start the array.  The only option was clear.  The only thing I did between preclearing the drive (2 passes) and adding it to the array was an hdparm -Tt /dev/sdx.  I did preclear the drive on another machine in an eSATA drive dock, but that shouldn't be an issue.


I am looking at adding a 3TB disk to my 6.0-beta 3 server. I had pre-cleared the disk, then stopped the array, added the disk, and restarted the array. When I did this in 5.0 it would do a quick format of the new disk and the array would start. In 6.0 it is clearing the disk, which has taken 2+ hours so far for 37%.

 

So I am looking at a 6-hour clearing process before my array restarts.

 

Is this normal/expected? Or did I mess something up?

 

FYI, I also just had the same issue, but I'm still on 5.0.4 (so probably doesn't belong in this thread  :o).  When I assigned the new drive there was no option to format or start the array.  The only option was clear.  The only thing I did between preclearing the drive (2 passes) and adding it to the array was an hdparm -Tt /dev/sdx.  I did preclear the drive on another machine in an eSATA drive dock, but that shouldn't be an issue.

 

Okay, glad it's not just me. :)

 

Tom - has this process changed for a reason? It sucks to have my array offline for 6 hours just because I want to add a new disk, especially since it's now offline for the bulk of the afternoon. If I had known this I would have added the disk before going to bed tonight instead of taking unRAID down during daylight hours.

 

 


I am looking at adding a 3TB disk to my 6.0-beta 3 server. I had pre-cleared the disk, then stopped the array, added the disk, and restarted the array. When I did this in 5.0 it would do a quick format of the new disk and the array would start. In 6.0 it is clearing the disk, which has taken 2+ hours so far for 37%.

 

So I am looking at a 6-hour clearing process before my array restarts.

 

Is this normal/expected? Or did I mess something up?

 

FYI, I also just had the same issue, but I'm still on 5.0.4 (so probably doesn't belong in this thread  :o).  When I assigned the new drive there was no option to format or start the array.  The only option was clear.  The only thing I did between preclearing the drive (2 passes) and adding it to the array was an hdparm -Tt /dev/sdx.  I did preclear the drive on another machine in an eSATA drive dock, but that shouldn't be an issue.

 

Okay, glad it's not just me. :)

 

Tom - has this process changed for a reason? It sucks to have my array offline for 6 hours just because I want to add a new disk, especially since it's now offline for the bulk of the afternoon. If I had known this I would have added the disk before going to bed tonight instead of taking unRAID down during daylight hours.

I'll look into this.


Peter,

I am in no way as knowledgeable as most of you guys here, but I did have a similar situation when testing beta 3.

In my case, the cache drive was still mounted in the console/terminal and unRAID would not stop/shut down.

I am only mentioning it in case it may help you or others.

 

Mark

 

When stopping the array with a VM running, it does not do a complete stop. I have no syslog; perhaps someone else can test? I was forced to power down, and I'm now doing a parity check.

 

//Peter
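For what it's worth, a rough way to see what is still holding things open when the array refuses to stop (assuming the usual /mnt/cache mount point):

# Any guest still running keeps its disk image open
xl list
# Processes that still have files open on the cache drive
fuser -vm /mnt/cache
lsof /mnt/cache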


Hi there, I started using the Xen guest with Arch yesterday. Since then, every time I transfer data to the drive while the Xen VM is running, the transfer speed is really bad, something like 100 KB/s, even though I have wired gigabit Ethernet. Any idea why this is happening? I have Plex running in the VM, nothing else really.


Hi there, I started using the Xen guest with Arch yesterday. Since then, every time I transfer data to the drive while the Xen VM is running, the transfer speed is really bad, something like 100 KB/s, even though I have wired gigabit Ethernet. Any idea why this is happening? I have Plex running in the VM, nothing else really.

 

Are you using PV Drivers in the VM?

 

Did you "tweak" NFS or Samba on both unRAID and within the VM?

 

Lots of variables... You need to provide more info for us to assist.
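Two quick checks that might help separate the network from everything else (a sketch; "tower" is a placeholder for the server's name or IP, and iperf has to be installed on both ends):

# Inside the Arch guest: is the PV network driver loaded?
lsmod | grep xen_netfront
# Raw network throughput: server side, then client side
iperf -s
iperf -c tower -t 30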


Hi there, I started using the Xen guest with Arch yesterday. Since then, every time I transfer data to the drive while the Xen VM is running, the transfer speed is really bad, something like 100 KB/s, even though I have wired gigabit Ethernet. Any idea why this is happening? I have Plex running in the VM, nothing else really.

 

Are you using PV Drivers in the VM?

 

Did you "tweak" NFS or Samba on both unRAID and within the VM?

 

Lots of variables... You need to provide more info for us to assist.

 

I used the ready-compiled VM from the Virtualization subforum. I didn't change anything, I went with the defaults, and I am using NFS.

 

The VM itself is running on a cache drive, but I am copying directly to the array.

 

Tasks: 149 total,  1 running, 148 sleeping,  0 stopped,  0 zombie
Cpu(s):  0.2%us,  0.6%sy,  0.0%ni, 35.4%id, 63.4%wa,  0.0%hi,  0.2%si,  0.3%st
Mem:  1942208k total,  1781260k used,  160948k free,    90684k buffers
Swap:        0k total,        0k used,        0k free,  1375252k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
21719 nobody    20  0  285m  15m  12m D    4  0.8  3:20.47 smbd
 1780 root      20  0 1010m 9236  900 S    3  0.5  25:14.65 shfs
 3575 root      20  0 2158m 6372 3940 S    1  0.3  2:09.56 qemu-system-i38
 1618 root      20  0    0    0    0 S    1  0.0  2:44.29 unraidd
21718 nobody    20  0  283m 9832 7644 S    0  0.5  0:11.92 smbd
21873 root      20  0    0    0    0 D    0  0.0  0:01.12 kworker/2:4
21952 root      20  0    0    0    0 D    0  0.0  0:04.18 kworker/u8:1
22179 root      20  0 13272 1272  948 R    0  0.1  0:00.56 top
    1 root      20  0  4356  600  516 S    0  0.0  0:10.89 init
    2 root      20  0    0    0    0 S    0  0.0  0:00.04 kthreadd
    3 root      20  0    0    0    0 S    0  0.0  0:42.10 ksoftirqd/0
    5 root      0 -20    0    0    0 S    0  0.0  0:00.00 kworker/0:0H
    7 root      RT  0    0    0    0 S    0  0.0  0:00.31 migration/0
    8 root      20  0    0    0    0 S    0  0.0  0:00.00 rcu_bh
    9 root      20  0    0    0    0 S    0  0.0  0:08.70 rcu_sched
   10 root      RT  0    0    0    0 S    0  0.0  0:00.21 migration/1
   11 root      20  0    0    0    0 S    0  0.0  0:00.26 ksoftirqd/1

syslog_07022014.txt


ironbadger, actually I now ran the xl destroy command to stop the Xen VM, but the performance is still really bad.

 

root@Tower:~# free
             total       used       free     shared    buffers     cached
Mem:       1940788    1638556     302232          0     119176    1207932
-/+ buffers/cache:      311448    1629340
Swap:            0          0          0

Tasks: 146 total,  1 running, 145 sleeping,  0 stopped,  0 zombie
Cpu(s):  0.3%us,  0.9%sy,  0.0%ni, 43.9%id, 54.7%wa,  0.0%hi,  0.1%si,  0.0%st
Mem:  1940788k total,  1815312k used,  125476k free,  120116k buffers
Swap:        0k total,        0k used,        0k free,  1385144k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
21719 nobody    20  0  287m  15m  12m D    4  0.8  5:11.75 smbd
 1780 root      20  0 1009m 8424  900 S    3  0.4  26:21.21 shfs
 1618 root      20  0    0    0    0 S    1  0.0  2:55.96 unraidd
21952 root      20  0    0    0    0 D    1  0.0  0:10.78 kworker/u8:1
    3 root      20  0    0    0    0 S    0  0.0  0:43.52 ksoftirqd/0
21718 nobody    20  0  285m 8044 5844 S    0  0.4  0:22.06 smbd
22465 root      20  0    0    0    0 D    0  0.0  0:01.94 kworker/0:0
    1 root      20  0  4356  600  516 S    0  0.0  0:10.91 init
    2 root      20  0    0    0    0 S    0  0.0  0:00.04 kthreadd
    5 root      0 -20    0    0    0 S    0  0.0  0:00.00 kworker/0:0H
    7 root      RT  0    0    0    0 S    0  0.0  0:00.31 migration/0
    8 root      20  0    0    0    0 S    0  0.0  0:00.00 rcu_bh
    9 root      20  0    0    0    0 S    0  0.0  0:09.31 rcu_sched
   10 root      RT  0    0    0    0 S    0  0.0  0:00.21 migration/1
   11 root      20  0    0    0    0 S    0  0.0  0:00.27 ksoftirqd/1
   13 root      0 -20    0    0    0 S    0  0.0  0:00.00 kworker/1:0H
   14 root      RT  0    0    0    0 S    0  0.0  0:00.26 migration/2

 

It has taken 39 minutes to copy a 1.4 GB file; that must surely be a new record. Any suggestions?
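The ~55-63% wa in those top snapshots points at disk I/O rather than the network, so it may be worth timing a local write on the server itself and comparing it with the network copy (a sketch; /mnt/disk1 and /mnt/cache are the usual unRAID mount points, and the test files should be removed afterwards):

# Local write straight to an array disk (parity-protected, so expect it to be slower)
dd if=/dev/zero of=/mnt/disk1/ddtest bs=1M count=1024 conv=fdatasync
# Same test on the cache drive for comparison
dd if=/dev/zero of=/mnt/cache/ddtest bs=1M count=1024 conv=fdatasync
rm /mnt/disk1/ddtest /mnt/cache/ddtest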

