unRAID Server Release 6.0-beta6-x86_64 Available



I commented out my SMB share config and uncommented the NFS config, and now the ArchVM won't start at all :(

 

Like so...

 

#
# /etc/fstab: static file system information
#
# <file system> <dir>   <type>  <options>       <dump>  <pass>
# /dev/xvda1
UUID=93ec2c22-36c1-487c-a888-adde602a16fe       /               ext4           $

#//media/adult /mnt/adult cifs auto,x-systemd.automount,guest,noperm,noserverino,uid=nobody,gid=users 0 0
#//media/backup /mnt/backup cifs auto,x-systemd.automount,guest,noperm,noserverino,uid=nobody,gid=users 0 0
#//media/documents /mnt/documents cifs auto,x-systemd.automount,guest,noperm,noserverino,uid=nobody,gid=users 0 0
#//media/downloads /mnt/downloads cifs auto,x-systemd.automount,guest,noperm,noserverino,uid=nobody,gid=users 0 0
#//media/music /mnt/music cifs auto,x-systemd.automount,guest,noperm,noserverino,uid=nobody,gid=users 0 0
#//media/photos /mnt/photos cifs auto,x-systemd.automount,guest,noperm,noserverino,uid=nobody,gid=users 0 0
#//media/video /mnt/video cifs auto,x-systemd.automount,guest,noperm,noserverino,uid=nobody,gid=users 0 0

//media:/mnt/user/adult /mnt/adult nfs auto,x-systemd.automount,guest,noperm,noserverino,uid=nobody,gid=users 0 0
//media:/mnt/user/backup /mnt/backup nfs auto,x-systemd.automount,guest,noperm,noserverino,uid=nobody,gid=users 0 0
//media:/mnt/user/documents /mnt/documents nfs auto,x-systemd.automount,guest,noperm,noserverino,uid=nobody,gid=users 0 0
//media:/mnt/user/downloads /mnt/downloads nfs auto,x-systemd.automount,guest,noperm,noserverino,uid=nobody,gid=users 0 0
//media:/mnt/user/music /mnt/music nfs auto,x-systemd.automount,guest,noperm,noserverino,uid=nobody,gid=users 0 0
//media:/mnt/user/photos /mnt/photos nfs auto,x-systemd.automount,guest,noperm,noserverino,uid=nobody,gid=users 0 0
//media:/mnt/user/video /mnt/video nfs auto,x-systemd.automount,guest,noperm,noserverino,uid=nobody,gid=users 0 0
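For comparison - those NFS lines look like they still use CIFS-style syntax, which would explain the failure. An NFS source in fstab is written host:/path (no leading //), and options like guest, noperm, noserverino, uid= and gid= are cifs-only and will make mount.nfs bail out. A minimal sketch of what two of the equivalent NFS entries might look like (assuming the server resolves as 'media' and exports /mnt/user/* - adjust to your actual exports):

```
media:/mnt/user/music   /mnt/music   nfs   auto,x-systemd.automount   0 0
media:/mnt/user/video   /mnt/video   nfs   auto,x-systemd.automount   0 0
```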

Link to comment

A minor thing that I noticed on 6.0-beta5a and now 6.0-beta6 is that the "Last checked..." notification has the time zone duplicated.

 

Last checked on Tue 17 Jun 2014 02:06:09 AM EDT EDT (two days ago), finding 0 errors.
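For what it's worth, a plausible cause (assuming the notification is built with date(1) - this is a guess, not confirmed from the unRAID source): the format string already ends in the time zone, and an extra %Z is appended after it, printing the zone twice:

```shell
# Hypothetical reconstruction of the notification's date formatting.
fmt='%a %d %b %Y %r %Z'     # already ends with the time zone, e.g. "... AM EDT"
TZ=UTC date "+$fmt %Z"      # trailing %Z duplicates the zone ("... UTC UTC")
TZ=UTC date "+$fmt"         # dropping the trailing %Z prints it once
```

If that is what the status script does, the fix would just be removing the redundant %Z.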

 

Jim

Link to comment

My Xen VM that I have been running for the last several betas starts up fine but after a period of time this occurs:

 

kernel BUG at drivers/net/xen-netback/netback.c:629!
invalid opcode: 0000 [#1] SMP
Modules linked in: ipt_MASQUERADE iptable_nat nf_conntrack_ipv4 nf_nat_ipv4 nf_nat md_mod iptable_filter ip_tables vhost_net vhost tun i2c_i801 igb ahci e1000e i2c_algo_bit libahci ptp pps_core mpt2sas raid_class scsi_transport_sas [last unloaded: md_mod]
CPU: 0 PID: 2395 Comm: vif1.0-guest-rx Not tainted 3.15.0-unRAID #4
Hardware name: Gigabyte Technology Co., Ltd. Z87X-UD5H/Z87X-UD5H-CF, BIOS F7 08/02/2013
task: ffff88040d1a64c0 ti: ffff88022ec28000 task.ti: ffff88022ec28000
RIP: e030:[<ffffffff81404fba>]  [<ffffffff81404fba>] xenvif_rx_action+0x484/0x7ff
RSP: e02b:ffff88022ec2bda0  EFLAGS: 00010202
RAX: 0000000000000013 RBX: 0000000000000012 RCX: ffffea0007647600
RDX: ffff88005655b2b8 RSI: ffff88000620dee0 RDI: 00000000003daff6
RBP: ffff88022ec2be70 R08: 0000000000000000 R09: 0000000000000001
R10: 0000160000000000 R11: ffff8802591d8000 R12: ffff88000620dee0
R13: 0000000000000011 R14: 0000100000000000 R15: ffff880056550800
FS:  0000000000000000(0000) GS:ffff880429e00000(0000) knlGS:ffff880429e00000
CS:  e033 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00002af488062c48 CR3: 000000040b168000 CR4: 0000000000042660
Stack:
ffff88022ec2bdf4 0000000000000000 ffff8804ffffffff ffff88022ec2bdd8
ffff88022ec2be08 000000110d960f80 ffff88005655b148 0000000000021e9a
ffff880056550800 0000100000000000 0000000000000001 ffff88022ec2bdf8
Call Trace:
[<ffffffff814068ab>] xenvif_kthread_guest_rx+0x108/0x1db
[<ffffffff8106e362>] ? __wake_up_sync+0xd/0xd
[<ffffffff814067a3>] ? xenvif_stop_queue+0x53/0x53
[<ffffffff81057fbb>] kthread+0xd6/0xde
[<ffffffff81057ee5>] ? kthread_create_on_node+0x162/0x162
[<ffffffff8157e94c>] ret_from_fork+0x7c/0xb0
[<ffffffff81057ee5>] ? kthread_create_on_node+0x162/0x162
Code: 8b 09 e8 25 f5 ff ff e9 0f ff ff ff 8b 45 b8 2b 85 6c ff ff ff 41 89 44 24 28 41 8b 87 34 a9 00 00 2b 85 68 ff ff ff 39 d8 76 02 <0f> 0b 48 8b 45 a0 48 8b 9d 50 ff ff ff 49 89 44 24 08 49 89 1c
RIP  [<ffffffff81404fba>] xenvif_rx_action+0x484/0x7ff
RSP <ffff88022ec2bda0>
---[ end trace 5051a419f0310811 ]---

Link to comment

I seem to have a problem with beta6/ArchVM.

 

I had just started the system (first time) with beta6 (updating from beta 5a).

 

I had opened one SSH session onto my archVM, performed some systemctl status enquiries, restarted one daemon with systemctl, and viewed the associated .service file.  I opened a second SSH session and invoked vi on the .service file - it displayed the file contents but failed to accept any commands.  The DomU had stopped responding to anything - even ping.  Dom0 and the other DomU were still working.

 

The attached image shows what was visible on the IPMI console.

 

I tried to shut down the wayward DomU from the unRAID web interface - no go.

I tried to 'xl reboot' the DomU - no go.

Successive 'xl list' commands showed that the DomU was still consuming CPU time.

'xl destroy' produced an error message:

root@Tower:~# xl list
Name                                        ID   Mem VCPUs	State	Time(s)
Domain-0                                     0  3687     2     r-----     201.4
ArchVMX                                      1  2048     6     -b----      54.0
archVM                                       2  2048     6     -b----     167.5
root@Tower:~# xl destroy 2
libxl: error: libxl_device.c:934:device_backend_callback: unable to remove device with path /local/domain/0/backend/vif/2/0
libxl: error: libxl.c:1457:devices_destroy_cb: libxl__devices_destroy failed for 2
root@Tower:~#

 

leaving a '(null)' entry in 'xl list':

root@Tower:~# xl list
Name                                        ID   Mem VCPUs	State	Time(s)
Domain-0                                     0  3687     2     r-----     218.5
ArchVMX                                      1  2048     6     -b----      56.5
(null)                                       2     0     6     --p--d     167.5
root@Tower:~#
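Until the underlying netback bug is fixed, the '(null)' zombie above can at least be detected automatically. A small sketch - the sample text is the output pasted above, and the awk filter is a hypothetical helper, not an xl feature:

```shell
# Pull the domain IDs of zombie "(null)" entries out of `xl list`-style output.
xl_output='Name                  ID   Mem VCPUs State  Time(s)
Domain-0               0  3687     2 r-----  218.5
ArchVMX                1  2048     6 -b----   56.5
(null)                 2     0     6 --p--d  167.5'

zombies=$(printf '%s\n' "$xl_output" | awk '$1 == "(null)" {print $2}')
echo "$zombies"
```

On a live host you would pipe `xl list` itself into the filter; but since even 'xl destroy' cannot remove the entry, detection is about as far as scripting gets here without a reboot.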

 

Sometime after this, the DomU showed as 'Shutdown' in the unRAID web interface, so I attempted to restart it - no go.

 

'xl create' produced the following:

root@Tower:~# xl create /mnt/user/cache_only/ArchVM/arch.cfg
Parsing config from /mnt/user/cache_only/ArchVM/arch.cfg
failed to free memory for the domain
root@Tower:~# 

 

So, although it's only a single DomU which is AWOL, I think I'm going to have to reboot the entire system.

 

Edited to add:

 

Exactly the same thing happened after the reboot, without any ssh sessions being started .... back to beta5a!

 

... and this seems to be exactly the same problem that needo is reporting in the post above!

 

syslog added ....

unraidcrash.jpg.43f8e53e2433c448f81ff69010d098c8.jpg

syslog-20140619-144913.txt.zip

Link to comment

Edited to add:

Exactly the same thing happened after the reboot, without any ssh sessions being started .... back to beta5a!

 

Yeah, this worked all day but suddenly stopped for me this evening. Unfortunately I have already converted my cache disk to btrfs, so now my unRAID server is dead in the water, unable to run any of the VMs I migrated to in order to keep unRAID stable. :) Irony!

Link to comment
Yeah, this worked all day but suddenly stopped for me this evening.

 

Mine has failed within five minutes, both times.

 

Unfortunately I have already converted my cache disk to btrfs, so now my unRAID server is dead in the water, unable to run any of the VMs I migrated to in order to keep unRAID stable. :) Irony!

 

Have you tried editing /boot/config/domains/*.cfg to set autostart="no"?
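A sketch of flipping that setting across all the domain configs without hand-editing each one (assumptions: the .cfg files contain an autostart= line in exactly that form; the demo below works on a throwaway copy rather than the real /boot/config/domains directory):

```shell
# Set up a throwaway config to demonstrate on; on the server the glob
# would be /boot/config/domains/*.cfg instead.
mkdir -p /tmp/domains-demo
printf 'name = "archVM"\nautostart="yes"\n' > /tmp/domains-demo/arch.cfg

# Rewrite any autostart= line to "no" in every cfg file.
sed -i 's/^autostart=.*/autostart="no"/' /tmp/domains-demo/*.cfg

grep autostart /tmp/domains-demo/arch.cfg
```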

 

Can you then convert your cache back to ReiserFS and revert to beta5a?

Link to comment

root@Tower:~# xl destroy 2
libxl: error: libxl_device.c:934:device_backend_callback: unable to remove device with path /local/domain/0/backend/vif/2/0
libxl: error: libxl.c:1457:devices_destroy_cb: libxl__devices_destroy failed for 2
root@Tower:~#

 

leaving a '(null)' entry in 'xl list':

root@Tower:~# xl list
Name                                        ID   Mem VCPUs	State	Time(s)
Domain-0                                     0  3687     2     r-----     218.5
ArchVMX                                      1  2048     6     -b----      56.5
(null)                                       2     0     6     --p--d     167.5
root@Tower:~#

 

Likewise, I'm experiencing this same problem with Arch, although on reboot my VM will boot up fine through the powerdown Sxx.sh scripts for the APCUPSD plugin.

 

My arch cfg file

name = "archVM"
bootloader = "pygrub"
memory = 2560
vcpus = '2'
disk = [ 
'phy:/mnt/cache/Domains/ArchVM/arch.img,xvda,w',
'file:/mnt/cache/Domains/ArchVM/data.img,xvdb,w'
#	'phy:/mnt/user/nameofshare,xvdb,w'
]
vif = [ 'mac=00:16:3e:27:11:22,bridge=xenbr0' ]
bootloader = "pygrub"

Link to comment
Likewise, I'm experiencing this same problem with Arch, although on reboot my VM will boot up fine through the powerdown Sxx.sh scripts for the APCUPSD plugin.

 

... suggesting that this may be  caused by the positioning of the autostart procedure within the boot sequence?

Link to comment

Why do you have the bootloader line twice?

 

name = "archVM"
bootloader = "pygrub"
memory = 2560
vcpus = '2'
disk = [ 
'phy:/mnt/cache/Domains/ArchVM/arch.img,xvda,w',
'file:/mnt/cache/Domains/ArchVM/data.img,xvdb,w'
#	'phy:/mnt/user/nameofshare,xvdb,w'
]
vif = [ 'mac=00:16:3e:27:11:22,bridge=xenbr0' ]
bootloader = "pygrub"

Link to comment

OK, here we go - some positive news too.

Beta 6 so far has been running smoothly.

MUCH better than beta3 and beta5a.

CPU and memory utilisation is much better than with both of those releases... I have two servers: one running plain unRAID 6 and the other unRAID with Xen.

I just copied the files and rebooted....

The VM came up automatically and, like I said, the CPU utilisation between this version and the prior ones is a night-and-day difference.

Before, top always showed around 4+; now it goes down to 1.44...

And I really didn't change anything else.

 

The only caveat I have is having to zip my Plex folder now, stop all my plugins, and move everything off the cache drive for btrfs :(

I know it might look easy to development people, but having 50,000 TV episodes and nearly 900 shows gives Plex a lot of media data ...

and moving that is not done in 10 minutes :(

Hopefully Limetech will stick with btrfs now, as I'm not in the mood to do this too often; but I am really interested in docker ... it seems like the ideal plugin system.

Link to comment

I know it might look easy to development people, but having 50,000 TV episodes and nearly 900 shows gives Plex a lot of media data ...

and moving that is not done in 10 minutes :(

Hopefully Limetech will stick with btrfs now, as I'm not in the mood to do this too often; but I am really interested in docker ... it seems like the ideal plugin system.

 

OMG yes this ^^^ hahaha

 

I've had to copy my plex database a few times and it is always so painful.  I think the only thing MORE painful is having to chown/chmod the whole thing when I moved it off unraid and into archVM.  Even worse was the nagging suspicion that I wouldn't have had to do it had I not made an earlier error :(

Link to comment

root@Tower:~# xl destroy 2
libxl: error: libxl_device.c:934:device_backend_callback: unable to remove device with path /local/domain/0/backend/vif/2/0
libxl: error: libxl.c:1457:devices_destroy_cb: libxl__devices_destroy failed for 2
root@Tower:~#

 

leaving a '(null)' entry in 'xl list':

root@Tower:~# xl list
Name                                        ID   Mem VCPUs	State	Time(s)
Domain-0                                     0  3687     2     r-----     218.5
ArchVMX                                      1  2048     6     -b----      56.5
(null)                                       2     0     6     --p--d     167.5
root@Tower:~#

 

Likewise, I'm experiencing this same problem with Arch, although on reboot my VM will boot up fine through the powerdown Sxx.sh scripts for the APCUPSD plugin.

 

My arch cfg file

name = "archVM"
bootloader = "pygrub"
memory = 2560
vcpus = '2'
disk = [ 
'phy:/mnt/cache/Domains/ArchVM/arch.img,xvda,w',
'file:/mnt/cache/Domains/ArchVM/data.img,xvdb,w'
#	'phy:/mnt/user/nameofshare,xvdb,w'
]
vif = [ 'mac=00:16:3e:27:11:22,bridge=xenbr0' ]
bootloader = "pygrub"

 

I am also having issues with Arch. It seems to start, but once I start my Transmission daemon everything dies within minutes and I lose all access.

 

my config

 

name = "archVM"
bootloader = "pygrub"
memory = 4096
#vcpus = 4
disk = [ 
'phy:/mnt/cache/Apps/ArchVM/arch.img,xvda,w',
#	'phy:/mnt/cache/Apps,xvdb,w'
]
vif = [ 'mac=00:16:3e:xx:xx:xx,bridge=br0' ]
bootloader = "pygrub"

Link to comment

I seem to have a problem with beta6/ArchVM.

 

I had just started the system (first time) with beta6 (updating from beta 5a).

 

I had opened one SSH session onto my archVM, performed some systemctl status enquiries, restarted one daemon with systemctl, and viewed the associated .service file.  I opened a second SSH session and invoked vi on the .service file - it displayed the file contents but failed to accept any commands.  The DomU had stopped responding to anything - even ping.  Dom0 and the other DomU were still working.

 

The attached image shows what was visible on the IPMI console.

 

I tried to shut down the wayward DomU from the unRAID web interface - no go.

I tried to 'xl reboot' the DomU - no go.

Successive 'xl list' commands showed that the DomU was still consuming CPU time.

'xl destroy' produced an error message:

root@Tower:~# xl list
Name                                        ID   Mem VCPUs	State	Time(s)
Domain-0                                     0  3687     2     r-----     201.4
ArchVMX                                      1  2048     6     -b----      54.0
archVM                                       2  2048     6     -b----     167.5
root@Tower:~# xl destroy 2
libxl: error: libxl_device.c:934:device_backend_callback: unable to remove device with path /local/domain/0/backend/vif/2/0
libxl: error: libxl.c:1457:devices_destroy_cb: libxl__devices_destroy failed for 2
root@Tower:~#

 

leaving a '(null)' entry in 'xl list':

root@Tower:~# xl list
Name                                        ID   Mem VCPUs	State	Time(s)
Domain-0                                     0  3687     2     r-----     218.5
ArchVMX                                      1  2048     6     -b----      56.5
(null)                                       2     0     6     --p--d     167.5
root@Tower:~#

 

Sometime after this, the DomU showed as 'Shutdown' in the unRAID web interface, so I attempted to restart it - no go.

 

'xl create' produced the following:

root@Tower:~# xl create /mnt/user/cache_only/ArchVM/arch.cfg
Parsing config from /mnt/user/cache_only/ArchVM/arch.cfg
failed to free memory for the domain
root@Tower:~# 

 

So, although it's only a single DomU which is AWOL, I think I'm going to have to reboot the entire system.

 

Edited to add:

 

Exactly the same thing happened after the reboot, without any ssh sessions being started .... back to beta5a!

 

... and this seems to be exactly the same problem that needo is reporting in the post above!

 

syslog added ....

 

Peter's post says it all really. I can power on my Xen Arch VM (using the webUI) and connect to it for a very short period of time; after approx 2-3 mins from boot it becomes unresponsive - I cannot access the console of the VM, I cannot SSH in, and I cannot access any apps running in the VM. If I then attempt an 'xl shutdown', the VM doesn't shut down; if I force a shutdown with 'xl destroy', the VM is still listed in 'xl list' and you end up with a "(null)" entry. The only way to remove the null entry is to shut down any other DomUs and reboot the host.

 

The Arch VM I have is NOT based on Ironic Badger's build, so I don't think it's related to that. My theory right now is that it may be related to either having cores pinned for the VM and/or to having STP turned off - those are the only "odd" options I have set that I can think of. I will post my Arch cfg file when I get home.

Edit - sorry, forgot to mention: I am using autofs for NFS only, no SMB for me, so I don't believe it's related to SMB.

Link to comment

The Arch VM I have is NOT based on Ironic Badger's build, so I don't think it's related to that. My theory right now is that it may be related to either having cores pinned for the VM and/or to having STP turned off - those are the only "odd" options I have set that I can think of. I will post my Arch cfg file when I get home.

Edit - sorry, forgot to mention: I am using autofs for NFS only, no SMB for me, so I don't believe it's related to SMB.

 

I have not pinned any CPUs to anything, and I switched from SMB to NFS as a test, and mine still dies after a few minutes.

Hopefully this can be resolved soon. I'll probably end up just moving to docker for the things I'm currently using Arch for, but still, I'd like to get this working.

Link to comment

The Arch VM I have is NOT based on Ironic Badger's build, so I don't think it's related to that. My theory right now is that it may be related to either having cores pinned for the VM and/or to having STP turned off - those are the only "odd" options I have set that I can think of. I will post my Arch cfg file when I get home.

Edit - sorry, forgot to mention: I am using autofs for NFS only, no SMB for me, so I don't believe it's related to SMB.

 

I have not pinned any CPUs to anything, and I switched from SMB to NFS as a test, and mine still dies after a few minutes.

Hopefully this can be resolved soon. I'll probably end up just moving to docker for the things I'm currently using Arch for, but still, I'd like to get this working.

 

Seriously grasping at straws here ;D - don't suppose you have STP turned off?

Link to comment

Anyone having parity check speed issues with beta6? I just upgraded from v5 to play with VMs and docker, and my parity check speeds dropped from ~100MB/s to just 1MB/s while running just bare unRAID, no plugins or VMs/docker containers. I'm thinking that can't be right, can it?

Link to comment

The Arch VM I have is NOT based on Ironic Badger's build, so I don't think it's related to that. My theory right now is that it may be related to either having cores pinned for the VM and/or to having STP turned off - those are the only "odd" options I have set that I can think of. I will post my Arch cfg file when I get home.

Edit - sorry, forgot to mention: I am using autofs for NFS only, no SMB for me, so I don't believe it's related to SMB.

 

I have not pinned any CPUs to anything, and I switched from SMB to NFS as a test, and mine still dies after a few minutes.

Hopefully this can be resolved soon. I'll probably end up just moving to docker for the things I'm currently using Arch for, but still, I'd like to get this working.

 

Seriously grasping at straws here ;D - don't suppose you have STP turned off?

I have STP turned on.....

Link to comment

The Arch VM I have is NOT based on Ironic Badger's build, so I don't think it's related to that. My theory right now is that it may be related to either having cores pinned for the VM and/or to having STP turned off - those are the only "odd" options I have set that I can think of. I will post my Arch cfg file when I get home.

Edit - sorry, forgot to mention: I am using autofs for NFS only, no SMB for me, so I don't believe it's related to SMB.

 

I have not pinned any CPUs to anything, and I switched from SMB to NFS as a test, and mine still dies after a few minutes.

Hopefully this can be resolved soon. I'll probably end up just moving to docker for the things I'm currently using Arch for, but still, I'd like to get this working.

 

Seriously grasping at straws here ;D - don't suppose you have STP turned off?

I have STP turned on.....

 

Thanks for the comment! Hmm, OK, in that case I'm out of guesses as to why jonp didn't see this during testing. Very weird!

Link to comment

All,

 

We've been looking into this issue (Xen Net bug / skb rides the rocket) again since it was first reported after the launch of Beta 6.  We tested the ArchVM specifically and where it was crashing consistently in a previous internal beta on an older Linux kernel, it was not crashing in Beta 6.

 

Here's my current suggestion:

 

1)  Make a backup copy of your Arch VM image somewhere.

2)  Reboot into non-Xen mode.

3)  Attempt to run the VM in KVM mode with virsh and an XML configuration file.

 

I'm going to try and put one together today for this to see if I can get a "once-Xen" VM running under KVM without any major effort.  I suggest this because this error is specific to Xen and does not show itself with KVM.
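For anyone wanting to experiment before that official config appears, here is a minimal sketch of what such a libvirt domain XML might look like. Every value - name, memory, paths, bridge - is a placeholder assumption, not a tested LimeTech config, and note that a Xen PV image booted via pygrub typically needs a kernel and bootloader installed inside the image before it will boot as a KVM/HVM guest:

```xml
<domain type='kvm'>
  <name>archVM</name>
  <memory unit='MiB'>2048</memory>
  <vcpu>2</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
    <boot dev='hd'/>
  </os>
  <devices>
    <!-- placeholder path: point at your actual image -->
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/cache/Domains/ArchVM/arch.img'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='bridge'>
      <source bridge='br0'/>
      <model type='virtio'/>
    </interface>
    <console type='pty'/>
  </devices>
</domain>
```

It would then be registered and started with `virsh define arch.xml` followed by `virsh start archVM`.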

Link to comment

The Arch VM I have is NOT based on Ironic Badger's build, so I don't think it's related to that. My theory right now is that it may be related to either having cores pinned for the VM and/or to having STP turned off - those are the only "odd" options I have set that I can think of. I will post my Arch cfg file when I get home.

Edit - sorry, forgot to mention: I am using autofs for NFS only, no SMB for me, so I don't believe it's related to SMB.

 

I have not pinned any CPUs to anything, and I switched from SMB to NFS as a test, and mine still dies after a few minutes.

Hopefully this can be resolved soon. I'll probably end up just moving to docker for the things I'm currently using Arch for, but still, I'd like to get this working.

 

Seriously grasping at straws here ;D - don't suppose you have STP turned off?

I have STP turned on.....

I have it turned off.  I'm not sure what it's supposed to do, and it seemed to all work fine on the last beta.

Link to comment

Anyone having parity check speed issues with beta6? I just upgraded from v5 to play with VMs and docker, and my parity check speeds dropped from ~100MB/s to just 1MB/s while running just bare unRAID, no plugins or VMs/docker containers. I'm thinking that can't be right, can it?

 

I'm getting ~125MB/s right now.

 

Link to comment

Anyone having parity check speed issues with beta6? I just upgraded from v5 to play with VMs and docker, and my parity check speeds dropped from ~100MB/s to just 1MB/s while running just bare unRAID, no plugins or VMs/docker containers. I'm thinking that can't be right, can it?

 

For how long has it been consistently at 1MB/s?  Have you tried accessing the webGUI from another device to see if it's a browser related issue?

Link to comment

All,

 

We've been looking into this issue again since it was first reported after the launch of Beta 6.  We tested the ArchVM specifically and where it was crashing consistently in a previous internal beta on an older Linux kernel, it was not crashing in Beta 6.

 

Here's my current suggestion:

 

1)  Make a backup copy of your Arch VM image somewhere.

2)  Reboot into non-Xen mode.

3)  Attempt to run the VM in KVM mode with virsh and an XML configuration file.

 

I'm going to try and put one together today for this to see if I can get a "once-Xen" VM running under KVM without any major effort.  I suggest this because this error is specific to Xen and does not show itself with KVM.

 

For me to try this, I'll have to 'convert' my windows VM also.

 

Can I ask, is KVM the most likely future of unRAID virtualization, or is XEN very likely to continue to be supported?  I ask for a couple of reasons.  In several of grumpybutfun's posts, he's explained that KVM will require more effort from LimeTech to keep current and updated, whereas XEN is built into Linux, so it will require less 'maintenance' from LT to keep working 'properly'.

 

I don't know enough about either to have an opinion of which is 'better' right now, but I'd like to put my time/energy into whichever is likely to be in unRAID next year, in case it ends up being only one system.

 

I expect that I'll get docker running today, and intend to use that for SAB, SickRage, etc, so I don't think I'll need to have a linux VM on unRAID, but who knows how things will be in 6 more months :)

Link to comment

Anyone having parity check speed issues with beta6? I just upgraded from v5 to play with VMs and docker, and my parity check speeds dropped from ~100MB/s to just 1MB/s while running just bare unRAID, no plugins or VMs/docker containers. I'm thinking that can't be right, can it?

 

For how long has it been consistently at 1MB/s?  Have you tried accessing the webGUI from another device to see if it's a browser related issue?

 

Been running for about 30 min now, just checked from another machine and it's reporting 1.9MB/s and it's about 2GB into the parity check

Link to comment

Hmm, that is odd.  Mind sharing your syslog?  If you share your flash device over your network, you should be able to type this via an SSH session to your server after logging in:

 

cp /var/log/syslog /boot/syslog

 

Then browse to your flash device over your network, copy the syslog file inside, and post it either on pastebin or within a <code> snippet on the forums here.

Been running for about 30 min now, just checked from another machine and it's reporting 1.9MB/s and it's about 2GB into the parity check

 

Link to comment
