unRAID Server Release 6.2.0-beta21 Available



Mover no longer moving.

 

I had a bit of a hiccup with my server. I went to stop a docker and the GUI became unresponsive; it so happened I was copying data to the array at the same time, data that was first going to my cache drive. When I stopped the data copy the GUI became responsive again, and I was able to stop and restart the docker. I then recommenced the data copy; however, the log does not reflect that the mover is actually doing anything, yet the GUI says it is moving and is greyed out. My issue is that I am copying just over 1TB of data and risk filling up my cache drive if the mover is not working. Of course I can stop the copy job, but I'd like to fix the mover if possible. Diags attached.

 

Oddly enough, as soon as I stopped the data copy the mover came to life. Weird. I guess anyone can look at my diags anyway.

Moving and copying at the same time is not the way it was designed to work, and will just make things thrash.

I reported an issue with NFS mounts in 6.2 beta20, and I am back to report it in beta21 as well. I mount a remote SMB share (on another computer) on the local unRAID server using UD and try to share it via NFS on the unRAID server, and I get the following errors in the log:

 

Apr 8 19:14:01 Tower root: exportfs: /mnt/disks/HANDYMANSERVER_Backups does not support NFS export
Apr 8 19:15:01 Tower root: exportfs: /mnt/disks/HANDYMANSERVER_Backups does not support NFS export
Apr 8 19:16:01 Tower root: exportfs: /mnt/disks/HANDYMANSERVER_Backups does not support NFS export
Apr 8 19:17:01 Tower root: exportfs: /mnt/disks/HANDYMANSERVER_Backups does not support NFS export
Apr 8 19:18:01 Tower root: exportfs: /mnt/disks/HANDYMANSERVER_Backups does not support NFS export
Apr 8 19:19:02 Tower root: exportfs: /mnt/disks/HANDYMANSERVER_Backups does not support NFS export
Apr 8 19:19:28 Tower root: exportfs: /mnt/disks/HANDYMANSERVER_Backups does not support NFS export
Apr 8 19:20:01 Tower root: exportfs: /mnt/disks/HANDYMANSERVER_Backups does not support NFS export

 

It does export via NFS though.

 

/etc/exports file:

# See exports(5) for a description.
# This file contains a list of all directories exported to other computers.
# It is used by rpc.nfsd and rpc.mountd.

"/mnt/disks/HANDYMANSERVER_Backups" -async,no_subtree_check,fsid=200 *(sec=sys,rw,insecure,anongid=100,anonuid=99,all_squash)
"/mnt/user/Computer Backups" -async,no_subtree_check,fsid=103 *(sec=sys,rw,insecure,anongid=100,anonuid=99,all_squash)
"/mnt/user/Public" -async,no_subtree_check,fsid=100 *(sec=sys,rw,insecure,anongid=100,anonuid=99,all_squash)
"/mnt/user/iTunes" -async,no_subtree_check,fsid=101 *(sec=sys,rw,insecure,anongid=100,anonuid=99,all_squash)

 

The line I add to the /etc/exports file is:

"/mnt/disks/HANDYMANSERVER_Backups" -async,no_subtree_check,fsid=200 *(sec=sys,rw,insecure,anongid=100,anonuid=99,all_squash)

 

My code reads the /etc/exports file into an array, adds my line to the array, and then writes the array back to the /etc/exports file. The entry should show up at the end of the file, not in the middle. It appears that something is altering the /etc/exports file in the background, causing me to get only parts of the file at times.
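The read-modify-write step described here can be sketched as follows. This is a hypothetical illustration, not the actual Unassigned Devices code; the entry string is just the example from above, and the target path is a scratch file for demonstration:

```shell
#!/bin/sh
# Hypothetical sketch of the read-modify-write described above (not the
# actual Unassigned Devices code). Appends an export entry to the end of
# the file unless an identical line is already present.
add_export() {
    exports_file=$1
    entry=$2
    grep -qxF "$entry" "$exports_file" 2>/dev/null \
        || printf '%s\n' "$entry" >> "$exports_file"
}

# Example, using a scratch file rather than the live /etc/exports:
add_export /tmp/exports.test '"/mnt/disks/HANDYMANSERVER_Backups" -async,no_subtree_check,fsid=200 *(sec=sys,rw,insecure,anongid=100,anonuid=99,all_squash)'
```

Because the entry is only appended when absent, running it twice leaves a single copy of the line at the end of the file.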

 

Please note that the mount point is /mnt/disks/, not /mnt/user/.

 

When I mount an iso file with UD and share it via NFS, I do not see the errors in the log.

 

This did not show up in the early 6.2 beta because NFS was not working, but has shown up in all subsequent beta versions.

 

Diagnostics attached.

 

I've done some more experimenting with this problem. If I export an NFS share with exportfs directly, instead of editing the /etc/exports file, using:

/usr/sbin/exportfs -io async,sec=sys,rw,insecure,anongid=100,anonuid=99,all_squash :/mnt/disks/mountpoint

 

I see the directory when using 'exportfs' to display the NFS exports, but a short while later the export for /mnt/disks/mountpoint is missing; it has been removed and is no longer exported.

 

I'd rather use the exportfs method of managing the UD NFS exports instead of changing the /etc/exports file, but it doesn't look like it is currently working like I expect.

 

I have done a little research and found that the log message 'Apr 8 19:14:01 Tower root: exportfs: /mnt/disks/HANDYMANSERVER_Backups does not support NFS export' can show up with an encrypted file system and it can also occur with a FUSE file system.

 

LT: There appears to be a background task that is periodically updating the NFS exports from the /etc/exports file, overwriting my entry; that is why I lose my entries set with exportfs. It would also explain why I get the log message constantly.

 

EDIT: The /etc/exports file is constantly being written; I can see its timestamp incrementing. Why does this file get written constantly? I would not expect that to happen.
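One quick way to confirm the background rewrites is to poll the file's modification time from a shell. The path and 5-second interval here are illustrative:

```shell
#!/bin/sh
# Poll a file's mtime to confirm something keeps rewriting it in the
# background. Path and polling interval are illustrative.
file_mtime() {
    stat -c %Y "$1"
}

watch_rewrites() {
    f=$1
    last=$(file_mtime "$f")
    while sleep 5; do
        now=$(file_mtime "$f")
        if [ "$now" != "$last" ]; then
            echo "$f was rewritten"
            last=$now
        fi
    done
}

# Usage: watch_rewrites /etc/exports
```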

emhttp manages '/etc/exports' and will rewrite it every time you access the webGui or a background task (e.g. SMART monitoring) is executed, so that's why UD's changes are overwritten. I believe you might be able to write your changes to '/etc/exports-', which emhttp uses as a seed file: it appends its own NFS entries and saves the result out to '/etc/exports'.

 

 

Let me give that a try. That explains why I am having trouble. I don't understand why /etc/exports needs to be constantly rewritten, though.

When I write my entries into the /etc/exports- file, the /etc/exports file gets corrupted. Sometimes an entry ends up appended to the comment line, and sometimes it is dropped entirely.

 

Should this discussion be moved to another thread?  Or should we PM to go over this? - Solved it.

 

EDIT: I think I may have a problem on my end and have a solution.

 

EDIT: The log message 'does not support NFS export' comes from sharing remote shares on the local computer via NFS. NFS does not allow this, and I can't believe 6.1.9 did not complain about it; I'm sure this change came with the newer Linux kernel.


Another minor little bug here.

 

dockerMan will allow you to choose a path for docker.img that contains a space, e.g.: /mnt/cache/test share/docker.img

 

However, when starting docker you will get the following error:

 

Apr 9 20:33:01 Server_A root: /mnt/cache/test is not a file
Apr 9 20:33:01 Server_A emhttp: shcmd (25523): /etc/rc.d/rc.docker start |& logger
Apr 9 20:33:01 Server_A root: no image mounted at /var/lib/docker

 

A quick test suggests this also affects 6.1.9, where the error is:

Apr 9 20:35:49 Server_B logger: Not starting Docker: mount error
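The log line above ("/mnt/cache/test is not a file") is consistent with the image path undergoing shell word splitting at the space. A minimal illustration of that failure mode (hypothetical; not the actual rc.docker code):

```shell
#!/bin/sh
# Word splitting on an unquoted path truncates it at the space, which
# would explain the "/mnt/cache/test is not a file" message above.
# Hypothetical illustration, not the actual rc.docker code.
IMG="/mnt/cache/test share/docker.img"

first_word() {
    set -- $1   # deliberately unquoted: splits at whitespace
    echo "$1"
}

first_word "$IMG"   # prints "/mnt/cache/test"
```

Quoting the expansion ("$IMG") wherever the path is tested would avoid the truncation.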


Still can't use VMs that have at least one vDisk on a physical disk on the array.

Worked until 6.1.9; broken since the first public beta.

 

Happens even in Safe Mode (no Plug-ins) with docker disabled and a clean "go" file.

 

Problem:

- VMs with at least one vDisk on a physical disk on the array:

    - If the vDisk is the system disk, the VM boots but never gets to the desktop.

    - If the vDisk is a second disk, it boots and works fine until I/O is put on the vDisk, then it becomes unresponsive

- Once the VM becomes unresponsive, it can no longer be shut down or even force-stopped ("resource busy")

    - After trying to force-stop the VM, the unRAID webGui becomes unresponsive after accessing some pages (VM tab, share details)

- After starting the VM, I have trouble accessing shares

    - Explorer hangs with "no response" after opening an SMB share

    - even MC over ssh locks up the whole ssh session after trying to access /mnt/user/"share name"

- More details in my earlier posts on the issue (it seems I am not the only one with it):

    - http://lime-technology.com/forum/index.php?topic=47744.msg457766#msg457766

    - http://lime-technology.com/forum/index.php?topic=47875.msg459773#msg459773

 

How to reproduce:

- Start a Windows VM with at least one vDisk on a physical disk on the array

- put some I/O on that vDisk (booting/copying files)

 

Diagnostics were taken after I tried to force shut the VM.

 

I physically removed the NVMe cache (and moved libvirt.img to the array); same issue.

unraid-diagnostics-20160410-1036.zip


Hi!

 

I'm trying to send the unRAID syslog to an Observium server following this guide: https://www.observium.org/docs/syslog/

 

I need to enable the omprog module in the rsyslog configuration, but the module itself is not present; it looks like rsyslog needs to be compiled with the "--enable-omprog" option.

 

Do you think this would be doable? And to make sure this solves the issue, is there any way to update/recompile rsyslog directly on my server?
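For reference, once a rsyslog build with omprog support is available, the guide's approach boils down to a fragment like this (rsyslog v8 module/action syntax; the forwarding script path is a placeholder, not something from the guide):

```
# Load the omprog output module (only available if rsyslog was built
# with --enable-omprog) and pipe messages to an external program.
module(load="omprog")
action(type="omprog" binary="/usr/local/bin/observium-syslog.sh")
```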

 

Thanks! :)


Over the weekend I have been having some serious problems with access to shares suddenly being dropped, either while moving files into shares or simply when trying to access folders/files in a share.

 

TeraCopy will hang while copying files to a folder on a share, and then Windows Explorer will freeze too. Sometimes all mapped drives to shares will disappear as well. In addition, when trying to access folders and files with Windows Explorer, sometimes it will simply time out.

 

I had thought this was an issue I was having with the dynamix file integrity plugin, but I have now gone back to a bare metal install instead.

 

I have seen the aforementioned behaviour with 6.2b18, and have since updated to 6.2b21, where I am seeing the same problem.

 

I have also noticed that the motherboard speaker is making the odd random beep every now and then too.

 

I attach two diagnostic files: one (FIRST BOOT) is from when 6.2b21 hung on me, and the second (SECOND BOOT) is from just after rebooting 6.2b21 again, which seems to be working OK.

 

I have found that on stopping the array, the GUI does not refresh to allow you to reboot. Instead I have to manually reboot the server using the power button; however, on reboot parity is valid and no parity check is carried out.

 

There also seems to be a small delay in accessing folders and files, whereas access used to be instant.

 

At this stage I am very tempted to go back to a 6.1.9 install; however, I would lose the second parity disk which I have installed.

tower-diagnostics-20160410-1640_FIRST_BOOT.zip

tower-diagnostics-20160410-1656_SECOND__BOOT.zip

I have also noticed that the motherboard speaker is making the odd beep every now and then too.
Are you SURE it's the motherboard speaker? I've had a failing hard drive make what I would swear was a MB beep. Try unplugging or covering the speaker with tape to see if it's actually the speaker making the noise.
mine is also beeping....

Link to comment

I've noticed that after a few minutes my unRAID box will become very slow to respond and the processor is pegged at 100%.  It takes a restart (power off with the power button) to get it to work again.  I've tried it with various combinations of dockers and VMs running, but can't find a docker or VM to blame it on.  Diags attached.  Thanks for the help.

 

I see you have the Dynamix File Integrity plugin installed. Could this be the cause of, and possible solution for, your problem? You say you've tried eliminating dockers and VMs, but have you looked at plugins?

 

 

Are you sure it's not the BIOS temperature / fan speed monitoring that's triggering it? Disable the monitoring completely and see if the beeps continue. If they stop, adjust the trigger points accordingly.

 

Also, is anything strange showing on the local monitor? A ^G output to the screen will also result in an audible beep.

 

Thanks for the reply, John. I eliminated all the plugins too, and it doesn't exhibit that behavior anymore.


 

I think you are missing part of the equation. It is not only the stress introduced by the testing; the elapsed time is an integral part of the entire process.

 

To be clear, I am not suggesting that you can stress test a drive in five minutes. What I am suggesting is that you can do far more in-depth and better testing by cutting out those phases and instead spending that time running better tests, tests that were actually designed with stress testing in mind. (Zeroing and post-zeroing reads were designed to avoid taking the array down for a long time, not to stress test a disk; while that might be a side effect, my point is that once you no longer have to worry about that part, you can focus on designing better tests.)

 

Final Edit:

 

Joe's Pre-Clear script was designed to solve the problem of having to take your array down for a long time while clearing new disks. It's being used for that purpose, but also for a purpose it wasn't originally designed for, which is stress testing disks. The only point I'm making is that now that the original problem isn't there anymore, we should perhaps look at designing a script that sets out to solve the stress-test problem, instead of using a script that can be used for that but wasn't originally designed with it in mind. That's all.

Did you take a really good look at Joe L.'s script? It's a stress test that writes a single pattern to the disk. That doesn't mean it's ineffective, nor that it has a conceptual design flaw.

 

You have to keep in mind that hard disk surface tests used to live in the filesystem's realm. In those days, tests like badblocks scanned the whole disk surface and exported a file containing all the bad blocks on the disk. When you formatted the disk, you would load that file so the filesystem was aware of the existing faulty sectors. Nowadays, bad sectors are handled by the disk firmware, transparently to the filesystem: modern firmware maps bad logical sectors to different physical ones, and that is called a reallocated sector.

 

That being said, I have seen no case where a bad sector was caught using badblocks after a preclear on a healthy hard disk. And because of all the advanced firmware algorithms, a successful surface scan is not a sufficient indicator of a disk's health. Besides that, no other test is designed to stress the disk heads like preclear.
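For what it's worth, the single-pattern write-plus-verify pass at the heart of preclear can be illustrated in miniature. This toy runs against an ordinary file; a real surface test targets the raw device (e.g. with badblocks in destructive write mode) and erases its contents:

```shell
#!/bin/sh
# Toy illustration of a single-pattern write/verify pass (the idea behind
# preclear's zeroing + post-read), run against an ordinary file here.
# A real surface test targets the whole device and destroys its data.
pattern_verify() {
    target=$1
    mb=$2
    # Write the pattern (zeros) ...
    dd if=/dev/zero of="$target" bs=1M count="$mb" 2>/dev/null
    # ... then read everything back and confirm each byte is still zero.
    cmp -s -n $((mb * 1024 * 1024)) "$target" /dev/zero
}

# Example: pattern_verify /tmp/surface.test 4 && echo "verify OK"
```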

 

 


Upgraded from 6.1.9 to this and now the VM Manager is gone... I had VMs set up, and now there's a new UI that wants me to specify new locations?

 

Point the libvirt storage location to your conf file (mine looks like this: /mnt/cache/libvirt/qemu-conf) and you should see your VMs show up again. Don't forget the VM guide changes from the B18 email before you try to start them.


I do not see any config file anymore. I thought the configs were with the img.

 

I have 2 VMs:

 

/mnt/disk8/Windows_NoVPN_VirtDriver/vdisk1.img

/mnt/disk9/Windows_VPN_VirtDriver/vdisk1.img

 

Unless you mean

 

\flash\config\plugins\dynamix.kvm.manager\qemu

 

root@Icarus:/mnt/disk8/Windows_NoVPN_VirtDriver# find / . -type f -name "*.conf"
find: WARNING: Hard link count is wrong for `/proc/fs' (saw only st_nlink=10 but we already saw 8 subdirectories): this may be a bug in your file system driver.  Automatically turning on find's -noleaf option.  Earlier results may have failed to include directories that should have been searched.
find: `/proc/20742': No such file or directory
/boot/config/plugins/dynamix.system.temp/drivers.conf
/boot/config/plugins/dynamix.system.temp/sensors.conf
/usr/share/dbus-1/session.conf
/usr/share/dbus-1/system.conf
/usr/share/samba/setup/olc_syncrepl.conf
/usr/share/samba/setup/modules.conf
/usr/share/samba/setup/olc_mmr.conf
/usr/share/samba/setup/olc_syncrepl_seed.conf
/usr/share/samba/setup/mmr_serverids.conf
/usr/share/samba/setup/memberof.conf
/usr/share/samba/setup/slapd.conf
/usr/share/samba/setup/named.conf
/usr/share/samba/setup/mmr_syncrepl.conf
/usr/share/samba/setup/krb5.conf
/usr/share/samba/setup/refint.conf
/usr/share/samba/setup/olc_serverid.conf
/var/lib/netatalk/afp_signature.conf
/mnt/user/Data/Apps/Operating_Systems/Windows 7 USB Method/WAIK Files/UsbBootWatcher.conf
/mnt/user/Data/Apps/Buffalo drivers/tool/ClientMgr3/ConfProc.conf
/mnt/disk4/Data/Apps/Operating_Systems/Windows 7 USB Method/WAIK Files/UsbBootWatcher.conf
/mnt/disk2/Data/Apps/Buffalo drivers/tool/ClientMgr3/ConfProc.conf
/lib/modprobe.d/evbug.conf
/lib/modprobe.d/8139cp.conf
/lib/modprobe.d/watchdog.conf
/lib/modprobe.d/framebuffers.conf
/lib/modprobe.d/isdn.conf
/lib/modprobe.d/psmouse.conf
/lib/modprobe.d/hostap.conf
/lib/modprobe.d/isapnp.conf
/lib/modprobe.d/hw_random.conf
/lib/modprobe.d/scsi-sata-controllers.conf
/lib/modprobe.d/tulip.conf
/lib/modprobe.d/pcspkr.conf
/lib/modprobe.d/sound-modems.conf
/lib/modprobe.d/bcm43xx.conf
/lib/modprobe.d/usb-controller.conf
/lib/modprobe.d/oss.conf
/lib/modprobe.d/via-ircc.conf
/lib/modprobe.d/eepro100.conf
/lib/modprobe.d/eth1394.conf
/lib/dhcpcd/dhcpcd-hooks/50-ntp.conf
/lib/dhcpcd/dhcpcd-hooks/50-yp.conf
/lib/dhcpcd/dhcpcd-hooks/20-resolv.conf
/etc/cache_dirs.conf
/etc/cgconfig.conf
/etc/resolv.conf
/etc/host.conf
/etc/inetd.conf
/etc/logrotate.conf
/etc/php-fpm/php-fpm.conf
/etc/libvirt-/virtlogd.conf
/etc/libvirt-/virtlockd.conf
/etc/libvirt-/libvirt.conf
/etc/libvirt-/libvirt-admin.conf
/etc/libvirt-/qemu-lockd.conf
/etc/libvirt-/libvirtd.conf
/etc/libvirt-/qemu.conf
/etc/libvirt-/virt-login-shell.conf
/etc/dnsmasq.conf
/etc/serial.conf
/etc/mtools.conf
/etc/genpowerd.conf
/etc/apcupsd/apcupsd.conf
/etc/ntp.conf
/etc/ld.so.conf
/etc/openldap/ldap.conf
/etc/netatalk/afp.conf
/etc/netatalk/dbus-session.conf
/etc/netatalk/extmap.conf
/etc/sysctl.d/60-libvirtd.conf
/etc/sysctl.conf
/etc/sensors.d/sensors.conf
/etc/modprobe.d/kvm-intel.conf
/etc/modprobe.d/scsi-sata-controllers.conf
/etc/modprobe.d/kvm.conf
/etc/modprobe.d/kvm-amd.conf
/etc/ca-certificates.conf
/etc/cgrules.conf
/etc/dhcpcd.conf
/etc/dbus-1/session.conf
/etc/dbus-1/system.conf
/etc/dbus-1/system.d/avahi-dbus.conf
/etc/vsftpd.conf
/etc/sensors3.conf
/etc/cgsnapshot_blacklist.conf
/etc/nfsmount.conf
/etc/nscd.conf
/etc/rsyslog.conf
/etc/mke2fs.conf
/etc/smartd.conf
/etc/ssmtp/ssmtp.conf
/etc/sasl2/libvirt.conf
/etc/samba/smb-names.conf
/etc/samba/smb.conf
/etc/samba/smb-shares.conf
/etc/avahi/avahi-daemon.conf
/etc/request-key.conf
/etc/cgred.conf
/etc/udev/udev.conf
/etc/rc.d/rc.inet1.conf
/etc/nsswitch.conf
/etc/lvm/lvm.conf
/etc/lvm/lvmlocal.conf

Mine was on my flash drive, so I'm guessing that's the right one. I pointed the libvirt file location at files until it worked.  :)

Apr 10 15:35:18 Icarus root: naming =version 2 bsize=4096 ascii-ci=0 ftype=1
Apr 10 15:35:18 Icarus root: log =internal bsize=4096 blocks=357698, version=2
Apr 10 15:35:18 Icarus root: = sectsz=512 sunit=0 blks, lazy-count=1
Apr 10 15:35:18 Icarus root: realtime =none extsz=4096 blocks=0, rtextents=0
Apr 10 15:35:18 Icarus emhttp: shcmd (75): sync
Apr 10 15:35:19 Icarus emhttp: shcmd (76): mkdir /mnt/user
Apr 10 15:35:19 Icarus emhttp: shcmd (77): /usr/local/sbin/shfs /mnt/user -disks 4094 -o noatime,big_writes,allow_other -o remember=0 |& logger
Apr 10 15:35:19 Icarus emhttp: shcmd (78): rm -f /boot/config/plugins/dynamix/mover.cron
Apr 10 15:35:19 Icarus emhttp: shcmd (79): /usr/local/sbin/update_cron &> /dev/null
Apr 10 15:35:19 Icarus root: /mnt/user/libvirt/ is not a file
Apr 10 15:35:19 Icarus emhttp: shcmd (90): /etc/rc.d/rc.libvirt start |& logger
Apr 10 15:35:19 Icarus root: no image mounted at /etc/libvirt
Apr 10 15:35:19 Icarus emhttp: nothing to sync
Apr 10 15:35:19 Icarus root: /mnt/user/libvirt/ is not a file
Apr 10 15:35:19 Icarus emhttp: shcmd (101): /etc/rc.d/rc.libvirt start |& logger
Apr 10 15:35:19 Icarus root: no image mounted at /etc/libvirt
Apr 10 15:35:27 Icarus root: /mnt/user/libvirt/ is not a file
Apr 10 15:35:27 Icarus emhttp: shcmd (112): /etc/rc.d/rc.libvirt start |& logger
Apr 10 15:35:27 Icarus root: no image mounted at /etc/libvirt
Apr 10 15:35:29 Icarus root: /mnt/user/libvirt/ is not a file
Apr 10 15:35:29 Icarus emhttp: shcmd (123): /etc/rc.d/rc.libvirt start |& logger
Apr 10 15:35:29 Icarus root: no image mounted at /etc/libvirt
Apr 10 15:35:31 Icarus root: /mnt/user/libvirt/ is not a file
Apr 10 15:35:31 Icarus emhttp: shcmd (134): /etc/rc.d/rc.libvirt start |& logger
Apr 10 15:35:31 Icarus root: no image mounted at /etc/libvirt
Apr 10 15:35:33 Icarus root: /mnt/user/libvirt/ is not a file
Apr 10 15:35:33 Icarus emhttp: shcmd (145): /etc/rc.d/rc.libvirt start |& logger
Apr 10 15:35:33 Icarus root: no image mounted at /etc/libvirt


Finally got it to work. What a PITA. I had to specify:

 

/mnt/disk8/libvirt/libvirt.img (which did not exist until I gave it a valid location and filename)

 

Next problem: it didn't want to boot my old VMs... I knew better than to try a beta... couldn't resist the upgrade though. Time to reinstall all the VMs; sometimes that's just easier / better.

This topic is now closed to further replies.