unRAID Server Release 6.2.0-beta20 Available



On the subject of parity checks, I notice that the Dashboard display gives slightly misleading information with 6.2.0-beta20. The last parity check is wrongly reported as the start time of the current parity check - while it's in progress, at any rate. The last parity check was actually a week or so ago.

 

Here are screenshots for 6.2.0-beta20 and 6.1.9 while the parity check is in progress.

Parity_Check_6.2.0-b20.png

Parity_Check_6.1.9.png

Link to comment

...If there is a P or Q error only then it's on the parity disk.

I wonder if this is really a valid assumption. The only time I have ever had parity errors - and the thing that always triggers a correcting parity check - is the so-called unsafe shutdown. If I understand correctly, in the case of single parity and an unsafe shutdown, it is assumed that the data was written but parity didn't get written before power was cut, so parity is in error. In the case of dual parity and an unsafe shutdown, couldn't it equally be the case that the data was written but neither parity got written before power was cut?

If one of the parity disks agrees with the parity calculated from all the data disks and the other does not, I would have thought it was odds-on that the disagreeing one is wrong. It would take multiple perfectly compensating errors for that not to be true.

 

I would say the complicated case is when neither parity disk agrees with the data disks. It seems strange that both parity disks would be wrong, but if parity is only written after the data then I guess it is possible. If a data disk has gone wrong, then in theory one could calculate whether there is a value for one of the data disks that would make both parity disks match; in that case the data disk is probably the one in error. However, since we do not know which data disk might have triggered such an error, I can see it being computationally expensive to test every permutation of a data-disk failure to identify which disk might have failed so that it can be corrected. Assuming that is not already there, maybe it is something that could be added as an enhancement in the future?

Link to comment

As it's the first of the month and I have a scheduled parity check underway with "Write corrections to parity disk" set to the default "Yes" in the scheduler, I was wondering if it will do literally that - now that I have dual parity - or will it treat any error in a more intelligent way? With only one parity disk the assumption was that any inconsistency was due to a parity error, not a data error. But with P and Q parity there's scope to use the parity check information more wisely. Has that actually been implemented?

 

If P+Q both mismatch, and you know there is only a single disk with bad data, then indeed it's possible to correct that single disk.  See section 4 of this paper:

https://www.kernel.org/pub/linux/kernel/people/hpa/raid6.pdf
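(Paraphrasing section 4 of that paper: if data disk z has been corrupted from D_z to X_z, then recomputing the parity syndromes P' and Q' from the disks and comparing them with the stored P and Q gives

P xor P' = D_z xor X_z
Q xor Q' = g^z * (D_z xor X_z)

so the ratio (Q xor Q') / (P xor P') equals g^z in GF(2^8), its discrete log identifies the bad data disk z, and the original data is recovered as D_z = X_z xor (P xor P'). No search over the disks is needed - but it only holds if exactly one disk is bad.)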

 

In the same section, however, we read:

 

Finally, as a word of caution it should be noted that RAID-6 by itself cannot (in the general case) even detect, never mind recover from, dual-disk corruption. If two disks are corrupt in the same byte positions, the above algorithm will (again, in the general case) introduce additional data corruption by corrupting a third drive.

 

Since we have no way of knowing how many data disks have bad data in a given parity stripe, we err on the side of not corrupting other data - that would be the absolute worst thing to do.

 

In normal parity-check cases you almost always want to use correction.  Why is that?  The main thing you are guarding against with a parity protected array is catastrophic device failure: cases where a device has massive errors or maybe has dropped offline entirely.  In this case you always want parity consistent so that you can rebuild data on that drive.

 

For example, let's take a common case: you are writing a file and all of a sudden you lose power. The server reboots and a parity check reveals, say, a burst of 32 4K blocks of parity mismatch. Maybe you know which disk you were writing to, maybe you don't, but let's say it was disk1. If you don't correct parity and then disk2 fails, not only do you have the same corruption on disk1, but you also cannot rebuild disk2 properly. At least if you had regenerated parity you would be able to rebuild disk2 properly in this case. It's also worth pointing out that even if a write did not complete, with today's journaling file systems this will typically not result in file system corruption.

Link to comment

Can you test the attached diagnostics script? It now includes the SMART configuration parameters. Thanks.

 

Thank you boniel - tried your script!

 

But still no valid SMART reports. Only the flash USB drive is queried (so a txt file exists only for the flash drive), and it gives the following output:

 

smartctl 6.4 2015-06-04 r4109 [x86_64-linux-4.4.6-unRAID] (local build)
Copyright (C) 2002-15, Bruce Allen, Christian Franke, www.smartmontools.org

/dev/sda: Unknown device type '3ware'
=======> VALID ARGUMENTS ARE: ata, scsi, sat[,auto][,N][+TYPE], usbcypress[,X], usbjmicron[,p][,x][,N], usbsunplus, marvell, areca,N/E, 3ware,N, hpt,L/M/N, megaraid,N, aacraid,H,L,ID, cciss,N, auto, test <=======

Use smartctl -h to get a usage summary
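(For reference, smartctl's -d option just tells it what kind of bridge or controller sits in front of the device; the device paths below are only examples:

smartctl -a -d sat /dev/sdb     # SATA disk behind a SAT-capable USB bridge or HBA
smartctl -a -d scsi /dev/sdc    # plain SCSI/SAS device
smartctl -a -d auto /dev/sdd    # let smartctl pick the type itself

A USB flash stick usually has no SMART data to report whatever type is passed, so the error above for /dev/sda is expected - the real question is why the script never got to the other disks.)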

 

That is understandable, because the flash USB drive surely has no SMART - it doesn't show any in the WebUI either.

But the other disks are not included - no txt files for any other disk...

 

Nevertheless, I am not sure the SMART values have anything to do with the problem that the entire(!) filesystem stops responding. No disk is accessible any more. Every terminal, WebUI and read/write command freezes when accessing a disk. Even when I copy files between two fairly new disks without any SMART errors, everything stops responding. All disks have passed SMART. There have been one or two problems in the past, but the disks have not changed - only the unRAID version has.

 

The problem is that as soon as the system freezes while copying, or while reading and writing at the same time (one does not even need to copy - it is enough to read a file from one disk while writing to another!), the file being written to is immediately corrupt, as nothing gets written any more.

 

As seen in the screenshot: file 1 is copied from disk 1, file 2 is copied to disk 2 (in this case old disks that have never had any problem). They were started only a few seconds apart, and the system froze until I pushed reset.

The SMART values of the three disks involved in this test are attached. Disk 1 is the Hitachi, disk 2 the Samsung, and parity the WD Red. But as I already said, it does not matter which disks are involved (well, parity obviously always is). I do not have this problem when I only write and do not read from another disk at the same time, so it cannot be a disk problem - it only happens when at least two disks are involved in simultaneous writing AND reading. A parity check with 15 disks reading at the same time is no problem and runs flawlessly to the end, so it cannot be a power, disk or cabling problem either, as the parity check would not run flawlessly for a whole day otherwise. There must be something wrong with the software.

 

Original report here: https://lime-technology.com/forum/index.php?topic=47875.msg460521#new

 

UPDATE: For a test I went back to a clean 6.1.9 install - everything works again. I can read and write at the same time, which always led to the freeze on the 6.2 beta, and everything works. No hanging at all. I will try updating to the 6.2 beta again with a clean install. So 6.1.9 works(!) flawlessly with my setup. It MUST be a software problem.

 

UPDATE 2: Fresh clean 6.2-beta seems to work. So it must have been something with the configuration. I will reconfigure now and try to find out if I can reproduce. In any case I will search for the differences between the new and the old config to find out what caused this issue.

 

UPDATE 3: The 6.2 beta froze again. Investigating now. So far I had only changed the time zone, but it still does not work after putting that back - perhaps my previous test was too quick...

I deactivated everything not absolutely necessary, like Docker, VMs and network bonding/bridging - it still freezes with exactly the same symptoms under the same procedure. And, as usual, there is nothing to see in the syslog.

 

UPDATE 4: Back to 6.1.9, which again works flawlessly. I will skip the 6.2 beta, and with a bit of bad luck I will have to skip every new release whenever the change is something deep (in the kernel or the parity write path, for example). It seems I have to stay on 6.1.9 from now on. Since I can always reproduce the problem on 6.2 and it works on 6.1, it is a software problem in 6.2. I have no idea what the problem could be, as no log entries are created about it. I had only one parity disk and no cache drives - 15 active drives altogether, parity included.

freeze.PNG

smart.zip

Link to comment

I just came home after a parity check to find 436 sync errors and part of my web GUI broken.

 

The main dashboard page will not load

The plugins page is missing a plugin

 

The dockers page is throwing these errors

Warning: file_put_contents(/boot/config/docker.cfg): failed to open stream: No such file or directory in /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php on line 48		
Fatal error: Uncaught exception 'UnexpectedValueException' with message 'RecursiveDirectoryIterator::__construct(/boot/config/plugins/dockerMan/templates-user): failed to open dir: No such file or directory' in /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php:78
Stack trace:
#0 /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php(78): RecursiveDirectoryIterator->__construct('/boot/config/pl...', 4096)
#1 /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php(107): DockerTemplates->listDir('/boot/config/pl...', 'xml')
#2 /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php(207): DockerTemplates->getTemplates('all')
#3 /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php(324): DockerTemplates->getTemplateValue('emby/embyserver...', 'Registry')
#4 /usr/local/emhttp/plugins/dynamix/include/DefaultPageLayout.php(292) : eval()'d code(69): DockerTemplates->getAllInfo()
#5 /usr/local/emhttp/plugins/dynamix/include/DefaultPag in /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php on line 78

 

The VM tab is showing this error

Warning: file_put_contents(/boot/config/domain.cfg): failed to open stream: No such file or directory in /usr/local/emhttp/plugins/dynamix.vm.manager/classes/libvirt_helpers.php on line 375

 

My syslog is full of errors. The flash share is empty if I access it, and every share is publicly visible and accessible on my network even if it is set to hidden.

 

Bit of a worry considering I have private data on these drives. I've got to do a reboot, as this is too much of a risk to leave as is.

archangel-diagnostics-20160401-1813.zip

Link to comment

I just came home after a parity check to find 436 sync errors and part of my web GUI broken ...

My syslog is full of errors. The flash share is empty if I access it, and every share is publicly visible and accessible on my network even if it is set to hidden.

 

This sounds like your flash drive became unreadable. I would take it out and perform a disk check on it from a Windows machine.

 

Link to comment

Sorry to be harsh, but:

 

1. beta software

2. backups

I'm aware of this; all data is backed up. I was just stating that it is a bit of a flaw that, if the USB is no longer readable, all data on the server can be accessed publicly by anyone on the network. This would most likely be an issue on every version of unRAID.

No idea if there is a way around this, but it was worth pointing out. I was not worried about data loss; I was more worried about the security issue this opens up.

 

This sounds like your flash drive became unreadable. I would take it out and perform a disk check on it from a Windows machine.

I've done that and it passed everything. Very weird issue, but it's the first time I've ever seen it, so I'm not too worried about this happening again.

Just posting as much information as I can to see whether there are any early warnings for stuff like this, or a resolution of the security issue it causes.

Link to comment

I've done that and it passed everything. Very weird issue, but it's the first time I've ever seen it, so I'm not too worried about this happening again. ...

 

Have you tried to use another USB port for your flash device?

Link to comment


Have you tried to use another USB port for your flash device?

Yep. The USB booted up straight away after a reboot, so the USB is fine. My system is a bit of an odd one: if I do a BIOS update it won't see my USB (or any other boot USB) for ages, then will randomly pick it up and work fine - so it may be down to that, but I'm not sure.

I just wanted to check that nobody could see any issues in the syslog that I couldn't.

Link to comment

Yep. The USB booted up straight away after a reboot, so the USB is fine. My system is a bit of an odd one: if I do a BIOS update it won't see my USB (or any other boot USB) for ages, then will randomly pick it up and work fine - so it may be down to that, but I'm not sure. ...

 

Could that not manifest itself whilst the system is running, bigjme, and explain (some of) your longstanding issues, mate?!

Link to comment

Yep. The USB booted up straight away after a reboot, so the USB is fine. ...

I just wanted to check that nobody could see any issues in the syslog that I couldn't.

 

Your earlier error messages point to problems in reading the flash drive (/boot).

 

Link to comment

Could that not manifest itself whilst the system is running, bigjme, and explain (some of) your longstanding issues, mate?!

 

Hey CHBMB

 

This was actually the main thing I thought was at fault with my system for a while.

 

I have mentioned a number of times when talking to Limetech that I think my USB was having issues, but I was always told it shouldn't cause any problems as the system runs from memory once booted. Everything was still functioning - VMs etc. - I was just not able to use the web GUI properly, and obviously the share privileges changed.

 

My system-crashing issue seems to have been resolved by killing all irqbalance processes before starting the array. I've been running my system without an actual system crash ever since jonp asked me to run that.
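(For anyone wanting to try the same workaround: I'm guessing at the exact command jonp gave, but "killing all irqbalances" would normally be something like

killall irqbalance    # stop the irqbalance daemon so it no longer moves IRQs between CPUs

run before starting the array.)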

 

I have been hitting it a lot harder than normal, to be honest, and it seems to be coping. I have even added my 750 Ti back into the system to see if I can force a system lockup with the SeaBIOS VM running (this VM always used to cause it without fail), but no luck yet.

Link to comment

Starting VMs independently of the array is a feature that has been requested a lot, and one I'd love in order to facilitate running pfSense. Did this "feature" make it into 6.2?

This is not going to happen anytime soon, if ever. It interferes with features we have planned for the future. "Array Start/Stop" is really a misnomer: in this context "array" refers to the entire set of devices attached to the server, not just the ones that are assigned as parity-protected devices.

So do you have a solution in mind for those who wish to host pfSense or another endpoint firewall appliance as a VM on unRAID?

 

What is the issue exactly?

Well, for me, if I want to upgrade disks in the array from ReiserFS and still run the VM (Windows in this case), I have to continually start and stop the VM as I start and stop the array to change the file system type. Since my Windows VM is recording from TV tuners, I have to wait for free windows in the recording schedule before I can do that, so it took a month and a half to convert to XFS. My Windows VM has its own passed-through controller for the recording drives, but the boot drive has to be an image (as far as I know), so I have to start and stop the VM with the array. I can see other things like this in the future where it would be nicer to have an external drive for the VM that doesn't cause problems - for example putting the array into maintenance mode to troubleshoot drive problems, or doing disk checks, without having to shut down the VM.

 

That's exactly why you would use something like ESXi/Proxmox/KVM/etc. on a different machine, or virtualize unRAID and keep unRAID's container use to Dockers rather than VMs. IMO, unRAID should be limited to media (TV, movies, etc.) and backups. Anything else - PVR, security cameras, routers - that needs solid uptime should be on a solid platform. unRAID is a NAS that has recently grown into a services device. Until it provides the means for better uptime (not just of the OS, but also of the array), unRAID does not fit that bill. This is no different from someone installing DD-WRT on their Linksys WRT54G and asking it to do packet analysis, ntop and QoS, all while on a 100 Mbps+ download stream; that hardware just isn't meant for all that. Use unRAID for what it is meant for and stop jamming everything into an all-in-one machine. It's all-in-one for media and backups, not critical services.

 

edit: Furthermore, unRAID does not support VLAN tagging, or officially support multiple NICs outside of bonding. Until it does, it is NOT a candidate for something like a router that requires at least two NICs.

 

edit2: It's late and I didn't think of PCI passthrough for a dual-port NIC, but even so, unRAID is not the type of hypervisor that other virtualization platforms are. I love unRAID - I have a ton of Docker containers for a bunch of things - but a router VM would never be considered.

Link to comment

Since trying 6.2 beta18, on each beta I have noticed a few strange folders appearing in the / directory upon startup.

 

Folders being created are:

/Media

/Server/Logs

/Shows

/Support/Plex

/mnt/user/TV

 

I have tracked down the creation of these folders to the rc.docker script in /etc/rc.d/. It seems the for loop in the function do_container_paths() does not like the spaces I have set up in my various Docker paths. The function thinks these folders are missing and ends up creating the extra ones.

"/mnt/user/appdata/Plex/config/Library/Application Support/Plex Media Server/Logs/" & "/mnt/user/TV Shows/" are the paths causing issues.  I am able to recreate the issue by deleting these folders and issuing the commands "/etc/rc.d/rc.docker stop" followed by "/etc/rc.d/rc.docker start".  Once command has completed, the folders will reappear.  The dockers that contain these paths are set to autostart.

 

I can adjust my containers' host directory paths to avoid spaces and work around this, but it seems like it would be a fairly trivial fix for anyone with a decent understanding of Linux shell scripting. However, I do not have one. :)

Link to comment

I have tracked down the creation of these folders to the rc.docker script in /etc/rc.d/. It seems the for loop in the function do_container_paths() does not like the spaces I have set up in my various Docker paths. ...

 

Also seeing this exact behaviour. The problem is that the for loop in the do_container_paths function treats everything after a space as the next item to loop over. I don't understand the scripts well enough to try to fix it (I've tried messing with IFS but had no luck).

Link to comment

Also seeing this exact behaviour. The problem is that the for loop in the do_container_paths function treats everything after a space as the next item to loop over. I don't understand the scripts well enough to try to fix it (I've tried messing with IFS but had no luck).

 

Sounds like they need to switch from a for loop to a while read loop. I'm only running 6.1.8 right now and don't have a function by that name in my /etc/rc.d/rc.docker to compare it to.
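For what it's worth, here is a minimal sketch of the difference - this is not the actual rc.docker code, and HOST_PATHS is a made-up variable holding example paths from the report above:

#!/bin/bash
# newline-separated list of host paths, including ones with spaces
HOST_PATHS='/mnt/user/TV Shows
/mnt/user/appdata/Plex/config/Library/Application Support/Plex Media Server/Logs'

# Broken: a for loop over the unquoted variable word-splits on every space,
# so "/mnt/user/TV Shows" is seen as "/mnt/user/TV" and "Shows" - which is
# how stray folders like /Shows would end up being created in /.
for p in $HOST_PATHS; do
  echo "for loop sees: $p"
done

# Fixed: reading newline-delimited paths with while/read keeps each path intact.
printf '%s\n' "$HOST_PATHS" | while IFS= read -r p; do
  echo "while/read sees: $p"
  # [ -d "$p" ] || mkdir -p "$p"   # the quoting of "$p" is the important part
done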

 

Best practice, in any case, would be to use underscores instead of spaces in your folder names. I am guilty of using spaces myself though, especially in file names, simply because I don't like how the underscores look.

Link to comment

I have tracked down the creation of these folders to the rc.docker script in /etc/rc.d/. It seems the for loop in the function do_container_paths() does not like the spaces I have set up in my various Docker paths. ...

 

Very nice report man, thank you.  We have fixed this and are running some final tests.

Link to comment

@limetech @jonp - please finally enable the MegaRAID SAS driver in the kernel config :(

I have to compile the kernel myself for every release... I'm using a mezzanine card from Intel that uses a MegaRAID SAS chipset.
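(For reference, the mainline kernel option for that driver is CONFIG_MEGARAID_SAS. Assuming that is indeed the controller the card exposes, the .config fragment to build it as a module would just be:)

CONFIG_MEGARAID_SAS=m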

 

Issues in all the 6.2 betas:

- When I edit a VM from the web UI, the disk settings are never saved; every time I open the edit form I have to set them up again.

- It would be nice to be able to edit the network device type (virtio, e1000, etc.) from the web UI. We still have to do this manually, and after any change made through the web UI the settings are overwritten, so we have to edit the XML by hand again.

- It would also be nice to be able to map a ROM file to a GPU. We still have to do this manually, and after any edit through the web UI the settings are overwritten, so we have to map it again in the XML by hand.

 

Link to comment

Thank you boniel - tried your script!

But still no valid SMART reports. Only the flash USB drive is queried (so a txt file exists only for the flash drive). ...

 

I understand you went back to version 6.1.9. Can you run the revised diagnostics script under this version?

 

Link to comment

Hello. On many *BSD VMs the keyboard works but there is no mouse (PC-BSD or FreeBSD). Is this a problem with the beta version, or with older versions as well?

Does anyone know a solution? For now the only thing that works is passing a mouse through to the VM.

Link to comment

Is anyone else noticing their drives getting hot and/or very busy with 6.2? On 6.1.9 my drives sat idle most of the time; on 6.2, no matter how much I try to encourage them to spin down, they keep coming back up...

 

Log excerpt:

Apr 3 23:41:01 VAULT kernel: usb 5-2.2: reset low-speed USB device number 4 using xhci_hcd
Apr 3 23:41:01 VAULT kernel: usb 5-2.2: ep 0x81 - rounding interval to 64 microframes, ep desc says 80 microframes
Apr 3 23:41:10 VAULT kernel: br0: topology change detected, propagating
Apr 3 23:41:10 VAULT kernel: br0: port 3(vnet0) entered forwarding state
Apr 3 23:41:12 VAULT sshd[13719]: Did not receive identification string from 222.255.174.138
Apr 3 23:41:15 VAULT sshd[13732]: fatal: mm_answer_moduli: bad parameters: 2048 2048 1024
Apr 3 23:41:50 VAULT emhttp: Spinning down all drives...
Apr 3 23:41:50 VAULT kernel: mdcmd (45): spindown 0
Apr 3 23:41:51 VAULT kernel: mdcmd (46): spindown 1
Apr 3 23:41:52 VAULT kernel: mdcmd (47): spindown 2
Apr 3 23:41:53 VAULT kernel: mdcmd (48): spindown 3
Apr 3 23:41:53 VAULT kernel: mdcmd (49): spindown 4
Apr 3 23:41:54 VAULT kernel: mdcmd (50): spindown 5
Apr 3 23:41:55 VAULT kernel: mdcmd (51): spindown 6
Apr 3 23:41:56 VAULT kernel: mdcmd (52): spindown 7
Apr 3 23:41:57 VAULT kernel: mdcmd (53): spindown 29
Apr 3 23:41:58 VAULT emhttp: shcmd (139): /usr/sbin/hdparm -y /dev/sdc &> /dev/null
Apr 3 23:41:58 VAULT emhttp: shcmd (140): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Apr 3 23:42:13 VAULT kernel: mdcmd (54): spindown 7
Apr 3 23:42:20 VAULT sshd[14418]: Did not receive identification string from 222.255.174.138
Apr 3 23:42:24 VAULT sshd[14442]: fatal: mm_answer_moduli: bad parameters: 2048 2048 1024

 

The spindown above was initiated by me, and drives 3, 6 and 7 spun back up... my drive 7 seems to be on almost all the time and gets hot accordingly!!
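(If you want to catch what keeps waking a disk, and assuming the usual tools are available on your system, something like this can help - the disk number is just an example:

lsof +D /mnt/disk7              # list processes with files open under that mount (can be slow)
inotifywait -m -r /mnt/disk7    # watch file accesses as they happen (needs inotify-tools)

That should at least show whether it's a share, Docker container or VM touching the disk, or something lower level.)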

Link to comment
This topic is now closed to further replies.