unRAID Server Release 6.0-beta14a-x86_64 Available


limetech


Download

 

Disclaimer: This is beta software.  While every effort has been made to ensure no data loss, use at your own risk!

 

Clicking 'Check for Updates' on the Plugins page should permit the upgrade as well.  Note: due to how we are managing dynamix in parallel with the unRAID Server base OS, you might see that the dynamix plugin has an update as well.  You do not need to install that update; just install the new version of unRAID OS and you'll get the latest dynamix.  We are working on a fix for this behavior.

 

This is a patch release of -beta14, meaning just a couple of bug fixes, primarily to let Pro support 25 devices again.  However, please do read the following notes, since the original post has been modified.

 

Some notes on this release:

  • Important: dlandon's excellent apcupsd plugin is now 'built-in' to the base webGui.  Please remove the apcupsd plugin if you are using it.
  • System notifications will be disabled after the update and must be enabled on the Notification Settings page.  The setting will be preserved across reboots.
  • The 'mover' will now completely skip moving files off the cache disk/pool for shares marked "cache-only".  However, it will move files off the cache disk/pool for shares not explicitly configured as "cache-only" on the share's Share Settings page, even for shares that only exist on the cache!  This is different behavior from previous releases!
  • We are still investigating a CPU scaling issue with Haswell CPUs, and have removed a 'patch' that seems to have been responsible for some users reporting kernel crashes.
  • You may see a rather dramatic message pop up on the Docker page which reads: "Your existing Docker image file needs to be recreated due to an issue from an earlier beta of unRAID 6. Failure to do so may result in your docker image suffering corruption at a later time. Please do this NOW!"  This means your image file does not have the NOCOW attribute set on it and may get corrupted if the device it's located on runs completely out of free space (a quick way to check the attribute is shown just after this list).  See the discussion a few posts down for further details.
  • As of beta14, pointing to a docker image through a user share is not supported.  Please update your docker image location field to point to the actual disk device used for your docker image file (e.g. /mnt/cache/docker.img or /mnt/disk#/docker.img; substitute # for the actual disk number that the image is on).
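
For anyone who wants to check this themselves, here is a minimal sketch (not official guidance; the image path is only an example, substitute your own location):

# Check whether the docker image file has the NOCOW attribute set
# (example path; substitute your actual image location):
lsattr /mnt/cache/docker.img
# A 'C' among the attribute flags means NOCOW is set, e.g.
#   ---------------C  /mnt/cache/docker.img
# Note: on btrfs, 'chattr +C' only takes effect on new/empty files,
# which is why the webGui asks you to recreate the image rather than
# just flagging the existing one.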

 

If installing -beta13 resulted in clobbering your cache disk, then restoring the partition table you set up previously will restore proper operation.  For example, if you followed this guide to set up a btrfs cache disk back in -beta6:

http://lime-technology.com/forum/index.php?topic=33806.0

 

Then you should be able to restore the proper partition table using this command sequence:

 

# substitute your cache disk device for 'sdX' in these commands:
sgdisk -Z /dev/sdX
sgdisk -g -N 1 /dev/sdX
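
Per sgdisk's documented options, -Z zaps the existing GPT and MBR structures, -g converts the disk to GPT, and -N 1 creates partition 1 in the largest available block of free space.  If you want to sanity-check the result before starting the array, you can print the new table back (same 'sdX' substitution as above):

# print the resulting partition table for review:
sgdisk -p /dev/sdX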

 

I want to extend a special Thank You! to bonienl for his continued improvements to dynamix and to eschultz for his programming refinements in multiple areas of the code.

 

Additional Notes from bonienl:

 

- Those using the dynamix band-aid plugin should remove it BEFORE upgrading to B14. If it is removed AFTER the upgrade to B14, a system reboot is required to unload the existing cron settings, which conflict with the new cron solution.

 

- All notifications are OFF by default and need to be enabled on the Notification Settings page. Once enabled, they will survive a system reboot.

 

- Settings for the scheduler (parity check) need to be re-applied to become active; this again is due to the new cron solution introduced in this version of unRAID.

 

- And ... clear your browser's cache to ensure the GUI is updated completely

 

Additional notes from JonP:

 

Regarding the Docker virtual disk image and needing to recreate it, there has been some confusion as to who will need to do this.  The only folks that will need to recreate their Docker image in Beta14 are those that have it stored on a BTRFS-formatted device.  If your image is stored on a device formatted with XFS or ReiserFS, you do NOT need to recreate your Docker image.

 

However, all users, regardless of filesystem type, will need to make sure their Docker image path in the webGui points to an actual disk device or cache pool, and not the unRAID user share file system.  This rule only applies to the path to your docker image itself.  Hope this clarifies things.
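
To illustrate with example paths (these are illustrations only, not anyone's actual settings):

# Not supported: pointing at the image through the user share file system
#   /mnt/user/some-share/docker.img
# Supported: pointing at the actual device or pool the image lives on
#   /mnt/cache/docker.img
#   /mnt/disk3/docker.img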

 

For a guide on how to recreate your Docker image without losing any application settings / data, please see this post.

 

Summary of changes from 6.0-beta14 to 6.0-beta14a
-------------------------------------------------
- plugin: fix issue where initial unRAIDServer update fails
- emhttp: fix issue where Pro would not start with 25 devices
- emhttp: change tunable poll_spindown to poll_attributes
- docker: handle case of fresh install (no docker.cfg file)

Link to comment

So now none of my disks will spin down. Got all the messages in the log (times seem spread out to me compared to spin down button, but that could be due to reboot and last access on each disk):

 

Feb 24 15:11:23 Tower kernel: mdcmd (41): spindown 0

Feb 24 15:11:24 Tower kernel: mdcmd (42): spindown 5

Feb 24 15:12:37 Tower kernel: mdcmd (43): spindown 1

Feb 24 15:12:55 Tower kernel: mdcmd (44): spindown 2

Feb 24 15:13:04 Tower kernel: mdcmd (45): spindown 3

Feb 24 15:13:15 Tower kernel: mdcmd (46): spindown 4

 

But all remain active in the UI. I am not at home to confirm that they were indeed spinning.

 

Spin down all button seems to reflect in UI:

Feb 24 15:15:39 Tower kernel: mdcmd (47): spindown 0

Feb 24 15:15:39 Tower kernel: mdcmd (48): spindown 1

Feb 24 15:15:39 Tower kernel: mdcmd (49): spindown 2

Feb 24 15:15:40 Tower kernel: mdcmd (50): spindown 3

Feb 24 15:15:40 Tower kernel: mdcmd (51): spindown 4

Feb 24 15:15:41 Tower emhttp: shcmd (65): /usr/sbin/hdparm -y /dev/sdb &> /dev/null

Feb 24 15:15:41 Tower kernel: mdcmd (52): spindown 5

 

I've spun them all up as a test. I'll try another reboot later to test as well.

Link to comment


So now none of my disks will spin down. Got all the messages in the log (times seem spread out to me compared to spin down button, but that could be due to reboot and last access on each disk):

 

Feb 24 15:11:23 Tower kernel: mdcmd (41): spindown 0

Feb 24 15:11:24 Tower kernel: mdcmd (42): spindown 5

Feb 24 15:12:37 Tower kernel: mdcmd (43): spindown 1

Feb 24 15:12:55 Tower kernel: mdcmd (44): spindown 2

Feb 24 15:13:04 Tower kernel: mdcmd (45): spindown 3

Feb 24 15:13:15 Tower kernel: mdcmd (46): spindown 4

 

But all remain active in the UI. I am not at home to confirm that they were indeed spinning.

 

Spin down all button seems to reflect in UI:

Feb 24 15:15:39 Tower kernel: mdcmd (47): spindown 0

Feb 24 15:15:39 Tower kernel: mdcmd (48): spindown 1

Feb 24 15:15:39 Tower kernel: mdcmd (49): spindown 2

Feb 24 15:15:40 Tower kernel: mdcmd (50): spindown 3

Feb 24 15:15:40 Tower kernel: mdcmd (51): spindown 4

Feb 24 15:15:41 Tower emhttp: shcmd (65): /usr/sbin/hdparm -y /dev/sdb &> /dev/null

Feb 24 15:15:41 Tower kernel: mdcmd (52): spindown 5

 

I've spun them all up as a test. I'll try another reboot later to test as well.

 

My log is showing spindown for the drives and the webUI is showing them as active.

 

I tried videoing the screen, but it's shite quality for some reason.

Link to comment

So now none of my disks will spin down. Got all the messages in the log (times seem spread out to me compared to spin down button, but that could be due to reboot and last access on each disk):

 

Feb 24 15:11:23 Tower kernel: mdcmd (41): spindown 0

Feb 24 15:11:24 Tower kernel: mdcmd (42): spindown 5

Feb 24 15:12:37 Tower kernel: mdcmd (43): spindown 1

Feb 24 15:12:55 Tower kernel: mdcmd (44): spindown 2

Feb 24 15:13:04 Tower kernel: mdcmd (45): spindown 3

Feb 24 15:13:15 Tower kernel: mdcmd (46): spindown 4

 

But all remain active in the UI. I am not at home to confirm that they were indeed spinning.

 

Spin down all button seems to reflect in UI:

Feb 24 15:15:39 Tower kernel: mdcmd (47): spindown 0

Feb 24 15:15:39 Tower kernel: mdcmd (48): spindown 1

Feb 24 15:15:39 Tower kernel: mdcmd (49): spindown 2

Feb 24 15:15:40 Tower kernel: mdcmd (50): spindown 3

Feb 24 15:15:40 Tower kernel: mdcmd (51): spindown 4

Feb 24 15:15:41 Tower emhttp: shcmd (65): /usr/sbin/hdparm -y /dev/sdb &> /dev/null

Feb 24 15:15:41 Tower kernel: mdcmd (52): spindown 5

 

I've spun them all up as a test. I'll try another reboot later to test as well.

 

My log is showing spindown for the drives and the webUI is showing them as active.

 

I tried videoing the screen, but it's shite quality for some reason.

 

Yea that was exactly what was happening to me. Not being home I couldn't confirm if they were actually up. Suppose I could have done a cd test on the command line.
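
(For what it's worth, the actual drive state can be checked from the command line instead of trusting the UI; a quick sketch, with example device names:)

# query the power state directly; reports "active/idle" or "standby"
hdparm -C /dev/sdb
# or check several at once (adjust the device range to your array):
for d in /dev/sd[b-g]; do echo -n "$d: "; hdparm -C $d | grep state; done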

 

My spin up test worked though, all of them spun down. The next test is normal day use and watch for disks to stay up well after the 15 minute spin down timer.

 

Feb 24 15:17:34 Tower kernel: mdcmd (53): spinup 0

Feb 24 15:17:34 Tower kernel: mdcmd (54): spinup 1

Feb 24 15:17:34 Tower kernel: mdcmd (55): spinup 2

Feb 24 15:17:34 Tower kernel: mdcmd (56): spinup 3

Feb 24 15:17:34 Tower kernel: mdcmd (57): spinup 4

Feb 24 15:17:34 Tower kernel: mdcmd (58): spinup 5

Feb 24 15:32:35 Tower kernel: mdcmd (59): spindown 0

Feb 24 15:32:36 Tower kernel: mdcmd (60): spindown 1

Feb 24 15:32:36 Tower kernel: mdcmd (61): spindown 2

Feb 24 15:32:37 Tower kernel: mdcmd (62): spindown 3

Feb 24 15:32:37 Tower kernel: mdcmd (63): spindown 4

Feb 24 15:32:38 Tower kernel: mdcmd (64): spindown 5

Link to comment

One thing I noticed is that I lost the cache drive from Tower/Main; the drive does show up on the Dashboard.

Yes, I did clear the browser cache, and I also tried both Firefox and Chrome.

Moreover, the disks did not spin down after 15 minutes; see below for the syslog messages from after the 15 minutes had passed.

 

sdf is the cache drive

 

Feb 24 20:03:08 TowerBackup kernel: mdcmd (21): spindown 0
Feb 24 20:03:09 TowerBackup emhttp: shcmd (47): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:03:11 TowerBackup emhttp: shcmd (48): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:03:12 TowerBackup emhttp: shcmd (49): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:03:13 TowerBackup emhttp: shcmd (50): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:03:14 TowerBackup emhttp: shcmd (51): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:03:15 TowerBackup emhttp: shcmd (52): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:03:16 TowerBackup emhttp: shcmd (53): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:03:17 TowerBackup emhttp: shcmd (54): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:03:18 TowerBackup emhttp: shcmd (55): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:03:19 TowerBackup emhttp: shcmd (56): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:03:20 TowerBackup emhttp: shcmd (57): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:03:21 TowerBackup emhttp: shcmd (58): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:03:22 TowerBackup emhttp: shcmd (59): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:03:23 TowerBackup emhttp: shcmd (60): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:03:24 TowerBackup emhttp: shcmd (61): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:03:25 TowerBackup emhttp: shcmd (62): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:03:26 TowerBackup emhttp: shcmd (63): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:03:27 TowerBackup emhttp: shcmd (64): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:03:28 TowerBackup emhttp: shcmd (65): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:03:29 TowerBackup emhttp: shcmd (66): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:03:30 TowerBackup emhttp: shcmd (67): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:03:31 TowerBackup emhttp: shcmd (68): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:03:32 TowerBackup emhttp: shcmd (69): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:03:33 TowerBackup emhttp: shcmd (70): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:03:34 TowerBackup emhttp: shcmd (71): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:03:35 TowerBackup emhttp: shcmd (72): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:03:36 TowerBackup emhttp: shcmd (73): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:03:37 TowerBackup emhttp: shcmd (74): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:03:38 TowerBackup emhttp: shcmd (75): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:03:39 TowerBackup emhttp: shcmd (76): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:03:40 TowerBackup emhttp: shcmd (77): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:03:41 TowerBackup emhttp: shcmd (78): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:03:42 TowerBackup emhttp: shcmd (79): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:03:43 TowerBackup emhttp: shcmd (80): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:03:44 TowerBackup emhttp: shcmd (81): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:03:45 TowerBackup emhttp: shcmd (82): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:03:46 TowerBackup emhttp: shcmd (83): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:03:47 TowerBackup emhttp: shcmd (84): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:03:48 TowerBackup emhttp: shcmd (85): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:03:49 TowerBackup emhttp: shcmd (86): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:03:50 TowerBackup emhttp: shcmd (87): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:03:51 TowerBackup emhttp: shcmd (88): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:03:52 TowerBackup emhttp: shcmd (89): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:03:53 TowerBackup emhttp: shcmd (90): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:03:54 TowerBackup emhttp: shcmd (91): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:03:55 TowerBackup emhttp: shcmd (92): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:03:56 TowerBackup emhttp: shcmd (93): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:03:57 TowerBackup emhttp: shcmd (94): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:03:58 TowerBackup emhttp: shcmd (95): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:03:59 TowerBackup emhttp: shcmd (96): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:04:00 TowerBackup emhttp: shcmd (97): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:04:01 TowerBackup emhttp: shcmd (98): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:04:02 TowerBackup emhttp: shcmd (99): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:04:03 TowerBackup emhttp: shcmd (100): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:04:04 TowerBackup emhttp: shcmd (101): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:04:05 TowerBackup emhttp: shcmd (102): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:04:06 TowerBackup emhttp: shcmd (103): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:04:07 TowerBackup emhttp: shcmd (104): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:04:08 TowerBackup emhttp: shcmd (105): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:04:09 TowerBackup emhttp: shcmd (106): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:04:10 TowerBackup emhttp: shcmd (107): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:04:11 TowerBackup emhttp: shcmd (108): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:04:12 TowerBackup emhttp: shcmd (109): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:04:13 TowerBackup emhttp: shcmd (110): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:04:14 TowerBackup emhttp: shcmd (111): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:04:15 TowerBackup emhttp: shcmd (112): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Feb 24 20:04:16 TowerBackup emhttp: shcmd (113): /usr/sbin/hdparm -y /dev/sdf &> /dev/null

[Screenshot attachment: missing-cache_drive.JPG]

[Screenshot attachment: missing-cache-drive-dashboard.JPG]

Link to comment

Observation on disk status indicators on Dashboard and Main:  After upgrading to b14a, I rebooted and the status indicators showed all of the disks in the array spun up (as I would expect).  I came back after about three hours and brought up the GUI again.  I checked the status indicators and they showed all of the disks in the array spun down (again, as I would expect).

 

I then opened a file that I knew was on Disk 2 of my array (array = parity + 2 data), but the status indicators showed that none of the disks were spun up (not what I would expect)!

 

I then opened a file that I knew was on  Disk 1.  The status indicators did not show a disk spun up on either the Main or Dashboard pages.  (Once again not what I would expect!)

 

I then refreshed the Main page in the browser (I am using Firefox and did it via the reload symbol on the URL line) and the indicators changed from spun down to spun up for both Disk 1 and Disk 2!

 

As for my settings on the 'Display Settings' page, I have the 'Page Update Frequency' set to 'Real time' and have checked the box for 'disable page updates while parity operation is running.'

 

In all of the earlier beta versions, I had no issue with the status indicators...

 

 

Link to comment

Observation on disk status indicators on Dashboard and Main:  After upgrading to b14a, I rebooted and the status indicators showed all of the disks in the array spun up (as I would expect).  I came back after about three hours and brought up the GUI again.  I checked the status indicators and they showed all of the disks in the array spun down (again, as I would expect).

 

I then opened a file that I knew was on Disk 2 of my array (array = parity + 2 data), but the status indicators showed that none of the disks were spun up (not what I would expect)!

 

I then opened a file that I knew was on  Disk 1.  The status indicators did not show a disk spun up on either the Main or Dashboard pages.  (Once again not what I would expect!)

 

I then refreshed the Main page in the browser (I am using Firefox and did it via the reload symbol on the URL line) and the indicators changed from spun down to spun up for both Disk 1 and Disk 2!

 

As for my settings on the 'Display Settings' page, I have the 'Page Update Frequency' set to 'Real time' and have checked the box for 'disable page updates while parity operation is running.'

 

In all of the earlier beta versions, I had no issue with the status indicators...

Same here.  I think that it's a rather innovative solution to the spin down problem  :P

 

But in my case, no amount of refreshing (or clearing browser cache and refreshing) would show the disk as being spun up.  I tried adding poll_spindown="10" to disk.cfg, but the file is overwritten every reboot.
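
Side note (untested; just going off the 14a changelog above): the tunable was renamed from poll_spindown to poll_attributes, and I'd expect the persistent copy of disk.cfg to live on the flash, so something like this should show whether an edit actually sticks across reboots:

# assumption: persistent disk settings are kept on the flash at /boot/config/disk.cfg
grep poll /boot/config/disk.cfg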

syslog.zip

Link to comment

 

Same here.  I think that it's a rather innovative solution to the spin down problem  :P

 

But in my case, no amount of refreshing (or clearing browser cache and refreshing) would show the disk as being spun up.  I tried adding poll_spindown="10" to disk.cfg, but the file is overwritten every reboot.

 

Now it is showing Disk 1 as spun up, but the syslog is showing it as spun down.  After examining the server, I am reasonably convinced that the drive is spun down.  This appears to be confirmed in the portion of the syslog below:

Feb 24 18:38:30 Rose emhttp: shcmd (31): :>/etc/samba/smb-shares.conf
Feb 24 18:38:30 Rose avahi-daemon[2427]: Files changed, reloading.
Feb 24 18:38:30 Rose emhttp: Restart SMB...
Feb 24 18:38:30 Rose emhttp: shcmd (32): killall -HUP smbd
Feb 24 18:38:30 Rose emhttp: shcmd (33): cp /etc/avahi/services/smb.service- /etc/avahi/services/smb.service
Feb 24 18:38:30 Rose avahi-daemon[2427]: Files changed, reloading.
Feb 24 18:38:30 Rose avahi-daemon[2427]: Service group file /services/smb.service changed, reloading.
Feb 24 18:38:30 Rose emhttp: shcmd (34): pidof rpc.mountd &> /dev/null
Feb 24 18:38:30 Rose emhttp: shcmd (35): /etc/rc.d/rc.atalk status
Feb 24 18:38:30 Rose rc.unRAID[2579][2580]: Processing /etc/rc.d/rc.unRAID.d/ start scripts.
Feb 24 18:38:30 Rose avahi-daemon[2427]: Service "Rose" (/services/ssh.service) successfully established.
Feb 24 18:38:30 Rose avahi-daemon[2427]: Service "Rose" (/services/sftp-ssh.service) successfully established.
Feb 24 18:38:31 Rose avahi-daemon[2427]: Service "Rose" (/services/smb.service) successfully established.
Feb 24 18:39:38 Rose php: /usr/local/sbin/notify cron-init
Feb 24 19:08:30 Rose kernel: mdcmd (35): spindown 0
Feb 24 19:08:38 Rose kernel: mdcmd (36): spindown 1
Feb 24 19:08:39 Rose kernel: mdcmd (37): spindown 2
Feb 24 22:37:36 Rose kernel: mdcmd (38): spindown 2
Feb 24 22:39:28 Rose kernel: mdcmd (39): spindown 1

 

I have attached the complete syslog for analysis if needed.

 

 

 

syslog.txt

Link to comment

But in my case, no amount of refreshing (or clearing browser cache and refreshing) would show the disk as being spun up.  I tried adding poll_spindown="10" to disk.cfg, but the file is overwritten every reboot.

It seems to be a little random as to when it decides to start working.  When I first booted my server, I immediately spun down all the drives and loaded a movie I knew was on a particular drive.  Main & Dashboard said the drive wasn't spinning.  Hit spin up, spin down; the GUI always said spun down.

 

Now, about an hour later, I ran my md5 script (which pretty much guarantees all drives spin up), and out of the blue the Dashboard and Main show all drives up.  Spin up / spin down seem to work in the GUI, but play the movie again and it still doesn't spin up according to the GUI.

 

Link to comment
