unRAID Server Release 5.0-beta6a Available



Is it normal for the "." folders on the cache drive to also appear under /mnt/user?

 

My .yamj folder is under there as well...

 

 

Yes.

 

Lol, thanks.

 

Is there a way to prevent certain folders from becoming Shares?

 

I keep removing these folders that my Mac creates: Network Trash Folder and Temporary Items.

 

I remove the folders (first all the .Apple* folders that get created >:|, then the Shares themselves).

 

Then they get re-created by my Mac and show up as Shares again.

 

Any ideas? Thanks.
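For what it's worth, one possible approach is to block those names at the Samba level so the Mac can't create them over SMB in the first place. This is only a sketch: it assumes your unRAID build reads /boot/config/smb-extra.conf into the Samba configuration, and it uses Samba's standard veto files option:

# /boot/config/smb-extra.conf (assumed include file)
[global]
   # hide and block the Mac-created folders on every SMB share
   veto files = /.AppleDB/.AppleDouble/.AppleDesktop/Network Trash Folder/Temporary Items/
   delete veto files = yes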


I have been doing some testing with 5.0-beta6 for the last 2 months, following all of the messages going back and forth, before migrating my normal box running 4.7 over. The test box, initially slated to run (19) 3TB data drives and (1) 3TB parity drive, was scaled back to (17) 2TB data drives, (2) 750GB data drives, and (1) 2TB parity drive.

Since this was all new hardware, different from what I had been running on 4.7, I was initially concerned with testing for compatibility issues on the hardware side before even looking at the OS side. I therefore set up a simple array with the (17) 2TB data drives and the (2) 750GB data drives, set up shares, dumped a bunch of data on the drives, and put them through their paces; all is working fine.

I now want to change out the (2) 750GB drives for (2) 2TB data drives, and then bring the parity drive online. Can someone give me some insight into the correct way to replace these drives and then set up the parity drive?


 

I would guess that you just follow the Replace a Failed Drive procedure, one drive at a time, for the 750GB drives, letting the array completely rebuild the first before touching the second.

 

Cheers.


If I'm reading correctly, this applies if you already had the parity drive set up and running. I never got that far; I have been testing an array with no parity. So I am looking to change out 2 smaller drives before I actually go through the process of setting up the parity drive for the first time.


 

Do either of those drives have data on them that you want to keep?
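If they do, one way to handle it while there is still no parity to rebuild from (a sketch only, assuming rsync is available on your box; the disk numbers below are hypothetical, so substitute your own) is to copy everything off before unassigning the old drives:

# copy the contents of the two 750GB disks onto disks with free space
rsync -av /mnt/disk18/ /mnt/disk1/
rsync -av /mnt/disk19/ /mnt/disk2/
# spot-check the copies, then stop the array and swap in the new drives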


I searched but couldn't find this anywhere. I've had b6a up and running for a while now and everything's been fine. Now I need to add a drive, and after adding the new drive it shows MBR: unknown. Tom's first post talks about existing drives that report that after the upgrade to b6a, but does the same still apply to a new drive being added to a system that's been running b6a? The drive is a brand-new Hitachi 5K3000. I'm lost on this whole sector 63/64 thing.


 

I assume it's a <2TB drive, as 3TB isn't supported yet?


It is normal for a brand new, just-out-of-the-box drive to show MBR: unknown.
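If you want to check the alignment yourself once a drive has been partitioned, the starting sector of partition 1 is a 4-byte little-endian field at byte offset 454 of the MBR (the entry starts at offset 446, plus 8 bytes to the LBA-start field). That is generic MBR layout, nothing unRAID-specific; replace sdX with your actual device:

# prints 63 (old alignment) or 64 (4K alignment)
dd if=/dev/sdX bs=1 skip=454 count=4 2>/dev/null | od -An -tu4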

 

Thanks!


Hi!

 


 

As for the AFP entries in the log: they don't matter. As long as this "spinning drives" problem is not solved, I'm going back to 4.7, which is running a parity check right now. I can't afford to run 10 disks 24 hours a day.

 

Bye.

All you need do is change the spindown setting in unRAID to Never, and then put a line like this in your config/go script for each of your disks:

hdparm -S 242 /dev/sdX

where sdX is the three-letter designation for your disk. (With hdparm, -S values from 241 to 251 mean (value - 240) x 30 minutes, so -S 242 gives a 60-minute standby timeout handled by the drive itself.)

 

My go script looks like this:

 

#!/bin/bash
# Start the Management Utility
/usr/local/sbin/emhttp &

# Keep directory listings cached so browsing doesn't spin drives up
/boot/cache_dirs -w

# Auto-install any packages dropped onto the flash drive
cd /boot/packages && find . -name '*.auto_install' -type f -print | sort | xargs -n1 sh -c

# Drive-level 60-minute spindown for each disk
hdparm -S 242 /dev/sdi
hdparm -S 242 /dev/sdb
hdparm -S 242 /dev/sda
hdparm -S 242 /dev/sde
hdparm -S 242 /dev/sdf
hdparm -S 242 /dev/sdc
hdparm -S 242 /dev/sdh
hdparm -S 242 /dev/sdd

 

I changed the spindown settings to "Never" for all the drives and globally.

 

 

In my logfile it looks like all drives have spun down:

 

Jun  7 10:14:26 Tower kernel: mdcmd (43): spindown 0
Jun  7 10:14:27 Tower kernel: mdcmd (44): spindown 1
Jun  7 10:14:28 Tower kernel: mdcmd (45): spindown 2
Jun  7 10:14:29 Tower kernel: mdcmd (46): spindown 3
Jun  7 10:14:30 Tower kernel: mdcmd (47): spindown 4
Jun  7 10:14:31 Tower kernel: mdcmd (48): spindown 5
Jun  7 10:14:32 Tower kernel: mdcmd (49): spindown 6
Jun  7 10:14:33 Tower kernel: mdcmd (50): spindown 7

 

...while the GUI shows that the drives are spinning. Is that normal, or how can I tell whether the drives are actually spinning?
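One way to check from the console, using nothing unRAID-specific: hdparm -C queries a drive's power state without spinning it up. A loop like this (adjust the device range to match your disks) shows which drives are really in standby:

# "standby" = spun down, "active/idle" = spinning
for d in /dev/sd[a-i]; do
  echo -n "$d: "
  hdparm -C $d | grep 'drive state'
done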

 

Thanks.

 

Bye.

 

 

4 weeks later...

So, I've installed 5b6a on my unRAID server in the process of moving it to an HDD install vs. the flash key, and am getting the MBR: unknown message on all but my two most recently added drives... I will mention that the three most recent drives were cleared with the new preclear script and 4K-aligned, but only the two most recent ones were added to the system after I upgraded to 4.7...

I have saved the MBR debug info using the command specified in the other topic; if there's a way I can use that information to work out how to set the MBR flags with the command in the OP of this thread, please let me know...

[attached screenshot: unRAID-Upgrade.gif]


Odds are it's LILO that slightly altered the MBRs on your drives. If possible, uninstall LILO and use GRUB, which does not mess with the MBRs of your data drives. Since your drives were set up pre-4.7, they were sector-63 aligned. The following should correct the LILO bad-touch. Try the first command, then refresh the unRAID management console and see whether that drive is now listed as MBR: unaligned. If so, proceed to the remaining drives.

 

mkmbr /dev/sdb
mkmbr /dev/sdc
mkmbr /dev/sdd
mkmbr /dev/sde
mkmbr /dev/sdf
mkmbr /dev/sdj
mkmbr /dev/sdk
mkmbr /dev/sdl
mkmbr /dev/sdm
mkmbr /dev/sdn
mkmbr /dev/sdo
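To confirm the result, the same generic MBR-offset check mentioned earlier in the thread can be looped over the drives; for these pre-4.7 drives the starting sector should read 63 after mkmbr (the device list below is just this example's):

for d in sdb sdc sdd sde sdf sdj sdk sdl sdm sdn sdo; do
  echo -n "/dev/$d: "
  dd if=/dev/$d bs=1 skip=454 count=4 2>/dev/null | od -An -tu4
done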


Done... They appear to show up properly now and the array has started. Now I just have to get used to the new way of doing permissions once the chmod/chown script is done... Is there anything in the wiki clarifying what setting I should use if I want R/O for all and R/W for specific users?


Under Shares, you click the share you want to configure, then choose whether to enable AFP, NFS, or SMB. Public lets everyone see the share and gives R/W access. Secure allows R/O access for all, and everyone can see the share on the network. Private does not broadcast the share, but if a user knows the share address, they will have R/O access (at least). You can then configure which users have R/O access and which have R/W.
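In raw Samba terms, and only as a conceptual sketch (not the exact config unRAID generates), "R/O for all, R/W for specific users" maps onto the read only and write list options:

[Movies]
   path = /mnt/user/Movies
   # everyone may browse and read...
   read only = yes
   # ...but these (hypothetical) users may also write
   write list = tom, alice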


Thanks much; I was getting a little concerned that I couldn't find much on the wiki for these releases...

