unRAID Server release 4.5-beta6 available



How are you able to have exceptions for yourself? I didn't see any way to assign different rights.

 

Create different users for everyone and export the user shares as read-only, then just add your own user name on the exceptions line.
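
In plain Samba terms, that ends up as something like the share stanza below. The share name and the exact parameters unRAID writes out are assumptions on my part, but "read only" plus a "write list" is the standard Samba way to say read-only for everyone with a few write exceptions:

[Media]
   # hypothetical user share, exported read-only for everyone
   path = /mnt/user/Media
   read only = yes
   # users listed here are the exceptions and keep write access
   write list = yourusername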

 

OK, I guess I was looking for a way to do it without having to use login IDs and passwords.

 

I guess I also need to find out whether the inability to change my user shares to read-only is a bug in this build or if I have some other issue.


 

Is anyone running it with user shares set to read only?

 

My user shares are set to read only, but my disk shares are set to read/write but hidden.  I write my data directly to disk shares, so I have control over where exactly it goes, but I don't have to worry much about accidentally deleting stuff when I'm just browsing using my user shares.


 

Is anyone running it with user shares set to read only?

 

My user shares are set to read only, but my disk shares are set to read/write but hidden.  I write my data directly to disk shares, so I have control over where exactly it goes, but I don't have to worry much about accidentally deleting stuff when I'm just browsing using my user shares.

 

That's the way I used to have it. I just can't figure out why I can't set it that way on this build.


OK, I'm having another issue. When trying to copy files over to the unRAID server, it will copy for a while, then I get an error and can no longer access the disk volume. I've included the syslog from when this happened. Does anyone know what's going on?


I did a quick search but didn't get any hits on this, but apologies if this has already been discussed elsewhere.

 

I stumbled upon the following error in the syslog, relating to ident.cfg and smbPorts.

 

Jul  8 23:11:15 Tower kernel: r8169: eth0: link up

Jul  8 23:11:15 Tower dhcpcd[1529]: broadcasting DHCP_DISCOVER

Jul  8 23:11:15 Tower dhcpcd[1529]: dhcpIPaddrLeaseTime=345600 in DHCP server response.

Jul  8 23:11:15 Tower dhcpcd[1529]: DHCP_OFFER received from  (10.0.0.128)

Jul  8 23:11:15 Tower dhcpcd[1529]: broadcasting DHCP_REQUEST for 10.0.0.17

Jul  8 23:11:15 Tower dhcpcd[1529]: dhcpIPaddrLeaseTime=345600 in DHCP server response.

Jul  8 23:11:15 Tower dhcpcd[1529]: DHCP_ACK received from  (10.0.0.128)

Jul  8 23:11:15 Tower ifplugd(eth0)[1232]: client: dhcpcd: MAC address = 00:30:18:a5:37:ab

Jul  8 23:11:15 Tower ifplugd(eth0)[1232]: client: dhcpcd: your IP address = 10.0.0.17

Jul  8 23:11:15 Tower ifplugd(eth0)[1232]: client: dhcpcd: MAC address = 00:30:18:a5:37:ab

Jul  8 23:11:15 Tower ifplugd(eth0)[1232]: client: dhcpcd: your IP address = 10.0.0.17

Jul  8 23:11:15 Tower ifplugd(eth0)[1232]: client: /var/tmp/ident.cfg: line 6: 139: command not found

Jul  8 23:11:15 Tower ntpd[1539]: ntpd [email protected] Wed Jan 14 23:46:25 UTC 2009 (1)

Jul  8 23:11:15 Tower ntpd[1540]: precision = 1.000 usec

Jul  8 23:11:15 Tower ifplugd(eth0)[1232]: client: Starting NTP daemon:  /usr/sbin/ntpd -g

 

The ident.cfg file referenced:

 

# Generated settings:

NAME=Tower

COMMENT=Massive

WORKGROUP=ACC

localMaster=no

smbPorts=""445 139""

timeZone=custom

USE_NTP=yes

NTP_SERVER1=0.us.pool.ntp.org

NTP_SERVER2=1.us.pool.ntp.org

NTP_SERVER3=0.pool.ntp.org

 

The doubled quoting of the smbPorts value appears to be the cause: the stray 139 ends up being treated as a command, hence the "command not found" error.
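
Here is how a shell would parse that line, assuming ident.cfg gets sourced as a shell script (which the "line 6" reference in the error suggests):

# the generated line:
smbPorts=""445 139""
# the doubled quotes close the string early, so the shell reads this as a
# temporary assignment smbPorts=445 followed by a command named 139,
# which is exactly the "139: command not found" message in the syslog.
# A single pair of quotes is presumably what was intended:
smbPorts="445 139"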

 


OK, I'm having another issue. When trying to copy files over to the unRAID server, it will copy for a while, then I get an error and can no longer access the disk volume. I've included the syslog from when this happened. Does anyone know what's going on?

 

That is only a tiny piece of the syslog, but it appears you may have a corrupted file system on one of the data drives.  Please see the Check Disk File systems wiki page, and run reiserfsck on all of your data drives.  If you haven't already, reboot first.
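
For anyone who hasn't done it before, the check itself looks roughly like this (the device name is an assumption - unRAID's data disks normally appear as /dev/md1, /dev/md2, and so on - and the wiki page is the authoritative reference):

# read-only check of the first data disk; the disk should be unmounted
# first (see the wiki page for the exact procedure)
reiserfsck --check /dev/md1
# repeat for each data drive, and only run a repair option such as
# --fix-fixable if the --check output says it is needed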

2 weeks later...

Not to sound rude, but your question has been asked and answered many times throughout the course of this thread. There have also been many theories to go along with it. The best answer so far has been that the devs are waiting on a new kernel, etc., before they put out a new public release.

 

Best thing for you to do... be patient.


I realize it has been asked before, but since no authoritative answer had been given, I asked again in hopes that a dev might chime in and give us at least a hint as to what the next release might include.  I know that some of the pending issues such as working 20 disk support and SAS support (albeit kernel dependent) are in the works.  I didn't mean for my question to come across as seeming in any way demanding.

 

Not to sound rude, but your question has been asked and answered many times throughout the course of this thread. There have also been many theories to go along with it. The best answer so far has been that the devs are waiting on a new kernel, etc., before they put out a new public release.

 

Best thing for you to do... be patient.


It's been a while, and I thought we might have an upgrade by now... going on nearly 3 months...  ???

 

BUT, regarding the Cache Disk... I have some User Shares that refuse to use the Cache Disk. I have disabled and re-enabled cache usage for the shares, and have even replaced my Cache Disk, but some of the shares still won't use it... Is this a known issue with this 3-month-old version of the beta?


It's been a while, and I thought we might have an upgrade by now... going on nearly 3 months...  ???

 

BUT, regarding the Cache Disk... I have some User Shares that refuse to use the Cache Disk. I have disabled and re-enabled cache usage for the shares, and have even replaced my Cache Disk, but some of the shares still won't use it... Is this a known issue with this 3-month-old version of the beta?

 

http://lime-technology.com/forum/index.php?topic=3764.msg33094#msg33094


It's been a while, and I thought we might have an upgrade by now... going on nearly 3 months...  ???

 

BUT, regarding the Cache Disk... I have some User Shares that refuse to use the Cache Disk. I have disabled and re-enabled cache usage for the shares, and have even replaced my Cache Disk, but some of the shares still won't use it... Is this a known issue with this 3-month-old version of the beta?

 

http://lime-technology.com/forum/index.php?topic=3764.msg33094#msg33094

 

???


I wasn't sure where to post this - but I guess here as no 4.5 forum exists.

 

My server kernel panicked yesterday. This in itself could be a separate issue and isn't the focus of this post.

 

After the forced reboot, unRAID wanted to do a parity check, which makes sense, and it happily auto-started the check.

 

However, during this checking period:

 

- my disk shares were mounted and all available

- User shares were not (no /mnt/user)

- According to the unraid web page only 1 of my previous 6-7 shares was now configured

- The config for the other shares, however, still existed on the USB stick in config/

- The Samba config had been populated with only one share's data - which didn't work, as there was no /mnt/user

 

This was a bit odd. I didn't get concerned, though (my data was still intact, god bless no striping!), hoping that a reboot after the parity check would sort everything out. It duly did, and everything was back to normal with no further action required. No parity errors were found or corrected.

 

So, not a real big issue (other than the original panic), but it seems an odd state for unRAID to get into. Could this be expected? A potential bug? Or just a glitch, since you can never guarantee what state it will come back in after a crash?

 

As I said, not overly worried - but thought I should post up given it is a beta release being run.


Could this be a flaky flash drive?

 

I wasn't sure where to post this - but I guess here as no 4.5 forum exists.

 

My server kernel panicked yesterday. This in itself could be a separate issue and isn't the focus of this post.

 

After the forced reboot, unRAID wanted to do a parity check, which makes sense, and it happily auto-started the check.

 

However, during this checking period:

 

- my disk shares were mounted and all available

- User shares were not (no /mnt/user)

- According to the unraid web page only 1 of my previous 6-7 shares was now configured

- The config for the other shares, however, still existed on the USB stick in config/

- The Samba config had been populated with only one share's data - which didn't work, as there was no /mnt/user

 

This was a bit odd. I didn't get concerned, though (my data was still intact, god bless no striping!), hoping that a reboot after the parity check would sort everything out. It duly did, and everything was back to normal with no further action required. No parity errors were found or corrected.

 

So, not a real big issue (other than the original panic), but it seems an odd state for unRAID to get into. Could this be expected? A potential bug? Or just a glitch, since you can never guarantee what state it will come back in after a crash?

 

As I said, not overly worried - but thought I should post up given it is a beta release being run.


Could this be a flaky flash drive?

 

I wasn't sure where to post this - but I guess here as no 4.5 forum exists.

 

My server kernel panicked yesterday. This in itself could be a separate issue and isn't the focus of this post.

 

After the forced reboot, unRAID wanted to do a parity check, which makes sense, and it happily auto-started the check.

 

However, during this checking period:

 

- my disk shares were mounted and all available

- User shares were not (no /mnt/user)

- According to the unraid web page only 1 of my previous 6-7 shares was now configured

- The config for the other shares, however, still existed on the USB stick in config/

- The Samba config had been populated with only one share's data - which didn't work, as there was no /mnt/user

 

This was a bit odd. I didn't get concerned, though (my data was still intact, god bless no striping!), hoping that a reboot after the parity check would sort everything out. It duly did, and everything was back to normal with no further action required. No parity errors were found or corrected.

 

So, not a real big issue (other than the original panic), but it seems an odd state for unRAID to get into. Could this be expected? A potential bug? Or just a glitch, since you can never guarantee what state it will come back in after a crash?

 

As I said, not overly worried - but thought I should post up given it is a beta release being run.

If he has not rebooted, and could post a syslog, we'd have the clues to know what happened.

 

Please, post a syslog.

 

You can use your browser.

Browse to

//tower/log/syslog

save to a file, zip it up, and attach it to a post.

 

Joe L.


Sorry - as mentioned in the post, I rebooted once the parity check was complete to see if it would come back in its proper state - which it did.

 

I didn't have the presence of mind to take a syslog beforehand.

 

It's possible it's the USB drive - though it's managed to boot since with no problems, and it would mean just that part of the drive with the configs on it was having problems. The rest of unRAID and all extensions loaded OK.

 

I'll keep an eye on it. If it happens again I'll be sure to take a syslog dump.

 

Thanks.


HELP!!!!

 

I hope this is the right place to post this.  I recently upgraded our Lime box to 4.5-beta6.  Things seemed to work fine - but while doing a massive delete (roughly 4TB / 5,000,000 files), I would get an error saying the Lime box (lime1) was no longer available.  I was running an rmdir command from the command prompt.  When it did this, I could no longer ping the Lime box.  After about 10 seconds of waiting, it would come back up, so I would just restart it.

 

I finished the delete and was working on a backup of about 1TB to the freed space.  At some point, it quit responding.  This time it didn't come back up.  Going to the web front end, it is showing all of my drives except the parity & disk1 as unformatted.  I am attaching a log file.  I've tried to stop the array, but it won't stop; otherwise I would've tried rebooting it.

 

I sure hope somebody can help me with this... ftp://ftp.kellpro.com/pub/syslog.txt


Maybe I'm going crazy.  After a half hour of waiting, I tried the stop button.  It stopped the array and on restart all the drives showed up.  Will be downgrading after that scare :P

I doubt it had anything to do with the version.  Odds are you ran out of memory.

 

Joe L.


This may be OT enough that I need to take it to another forum - but I've wondered that before.

 

We have about 25 million files currently on one server under one share that spans 14 disks.  Is this too taxing for the memory?  Is it holding all the shortcuts in ram - we currently have 4GB....


I would add a drive to the system to use as a swap drive, just to be on the safe side. The drive should NOT be part of the unraid array. Just do:

mkswap /dev/MYSWAPDRIVE

 

and then, in the go script:

swapon /dev/MYSWAPDRIVE

 

Personally, I have a drive which isn't part of the array, where I put stuff like the VMware machines and binaries, home directories, and a swap file (to use a swap *file*, it's the same way; dd if=/dev/zero of=/path/to/swapfile bs=1M count=2048; mkswap /path/to/swapfile; swapon /path/to/swapfile).

 

The whole system was unstable when using VMware and a few other addons, and it clearly was because of insufficient memory. It works like a charm now with the swap file, and without any noticeable slowdowns. The system has >100MB (out of 1GB) of free RAM 95% of the time, but when unRAID is doing some intensive operations, it likes to use a lot of RAM for a short period of time.
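
Pulling those pieces together, a minimal sketch of the go script additions for a swap *file* might look like this, assuming the non-array drive is mounted at /mnt/extra (the path and the 2GB size are just placeholders):

# create the swap file once, then enable it on every boot
if [ ! -f /mnt/extra/swapfile ]; then
  dd if=/dev/zero of=/mnt/extra/swapfile bs=1M count=2048
  mkswap /mnt/extra/swapfile
fi
swapon /mnt/extra/swapfile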

I wasn't sure where to post this - but I guess here as no 4.5 forum exists.

 

For future reference, I would like to suggest that support-related posts should probably start a new topic in the forum for the latest unRAID version available, currently the unRAID Server 4.4 forum.  Just consider it the unRAID Server v4.* support forum, for software issues related to unRAID versions 4.4 or higher.


HELP!!!!

 

I hope this is the right place to post this.  I recently upgraded our Lime box to 4.5-beta6.  Things seemed to work fine - but while doing a massive delete (roughly 4TB / 5,000,000 files), I would get an error saying the Lime box (lime1) was no longer available.  I was running an rmdir command from the command prompt.  When it did this, I could no longer ping the Lime box.  After about 10 seconds of waiting, it would come back up, so I would just restart it.

 

I finished the delete and was working on a backup of about 1TB to the freed space.  At some point, it quit responding.  This time it didn't come back up.  Going to the web front end, it is showing all of my drives except the parity & disk1 as unformatted.  I am attaching a log file.  I've tried to stop the array, but it won't stop; otherwise I would've tried rebooting it.

 

I sure hope somebody can help me with this... ftp://ftp.kellpro.com/pub/syslog.txt

Maybe I'm going crazy.  After a half hour of waiting, I tried the stop button.  It stopped the array and on restart all the drives showed up.  Will be downgrading after that scare :P

This may be OT enough that I need to take it to another forum - but I've wondered that before.

 

We have about 25 million files currently on one server under one share that spans 14 disks.  Is this too taxing for the memory?  Is it holding all the shortcuts in ram - we currently have 4GB....

 

The syslog does not show anything particularly alarming; in particular, it does not show evidence of out-of-memory process killing.

 

I suspect the deletions were still being finished in the background.  The Reiser file system seems to handle deletions very differently from DOS and Windows file systems, in that it clearly must be handling space deallocation in a more time-consuming manner.  Deleting a 1GB file takes a relatively long time, and deleting a 4GB file takes four times as long.  Deletions in FAT or NTFS file systems are essentially instantaneous, the same for small and large files.  I can easily see the deletion of a terabyte taking over half an hour, perhaps close to an hour.  The syslog does not directly show why the parity drive was marked Unformatted, although it would have been 'busy' until the last of the deletions were finished.  It does show that the User Shares and Disk 1 could not be unmounted, so they must have been busy.

 

I too don't see a reason to downgrade; there are a number of fixes you would lose.

 

The 25 million files is a LOT of files, especially without a swap file.  With all of the changes you made, I think it would be wise to restart the system, and let it build a fresh and clean User Share file system.  That is probably a good idea any time really major changes are made to the User Shares.

 

This is probably not important, but it did alarm me at first, so I'll point it out.

Jul 24 13:09:01 Lime1 emhttp: shcmd (202): mv '/mnt/user/MurrayCC' '/mnt/user/ODCR'

Jul 24 13:09:01 Lime1 emhttp: shcmd (203): mv '/boot/config/shares/MurrayCC.cfg' '/boot/config/shares/ODCR.cfg'

 

Jul 24 13:09:11 Lime1 emhttp: shcmd (210): mv '/mnt/user/ODCR' '/mnt/user/Backups'

Jul 24 13:09:12 Lime1 emhttp: shcmd (211): mv '/boot/config/shares/ODCR.cfg' '/boot/config/shares/Backups.cfg'

 

You probably have to be a programmer, used to working in hex, to understand why that is so alarming, but it still looks like editing may have been done in a non-Linux aware editor.  I assume you were renaming a share from MurrayCC to ODCR, then from ODCR to Backups?  It almost looks like hex codes had entered the file naming, and that possibly a garbage file had overwritten the Backups share config.  Just wanted to confirm the shares config is OK ...
