
CHBMB

Community Developer
  • Content Count

    10601
  • Joined

  • Last visited

  • Days Won

    45

Posts posted by CHBMB


  1. Cool topic.  If you search for rsync you will probably find a thread or two that I started; in the one titled 'rsync between 2 unraid server' you will see a step-by-step guide that I put together.

     

    It's funny you mention this because I've been thinking about leasing a cabinet... So I work for a carrier and I can easily sell myself data center colocation at very minimum margins. The problem is that it's still expensive, but maybe we can all figure out something...

     

    I would love nothing more than to have replication of my home unRAID, as I have footage from my hobby job that I would be crushed to lose.  Sure, I have some Blu-ray backups, but I would really love to host something in a data center :) Yes, I'm a sales guy that loves technology, and people look at me like I'm crazy that I want a cabinet, LOL

     

    I came to the realisation that it doesn't actually cost much for me to build an UnRAID server, and that it was basically an insurance policy with a one-off fee and very low yearly costs, as the backup will only be switched on when required.  I've lost count of how much important data I've lost over the years because I didn't protect it well enough, so this was the idea I came up with.  Currently cloning disk 3 of 5  ;D

     

    Will look into the Rsync method, thanks for letting me know about your post.


  2. I would not do this with the cache drive. Instead I would use an external USB drive.

    Mount it on /mnt/backups when needed and unmount it when done.

     

    With the cache drive, the array would have to be stopped, the cache drive taken out of the array, and the array restarted.

    Then the reverse on the other server.

     

    With an external USB drive it can be mounted/unmounted on demand.

     

    You can do an rsync -a /mnt/disk1 /mnt/usbdrive/disk1 on one machine

    then do rsync --remove-sent-files -av /mnt/usbdrive/disk1 /mnt/disk1 on the other machine, which will empty the folders on the USB drive as the files are transferred.

     

    The trick is knowing which files to back up.

    In that case you may need to use the find command, like this.

     

    find /mnt/disk1 -newer /mnt/cache/.backups/disk1.lastbackup -type f -print  > /mnt/cache/.backups/disk1.filelist

     

    Then use rsync with --files-from=/mnt/cache/.backups/disk1.filelist

    then touch /mnt/cache/.backups/disk1.lastbackup to set a new pointer.
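    For reference, those three steps (list the newer files, sync just those, reset the marker) can be strung together into a small script.  This is only a sketch: the function name and demo paths are my own illustration, not anything from the thread, and the demo uses temporary directories in place of /mnt/disk1 and the USB mount.

```shell
#!/bin/sh
# Sketch of the incremental scheme described above:
#   1. find files newer than a marker file,
#   2. rsync only those files,
#   3. touch the marker to record this backup's time.
# On the server the paths would be /mnt/disk1, the USB mount and
# /mnt/cache/.backups as in the post; the demo below uses temp dirs.

incremental_backup() {
    src=$1 dest=$2 marker=$3 list=$4
    if [ -f "$marker" ]; then
        find "$src" -newer "$marker" -type f -print > "$list"
    else
        find "$src" -type f -print > "$list"    # first run: take everything
    fi
    # --files-from entries are absolute; syncing relative to / means
    # rsync recreates the full path under $dest (the leading / is stripped)
    rsync -a --files-from="$list" / "$dest"
    touch "$marker"    # the next run only picks up files newer than this
}

# Demo with temporary directories standing in for the real mounts
work=$(mktemp -d)
mkdir -p "$work/disk1" "$work/usb" "$work/backups"
echo hello > "$work/disk1/file1.txt"

incremental_backup "$work/disk1" "$work/usb" \
                   "$work/backups/disk1.lastbackup" \
                   "$work/backups/disk1.filelist"
```

    On a second run only files touched since the marker would be listed, which is the "pointer" behaviour described above.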

     

    I think that sounds like a much better way to do things, actually.  What I'd like to do is copy everything to the cache drive as normal, and when it moves to the array, also make a copy and put it on the external USB.  Is that feasible?


  3. I'm sorry I keep adding things to clarify; you're being very patient with me.  I won't be deleting any files, it's all archived stuff.

     

    The first task I want to accomplish is just to create a duplicate server with the same files on each disk.

     

    After that I intend to copy new files to the cache drive and leave them there, then transport that drive offsite and copy the files onto the backup server, then transport the drive back and move the files to my server.

     

    Once again, thank you so much for taking the trouble to help and painstakingly drag the necessary information from me!!

     

    :D


  4. I'm not planning on doing anything remotely.  Currently they are both attached to the same network hub in my house, and that's how I plan to do the initial backup.  After that I will do it manually, by taking my cache drive to the offsite backup server (it will be at my parents' house) and manually copying the data across there.

     

    Does that make things a little easier?

     

    Thanks for the reply.


  5. Hi,

     

    My UnRAID server has now progressed from being a novelty to an absolute necessity.  Should my house burn down it would be among the items I'd be rushing to save! ;D

     

    Hence I have decided I need an offsite backup.

     

    My plan is to have a complete clone of my server.

     

    For example, each disk in the array is an identical image on each server.

     

    That way, even if I lose 2 disks in one array and can't rebuild the data, I only have to copy the files from my offsite backup.

    All new data to be copied to the server will be contained on the cache disk, so I can remove that and manually copy the files to my offsite backup.

     

    Can anyone help with getting this going?  Ideally I'd like not to use Windows, as I've found copying large amounts of data through Windows is sometimes a little unreliable.

     

    Thanks

     

    Neil


  6. Recently bought 3 Samsung F4s, all updated with the new firmware.

     

    The first two precleared fine, and the only attribute that changed was the temperature.

     

    The third disk had Raw Read Error Rate, Hardware ECC Recovered and Multizone Error Rate all changed from 252 to 100 and UDMA CRC Error Count changed from 200 to 100.

     

    Should I be concerned?

     

    I should probably add, I've now got a niggling doubt that I didn't update the firmware on this drive, but I will before putting it to use.

     

    Logs of preclear for that drive attached below.

    preclear_start__S2H7JD2B227368_2011-05-26.txt

    preclear_finish__S2H7JD2B227368_2011-05-26.txt

    preclear_rpt__S2H7JD2B227368_2011-05-26.txt


  7. Just got hold of a couple of these drives, have updated the firmware and am planning to use them in a new server build for my off site back ups.

     

    I think I know the answer, but I am right to preclear with the -A switch, aren't I?  Am planning on using 5.0b6a, which I have been using for a while without problems on my other server.

     

    Also would it be correct to use the MBR 4k aligned option?

     

    Cheers

     

    Neil

     



  8. Have a small problem with my file permissions when I'm copying files to my UnRAID array.  Currently using 4.7.

     

    It results in the new files not being seen by Sickbeard and my HTPC.

     

    I have got round it by using the script here

     

    http://lime-technology.com/forum/index.php?topic=10064.0

     

    but was wondering if there was a way for all files to have the correct permissions set by default.

    There was a button on one of the 5.x beta releases that did this, but I would love not to have to intervene at all.

     

    After a bit of reading I think this may involve an smb-extra.conf file, but as a Linux newb I have very little idea of what to put in there.
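    For what it's worth, the usual approach in a Samba include file like smb-extra.conf is to force ownership and permission masks on newly created files.  The exact values below (nobody/users, 0777) are an assumption based on how unRAID shares are commonly set up, not something confirmed in this thread, so check against your version before using:

```ini
; Hypothetical smb-extra.conf fragment -- the values are assumptions,
; verify against your unRAID version before relying on them.
[global]
   force user = nobody
   force group = users
   create mask = 0777
   directory mask = 0777
```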

     

    Can anyone give me any help, please?

     

    Thanks

     

    Neil


  9. Small problem here: installed Sickbeard (am working up to the others), all working well, but now I'm getting prompted for a username and password when I try to log in to http://server:8081

     

    I get this error report:

     

    401 Unauthorized
    
    You are not authorized to access that resource
    In addition, the custom error page failed:
    ValueError: invalid literal for int() with base 10: '401 Unauthorized'
    
    Traceback (most recent call last):
      File "/mnt/cache/_sickbeard/cherrypy/_cprequest.py", line 657, in respond
        self.hooks.run('before_handler')
      File "/mnt/cache/_sickbeard/cherrypy/_cprequest.py", line 99, in run
        hook()
      File "/mnt/cache/_sickbeard/cherrypy/_cprequest.py", line 59, in __call__
        return self.callback(**self.kwargs)
      File "/mnt/cache/_sickbeard/cherrypy/lib/auth_basic.py", line 86, in basic_auth
        raise cherrypy.HTTPError(401, "You are not authorized to access that resource")
    HTTPError: (401, 'You are not authorized to access that resource')
    

     

    Anybody got any ideas?  Have updated Sickbeard via the link in the application and it has been running fine.

     

    I'm a Linux newb so not really sure where to go from here.  It has done this before and I just started from scratch to resolve it, but I don't really want to have to do that again.

     

    Thanks

     

    Neil


  10. Same ol story. Updated to 4.7, samsung drive reports too small. Went back to 4.6 and am going to wait until the expensive program I bought can fix the problem. I am tired of patching program errors. Did I buy into a hobby program or a professional program? I am glad the program works good as is and I guess I will not update.

     

    Post a fix subject title in the forum that THE program can fix this problem and I will try again.

    Thanks

     

     

    Try WHS, I've got a copy you can have for 1/2 price!  Complete with all manner of problems and workarounds, but with a lovely-looking interface; of course the support is nowhere near as good as here and the updates are few and far between, but the new version (Vail) removes the most useful features, thereby removing the bugs.

     

    Honestly, I can't fault the support here, the frequent updates, and the fantastic user community.  Hands up those of us who have had a BSOD with an MS product; do you start accusing Mr Gates of producing a "Hobby program"?  I don't see how Tom can be held responsible for a problem created by a hardware manufacturer who forces functionality and alters a drive's characteristics whether you want it to or not.

     

    ::)


  11. As I am a beginner to unRAID, is it unwise to use beta 4 on my production server?

    I agree, it would be a bad idea, especially for a beginner, and especially since it has only been out for a few days.  4.7 is a MUCH better version for your production use at this time.

     

    Most of us doing the testing have 5.0b4 on a second server, one used for testing. 

     

    I'm using 5.0b4 on my sole setup, BUT do have all my data backed up, so if it all went horribly wrong I can just recopy 6TB of data to a fresh array. :o

     

    Having said that, it seems to work fine, and as long as you're careful I think it's ok; oddly, I'd never in a million years dream of using MS beta software in the same fashion.  ;D

     

    I'm still having a problem with permissions in this version, though.  The new script works fine, but if I create a directory on my cache drive, although it will show up in my shares, I cannot access it from two of my PCs.  Rerunning the permissions script fixes the issue.  It may be the way I'm using the cache drive, though, and I'm not sure if I've tried copying directly to the user share, which would in effect just do what I'm doing anyway, automatically.

     

    I'm loving the new 5.0 UnRAID though, truly wonderful.  Just got to get a sleep script working, install the final version of 5.0 when it's released, and then leave well alone.

     

    Neil

     

    Still having some problems with permissions: I create folders on the server, copy data to them via the cache drive, and then can only access them on my main PC, not my laptops or HTPC.

    I don't have any security settings on any shares at all.  Basically wide-open full access to anyone on my home network.


  12. do you have bwm-ng installed?

     

    What happens when you run the script in the foreground using the -xv option of the shell?

     

    Invoke it in a telnet window as

    sh -xv s3.sh

    and watch as variables are evaluated.

     

    The method to go to sleep in 5.0b4 is DIFFERENT than in earlier versions of unRAID.  If s2ram is using the wrong method, the server will not go to sleep.

     

    Joe L.

     

    Yep, got bwm-ng installed

     

    Used sh -xv s3.sh and watched carefully; everything was working up until the point it was supposed to go to sleep, when it stated that libx86-1.1-i486-1.tgz wasn't installed.  So I added the line

     

    installpkg /boot/packages/libx86-1.1-i486-1.tgz

     

    to my go file and hey presto, seems to be working now.

     

    Thanks Joe.  Never would have done it without your help.

     

    Neil :)


  13. Hate to say this, but NO version of unRAID officially supports S3 sleep.  It is apparently on the future feature list here in some form, but there are SO many different buggy implementations in various BIOS I doubt if it will ever be consistent.

     

    See here: http://download.lime-technology.com/develop/infusions/aw_todo/task.php?id=18

     

    The description is listed for 5.? (not even 5.1):

    Power management features:

    - "official" support of S2 standby. In the past this has been unreliable because of bios issues in many motherboards. Probably a lot more stable now.

     

    The method to put a server to sleep in the linux kernel used in 4.6 is different than in the 4.7 version.  The button in unMENU will no longer work and I've not had any opportunity to update it to detect the difference and use the correct command since there have been so many issues and changes I've needed to deal with in the 4.7 and 5.0-beta releases.

     

    I've personally never gotten S3 sleep to wake up successfully on the C2SEE Motherboard that lime-tech uses.  It is not important to me to spend a lot of time on since my server is up 24/7 anyway.  (It will go to sleep, but never wake up)

     

    Good luck, but with different motherboards needing different techniques to restore video and network connectivity after waking, it will be on a trial-and-error basis.  Do not hold your breath waiting for lime-tech.

     

    It is not that 5.0beta2/3/4/? do not support S3 sleep... none "officially" do.

     

    Joe L.

     

    Thanks Joe,

     

    I understand that it's not officially supported; my problem is that I've got S3 working fine, I just have a problem with the script I originally posted, so it never goes into S3 sleep.  My server is only used for streaming, so it makes perfect sense for it to be asleep for large portions of the day/week.

     

    Still hoping someone could be kind enough to look at my script and advise!

     

    :)


  14. As I am a beginner to unRAID, is it unwise to use beta 4 on my production server?

    I agree, it would be a bad idea, especially for a beginner, and especially since it has only been out for a few days.  4.7 is a MUCH better version for your production use at this time.

     

    Most of us doing the testing have 5.0b4 on a second server, one used for testing. 

     

    I'm using 5.0b4 on my sole setup, BUT do have all my data backed up, so if it all went horribly wrong I can just recopy 6TB of data to a fresh array. :o

     

    Having said that, it seems to work fine, and as long as you're careful I think it's ok; oddly, I'd never in a million years dream of using MS beta software in the same fashion.  ;D

     

    I'm still having a problem with permissions in this version, though.  The new script works fine, but if I create a directory on my cache drive, although it will show up in my shares, I cannot access it from two of my PCs.  Rerunning the permissions script fixes the issue.  It may be the way I'm using the cache drive, though, and I'm not sure if I've tried copying directly to the user share, which would in effect just do what I'm doing anyway, automatically.

     

    I'm loving the new 5.0 UnRAID though, truly wonderful.  Just got to get a sleep script working, install the final version of 5.0 when it's released, and then leave well alone.

     

    Neil


  15. Has anyone approached you with a solution yet?  I'm interested in this as well, because I have a strong desire to work on putting my server to sleep at night.

     

    Montr has kindly given some input and another angle for me to try, but due to work pressures I haven't yet given it a go.  There is plenty of information on the forum and I've done my best to digest it as much as possible, but I just need some help with this last bit.

     

    I can recommend having a good read of http://lime-technology.com/forum/index.php?topic=3657.0

     

    It certainly helps; as you can read in my first post, there are essentially three different commands to get your server to go to sleep, depending on what version of UnRAID you're using and whether your MB works with each one.  I have had to use the s2ram method, which does work beautifully for me; however, the other two methods were somewhat unreliable on my hardware, causing my box to hang when resuming and requiring a dirty reboot and recheck of parity afterwards.

     


  16. First, I am new to Linux, so I did not try to understand your change to the original S3.sh.

     

    Here is how I use my server. The server is only used for backup and restore. The server is always asleep, except that I have a task on a PC (Win 7 64-bit) at 1:59 AM every day to wake the server using WOL. At 2:00 AM, another task starts the backup.

     

    If I need to use the server at another time, I have an icon on my PC that I click to send a WOL packet to the server. When the server is running and after the disks spin down, the server sends a ping every minute to the PC. If the PC is responding, the server is not allowed to enter sleep.
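    That wake/sleep gating can be sketched as a small shell loop.  To be clear, the IP address, the 60-second interval, and the s2ram path below are illustrative assumptions of mine, not taken from the attached S3.sh:

```shell
#!/bin/sh
# Rough sketch of the ping-gated sleep logic described above.
# PC_IP, the 60 s interval and the s2ram path are placeholder values.
PC_IP=${PC_IP:-192.168.1.2}
S2RAM=${S2RAM:-/boot/custom/bin/s2ram}

pc_is_up() {
    # one probe, wait at most 2 seconds for a reply
    ping -c 1 -W 2 "$1" >/dev/null 2>&1
}

sleep_watchdog() {
    while true; do
        if pc_is_up "$PC_IP"; then
            : # PC answered -- the server is not allowed to enter sleep
        else
            "$S2RAM" -f    # PC gone: suspend; a WOL packet wakes us later
        fi
        sleep 60
    done
}
```

    A real script would also want to check for array and network activity before suspending, as the other s3.sh threads discuss.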

     

    I included my S3.sh as an example

     

    Thank you. When I have a little more time I will try your script; I like the functions available in the one I already have, but I think it may be useful to try a different approach and see if that works.

     


  17. Hi everyone,

     

    New UnRAID user here.  Came from using WHS but became frustrated and fed up with my HD movies stuttering over my homeplug network, so migrated to an UnRAID Pro setup.

    Still have my WHS array, so my data is safe, and for that reason I am using the 5.0b3 release.

     

    Have to say I'm very very impressed.  With the help of the forums here and the wiki so far I've got UnRAID up and running, installed unMENU, apcupsd, AirVideo to stream stuff to my iPhone, Bandwidth Monitor NG, iStat, UnRAID status alert sent hourly via e-mail and monthly parity check, pci utils, SMART tools and Clean Powerdown.

     

    Never thought I'd get that far as I'm a linux newbie!  

     

    Now I'm having problems with S3 sleep; I can't get it working at all with the default

     

    echo -n mem >/sys/power/state 

     

    command.  No problem, I went down the s2ram route instead.

     

    Can telnet into my UnRAID server and use the command

     

    /boot/custom/bin/s2ram -f

     

    Which works beautifully and responds faultlessly to WOL from any of my other PCs or my iPhone.

     

    However I'm trying to use the S3.sh script that has been posted in the forums here and this is where I'm stuck.

     

    I have a number of PCs on my network, but actually the only one I'm really interested in is my HTPC, and I don't want my server to go to sleep if it's on, or preferably if it's streaming.  Most of the HTPC use is for live TV, which is all driven locally via MediaPortal and For The Record.

     

    The IP address of my UnRAID box is 192.168.1.1 and my HTPC is 192.168.1.3; I have DHCP enabled on my router (WNDR3700), but all my PCs are on reserved IP addresses and are therefore essentially static.

     

    The script resides in the /boot/custom/bin/ folder and is called s3.sh, and s2ram is in the same directory.  I have installed libx86-1.1.10i486 and my go file includes the line

     

    /boot/custom/bin/s3.sh & #| at now + 1 minute

     

    I've attached my s3.sh file and would be most grateful if anyone who is a little more experienced at Linux than me could take a look and tell me why it's not working.  (I have edited it in EditPad Lite using the Unix LF format.)  I have left the configuration going for over 24 hours now, but my UnRAID box has a severe case of insomnia.

     

    Thanks for any help, and I've also included my syslog as well (although I have just rebooted my server after turning DHCP off in my UnRAID configuration).

     

    ATB and Thanks to the UnRAID Team for such a superb piece of software.

     

    Neil

     

    syslog-2011-02-08.txt