danioj

Everything posted by danioj

  1. Further update. It looks like I have been able to get a successful copy running. I added Guest Additions to the VM, disabled a few pieces of software running in the background (on both guest and host) and changed the share from DNS name to IP address (still using the latest TeraCopy stable). Anyway, here is your update. To build the suspense ...... 25% after 1TB of data, and the copy is still smashing along at ~38MB/s. Hope it all holds steady! EDIT: For those who need more evidence, here is a screenshot of my Backup Server Unraid GUI.
  2. Agreed! It would be much worse if it was Unraid. I went into the data centre at work and asked the tech in there, and he might have solved it for me. He said that TeraCopy has a memory leak issue he has experienced himself: it keeps consuming memory until it crashes the OS when copying large volumes of data. He said the issue was identified by the TeraCopy team and addressed in their latest v3 alpha, and suggested I try that (I had downloaded the latest "stable"). I'll give it one more go with this suggestion before I switch to rsync. The gods don't want me to do this, do they!?? [emoji6]
  3. Arrrrggghhh. Crashed again. It seems like using a VM on my iMac to do this from within Windows is not going to work. I'll have to look at a different solution again when I get home!! [emoji53] Edit: I am just going to use rsync. I was wary of rsync before because I didn't know it did post-copy transfer verification. However, I just read this from the man page: "Note that rsync always verifies that each transferred file was correctly reconstructed on the receiving side by checking a whole-file checksum that is generated as the file is transferred, but that automatic after-the-transfer verification has nothing to do with this option's before-the-transfer 'Does this file need to be updated?' check." I'll try and set this command up tonight and have it spit out the progress, and I'll run it in screen so the session survives. I've only used rsync locally before, not remotely, so I need to read up.
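     For anyone following along, this is roughly the command I have in mind, run inside screen so it keeps going if my session drops. The hostname and share paths below are placeholders for my setup, so treat it as a sketch rather than gospel:

        screen -S backup                  # detachable session; reattach later with: screen -r backup
        rsync -av --progress --stats \
            root@tower-main:/mnt/user/Media/ /mnt/user/Media/

     The trailing slash on the source copies the share's contents rather than the directory itself, -a preserves permissions and timestamps, and (per the man page quote above) rsync checksums each file as it lands.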
  4. A little update ..... I fell asleep on the couch last night (watching the EPL in Australia means you have to stay up LATE) and didn't check the copy when I slid off the chaise and into bed. Unfortunately the VM must have crashed, as the iMac had reset itself. I figured it was an issue with the VM itself, as I had only assigned it limited resources. So I corrected this before I went to work this morning and set it running again (I let TeraCopy overwrite the existing files). Anyway, that was ~3 hours ago, as it's lunchtime now. I just remoted in and had a check: it is @ 10% (~500GB) and is reporting a speed of between 39MB/s and 41MB/s. I took screenshots with the iPhone and will upload them to this post when I get home tonight! Will check again in a few hours!
  5. So, copying will be going unRAID Main==>LAN==>iMac==>LAN==>unRAID Backup... I think the network will be the most limiting factor here. I guess a more stressful test would be to NFS-mount unRAID Main on unRAID Backup and then copy unRAID Main==>LAN==>unRAID Backup, using Midnight Commander. **I'll put this at the top here as I might be a little out of my depth, so go easy on me** All three are on the same switch and there is nothing else running on it. I might be wrong, but I was under the impression that while the network "could" be a limiting factor, there is no real way that it "will" be. While I think you are right that taking the network (and the intermediary client) out of the equation makes for a better test, I think we can reasonably say that it's NOT going to be much of a factor in the results. I know with my Main Server (which has WD Reds) that I can only write to the array (without cache) at ~40MB/s, yet when I write to the Main Server via the cache drive I see speeds north of ~100MB/s. So in comparison, for me, the speeds the test is showing initially are what I would expect of writes to a parity-protected array of similar 5400rpm PMR drives. The real test (for me) is if/when we see the cache-mitigation technology of the SMR drives kick in, and what impact that has on the results.
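     For what it's worth, if anyone does want to run the server-to-server variant suggested above, I believe it would look something like this from the Backup Server console (hostname and share names are hypothetical, and the share needs NFS export enabled on unRAID Main):

        mkdir -p /mnt/remote/main
        mount -t nfs tower-main:/mnt/user/Media /mnt/remote/main
        mc    # then copy /mnt/remote/main ==> /mnt/user/<backup share> in Midnight Commander

     That takes both the iMac and SMB out of the picture, leaving just the two servers and the switch.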
  6. The Parity Check has just finished after 14 Hours 58 Minutes. So, here we go!

     Environment

     Copy Client:
     2014 Apple iMac

     Main Server:
     Unraid 5.04 Pro
     Intel® Celeron® CPU G550 @ 2.60GHz
     ASUSTeK COMPUTER INC. - P8B75-M LX
     G.Skill 1666 Ripjaws 4096 MB
     Antec Neo Eco 620
     Parity Protected Array comprising WDC_WD30EFRX-68AX9N0 3TB drives (1 x Parity, 4 x Data)

     Backup Server:
     Unraid 6.0-beta14b Pro
     ASRock C2550D4I Mini ITX Motherboard
     Kingston 4GB (1x4GB) PC-12800 (1600MHz) ECC Unbuffered DDR3L ValueRAM, CL11, 1.35V, Single Stick
     Silverstone ST45SF-G 450W SFX Form Factor Power Supply
     Parity Protected Array comprising 3 x Seagate 8TB Archive (SMR) HDD, SATA III, 5900RPM, 128MB (1 x Parity, 2 x Data)

     **Both Unraid environments have User Shares enabled and the test will use these shares**

     Network:
     All clients are running Gigabit NICs
     TP-Link 8-Port Gigabit Switch
     Cat6 cable

     Test

     The test will:
     - Copy ~5TB of data (reduced from 11TB so I don't have to mess with my shares).
     - Utilise a VM running Windows 7 on OS X Yosemite as the copy client.
     - Use the TeraCopy Free program to manage the copy between the 2 Unraid servers.
     - Use TeraCopy Free to perform a CRC check after each copy to verify a successful transfer.
     - Copy data from a User Share on the Main Server to a User Share on the Backup Server using the SMB protocol.
     - Use the "Most Free" allocation method on the Backup Server User Share (so data is written in a manner that intentionally demonstrates "worst case" parity writes for the new SMR drives).
     - And .... hopefully provide us with some good valid data!

     Right, I have just benchmarked the transfer rate. I got myself a 10GB test *.mkv file and copied it from one share to the other in the same way the main test will run. The result was a sustained transfer rate of 40MB/s. The post-transfer CRC check (which ran successfully) ran at a sustained read rate of 41MB/s.

     Benchmark Copy
     Benchmark Verify

     Right, now we have our benchmark! I have just hit GO! It is running @ 41MB/s.

     Test Start 8)
  7. Just replied to the thread you posted a little while ago. http://lime-technology.com/forum/index.php?topic=34084.msg365545#msg365545
  8. I was having a similar weird issue with my C2550D4I board yesterday. It would turn on, in the sense that I could access the BMC interface, but no matter how many times I pressed the power button it wouldn't power up and POST (every now and then the fans would spin maybe 3 times and stop). I didn't notice initially because the board hadn't been off since roughly the first boot and had run for DAYS doing preclears etc. This was the first time I had not "pulled the plug", so to speak, on the board (when the system was off except the BMC); I had just shut down and wanted to start up again. However - when I did "pull the plug" again and plugged back in - first button press and the server came up! I decided to take everything apart and re-seat everything from the PSU to the SATA cables, and since I did that it seems fine. Still not sure - I'm not getting any messages like you are, but I now have a feeling it had nothing to do with the physical installation but with the BMC config/setup. Would be interested to see if your box comes up if you pull the cable to the PSU (OK because the system is off except the BMC), plug it back in and press power!!? Do remember that there is a 30-second BMC sync when the system boots, if you have that setting on in the BIOS. I'd appreciate it if you would post anything you get from ASRock!
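     If it helps anyone debugging the same thing, the BMC on this board can also be poked over the network instead of pulling the plug. A rough sketch only - the BMC IP and credentials below are placeholders for whatever yours are set to - using the stock ipmitool that ships with most Linux distros:

        ipmitool -H 192.168.1.50 -U admin -P admin chassis power status   # does the BMC think the host is on or off?
        ipmitool -H 192.168.1.50 -U admin -P admin mc reset cold          # reboot the BMC itself
        ipmitool -H 192.168.1.50 -U admin -P admin chassis power on       # then try to power the host on

     A cold reset of the BMC is effectively the "pull the plug" trick without leaving your chair, so if that fixes it, the problem is likely the BMC rather than the hardware seating.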
  9. It appears it is on the roadmap, but it is currently in the "unscheduled" bucket. http://lime-technology.com/forum/index.php?topic=34435.0
  10. Wilco. I haven't started copying yet. I went out last night, so a bit of a "sleep in" this morning! I had to Parity Sync the Array, which completed while I was in bed and took a sweet 15 Hours 10 Minutes. I have just hit go on the Parity Check, and I will start the copy when that has finished.
  11. Sounds reasonable. A few questions before I start though: If I run it using TeraCopy, do you know if TeraCopy has reliable copy validation? For this backup I'd like to be sure that the copy has completed successfully. I was intending to use the confirm-copy switch in SyncBack, but as long as TeraCopy has something similar then all is good. In addition, when I switch to SyncBack for my weekly backup run after this initial TeraCopy backup, do you know if SyncBack will let me configure it in Incremental mode given it won't have done the initial backup itself?
  12. Oh, go on then. I'll do the whole thing using the "Most Free" allocation method and just let it run till completion! How would you like me to document/log the test?
  13. Finally I get to actually USE this nifty new array!! I will when I put my new Noctua NF-F12s in the rig later today. Until then I am thinking about the best strategy for backing up 12TB to the new array containing only these drives (1 x Parity and 2 x Data). Given the discussion earlier, it's clear that to maximise the performance of the drives I need to make the writes as sequential as possible. I plan on setting the backup up in SyncBack (on a W7 VM under VirtualBox on my iMac) between the user share on my Main Server and a corresponding user share on this Backup Server, and hitting GO! It's going to run over a dedicated Gigabit switch, so I don't feel there will be any speed issues related to copying over the network. Back to the disk-related question - I have a "feeling" that if I assign the Backup Server a parity-protected array from the beginning and then execute the backup, the writes to the Parity disk will be effectively random and I may see speed issues. If I keep the array unprotected while I do the initial backup and assign the Parity disk once that's done, the Parity build will be done more sequentially and will be less likely to hit speed issues given the nature of these drives. Have I understood the issues with these disks correctly? Does what I am suggesting make sense, or am I way off?
  14. Waiting ... Glad the drives survived the power outage without any damage -- that's generally what happens, but you CAN get actual unrecoverable damage if things happen just right [or actually just WRONG ] Well, I'm glad to report that all the long tests completed without error. No additional S.M.A.R.T errors and/or attributes to worry about, in my opinion. It has taken me a long time (compared to what I am used to) but it's nice to have this level of confidence in these disks. In the future I am definitely going to make LongTest, 3 preclears, LongTest my disk preparation routine (sketched below). Backup time!! post_longtest_Z8402JP1_2015_04_11_Disk_1.txt post_longtest_Z8402L5T_2015_04_11_Disk_2.txt post_longtest_Z8402RLJ_2015_04_11_DIsk_3.txt
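     For anyone wanting to copy the routine, it boils down to something like this per disk. sdX is a placeholder, and check the flags of whichever preclear script version you run (mine was preclear_bjp.sh; the -c cycle-count flag is from the standard preclear script):

        smartctl -t long /dev/sdX          # long test #1; check the result with: smartctl -l selftest /dev/sdX
        ./preclear_bjp.sh -c 3 /dev/sdX    # 3 preclear cycles back to back
        smartctl -t long /dev/sdX          # final long test before the disk goes into the array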
  15. I did a search on your board and the NIC on it (Intel® 82579). Someone using Ubuntu had a weird driver issue. Might be worth a read - no idea if it will help anyone at all, but I thought I'd post it. http://ubuntuforums.org/showthread.php?t=2167864
  16. I guess this thread could be deleted. I just looked at (and accidentally responded to, as I didn't realise it was from 2014) a poll I didn't know was going on, which indicates this docker is being developed and is a work in progress. I don't know by whom, but according to NAS' poll it is!
  17. Will do. The fans are arriving tomorrow. Given I am running a long S.M.A.R.T test as Weebo advised, when that's done I'll install the fans, so when I run a Parity Check I'll have them in the case, and then I'll post the results.
  18. Hi All, As I have mentioned in previous posts in this part of the forum, I have not yet installed v6, but I am looking at how I will set things up around it and through it. I use LogMeIn Hamachi to help me provide support to my friends and family etc. I thought about putting the plugin on v5 but couldn't get it to work, and wasn't really into giving too much time to it, so I gave up. I just came across this: https://registry.hub.docker.com/u/gfjardim/hamachi/ Has anyone used it, or does anyone know if Unraid has everything this needs to run enabled? Ta Daniel
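     In case it saves someone a click: Hamachi containers generally need the tun device and network-admin rights, so I'd expect the run command to be along these lines. The flags and paths here are my guesses - the image's page will have the real ones:

        docker run -d --name hamachi \
            --device /dev/net/tun --cap-add NET_ADMIN \
            -v /mnt/cache/appdata/hamachi:/config \
            gfjardim/hamachi

     If the container can create its tun interface, joining the Hamachi network should then just be the usual login/join steps inside the container.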
  19. My suggestion would be for a final SMART long test on each drive. I'm going to take your suggestion, thank you. If my memory serves me correctly the command is: smartctl -t long /dev/sdx Right? EDIT1: Yes, I was right. Found the info here: http://lime-technology.com/wiki/index.php/Troubleshooting#Obtaining_a_SMART_report Long tests on the drives started simultaneously in separate ttys. "Please wait 948 minutes for the test to complete". LOL! EDIT2: For anyone reading this in the future and copying what I have done, the command I wrote above executes the test in the background. To execute the test in foreground (captive) mode, the "-C" switch must be added to the command, like so: smartctl -t long -C /dev/sdx If you do execute the test in the background as I did (without the -C switch), it is nice to know the progress. So I did this for all drives: smartctl -l selftest /dev/sdx Note: I changed sdx to sdb, sdc and sdd in 3 separate commands to check the status of the 3 drives. Example output:

        === START OF READ SMART DATA SECTION ===
        SMART Self-test log structure revision number 1
        Num  Test_Description    Status                         Remaining  LifeTime(hours)  LBA_of_first_error
        # 1  Extended offline    Self-test routine in progress     80%          190         -
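     Rather than re-typing the status command per drive, a small loop does the same in one go (device names assumed to be sdb through sdd, as in my box):

        for d in sdb sdc sdd; do
          echo "== /dev/$d =="
          smartctl -l selftest /dev/$d | grep -m1 'in progress\|Completed'
        done

     grep -m1 just prints the first matching line of the self-test log, i.e. the most recent test for that drive.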
  20. Well, I have just finished a preclear of my 8TB drives with the custom skirt in place. As a means of bringing my test together, I have summarised.

     ASRock C2550D4I Mini ITX Motherboard
     Kingston 4GB (1x4GB) PC-12800 (1600MHz) ECC Unbuffered DDR3L ValueRAM, CL11, 1.35V, Single Stick
     Silverstone ST45SF-G 450W SFX Form Factor Power Supply
     3 x Seagate 8TB Archive HDD, SATA III, 5900RPM, 128MB (in the top 3 bays)

     Configuration: Nothing but trays in the remaining bays; the case is running stock fans in the default configuration (as attached) with all filters attached as designed.

     Location: My desktop, next to a window which gets some sun throughout the day (so some UV exposure), BUT the ambient temperature outside over the test has not exceeded 29C. For 8 hours per day (night time) I have had the heating in the house set to 27C.

     2 preclear runs, each taking ~58 Hours (give or take 40 minutes over the 3 drives).

     Run 1: Without Custom Skirt:
     Disk 1: Start Temperature: 42C, Finish Temperature: 41C, Peak Temperature*: 44C
     Disk 2: Start Temperature: 39C, Finish Temperature: 39C, Peak Temperature*: 41C
     Disk 3: Start Temperature: 37C, Finish Temperature: 36C, Peak Temperature*: 40C
     Motherboard Temperature: 35C
     CPU Temperature: 35C
     *Note: Peak temperatures for Run 1 were captured from S.M.A.R.T reports at the end of the run, as I knew these were the highest temperatures these new disks had experienced.

     Run 2: With Custom Skirt:
     Disk 1: Start Temperature: 31C, Finish Temperature: 35C, Peak Temperature**: 36C
     Disk 2: Start Temperature: 30C, Finish Temperature: 33C, Peak Temperature**: 34C
     Disk 3: Start Temperature: 29C, Finish Temperature: 32C, Peak Temperature**: 33C
     Motherboard Temperature: 39C
     CPU Temperature: 38C
     **Note: Peak temperatures for Run 2 were captured through observations recorded at 6-hour intervals, as I knew I wouldn't be able to rely on S.M.A.R.T reports.

     I am still going to put the Noctua NF-F12 fans in there. As Gary mentioned before: a 1500rpm unit like this would "likely" provide about 25% more airflow than the stock fans => and a high quality unit like the Noctua NF-F12 will run at 1500rpm with the same 22dBA noise level the stock fans have at 1200rpm. And I am more than happy with the noise level of the case, which is pretty much silent to my ears (and that is with it sat next to me on my desktop - not lower down on a shelf near the floor, where it will be when I'm done).

     To conclude, I love this case. If I had to go back and make the decision to buy it again, I would without hesitation. I would have lived with the temperatures noted in Run 1, but the addition of the custom skirt has made them much more acceptable. I feel that adding more drives (through to capacity) will not materially impact the overall temperature of the case, and that (barring 1 or 2 drives) temperatures won't rise more than 1C or 2C. With the addition of the Noctua fans mentioned above, I think I'll shave another 2C off the temperatures reported in Run 2. Based on this test I will not be "modding" the drive cage or the case further with additional holes. I feel that adding holes does not guarantee better results, and it has even been suggested to me that this "could" have a negative impact on the temperatures (although that is so far unsubstantiated at the time of this post). I'm also not going to replace the cardboard skirt with an aluminium one, because I don't feel there is a need.

     I feel Silverstone have done a good job with this case, and with a little airflow control the temperatures are great for such a nice SFF case. I hope this is helpful. If anyone wants any more details or information that I have not recorded, please let me know.
  21. That's exactly as I would have expected. After a preclear the drives all look good. Interesting that one of the three survived the power cut without data loss. But the complete write of zeroes by the preclear corrected that, and now all three report clean. Hopefully you'll have the same error-free results after cycle 3. Woohoo, cycle 3 has finished. Well, cycle 3.5. It has only taken a week! LOL! All looking good, thankfully! No Current Pending Sectors or Reallocated Sectors. No additional S.M.A.R.T errors. As far as I am concerned these are all good! ~200 hours of constant work is a nice workout! I'm very glad the power loss didn't seem to have an impact on the drives. Next step: configure an array and start backing up my data!

     Disk 1
     ========================================================================1.15b
     == invoked as: ./preclear_bjp.sh -f -A /dev/sdb
     == ST8000AS0002-1NA17Z   Z8402JP1
     == Disk /dev/sdb has been successfully precleared
     == with a starting sector of 1
     == Ran 1 cycle
     ==
     == Using :Read block size = 1000448 Bytes
     == Last Cycle's Pre Read Time  : 20:11:14 (110 MB/s)
     == Last Cycle's Zeroing time   : 16:17:40 (136 MB/s)
     == Last Cycle's Post Read Time : 21:13:08 (104 MB/s)
     == Last Cycle's Total Time     : 57:43:07
     ==
     == Total Elapsed Time 57:43:07
     ==
     == Disk Start Temperature: 31C
     ==
     == Current Disk Temperature: 35C,
     ==
     ============================================================================
     ** Changed attributes in files: /tmp/smart_start_sdb /tmp/smart_finish_sdb
                   ATTRIBUTE = NEW_VAL OLD_VAL FAILURE_THRESHOLD STATUS      RAW_VALUE
         Raw_Read_Error_Rate =   118     112     6               ok          200546392
             Seek_Error_Rate =    77      75    30               ok          51612607
            Spin_Retry_Count =   100     100    97               near_thresh 0
            End-to-End_Error =   100     100    99               near_thresh 0
     Airflow_Temperature_Cel =    65      69    45               near_thresh 35
         Temperature_Celsius =    35      31     0               ok          35
      Hardware_ECC_Recovered =   118     112     0               ok          200546392
     No SMART attributes are FAILING_NOW
     0 sectors were pending re-allocation before the start of the preclear.
     0 sectors were pending re-allocation after pre-read in cycle 1 of 1.
     0 sectors were pending re-allocation after zero of disk in cycle 1 of 1.
     0 sectors are pending re-allocation at the end of the preclear,
       the number of sectors pending re-allocation did not change.
     0 sectors had been re-allocated before the start of the preclear.
     0 sectors are re-allocated at the end of the preclear,
       the number of sectors re-allocated did not change.
     ============================================================================

     Disk 2
     ========================================================================1.15b
     == invoked as: ./preclear_bjp.sh -f -A /dev/sdc
     == ST8000AS0002-1NA17Z   Z8402L5T
     == Disk /dev/sdc has been successfully precleared
     == with a starting sector of 1
     == Ran 1 cycle
     ==
     == Using :Read block size = 1000448 Bytes
     == Last Cycle's Pre Read Time  : 19:55:01 (111 MB/s)
     == Last Cycle's Zeroing time   : 16:06:33 (137 MB/s)
     == Last Cycle's Post Read Time : 20:59:31 (105 MB/s)
     == Last Cycle's Total Time     : 57:02:30
     ==
     == Total Elapsed Time 57:02:30
     ==
     == Disk Start Temperature: 30C
     ==
     == Current Disk Temperature: 33C,
     ==
     ============================================================================
     ** Changed attributes in files: /tmp/smart_start_sdc /tmp/smart_finish_sdc
                   ATTRIBUTE = NEW_VAL OLD_VAL FAILURE_THRESHOLD STATUS      RAW_VALUE
         Raw_Read_Error_Rate =   120     115     6               ok          241352488
             Seek_Error_Rate =    77      75    30               ok          51594284
            Spin_Retry_Count =   100     100    97               near_thresh 0
            End-to-End_Error =   100     100    99               near_thresh 0
     Airflow_Temperature_Cel =    67      70    45               near_thresh 33
         Temperature_Celsius =    33      30     0               ok          33
      Hardware_ECC_Recovered =   120     115     0               ok          241352488
     No SMART attributes are FAILING_NOW
     0 sectors were pending re-allocation before the start of the preclear.
     0 sectors were pending re-allocation after pre-read in cycle 1 of 1.
     0 sectors were pending re-allocation after zero of disk in cycle 1 of 1.
     0 sectors are pending re-allocation at the end of the preclear,
       the number of sectors pending re-allocation did not change.
     0 sectors had been re-allocated before the start of the preclear.
     0 sectors are re-allocated at the end of the preclear,
       the number of sectors re-allocated did not change.
     ============================================================================

     Disk 3
     ========================================================================1.15b
     == invoked as: ./preclear_bjp.sh -f -A /dev/sdd
     == ST8000AS0002-1NA17Z   Z8402RLJ
     == Disk /dev/sdd has been successfully precleared
     == with a starting sector of 1
     == Ran 1 cycle
     ==
     == Using :Read block size = 1000448 Bytes
     == Last Cycle's Pre Read Time  : 19:51:02 (111 MB/s)
     == Last Cycle's Zeroing time   : 16:27:28 (135 MB/s)
     == Last Cycle's Post Read Time : 21:05:08 (105 MB/s)
     == Last Cycle's Total Time     : 57:24:42
     ==
     == Total Elapsed Time 57:24:42
     ==
     == Disk Start Temperature: 29C
     ==
     == Current Disk Temperature: 32C,
     ==
     ============================================================================
     ** Changed attributes in files: /tmp/smart_start_sdd /tmp/smart_finish_sdd
                   ATTRIBUTE = NEW_VAL OLD_VAL FAILURE_THRESHOLD STATUS      RAW_VALUE
         Raw_Read_Error_Rate =   109     115     6               ok          23575200
             Seek_Error_Rate =    77      75    30               ok          51389300
            Spin_Retry_Count =   100     100    97               near_thresh 0
            End-to-End_Error =   100     100    99               near_thresh 0
     Airflow_Temperature_Cel =    68      71    45               near_thresh 32
         Temperature_Celsius =    32      29     0               ok          32
      Hardware_ECC_Recovered =   109     115     0               ok          23575200
     No SMART attributes are FAILING_NOW
     0 sectors were pending re-allocation before the start of the preclear.
     0 sectors were pending re-allocation after pre-read in cycle 1 of 1.
     0 sectors were pending re-allocation after zero of disk in cycle 1 of 1.
     0 sectors are pending re-allocation at the end of the preclear,
       the number of sectors pending re-allocation did not change.
     0 sectors had been re-allocated before the start of the preclear.
     0 sectors are re-allocated at the end of the preclear,
       the number of sectors re-allocated did not change.
     ============================================================================

     preclear_finish_Z8402JP1_2015-04-10_Disk_1.txt preclear_finish_Z8402L5T_2015-04-10_Disk_2.txt preclear_finish_Z8402RLJ_2015-04-10_Disk_3.txt
  22. That's the biggest piece of nonsense I've ever heard. I've been using 3rd party apps to change and manage foreign movies for years. Truth is XBMC and Kodi never catch everything properly, and hence a manual scrape via a 3rd party app is necessary. Also, a lot of these apps are written specifically with XBMC/Kodi in mind. Tiny Media Manager is a perfect example. Nonsense? Not in my experience. I have seen 3rd party tools come and go, or fail to keep pace with new XBMC developments, which gets frustrating. In addition, as a lover of arthouse and non-English movies, I've not seen the XBMC engine make a mistake now for years.
  23. I like the idea of separate cache and app (VM) drives. I guess for a separate app drive you need to mount another drive outside of the array. In v5 the method I have used to mount a drive outside the array is SNAP. This post details how to do it with the plugin: http://lime-technology.com/forum/index.php?topic=29519.0 I haven't installed v6.x (in anger) yet, but I found this post you might want to read: http://lime-technology.com/forum/index.php?topic=31594.0
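     For reference, stripped of what the SNAP plugin automates (mounting on array start, unmounting on stop, share export), mounting a non-array drive by hand on v5 is roughly this - device name and mount point are placeholders:

        mkdir -p /mnt/apps
        mount -t reiserfs /dev/sdX1 /mnt/apps    # v5-era unRAID disks are typically ReiserFS; use whatever fs your drive actually has

     The catch is doing this at every boot and unmounting cleanly before the array stops, which is exactly the housekeeping SNAP exists for.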
  24. Fair enough. That argument makes sense, but like most things it's dependent on what you're after. There is a cost (whether it be $$, power, heat etc) for anything - in this case speed. I tend to agree that 5400rpm is fast enough, as I have only WD Reds atm and am even going to shift to Greens (which have a smaller cache), and I feel what I have is lightning! That said, for the right price $/TB I'd buy these (wish I lived in the US), BUT price would be the only variable, as I'd overlook the (cost) increase in power consumption and the (benefit) extra speed. Was just interested.