atom

Members

  • Posts: 31
  • Joined
  • Last visited
  • Gender: Undisclosed
  • Location: USA

  1. I'm running unRAID v6.2.4 and trying to mount a share via NFSv4 from a Linux client, but it doesn't seem to be enabled/supported.

         # mount -t nfs4 nas:/mnt/disk1 /mnt/disk1
         mount.nfs4: Protocol not supported

         # rpcinfo -s nas
            program version(s) netid(s)        service     owner
             100000 2,3,4      local,udp,tcp   portmapper  superuser
             100024 1          tcp,udp         status      32
             100003 3          udp,tcp         nfs         superuser
             100021 4,3,1      tcp,udp         nlockmgr    superuser
             100005 3,2,1      tcp,udp         mountd      superuser

     Note that the nfs service only registers version 3. EDIT: So it looks like NFSv4 requires a bit of work to be functional on Slackware: http://www.linuxquestions.org/questions/slackware-14/does-slackware-14-1-or-current-have-nfsv4-support-4175550162/ Assuming that's correct, can a mod/dev look into this? Or correct me in the event I'm wrong. (A client-side workaround is sketched after these posts.)
  2. So I guess there is no unRAID package for rsnapshot? Is the install simple enough that one isn't needed, or is a package something people would be interested in? (A minimal configuration sketch follows after these posts.)
  3. How did you get this display? I know it is part of the apcupsd package I downloaded for Windows, but I am letting unRAID be the "master" in my configuration. Are you getting this status monitor CGI from Windows, or do you somehow have unRAID web-serving this CGI?

     It's supplied by the apcupsd-cgi package, which I'm running from my Ubuntu XBMC box: http://www.apcupsd.com/manual/manual.html#apcupsd-network-monitoring-cgi-programs (A setup sketch follows after these posts.)
  4. Update on WOL... I had some time to play around with it and couldn't get it to work. At first I couldn't enter S3. Did some reading and realized I had to enable PME in the BIOS. I was then able to enter S3 but unable to wake correctly with a magic packet: the system seems to power back up from S3, but there is no video, no response from the keyboard, and the NIC doesn't come back up (I was unable to ping it). Same issue described here: http://lime-technology.com/forum/index.php?topic=21019.msg187740#msg187740. WOL works from S5, however. Not really as useful as S3 though. I saw mention of an unRAID build with an 8169 NIC driver, but I'm not sure if that resolves the issue. In the case of that thread, the user tried an Intel NIC and was still unable to get WOL working with S3. It seems some WOL just won't work with some MBs. If you really want WOL working, I would look for a MB that is known to work with it. (The usual OS-side checks are sketched after these posts.)
  5. For the most part I see: "USB Connectivity: HID compliant USB port enables full integration with built-in power management and auto shutdown features of Windows, Linux and Mac OS X." Whether they work with apcupsd is another question. If not done already, I would look into the Linux compatibility. (A typical apcupsd USB configuration is sketched after these posts.)
  6. I'm not a power supply expert... However, my best answer to your question would be: availability, quality, and price. All three of the previously mentioned power supplies are readily available, of known good quality, and fairly priced. I really don't see any benefit to using the power supply you mentioned. The PSU dimensions might also be an issue depending on your case; although a little tight, my PSU fits in mine. And as previously mentioned, there's not much power-saving benefit to running a 250W as opposed to a 400W PSU if they are both highly efficient. WOL and standby I've never used on any of my computers, and my NAS is no exception. I run it 24/7 so I've never needed to look into it, but I'll see what I can find out. I'm more concerned about noise than power, and my setup is very quiet...
  7. I wouldn't think you necessarily need a UPS with the same wattage as your PSU unless you're actually maxing out your PSU. If you really want to know how much capacity you'll need, you can try http://www.apc.com/tools/ups_selector/index.cfm. I have an APC BE750G Power Saving Battery Back-UPS. My NAS (in my sig), Atom HTPC, DishTV receiver, and 55" Plasma TV are running off of it, and it's more than enough to shut down everything gracefully. This is what the load usage looks like with the NAS, HTPC, and Dish receiver. (A quick way to read the live load from apcupsd is sketched after these posts.)
  8. The PSU only draws as much power as it needs. How much it draws from the wall for a given load depends on the energy efficiency of the PSU. All our PSU picks have an "80 PLUS" rating, meaning they are at least 80% energy efficient; in other words, at least 80% of the power drawn from the AC wall outlet is converted to DC power for the computer system.

         eskro666: SeaSonic SSR-360GP 360W, 80 PLUS GOLD certified, single +12V rail, ATX12V
         me:       CORSAIR Builder Series CX430 430W, ATX12V v2.3, 80 PLUS BRONZE certified, Active PFC
         ccruzen:  CORSAIR Builder Series CX500 500W, ATX12V v2.3, 80 PLUS BRONZE certified, Active PFC

     The lowest-wattage PSU on Newegg is the 275W APEX SL-275TFX TFX12V, which only has an efficiency rating of 65%. So any of our PSUs draws less wall power than that 275W PSU for a given load. At least that's my understanding; anyone feel free to correct me if I'm wrong. (A worked example of the math follows after these posts.)
  9. 3-cycle preclear on one drive:

         ========================================================================1.13
         == invoked as: /boot/preclear_disk.sh -c 3 /dev/sdd
         == WDC WD30EZRX-00MMMB0   WD-WCAWZ2350602
         == Disk /dev/sdd has been successfully precleared
         == with a starting sector of 1
         == Ran 3 cycles
         ==
         == Using :Read block size = 8225280 Bytes
         == Last Cycle's Pre Read Time  : 9:19:38 (89 MB/s)
         == Last Cycle's Zeroing time   : 8:39:37 (96 MB/s)
         == Last Cycle's Post Read Time : 28:37:00 (29 MB/s)
         == Last Cycle's Total Time     : 37:17:39
         ==
         == Total Elapsed Time 123:17:21

     Parity sync @ 8h40m:

         Nov 24 21:10:54 nas kernel: mdcmd (17): check NOCORRECT
         Nov 24 21:10:54 nas kernel: md: recovery thread woken up ...
         Nov 24 21:10:54 nas kernel: md: recovery thread syncing parity disk ...
         Nov 24 21:10:54 nas kernel: md: using 1536k window, over a total of 2930266532 blocks.
         Nov 25 05:53:01 nas kernel: md: sync done. time=31327sec

     Userscripts -> Disk Speed Test (sdd is the parity drive and a Parity-Check is running):

         /dev/sda:
          Timing cached reads:   1500 MB in 2.00 seconds = 750.06 MB/sec
          Timing buffered disk reads:  112 MB in 3.13 seconds =  35.74 MB/sec
         /dev/sdb:
          Timing cached reads:   1446 MB in 2.00 seconds = 723.40 MB/sec
          Timing buffered disk reads:  116 MB in 3.00 seconds =  38.63 MB/sec
         /dev/sdd:
          Timing cached reads:   1494 MB in 2.00 seconds = 746.82 MB/sec
          Timing buffered disk reads:   50 MB in 3.15 seconds =  15.88 MB/sec

     Copy from the HTPC box to an unRAID user share mounted via NFS, over a gigabit connection. Parity is on and a Parity-Check is running; not sure how much a Parity-Check affects write speed.

         $ rsync Batman.The.Brave.and.the.Bold.S02E01.720p.WEB-DL.AAC2.0.AVC-TCW.mkv /media/TV\ Shows/
         sending incremental file list
         Batman.The.Brave.and.the.Bold.S02E01.720p.WEB-DL.AAC2.0.AVC-TCW.mkv
             733.41M 100%   10.21MB/s    0:01:08 (xfer#1, to-check=0/1)

         sent 733.50M bytes  received 31 bytes  10.26M bytes/sec
         total size is 733.41M  speedup is 1.00

     Let me know if you want me to run any other commands.
  10. You say that post read is comparing values. So what is the defining factor in post-read speed? CPU? If it is, that would make sense, since my MB (ASUS C60M1-I) has a SATA3 controller but a weak CPU. It has to perform more operations on a post read, so a slower CPU would be slower than a faster one. But even increasing the processing speed can only do so much. So are my results and speeds OK?
  11. You say that post read is comparing values. So what is the defining factor in post-read speed? CPU? If it is, that would make sense, since my MB (ASUS C60M1-I) has a SATA3 controller but a weak CPU.
  12. Is it normal for the post-read speed to be 1/3 that of the pre-read? This is the 2nd preclear of this drive. The first time I was running 2 simultaneous preclears of two of the same drives; these results are from a single pass of a single preclear. Other users of the same drive are reporting preclear times of ~36h. Is this something I should be worried about?

          ========================================================================1.13
          == invoked as: ./preclear_disk.sh /dev/sda
          == WDC WD30EZRX-00MMMB0   WD-WMAWZ0333927
          == Disk /dev/sda has been successfully precleared
          == with a starting sector of 1
          == Ran 1 cycle
          ==
          == Using :Read block size = 8225280 Bytes
          == Last Cycle's Pre Read Time  : 9:20:54 (89 MB/s)
          == Last Cycle's Zeroing time   : 8:27:11 (98 MB/s)
          == Last Cycle's Post Read Time : 27:30:18 (30 MB/s)
          == Last Cycle's Total Time     : 45:19:25
          ==
          == Total Elapsed Time 45:19:25
          ==
          == Disk Start Temperature: 31C
          ==
          == Current Disk Temperature: 30C,
          ==
          ============================================================================
          ** Changed attributes in files: /tmp/smart_start_sda  /tmp/smart_finish_sda
              ATTRIBUTE             NEW_VAL OLD_VAL FAILURE_THRESHOLD STATUS RAW_VALUE
              Temperature_Celsius = 122     121     0                 ok     30
          No SMART attributes are FAILING_NOW

          0 sectors were pending re-allocation before the start of the preclear.
          0 sectors were pending re-allocation after pre-read in cycle 1 of 1.
          0 sectors were pending re-allocation after zero of disk in cycle 1 of 1.
          0 sectors are pending re-allocation at the end of the preclear,
            the number of sectors pending re-allocation did not change.
          0 sectors had been re-allocated before the start of the preclear.
          0 sectors are re-allocated at the end of the preclear,
            the number of sectors re-allocated did not change.
          ============================================================================
  13. What model 3TB drive? And what version of unRAID? Unfortunately I don't have any other boxes to test on. The preclears finally finished at around 46-48h each. The problem seems to be the post-read phase of the preclear. I'm preclearing a single drive now and the console shows 112 MB/s but unMENU shows 33 MB/s; I'll have to wait until it finishes to see the final time. As for RMA, I'm now worried since I ordered from Amazon. It doesn't appear to be an issue with the Asus C60M1-I per ccruzen, but I'm just not sure how to figure out what the issue is. Anyway, I'll post in the other threads about WD30EZRXs and preclearing so I don't muck up this thread anymore.
  14. I just did a single preclear at 45h. Looks like the post read is what's slow. So is this necessarily an issue?

          ========================================================================1.13
          == invoked as: ./preclear_disk.sh /dev/sda
          == WDC WD30EZRX-00MMMB0   WD-WMAWZ0333927
          == Disk /dev/sda has been successfully precleared
          == with a starting sector of 1
          == Ran 1 cycle
          ==
          == Using :Read block size = 8225280 Bytes
          == Last Cycle's Pre Read Time  : 9:20:54 (89 MB/s)
          == Last Cycle's Zeroing time   : 8:27:11 (98 MB/s)
          == Last Cycle's Post Read Time : 27:30:18 (30 MB/s)
          == Last Cycle's Total Time     : 45:19:25
          ==
          == Total Elapsed Time 45:19:25
          ==
          == Disk Start Temperature: 31C
          ==
          == Current Disk Temperature: 30C,
          ==
          ============================================================================
          ** Changed attributes in files: /tmp/smart_start_sda  /tmp/smart_finish_sda
              ATTRIBUTE             NEW_VAL OLD_VAL FAILURE_THRESHOLD STATUS RAW_VALUE
              Temperature_Celsius = 122     121     0                 ok     30
          No SMART attributes are FAILING_NOW

          0 sectors were pending re-allocation before the start of the preclear.
          0 sectors were pending re-allocation after pre-read in cycle 1 of 1.
          0 sectors were pending re-allocation after zero of disk in cycle 1 of 1.
          0 sectors are pending re-allocation at the end of the preclear,
            the number of sectors pending re-allocation did not change.
          0 sectors had been re-allocated before the start of the preclear.
          0 sectors are re-allocated at the end of the preclear,
            the number of sectors re-allocated did not change.
          ============================================================================

      EDIT: Results from a 3-cycle run are faster...?

          ========================================================================1.13
          == invoked as: /boot/preclear_disk.sh -c 3 /dev/sdd
          == WDC WD30EZRX-00MMMB0   WD-WCAWZ2350602
          == Disk /dev/sdd has been successfully precleared
          == with a starting sector of 1
          == Ran 3 cycles
          ==
          == Using :Read block size = 8225280 Bytes
          == Last Cycle's Pre Read Time  : 9:19:38 (89 MB/s)
          == Last Cycle's Zeroing time   : 8:39:37 (96 MB/s)
          == Last Cycle's Post Read Time : 28:37:00 (29 MB/s)
          == Last Cycle's Total Time     : 37:17:39
          ==
          == Total Elapsed Time 123:17:21
  15. ccruzen, did you notice any problems with simultaneous preclears running slow? I finally got my setup and I'm in the preclear phase, currently doing two 3TB drives (WD30EZRX) at the same time. From some other threads it appears to be running a little slow: 46+ hours and it hasn't finished the first cycle.
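
For the NFSv4 problem in post 1, a minimal client-side sketch, assuming the server only advertises NFSv3 (as the rpcinfo output suggests). The hostname nas and the /mnt/disk1 path come from that post; the explicit vers=3 mount is only a workaround until the server gains NFSv4 support.

    # Check which NFS versions the server registers with rpcbind:
    rpcinfo -s nas | grep -w nfs

    # Fall back to an explicit NFSv3 mount for now:
    mount -t nfs -o vers=3 nas:/mnt/disk1 /mnt/disk1

    # Confirm the negotiated version from the client side:
    nfsstat -m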
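
For the rsnapshot question in post 2, a hypothetical minimal configuration, assuming rsnapshot is already installed from a Slackware package; the snapshot root /mnt/cache/backups, the retain counts, and the /mnt/user/ source are illustrative, not recommendations.

    # /etc/rsnapshot.conf -- fields MUST be separated by TABs, not spaces:
    #   config_version   1.2
    #   snapshot_root    /mnt/cache/backups/
    #   retain    daily   7
    #   retain    weekly  4
    #   backup    /mnt/user/    localhost/

    # Validate the config, dry-run, then do a real daily rotation (cron-able):
    rsnapshot configtest
    rsnapshot -t daily
    rsnapshot daily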
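
For the monitoring question in post 3, a sketch of how the apcupsd network CGI setup is usually wired together, assuming apcupsd runs on the box attached to the UPS and the apcupsd-cgi package plus a web server run on another Linux box; the file paths follow common Debian/Ubuntu packaging and may differ elsewhere.

    # On the UPS "master" -- /etc/apcupsd/apcupsd.conf:
    #   NETSERVER on       # expose status via apcupsd's Network Information Server
    #   NISIP 0.0.0.0      # listen on all interfaces
    #   NISPORT 3551       # default NIS port

    # On the box serving the CGI pages -- /etc/apcupsd/hosts.conf:
    #   MONITOR nas "unRAID server"

    # Quick sanity check from any host that can reach port 3551:
    apcaccess status nas:3551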
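
For the WOL troubleshooting in post 4, the usual OS-side checks, assuming a single NIC named eth0; this only covers the Linux side, and the BIOS PME setting and NIC driver still have to cooperate. The MAC address below is made up.

    # See whether the driver claims magic-packet support ("g" under Wake-on):
    ethtool eth0 | grep -i wake-on

    # Enable magic-packet wake (often reset at boot, so re-apply from a startup script):
    ethtool -s eth0 wol g

    # From another machine, send the magic packet to the server's NIC:
    wakeonlan 00:11:22:33:44:55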
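
For the USB HID question in post 5, the way a USB UPS is typically declared to apcupsd on Linux; these are stock apcupsd.conf directives, with DEVICE left blank so the USB UPS is auto-detected.

    # /etc/apcupsd/apcupsd.conf:
    #   UPSCABLE usb
    #   UPSTYPE usb
    #   DEVICE

    # After restarting apcupsd, confirm the UPS is detected and reporting:
    apcaccess status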
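
For the sizing discussion in post 7, the live load and estimated runtime can be read straight from apcupsd rather than estimated from PSU ratings; the field names are standard apcupsd status keys, though not every UPS model reports all of them.

    apcaccess status | grep -E 'LOADPCT|TIMELEFT|BCHARGE|NOMPOWER'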
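
For the efficiency reasoning in post 8, a worked example using an illustrative 150 W DC load (the load figure is made up; the 80% and 65% efficiencies are the ones quoted in the post).

    #   wall draw = DC load / efficiency
    #   80 PLUS PSU (>= 80%):  150 W / 0.80 = 187.5 W at the outlet
    #   65% efficient PSU:     150 W / 0.65 = ~230.8 W at the outlet
    # The same arithmetic in shell:
    echo "scale=2; 150/0.80; 150/0.65" | bc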