doron

Community Developer

Everything posted by doron

  1. Um, these are in fact SAS ports. They can (as SAS ports do) accept SATA drives as well. I can't see any reason why it wouldn't work (and I plan to do exactly that when the ports on my X10SL7 run out).
  2. Very good. Don't. If you need performance, and it sounds as if you do, look elsewhere. The 500Mb/s variety of powerline (case in point: Zyxel "HD" boxes) would deliver perhaps 1/10 (one tenth!) of that as a sustained rate, and typically even less. The 500Mb/s figure is a nominal signaling rate, good primarily for marketing, and certainly not an effective rate. (All rates here are bits per second.) I tested mine (PLA4205) on two adjacent power outlets (3 inches apart), just to get a baseline, and could not get more than 50Mb/s. I thought I had a faulty set so I contacted support. They were very friendly and helpful, and the tech guy actually responded with something like "50Mb/s? That's actually very cool; in our tests we didn't get more than 30-40". This is not about Zyxel being bad - they're actually pretty good, relatively speaking - it's about that technology and the overhyped numbers used to market it.
  3. I never realized that. What is this based on? The manual seemed to be silent about this. (I connected my CPU fan to the closest fan connector and everything works like a charm, but I'm still curious).
  4. Bumping this thread out of oblivion... So, are there plans to add this into an upcoming version of unRAID? Thanks!
  5. I don't see why it should. You set security for each share separately. The whole tree is one share, and the subfolder is another. You can experiment with it and see what happens.
  6. You can create symbolic links to that folder at the top of the filesystem of every disk it lives on, and then share that:
    cd /mnt/disk2
    ln -s some/deep/folder folder
    cd /mnt/disk4
    ln -s some/deep/folder folder
    ...
then configure the share via the web GUI. Haven't tried that on my system but I can't see why it won't work. EDIT: Even simpler, you can do:
    cd /mnt/user
    ln -s some/deep/folder folder
then configure the share via the GUI.
  7. I don't know why it should not work perfectly (standard disclaimer applies). I'm using KVR13E9/8I, which works flawlessly. The memory selector on Kingston's website recommends the one you pointed out for this mobo.
  8. Yes, absolutely (I assume you're asking about ESXi). In fact you can do it in two different ways:
1. Pass the whole controller through to the unRAID VM (this is, according to common wisdom on this forum, the recommended way).
2. Pass the specific drives through, using ESXi RDM (this is how I do it on my system, since I need one of the drives on the controller to serve as a datastore).
I tested both methods; both work very well. A sketch of the RDM route follows.
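For the record, here is a minimal sketch of the RDM route, done from the ESXi host shell. The device identifier, datastore and paths below are placeholders I made up; adjust them to your own setup.
    # List the physical disks to find the device identifier of the drive you want to map
    ls /vmfs/devices/disks/
    # Create a physical-compatibility RDM mapping file on an existing datastore
    vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE_DRIVE_ID /vmfs/volumes/datastore1/unRAID/disk1-rdm.vmdk
    # Then attach disk1-rdm.vmdk to the unRAID VM as an existing virtual disk
    # (vSphere client: Edit Settings -> Add -> Hard Disk -> Use an existing virtual disk)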
  9. Not sure I can actually help, but it does seem like it all stems from the crash of emhttp. Once it fails, it can't be brought back up, and the powerdown process (which shuts down the array cleanly) depends on emhttp to actually do that. So after an unclean shutdown of the array, it will come back up complaining that it wasn't shut down cleanly. The important point is to see whether the parity check actually finds parity errors. If not, then you should be okay, methinks. One more data point: I have been running unRAID 5.0 under ESXi 5.5 with no issues for some time now. It runs happily, VM tools allow for clean shutdown and restart from vSphere, all good. So without speculating on the reason for the first emhttp crash, if you finish the parity check without problems, you may well be just fine.
  10. The script logic for skipping disks is faulty at the edges. If the last disk (highest device name) needs to be skipped, it still ends up in the DiskID array, so the script goes on to test it and fails, even though it declares the disk will be skipped. Obligatory patch:
--- diskspeed.sh.v2.2  2014-01-15 00:00:42.000000000 +0200
+++ diskspeed.sh       2014-01-15 22:52:12.000000000 +0200
@@ -172,7 +172,7 @@
 CurrLine=( $line )
 tmp1=${CurrLine[1]}
 tmp2=${tmp1:5:3}
-DiskID[$DriveCount]=$tmp2
+CurrDiskID=$tmp2
 LastDrive=$tmp2
 tmp1=${CurrLine[2]}
 i=$(expr index "$tmp1" ".")
@@ -185,7 +185,7 @@
 tmp3=$tmp1
 fi
 # Identify if the current disk has been mounted and look for UNRAID files if so
-tmp4=$(mount -l | grep ${DiskID[$DriveCount]})
+tmp4=$(mount -l | grep ${CurrDiskID})
 MountPoint=""
 if [ "$tmp4" != "" ];then
 mount=( $tmp4 )
@@ -196,12 +196,13 @@
 fi
 if [ "$MountPoint" == "bzimage" ];then
 DriveSkipped=1
-echo "Disk /dev/${DiskID[$DriveCount]} skipped; UNRAID boot or flash drive"
+echo "Disk /dev/${CurrDiskID} skipped; UNRAID boot or flash drive"
 else
 if [ "$tmp2" == "MB" ] || [[ $tmp3 -lt 25 ]];then
 DriveSkipped=1
-echo "Disk /dev/${DiskID[$DriveCount]} skipped for being under 25 GB"
+echo "Disk /dev/${CurrDiskID} skipped for being under 25 GB"
 else
+DiskID[$DriveCount]=$CurrDiskID
 DiskGB[$DriveCount]=$tmp3
 if [ "$tmp2" == "GB" ];then
 if [[ $tmp3 -gt 999 ]];then
@@ -217,7 +218,7 @@
 done < /tmp/inventory.txt
 rm /tmp/inventory.txt
 if [[ $DriveSkipped -eq 1 ]];then
-echo
+echo;echo
 fi
 
 #CursorUp="\033[1A"
  11. +1 (on all counts, including testing!)
  12. Seriously?! Well, from looking at your code, you must be a hell of a fast learner. Doesn't look like a bash beginner's code, at all.
  13. Ah. This is a bit of a rathole we're walking into :-) This varies from one hypervisor to the next. Running under ESXi, if you enumerate /dev/disk/by-id, you will not see those vdisks at all, and "hdparm -I" will give you a null ID section. On VirtualBox, OTOH, you will get a nice disk "Serial Number" (vbox makes one up for the device, which is kinda nice), with the disk model being "VBOX HARDDISK". Xen probably has a third variation on this, and KVM a fourth. All in all, nothing that's standard, dependable or consistent (a couple of quick ways to check what your guest sees are below). Suggest you stay with the size-based calculation. As I said, for robustness you could go with the 4th field in the "fdisk -l" output - size in bytes - and consider all the cases (excluding the uninteresting ones).
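In case anyone wants to check what identity their hypervisor actually exposes to the guest, these standard commands will show it (the device name is just an example; smartctl is only there if smartmontools is installed):
    ls -l /dev/disk/by-id/       # under ESXi, plain vdisks typically don't show up here at all
    hdparm -I /dev/sdb | head    # on an ESXi vdisk the identification section comes back empty
    smartctl -i /dev/sdb         # model/serial as reported; e.g. "VBOX HARDDISK" under VirtualBox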
  14. Thank you. Just a point of clarity: while my system (and probably that of anyone virtualizing unRAID) boots off a smallish vdisk (8GB in my case), that drive is not mounted as /boot. unRAID is smart that way - after boot, it looks for a drive whose label is UNRAID, and mounts that drive as /boot. So while it does boot from the vdisk, it immediately thereafter mounts the USB stick as /boot (which is where the nonvolatile stuff is kept, and also where the license key is kept). Bottom line: on many/most virtualized unRAIDs you will have a small vdisk used just for boot, and then a USB stick mounted as /boot. Two quick ways to confirm this on your own box are below.
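These are standard Linux commands (nothing unRAID-specific); the sdd1 value is just an example of what you might see:
    ls -l /dev/disk/by-label/UNRAID    # shows which device carries the UNRAID label (e.g. ../../sdd1)
    mount | grep /boot                 # confirms that device - not the boot vdisk - is mounted at /boot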
  15. I don't know if you decided to change the motherboard or not. In case you have not:
- I agree it is serious overkill. Most of this mobo's extreme capabilities will just go to waste.
- While PCPartPicker states the part list as "compatible", I'm not sure that's quite correct. The onboard video on that mobo depends on a CPU with Intel HD graphics, which the 1230v3 lacks. If you keep that mobo, I suggest you switch to the 1225v3, which does have the Intel graphics support.
All in all, I would suggest you take a look at the Supermicro X10SL7. It has 14 HDD ports on board, IPMI (i.e. easy to run headless), ECC memory support, and works well with the 1230v3 (this is actually the combination I have).
  16. Thanks for posting the new version! Clearly lots of hard work goes into this. Some issues with the new version:
- fdisk (on my system at least, 5.0) doesn't like GPT and spews a warning message. It doesn't interrupt the script, it's just aesthetics. Suggest adding "2> /dev/null" to the "fdisk -l" invocation.
- On my system, there's one HDD that's smaller than 10GB. It's not part of the array, so it didn't show up until now. Now, however, it breaks the script completely, as the calculations don't seem to have anticipated such a small drive. The thing is that fdisk reports its size as "8589 MB" (MB, not GB); the script assumes the number is in GB and then hell breaks loose (okay, not THAT bad). Suggest looking at the unit string output by fdisk and, if it is MB, calculating accordingly. For all I care, you can just ignore such drives and move on. (In case you're curious, this is the unRAID boot drive - since my unRAID is virtualized, and hypervisors cannot boot from USB for some god-knows-what reason.)
- During the run over that small disk, for some reason, awk spews out its input line. See below.
- Suggestion: add a new command line argument to test only the drives in the array. Or reverse the logic - some "-a" to test all drives, with array-only as the default.
This is what I get:
root@Tower:/boot/plug# ./diskspeed.sh
diskspeed.sh for UNRAID, version 2.1
By John Bartlett. Support board @ limetech: http://goo.gl/ysJeYV
Syncing disks...
WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.
WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk doesn't support GPT. Use GNU Parted.
WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util fdisk doesn't support GPT. Use GNU Parted.
/dev/sda (Disk 2): 148 MB/sec avg
/dev/sdb (Parity): 151 MB/sec avg
/dev/sdc (Disk 1): 144 MB/sec avg
SG_IO: bad/missing sense data, sb[]: 70 00 05 00 00 00 00 0a 00 00 00 00 20 00
Performance testing /dev/sdd at 0 GB (0%)
awk: BEGIN{printf("%0.0f",  * 0.10)}
Performance testing /dev/sdd at  GB (10%)
awk: BEGIN{printf("%0.0f",  * 0.20)}
Performance testing /dev/sdd at  GB (20%)
awk: BEGIN{printf("%0.0f",  * 0.30)}
Performance testing /dev/sdd at  GB (30%)
awk: BEGIN{printf("%0.0f",  * 0.40)}
Performance testing /dev/sdd at  GB (40%)
awk: BEGIN{printf("%0.0f",  * 0.50)}
Performance testing /dev/sdd at  GB (50%)
awk: BEGIN{printf("%0.0f",  * 0.60)}
Performance testing /dev/sdd at  GB (60%)
awk: BEGIN{printf("%0.0f",  * 0.70)}
Performance testing /dev/sdd at  GB (70%)
awk: BEGIN{printf("%0.0f",  * 0.80)}
Performance testing /dev/sdd at  GB (80%)
awk: BEGIN{printf("%0.0f",  * 0.90)}
Performance testing /dev/sdd at  GB (100%)
Performance testing /dev/sdd at -10 GB (hit end of disk) (100%)
Performance testing /dev/sdd at -20 GB (hit end of disk) (100%)
Performance testing /dev/sdd at -30 GB (hit end of disk) (100%)
[... the same line repeats in -10 GB steps, all the way down to -900 GB ...]
Performance testing /dev/sdd at -900 GB (hit end of disk) (100%)
^C
root@Tower:/boot/plug#
And here's a quick-n-dirty patch that solved the problems for me. Perhaps the "correct" way to fix the size issue is to look at the 4th positional field of the fdisk output - the number of bytes - and to calculate MB/GB/TB off that (a rough sketch of that approach follows the patch).
--- diskspeed.sh.orig  2014-01-14 09:14:43.000000000 +0200
+++ diskspeed.sh       2014-01-14 10:12:24.000000000 +0200
@@ -165,7 +165,7 @@
 fi
 
 # Inventory drives
-fdisk -l | grep "Disk /" > /tmp/inventory.txt
+fdisk -l 2> /dev/null | grep "Disk /" > /tmp/inventory.txt
 sort /tmp/inventory.txt -o /tmp/inventory.txt
 DriveCount=0
 LastDrive=""
@@ -175,7 +175,8 @@
 CurrLine=( $line )
 tmp1=${CurrLine[1]}
 tmp2=${tmp1:5:3}
-if [ "$tmp2" != "$FlashID" ];then
+tmpunit=${CurrLine[3]:0:2}
+if [ "$tmp2" != "$FlashID" ] && [ "$tmpunit" != "MB" ] ; then
 DiskID[$DriveCount]=$tmp2
 LastDrive=$tmp2
 tmp1=${CurrLine[2]}
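And here's the rough sketch of the byte-count approach, untested and just to show the idea. It assumes fdisk prints lines of the usual "Disk /dev/sdX: 500.1 GB, 500107862016 bytes" form, so the byte count is the field the script's bash array would see at index 4:
    while read -r line; do
        CurrLine=( $line )
        DevName=${CurrLine[1]:5:3}          # e.g. "sda" out of "/dev/sda:"
        Bytes=${CurrLine[4]}                # the byte-count field
        SizeGB=$(( Bytes / 1000000000 ))    # decimal GB, matching how fdisk reports sizes
        if [ $SizeGB -lt 25 ]; then
            echo "Disk /dev/$DevName skipped for being under 25 GB"
        else
            echo "Disk /dev/$DevName: $SizeGB GB"
        fi
    done < <(fdisk -l 2> /dev/null | grep "Disk /")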
  17. ESXi 5.5, with an updated driver to support the onboard NICs. (No extra driver is required for unRAID 5.0 and up - the built-in drivers are fine.)
  18. I'm using its sibling, the X10SL7-F. Same form factor, similar attributes, but with an LSI SAS controller on the mobo (so before adding any controller cards, you can connect 8+6 drives right into the board. Sweet). My CPU is a Xeon 1230v3. It works very well with unRAID, both on bare metal and virtualized under ESXi. No need to flash anything (even on the LSI, I run with the stock IR firmware and it works perfectly). You might consider this one... or think of it as "close enough" to the one you were looking at.
  19. In v2.0, if I use "-s" with any number other than the default, the average comes out skewed (check e.g. with -s 3; you can check with -s 33 to get a really interesting result). The average calculation seems wrong. Cumulative patch against v2.0:
--- diskspeed.sh.orig  2014-01-11 19:30:04.000000000 +0200
+++ diskspeed.sh       2014-01-11 19:50:26.000000000 +0200
@@ -151,7 +151,7 @@
 rm /tmp/diskspeed.tmp
 DiskGB=${DiskGB:32}
 DiskGB=${DiskGB/" MBytes"/""}
-DiskGB=$(($DiskGB / 1000))
+DiskGB=$(($DiskGB / 1024))
 
 SlowestTest=999999999
 LoopEnd=$(( $samples - 1 ))
@@ -257,7 +257,7 @@
 else
 drivenum="Drive $disk"
 fi
-diskavgspeed=$(($total / 10 / 1000))
+diskavgspeed=$(($total / $samples / 1000))
 echo -e "\033[1A$drivenum: $diskavgspeed MB/sec avg\033[K"
 echo
 fi
  20. Neat script! Well done, jbartlett, and thank you! A small bug (which triggers the "hit end of disk" issue) can be fixed with the following patch:
--- diskspeed.sh.orig  2014-01-11 19:30:04.000000000 +0200
+++ diskspeed.sh       2014-01-11 19:30:17.000000000 +0200
@@ -151,7 +151,7 @@
 rm /tmp/diskspeed.tmp
 DiskGB=${DiskGB:32}
 DiskGB=${DiskGB/" MBytes"/""}
-DiskGB=$(($DiskGB / 1000))
+DiskGB=$(($DiskGB / 1024))
 
 SlowestTest=999999999
 LoopEnd=$(( $samples - 1 ))
  21. This actually makes a lot of sense as well. To implement this on unRAID user shares, a block-level encryption layer would not be suitable (user shares can span several drives, so you might end up with partially encrypted shares). This would be better achieved with a file-level encryption system, such as eCryptfs or EncFS (rough sketch below). Having such a filesystem under emhttp's share system could also be very nice.
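Just to make the idea concrete, here is roughly what the EncFS variant could look like on a single data disk. This is a sketch only - the paths are made up, it assumes the encfs package is available (stock unRAID of this era doesn't ship it), and it isn't wired into emhttp in any way:
    # The encrypted backing store lives on the data disk; the clear view is mounted elsewhere
    mkdir -p /mnt/disk1/.crypt /mnt/clear/disk1
    encfs /mnt/disk1/.crypt /mnt/clear/disk1    # prompts for a passphrase; creates the store on first run
    # Files written under /mnt/clear/disk1 are stored encrypted, file by file, under /mnt/disk1/.crypt,
    # so what actually sits on the disk (and what parity protects) is ciphertext.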
  22. Pretty much, but let me make some comments. This presumes you choose TrueCrypt as your crypto layer; see below.
I'm not sure what the -N option stands for. The TC version I have does not have it. Could you be referring to the TC slot number? In that case, the version I have has this as "--slot".
Now, this stage is a bit tricky, because that's where you need to provide the crypto key. This could be a keyfile or a passphrase. The latter is often expected to be provided at mount time (manually); the former may be kept in /boot (hence allowing for automatic mount upon server boot) or also be provided manually at mount time. The choice depends on what your threat model is: if one is concerned about the entire box being lifted and taken away, then a keyfile that's on the boot flash drive will give itself away to the thief; if you're only concerned with individual drives being readable, then an on-server keyfile might just be good enough for you. So bottom line, we should allow for an optional dialog at that point, if the user so chooses. The keyfile or passphrase can be provided on the command line, if obtained via the GUI.
Next, to run this way, the unRAID kernel and bzroot will need to contain the device mapper modules and tools. (To work around this, I used "-m nokernelcrypto" on the truecrypt command line.)
And last, TrueCrypt is but one option. A second one, maybe even a bit more convenient for us, would be dm-crypt. Towards that end, the mount command would look something like:
    cryptsetup luksOpen /dev/md1 crdisk1
    mount /dev/mapper/crdisk1 /mnt/disk1
Here too, the first line carries the keying process, so the passphrase and/or keyfile are provided at that time - either server-resident (so no manual intervention) or manually (a keyfile-based sketch is below).
Yep. Or its equivalent with dm-crypt (/dev/mapper/crdisk1 in the above example). Yes, with the same alternatives.
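To make the dm-crypt route a bit more concrete, here is a minimal sketch of the keyfile variant. Device names and the keyfile path are placeholders, and the one-time luksFormat step destroys whatever is on the device, so treat this purely as an illustration:
    # One-time setup: create a random keyfile on the flash drive and format the device with it
    dd if=/dev/urandom of=/boot/config/disk1.key bs=512 count=4
    cryptsetup luksFormat /dev/md1 /boot/config/disk1.key
    # At array start: unlock with the keyfile (no dialog needed) and mount as usual
    cryptsetup luksOpen /dev/md1 crdisk1 --key-file /boot/config/disk1.key
    mount /dev/mapper/crdisk1 /mnt/disk1
    # At array stop: unmount and close the mapping
    umount /mnt/disk1
    cryptsetup luksClose crdisk1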
  23. Why would this weaken the security? Because if, for example, you encrypt disk1 (only) with this method ("under" the md), while parity and disk2..diskN remain clear, then the clear text of disk1 can be computed outright by XORing parity with the other data disks (so the encryption is compromised). If you encrypt disk1+disk2 and leave parity and disk3..diskN clear, it does become much more complex, but it still allows for decent cryptanalysis. Only if you encrypt all of the drives will you match the security of encrypting from "above" md. The point is that parity is computed at the md level, and in this alternative #3 it is computed off the clear text. If, on the other hand, the crypto layer lives above md, then regardless of whether you opt to encrypt one, two or all of your drives, parity is computed after encryption is done (i.e. on ciphertext for encrypted drives and on plain text for non-encrypted drives), so it does not give anything away. A tiny numeric illustration of the disk1-only case follows.
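This is a toy illustration, assuming plain single (XOR) parity and made-up byte values, just to show how directly the plaintext falls out:
    # One byte per "disk"; parity is computed below the crypto layer, i.e. over plaintext
    d1=$(( 0xC3 ))               # disk1's plaintext byte - the one we wanted to protect
    d2=$(( 0x5A ))               # disk2, stored in the clear
    d3=$(( 0x77 ))               # disk3, stored in the clear
    p=$(( d1 ^ d2 ^ d3 ))        # parity as the md layer would compute it
    # An attacker who can read parity plus the clear disks recovers disk1's plaintext directly:
    printf 'recovered disk1 byte = 0x%02X\n' $(( p ^ d2 ^ d3 ))     # prints 0xC3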
  24. Have you tried RDM passthru for the individual drives, instead of passing the whole controller? Might work better for you in ESXi. Just a thought.
  25. Or you could still connect the Reds to your RocketRaid controller, and have the Se drives on the LSI side?