Everything posted by madburg

  1. The size difference can be measured when all is said and done. It can't be argued that there is no difference; the question is how much more will be seen. For some of us that is difficult to measure, since we don't have, say, a 6.0 (plain 64-bit) and a 6.1 (Xen/unRAID) to compare. In the end it will be what it will be; it's Tom's product. Off butlerpeter's point: Tom, will there be two web GUIs, or one that dynamically displays options based on the boot mode selected, so the Xen/VM options don't show up if one boots plain unRAID versus Xen/unRAID Dom0?
  2. I don't disagree with that, and that's why the "apps" I run are on VMs, not on unRAID. You said it better: certain things belong in core unRAID, and if Tom doesn't add them (though he keeps pledging he will), then we have no choice but to add them ourselves; apcupsd is a perfect example.
  3. That was my understanding from Tom's first post clarifying things with this 6.0 beta. I just want to express my hope that the option remains through 6.0-final and does not get removed; that flexibility would be greatly appreciated. I'm also thankful that ssh, mailx, and other improvements came with the 64-bit version, so I no longer need to install them as plugins. I have expressed in the past that when Tom adds these tidbits himself, I feel better knowing that someone who knows, owns, and supports the product did it right (Linux is not my thing; I add just what I need to get the proper functionality of a NAS).
  4. See, the position being missed here is that some of us don't need and don't want to use anyone else's flavor of VM, or to be dependent on it; if you get hit by a school bus tomorrow, they won't know what to do. If someone wants to virtualize, there are turnkey hypervisors for that today, right now. As one example, I have ESXi and run various VMs, one of them unRAID. It has a dedicated controller, and my VM datastore sits on a separate controller; I picked the RAID level I wanted and back the VMs up the way I want, and none of it affects unRAID or unRAID performance. I can spin up any OS I want and load whatever apps I want on VMs today, right now. But having to wait for someone to create a plugin for unRAID that may or may not work as well, and as easily, as the turnkey hypervisors already out there is questionable. I agree with dalben's comment and believe in the unRAID I purchased for what it does: being a good NAS which spins down drives. Those who don't have the money/interest/knowledge to do something like the above and offload apps like SABnzbd and Sick Beard to VMs want them to run on unRAID as plugins, and the hope was that a 64-bit version from Tom would be what is required to run those types of plugins without the memory issues they currently hit on the 32-bit version. What it sounds and looks like at this moment is: here is the 64-bit version, but don't run plugins, run them in a VM through unRAID, with no solid, easy way to do so (as it stands). So I ask: why? I have my chosen hypervisor today, on which I can spin up any VM OS I want with a few clicks, and I just want a 64-bit unRAID that can, say, run cache_dirs on ALL the content, preclear several drives at the same time, and run a parity check without things bombing out. I understand that for tinkerers and advanced Linux users this is great fun, but it is not for the general community. To the point above: if I only have 4-8 GB of RAM, I just want a few things, say SABnzbd/Sick Beard/CouchPotato, to run on unRAID. Why can't a person run them as plugins on a 64-bit version? Why should they have to run a VM and allocate it memory just to run those three apps? That just doesn't make sense. P.S. I am not saying to rip Xen out; if it's a small footprint and makes advanced Linux users happy, sure, they can knock themselves out and play.
  5. Thank you for the clarifications, Tom; glad that the option to boot unRAID without Dom0 exists. Thanks as well to BLKMGR for his post with some good points. Have to find some hardware and a key now for that extra license. Still on a bus going home; f'in bus drivers, just hate them.
  6. I appreciate the replies: the good, the bad, and the dude as well. The post was not meant to take the thread off course, just to get some clarification. To touch on some points: Tom created and expressed the roadmap, and reading the two beta threads took me by surprise with respect to the roadmap for 6.0. What I don't understand about some of the points made is that many people have already taken a hypervisor like VMware (free), virtualized unRAID, passed through the disk controllers, and offloaded various functions. So while some may want unRAID to be the hypervisor, I ask this: if unRAID, for myself and others, is already virtualized, and 6.0 now boots to Dom0, isn't that just another layer between the controllers and unRAID? Does that even work, and if it does and there is a performance hit and/or bugs, is that OK for all those people? At least that way we would not be stuck with a 32-bit unRAID in this setup: we would have a 64-bit 6.0, and 6.1 would be for the others. Secondly, even with offloading various functions to our VMs today, we still have memory management issues, which is why we were happy to hear that a 64-bit counterpart was on the roadmap. How about the people who really wanted the cache pool? It's not cool to roadmap something and then jump ship on it. As I expressed, it's not that all this new stuff isn't cool; it's a bit unfair not to offer a 64-bit counterpart to 5.0 to basic users who would benefit from a straight 32-bit to 64-bit swap. Tom takes pride when non-techies set up unRAID and love how easy it is; this is way past that. I respectfully request some consideration from Tom on this: move this to 6.1 and work 5.x, 6.x, and 6.1 at the same time if need be, since the work has already started. Sorry in advance for typos; typing this up on a mobile phone.
  7. I recall that 6.0 was supposed to be, for lack of better words, the 64-bit counterpart of 5.0, to help us with memory management. I do understand some variation, but this seems to have forked completely off that premise, no? It looks more like a 6.1 beta. I think this is all cool, but we lost the 64-bit counterpart, and Tom will be so busy with all these virtual layers, the new Samba, etc. I am sure many would have appreciated the 64-bit counterpart to 5.0 first: fix it and learn from it as the first pass, and pick these additions back up in a 6.1 beta.
  8. In addition to #3, an example: I created a directory "TV Shows" on the cache drive, which is an unRAID share (SMB/AFP) consisting of disks 12-18. I created a 0 KB flat file, test.txt, in the TV Shows folder on the cache drive. All drives were spun down except the cache drive. I executed mover: disk 1 spun up, and disks 12 AND 13 spun up. The file was copied to disk 13 (which is wrong based on my most-free share configuration; it should have been disk 18, but let's forget that for a moment). Disk 1 spun up because of mover (you can explain that one better than I can). Disk 12 spun up because mover timestamped JUST the TV Shows directory, and disk 13 spun up because the file was copied to it AND the TV Shows directory was timestamped on that drive as well. What's wrong with this picture? If it is going to timestamp the directory just because it exists, then it should have done so on disks 12-18. I don't believe this is the intended behavior; it should have only spun up and written to disk 13 (actually disk 18, but that's a whole other issue). It's a simple test that anyone with a fairly large array, shares, and a cache drive can try (a hedged sketch of the test follows this post).
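     A minimal sketch of the repro above; this assumes the standard unRAID 5.x paths (/mnt/cache, /mnt/diskN, and the mover script at /usr/local/sbin/mover) and uses the "TV Shows" share name from the example, so adjust to your own setup:

        # Create a 0-byte file on the cache copy of the share while the array disks are spun down
        mkdir -p "/mnt/cache/TV Shows"
        touch "/mnt/cache/TV Shows/test.txt"

        # Run mover by hand, then look for spin-up entries it triggered in the syslog
        /usr/local/sbin/mover
        grep -i spin /var/log/syslog | tail -20

        # See which array disk received the file and which per-disk share directories got new mtimes
        ls -l /mnt/disk*/"TV Shows"/test.txt 2>/dev/null
        stat -c '%n %y' /mnt/disk*/"TV Shows" 2>/dev/null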
  9. Hey Tom, 1) First time the download has ever failed for me: it brought down only about 14 MB (no error was reported; Chrome browser, FiOS in the States). A second attempt a few minutes later succeeded. 2) The smartmontools drive-db update fails in this release; my assumption is that it has something to do with the addition "- slack: added curl" (a quick check for the missing library is sketched after this post):

     root@Tower:~# /usr/sbin/update-smart-drivedb
     curl: error while loading shared libraries: libcurl.so.4: cannot open shared object file: No such file or directory
     curl: error while loading shared libraries: libcurl.so.4: cannot open shared object file: No such file or directory
     /usr/sbin/update-smart-drivedb: download from trunk failed (HTTP error)

     3) For "- mover: removed -O switch from rsync command, wasn't doing what I thought": this spins up disk 1 upon execution of mover (just tested under 5.0.4 since this change was mentioned). We worked through this before, so you are fully aware of why this occurs; you added the -O as 'the fix' previously.
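     A quick way to confirm the missing-library theory from point 2; nothing unRAID-specific here, just standard Linux tooling, so treat it as a sketch:

        # List the shared libraries curl wants and flag any that are missing
        ldd /usr/bin/curl | grep 'not found'

        # Check whether any copy of libcurl exists on the running system at all
        find / -name 'libcurl.so*' 2>/dev/null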
  10. I understand, and I appreciate the link, but IF Tom can do something on his end instead, I am all for it. If he cannot (because it's out of his hands), then the hack it will be.
  11. SAN, that's hot! I always wondered why it wouldn't let you assign the drive. Syslog attached; it's an RC16c dev VM. I just brought it up, assigned one disk (as expressed above, it went back to unassigned), and captured the syslog from that point for you. syslogvmscsi.txt
  12. The size of the 5.0.2-rc1 source is different from that of 5.0.2, so it seems something more than a suffix was dropped. If you would re-verify, that would be most appreciated. Maybe a different compression setting was selected?
  13. http://www.buddyns.com/delegation-lab/download.lime-technology.com
      http://www.buddyns.com/delegation-lab/lime-technology.com
      You may want to look at this, Tom; the results change when you check back every so often (a quick local cross-check with dig is sketched after this post).
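     If you want to cross-check locally what the delegation lab is reporting, a hedged sketch using standard dig; the hostnames are just the ones linked above:

        # Walk the delegation from the root and .com servers down to the zone
        dig +trace NS lime-technology.com

        # What the zone itself currently answers for its nameservers and the download host
        dig NS lime-technology.com +short
        dig A download.lime-technology.com +short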
  14. Thank you very much, Tom, for upgrading the smartmontools version! I would drop the 'rc'; it's clearly labeled as a minor release/patch by the versioning. P.S. No issues downloading the files.
  15. Thanks, looking forward to testing it out. FYI on smartmontools: that is still too old a version, and it can't reach the new SVN repository to download the latest drive database. Please update to 6.2 when you get a moment.
  16. Well, unfortunately my parity check is longer: "kernel: md: sync done. time=44969sec", i.e. 12.49 hrs. What I noticed was that the last TB of the new 4TB parity drive took the additional 2 hours to complete; my largest data drives are 3TB, so even after all data drives completed their reads, it still took this additional amount, which I previously didn't have when the parity drive was a 3TB unit. Guess that's how it goes: go big, go long (a rough back-of-the-envelope check is sketched after this post). Taking the last 2 hours out of the equation (i.e. if the parity drive were a 3TB drive) would put the parity check at 10 hrs, which is the same time I had with the 250GB drive installed when running a parity check. Surprising, as the script showed a bump of 27 MB/s from the previous "Best Bang for the Buck" result to the new one. I also decided to take a deeper look at my green drives: the 2TB units are 5940 rpm and the 3TB units are 5700 rpm (all drives are Hitachi).
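     A rough back-of-the-envelope check on that 2-hour tail, assuming the sync holds roughly the ~112 MB/s the tunables tester reported while the parity drive reads its final terabyte alone (awk is only used as a calculator here):

        awk 'BEGIN { printf "%.2f h total\n", 44969/3600 }'          # 12.49 h, matching the syslog figure
        awk 'BEGIN { printf "%.2f h for last TB\n", 1e6/112/3600 }'  # ~2.48 h to read 1 TB at ~112 MB/s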
  17. I upgraded my 7200rpm 3TB parity drive to a 7200rpm 4TB parity drive, removed my 250GB drive altogether, and replaced it with the previous parity drive. This broke the 100 MB/s barrier for my system. In my original post I forgot to mention that unRAID runs virtualized: 4 GB RAM reservation, 16-port LSI controller with an LSI expander hanging off 2 of the ports. I have a mixture of drives. Data: (7) 2TB green drives, (7) 3TB green drives, (1) 2TB 7200rpm / Parity: 4TB 7200rpm / Cache: 2TB 7200rpm. I do wish the script ran a test with the default unRAID values first to see the difference.

     Tunables Report from unRAID Tunables Tester v2.2 by Pauven

     NOTE: Use the smallest set of values that produce good results. Larger values
     increase server memory use, and may cause stability issues with unRAID,
     especially if you have any add-ons or plug-ins installed.

     Test | num_stripes | write_limit | sync_window | Speed
     --- FULLY AUTOMATIC TEST PASS 1 (Rough - 20 Sample Points @ 3min Duration) ---
     1 | 1408 | 768 | 512 | 100.3 MB/s
     2 | 1536 | 768 | 640 | 108.1 MB/s
     3 | 1664 | 768 | 768 | 109.4 MB/s
     4 | 1920 | 896 | 896 | 110.8 MB/s
     5 | 2176 | 1024 | 1024 | 111.4 MB/s
     6 | 2560 | 1152 | 1152 | 111.7 MB/s
     7 | 2816 | 1280 | 1280 | 112.0 MB/s
     8 | 3072 | 1408 | 1408 | 112.0 MB/s
     9 | 3328 | 1536 | 1536 | 112.1 MB/s
     10 | 3584 | 1664 | 1664 | 112.2 MB/s
     11 | 3968 | 1792 | 1792 | 111.9 MB/s
     12 | 4224 | 1920 | 1920 | 111.9 MB/s
     13 | 4480 | 2048 | 2048 | 111.9 MB/s
     14 | 4736 | 2176 | 2176 | 112.1 MB/s
     15 | 5120 | 2304 | 2304 | 111.8 MB/s
     16 | 5376 | 2432 | 2432 | 112.0 MB/s
     17 | 5632 | 2560 | 2560 | 112.1 MB/s
     18 | 5888 | 2688 | 2688 | 112.1 MB/s
     19 | 6144 | 2816 | 2816 | 111.9 MB/s
     20 | 6528 | 2944 | 2944 | 111.9 MB/s
     --- Targeting Fastest Result of md_sync_window 1664 bytes for Final Pass ---
     --- FULLY AUTOMATIC TEST PASS 2 (Final - 16 Sample Points @ 4min Duration) ---
     21 | 3424 | 1544 | 1544 | 112.1 MB/s
     22 | 3448 | 1552 | 1552 | 112.0 MB/s
     23 | 3464 | 1560 | 1560 | 112.2 MB/s
     24 | 3480 | 1568 | 1568 | 112.1 MB/s
     25 | 3496 | 1576 | 1576 | 112.2 MB/s
     26 | 3520 | 1584 | 1584 | 112.2 MB/s
     27 | 3536 | 1592 | 1592 | 112.0 MB/s
     28 | 3552 | 1600 | 1600 | 112.1 MB/s
     29 | 3568 | 1608 | 1608 | 112.1 MB/s
     30 | 3584 | 1616 | 1616 | 112.1 MB/s
     31 | 3608 | 1624 | 1624 | 112.1 MB/s
     32 | 3624 | 1632 | 1632 | 112.1 MB/s
     33 | 3640 | 1640 | 1640 | 112.1 MB/s
     34 | 3656 | 1648 | 1648 | 112.0 MB/s
     35 | 3680 | 1656 | 1656 | 112.0 MB/s
     36 | 3696 | 1664 | 1664 | 112.0 MB/s
     Completed: 2 Hrs 8 Min 17 Sec.

     Best Bang for the Buck: Test 5 with a speed of 111.4 MB/s
     Tunable (md_num_stripes): 2176
     Tunable (md_write_limit): 1024
     Tunable (md_sync_window): 1024
     These settings will consume 153MB of RAM on your hardware.

     Unthrottled values for your server came from Test 23 with a speed of 112.2 MB/s
     Tunable (md_num_stripes): 3464
     Tunable (md_write_limit): 1560
     Tunable (md_sync_window): 1560
     These settings will consume 243MB of RAM on your hardware.
     This is 153MB more than your current utilization of 90MB.
     NOTE: Adding additional drives will increase memory consumption.

     In unRAID, go to Settings > Disk Settings to set your chosen parameter values.

     Again, thanks for all the work. I will run a parity check to see if it's less than 10 hours (which is what I had with the 250GB drive in the array previously). Previous results, with the 250GB drive and the 3TB parity drive (all else the same) and v2.0 of the script back then:

     Tunables Report from unRAID Tunables Tester v2.0 by Pauven

     NOTE: Use the smallest set of values that produce good results. Larger values
     increase server memory use, and may cause stability issues with unRAID,
     especially if you have any add-ons or plug-ins installed.

     Test | num_stripes | write_limit | sync_window | Speed
     --- FULLY AUTOMATIC TEST PASS 1 (Rough - 20 Sample Points @ 3min Duration) ---
     1 | 1408 | 768 | 512 | 84.4 MB/s
     2 | 1536 | 768 | 640 | 82.4 MB/s
     3 | 1664 | 768 | 768 | 84.7 MB/s
     4 | 1920 | 896 | 896 | 84.9 MB/s
     5 | 2176 | 1024 | 1024 | 85.0 MB/s
     6 | 2560 | 1152 | 1152 | 84.8 MB/s
     7 | 2816 | 1280 | 1280 | 84.9 MB/s
     8 | 3072 | 1408 | 1408 | 85.0 MB/s
     9 | 3328 | 1536 | 1536 | 84.9 MB/s
     10 | 3584 | 1664 | 1664 | 85.0 MB/s
     11 | 3968 | 1792 | 1792 | 85.0 MB/s
     12 | 4224 | 1920 | 1920 | 84.4 MB/s
     13 | 4480 | 2048 | 2048 | 84.9 MB/s
     14 | 4736 | 2176 | 2176 | 85.0 MB/s
     15 | 5120 | 2304 | 2304 | 84.8 MB/s
     16 | 5376 | 2432 | 2432 | 85.0 MB/s
     17 | 5632 | 2560 | 2560 | 84.9 MB/s
     18 | 5888 | 2688 | 2688 | 85.0 MB/s
     19 | 6144 | 2816 | 2816 | 85.0 MB/s
     20 | 6528 | 2944 | 2944 | 85.0 MB/s
     --- Targeting Fastest Result of md_sync_window 1024 bytes for Medium Pass ---
     --- FULLY AUTOMATIC TEST PASS 2 (Final - 16 Sample Points @ 4min Duration) ---
     21 | 2008 | 904 | 904 | 84.8 MB/s
     22 | 2024 | 912 | 912 | 84.8 MB/s
     23 | 2040 | 920 | 920 | 84.7 MB/s
     24 | 2056 | 928 | 928 | 84.7 MB/s
     25 | 2080 | 936 | 936 | 82.9 MB/s
     26 | 2096 | 944 | 944 | 84.7 MB/s
     27 | 2112 | 952 | 952 | 84.8 MB/s
     28 | 2128 | 960 | 960 | 84.3 MB/s
     29 | 2144 | 968 | 968 | 84.4 MB/s
     30 | 2168 | 976 | 976 | 84.8 MB/s
     31 | 2184 | 984 | 984 | 84.2 MB/s
     32 | 2200 | 992 | 992 | 84.9 MB/s
     33 | 2216 | 1000 | 1000 | 84.8 MB/s
     34 | 2240 | 1008 | 1008 | 84.7 MB/s
     35 | 2256 | 1016 | 1016 | 84.7 MB/s
     36 | 2272 | 1024 | 1024 | 73.7 MB/s
     Completed: 2 Hrs 8 Min 12 Sec.

     Best Bang for the Buck: Test 1 with a speed of 84.4 MB/s
     Tunable (md_num_stripes): 1408
     Tunable (md_write_limit): 768
     Tunable (md_sync_window): 512
     These settings will consume 99MB of RAM on your hardware.

     Unthrottled values for your server came from Test 32 with a speed of 84.9 MB/s
     Tunable (md_num_stripes): 2200
     Tunable (md_write_limit): 992
     Tunable (md_sync_window): 992
     These settings will consume 154MB of RAM on your hardware.
     This is 64MB more than your current utilization of 90MB.
     NOTE: Adding additional drives will increase memory consumption.

     In unRAID, go to Settings > Disk Settings to set your chosen parameter values.

     P.S. I always have NCQ enabled. (A command-line sketch for applying the chosen values follows this post.)
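     If you would rather apply the chosen values from the command line instead of the Settings > Disk Settings page, a hedged sketch: it assumes the unRAID 5.x /root/mdcmd helper accepts "set" for these tunables (which, as far as I recall, is how the tester script applies its test values) and that the persistent copies live in /boot/config/disk.cfg.

        # Apply the "Best Bang for the Buck" picks from Test 5 (live values only, not persisted)
        /root/mdcmd set md_num_stripes 2176
        /root/mdcmd set md_write_limit 1024
        /root/mdcmd set md_sync_window 1024

        # Confirm what is stored on the flash for the next boot
        grep -E 'md_(num_stripes|write_limit|sync_window)' /boot/config/disk.cfg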
  18. Thanks, I understand now. It was not clear to me, and it seems not to others either, that it explicitly MUST be a FAILED disk; a wrong disk or a missing disk is red-balled as well, but is not FAILED. Maybe Tom would consider allowing this, so one doesn't have to force a failed disk; it seems that once you take the extra steps to force unRAID to see an assignment as failed, it will perform the swap. Months of waiting for documentation to be updated before final could be called final, and sh^t's still not clear. Tom could have answered me with this one-line detail and the additional steps to take to achieve the same, and I am not the only one who reached out to him and got no reply. Instead I burned 24 hours with a stopped array waiting for a reply from him, and 4 days later you are the only one who can explain it properly. It wouldn't be the same if one were purchasing a license; you'd get a reply back. EPIC FAILURE! Tom#2, you should get access to Tom#1's mailbox and see just how much goes unanswered, so you can smack him around!
  19. ESXi is installed to a USB drive in the server; the top slot in the server is for the ESXi datastore drive where all the VMs are housed. My misunderstanding, sorry, got it now. I fully understand the programming side of things and that it would take some time to figure out properly. I don't expect it to be fixed/implemented overnight, but it would be a good thing to get on the list. It would be a welcome addition.
  20. @abeta: 1) Turn spin-down off rather than setting a longer value; otherwise you will still have the 5-second polling, at least until you have some luck getting Tom to test anything. 2) You should flash your LSI controller BIOS (long story for this post); it is incorrect not to, and via the LSI configuration you can make adjustments so it POSTs fast and the ports are enabled ahead of time; your syslog shows the behaviour of this. @prostuff: It's a waste of a drive and a slot to run ESXi from a drive; I recommend running it from USB and not losing the slot. There's no fundamental issue with the system as a whole from what I can see, just no full SAS support in unRAID. sdparm is nowhere near a counterpart to hdparm; it is merely similar, and it would give the ability to spin drives up/down (a hedged sketch of the two command families follows this post). Adding the utility itself to unRAID is simple, but coding the detection of which drives are and are not SAS, and executing different commands depending on whether a drive is SAS, is not simple to get right. Also keep in mind that sdparm would only cover a cache drive; the unRAID md driver would need to make SCSI calls rather than ATA calls to the SAS drives to accomplish what Tom does today to implement spin groups and spinning drives down/up. This all goes back to what Purko stated several times, that this may not be the best thing; I understand both of their points. It's up to Tom whether he would entertain SAS drive support, but it will never happen overnight, so keep that in mind. The easiest part is getting the drive temps; that could be done overnight and would be a start.
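     For reference, a hedged sketch of the two command families being compared; the device names are placeholders, and sdparm being present on a stock unRAID install is not a given:

        # SATA/ATA drive: spin down (standby) and query the power state with hdparm
        hdparm -y /dev/sdb
        hdparm -C /dev/sdb

        # SAS/SCSI drive: the rough sdparm equivalents
        sdparm --command=stop /dev/sdc     # spin down
        sdparm --command=start /dev/sdc    # spin back up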
  21. I also see why you're not getting drive temps, but I was looking for Tom's comments in a post to properly explain why in your situation (with SAS drives). Quote from Tom. But smartctl for a SAS drive reports the temperature as "Current Drive Temperature:", so you could ask Tom to add checking for this third variation (see the smartctl sketch after this post). Secondly, spin up/down. Quote from Tom. So whether or not you are using a cache drive, it's important that your HDs respond to hdparm, which you can see they do not. It's also pretty clear in your situation that the unRAID driver's "ATA_OP_STANDBYNOW1" and "ATA_OP_SETIDLE1" calls are failing as well (syslog entries), so this is a Tom thing: he can tell you whether it's a Linux disk-driver issue and what he could possibly do (have you test something), OR it's a problem with the controller/cable/backplane, which you will have to test (quite easy, especially if you have a spare SAS drive). So in the end, drive temps could be an easy addition by Tom, and you would benefit from knowing whether your HDs are running too HOT. Secondly, you're not spinning drives down, so two things: 1) you might as well turn that off for now, so attempts are not being made and syslog entries are not logged about it (there is a background thread that wakes up every 5 seconds to see if it's time to spin down a drive); 2) that sucks, as keeping drives spun down is a key point of unRAID, but not everyone cares; that's something for you to decide. You mention "I think it's probably normal or as close to normal"; I personally don't believe so. We started back in the beta 5.0 days with no temps and no spin-down/up on anything LSI-controller based, and things were added/changed to get that working, so in your setup you are running unRAID 5.0 like pre-beta 7. Not trying to rain on your parade; I welcome the change(s) that would need to be made to unRAID to support SAS drives. Looking around: SAS controllers tunnel ATA (for lack of better words), so hdparm is fine with our SATA drives even behind SAS controllers, but there is an sdparm utility ("a utility similar to hdparm but for SCSI devices") that can spin SAS drives up/down, and that is probably what would be required...
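     A hedged sketch of the temperature difference being described, using standard smartctl options; the device names are placeholders, and which label unRAID actually parses is Tom's side of things:

        # ATA/SATA drive: temperature normally shows up as SMART attribute 194
        smartctl -A /dev/sdb | grep -i temperature

        # SAS drive: smartctl reports it on a separate line instead
        smartctl -a -d scsi /dev/sdc | grep 'Current Drive Temperature'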
  22. That's all cool; nothing wrong with trying something new as long as you don't mind that it might not be 100% in the end. It could also possibly be the backplanes. If you have a spare SAS drive, what you could do is add it to a different box (shut down your unRAID server and pull one of the LSI controllers for a moment if need be) running some other OS, and see if hdparm gives you back complete results. Or pull the unRAID USB key and all the drives, insert the spare SAS drive, load, say, Windows, and see what hdparm comes back with. That can help you rule out whether it's an OS/component issue or a controller/cable/backplane issue, etc.; you get the idea. I couldn't find anything on the net about hdparm issues with that particular HD model, so I don't believe it's a matter of hdparm plus your particular drive.
  23. The LSI controllers and most cages support SAS, but I don't have experience with unRAID and SAS drives. I have seen bad/missing sense data once before, when a cable wasn't fully seated, but in your case it's the same for all the drives, and they're all SAS. SAS cables are different from SATA, so it's either the cables or some missing (for lack of better words) component in unRAID needed to support the SAS protocol entirely (something to that effect). When you say they tested one of the SAS drives, do you mean they precleared it or ran some other tests? I'm wondering if unRAID was actually loaded to see what it thought of the drive (in which case the missing drive temp and the syslog errors for that one drive would have been noticed). You really should get those drives running cooler. P.S. Did you choose/pick those particular drives?
  24. That site's pictures load @ss slow. Did they supply you with the SAS drives and the cables to them, or did you do that portion? I would not think they would have supplied you with a setup that does not provide drive temps and complete integration of drive info via unRAID. P.S. Your drives are running very hot at 48 C (given they are all the same and in that chassis); you may want to look into that.
  25. Interesting. Hopefully it can help someone else in the future (and they are able to confirm that it worked for them as well), and the documentation can get updated with the exact correct steps to take. I can no longer test this; I'm past that point.