Everything posted by Johnm

  1. I think webo is suggesting that you map an unRAID share in your Win XP guest, then write your torrents directly to that mapped drive. For my Usenet downloading guest, I keep the cache, temp files, and databases on the SSD. BUT... that guest also has a second, mechanical-backed VMDK dedicated to the finished files (similar to your completed torrents); those completed files are then moved to my array via other programs/scripts once complete. So in a way I do what webo suggests, but I use a virtual drive for the bulk of my transferable data. That way my array can spin down and is not taxed by 24x7 access. In short, I do it this way: one Windows guest with two virtual drives.
     VMDK1 > 30GB (part of a 120GB SSD datastore): Windows + downloading app, cache/temp files, and database.
     VMDK2 > 500GB (part of a 2TB spinner datastore): storage drive attached to the above guest.
     My scripts then move any completed file to my unRAID SSD cache drive (see the sketch below), and the mover then moves it onto the array. This turns out to be very efficient for me and uses as little power and as few system resources as possible. You could bypass the second spinner and write directly to your unRAID cache drive; that would be almost as efficient, depending on your download speed and its impact. My torrent guest is just on a 100MB VMDK on a spinner, but I hardly ever use torrents, to be honest. When I do, it is 1 or 2 files every 6 months.
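     For illustration, a minimal sketch (in Python) of the kind of mover script I mean. The paths are hypothetical placeholders, and a real version would want better in-use detection and error handling:

        import shutil
        import time
        from pathlib import Path

        # Hypothetical paths: finished downloads on the guest's second VMDK,
        # and the unRAID cache drive share mapped/mounted on this machine.
        COMPLETED = Path(r"D:\completed")
        CACHE_SHARE = Path(r"Z:\downloads")

        def is_stable(f: Path, wait: float = 5.0) -> bool:
            # Crude "done writing" check: size unchanged after a short pause.
            size = f.stat().st_size
            time.sleep(wait)
            return f.stat().st_size == size

        for f in list(COMPLETED.rglob("*")):
            if f.is_file() and is_stable(f):
                dest = CACHE_SHARE / f.relative_to(COMPLETED)
                dest.parent.mkdir(parents=True, exist_ok=True)
                # From the cache drive, unRAID's mover migrates it to the array.
                shutil.move(str(f), str(dest))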
  2. That hardware is completely compatible with unRAID (assuming you didn't change the RAM); in fact, I just got the same hardware this week to play with. It sounds like you made a mistake creating your boot flash drive. Since you have never booted into unRAID yet, I would go ahead and format the flash drive (FAT32) in your desktop and start the creation over.
  3. I would grab a copy of Beyond Compare and use that for your testing; it has a 30-day trial. Set the rules to binary and it will do the copy, then verify every copy at the binary level. This might help with some testing. (If you would rather script the same idea, see the sketch below.)
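     As an alternative to Beyond Compare, a minimal sketch of the same binary-verify idea in Python; the two directory paths are hypothetical:

        import hashlib
        from pathlib import Path

        SRC = Path("/mnt/source")        # hypothetical: where you copied from
        DST = Path("/mnt/user/backup")   # hypothetical: where you copied to

        def sha256(path: Path) -> str:
            h = hashlib.sha256()
            with path.open("rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            return h.hexdigest()

        for src in SRC.rglob("*"):
            if src.is_file():
                dst = DST / src.relative_to(SRC)
                if not dst.exists():
                    print(f"MISSING  {dst}")
                elif sha256(src) != sha256(dst):
                    print(f"MISMATCH {dst}")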
  4. I'll agree there. In addition, XP is not SSD-aware, which complicates it further. I do have a 2K8 guest that is writing to a database 24x7 and making temp files for every Usenet download I grab; that would be millions of photos and RAR files. I should take a look at that drive and report back on what sort of damage I have done to it in the last 6 months.
  5. Sounds like some sort of strange flash drive corruption.
  6. So far I have only installed some of my new guests as VM8s. My existing guests need none of the new features (that I am aware of); I have the "why fix what ain't broke?" mentality when it comes to my (virtual) servers. I play with servers at work too much to want to deal with them at home.
  7. That does not sound right... I decided to take a look at my SSD cache drive to see if it really has as much data written as I thought. It turns out it was not quite as much as I thought, but close. I ran some SMART tools, and I humored myself and ran SSD Life as well. Both tools showed 12.5TB written to and 19TB read from my cache drive, and it is in excellent health; the SMART reports showed nothing bad. (A sketch of how to pull that number yourself follows this post.)
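     For anyone who wants to check their own drive: smartctl (from smartmontools) reports a vendor attribute along the lines of Total_LBAs_Written on many SSDs. A hedged Python sketch of the conversion, assuming the attribute counts 512-byte units (some vendors count in other units, so check your drive's documentation):

        import subprocess

        DEVICE = "/dev/sdb"  # hypothetical: your cache drive

        # Scan `smartctl -A` output for a total-LBAs-written style attribute.
        out = subprocess.run(["smartctl", "-A", DEVICE],
                             capture_output=True, text=True).stdout
        for line in out.splitlines():
            if "Total_LBAs_Written" in line:
                lbas = int(line.split()[-1])
                # Assumes 512 bytes per LBA; vendors vary.
                print(f"~{lbas * 512 / 1e12:.1f} TB written")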
  8. It looks like you have quite a few older drives in your array. That Samsung 400GB, for example, is a very slow performer by today's standards and would be a suspect for slow writes to that drive: http://www.harddrivebenchmark.net/hdd.php?hdd=SAMSUNG+HD400LJ When people say you should see 30MB/s+, we are assuming you have current models, so try to limit your testing to your best drives. Also, your parity drive should not only be the largest drive; it helps if it is the fastest drive you have. Since every write to the array also updates parity, the parity drive is always the deciding link for maximum write speed, and a slow parity drive will slow all writes. But I will agree that 5-6MB/s is still a little slow; I feel something is not quite right.
  9. I believe those reviews of failing SSDs are about the firmware bug where the drive vanishes from the BIOS and is 100% unreadable. I have experienced this myself in the past; it has been fixed recently, to my knowledge. Also, I would guess there might be a few DOAs, but nothing like the DOA count on mechanical drives. As for mass failures of the sectors themselves: yes, sectors will fail, but not at the alarming rate people are predicting. Most are trying to imply the drive will be toast in a few months, and testing has shown that's not the case; an SSD should last almost as long as a mechanical drive put under the same stress. Don't get me wrong, I am not preaching that SSDs are the gods of the computer world, but they are the future and where we are going, and we need to have some trust in them. I would love to see some new data on real-world failure rates; Newegg reviews tend to be written by tweens who are computer illiterate.
  10. One thing I do for Windows guests on SSDs is disable the pagefile and indexing; if you cut back on as much unnecessary Windows writing as possible, I am sure it will help. (A scripted version of those two tweaks is sketched below.) I think it has been pretty well proven that most of the "the SSD is going to fail" talk is paranoia. As long as you don't fill it up 100% and then do a lot of data writing to it (I'm talking thousands of megs a day), it should have a nice, happy, long life. My SSD unRAID cache drive has seen 12-15TB of data written/erased across it in the 2-3 months it has been in service, and I have it set to leave 20GB of free space.
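     As an illustration, a hedged Python sketch of those two tweaks, run as Administrator inside the guest; it just shells out to the stock Windows tools wmic and sc. Treat the exact commands as assumptions to verify against your Windows version (XP, for example, lacks the AutomaticManagedPagefile property and uses the cisvc indexing service instead of WSearch):

        import subprocess

        def run(cmd):
            print(">", " ".join(cmd))
            subprocess.run(cmd, check=True)

        # Turn off automatic pagefile management, then remove the pagefile entry.
        run(["wmic", "computersystem", "set", "AutomaticManagedPagefile=False"])
        run(["wmic", "pagefileset", "delete"])

        # Stop and disable the search indexing service (WSearch on Vista and later).
        run(["sc", "stop", "WSearch"])
        run(["sc", "config", "WSearch", "start=", "disabled"])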
  11. As answered: the lights are power and the IPMI heartbeat, so you know it is active. As for the beeps, I think it is actually nice to know when it reboots and when unRAID is up; I have mine beep the Super Mario Brothers tune when it has booted into unRAID. The tones are handy if you are running completely headless, since they give you an alert to what's happening. If they were louder, I would agree with you, but they are a reasonable volume. I believe you can disable them 100% by jumpering the external speaker plug (JP1 or JP2?), but then you won't get any other sounds from it.
  12. They still exist; my IKEA has tons of them. Here are my 2 unRAID boxes in a Lack I got a few months ago. Sorry for the bad cell photo.
  13. "Was this issue ironed out in beta 14? Or Johnm, have you not updated to beta 14? I have another question: in your opinion, would it be better to start with the stable version (4.7) and then migrate towards the latest beta (currently beta 14), or would it be possible to jump straight to beta 14?"
     Sorry for the late answer; I never saw this. I have beta 14 on my second box (Goliath in sig). It seems OK, but it only gets minimal use (off for 2 weeks, then pounded with a TB or 2, then back off), so it would be best to check the beta 14 thread. As for what version to start off with, it really depends on your comfort and hardware. 4.7 is getting pretty old, and a lot of hardware runs better on the betas. I honestly came to the unRAID game very recently: I tried 4.7 for a few days, then tried 5beta6a before I ever put any data on my server. I went with the beta, but I kept 2 copies of my data for a long time before I trusted it. As long as you read the info in each beta thread before you jump in and endanger (harsh word, and not really true) your data, you should be fine; in general the betas have been pretty stable, with a few driver and AFP errors. It is my understanding that we are getting close to a release candidate, which should give you some comfort. Just make sure you test, test, and test again before you put it into production.
  14. Put the Win XP guest on the SSD. If you worry about killing the SSD, there is no point in buying it; I have been pounding my SSDs non-stop and they are OK so far. Putting a heavy-IO task on a single mechanical-drive datastore will impact, and possibly cripple, all other guests on that drive; with SSDs, the lag from a high-IO guest is negligible. This reminds me of when touchscreen smartphones came out: "you'll wear the touchscreen out by touching it." There is truth to that, but we tend to upgrade before we get there.
  15. I'll take 21 for my unRAID server....
  16. I'm glad you mentioned this; I was going to as well. It is also the reason I went with a 160GB for my local/home/apps drive on unRAID: I need about 120GB, but I knew I would want extra space for sector reallocation. "What is the ESXi dedicated cache?" I should use the correct term, "swap to host cache". It was added with ESXi 5.0. If you have an ESXi server that is short on RAM, and/or your guests are putting pressure on the host for more RAM, you can use an SSD to help with this; it is like a Windows paging file for ESXi, for lack of a technical description. You need one SSD per host, and you can actually assign it less than a full SSD: if you have a 120, you can give it, say, 28GB of cache space off that drive. I would guess there might be a slight performance impact for the guests on the rest of the drive, so I would give it a dedicated SSD or share one with low-impact guests (firewalls, unRAID boot, etc.). We use it at work, but I have not really seen the performance boost with our setup.
  17. Ack, I hate to break it to you: that is the wrong cable. To go from a motherboard SATA connector to a Norco backplane, you need a "reverse" breakout cable; the "forward" breakout cable is for going from a RAID card to a SATA drive. However, the SAS cable you got for the MV8-to-backplane run is correct; you should see those drives just fine.
  18. As far as the "The" question goes: I leave the "The" at the front; my XBMC auto-sorts the "The" to the back of the title for me. Good point on the scraper, Raj. I have very few scraping problems; when I do, it is usually because IMDB (the scraper I use) tends not to find, or not to have, the movie listed. For a few movies I did have to use "Name, (i), date" to get the correct scrape, or I used a 3rd-party plugin to manually overwrite the bad scrape.
  19. Ack. Is this on the hardware build in your sig? PSODs have usually been a driver/hardware issue in my experience; I was getting them a while back with a crappy OEM.tgz driver for a non-Intel card, though that was happening during boot, not while running. Is your hardware listed on the ESX whitebox pages?
  20. "So if I am planning on installing 4 VMs:
     -- unRAID --> set to 2GB partition
     -- WinXP (FileZilla box) --> set to 25GB partition
     -- Win2k3R2 DC (maybe) --> set to 25GB partition with no pagefile
     -- ClearOS or pfSense firewall distro (assuming I can make it work) --> set to 5GB
     will a 60GB SSD work, or should I stick with a 250-320GB SATA drive?"
     That's a bit much for a single 60GB SSD IMO, even set to thin: those partitions already total 57GB of a 60GB drive, and you want overhead on the SSD to keep from burning it out. Also, if you are backing your guests up, some methods need as much free space as your biggest guest for the snapshot while it backs up (assuming you don't shut the guest down for the backup). That 2k3 (and 2k8) DC will run fine on a spinner, and the same goes for the firewall; the XP and unRAID boot drives will like the SSD, and a 60 would be fine for those. Larger is always better if you ask me. I know it is expensive, but if you have free drive space to play with you will be a happier camper, especially for any sandbox Windows guests you might build for a day and erase. PS: don't be scared to build a guest on a spinner and migrate it to an SSD when you have the funds. That is one of the beauties of virtual drives/machines: you can move/migrate/copy/portable-ize them.
  21. I tend to use Gladiator.2000.Extended.Cut.1080.BluRay.DTS.x264 as my format: movie name; year (lots of movie remakes out there); director's cut or theatrical, if I have multiple rips; resolution (1080p, 720p, 480p, etc.); source format (BluRay, DVD, SD source, etc.); audio type if applicable (DTS, DD5.1, etc.); container type (MKV, ISO, DIVX, etc.). I would recommend at the least movie name, year, and format/resolution (see the sketch below for the scheme as a function). It took a while for me to rename older movies, but once you get into the groove it is pretty easy and well worth the extra effort, especially once you have 1000s of movies. I do still have lots of movies with brackets and parentheses in the title, and some with spaces, but I did replace every "&" with "and" because I was running into issues.
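     Just to illustrate the ordering, here is that scheme as a little Python helper; the function and field names are my own invention, not from any real tool:

        def movie_filename(name, year, cut=None, resolution=None,
                           source=None, audio=None, container=None):
            # Assemble Name.Year.Cut.Resolution.Source.Audio.Container,
            # skipping any field that does not apply.
            parts = [name.replace("&", "and"), str(year), cut,
                     resolution, source, audio, container]
            return ".".join(p.replace(" ", ".") for p in parts if p)

        print(movie_filename("Gladiator", 2000, cut="Extended Cut",
                             resolution="1080p", source="BluRay",
                             audio="DTS", container="x264"))
        # -> Gladiator.2000.Extended.Cut.1080p.BluRay.DTS.x264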
  22. The OCZ drives had massive issues with the older firmware; this turned out to be bad firmware from SandForce, and I believe it affected most vendors. It has since been addressed. I originally hated my OCZs; since the firmware upgrade, they have been fine. Almost any brand/type will do the job. The OCZs just always seem to be on sale and easy to get, so I went with those; I am not sure what I would get if I started over. I also have some older Mushkin 120s that I love, and I would look at Intels and Kingstons too. Just make sure you update the firmware on any SSD you get. As far as using a consumer SSD in an ESXi box: the SSD vendors will frown on it and say to buy enterprise SSDs, and if you ever send one in for warranty, they may look at the drive, see it was not running a desktop OS, and refuse it. That said, I am running consumer 120s. I wish I had larger drives; my original plan was 3-4 120-gig drives in RAID0, so I went with smaller drives. Since I am not currently in RAID0, I wish I had 2x 240s; I might still go that route with an M1015. I believe I have one of each in my ESXi box: OCZ Solid 3, Agility 3, and Vertex 3 (I originally had matching drives that I have since repurposed). Once you feel the speed, it is hard to go back to spinners, though for non-IO-intensive VMs, spinners will do fine in balanced moderation; it really depends on your build and usage. You should never fill the drive all the way up; that is a sure way to speed up the death of the SSD. I forget the recommended figure, but I personally try to leave 30 gigs or more free on my 120s. If you get a 60GB, I would only put one Windows guest on it, plus perhaps a few small *NIX guests. I would also consider a small 30-60 for the dedicated "swap to host cache" that ESXi now offers with version 5.
  23. Axeman's question got cut off in the split; I'll answer it here to keep on topic. Switching from 4.1 to 5.0 is pretty simple, and there are several ways to go about it. The problem with the upgrade is if you are using custom drivers (OEM.tgz); you might not have support for those devices in 5.0. A little research first, possibly followed by booting up 5.0 on a second flash drive to make sure all of your devices are found, would be your best migration path.
  24. It is all hit or miss, and what you like and what is on sale at the time. By the time 5 comes out, who knows what the preferred HBA will be. I liked the M1015s because they were about half the price of the MV8 with over twice the performance. Since then, the price of the M1015 has gone up, making it not as great a value, in addition to the difficulty of flashing them and the poor support in betas 13 and 14 (I'll assume beta 15 will fix that). It is still a solid card in my opinion; it is just not as cost-effective these days and can be difficult to flash. The SASLP-MV8 is also a solid card. It is just a bit older and a bit slower with 8 drives on it during parity checks, but it works right out of the box and works in all supported unRAID versions. This is a much better choice for less technical people, those who do not want to run a beta, or those who just want it to work out of the box. My guess is we will see PCIe 3.0 HBA cards later this year. In addition, there is the SAS2LP-MV8 out there, the replacement for the SASLP-MV8. I have not tested this card yet, but it looks like it might be a good choice once 5 comes out; for now I'm indifferent to it due to its limited version support in unRAID. It is a faster card than the original, but driver support is only in 5beta11 and betas 13-14 (12-12a possibly works but could be buggy with this card). Unless you are cutting edge, I'd personally wait and see on this card. I know some people are having success with it, but when it comes to a server, you want solid stability. Mixing and matching does work fine; I was running mixed M1015s and MV8s until betas 12-14 broke one card or the other. Once that is fixed, it will be fine to do that again; for now, I am running 12 drives through a single M1015. Get what you can afford and what works for you.
  25. "People are using "thebay"'s BIOS on the N40L: http://www.avforums.com/forums/networking-nas/1521657-hp-n36l-microserver-updated-ahci-bios-support.html"
     Thanks, that was the one I already had; I just wanted an expert opinion.