Johnm

Everything posted by Johnm

  1. That's awesome. I'd love to see a picture of how bad the inside is, if you haven't packed it up yet. I'm honestly surprised he is asking for it back. I had a Mac Pro get mangled in shipping; it would have cost like $50 to ship it back. The guy just said keep it and refunded the money. They knew they would never have made a dime on it after the shipping costs and just gave up. It sounds like he deserves to pay the extra cash. Anyway, now you have another $200 budget for your server.
  2. I hope you have all of this in writing. Don't let this seller strong-arm you. It is an AIC chassis. I just found a couple of AIC RMC5E's in our graveyard at work; it is the same chassis as yours, except the AIC RMC5E is the SATA version. A few people did do mods on a similar AIC chassis (SATA version) on the forums here.
  3. I actually have a similar situation. My internet comes in at the worst part of my house, the dining room. I have a 2-channel Wi-Fi "N" AP broadcasting the internet-only signal in the house. In my server room (a spare bedroom, separate from the home office), I have a wireless N bridge (plugged into my main switch) that communicates with the internet-only Wi-Fi to get my whole network online. I then have another 2-channel "N" AP that serves 3 of my HTPCs. I have no issues with 1080p over N under normal situations.
     I also have a Wi-Fi "AB" (gigabit wireless) AP dedicated to my "data channels". In my office I have an "AB" (gigabit) bridge (just an AP in bridge mode) connecting to another switch that carries my main gaming rig, my main work PCs, some of my thin terminals and another HTPC. In the basement I also have another "AB" bridge with a few secondary servers that are just for minor backups and my music studio.
     One day I'll hard-wire all of this. Right now it is not an option and I had to do this quick and dirty. For my laptops and other mobile devices, I'll connect to the internet AP in the dining room (that still gets me to my servers). If I need to push a lot of data via Wi-Fi with my laptop, I'll connect to the server room's "N" (or plug it into an Ethernet jack). The gigabit wireless can push a good amount of data through it. It is definitely limited in speed and can't touch gigabit wired, but I don't use it for massive data loads often. The big loads it sees are Time Machine backups and backing up my music and the work I'm doing in the office.
     I did try powerline adapters a few years back and they were slower than N for me. The newer models might be better. I was also very limited in where I could use them in my home; each area in the house runs back to the fuse box.
  4. You are looking at another $60-$100 in modding, not counting time, to get it up and running just OK. If you got it through eBay, eBay will undo the sale; that's not even close to as described. If it was a PayPal deal, work with them next. While it can be a good base for a mod project, not at that price; that thing was a boat anchor at one point. Wrong part, and if he didn't mention it was water damaged, that's grounds for a refund. Even your bank might refund the transaction. Did you get it from New Jersey or New Orleans? I saw a lot of servers like this after both Katrina and Sandy; they spent a few days/weeks below water level.
  5. If I'm not mistaken, the ST4000DM000 is the 8-head, 4-platter version; the DX is the older XT version (5 platters). I would guess your DXs pre-read at about 145MB/s and the DMs pre-read closer to 190MB/s, at least at first.
  6. "It was 145MB/s - 98MB/s at the tail. It took approximately 39 hours to preclear 1 pass (on an i7-3770K with 16GB RAM)." "I'm preclearing my second Seagate 4TB drive now ... the first one completed in 36:20:21 using an AMD Phenom 9950 Quad Core with 8GB of RAM. CPU and memory aside, I am actually seeing better performance on a 4TB by using the Adaptec 1430SA PCI-E x4 SATA II port, as opposed to an (ASUS M3A78-T) motherboard SATA II port." Is that the XT with 5 platters or the newer 4-platter drive? I am on the SATA2 port on my mini-ITX (ASRock) desktop rig.
  7. It was 145MB/s - 98MB/s at the tail. It took approximately 39 hours to preclear 1 pass (on an i7-3770K with 16GB RAM).
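
A rough sanity check on those preclear times, as a sketch only: it assumes one preclear cycle is roughly three full passes over the disk (pre-read, zeroing write, post-read) and simply averages the quoted start/end speeds.

```python
# Back-of-envelope estimate of one preclear cycle, assuming it is roughly
# three full passes over the disk (pre-read, zeroing write, post-read) and
# crudely averaging the quoted start/end speeds.

def preclear_hours(capacity_tb, outer_mbs, inner_mbs, passes=3):
    avg_mbs = (outer_mbs + inner_mbs) / 2              # very rough average MB/s
    seconds_per_pass = capacity_tb * 1e12 / (avg_mbs * 1e6)
    return passes * seconds_per_pass / 3600

# 4TB drive, 145MB/s at the start, 98MB/s at the tail
print(round(preclear_hours(4, 145, 98), 1))            # ~27.4 hours, ideal case
```

The real runs above came in around 36-39 hours, which is consistent: the simple average overstates sustained speed across the whole platter, and the script has overhead between phases.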
  8. I'll confirm: opening this drive does destroy the case. I had one of the 3TB versions fail on me a few weeks after I voided the warranty.
  9. That sucks. I did convert a vintage Supermicro SCSI server to SATA a while back by just removing all of the backplanes. Once all of that was gone, I was able to use the SCSI bays as standard SATA (individually wired drives). I lost all hot-swap ability; it basically just became a normal case. I then had to fabricate a fan wall for it, since I had to remove the old fan brackets that were attached to the SCSI backplanes. (It is still running as a Windows server in my farm.) EDIT: fixed typo
  10. Did that server come out of a flood? It looks like it went for a good swim.
      Yes, the C204 chipsets have 2x PCIe2 8x slots and 2x PCIe2 4x slots (in 8x form factor). Some boards have a different arrangement (like an 8x slot in a 16x form factor), but the same speeds. This will not be a problem; the SASLP-MV8 cards are PCIe1 4x and fast enough for unRAID. Running an M1015 at PCIe2 4x won't saturate it with mechanical consumer drives. Technically you only need 1 channel of that controller anyway, until unRAID supports more than 24 drives.
      EDIT: I think you might want to reconsider that case. I'd hate to see you fry a bunch of hard drives from shorted-out backplanes. Mounting a standard PSU in there might be a bit tricky, as that thing is meant to take hot-swap PSUs. Then again, I'm the kind of guy that chops up Mac Pros into x86 PC cases... So sandblasting it down and spray painting it a new color sounds like something I'd do just to say I did it. Lime green unRAID chassis..
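
For what it's worth, the "won't saturate it" point can be checked with rough numbers. This is only a sketch: the ~500MB/s-per-lane figure is the usual PCIe 2.0 rule of thumb after 8b/10b encoding, and the per-drive speed is an assumed peak for mechanical consumer drives.

```python
# Rough sanity check of the PCIe 2.0 x4 claim: usable slot bandwidth vs the
# aggregate sequential demand of a fully loaded 8-port M1015.

PCIE2_MBS_PER_LANE = 500      # approx usable MB/s per PCIe 2.0 lane (8b/10b)
lanes = 4                     # M1015 running in an x4 slot
drives = 8                    # one M1015 fully populated
drive_mbs = 180               # assumed peak sequential MB/s per consumer drive

slot_mbs = PCIE2_MBS_PER_LANE * lanes
demand_mbs = drives * drive_mbs
print(slot_mbs, demand_mbs, demand_mbs < slot_mbs)    # 2000 1440 True
```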
  11. In general, more platters = better IO and seek time. The problem is that you're comparing 800GB platters vs 1TB platters; this would need all new testing. My 3TB drives with the 1TB platters preclear much faster than my 4- and 5-platter 3TB drives. The drive density is very different between the two drives; the 4-platter drive will be faster for continuous read/write by its very nature of moving more data in the same rotation.
      With a fan, mine are holding stable at 41C. I am not too worried about burning it out during preclears. After all, that's the purpose of the preclear: see if the drive is going to hold up under stress (and if it shipped with defects). The first thing I noticed when I opened the box was that it was half the thickness of my Hitachi and WD 3TB drives.
      @korith, I do plan on keeping one in its USB enclosure for a while. I am surprised at how well it performs as a USB3 drive. I usually take a Hitachi 3TB drive with me when I travel; this is going to replace it. Less luggage space and more storage space.
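
A crude way to put numbers on the platter-density point. This is a sketch under the assumption that, at the same RPM, sequential speed tracks linear bit density, which grows roughly with the square root of per-platter capacity; real drives can differ by more if firmware, cache or recording tech changed too.

```python
# Crude model of why a 1TB-platter drive reads faster sequentially than an
# 800GB-platter drive at the same RPM: sequential speed ~ linear bit density,
# which grows roughly with sqrt(per-platter capacity) if bits shrink equally
# along and across the track.

def speed_ratio(new_platter_gb, old_platter_gb):
    return (new_platter_gb / old_platter_gb) ** 0.5

ratio = speed_ratio(1000, 800)
print(round(ratio, 2))          # ~1.12, i.e. roughly 12% faster
print(round(145 * ratio))       # ~162 MB/s if the older drive starts at 145
```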
  12. That is the nature of *NIX systems: they will use all spare RAM for cache. If you give it 8GB, it will eat almost all of it. My Performance Pros ran quite fine until the one died. I will admit, I pounded on that SSD; it never had time to run its garbage collection. I have a few of the 830s in my Macs and laptop. I'll have to look at the 840's specs.
  13. I picked up several today and started the preclear. The plugs on the bottom, while they are standard, are recessed pretty deep; I had to go through about 9 cables before I found one that would plug in solid. Then the heat shot way up on them: one unit hit 58C, the other right behind it at 56C, almost instantly (about 4-5% in). I had to abort the preclear and rig up a fan to keep them cooler.
  14. #3. Just the opposite: many desktop boards that have a VT-d option in the BIOS do not work correctly even with the correct CPU. If the vendor has deviated from the Intel reference design, the VT-d might be broken. There are also boards out there that are probably quite capable of VT-d but do not have the BIOS option because the vendor does not want to support it. When working with desktop boards and VT-d, it is pot luck; most desktop boards get new revisions quite often when the suppliers change available parts or whatever parts are cheaper that month. As far as your board, the NIC is not ESXi compatible. You would need a PCI or PCIe NIC and another CPU, and even then it is not 100% certain that it will work.
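
If you want to see what the kernel itself found before fighting the BIOS, something like the sketch below works on a Linux box (run it as root if dmesg is restricted). It only shows whether this kernel saw VT-d/DMAR come up; it says nothing about whether ESXi will accept the board.

```python
# Quick sketch: check whether the running Linux kernel found a working IOMMU.
# VT-d on Intel shows up as DMAR entries in the boot log. This only reflects
# what this kernel saw; a board can still misbehave under ESXi.
import subprocess

def iommu_mentions():
    log = subprocess.run(["dmesg"], capture_output=True, text=True).stdout
    return [line for line in log.splitlines()
            if "DMAR" in line or "IOMMU" in line.upper()]

for line in iommu_mentions():
    print(line)
# No output usually means VT-d is absent, disabled in the BIOS, or broken.
```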
  15. There are lots of members running 3 SAS cards together, IBM M1015s and/or Supermicro AOC-SASLP-MV8s. The Supermicro boards are the logical choice for this; the X9SCM is the new popular one. The older X8SIL is still a strong board IF you can pick one up dirt cheap. There are also Tyan C202 and C204 chipset boards that work fine; several people use these when the SM board is not an option. Asus has some also, but I don't think too many (if any?) here have the Asus, so I cannot comment on it.
      The i3 will be perfectly fine if you are not doing transcoding (and might still do light transcoding fine). If you plan to transcode heavily with these boards, a Xeon CPU "might" be needed.
      You can build a 20-drive system with only 2 SAS cards and the 6 motherboard connectors; you could then add the additional card once you expand past 20 drives, to save up-front build costs. I would bypass the expander for a bare-metal unRAID system. Most people that use them with unRAID do so because of hardware limits on the number of PCIe slots available, or because they have virtual unRAID builds. Usually 3 SAS cards are more cost effective than a SAS card plus expander. IF you do go with the C20x chipset from any vendor, be aware that many cheaper PSUs don't work.
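
The port math behind the 20-drive figure, as a quick sketch (6 onboard SATA ports plus 8 ports per M1015/SASLP-MV8, both counts taken from the posts above):

```python
# Drive-count math for the build above: onboard SATA plus 8-port SAS cards.

MOBO_PORTS = 6        # X9SCM onboard SATA ports
PORTS_PER_CARD = 8    # M1015 or AOC-SASLP-MV8

def drives_supported(sas_cards):
    return MOBO_PORTS + PORTS_PER_CARD * sas_cards

print(drives_supported(2))   # 22 -> covers a 20-drive array
print(drives_supported(3))   # 30 -> more than the current 24-drive limit needs
```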
  16. Keep in mind the i3 does not support VT-d (with the exception of one OEM-only i3).
  17. I am desperate for space right now and I was about to start recycling older 2TB drives I had lying about, then order some 3TB drives and upgrade when they arrive. This has me thinking I should pick up a few of these and let them run as eSATA drives in my main unRAID box. If they survive until I return home from my trip in a few months, I'll open them up then. These I can just pick up now (well, when I go home Tuesday). Buying 2 of these is still cheaper than any single internal 4TB drive on the market right now. If I get 4 and install 3, the 4th is my fail-over/warranty spare, and I still come out a drive ahead (2 ahead if I put the spare into service). Too bad these are the 5-platter design and not the 4-platter ones.
  18. In general this is the case, but we have seen more than once that it was a driver error (or a driver error with certain hardware firmware) taking out an entire controller. This very system was plagued with the same errors back in 5b6, was it? It did turn out to be a bug. Changing kernels and drivers can have odd, unpredictable (and sometimes hard to replicate) effects.
      The fact that all of the drives showing issues were on a single SAS plug would make me think bad backplane or loose wire (hard to blame a wire on a system that's been racked for months). The flash drive passed Windows scan disk. The fact that I have now run 2 parity checks (correct off) after the downgrade with no hardware errors (but yes, there are sync errors) has me thinking it is not a hardware issue.
      This is not my main concern. My concern was that unRAID appeared to be correcting parity AFTER a failed drive/controller (and yes, I could be very wrong about this; I need the log files). After downgrading, I have run parity checks twice and will run a third; every time it shows the same 500-ish sync errors each pass. The question I have: are these sync errors because unRAID adjusted parity after a dropped controller, or because files were written to the red-balled system and it lost track of them after the system unflagged the drive?
  19. If I am not mistaken, can't you remove the dock and plug a standard SATA and SATA power connector into these and do your preclears before you pry them apart?
  20. Hardware was: X9SCM, Xeon 1240, RAM (4GB when a VM and 32GB when bare metal), 1x M1015 (with 4 drives), 2x SASLP-MV8 (with 8 drives each). The original crash was under the hypervisor; all subsequent reboots were as bare metal. Not having my original syslog is huge; that is the missing key to the issue. I swore I downloaded it to my laptop, but I can't find it and it is not on my flash. Hopefully there is a clue in the second reboot syslog.
  21. Yesterday I had 2 more drives redball. I was unable to do anything with it since I am out of town. I had the server reboot from an older flash backup (RC5). It came right up and all the errors went away. I performed a parity check: zero errors. I copied a few GB to each individual disk, verified the copy and then deleted the data: zero errors. I need to upgrade the server again and see if I can recreate the issues; this will have to wait until I get back in a week or two.
  22. I do not seem to have a syslog from yesterday when it all started; I have one from reboot #2. Reboot #3 is still non-responsive after an hour. I had assumed it was doing transaction replays; the older versions would show these on the console, but I am not seeing them on RC11, if that's what it is doing. syslog-2013-02-01.zip
  23. I'll agree: if you don't need the additional space right away, I would hold on to the WHS drives for a while. You never know what might happen, especially if unRAID is new to you. As far as load balancing the drives, there is no need for that; that's that many fewer drives spinning when you are using the array. Even Microsoft stopped balancing the drives in drive-pooled disks on WHS v1. I sort of like my initial files contiguous. After the initial copy, any additional data will be balanced per your split settings. I used to keep my data all super tidy and sorted by drive; now I just let the data go where it wants within my split settings.
  24. Not always true. I recently had a Hitachi that used to be an external that I tore apart (no way to reassemble it afterwards). It died a few weeks later. It showed good on the warranty check. I sent it in for warranty and they emailed me back to say the drive serial number is that of an external model and they would not fix it.
  25. "I think I can speak for him on this, I definitely wouldn't want unRAID to write data to the parity drive based on read errors from multiple disks. If there are multiple failed drives, for whatever reason, unRAID should gracefully give up and take the array offline pending intervention. The parity system is only able to handle a single drive failure. If another disk fails, writes to the parity drive shouldn't be happening at all." You are reading my thoughts. Until I can get my syslogs, I cannot verify what is happening and I cannot give Tom any useful help.
      "Are you sure that the failed drive is still red-balled and offline after the reboot? This sounds very similar to the problem I had with rc8a and for which a fix was implemented in rc10, here." After the reboot, disk 15 is still offline and I think I lost disk 16 also. I started copying the files from the emulated 15 to another server in case I cannot run recovery. It started giving me ReiserFS errors for drive 15, which is not plugged in (emulated errors?), and the array and server went unresponsive again. The GUI is not responsive.
      So I lost 2-4 drives at once, next to each other on the same SAS channel, just days after a parity check (and upgrade): possible but odd. I lost a single channel of a SAS card that has had no previous errors: possible but odd; I'd expect the whole card to go down. Driver error/unRAID error: I had this happen twice before with my M1015s and it turned out to be a driver issue. My concern is, IF parity is being updated from read errors while in a degraded state, is that going to hurt recovery attempts later? Right now unRAID is up from reboot #3 and completely unresponsive via the web GUI.
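
To illustrate the "parity can only handle a single drive failure" point: unRAID's single parity drive is normally described as plain XOR parity, and the toy sketch below (made-up byte values) shows why one missing disk is recoverable but two are not, and why parity updates during a multi-drive failure are worrying.

```python
# Toy illustration of single (XOR) parity: any ONE missing disk can be rebuilt
# from the rest, but with two disks gone the equation has two unknowns.
from functools import reduce

data_disks = [0b10110010, 0b01101100, 0b11110000]   # made-up byte per "disk"
parity = reduce(lambda a, b: a ^ b, data_disks)      # parity = XOR of all data

# Rebuild disk 1 from parity plus the surviving disks:
rebuilt = parity ^ data_disks[0] ^ data_disks[2]
print(rebuilt == data_disks[1])                      # True

# With disks 1 AND 2 missing, parity ^ disk0 only yields their combined XOR,
# not either disk's contents -- which is why writing to parity while multiple
# drives are failing silently changes what that equation resolves to later.
print(bin(parity ^ data_disks[0]))
```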