jesseasi

Everything posted by jesseasi

  1. I have a Norco 4224 with a Supermicro X9SCM-F, Intel Xeon E3-1270, and 32GB RAM, running ESXi 7.0. I run a couple of VMs: two Windows machines (one for downloading) and an unraid server. All 24 drive bays are populated, and as I need space I have been pulling smaller drives out and replacing them with bigger ones. The 24 drives are attached via SAS 3008 controllers (each supporting 8 drives), and I use a couple of the onboard SATA ports for the ESXi OS and datastores. I recently got an Intel NUC 12 Extreme i9 to run my Plex server and handle any 4K transcoding; it came with 10GbE, and I have a UniFi aggregation switch. I added an Intel X540 dual-port 10GbE NIC to my server (in the 4th PCIe slot). I am not getting speeds from my VMs any faster than 1Gb/s. I passed one of the 10GbE ports through to unraid, and I am getting 3-4Gb/s from the NUC to unraid, but sustained reads of barely over 1Gb/s when transferring a file from unraid to the NUC. I think the issue is that my motherboard/CPU combo has become outdated and my PCIe slots can't handle the bandwidth. Does anyone have any good recommendations for an upgrade? I don't need to do any transcoding on the server - at least not as part of my plan. I would like a motherboard that supports 10GbE networking, can run ESXi 7.0 with a few VMs, and can support the higher throughput from my SAS 3008 cards. As for the case, I am open to upgrading the whole thing to something more robust. Appreciate any suggestions.
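     For what it's worth, my plan is to separate the network path from the disks with a raw iperf3 test (just a sketch, assuming iperf3 is installed on both ends; <unraid-ip> is a placeholder):
       iperf3 -s                          (on the unraid VM)
       iperf3 -c <unraid-ip> -P 4 -t 30   (on the NUC)
     If that shows close to 10Gb/s, the X540 and the PCIe slot are probably fine and the bottleneck is the array reads; if it also tops out around 1-3Gb/s, the problem is somewhere in the NIC/PCIe/network path.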
  2. So far so good! Thank you so much!!!!!
  3. Can someone tell me what I am doing wrong? I would like the parity check to run once every 2 months: start at 11:30PM each night, run for 12 hours until about 11:30AM, then repeat each night until the whole check is finished. I have attached all my settings. Right now the parity check seems to start around 10PM and stops every night at 12:30AM. I have 18TB drives, so at this pace it will probably take 3 months to finish a parity check. Anyone have any ideas?
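     What I think I am asking for, written as a cron line (just a sketch on my part, assuming the scheduler is set to Custom - I may have the fields wrong), is:
       30 23 1 */2 *
     i.e. start at 11:30PM on the 1st of every 2nd month, with the nightly 12-hour pause/resume window then coming from the increment settings (the Parity Check Tuning plugin, if that is what handles the pause/resume) rather than from the schedule itself.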
  4. Thank you so much! Renaming that secrets file has got me back online. Still dealing with slower than normal transfer rates. But I am making progress again.
  5. Ok. I can’t even find this file to rename / delete it.
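     In the meantime I have been trying to track it down from the console with something like this (the name pattern is just a guess on my part, since I am not sure of the exact file name):
       find /boot /etc /var -iname '*secrets*' 2>/dev/null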
  6. The computer that was the source was a Windows 10 VM running on an ESXi server. Yes, shares were mapped - this is my fault. I have since turned off that VM. I deleted every encrypted file from unraid that I could find. I will look into that secrets suggestion. Thank you so much for the response - I will report back later.
  7. I was hit with a ransomware virus, Phobos. They wanted $20K USD to decrypt everything. Over 15 years of data lost. I am not paying that. I have started the process of restoring what I have from backups and downloading stuff again. But I was so worried about my server still being infected. Data transfers were painfully slow - only 20MB/s. I removed the parity drive and it got up to 38MB/s. But before this virus I was able to transfer at 110MB/s to my cache drive and much faster than 20-38MB/s to non-cached shares. So I became worried maybe there was still a virus... I found a post where this command was to be run:
       docker run --name ClamAV -v /mnt/user:/scan:ro tquinnelly/clamav-alpine -i
     I ran it and it did some stuff, but suddenly all my drives and shares started disappearing and reappearing. I did a shutdown and rebooted. Now all my drives and shares show up and everything looks OK, but I can't access any shares over the network. I have no idea what is going on. Where do I start - what do I do? tower-diagnostics-20221009-0702.zip
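     One thing I noticed along the way (noting it here in case I need to run the scan again): because the command uses --name ClamAV, a second run fails while the old container still exists, so it has to be removed first:
       docker rm ClamAV
     and then the same docker run line can be repeated.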
  8. I guess if it ain't broke - don't fix it. My main computers are Macs, so I always have to run a virtual Windows machine just to run the vSphere Client, or RDP into one of my virtual Windows machines to run vSphere. My CPU is not that old - it is an Intel Xeon E3-1270 3.4GHz running on a Supermicro motherboard. So I just thought maybe a newer ESXi would have features to make it easier to manage - maybe an iOS app? Otherwise everything has been rock solid. Up to 80TB of storage space. I love unraid.
  9. It has been a LONG time since I have been back to these forums. I like the new clean look. I guess that is a testament to how reliable ESXi 5.5 and Unraid have been. I am thinking about upgrading from ESXi 5.5 to the latest version, 6.7. Will Unraid still work under 6.7? Are there any tricks or tips for doing this? Sorry if this has been covered - I am just not sure where to look to find this. Thanks!
  10. Well, that was super easy. I bought some Kapton tape - it arrived today, and it took me 10 minutes with some reading glasses and tweezers to cut and apply the tape. Plugged it in and it spun right up!
  11. OK, well, if you have an older-model Norco 4224 (not sure when they updated them), then when you install one of these power-disable drives they do not spin up. Now I need to find a hack, an adapter, or see if Norco can sell me an updated backplane. Can anyone here point me in the right direction?
  12. I am looking to purchase some 10TB drives to upgrade my 24-drive unraid system. Ideally, once I have a 10TB parity drive in position, I could add a few more and pull out 7-8 of my 10-year-old 1.5TB drives. The case is not that old though - probably 4-5 years. As documented here - https://www.hgst.com/sites/default/files/resources/HGST-Power-Disable-Pin-TB.pdf - many new drives are coming with a new Power Disable feature. Apparently you need a power supply that supports this feature, or you buy an adapter that removes Pin 3. However, my older Norco case has a backplane that supplies all the connections for the drives. Does anyone know: if I get drives with this Power Disable feature, will they work? I prefer not to have to cut wires or put tape over pins. If you are in the know - please let me know. Thank you.
  13. Tried everything I could - finally thought I should check unraid for updates. I was running 6.2. I installed 6.2.1 and that fixed it!
  14. Control-C. I get a message that the configuration has changed and that the BIOS may need to be set up. In the BIOS settings of the LSI cards there are options to enable BIOS only, OS only, or both BIOS and OS. What should that setting be? Each controller sees all 8 of the attached drives in its BIOS. ESXi also sees the controllers and I have them set to pass through. I just don't know what to do.
  15. My ESXi setup has been rock solid for the last several years. The other day the power went out, and when it came back on unraid showed all drives missing. I have done everything I can think of - everything has been running so smoothly for so long that I am now pretty rusty at trying to fix this. The Supermicro system sees the controllers on bootup. Was there a BIOS setting for the controllers? Please help. What can I do?
  16. This is also posted on a popular auction website, but I would rather sell to a prospective unraid user. I just rebuilt my unraid server, upgrading to the 24-bay version. This case has served me for the last 3 years, running quietly away in my little server closet. I have made some modifications to it and even got more parts to make it more effective at cooling and quieter. You can see all the pictures here: http://www.smugmug.com/gallery/43677294_3RB3Gk
     Everything here is sold as is. I can tell you that when I took it all apart it was all working, but I am not going to make any guarantees or warranty anything. What you see is what you get. I have been boxing everything as I go, so hopefully my pictures can answer all your questions. Additional pictures can be seen here, along with all the goodies. Here is what you get:
     1 - Norco 4220 case. No dents, normal wear and tear. The case has had all the fans replaced with orange Thermaltake fans. Some have heat sensors to auto-adjust speed; some of those sensors are broken, but look further in this post for something better. The case was modified over the years, removing the mounts on top of the drive cages so that I could add 4 additional drives up there. I had the original 3.5" cage and I have added that back, but if you plan to mount a slimline DVD you may need to tweak it a little. If you take a closer look at the pictures you will see I have removed some small tabs where you would mount the slimline DVD player. If you plan to use this as an unraid or server build, you don't need them or you can make it work.
     1 - Heavy-duty rail kit. For ease of shipment I am going to mount the rail kit to the case during shipping.
     10 - SFF-8087 breakout cables. Over $100 value, as these cables go for at least $15 each:
     4 - NORCO C-SFF8087-4S discrete to SFF-8087 (reverse breakout) cables (red ones in the bag)
     6 - Additional SFF-8087 cables that are better - I forget the brand, but these have the sheath (black/blue ones in the bag)
     1 - 120mm fan bracket - if you prefer, you can remove the bracket holding the five 80mm fans and replace it with the bracket that holds three 120mm fans.
     3 - 120mm fans, as seen in the pictures.
     2 - Supermicro AOC-SASLP-MV8 controllers (PCI Express cards). Over $200 value. Sold as is; they worked when I removed them from my unraid configuration.
     Overall this is a great value for anyone who has been interested in one of these cases. There is nothing wrong with any of it. I just upgraded my case, and instead of throwing all this away I would rather someone else get it and enjoy it. I estimate that brand new all this stuff cost over $800. I am selling it today, no reserve. Don't want to wait - buy it now and it is all yours. I would like $300 for all of it, but I will accept any legit offer from a fellow unraid user. I have other stuff too - motherboards, CPU, memory. If you live near 91791 you can come pick it all up.
  17. Any plans for 5.05? My old way of using the plpbt.iso does not seem to work anymore - I have to manually select the boot process each time the VM needs to be restarted. Currently running 5.05 and anxious to give this a try. Thanks!
  18. Yes, that is what I have been doing. But I also unplug the hard drives used for the ESXi VMs. During the boot sequence you may have to hit F11 and make sure you boot from the unraid USB drive. I don't know what would happen if you left the VM/ESXi drives plugged in - unraid would likely "see" them and want to set them up as available new drives. It may also detract from any testing you are trying to do.
  19. Yes, the LEDs light up on the card. Whenever I run any of the commands to clear the firmware, I get "no card found". Not sure what to do... maybe the card is bricked...
  20. Anyone with any ideas here? I think I need to get the original IBM firmware back on the card before I can start the whole process.
  21. Seems that people reported issues with Ivy Bridge. At one point I was almost certain that was my problem, but after swapping CPUs I still had the same problems. I think the issue for me was the two Adaptec 1430 cards and/or maybe the brand new MV8 that I had bought. I was able to return my Ivy Bridge CPU (bought it at Amazon). I think I paid about $340 for the Ivy Bridge 3.2, but for $50 more I was able to get a Sandy Bridge 3.4 from MicroCenter. For the price I will take the 80-watt part's higher power consumption in exchange for the added speed. I plan to run 4-5 different VMs with some video transcoding, which will be a CPU hog. I think Ivy Bridge is OK to use.
  22. Success! After who knows how many hours into this project, I was only able to flash 2 of the 3 M1015's that I got in, but I removed the two Adaptec 1430's and one MV8. I am currently running two M1015's and one MV8. For some reason this combination works. Rebuilding my parity drive right now at 78MB/sec. So far everything is looking good. I do want to find a way to flash my "bricked" M1015 and get it back in my system. Feels good to see it all working. Thank you everyone for all your help.
     Some quick notes for anyone else that tries this, running ESXi 5.1 and Unraid 5.0-RC11:
     - I had started with an Ivy Bridge CPU. In the course of troubleshooting I switched to a Sandy Bridge. In the end I don't think it matters.
     - Unraid does not need any more than 1 CPU and 4GB of RAM; it can probably run just as fast with only 2GB. In the VM you need to reserve the full amount of memory for unraid.
     - The Supermicro board can flash the M1015 using the UEFI shell. Took me a long time to figure it out, but you can follow the steps here: http://lime-technology.com/forum/index.php?topic=20761.msg186485#msg186485
     - If you choose to use the MV8 controller, be sure to add the MV8 hack found on the 1st page of this thread; ESXi 5.1 needs it. Also, if you remove a passthrough device and then add it back later, you need to reapply the hack - when a passthrough device is removed, its settings (and the hack) are removed from the VMX file.
     Well worth all the effort. Thank you to everyone and to this thread, which is a wealth of information. Now if I can just fix my M1015 that would not flash.
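     For anyone following along, the UEFI shell steps were roughly along these lines (a sketch from memory - the exact file names come from the firmware package you download, and the SAS address is on the sticker on the card):
       sas2flash.efi -listall
       sas2flash.efi -o -e 6
       sas2flash.efi -o -f 2118it.bin -b mptsas2.rom
       sas2flash.efi -o -sasadd 500605bxxxxxxxxx
     The -e 6 step erases the whole flash (including the SAS address), which is why the -sasadd step is needed afterwards.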
  23. I was able to flash 2 of the three M1015's I have in UEFI mode. The 3rd is not working - reporting that no card is found when trying to do so.
  24. I have 4GB RAM and just one CPU in the setup. I am currently switching over from 2 MV8s and 2 Adaptec 1430s to 2 M1015s and one MV8. Will see if that helps.
  25. Just purchased 3 M1015's off eBay. The process to flash these to IT mode has been a long one. I was using the steps found here: http://forums.laptopvideo2go.com/topic/29059-sas2008-lsi92409211-firmware-files/
     I ran into the dreaded PAL error, which means that my motherboard was not going to work. After trying 3 different boards, and managing only to erase the firmware from my cards, I found this thread, which uses an EFI shell to finish the job. The EFI feature is available on my Supermicro X9SCM board: http://lime-technology.com/forum/index.php?topic=20761.msg186485#msg186485
     Two of my cards flashed perfectly after this. But one of my cards may have been corrupted during the process of trying to find a working board. When trying to run any of these commands I get "no mr controllers found". It seems I may have bricked my card. Not sure where to go now. I think I need to find a way to restore the original IBM firmware to the card and start over. Unfortunately I am having no luck at all trying to find a way to restore it. There is an IBM file called megacli.exe, but when I try to run it I get "not enough extended memory to load application". I discovered I needed to make a different boot disk that would support hi-mem (a Windows-based boot disk). But what now? What command will help me recover my M1015?
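     From what I have pieced together so far, the recovery sequence people describe is done from a DOS boot disk on a board that doesn't throw the PAL error - so treat this as a sketch, with the file names coming from the usual M1015/LSI flashing packages and the SAS address from the card's sticker:
       megarec -writesbr 0 sbrempty.bin
       megarec -cleanflash 0
       (reboot)
       sas2flsh -o -f 2118it.bin -b mptsas2.rom
       sas2flsh -o -sasadd 500605bxxxxxxxxx
     If that is right, I may not even need the original IBM firmware back first. I will report back once I find a board that will actually run megarec.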