m4f1050

Members
  • Posts: 256
Everything posted by m4f1050

  1. 10-4... I need a 4-port card though, know of any working 4-port ones? I ordered the other 8 TB Red NAS drive (total of 6 - 5 data, 1 parity) and the other 1 TB Crucial SSD cache drive (total of 2 cache drives - cache pooling/RAID 1/whatever you want to call it, I guess LOL), plus a Blu-ray burner for my Windows 10 VM. EDIT: I guess I can order 2 of these 2-port ASM1061 cards on Amazon for $11.99 each (2-day shipping, so it won't slow down my transition): https://www.amazon.com/dp/B005B0A6ZS Now here is one interesting card... it has an ASM1061 and 2 JMicron JMB575 chips: https://www.amazon.com/dp/B0177GBY0Y
  2. Oh, ok. I was reading the box; I guess I should've checked. I'm ditching those cards anyway and getting a PCI-E card with 2 ports (or 4 ports), but I need to find out which one to get that's SATA III and compatible. I was looking at one with an ASM1061 - will that one be compatible? Going with 6 x 8 TB on the internal SATA III controller on the mobo and adding 2 x 1 TB Crucial SSD cache drives on the card. If I get the 4-port card I can add a DVD burner for the Windows 10 VM that has the RX 460 pass-thru GPU. The 4-port card I am looking at has a Marvell 88SE9215 chipset.
  3. SASLP (PCI-E x4) -- Wow, ASUS says it has 3 x16 slots, so they crippled the 3rd "x16" into an "x4" and lied about it? Interesting...! I'm moving to 4 x 8 TB data, 1 x 8 TB parity and my 1 x 1 TB cache, and using the onboard SATA III (or are those also crippled by ASUS?), so I am going to sell the 2 controllers, the 21 x 2 TB drives, the Icy Docks, the 850-watt PSU and the Antec 1200 case. These drives have been great and still work, mainly because I don't use the array much and unRAID spins them down, so they literally haven't been used that much. (Maybe 5 or 6 parity checks and 3 rebuilds with 0 errors total that I can remember.)
  4. They are WD Caviar Green WD20EARS/WD20EARX/WD20EZRX drives, about 6 or 7 years old. The 2 x SuperMicro 8-port SAS controllers are on PCI-E x16 slots (the mobo is an ASRock 970 Extreme4, which has 3 PCI-E x16 slots - one has an RX 460 GPU and the other 2 have the controllers). The mobo has 5 internal SATA III ports and 1 external SATA III with the cable routed back inside for the cache drive; the parity drive is on one of the internal SATA III ports, and the other 4 plus the 16 from the controllers have the data drives. The system has 32GB of memory - when the VMs are off, unRAID uses all 32GB I believe? The VMs are using 12GB each = 24GB, leaving unRAID with 8GB. The drives sit in 4 x 5-bay Icy Docks, all inside an Antec 1200 case with an 850-watt single-rail PSU.
  5. Ok, so how can I speed it up even more? I have 20 x 2 TB drives - is that my problem? Well, 21 counting the parity drive, plus a 1 TB SSD for cache. This unRAID server setup is *OLD* - it's 5 years old 🤦‍♂️ I had 6 x 1.5 TB Seagate Barracuda drives when I first started this unRAID server.
  6. Ok, I set it to those values and it gave me an increase from 77 MB/s --> 92 MB/s. The 8 TB is SATA III, but the 2 TB drives are SATA II 😕 which is the reason I am moving to 8 TB. But I am still waiting on 2 more 8 TB drives - I have 3, so I cancelled the rebuild, moved stuff off one of the 2 TB drives to use it as parity, and I am just going to copy stuff over to the 8 TB drives, THEN build my array and rebuild when I have all the 8 TB drives. No sense in doing a rebuild on the 8 TB parity and then building an array of 4 x 8 TB and doing another rebuild... LOL Thanks, that helped. (A tunables sketch follows after this list.)
  7. I just swapped my parity drive with a WD Red NAS 8 TB drive, and from 8 hours (with the WD Green 2 TB drive I had before) it now says it will take 1 day and 8 hours. I have all VMs down, all Dockers off, and nothing is accessing the drives except the rebuild. The speed varies from 64 to 77 MB/s, going up and down in that range like it always has; the Green drives are SATA II (3 Gb/s) and the Red is SATA III (6 Gb/s), but I guess once I swap all the drives it will speed up somewhat. My question is, can something be done BEFORE the rebuild (e.g. defragmentation), or can I request a feature to only rebuild up to the size of the largest data drive (obviously not including the parity, which HAS to be the largest) and speed up the process? Or will it automatically speed up once it reaches the end of the 2 TB space? Thanks!
  8. I want to add to that... if you are already using VirtIO on Windows 2008 R2 (my case), you need to add a manual SCSI drive first (1 MB should be ok, you can delete it afterwards - it's just to get the drivers installed), and only then swap the type, or your Windows 2008 R2 won't boot - it will blue-screen. After I created the 1 MB drive, booted and installed the drivers, I was able to switch from VirtIO to SCSI without a blue-screen. (A command sketch for the dummy drive follows after this list.)
  9. I ran defrag on Windows 2008 R2, then I ran SDelete -z to simulate a "TRIM" by zeroing the free space. I also went ahead and took a further step: I reconverted the images with compression:
     mv win10.img win10.img.bak
     qemu-img convert -O qcow2 win10.img.bak win10.img
     mv win2k.img win2k.img.bak
     qemu-img convert -O qcow2 win2k.img.bak win2k.img
     Win 10: from a 135 gig image it compressed to a 58 gig image (the OS is using 95 gigs). Win 2008 R2: from a 130 gig image it compressed to a 60 gig image (the OS is using 63 gigs). No TRIM - I used SDelete from Sysinternals to zero the empty sectors. Got another question: is the docker.img file QCOW2? Thinking about compressing that one too. 🙂 EDIT: ***DO NOT USE SDelete*** It made the image bigger. I ran SDelete on the Win 10 image and then compressed it, and from 58 gigs it went to 97 gigs -- NOT WORTH IT -- I bet if I hadn't SDelete'd the Win 2k image it would've compressed even more. I'm adding a 2nd drive to each VM, cloning the drives, then doing the qemu-img convert -O qcow2 and leaving them alone after that. (See the conversion note after this list.)
  10. It worked on my Windows 10 VM. Now, will this work on Windows 2008 R2? ***FINGERS CROSSED***
  11. Is there a way I can make my image file smaller after I uninstall several apps that take up a lot of space (or reset Windows 10 and start from scratch)? Maybe creating a new image from the original image (but somehow only copying data sectors and not free sectors)? Right now my image file is 110 gigs and it's set to grow, but I uninstalled a lot of apps and deleted a lot of install files and other files that took up a lot of space, and right now I am only using 50~60 gigs, not 110 gigs. Thanks!
  12. Can I use an RX 580 or better (and if better, which one is the best I can use as of today?) for video pass-thru? I have the latest unRAID.
  13. Jun 25 09:44:13 MMPC sshd[20914]: Failed password for root from 58.218.198.168 port 38471 ssh2
      Jun 25 09:44:13 MMPC sshd[20914]: Received disconnect from 58.218.198.168 port 38471:11: [preauth]
      Jun 25 09:44:13 MMPC sshd[20914]: Disconnected from authenticating user root 58.218.198.168 port 38471 [preauth]
      Jun 25 09:44:37 MMPC sshd[20951]: Failed password for root from 58.218.198.168 port 51533 ssh2
      Jun 25 09:44:37 MMPC sshd[20951]: Failed password for root from 58.218.198.168 port 51533 ssh2
      Jun 25 09:44:38 MMPC sshd[20951]: Failed password for root from 58.218.198.168 port 51533 ssh2
      Jun 25 09:44:38 MMPC sshd[20951]: Received disconnect from 58.218.198.168 port 51533:11: [preauth]
      Jun 25 09:44:38 MMPC sshd[20951]: Disconnected from authenticating user root 58.218.198.168 port 51533 [preauth]
      Jun 25 09:45:01 MMPC sshd[20983]: Failed password for root from 58.218.198.168 port 53625 ssh2
      Jun 25 09:45:01 MMPC sshd[20983]: Failed password for root from 58.218.198.168 port 53625 ssh2
      Jun 25 09:45:01 MMPC sshd[20983]: Failed password for root from 58.218.198.168 port 53625 ssh2
      Jun 25 09:45:02 MMPC sshd[20983]: Received disconnect from 58.218.198.168 port 53625:11: [preauth]
      Jun 25 09:45:02 MMPC sshd[20983]: Disconnected from authenticating user root 58.218.198.168 port 53625 [preauth]
      Jun 25 09:45:26 MMPC sshd[21042]: Failed password for root from 58.218.198.168 port 61755 ssh2
      Jun 25 09:45:26 MMPC sshd[21042]: Failed password for root from 58.218.198.168 port 61755 ssh2
      Jun 25 09:45:26 MMPC sshd[21042]: Failed password for root from 58.218.198.168 port 61755 ssh2
      Jun 25 09:45:27 MMPC sshd[21042]: Received disconnect from 58.218.198.168 port 61755:11: [preauth]
      Jun 25 09:45:27 MMPC sshd[21042]: Disconnected from authenticating user root 58.218.198.168 port 61755 [preauth]
      Jun 25 09:45:50 MMPC sshd[21094]: Failed password for root from 58.218.198.168 port 61991 ssh2
      Jun 25 09:45:50 MMPC sshd[21094]: Failed password for root from 58.218.198.168 port 61991 ssh2
      Jun 25 09:45:50 MMPC sshd[21094]: Failed password for root from 58.218.198.168 port 61991 ssh2
      Jun 25 09:45:50 MMPC sshd[21094]: Received disconnect from 58.218.198.168 port 61991:11: [preauth]
      Jun 25 09:45:50 MMPC sshd[21094]: Disconnected from authenticating user root 58.218.198.168 port 61991 [preauth]
      Jun 25 09:46:15 MMPC sshd[21131]: Failed password for root from 58.218.198.168 port 18562 ssh2
      Jun 25 09:46:15 MMPC sshd[21131]: Failed password for root from 58.218.198.168 port 18562 ssh2
      Jun 25 09:46:15 MMPC sshd[21131]: Failed password for root from 58.218.198.168 port 18562 ssh2
      Jun 25 09:46:15 MMPC sshd[21131]: Received disconnect from 58.218.198.168 port 18562:11: [preauth]
      Jun 25 09:46:15 MMPC sshd[21131]: Disconnected from authenticating user root 58.218.198.168 port 18562 [preauth]
      Jun 25 09:46:40 MMPC sshd[21183]: Failed password for root from 58.218.198.168 port 33124 ssh2
      Jun 25 09:46:41 MMPC sshd[21183]: Failed password for root from 58.218.198.168 port 33124 ssh2
      Jun 25 09:46:41 MMPC sshd[21183]: Failed password for root from 58.218.198.168 port 33124 ssh2
      Jun 25 09:46:41 MMPC sshd[21183]: Received disconnect from 58.218.198.168 port 33124:11: [preauth]
      Jun 25 09:46:41 MMPC sshd[21183]: Disconnected from authenticating user root 58.218.198.168 port 33124 [preauth]
      Jun 25 09:47:06 MMPC sshd[21224]: Failed password for root from 58.218.198.168 port 45894 ssh2
      Jun 25 09:47:06 MMPC sshd[21224]: Failed password for root from 58.218.198.168 port 45894 ssh2
      Jun 25 09:47:06 MMPC sshd[21224]: Failed password for root from 58.218.198.168 port 45894 ssh2
      Jun 25 09:47:06 MMPC sshd[21224]: Received disconnect from 58.218.198.168 port 45894:11: [preauth]
      Jun 25 09:47:06 MMPC sshd[21224]: Disconnected from authenticating user root 58.218.198.168 port 45894 [preauth]
      I am getting a ton of these - has anybody set up a good docker to block these after, let's say, 5 or 10 invalid logins? Thanks! (An iptables sketch follows after this list.)
  14. I always install with my XP and my key (not even SP1 or SP2), but I have noticed I can't use my key to activate after an SP install and moving the drive to another machine...
  15. I reinstalled; it didn't take me long at all. I use it for an old piece of software that runs on XP only. All is good. But good to know about the Acronis Universal Restore. I actually own an Acronis license - not sure if my version has it, but I will definitely play with that another time. Thank you all for the info.
  16. I don't think it will work; I am just reinstalling Windows XP and the programs. If I restore, it will restore drivers and other stuff that isn't needed and overwrite the Windows installation.
  17. SeaBIOS; tried IDE, SATA, USB and VirtIO. I got further though... I am getting a generic BSOD. Here is what I managed to capture, since it reboots automatically... When I boot in SAFE MODE it stops at AGP440.SYS and then the BSOD comes up... EDIT: I mounted the image and renamed AGP440.SYS, but now it's getting stuck on MUP.SYS... UGH!
  18. I have a Windows XP machine running under VMware Player and I would like to move it to my unRAID box. I have tried many things, like creating a Windows XP VM and using "qemu-img convert -f vpc -O raw orig.vhd new.raw" and pointing the VM at this new converted image, but the farthest I have gotten is: "Booting from Hard Disk... No bootable device." Any ideas? (A conversion sketch follows after this list.)
  19. Not sure. I use a Radeon HD 6450. I believe you can also use 7450/8450/8490 and R5 230/235/235X.
  20. Same here....
      *** Running /etc/my_init.d/00_regen_ssh_host_keys.sh...
      *** Running /etc/my_init.d/40_install_xeoma.sh...
      grep: /config/xeoma.conf: No such file or directory
      /etc/my_init.d/40_install_xeoma.sh: line 131: /config/xeoma.conf: No such file or directory
      *** /etc/my_init.d/40_install_xeoma.sh failed with status 1
  21. You can boot from one x16 slot and pass it through as one of the gaming cards. The unRAID web GUI got steroidized. LOL (You can turn off the gaming PC from the web GUI if needed, from any machine on the network, and keep that PCI-E slot for anything else needed, like a SAS controller.)
  22. Even cache-only shares appear in /mnt/user/? Since when? Was my Samba config file deleted when I upgraded or something? I had it set to NOT show up/be shared. And I can't do anything - my unRAID is FROZEN. AAAAAAAAAAAAARG!!!!!!!!!!! I'm out.
  23. "/mnt/cache/custom is part of the 'custom' user share and thus will also appear at /mnt/user/custom. Deleting /mnt/user/custom will therefore delete the files regardless of whether they are on the cache drive or the array drives. You did not mention what you did to try and stop mover moving the files to the array. The correct way to do this is to set the share as Cache Only."
      I had it set up to Cache Only!!! This explains the "system" folder (and it CAN'T be changed). WTH??? VM Manager: Libvirt storage location: /mnt/user/system/libvirt/libvirt.img. domains was in Docker settings, but I changed it to /mnt/cache/custom/domains. tmp was in a docker and I moved it to /mnt/cache/custom/tmp. I'M PISSED... everything good I had to say about unRAID is GONE! Jesus Christ....
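For the parity-sync speed tuning in posts 6 and 7 above: the values referred to there are unRAID's disk tunables (Settings > Disk Settings). A minimal sketch only - the tunable names below are the usual md_num_stripes / md_sync_window / md_sync_thresh ones, but the numbers are illustrative assumptions, not the values from the thread - and they can be tried temporarily from a console with mdcmd:

    mdcmd set md_num_stripes 4096    # more stripe buffers for parity operations (illustrative value)
    mdcmd set md_sync_window 2048    # larger sync window, often the biggest win on SATA II drives (illustrative value)
    mdcmd set md_sync_thresh 2000    # keep just below md_sync_window (illustrative value)

Values set with mdcmd are lost at the next restart; changing the same fields in the web GUI makes them persist.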
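For the VirtIO-to-SCSI swap in post 8 above: a minimal sketch of the dummy-drive trick, assuming the VM's images live under /mnt/user/domains/win2k8/ (that path is an assumption, not something stated in the thread). The point is to attach a tiny extra disk on the SCSI bus first, so Windows 2008 R2 loads the virtio-scsi driver before the boot disk itself is switched over:

    # throwaway 1 MB image to attach to the VM as a second disk on the SCSI bus (path assumed)
    qemu-img create -f raw /mnt/user/domains/win2k8/dummy.img 1M

Add it to the VM as a second vdisk with the SCSI bus selected, boot and install the driver from the virtio driver ISO, shut down, change the main disk's bus to SCSI, and then delete the dummy disk and its image file.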
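For the image shrinking in posts 9 through 11 above: qemu-img convert with no extra options already skips unallocated and zeroed clusters, which is where most of the savings came from; for actual compression qemu-img also accepts a -c flag. A minimal sketch reusing the file names from post 9:

    mv win10.img win10.img.bak
    # -c writes compressed qcow2 clusters; drop it for a plain sparse re-copy
    qemu-img convert -c -O qcow2 win10.img.bak win10.img

Compressed clusters are decompressed on read, so there is a small runtime cost; keep the .bak file until the VM has been booted and checked.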
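For the SSH brute-force noise in post 13 above: the usual container answer is a fail2ban docker watching the syslog, but a lighter-weight sketch is an iptables rate limit with the recent match - the thresholds and rule placement below are illustrative assumptions, not a tested unRAID recipe, and the rules do not survive a reboot unless re-applied (e.g. from the go file). Not exposing port 22 to the internet at all, or disabling password logins for root, is still the better fix.

    # track new SSH connections per source IP, then drop any source that opened 5 or more in 10 minutes
    iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --name sshbrute --set
    iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --name sshbrute --update --seconds 600 --hitcount 5 -j DROP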
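For the VMware-to-unRAID move in post 18 above: -f vpc is the format flag for VirtualPC/Hyper-V .vhd files, while VMware Player normally stores its disks as .vmdk. If the source really is a vmdk, the conversion would look more like the sketch below (the file names are placeholders); the resulting raw image is then used as the VM's primary vdisk, with SeaBIOS and an IDE bus since XP has no VirtIO or UEFI support out of the box.

    # convert the VMware disk to a raw image for the unRAID VM (file names are placeholders)
    qemu-img convert -f vmdk -O raw WinXP.vmdk vdisk1.img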