JimmyJoe

Everything posted by JimmyJoe

  1. I just wanted to say thank you. This plugin is great! I am converting from reiserfs to xfs and I fat-fingered a slash, which ended up giving me an extra folder that caused a user share to have the same name as a disk share. Fix Common Problems found the error, and the explanation was spot-on, so I could figure out what I did wrong and fix it. Most Excellent! Thanks! Best Regards, Jimmy
  2. Upgraded and all looks good so far. Dockers and VMs are happy. Thank you!
  3. Thanks for your help. I think I am going to try to reproduce this again later when I have some time, but for now will mark this solved and just make sure to have good backups.
  4. I think I had set them as IDE. I just checked the vdisk bus for the first WinXP VM and it was set to virtio, but I never installed the virtio drivers when I installed XP. I changed it to IDE and it booted up just fine. I only installed WinXP as a test because I had the ISO handy. I'll probably try installing with the virtio drivers later, just to go through the process. Thanks for your help. I now understand why it wasn't working for the vdisk, just not how the setting got changed for one of my VMs. (A rough sketch of what that bus change looks like in the domain XML is at the end of this list.)
  5. No, I didn't do that. I didn't do that when I set up the first WinXP VM either, and it worked. Should I install the virtio drivers?
  6. So now when I try to create a new Windows XP VM, it boots into setup and then won't install because it says "Setup did not find any hard disks in your computer." I must be doing something stupid. I'm pretty sure I am following the exact same procedure I did the first time when I successfully created the first WinXP VM.
  7. Thanks for the reply. I only have the one libvirt.img file. This is a new build/upgrade and I am still testing 6.8, so no backups yet. I went ahead and re-defined one of the VMs (Windows XP), and on boot it gets a BSOD using the existing virtual disk. I tried going into Windows recovery mode to run chkdsk, but XP setup tells me no hard disk is installed; if I don't boot from the CD, it boots from the disk and then gets into a BSOD boot loop. 😕 Honestly, I'm disappointed. This is my first time using VMs with 6.8, and I've already lost my VM configurations, and one of my virtual disks is corrupted, won't boot, and I don't know how to fix it. The only thing that changed before this problem appeared is that I added a second cache drive to create a cache pool. When I rebooted to add the drive to the array, the VM wasn't even running; it was stopped. Since the VMs live on the cache drive, it seems likely that adding the drive damaged them. So I have no idea how or why this happened, and that has me concerned about the reliability of this functionality. (A rough sketch for backing up the VM XML definitions, so this is easier to recover from next time, is at the end of this list.)
  8. I'm using two Samsung 860 EVO 1TB drives in my cache pool in RAID1 and the server is NOT locking up for me when I transfer large files. I already bought the drives before I saw this thread, but I can still return them. I like tweaking and tuning stuff, so I was trying to reproduce the issues others are seeing in this thread before deciding whether to return the drives. I can copy a 50GB file to the cache pool and don't see any issues.

     My main Unraid server is still running 5.0. I recently upgraded my backup server from 5.0rc11 to 6.8. I also swapped the case from a 4U Norco 4020 to a silent mid tower, because I'm relocating the server to a different location (noise is an issue), and added the SSDs. I installed a bunch of docker containers and a couple of VMs. Tonight, when I shut down the server to add the second cache drive, my VMs were no longer visible in the GUI after restart. Don't know why; I started another thread on that issue here:

     My Hardware Components:
     • CPU: Intel Xeon E3-1220 Sandy Bridge
     • Motherboard: Supermicro X9SCM-IIF-O
     • RAM: 32GB - 4x Super Talent DDR3-1333 8GB ECC Micron
     • Controllers: 1x IBM M1015, flashed in IT mode
     • Case: Antec P101 Silent
     • Power Supply: CORSAIR HX750
     • Flash: 4GB Cruzer Micro
     • Parity Drive: 1x 4TB Seagate ST4000DM000 5900RPM 64MB 4x1000GB CC43
     • Data Drives: 5x 4TB Seagate ST4000DM000 5900RPM 64MB 4x1000GB CC43
     • Cache Drives: 2x 1TB Samsung SSD 860 EVO

     Hard drives are connected to the M1015; SSDs are connected to SATA3 ports on the motherboard.

     Multiple times I copied a 50GB file from a Win10 PC to my Unraid server over gigabit ethernet (a rough sketch of this timed-copy measurement is at the end of this list):

     Cache pool during transfer:
     Top during transfer:

     During the transfer the load average was about 2; the highest I saw was ~3. I still need to figure out what's going on with the VMs, so I couldn't test with those. But during the transfer I used several docker containers and didn't notice any performance impact, including:
     • Krusader - browsing files/folders on the server
     • CouchDB - exploring the GUI/interface
     • DokuWiki - editing wiki pages
     • Oracle Database - browsing with the console

     Everything appears to be working for me with 2 Samsung SSDs in my cache pool while copying large files. Should my test have reproduced the problem others are seeing? Anything else I can/should try? Best Regards, Jimmy
  9. Recently upgraded from 5.0rc11. I'm really liking 6.8 and all the features now built into unraid. I have my docker containers and VMs on shares set to Prefer, so the files live on the cache. I added a second cache drive and rebooted. After the reboot, the docker containers are fine but the VMs are missing from the GUI. Any help is greatly appreciated. Best Regards, Jimmy voyagerold-diagnostics-20200107-1813.zip
  10. So it's been a couple of years and this is still an issue? That's unfortunate. I'm in the process of building a new 6.8 server and was planning on using a couple of Samsung SSDs for a cache pool. Has anyone gotten that working without the issues mentioned in this thread, and if so, with which SSDs? Thanks!
  11. Yes, those are the drives. I bought mine as externals and removed the drives, but it's the same drive. I did a quick test copying a 3.5GB file from a Windows client over gigabit ethernet: writes to a parity-protected disk are 35 MB/s, writes to cache are 105 MB/s, and reads are 115 MB/s. Temps usually hover in the high 20s to low 30s; during parity checks mid-to-high 30s with a few into the low 40s. That's with the fan controller on the middle speed. No issues for me with the 1430SAs and the C2SEA. You may need to update the firmware on the Adaptec cards, but that's pretty painless.
  12. Sweet. Good to hear! Been running rock solid for me too.
  13. Got my 4 drives today, all DM. Life is good.
  14. Must resist.... good deal on a great drive.... It's like an addiction..... Do I really need more space? YES, of course I do. I just ordered 4 more.
  15. I ordered three of these and they were shipped to me quickly, but I am in the US.
  16. Thanks for posting this! The deal is limit 1. I just placed an order; hopefully it will be a DM drive. Also, Amazon has matched the pre-promo-code price of $149.99: http://www.amazon.com/Seagate-Backup-Desktop-External-STCA4000100/dp/B00829THLE
  17. I had the ASRock Z77 Extreme4 motherboard in one of my unRAID arrays for a while with 8x 4TB drives attached to the board, and they worked fine. It also worked with 2 Adaptec 1430SA cards with 4x 4TB drives attached to each controller. I think so; IBM has several part numbers, but the board is basically the same. Many don't come with the right bracket, so make sure you order one of those too if you need it. I also built an ESXi box in a Norco 4020, using a Supermicro motherboard with M1015s. I just added my first two 4TB drives on the M1015 today, which is passed through to an unRAID guest. They are preclearing for the next couple of days. So far so good; they were recognized no problem.
  18. Looks like this is dead; the Amazon price is now back up to $179. I got my 2 drives delivered today, and both of them are DM drives!
  19. Thanks, that worked for me. I configured a 500GB virtual disk on my ZFS datastore and added it to my unRAID guest as the cache drive. Unfortunately the performance is horrible. In my tests I have the following:
      1. ESXi guest for OI/ZFS/Napp-it configured with 4 WD 1TB Green drives in raidz
      2. ESXi Windows guest
      3. ESXi unRAID guest
      4. unRAID bare metal

      Here are my write speeds when copying a 3.3GB file multiple times, with consistent results:
      • ~130 MB/s - Windows guest -> ZFS over SMB (I saw a similar ~130 MB/s with iometer tests)
      • ~100 MB/s - Windows guest -> unRAID bare metal (limited by the gigabit network)
      • ~80 MB/s - Windows guest -> unRAID guest using a dedicated cache drive (limited by hard drive speed)
      • ~20 MB/s - Windows guest -> unRAID guest with a virtual disk for cache on the ZFS datastore

      Why is it so slow when using a virtual disk on ZFS as a cache drive in unRAID? Why do I get ~20 MB/s instead of ~130 MB/s? I would get better performance (~33 MB/s) if I didn't use a cache drive at all and took the parity hit in unRAID. :'(
  20. Looking at the front, power LED is on the bottom right. Hard drive LED is on the top right, it lights up through the triangle on the door.
  21. Got some better performance numbers from ZFS now. I changed from e1000 to vmxnet3 and switched from CrystalDiskMark to iometer. With 4 WD 1TB Green drives in raidz I am seeing 210 MB/s write and 260 MB/s read locally, and from a Win7 guest about 132 MB/s write and 206 MB/s read. Good enough for me. Now that I have ZFS set up, how do I use it for a cache drive in unRAID?
  22. The warranty on the external drive is 2 years. All of my DM drives show as under warranty with Seagate through either 3/15 or 4/15. YMMV as to whether they will honor the warranty on a drive purchased as an external if you send them just the internal drive.
  23. This is one 4TB drive. Yes, I just got back from Best Buy; I had them price match Amazon, and it took me 5 minutes to get it out of the case. I have done this a dozen times though; the first one took me about 15 minutes. There are several videos on YouTube that show how to take it apart. Best Buy had 4 drives, 3 DX and 1 DM. I only want DM drives. I currently have 8 of these DM drives in one of my unRaid servers, running great. DX drives run too hot for me. Ordering from Amazon, you may get a DX or you may get a DM. Here is how you can tell the difference by looking at the box. Here are some drive details and performance testing between the DX and DM drives. I did order 2 from Amazon also, and I will return them if they turn out to be DX drives.
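
Sketch for post 4: a minimal illustration of what changing the vdisk bus from virtio to IDE amounts to in the libvirt domain XML. This is only a sketch, not what Unraid itself runs: it assumes the libvirt-python bindings are available, that a domain literally named "Windows XP" exists with a single vdisk, and the IDE target name hda is a placeholder. In practice the VM editor in the GUI makes this change for you.

    import libvirt
    import xml.etree.ElementTree as ET

    conn = libvirt.open('qemu:///system')            # local QEMU/KVM hypervisor
    dom = conn.lookupByName('Windows XP')            # hypothetical domain name

    root = ET.fromstring(dom.XMLDesc())
    for disk in root.findall("./devices/disk[@device='disk']"):
        target = disk.find('target')
        if target is not None and target.get('bus') == 'virtio':
            target.set('bus', 'ide')                 # XP ships with no virtio storage driver
            target.set('dev', 'hda')                 # IDE targets are hdX rather than vdX
            addr = disk.find('address')
            if addr is not None:
                disk.remove(addr)                    # let libvirt assign a valid IDE address

    conn.defineXML(ET.tostring(root, encoding='unicode'))   # persist the edited definition
    conn.close()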
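
Sketch for post 7: one way to keep copies of the VM definitions so that a damaged or lost libvirt.img is easier to recover from. Again just a sketch assuming the libvirt-python bindings; the backup folder on the flash drive is a placeholder. A saved definition can later be re-imported with defineXML() against a fresh libvirt.img.

    import libvirt
    import pathlib

    backup_dir = pathlib.Path('/boot/vm-xml-backup')     # hypothetical folder on the flash drive
    backup_dir.mkdir(parents=True, exist_ok=True)

    conn = libvirt.open('qemu:///system')
    for dom in conn.listAllDomains():                    # running and defined-but-stopped domains
        (backup_dir / (dom.name() + '.xml')).write_text(dom.XMLDesc())
    conn.close()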
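
Sketch for post 8: the large-file test above was just an ordinary file copy from the Win10 box, but a scripted version of the same measurement might look like this. The paths are placeholders for a source file on the client and an SMB share backed by the cache pool.

    import os
    import shutil
    import time

    SRC = r'C:\test\bigfile.bin'                  # hypothetical large test file on the Win10 client
    DST = r'\\tower\cache_share\bigfile.bin'      # hypothetical SMB path to a cache-backed share

    start = time.time()
    shutil.copyfile(SRC, DST)                     # timed copy over gigabit ethernet
    elapsed = time.time() - start

    size_mib = os.path.getsize(SRC) / (1024 * 1024)
    print(f'{size_mib:.0f} MiB in {elapsed:.1f} s -> {size_mib / elapsed:.1f} MiB/s')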