SG872

Everything posted by SG872

  1. So I found a tool (UFS Explorer) that can mount and read the data off the disks that say "ZFS member" in Windows; the tool recognizes them as XFS-formatted. I am thinking the other XFS disk is unmountable because it was the parity drive, which holds only parity blocks rather than a filesystem (a sketch of how single-disk XOR parity works follows this list).
  2. Using your plugin I found I can only mount one of my four drives; two of them show as "ZFS member" (I experimented with FreeNAS at one point but rebuilt the array and everything was hunky-dory). What I am trying to do is recover some data from the disks I have removed (the array was rebuilt onto 2x 8TB), as a few files couldn't be read from backups and TBH I'm not sure which disk the data was on. I think sde was my parity disk; if that is the case, would that explain why I am unable to mount it? Any idea what I could possibly do with these "ZFS member" disks?
  3. To be honest I'm not sure why they say ZFS; they were possibly originally in a ZFS array, but since then they had been moved to unRAID and had been working fine. These are the same four drives, never fully removed from the chassis, just unplugged.
  4. Tried starting it, no change. I will copy what I need off the currently working disk and then try the others one by one. Thanks for your help in pointing out this tool.
  5. The Unassigned Devices plugin from Community Applications? Edit: with that I see all four drives under Unassigned Devices, however I am unable to mount more than one. Does it matter that the array isn't started?
  6. So I rebuilt my array from 4x 1TB disks to 2x 8TB and restored all the files from backup; however, a few files for whatever reason cannot be read from the backup source, so I would like to retrieve them. I still have the disks that contained the data, untouched, just removed from the array. Prior to removing the old disks I did run through the New Config option. Since I did this, am I unable to access the data, or can I mount it in an unprotected way?
  7. There might be something else going on here; now on startup the IPMI is stating "Memory not installed". It appears like something may be loose, or the motherboard might be borked.
  8. The time frame was the flooding (which shut down production), which caused HDD prices to skyrocket, and HDD manufacturers reduced warranties at the time as well (presumably to cut costs). There is a specific model of Seagate enterprise 3TB drive that suffers from the same problems; we have had a near 100% failure rate in about a year of use. I do agree to take the Backblaze study with a grain of salt, however there was a major problem with Seagate 3TB disks.
  9. I had two VMs running on my unRAID setup; one was idle, the other was transferring a few TB using FreeFileSync from an iSCSI LUN to a network share (not something I ever really do, but I am migrating stuff from different arrays (Synology and FreeNAS) to do a complete rebuild). Anyway, during the process everything on unRAID became completely unresponsive. I grabbed a screenshot of the IPMI when it locked up, and now on startup I get a kernel panic. Hardware is as follows: ASRock Rack E3C236D4U, E3-1230v5, 2x 16GB Kingston ValueRAM KVR21E15D8/16, 2x Samsung 850 EVO 250GB (cache), 4x WD 1TB Reds, 1x Mellanox ConnectX-2 10GbE NIC (MTU set to 9k). unRAID flash drive: SanDisk Cruzer Blade 32GB USB 2.0 (SDCZ50-032G-B35). VMs: Windows Server 2016 Standard and Windows 7 Professional x64 (the one doing the syncing). What should I do? I imagine I can reinstall, but I would rather not have this problem again. I am currently running Memtest86+ on the DRAM, no errors so far. Is there a log I can grab from somewhere? I also attached a video capture, but its frame rate doesn't capture everything. VideoCaptured-on09-09-2017-at-18-33-38.avi
  10. The thing is, in unRAID it isn't consistent. The disks show as 1TB instead of 931GiB, but when you create a vdisk of, say, 700GB, it is actually 700GiB and not 751GB (a worked conversion is sketched after this list).
  11. Is there a way/option to have the UI show disk capacities in the base-2 system instead of base-10? For example, instead of 1TB it would show 931GiB, and the same would go for used space. You know, where 1KiB = 1024 bytes instead of 1000 bytes.
  12. Wondering if anybody has any input on my previous post. I did look up Turbo Write; I will have to re-set up my unRAID system and play with that. Here's another thought though: what happens in terms of performance if the parity disk is an SSD while the data disks are HDDs?
  13. So now I have two questions: what is Turbo Write, and can you have a device (or multiple disks) outside of the array passed through to the VM? I assume you cannot use such a disk on any other VMs, as it is assigned to one specific VM. The local "C" drive will be on an SSD as its capacity will fit, but I want more I/O performance for my "D" drive, which would be on mechanical disks, than XOR parity can provide. I suppose another option is to use the 1TB disks (no parity), provision say 500GB on each disk as a vdisk for the VM, and have Windows do a software RAID 5? Not a big fan of that. Edit: I don't intend on having the disks spin down. How's the write performance of unRAID's version of RAID 1, say if I got 2x 3TB or 4TB drives?
  14. I have been experimenting with unRAID, FreeNAS, and a little bit of VMware (but I don't have a hardware RAID controller). Between unRAID and FreeNAS 11 I find unRAID to be much more user-friendly and easier to manage, set up, and configure VMs with; the state of VMs in FreeNAS 11 is severely lacking. However, there is one thing that is really hurting me with unRAID: the VM I intend to use will have 90GB allocated to its local "C:" drive and around 2TB or so for its "D:" drive (Windows Server 2016). For hardware I have: Xeon 1230v5, 32GB DDR4-2133 ECC, 2x Samsung 850 EVO 250GB, 4x (old) 1TB WD Reds (2.5"), dual GbE LAN, Mellanox 10GbE. In my experience there is no way to cache writes for the second vdisk (drive D) on the cache drives, as it would be way too large. So when writing to the second vdisk (drive D) on a single 1TB drive with one parity disk, I hit a whopping 22MB/s, and if I hit all 3x 1TB drives at once I get writes of 7.1MB/s per disk, which is ridiculous (a sketch of why parity writes are this slow, and what Turbo Write changes, follows this list). I know these are not fast drives, but writing with RAID 5 instead of XOR parity seems to be better. Is there some way I can cache writes to the SSD (or RAM) at all, or are there any recommendations for increasing throughput? I know FreeNAS can cache vdisk (zvol) writes, but it is a completely different product (mostly targeted at NAS usage, and I have a perfectly good DS1812+ filled with 3TB drives running, so no real reason to replace that). I really want to use unRAID because of the great features it enables; sure, I could use Hyper-V on Server 2016, but unRAID gives me the flexibility to run VMs independently. The only option I can think of is replacing my 1TB drives with SSDs, which isn't exactly inexpensive.
  15. I was reading through some of the documentation and noticed this (in regards to VMs): IMPORTANT: Do NOT store your active virtual machines on a share where the Use Cache setting is set to Yes. Doing so will cause your VMs to be moved to the array when the mover is invoked. (I had a VM disappear because I set this wrong.) I currently have a 120GB NVMe MLC cache disk and a 250GB 850 EVO (with a 110GB disk for Server 2016). I disabled the cache for the "domains" share; my question is, what would be the best way to add my 4x 1TB WD Reds to the VM as disks and take advantage of the cache? Can I create another share with the cache on and add a disk to the VM specifically, or is that ill-advised? Should I just use an SMB share and attach that to the VM? (Not the way I really want to do it, though.) Are there any compatibility issues with using an Intel X520-DA1 or a Mellanox ConnectX-2 EN SFP+ 10Gb Ethernet adapter with the system (in terms of driver support)? Is it possible, and is it supported, to add that NIC to a VM for use only with that VM? (My plan is to have all my bulk data traffic (SMB, iSCSI) on a separate VLAN.) Is there a reason why my isos and the system share are by default on the cache? (I currently don't have that set up with parity.) If you have to rebuild from a failed drive, do you need to stop the array first? Why are so many settings disabled until the array is stopped (even changing the hostname)?
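
Regarding post 1 above: a minimal, hypothetical sketch of why a dedicated parity disk has nothing mountable on it. This is not unRAID's actual implementation, only an illustration of single-disk XOR parity: the parity disk stores the byte-wise XOR of the data disks, so it carries no filesystem of its own, yet any one missing data disk can be rebuilt from the survivors plus parity.

```python
from functools import reduce

# Hypothetical example: three data "disks" as equal-length byte strings.
data_disks = [b"\x10\x20\x30", b"\x01\x02\x03", b"\xaa\xbb\xcc"]

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length byte strings."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

# The parity disk holds only the XOR of the data disks -- no filesystem,
# so there is nothing for a filesystem tool to mount.
parity = xor_blocks(data_disks)

# If one data disk is lost, XOR of the remaining disks plus parity rebuilds it.
lost = data_disks[1]
rebuilt = xor_blocks([data_disks[0], data_disks[2], parity])
assert rebuilt == lost
```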
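
Regarding posts 10 and 11 above: a quick worked conversion between decimal (base-10) and binary (base-2) capacity units, showing where the 1TB-vs-931GiB and 700GiB-vs-751GB figures come from. This is plain arithmetic, nothing unRAID-specific.

```python
# Decimal units (what drive makers label capacities in)
TB = 10**12
GB = 10**9
# Binary units (what operating systems often report)
GiB = 2**30

# A "1TB" disk expressed in binary units:
print(f"1 TB    = {1 * TB / GiB:.2f} GiB")   # ~931.32 GiB

# A 700GiB vdisk expressed in decimal units:
print(f"700 GiB = {700 * GiB / GB:.2f} GB")  # ~751.62 GB
```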
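
Regarding posts 12–14 above: a rough sketch of the I/O cost behind slow parity writes and behind Turbo Write (reconstruct write). In the default read/modify/write mode, each write also reads the old data and old parity before writing both back, so sustained speed falls well below a single disk's rate; Turbo Write instead reads the matching block from every other data disk and writes parity fresh, trading spun-up disks for speed. The accounting below is the generic description of these two modes, not measured unRAID behaviour, and the function names are mine.

```python
# Illustrative only: rough per-block I/O accounting for the two parity write modes.

def read_modify_write_ios(n_data_disks: int) -> dict:
    # Default mode: read old data block + old parity block, compute new parity,
    # then write new data + new parity. Cost is constant regardless of array
    # width, but it bounces reads and writes on the same two spindles.
    return {"reads": 2, "writes": 2, "disks_spun_up": 2}

def reconstruct_write_ios(n_data_disks: int) -> dict:
    # "Turbo Write": read the matching block from every OTHER data disk,
    # recompute parity from scratch, then write new data + new parity.
    # Streams sequentially, but every disk in the array must be spinning.
    return {"reads": n_data_disks - 1, "writes": 2,
            "disks_spun_up": n_data_disks + 1}

for n in (3, 8):
    print(n, "data disks:",
          "RMW", read_modify_write_ios(n),
          "| Turbo", reconstruct_write_ios(n))
```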