Madoc

Members
  • Posts: 7

  1. OK, I got it. The GUI does bad things if you change the vdisk path manually and then select a new one. I grabbed the XML from the backup, pasted that in, edited the path by hand in the XML, and now it shows 1/40 gig like it should and she boots.
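For anyone hitting the same thing, a rough sketch of the hand-edit described above (the VM name and paths are made up for illustration; the XML saved by the backup plugin works just as well as a fresh dump):

      # Dump the current definition (or start from the XML the backup plugin saved)
      virsh dumpxml "CentOS-Router" > /tmp/centos-router.xml

      # In the <disk> block, point <source file=...> at the new vdisk location:
      #   <disk type='file' device='disk'>
      #     <source file='/mnt/disks/ssd/domains/CentOS-Router/vdisk1.img'/>
      #     ...
      #   </disk>

      # Load the edited definition back in
      virsh define /tmp/centos-router.xml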
  2. So I have a Windows Server 2019 VM and my CentOS router VM. I wanted to have them run from an SSD in UD, so I did cp --sparse=always to the new folder. Changed the vdisk on the Windows VM: it starts up, way better performance, a good time was had by all. Changed the CentOS router vdisk: weird, it says it's 1/2 gig instead of the 30 gig I had assigned, and it won't boot. No big deal, I'll go back to the original... 1/2, won't boot, won't let me up the size. OK, not a big deal, I'll mount the vdisk from the nightly backup. 1/2, won't boot. In case these matter: I did delete the domains share (the folder/files are still there, I just got rid of the share), and I also changed the default VM save folder to a folder on the new SSD. Elp! leviathaniii-diagnostics-20200319-2140.zip
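A rough sketch of the copy-and-verify steps involved, in case it helps anyone compare (paths and file names are placeholders; adjust to your own domains layout):

      # Copy the vdisk while keeping holes sparse
      cp --sparse=always /mnt/user/domains/CentOS-Router/vdisk1.img \
          /mnt/disks/ssd/domains/CentOS-Router/vdisk1.img

      # The image should still report its full provisioned size...
      qemu-img info /mnt/disks/ssd/domains/CentOS-Router/vdisk1.img   # "virtual size" should still be ~30G

      # ...even though the sparse file only occupies what has actually been written
      du -h --apparent-size /mnt/disks/ssd/domains/CentOS-Router/vdisk1.img
      du -h /mnt/disks/ssd/domains/CentOS-Router/vdisk1.img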
  3. Thank you, that thread not only answered #1 but also the additional questions that answer raised. 2) There are ways to do it. I could use hardware to RAID 0 the two drives; then Unraid would see a 506 and a 500 and RAID 1 them down to a 500. Or I've seen people talking about using other tools (mdadm) to do Linux software RAID, so mdadm handles the RAID 0 and the rest is the same. It might not be possible to do a RAID of RAIDs with btrfs alone, but there are ways. What I'm wondering is whether it would be beneficial, or whether I'm just buying myself headaches. The RAID 1 pool plus the 500 in UD is what I was thinking before, and if it turns out my new idea is impractical or inadvisable, that's the way I'll go.
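For what it's worth, a rough sketch of the software-RAID-under-btrfs idea (device names are placeholders, and the whole thing would live outside Unraid's own pool management, which is where the headaches come in):

      # Stripe the 256G and 250G SSDs into one ~500G md device
      mdadm --create /dev/md10 --level=0 --raid-devices=2 /dev/sdX /dev/sdY

      # Mirror that against the 500G SSD with btrfs raid1 (data and metadata)
      mkfs.btrfs -d raid1 -m raid1 /dev/md10 /dev/sdZ
      mount /dev/md10 /mnt/fastpool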
  4. Google seems unable to answer my specific questions, but it might be more that I don't know the terms to search for / what I'm reading. Drives: 1x 256G Lite-On SSD, 1x 250G 840 EVO, 1x 500G 860 EVO (brand new). As an additional wrinkle, this motherboard only has 2 SATA 3 ports; the rest are SATA 2. I will fix that with a new controller down the road, but right now I have blown my budget for this project twice over. I tested the crap out of the two smaller drives and they are good. I am running UD and UD+ (super useful, especially during onboarding). How I have it set up now: I added the 256G and the 250G to the cache, which resulted in a 253G RAID 1... which seems wrong to me...
1) I wanted RAID 1 for protection from a drive failure. Is it using the extra 6 gig on the 256 as 2x 3 gig? How does it do that? However, I was thinking: I have the minimum free space set to 4.7G, so if that 'extra' 3 gig is at the END of the drive then it does not hurt me and actually helps me. Originally I was planning on running the 500G alone and using it as a VM/docker/system store, leaving the 250-ish as the write cache and calling it a day. But other configs have occurred to me.
2) Would it be beneficial to RAID 0 the 256 and 250, and then RAID 1 that to the 500? I don't want to lose Unraid's monitoring of the drives, so I was thinking the RAID 0 would have to be software too. But this would way simplify management: I'd just set the shares to cache-only and Bob's your uncle. This also gives me RAID 1 protection of the VM store. Since I'd be mixing SATA 2/3 for now and old/new drives, it'd be giving up a lot of performance... but these are VMs on SSDs; it's already going to be way quicker than the 15k RPM SAS1 drives I'm coming from. This would also give me a much larger write cache when the active VM store is small. (I'm using a plugin that backs the VMs up to the array nightly. My intention is just to delete them when I'm not using them and depend on the backups if I want them back.)
Also debating which drives to put on the SATA 3 ports. Right now I have the 2 cache drives on them and the 500 on SATA 2, but the 500 is still running SMART tests etc. Thoughts?
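On the 253G figure, a quick way to see how btrfs lays out the mismatched pair is to check the pool from the console (assuming the pool is mounted at the usual /mnt/cache; adjust if yours differs):

      # Per-device allocation plus how much space is still unallocated
      btrfs filesystem usage /mnt/cache

      # Device list and sizes as btrfs sees them
      btrfs filesystem show /mnt/cache

With a two-device raid1, every chunk is mirrored onto both drives, so the usable space works out to roughly the smaller device; the leftover few gig on the larger drive just sits unallocated.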
  5. WARNING: LONG AND RAMBLING, AND I NO LONGER REQUIRE ASSISTANCE. FEEL FREE TO IGNORE. I have hit the milestone where my old DL380 G5 server and MD3200i SAN are shut down and officially replaced by my Unraid machine. I bought the Plus licence, which is currently overkill for my new disk layout but gives me room to expand. After all, that's one of the big upsides of Unraid: out of space? Add a drive!
Networking: Yeah, I knew about that and it works. But I wanted them dynamically assigned by my router. I tried leaving the IP blank, but it apparently assigns them from its own pool. I ended up just going with a static IP and adding 'address' entries to dnsmasq on my router (which is now a VM on Unraid!).
Drives: The 128G SSD died on me. I went back and forth on a bunch of designs for what to do. From my old SAN I still have 3x 500G 15k RPM drives and 5x 2TB 5200 RPM drives, SAS of course. So I was thinking of getting a controller and popping those in. But the more I think about it, knowing my own usage patterns, that's just overkill, and it defeats my 'save money on power' objective. SAS drives do not spin down, do not sleep, and are designed to have lots of fans (that sound like jet engines) on them at all times.
So here is the layout I'm going with: 3x 4TB IronWolf in the array (only 1 in at the moment); 2x 256G SSD write cache; NEW: 1x 512GB SSD as VM/docker store. Once Unraid supports multiple caches, I'll add it as a cache as well. 2 of the 1TB 7200 RPM HDs are in my desktop; I'm going to clean them out onto the array and then get them out of there. The other 1TB HD is currently in my array because I filled the first IronWolf and the second hasn't arrived yet. Which tells me my current cold storage is 5TB; the 8TB of capacity I'm going to have should be plenty for a long time, considering I bet I could cut what I have down by at least 30% if I sat down and cleaned it out. I used to play web games during boring conference calls; my new boring-conference-call activity is cleaning/organizing data. I'm hoping to have most of it done before the parity drive arrives Wednesday.
I ran into another wrinkle. My Unraid machine is my old work desktop, a ThinkStation D30: a neat dual-CPU, 16-RAM-slot tower, basically a server board serving as a desktop. I bought a thingy off Amazon to turn the three 5.25" bays into five HD bays with a fan, and it's my 10-bay Unraid host! (11 actually, with 2 SSDs in one bay.) However! When I went to start hooking up drives I noticed something odd: 2 red ports (SATA 3), 2 orange (SATA 2) and 5 blue (SATA 2). If I'm doing the math right, the SATA 2 ports are fine for the IronWolves but not for the SSDs. I'm still deciding what to do about that. I think for now I'm just going to use my 2 SATA 3 ports for the write cache and put everything else on SATA 2. Later I might add another controller, but I'm done spending money on this for now. I've bought three 4TB IronWolves, a 512G SSD, a 1TB SSD (for my desktop, which freed up its 256 for write cache) and a new 8U wall-mount rack to replace the 32U old AS400 cabinet the old equipment was sitting in. My budget for this project is already blown, twice. (Only 4 of the blue ports are active out of the box; there were 2 dongles you can't get anymore, one activated the 5th blue SATA port and the other activated it and turned them all into SATA2/SAS, which is neat, but they're impossible to find now, and even if you can find one it costs more than a new controller and cables would.)
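For the record, the dnsmasq side of that is just a line or two in the router's config (hostname and IP here are made up; any name you want pointed at the Unraid box works the same way):

      # /etc/dnsmasq.conf (or a conf.d snippet) on the router VM
      address=/tower.lan/192.168.1.10

      # then restart dnsmasq to pick up the change
      # (systemctl restart dnsmasq on CentOS 7+)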
  6. You're right, they don't add a lot of value, but it's non-zero, and if they do die I'll replace them. My use case has always worked very well with having a 'fast' and a 'large' array. I'm already stepping down from 15k RPM 'fast' and 7200 RPM 'large'. The main thing I want from Unraid is the monitoring and being told when something is degraded. I have tons of 'to be dealt with' data that has to be sorted and such before archiving, VMs that get very sporadic use, and DBs/development stuff I'm actively playing with. More research and reading, and it looks like going btrfs RAID 5 in Unassigned Devices for the 1TBs will give me what I want, for now at least. A 256G RAID 1 write cache should be plenty; I rarely add more than 100G in a day, and even if I do, from my reading it won't fail, it'll just write to the array directly. The 120 is for critical VMs/dockers that need the performance. 'Fast' is btrfs RAID 5 in unassigned.
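A rough sketch of what that 'fast' pool would look like if built by hand (device names and mount point are placeholders; Unassigned Devices can then mount it, or a user script can):

      # Three 1TB drives: raid5 for data, raid1 for metadata is the usual pairing
      mkfs.btrfs -L fast -d raid5 -m raid1 /dev/sdX /dev/sdY /dev/sdZ
      mkdir -p /mnt/disks/fast
      mount /dev/sdX /mnt/disks/fast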
  7. OK, playing with the trial and I've almost hit the point where this will do what I need. Loving it so far, with two exceptions.
1) Networking - I want both VMs and docker to get IPs/networking from the same NIC group as Unraid (referred to in most software as 'bridge mode'/bridging). I got that working for VMs but am having trouble getting docker to do it. All I need is confirmation that it IS possible and that my config just isn't quite right; I can research more / mess with it. (There is a rough sketch of the docker side at the end of this post.)
2) This is the one I think might end up being a deal breaker, and it's the hardest to explain, so let me go with the planned HD layout: 3x 4TB IronWolf (5900 RPM), 3x 1TB WD Black (7200 RPM) (cache? btrfs RAID?), 2x 256G SSD (btrfs RAID 1 cache?), 1x 120G SSD (VM/docker store). (I know, I'll need the tier-2 licence, I'm fine with that.) I didn't realize you can only have one array until I went to create the second one. All the HDs are stuff I had lying around, other than the IronWolf drives. So the IronWolves are my base Unraid array, one parity, two data; it's all very clear there. For the 120 VM/docker store, it looks like my best bet is unassigned, with a user script to run TRIM once in a while (I've been researching/playing with this stuff for 2 days now), and backups of it to the IronWolf array periodically.
Now the problem comes with what to do with the Blacks. I want to use the Blacks as a sort of 'downloads / to be organized / non-critical VMs' high-write playground. I was originally thinking I could set it up as a cache; however, can I have multiple caches? Can a cache have parity? These drives are older and I would expect them to die first, so I want 1 parity drive or RAID 5 or what have you. Options I have explored:
A) Just add them to the array with the IronWolves; it'll work. Absolutely it will, but if I'm understanding how Unraid works, that's going to be speed-capped by the IronWolf parity drive and beat the CRAP out of it because of the high writes.
B) Nested Unraid, or go back to ESXi and run 2 Unraid VMs. Requires two licences and seems like WAY overkill / more management, and takes away a lot of the reasons I wanted to go to Unraid in the first place: the all-in-one-ness/simplicity.
C) Just add the Blacks to the IronWolf array and buy a 4TB 7200 RPM NAS drive. Still sounds like an early death for the parity drive, and a now MORE expensive parity drive.
D) Hardware RAID 5 it and leave it in unassigned. Loses me the monitoring/notifications that led me to using Unraid to replace free ESXi in the first place. Lesson learned after losing 3 iSCSI drives (2 in one RAID 5 (ow) and 1 in another). And this loses me spin control, does it not?
E) Hardware RAID 0 it and put it in the array. This one was a bit more interesting, but it's still killing a brand new IronWolf to parity old drives.
F) Use the Blacks as the cache, drop the SSDs, and pray for new features. I was thinking this, but a lot of the stuff I've come across is just 100 different use cases for 'multiple arrays would be nice' going back YEARS. So I'm guessing that since the feature isn't there, there is a reason for that, and it's not coming any time soon.
Ideas I'm still exploring: there must be something I'm missing; can I have more than one cache pool/array? btrfs - this sounds like software RAID, which would be fine, but how do I get Unraid to manage the drives? Which is why I keep coming back to making it the cache. Appreciate any tips/ideas I haven't explored. Links to documentation or Google terms greatly appreciated.
The first IronWolf arrived (I ordered from 3 different places with different shipping speeds) and I started copying off my degraded SAS RAID. That's been going for over 24 hours now. I'm REALLY hoping I don't have to wipe this machine and start over with another solution. (Yeah, I know I have no parity right now; I'm counting my degraded SAS as a backup. Once the other IronWolves arrive and the data is all transferred, I'll start the parity stuff.)
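On question 1 above, a rough sketch of giving docker containers their own LAN IPs, done by hand with a macvlan network (subnet, gateway, bridge name and IPs are placeholders; newer Unraid releases expose the same thing in the GUI as a custom network on br0):

      # Create a docker network that hangs off the same bridge the VMs use
      docker network create -d macvlan \
          --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
          -o parent=br0 lan_net

      # Containers attached to it get an address on the LAN
      docker run --rm --network=lan_net --ip=192.168.1.50 alpine ip addr

And for the TRIM user script on the unassigned SSD, something this small scheduled through the User Scripts plugin does the job (mount point is a placeholder):

      #!/bin/bash
      # Trim the unassigned VM/docker SSD
      fstrim -v /mnt/disks/vmstore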