
Bizquick

Members
  • Posts

    96
  • Joined

  • Last visited

Everything posted by Bizquick

  1. I've been getting drives here and there, getting my backup box ready to test this. Hope to see an RC soon.
  2. No, the Dockers access the remote shares after the TrueNAS VM is up. If I could get native ZFS working and it wasn't so difficult, I might have a different way to do all this. The native ZFS video didn't really help with how to set up the shares and keep them working.
  3. OK, that helps with one issue. Now I just need to figure out mounting the remote shares after the VM starts.
  4. Not sure if this is the right place for this, but I was wondering if there's a way to set up a startup delay on Dockers. I was recently setting up a VM for a ZFS pool. I followed both of Space Invader One's ZFS videos; I tried the native ZFS method and didn't really like it, so I went back and used his VM method and set up a TrueNAS VM instead. Anyway, long story short: doing it this way I need to use remote shares, and my VM has to boot before those can mount; then my Dockers access those too. Is there any way to set a delay so the Dockers and the remote share mounts come up maybe a minute or two after the VM service starts?
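One way this kind of delay could be scripted is to poll until the VM's export is actually mounted before starting the dependent containers. This is only a sketch: the NFS address, mount path, and container names below are placeholder assumptions, not anything from a real setup.

```shell
#!/bin/bash
# Sketch of a startup script: wait until the TrueNAS VM's export is mounted,
# then start the containers that depend on it.
# The NFS address, paths, and container names are placeholders.

is_mounted() {
  # True if $1 appears as a mount point in /proc/mounts.
  grep -qs " $1 " /proc/mounts
}

wait_for_mount() {
  # Poll until $1 is mounted or $2 seconds pass; 0 on success, 1 on timeout.
  local path="$1" timeout="${2:-120}" waited=0
  until is_mounted "$path"; do
    [ "$waited" -ge "$timeout" ] && return 1
    sleep 5
    waited=$((waited + 5))
  done
  return 0
}

# Example usage (commented out so the sketch is safe to run as-is):
# mount -t nfs 192.168.1.50:/mnt/pool/media /mnt/remotes/truenas-media
# wait_for_mount /mnt/remotes/truenas-media 180 && docker start plex sonarr radarr
```

Hooked into something like the User Scripts plugin at array start, this would hold the containers back until the share is really there instead of relying on a fixed delay.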
  5. Great, well maybe I can reload that box with the AMD. What kernel version should I run?
  6. Wow, first off this post explains so much it's hard to take it all in; it took me a few reads to figure out what all is wrong with my setup. Second, I was looking to get some better speeds. I have always liked the idea of using HBA cards versus the standard SATA ports off a motherboard, at least for my spinning disks; for SSDs, I'm not sure the speed difference matters much. But lately I've run into a new speed issue with a different CPU and motherboard. I was thinking my old LSI 9211 cards were just too slow or something, so I searched eBay for a small upgrade and got 9300-16i cards for not much more than 9211 cards. With them able to do 12Gb SAS, I figured it might be closer to true 6Gb SATA 3 speeds. Now, if I use this card in my AMD 3400G on an X470 chipset board, the speeds are actually terrible versus using it in my Intel B250 Pro with an i7-6700. Both had the same RAM amount and speed, but performance on the AMD was terrible regardless of which PCIe slot was used. I'm starting to think the 9300 firmware might be the issue; I'll have to check the firmware version when I get home, but I'm wondering how much impact that would have on speeds. Also, should I have the parity drive on the HBA with all the other spinning disks, or should I put it on the motherboard's SATA ports? I have 2 cache drives on SATA ports and right now the HBA has all 3 other drives; I'm using 3 drives per channel on the 9300-16i's ports 1 and 3. Speed on the Intel seems OK, I get 200 to 212 MB/s, but on the AMD I can't break above 190 MB/s and average 165 to 175 MB/s at best. I would think the AMD would be better, because the X470 chipset is much newer and I'm using the main PCIe slot to get 3.0 x16 or so.
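For narrowing down whether it's firmware or the slot, a couple of quick checks could look like this. The device name is a placeholder, `sas3flash` is Broadcom/LSI's flash utility for SAS3008-based cards like the 9300-16i, and the little helper just turns hdparm-style "MB in seconds" figures into MB/s for comparing runs between the two boards.

```shell
#!/bin/bash
# Hypothetical quick checks; /dev/sdb is a placeholder for one array disk.

# Firmware and BIOS versions on a SAS3008-based HBA (requires sas3flash):
# sas3flash -listall

# Raw sequential read from one disk, bypassing the page cache:
# hdparm -t /dev/sdb

mb_per_s() {
  # Convert "N MB in S seconds" into an integer MB/s figure,
  # so runs on the AMD and Intel boards can be compared directly.
  awk -v mb="$1" -v s="$2" 'BEGIN { printf "%d", mb / s }'
}
```

Running the same disk through the same test on both boards, and comparing the MB/s numbers with identical firmware, would at least separate a firmware problem from a chipset/PCIe one.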
  7. That's a good point about electricity. But yeah, a good quality motherboard with 6 or 8 SATA ports is a good idea; the newer chipsets handle the SATA I/O better these days anyway. Like I said before, I usually build a lot of my servers out of leftover retired parts, so my experience with some of the newer stuff could be a little outdated. I just switched out my old 9211-8i HBA for a newer SAS3008-based 16i card. I was getting slower I/O speeds until I reshuffled the drives onto different channels on my AMD motherboard; on my Intel board I could use it any way I wanted and got good speeds. So it's either that, or my backup box has some SMR drives and I didn't know.
  8. I personally don't use PCIe SATA cards; I use old LSI 9211 cards flashed to IT mode for my SATA drives. SATA can't go above 6Gb speeds on spinning disks anyway, and I've found that with those cards the drive I/O is better when every drive is in use. For example, if you make a share that spans 3 drives, then Unraid opens, reads, and writes I/O on all 3 drives, and IT-mode cards IMO seem to handle that better. Doing that plus adding 10GbE networking into your mix means you'll need 2 PCIe slots, and I personally just use older Intel CPUs with embedded graphics. As for your question about Plex, 4K, and media streaming: you'll most likely get better advice from others here, but that's more or less all I use my stuff for, and what I've noticed is it depends greatly on how many sessions are running at once. I live in a condo with limited room, so my old Intel i7-6700K with 32 GB of RAM runs both TVs and 4K playback just fine. But if my friend from work plays one of my 4K movies (I stopped sharing those), then I might start to see some playback issues. I don't have another TV set up locally to see how it would do with 3 local connections, but I have to assume it might put a strain on things. So I would say if you plan to run more than two 4K sessions, you might consider a graphics card and ignore the drive I/O speed differences. I found motherboards with more than 2 PCIe x2-and-above slots way too expensive just to get better I/O, but I'm an older guy that likes to run cheap and let my wife and kid drain my wallet. So the main build you're suggesting would be great, only a little light on RAM for me. I like to run a minimum of 32 GB because I run all the regular Dockers for auto-grabbing TV and movies (Sonarr, Radarr, SABnzbd, and Plex); 32 GB might be a bit of overkill sometimes. I also run three 1TB SSDs for caching: two 1TB drives in a mirror for the Dockers (NVMe/M.2 SATA), and one 1TB SATA cache pool I use for download caching.
I don't mirror that one because it's just downloading, unpacking, and moving the files over to the array for playback. I buy cheap SATA cache drives for it because if I overdo the wear and tear, it's a cheap replacement. When I started they were almost $100 each; now I can get them for under $50 to $55. Not worth mirroring and burning out two of them.
  9. If I'm not going to go with the ZFS approach and just use the Unraid array as-is, should I format the drives as XFS instead of BTRFS? I know the defaults use XFS anyway, but I'm not going to play with the snapshot plugins or any of that stuff, and I'm thinking the speed on XFS might be a little better. Oh, also: after I format all the drives and before I have any data or shares, do I need to let the system do a full parity check? I mean, all the drives are blank and have been wiped several times over.
  10. Yeah, but it's not just Unraid. What I noticed is that ZFS on BSD is actually a lot faster than ZFS on Linux. How I did my comparison is a little off, but I used the same hardware and the same files I back up: running my backup to a TrueNAS Core build was 30% faster than running the backup to TrueNAS SCALE (TrueNAS SCALE is iXsystems' Linux build of TrueNAS). Anyway, I was thinking I could get somewhat similar performance if I ran my ZFS array inside a TrueNAS Core VM with passthrough, but I would still need to pass it a lot of resources to make it close to my test, because I was comparing against a running backup box whose hardware is almost equal to my primary, only about 16 GB less RAM. I kind of want to build a new backup box, but I can't decide on a motherboard and RAM for the amount I want to pay. Well, this is getting off my topic now. It sounds like I do have an option, and I already have the Pro license on my main; this was going to be for my backup box.
  11. Well, I think after reading a little bit I answered my own question. I need the higher license because I can't start VMs without starting the array, and from what it says, all attached drives count as connected when I start the array. So even if I pass them through, the drives are connected, and I have to start the array to get the VM manager service; the passthrough only happens after the VM manager service starts. So yeah, I would need at least Plus or better.
  12. This might sound like I'm trying to cheat the license system, but my question is: if I only want to set up 1 drive plus a cache drive for apps, and pass all my other drives through to a VM for ZFS storage (I might have 7 drives going to that), can I just get the low 6-drive license since I'm passing all the other drives to a VM? I just had a really interesting experience today with ZFS and transfer rates; they ran like 50% better than my current Unraid setup. But I like Unraid for the app support and stuff, so I'm thinking of using TrueNAS Core for the shares and array.
  13. Yeah, looks like this will not work for what I'm looking for. I guess I just need to play with setting up native ZFS in Unraid. Space Invader One has a video on it; I guess I just need to watch it a few times and figure it out a little.
  14. Has anyone done a video on how to use this, or on how we could use it as a rough equivalent to ZFS snapshots? I changed my whole array over to BTRFS so I could try to get snapshots going, mostly because snapshots are the only thing Unraid doesn't have that other NAS servers using ZFS do. TrueNAS has a great setup for the array, but everything else about it is really bad; app and VM setup is terrible. TrueNAS SCALE showed some promise, but it's way too much in its infancy and needs some basic interface support to help control memory usage, etc. Not to mention the Docker support kind of forces people onto Kubernetes, and the network controls are lacking. And TrueNAS Core is FreeBSD, and there's all kinds of wrong with that.
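With the array disks on BTRFS, the rough equivalent of a scheduled ZFS snapshot could be sketched like this. The disk path and share name are placeholder assumptions, and the actual `btrfs` commands are left commented since they only work if the share directory is a BTRFS subvolume.

```shell
#!/bin/bash
# Sketch of a BTRFS snapshot job; /mnt/disk1/media is a placeholder path
# and only works if it is a subvolume on a BTRFS-formatted disk.

snap_name() {
  # Build a timestamped snapshot name, e.g. media_2024-01-31_0400.
  echo "${1}_$(date +%F_%H%M)"
}

SHARE=/mnt/disk1/media
SNAPDIR=/mnt/disk1/.snapshots

# mkdir -p "$SNAPDIR"
# Read-only snapshot (-r), same idea as a ZFS snapshot of a dataset:
# btrfs subvolume snapshot -r "$SHARE" "$SNAPDIR/$(snap_name media)"
# List what exists so old ones can be pruned:
# btrfs subvolume list /mnt/disk1
```

Run from a scheduler, something along these lines would give point-in-time, read-only copies per share, which is the main thing ZFS snapshots provide; it doesn't replicate ZFS's send/receive or shadow-copy integration.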
  15. Is there a forum for Unraid OS upgrades? If not, any special steps I should take, or other problems I might run into doing an update?
  16. This is crazy. I got some extra hardware and wanted to set up a second Unraid box to test a 4-drive server with all 2TB SSDs. Anyway, for the life of me I can't make a working bootable USB drive, and I can't figure out what I'm doing wrong.
  17. I followed this to set up pfSense in a VM today, and it seems good; it saved me from using an old computer as a router. But I'm wondering: how do I pass the LAN network from this VM back to my Unraid server? I got a 4-port 10GbE interface card to use for this VM, and now I want to get 10GbE networking going for my Unraid server and Dockers. Is there an easy way to do that? If not, can I maybe get one of the other interfaces passed back to the Unraid server and configure that? I'm thinking it's not going to be easy to do this.
  18. Same here. I bought into Unraid too and was shocked this was not already done or included. I also would like to see native ZFS as an array option; that would make this system so complete. I know many would be expecting some sort of conversion tool too, but hell, if I could get those 2 things I'd happily suffer through rebuilding and restoring my data.
  19. Both of those cards are going to run the same; I would take the cheaper of the 2. Also, only use those for the data array, not the parity or cache drives. It's best to use the onboard SATA 3 ports for SSDs, and you want good I/O for parity too. You can put data drives on the add-on cards; they'll be a little slower than your onboard ports, but since you're getting just the 2-port models you should be fine. I found out this weekend I had my drives set up all wrong, and fixing it sped up the parity check on my 28TB array: it used to take 1.5 days and now I have it down to 17.5 hours. I really should have looked into the forums and the manuals a little more where they talk about SATA ports and add-on cards. It looks like you did more homework on that than I did.
  20. I did something stupid. I bought a new motherboard for the CPU I had, and I chose it based on how fast I could get it, not on what is most compatible with Unraid. Anyway, I got an MSI Z390-A Pro board with an 8th-gen i5 and 32 GB of RAM. I'm using onboard video, and I just can't reboot this thing without my video and keyboard plugged in. It's really frustrating, because I've also been testing which storage card I like in my system and seeing what gives me better speeds, etc. Anyway, if any of you know any extra BIOS settings I can tweak to help this boot more easily, let me know. If I don't hear anything, I guess I should start looking for a different motherboard again.
  21. Well, here is the thing: when I was on TrueNAS I was using a small SSD for my jails, a single 240 GB. I noticed that when I used an SSD for applications instead of the array, things seemed to run a lot better. When I switched over to Unraid I had that same question, but I found out pretty quick why a cache drive helps. Unraid is not like TrueNAS running a ZFS pool: you need to keep a good schedule for parity checks, and the disk I/O on the array is not the same as a zpool. I tend to see Unraid run a little slower because of the way the disk I/O works with parity. I mean, if you just have Plex and 1 or 2 TVs, yeah, it could be fine, but if you want to watch a nice big fat movie file while your system is in the middle of a parity check, you'll wish you had a cache drive. Also, I find that when I used a cache drive, Dockers used less memory. Not sure how that is actually related to anything; it must have something to do with the app having to cache a bit more because of I/O. I'm not talking about big memory savings: I noticed my Plex app would use about 312 MB sometimes, and when on the array disk it would run about 740 MB. But like I said, you can spend a little bit or a lot. If I knew you in person I would give you a 240 GB SSD, because I have a few of them and they are pretty cheap, but it would cost me more in time and postage to mail one to anyone. I also agree with ChatNoir: check out Space Invader One's videos. I followed his for a few things I'm doing on my box. Once you see all the stuff people can do and how really simple it is to set up, you might actually want to do more than what you're planning and might want to rebuild or spend a few dollars. But I would check it out, because it really is a lot easier to get help and do stuff with Unraid. I even tried OpenMediaVault before choosing Unraid, which I liked way better than TrueNAS.
But to be honest, I didn't like its Docker setup; I felt like it was missing a lot of things, and I couldn't figure out for the life of me how to assign a separate IP address to the one Docker I set up. I really like to separate some things like that. Unraid made the whole thing so easy I couldn't go back and try to make it work; I was already hooked.
  22. I switched over from TrueNAS almost a year ago now; you will like how much easier stuff is in Unraid. After 30 days you have to pay and get a regular license, but it's worth it for all the support you can get on the site. The only thing I miss from TrueNAS/FreeNAS is the snapshots and being able to set up shadow copies on my shares; that is a big letdown for me. But going by what you're saying, if you're just doing file shares and Plex, you should be OK with what you listed. And just remember, transcoding won't really come into play unless you've got a big fat 5 or 10 GB 4K HDR or 1080p movie and you're throwing the playback to a device that can only do 720p. Myself, all my TVs have the Plex app and can do 1080p or more, so I'm not worried much about transcoding. But again, if you're downsizing the playback to 720p a lot, like for Wi-Fi tablets, just start getting more 720p files and use a different share or library; that's what I do for the remote playback people I share my Plex with. I figure if they want something better they can go get their own server and set something up. But I do suggest you get a cheap PCIe SATA card to get more SATA ports, like the one linked above; I've seen that one on Amazon for like 20 to 25. And it would be nice to have at least a small SSD for cache/Dockers; a 240 GB one should be under 40 dollars for an okay drive. But if you're going to run like 4 or more transcode sessions at one time, you might want more than 240 GB. You don't have to spend a lot unless you want to; myself, I'm always using hardware that's several years old for my NAS testing projects. Once you get this running you won't miss TrueNAS, that's for sure. The guys on their forums can be real jerks, and they have an over-excited habit of telling people they always need ECC RAM; I got extremely tired of hearing that. And FreeBSD, sorry to say, sucks; with Debian Linux it's so much easier to get help if you're really stuck on something.
  23. OK, I'm thinking I might want a little more power for my Unraid server. Not really sure I need it, because what I'm running is pretty small, at least it seems small. But I really don't want to reload everything and start from scratch. So my question is: can I just replace the motherboard, processor, and memory? I really want to avoid reloading all my Dockers, because to be honest I felt like I spent almost a whole day just getting Let's Encrypt up and my Bitwarden vault. I really need to thank Space Invader One one day; his videos were extremely helpful. I just had to get a domain name and follow his steps for the reverse proxy and stuff. I even used one of his recent videos to set up a Minecraft Docker for my kid and her friends (but I turned it off 2 weeks ago to see if they even use it, and no kids have screamed at me, so it must be all good). I'm not even sure if I'm running too much for the current hardware. My current box is older hardware, an older AMD 10-core with an APU, so 6 cores to the CPU and 4 to the GPU. I've got Plex, Sonarr, SABnzbd, Radarr, Cloudflare-DDNS, SWAG, and Vaultwarden running with a 28TB array. I don't see the load get much over 35%, but I have noticed the CPU starting to run a little hotter: last week I was at 35 to 40C and now I'm hitting the 50s after adding my second parity drive back. I have an Intel i7 7th gen and motherboard I'm not using, so I'm thinking I should switch to that. I'd still only have 32 GB of RAM to use, but at least I could go to 64 GB by buying new RAM if needed. Anyway, will it be as simple as just switching it all out and booting up?
  24. OK, this is starting to make sense. So at this point I would really have to blow away the array, start over, and then restore data. I think I do have a few ideas on how to maybe do it without redoing the entire array, going 1 disk at a time, but I'm still not sure that's what I want to do yet.
  25. Well, if parity doesn't help with data restore then I don't understand its purpose. I mean, if I lose a drive, do I still have to restore my data manually? If that's the case, I wasted my money buying this and I will swear off this product going forward. I really don't understand what purpose this product serves in that case.