TheSnotRocket

Members
  • Content Count: 11
  • Joined
  • Last visited

Community Reputation: 1 Neutral

About TheSnotRocket
  • Rank: Member

  1. Just an update on this. I ended up with this config:
     • Chassis: SuperMicro 12-bay 2U with SAS3 backplane
     • Processors: 2x Xeon E5-2620 v3 2.4GHz 6-core CPUs
     • Memory: 96GB (12x 8GB) DDR4 registered
     • Storage controller: LSI 9311-8i (IT mode) PCIe
     • Network card: Intel X520-DA2 dual-port 10GbE SFP+ PCIe adapter
     • Power supply: 2x 1000W 80+ Platinum
     I added the Intel SSDPE2KE076T801 SSD from above, running on that StarTech adapter. I'm able to write my full backup set to the SSD at full speed (sustained 930+ MB/sec) and then blow the data out to the SAS drives at a whopping 135-ish MB/sec sustained. Super happy camper. The primary box is a 22-disk + 2-parity (all SAS) + 2TB SSD server, 215TB usable, for home use; this second one is 5+1 (3 data disks + 2 parity, 30TB usable, all SAS currently) + 1 cache drive for business use. I'll probably throw 5 more 4TB SATAs into this box as I pull them out of the QNAP it's replacing. Thank you, folks. This makes my second unRaid config. Both have been mostly super easy.
  2. Thanks for the reminder on that; I guess I'm good with that. Ten 10TB drives with 8 usable gives me 16 backups of my SQL server (4TB), and I only keep 5 full weeks. My other backups are also large single files, just smaller (a couple hundred GB each, all the way down to 50GB). I can always add more drives to the array if I need to.
  3. I'm aware of the cache limitations of consumer NVMe's; the Sabrent Q was my first thought as well. So, maybe ideally, I'm looking at a pair of these: http://www.acmemicro.com/Product/16580/Intel-NVMe-7-6TB-Solid-State-Drive-SSDPE2KE076T801-DC-P4610-Series-U-2-15mm-3200-MB-s-Read-3D2-TLC-NAND?c_id=622 with an adapter like this? https://www.amazon.com/StarTech-com-U-2-PCIe-Adapter-PEX4SFF8639/dp/B073WGN61Y?th=1 (edit) Oh look, they even have a picture of a similar drive on that adapter. I can't find any info on the cache levels or sustained write speed for the Intel drives...
  4. On the topic of file sizes: if I run MinIO, I can control the upload chunk size, up to 5GB I think. Same if I use SMB. I expect to be writing one very large file; my bigger SQL server backup is 4TB compressed, currently one file for the weekly plus smaller daily chunks. I'm game for stacking the backup jobs one after another if that's the preferred method to let the SSD rest a bit: maybe run that 4TB backup job, let it settle, then use the unRaid mover service to get the file off the SSD before running the other backup jobs? (Rough sketch of the chunk-size idea after this post.) It seems an HHHL AIC would be my preferred installation method. The PBlaze looks badass, but inventory seems nonexistent.
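A minimal sketch of the MinIO chunk-size idea above, assuming the MinIO Python client and an S3-style multipart upload; the endpoint, credentials, bucket, and paths are placeholders, not details from this thread:

```python
# Upload one very large backup file with an explicit multipart part size.
from minio import Minio  # pip install minio

client = Minio(
    "unraid-host:9000",        # hypothetical MinIO endpoint
    access_key="BACKUP_USER",  # placeholder credentials
    secret_key="BACKUP_SECRET",
    secure=False,              # plain HTTP on a trusted LAN
)

bucket = "sql-backups"
if not client.bucket_exists(bucket):
    client.make_bucket(bucket)

# part_size caps each multipart chunk; 5 GiB is the S3/MinIO maximum,
# so a 4 TB object would be split into roughly 800 parts.
client.fput_object(
    bucket,
    "weekly/sqlserver-full.bak",
    "/mnt/backups/sqlserver-full.bak",
    part_size=5 * 1024 * 1024 * 1024,
)
```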
  5. Looking for a bit of a gut check here. I need a high-performance storage solution; this unRaid box will do nothing but MinIO or maybe SMB shares for backups in a corporate setting. Inbound, I have 3 servers (all SSD), each on a 10G network. Currently, the total nightly diff backup is about 1TB, and full weekends are 7TB-ish. My weekend backup windows are currently 35+ hours on a gigabit network. The plan is a 4-port 10GbE NIC (2 ports in use) in unRaid, running ten 10TB SAS He10 spinners on a 6 or 12Gb backplane. My question is real speed, on the cache side and saturation: am I wasting my time and money on dual 8TB NVMe's or SAS SSDs? I'm trying to get my backup windows down to a much more reasonable timeframe and would be fine loading up the cache for speed and then offloading onto the spinning disks. What are the real-world thoughts on actual speeds, both for 6 and 12Gb backplanes for the spinners, as well as 6 or 12Gb SAS SSDs, or an all-out NVMe riser? (Back-of-the-envelope window math after this post.) Thanks for any input.
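A quick back-of-the-envelope for the backup window above. The line rates are generic assumptions, not measurements from this thread; only the 7TB full and the 35-hour window come from the post:

```python
# Rough transfer-window math for a 7 TB full backup at various rates.

def window_hours(size_tb: float, rate_mb_s: float) -> float:
    """Hours to move size_tb terabytes at rate_mb_s megabytes/second."""
    return size_tb * 1_000_000 / rate_mb_s / 3600

full_backup_tb = 7.0

# ~112 MB/s is a common real-world ceiling for saturated gigabit;
# ~1100 MB/s for a single saturated 10GbE link; 930 MB/s matches the
# sustained SSD write speed reported later in this thread.
for label, rate in [("1GbE wire", 112), ("10GbE wire", 1100), ("NVMe cache", 930)]:
    print(f"{label:>11}: {window_hours(full_backup_tb, rate):5.1f} h")

# The observed 35+ h window works out to ~56 MB/s effective, well under
# gigabit line rate, so something besides the wire is the bottleneck.
print(f"observed   : {7_000_000 / (35 * 3600):.0f} MB/s effective")
```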
  6. san-diagnostics-20200629-2205.zip Here are my diags; just trying to figure things out. SABnzbd docker container: if I'm in Sab's UI, there's no internet access, but if I drop to the docker shell, I can ping places like yahoo.com. Thoughts?
  7. So... I have this mostly working. My only remaining issue is that my docker networks aren't talking to the internet. I can hit some docker containers from my local LAN.
  8. I posted this over on Reddit, but figured I'd ask here as well. I currently have a physical pfSense implementation with dual gigabit WANs and a single gigabit LAN; the LAN is oversaturated (2 into 1).
     pfSense-on-unRaid VM question. Config: pfSense in unRaid, physical 4-port gigabit NIC, dual gigabit WAN connections. If I build a multi-gateway pfSense setup, I have a question on routing: how can I keep internal traffic internal to my unRaid server? Ideally, I'd like a NIC in pfSense that sits on my Docker network without going out to a physical switch and back in on a separate interface; however, I only have br0 (along with my physical cards). An alternative would be to keep my LAN traffic within pfSense and only go out if needed. Ideally, I only need full bandwidth within my unRaid build (dual WAN in, to the MinIO docker) while still allowing the 3rd NIC to communicate with the rest of my network (gigabit is fine there). I have the virtual NIC configured in pfSense on my local LAN, but I can't seem to find a way to tag it into the proxynet/docker network. (A rough sketch of the shared-docker-network piece follows this post.) Thanks for any input. Would dual WAN (physical) into pfSense, with a virtual NIC in unRaid off my bond0 interface, work? Again, the goal is to not send traffic out of a physical interface just to come right back in on another (or the same) physical interface.
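This doesn't address the pfSense/br0 routing itself, just the docker side mentioned above: a sketch of creating a shared user-defined bridge network and attaching an existing container to it with the Docker SDK for Python. The network and container names are placeholders:

```python
# Create (or reuse) a user-defined bridge and attach a container, so
# members of that network talk container-to-container on the host.
import docker  # pip install docker

client = docker.from_env()

try:
    proxynet = client.networks.get("proxynet")
except docker.errors.NotFound:
    proxynet = client.networks.create("proxynet", driver="bridge")

# Attach the hypothetical MinIO container; traffic between members of
# "proxynet" then stays on the host's bridge, never leaving the box.
minio = client.containers.get("minio")
proxynet.connect(minio)
```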
  9. Thanks for the reply, Squid. I'm an idiot. I was fighting so hard with the Ubiquiti plugin on 8080 yesterday that the now-obvious answer escaped me. Just needed more coffee and a nap.
  10. Hello folks... docker question. I have Sab running flawlessly; it's fast and doing everything it should... except that no matter what I do with my config, I can't get docker to launch it on anything other than 8080. What am I missing? I configure docker for, say, 8800, restart the docker container, and it (the actual Sab UI) is back on 8080.
      Host port 1: 8800, container port 8800
      Host port 2: 8900, container port 8900
      Launch it, and the app is on 8080. Change the config in Sab to 8800, restart Sab, and it's on 8800. Restart the container, and host ports 1 and 2 still show 8800/8900, but the app is back running on 8080. What am I missing? (Sketch of the host-vs-container mapping after this post.) Thank you.
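For what it's worth, a hedged sketch of the mapping that usually resolves this pattern: leave the app on its internal default port (8080) and remap only the host side, shown here with the Docker SDK for Python. The image and container names are illustrative:

```python
# Map HOST port 8800 to the app still listening on 8080 INSIDE the
# container; changing both sides is what leaves the app back on 8080.
import docker  # pip install docker

client = docker.from_env()

# ports={'container_port/proto': host_port}: browsing to host:8800
# reaches the app bound to 8080 inside the container.
client.containers.run(
    "linuxserver/sabnzbd",  # illustrative image name
    detach=True,
    name="sabnzbd",
    ports={"8080/tcp": 8800},
)
```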
  11. Not sure if I missed this or not, but can I back up to a remote SMB share? I added a remote SMB share as a mount point but I'm not seeing it in my dropdown.