1812

Everything posted by 1812

  1. I used an H220 to access mine, when I had one, with 4TB disks in it.
  2. I don’t know if a 470 will work or not, but I was able to resolve black screens on 2 580s by dumping the ROM and passing it through with the card.
  3. it’s a 10-dollar “try it and find out”! If it stopped after the fan, then heat would appear to be the issue.
  4. I'll just stick them on the top and on the side (X marked in the pic below). They are operating within spec, though towards the top end. Worst case, I'm just making them more "Mad Max" themed in appearance; best case, I might get some longevity that I would have otherwise lost. If I have space in the switch, I may add some to the SFP+ ports as well, as shown in the second picture, denoted by a circle. Too much? Maybe, but I'm also trying to keep myself busy, and I don't see how it could hurt.
  5. amazon has a few: https://www.amazon.com/Heatsink-Cooling-Stepper-Regulators-Raspberry/dp/B07V9XDJNF/ref=sr_1_4?keywords=heat+sink+kit&qid=1585487921&s=electronics&sr=1-4 https://www.amazon.com/gp/product/B082RWXFR2/ref=ppx_yo_dt_b_asin_title_o00_s00?ie=UTF8&psc=1 The flat area on my 10GbE RJ45 transceiver is 15mm x 13mm on top and 15mm x 12mm on the side, so I looked for kits that would have something to fit there. Shortly after posting that update, I reconsidered which kit I bought, cancelled the 7 dollar one and went for the 16 dollar 100-piece kit because... well, "sheltering in place" is going to become "put heat sinks on everything to fight boredom."
  6. Over a year later and I've posted an update in the top post with my experience.
  7. updated original thoughts/observations after 11 months of use.
  8. because powerline adapters are inconsistent and suck. You can't think of them as an even, flowing pipe of data; throughput is always uneven and can be influenced by interference from outside your house. For now, I'd just go Mac to switch to server and work on that. Once you have acceptable speeds and are maxing out gigabit (assuming you're writing from an SSD in the Mac to the SSD cache in the server, and vice versa), then start adding back in the other networking components. I'd normally suggest starting by going direct from Mac to server and cutting everything else out, but that implies you have a spare network port on the server and can set up static IP addresses in the same range on both devices. That's a little more than basic, so it's better to focus on ease of testing first.
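A minimal sketch of that direct Mac-to-server test, assuming iperf3 is installed on both machines; the interface names (eth1, en0) and the 10.0.0.x addresses are placeholders you'd substitute with your own:

```shell
# On the (Linux/unRAID) server: give the spare NIC a static address and listen.
# eth1 and 10.0.0.0/24 are assumptions -- adjust to your hardware.
sudo ip addr add 10.0.0.2/24 dev eth1
iperf3 -s

# On the Mac: put its port in the same range, then measure raw link throughput.
sudo ifconfig en0 inet 10.0.0.1 netmask 255.255.255.0
iperf3 -c 10.0.0.2
```

If iperf3 shows the link running near line rate but SMB copies are still slow, the bottleneck is the disks or the protocol settings rather than the cable.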
  9. this is your problem. Even though powerline adapters are "rated" at, or even show as, 1Gb, they rarely deliver it. I've put them on a clean, short electrical run (2 sockets on the same breaker in the same room) and never saw more than 20-30MBps. They should never be used as a critical link, nor expected to deliver anything more than a crawl. Please detail what your cache disk(s) are and what else is running on the server. I use macOS and run 4-500MBps transfers to my server. If you're connecting ethernet cable-router-ethernet cable-server and still getting slow speeds, you may need to look at which SMB version you're connecting to the server with, and also consider disabling SMB signing as described here:
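For reference, the client-side signing tweak on macOS is a one-line config file; a sketch using the standard /etc/nsmb.conf path and signing_required key (verify against your macOS version before relying on it):

```shell
# Disable SMB client signing on macOS (delete /etc/nsmb.conf to revert).
printf '[default]\nsigning_required=no\n' | sudo tee /etc/nsmb.conf
```

Reconnect to the share afterwards so the new setting is negotiated.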
  10. Squid is 50! Have a great one!
  11. mine was set that way. I just left it alone for a bit and it eventually figured itself out. all good now!
  12. this helps recognize the gpu, thanks. but now I'm seeing this error (it repeats identically):

      16:03:24:WU01:FS01:Connecting to 65.254.110.245:8080
      16:03:24:WU01:FS01:Assigned to work server 155.247.166.220
      16:03:24:WU01:FS01:Requesting new work unit for slot 01: READY gpu:0:GP107 [GeForce GTX 1050 LP] 1862 from 155.247.166.220
      16:03:24:WU01:FS01:Connecting to 155.247.166.220:8080
      16:03:24:ERROR:WU01:FS01:Exception: 10001: Server responded: HTTP_SERVICE_UNAVAILABLE

      problem with F@H?
  13. even with that, mine only uses the cpu. logs show:

      15:52:30: GPUs: 0
      15:52:30: CUDA: Not detected: cuInit() returned 100
  14. No. Don’t be a cheapskate.
  15. do you even Google, bro? Bing? Before you ask, search.
  16. I run virtualized Sophos UTM on my main server in my office, which dumps out gigabit internet access to a MikroTik CRS305-1G-4S+IN 10GbE switch (also in the office). That then runs back to my main server via a 10GbE DAC to a Mellanox ConnectX-2, and 2 DACs run to my workstation/backup server into a Solarflare 2-port card, which is split between backup server access and a work VM with graphics output for video/photo editing.

      The last port on that switch has a copper transceiver which connects to a cat 6a cable (I ran 2 cat 6a drops into every room of the house except the office, which has 4). That cat 6a connection then serves as a trunk line running to a CSS326-24G-2S+RM switch in the pantry for the house LAN; it has 2 10GbE ports and 24 1-gig ports. The rest of the devices in the house are 1Gb connections, but I went ahead and ran 10GbE to future proof, as the cost was not much more vs cat 5e. In a year or two, I’ll upgrade the house switch to have more 10GbE ports if needed. But for now, any one connection (or several) on the LAN can saturate its link to the server and still not bog down internet access, and vice versa.

      I could have simplified a little by having the Sophos firewall use the 10GbE connection in the server through a virtual bridge, but I didn’t. For some reason I like having separate hardware for the VM without direct access to the server itself. Plus it also keeps internet traffic off that interface, minimizing latency for gaming and maximizing data throughput.
  17. My physical disks can hit 165+MBps on reads, so when moving large video projects and photo collections off the array, it’s 50% faster than gigabit. Then on the way back, it hits 400+MBps to the SSD cache, making it 3-4x faster. Eventually I’ll go RAID 10 on the cache and it should hit 600MBps. Or maybe I’ll go NVMe and try to saturate the connection (if I can free up a PCIe slot). It’s not something everyone needs; if you only move 1-3GB, it’s not really worth it. But with 200GB video project files, it helps save some real time. (But it is fun to play around with.)
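Those speedup figures check out as rough arithmetic; a sketch, assuming ~118 MB/s as the practical ceiling for gigabit Ethernet after protocol overhead:

```shell
# Compare the quoted transfer rates against a ~118 MB/s gigabit ceiling (assumption).
awk 'BEGIN {
  gigabit = 118                                            # MB/s, practical 1GbE limit
  printf "array reads : %.1fx gigabit\n", 165 / gigabit    # ~1.4x, i.e. ~50% faster
  printf "cache writes: %.1fx gigabit\n", 400 / gigabit    # ~3.4x
}'
```

At 600 MB/s a RAID 10 cache would be ~5x gigabit, which is where a 10GbE link (with roughly a 1,100+ MB/s practical ceiling) stops being the bottleneck.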
  18. I have 2 different mikrotik switches and they run without issue in switch mode. I've never had the need to try them as a router, so I can't speak to that user experience, though I have dug through menus and it appears fairly comprehensive.
  19. I’ll just leave this here... I’ve personally run several 380 G6 servers and never had NIC issues. Either it’s new to 6.8.x or you’re not unRaiding right.
  20. I used to run pfSense in a VM and also used OpenVPN (the built-in one). Anytime I didn’t follow a guide to set it up, I would always screw up something, usually with firewall certificates, or user certificates not linked to the correct server cert. The same goes for port forwarding; it's easy to misconfigure, as the labels and descriptors are not always intuitive. So my first question would be: did you follow a guide or go commando in setting up the VM? Same question with the port forward? I currently use Sophos in a VM, so I don’t have anything I can directly share, but I think my older pfSense VM is still on the server if you want me to look at some settings. Also, your IP is visible in the second picture.
  21. Why? Don't want the authorities seeing what you've been doing?