Jlarimore

Members
  • Posts: 17
  1. Ooh, got it! Pinging it made me realize my blunder: the two computers were on different networks.
  2. Hmm. I tried dropping all pretense of security in Chrome and I still get nothing but a blank, white webpage. Is there any way I can verify this thing is actually on my network, like figure out its IP address? (A small lookup sketch follows this post list.) Update: Ooh, I did get it to change the error message. It now says "This site can't be reached. DNS_PROBE_FINISHED_NXDOMAIN."
  3. Yes. I get to a login prompt and I believe I just typed "root" and it seemed happy. I did get to the blue screen and tried both safe mode and normal boot. Both GUI options demonstrated identical behavior. Good to know that I don't need 6.10 to be able to use the web interface. Now I just need to figure out how to access it from my other machines. http://tower.local just brings me to a blank web page that says "not secure." Should I learn enough Unix to figure out what its IP address is?
  4. Alright. Not having a very fun time trying to get going. Just a little background: I'm not a confident Unix user; my background is almost entirely DOS/Windows (although I do have experience with Home Assistant). The sooner I can get into a GUI, the better. New system with a 12900K + ASUS Z690-I motherboard. First problem is that the system will not boot into the GUI. A bunch of bootup configuration text scrolls down and then I am dumped to a blank screen with a flashing cursor. OK, this seems to be a common problem, maybe something to do with running on an Intel integrated graphics chip. Alright, I guess I will run the thing headless straight out of the gate. It seems like I should be able to connect to the machine from my Windows machine and configure it through the web client. But I am having no luck connecting to it. Any tips on how to get going? The guide says to connect via the web client, but it also says "works on version 6.10 onwards." How do I get version 6.10? It seems like 6.9 needs to be running in order to download a prerelease. I'm stumped.
  5. I guess if all of the input/output channels out of the computer are slower than 1GB/s it doesn't much matter for most use cases. In my case it looks like my fastest link is wireless (Wi-Fi 6), which I think tops out a little over that rate. I guess I will debate whether it would be worth using one drive as a cache disk.
  6. Thank you for tempering expectations. I do hate letdowns. I'm assuming those are Gen 3 NVMe drives, so roughly 3.4GB/s reads are to be expected. Yikes. So I'm only reading at like 35% of expected speeds. (Some back-of-the-napkin numbers follow this post list.) Are you telling me it might actually be beneficial to use one drive as a cache drive even if my cache and array devices have identical read/write speeds? Seems silly. But, okay.
  7. Sweet. I think that's probably the route I will eventually go. I think I'll slowly collect these expensive 8TB drives and once I'm about to exceed 4 of them, I'll switch to a PLX board and slowly go all the way to 10 drives with 1 or 2 being parity. Hopefully by that point a PCIe 5.0 variant of the PLX card will exist. It looks like that 8-drive 4.0 card would just narrowly fit in my Ghost S1 ITX case. That's a boatload of lightning-fast, well-protected storage in a tiny, tiny space.
  8. I had seen these types of cards before. I was trying to get by with direct bifurcation mainly because I assumed it would be harder to manage the drives if they were hidden behind a layer of drivers and software. Would information about the 8 drives be passed along to the machine? Like, could I keep an eye on individual drive temperatures and pool them however I see fit in Unraid? (A SMART-polling sketch follows this post list.) Before I lay down like a grand for one, I'm curious about the pros and cons of a card that has its own bifurcating logic on it. Obviously, one advantage is this could take me all the way up to 10 drives on the machine... And of course now I'm getting greedy and wanting a PCIe 5.0 one to double the bandwidth.
  9. So you're telling me to build a real server? I guess I could, but I fear the cost would have me so pot-committed that it'd be hard to ever update it. Cramming 6 NVMe drives into a tight ITX space (you know... flexing their advantage over magnetic storage) is a lot more challenging than I thought it was going to be. It looks like all the AMD APUs are limited to PCIe 3.0 for power consumption reasons. Poking around my BIOS, it looks like my 5950X could do this, but then I have no graphics output whatsoever... I hadn't considered that. Is it possible to build a computer with no graphics output at all? (Maybe swap the graphics card out for the NVMe card after getting Unraid set up and manage the whole setup remotely.)
  10. Well, I guess Alder Lake is out (unless I want to be limited to 4 drives instead of 6). That destroys this build idea. I was really hoping to use DDR5 RAM to get at least a little ECC protection. I guess the alternative is falling back on an AMD APU.
  11. I'm seeing some scary threads out there implying that the CPU may only bifurcate the x16 slot to x8/x8. I guess we'll see soon enough.
  12. So, my eventual goal is to build a mini-ITX NAS with 6x 8TB NVMe drives for storage (once the Rocket 8TB TLC drives release). I am well on my way to collecting the parts for this build, but I am starting to worry about PCIe bifurcation wonkiness that might limit the number of drives I can get to run simultaneously. As I understand it, NVMe drives require 4 PCIe lanes each. My plan was to build this on the ROG Strix Z690-I mobo with the 12900K processor (major overkill, I know). The main idea was to use a processor with onboard graphics to leave the x16 slot open for a 4x M.2 splitter card (ASUS Hyper M.2 Gen 4). I was expecting that with ASUS pushing bifurcation support so much, their two devices would almost surely be fully compatible (unable to tell from existing documentation). But with a little back-of-the-napkin math (tallied in a sketch after this post list), the CPU supports 20 lanes and I am asking for 24 lanes' worth of drives. Are the two M.2 slots built into the mobo sharing 4 lanes or something? It'll be another week or so before I can tell if this mobo supports x4/x4/x4/x4 bifurcation, and even if it does, I won't immediately have the drives to verify functionality. As I understand it, sometimes some of those bifurcated branches can be disabled at a hardware level if the lanes are being shared elsewhere. It also seems like AMD might be much more on top of bifurcation support than Intel. If that is the case, and an AMD build is going to be a lot less of a headache, I do have a recent X570 build I could easily pivot to be this machine (it would need a new CPU with integrated graphics): repurpose the 12900K to my main machine and turn the AMD machine into the NAS. Anyone know if the Intel build will work? If not, will the AMD?
  13. Thank you for the detailed explanation. I think I get it. I guess my question is: how much do parity calculations slow down writes to my parity-protected storage pool? Are they going to slow down writes from 6600MB/s to 6300MB/s, or from 6600MB/s to 300MB/s? (A toy parity sketch follows this post list.) If it's the latter, I can see how I might still want to use a mover. If the hit is minimal, why waste a drive on a cache, right?
  14. If I, knowing the risks, go ahead and make a 4x 8TB NVMe data pool, is there a reason to still use an NVMe cache drive? As I understand it, the write-speed bottleneck is going to come from computing/writing parity. I only have 6 M.2 slots to work with, so the more of those that go towards storage the better. Can't RAM act as the cache?
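
A minimal Python sketch for the "figure out its IP address" question in posts 2 and 3. It assumes the server kept the default tower.local hostname and that the web GUI answers on plain HTTP port 80; both are assumptions, and if the name will not resolve from Windows, the raw IP from the router's DHCP client list works just as well.

    import socket

    HOST = "tower.local"   # assumed default Unraid hostname; change if the server was renamed
    PORT = 80              # assumed HTTP port for the web GUI

    try:
        ip = socket.gethostbyname(HOST)              # resolve via mDNS/DNS
        print(f"{HOST} resolves to {ip}")
    except socket.gaierror:
        print(f"Could not resolve {HOST}; grab the IP from the router's DHCP client list instead")
        raise SystemExit(1)

    try:
        with socket.create_connection((ip, PORT), timeout=3):
            print(f"Something is listening on {ip}:{PORT} -- try http://{ip} in a browser")
    except OSError:
        print(f"Nothing answered on {ip}:{PORT}; the two machines may be on different subnets")

If the name resolves but the connection times out, that points at the network layout (different subnets, which is what post 1 eventually found) rather than at the server itself.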
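
The back-of-the-napkin numbers behind posts 5 and 6, written out. The link rates and the 3.4GB/s Gen 3 figure are nominal spec-sheet values rather than measurements, and the 1.2GB/s "observed" number is only inferred from the roughly-35% remark in post 6.

    GBPS_TO_GBYTES = 1 / 8                     # 1 Gb/s = 0.125 GB/s, ignoring protocol overhead

    wifi6_link_gbps = 9.6                      # theoretical Wi-Fi 6 maximum; real-world is far lower
    gbe_link_gbps   = 1.0                      # plain gigabit Ethernet
    nvme_gen3_read  = 3.4                      # GB/s, typical Gen 3 x4 spec-sheet read speed
    observed_read   = 1.2                      # GB/s, inferred from the "about 35%" remark

    print(f"Wi-Fi 6 best case : {wifi6_link_gbps * GBPS_TO_GBYTES:.2f} GB/s")
    print(f"Gigabit Ethernet  : {gbe_link_gbps * GBPS_TO_GBYTES:.3f} GB/s")
    print(f"Gen 3 NVMe reads  : {nvme_gen3_read:.1f} GB/s")
    print(f"Observed/expected : {observed_read / nvme_gen3_read:.0%}")

Even at its theoretical best, Wi-Fi 6 tops out around 1.2 GB/s, so a single wireless client would struggle to tell a 3.4GB/s array apart from a much slower one.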
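
On post 8's question about what a switch card passes along: a PCIe switch is transparent to the operating system, so each SSD behind it should still show up as its own /dev/nvme controller with its own SMART data, temperatures included. A rough sketch of polling that on the server, assuming smartmontools is installed and the script runs as root; the device-name pattern is just an example.

    import glob
    import subprocess

    # Each NVMe controller appears as /dev/nvme0, /dev/nvme1, ... regardless of
    # whether it hangs off the CPU, the chipset, or a PCIe switch card.
    for dev in sorted(glob.glob("/dev/nvme[0-9]")):
        out = subprocess.run(["smartctl", "-a", dev],
                             capture_output=True, text=True).stdout
        print(dev)
        for line in out.splitlines():
            if "Temperature" in line or "Model Number" in line:
                print("   ", line.strip())

Unraid's own GUI reads the same SMART data, so per-drive temperature monitoring and pooling should work the same way as with CPU-attached slots.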
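
The lane tally from post 12, written out. The platform numbers are assumptions taken from public spec sheets (Alder Lake exposes 16 CPU lanes for the x16 slot plus 4 for one CPU-attached M.2, with any remaining M.2 slots hanging off the Z690 chipset), so they are worth checking against the board manual.

    drives          = 6
    lanes_per_drive = 4                        # an NVMe SSD negotiates up to x4

    cpu_lanes = 16 + 4                         # x16 slot + one CPU-attached M.2
    requested = drives * lanes_per_drive

    print(f"Lanes requested : {requested}")    # 24
    print(f"CPU lanes       : {cpu_lanes}")    # 20
    print(f"Shortfall       : {requested - cpu_lanes} "
          "lanes must come via the chipset M.2 slot(s) or a switch card")

The shortfall is also why the x8/x8 worry in post 11 matters: a passive four-slot M.2 card only yields four usable drives if the x16 slot can split all the way down to x4/x4/x4/x4.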
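
For the parity question in posts 13 and 14: as far as I understand Unraid's single parity, it is a bytewise XOR across the data drives, so every array write also has to update the parity drive (by default through a read-modify-write of both the data disk and the parity disk). The XOR itself is cheap; the extra disk I/O is what pulls array writes well below a drive's raw speed. A toy sketch of the XOR relationship, with four made-up 8-byte "drives":

    import os
    from functools import reduce

    data_drives = [os.urandom(8) for _ in range(4)]          # stand-ins for 4 data disks

    def xor_blocks(blocks):
        """XOR the corresponding bytes of every block together."""
        return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

    parity = xor_blocks(data_drives)                          # what the parity disk would hold

    # Lose drive 2, then rebuild it from the survivors plus parity.
    survivors = data_drives[:2] + data_drives[3:]
    rebuilt   = xor_blocks(survivors + [parity])
    assert rebuilt == data_drives[2]
    print("drive 2 rebuilt from parity:", rebuilt.hex())

That extra I/O on every write is the usual argument for landing writes on an unprotected cache pool first and letting the mover reconcile them with the parity-protected array later.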