moogoos

Everything posted by moogoos

  1. Are you sure those files didn't already exist on the "other disks" before a change? Assuming the "other disks" are a cache pool...
  2. Started looking at Cockpit on Ubuntu running ZFS. It's not bad: a pretty basic UI with some core server-admin functionality, and it can do NFS, Docker, and KVM natively side by side. Overall it works, but it's kind of boring. I moved all my old Ceph disks into an 8-drive RAID 10 (4 mirrors x 2 disks) ZFS pool on Unraid and also tested a 4-drive pool on the Ubuntu/Cockpit setup. I can now copy at ~4 Gbps on the 8-drive setup without using a parity disk or array, and I can still take advantage of all the other utility of Unraid for shares, widgets, app management, and now Docker/VMs against those shares. It's not what Unraid was intended for, I guess, but getting excellent performance on the main 40TB array is what I need, since all my media is served and synced there. Before, I was getting ~300 Mbps on a 4-drive-plus-1-parity array, I guess because each write is one data-drive write plus a parity write, which to me wasn't acceptable (rough numbers in the sketch below). Unraid isn't for everyone, given how it stores data on the main array, but ZFS performance is hard to pass up too. Oddly, this is the fastest over-network copy I've ever been able to achieve. I'll have 12 disks in the final build if testing keeps going well.
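     Since those two numbers look odd side by side, here's a rough back-of-envelope for the two layouts. The ~180 MB/s per-disk figure and the four-operation parity penalty are assumptions on my part (the usual read-modify-write explanation), so treat it as a sketch, not a benchmark.

     # Rough throughput estimate for the two layouts discussed above.
     # DISK_MBPS is an assumed sequential speed for one HDD, not a measurement.
     DISK_MBPS = 180

     def striped_mirror_write(mirror_vdevs):
         # ZFS "RAID 10": writes stripe across the mirror vdevs; each vdev runs
         # at roughly one disk's speed, since both members receive the same data.
         return mirror_vdevs * DISK_MBPS

     def parity_array_write():
         # Classic parity-protected array write: read old data + old parity,
         # write new data + new parity -- roughly four disk operations per
         # write, so about a quarter of a single disk's sequential rate.
         return DISK_MBPS / 4

     print(f"4 x 2-disk mirrors  : ~{striped_mirror_write(4):.0f} MB/s "
           f"(~{striped_mirror_write(4) * 8 / 1000:.1f} Gbps)")
     print(f"1 data disk + parity: ~{parity_array_write():.0f} MB/s "
           f"(~{parity_array_write() * 8 / 1000:.2f} Gbps)")

     The mirror estimate lands a bit above the ~4 Gbps I'm seeing (the network or source disks probably cap it), and the parity estimate lands right around the ~300 Mbps I was getting before.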
  3. This is fascinating: Unraid is almost perfect for me. I want something that has disk management and storage with native Docker and VM management and is somewhat mature.
     OMV doesn't have VMs, but has Docker via Portainer.
     TrueNAS has VMs, but no native Docker.
     Proxmox is something very different, but has VMs over ZFS. < I currently run an OMV VM on this setup with a 25TB virtual volume.
     Unraid has both Docker and VMs, but no real RAID, just a single-disk-access array, which I feel can get slow when accessing, searching, or scanning the media I have stored there.
     So I'm thinking of going with a single-disk array just so the array will start, moving to a pool of 8 HDDs, and setting that share to cache-only, while keeping my NVMe pool for Docker and VMs. Is there anything in this setup that is problematic? I have backups of everything, so that's not a huge concern; as long as I can repair the pool under btrfs should an HDD fail, I would think I'm good. Will HDD performance, especially for reads, be better than over the single-disk array? I think I have enough resources to test this out for a bit, so I might give it a go.
  4. Thinking about this further, I think I'm going to sell my 1TB NVMe drives and just get two larger ones and put them in the two onboard M.2 slots. That keeps my two PCIe slots free for a graphics card and something else, maybe NVMe expansion down the road. The 5950X should be fine, and it's still a lot cheaper this way. Maybe an HBA too, for more HDDs.
  5. I am right on the fence of needing more PCIe lanes for a bunch of M.2 SSDs. I'm also considering going up to the Fractal 7 for 12 drives, over the CS381 with 8 hot-swap bays and mATX. Managing PCIe lanes and bifurcation on the X570 is kind of a pain; there are trade-offs where using certain slots at x16 drops the next slot to x8, and things like that, so I'm thinking I'll go up to a TRX40 Threadripper. Anyone gone from Ryzen to TR, or even vice versa, and care to share experiences? All my old stuff is mostly ASRock Rack, and I've been ITX almost exclusively for 8 years, but I might go back to my old old ways and go ATX again. I don't see the server anyway.
     I need 10Gb.
     I need 12 HDDs.
     I need 9 M.2 ideally; I can do 4, but I want to utilize my existing drives if I can.
     I would like a GPU for transcoding, although I try to offer native sources as much as possible. < Not high priority.
     I'm thinking ASRock Rack TRX40D8-2N2T at the moment. Rough lane math below.
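     Assumed lane counts per device (x4 per NVMe; x8 each for the NIC, HBA, and GPU), just to show why AM4 runs out:

     # Rough PCIe lane tally for the build above. Per-device lane counts are
     # assumptions; the point is the total, not the exact figures.
     devices = {
         "9 x NVMe M.2 (x4 each)":           9 * 4,
         "10GbE NIC (x8)":                    8,
         "HBA for 12 HDDs (x8)":              8,
         "GPU for transcoding (x8 is fine)":  8,
     }
     print(f"lanes wanted: {sum(devices.values())}")   # 60
     # AM4 / X570 (5950X): ~20 usable CPU lanes plus shared chipset lanes.
     # TRX40 Threadripper: 64 CPU lanes, so no slot juggling or bifurcation tricks.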
  6. So cache:no means straight to the array, and cache:only means use the pool as a persistent store. A Spaceinvader video, I think it was, questioned the need for cache:only, but I see value in it: I don't want the mover attempting anything on some of my larger pools. My current mental model of the modes is sketched below.
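     Sketch of that mental model (how I've pieced the behavior together, not Unraid's actual code):

     # How I currently understand the per-share "Use cache" setting.
     def write_target(use_cache):
         # Where a brand-new file written to the share lands.
         return "pool" if use_cache in ("yes", "prefer", "only") else "array"

     def mover_action(use_cache):
         # What the scheduled mover does with files already on the share.
         return {
             "no":     "nothing (data already lives on the array)",
             "yes":    "move pool -> array",
             "prefer": "move array -> pool",
             "only":   "nothing (data stays on the pool)",
         }[use_cache]

     for mode in ("no", "yes", "prefer", "only"):
         print(f"{mode:7}  new writes -> {write_target(mode):5} | mover: {mover_action(mode)}")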
  7. Personally, I would never expose it this way. I started accessing my internal resources only over a VPN connection for administrative functions on my services.
  8. It's relevant to understanding it. You don't need to be a Linux programmer to want to know, at a simple level, how this works and what technology they're implementing to do it. My personal experience with FUSE, although that was 10 years ago, was not favorable. I'm just trying to learn how it manages the data so I don't screw something up myself.
  9. That's interesting. So /mnt/user is a path to the files combined across the two physical disks? Disk 1 and Disk 2 both have a media folder, obviously with different assets. How does the system merge these things under a different path into a single resource? How does this magic work?
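     From what I've gathered since asking: Unraid's FUSE layer (shfs) doesn't copy or merge anything on disk; /mnt/user is just a view that resolves each path against the member disks in turn. A toy illustration of the idea, with hypothetical paths and file names (an analogy, not shfs itself):

     # Toy analogy for the /mnt/user union view -- not shfs itself.
     import os

     BRANCHES = ["/mnt/disk1", "/mnt/disk2"]   # plus any pools, e.g. /mnt/cache

     def user_listdir(rel_path):
         # Listing /mnt/user/<rel_path> is the union of that folder on every disk.
         names = set()
         for branch in BRANCHES:
             p = os.path.join(branch, rel_path)
             if os.path.isdir(p):
                 names.update(os.listdir(p))
         return sorted(names)

     def user_resolve(rel_path):
         # Opening /mnt/user/<rel_path> serves the file from whichever disk has it.
         for branch in BRANCHES:
             p = os.path.join(branch, rel_path)
             if os.path.exists(p):
                 return p
         return None

     print(user_listdir("media"))               # assets from both disks, as one folder
     print(user_resolve("media/example.mkv"))   # hypothetical file name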
  10. So how does that work if the share is a docker container's volume that now needs the data from the old pool as well as the new data being written to the array? I'm beginning to think I don't want an array at all and would rather just move all my HDDs to a btrfs pool. This concept of "cache" shares that are really RAID volumes, persistent rather than temporary... the term "cache" kind of throws me off.
  11. Curious why you chose the GT 710?
  12. Toggles meaning the different cache options. I still don't really understand the behavior: if I toggle from "prefer" to "no" so I can then run the mover and have the data moved off the pool to the array, the data is still split between the two locations until the mover completes. How does an application, like a docker container with data persisted on a "prefer" share in a pool like this, deal with that?
  13. Hello. I don't have an end goal per se. I'm just trying to understand the behavior and avoid corruption while I figure out these toggles before the "mover" runs. I've never used this type of setup before, and I'm understanding why the name now, lol. I come from a Proxmox and OMV world with standard RAID setups of various types. If data was on the NVMe pool as "prefer" and I change this to "no" without the mover job having run, and I keep writing data, I would assume the data is then split across the NVMe pool and the array? This is the part I'm struggling with: if the mover hasn't run and I change the share behavior, then I would assume I now have data in two places. Then, at some scheduled time, the mover job will run and move the old data to the array. Rough sketch of the situation below.
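      To make the question concrete, here is a toy sketch (definitely not Unraid's real mover) of what I mean: old files still sitting on the NVMe pool, new writes landing on the array, and a mover pass that would reunite them on the array. As I understand it, /mnt/user keeps presenting one combined view the whole time, so the paths the container uses don't change while the data is split.

      # Toy sketch of a mover pass for one share -- not Unraid's actual mover.
      # Paths are hypothetical; /mnt/user/appdata would show the union of both.
      import os, shutil

      POOL  = "/mnt/cache/appdata"   # where the old "prefer" data still sits
      ARRAY = "/mnt/disk1/appdata"   # where new writes land after the change

      def mover_pass():
          # Walk the share's folder on the pool and relocate every file to the array.
          for root, _dirs, files in os.walk(POOL):
              for name in files:
                  src = os.path.join(root, name)
                  dst = os.path.join(ARRAY, os.path.relpath(src, POOL))
                  os.makedirs(os.path.dirname(dst), exist_ok=True)
                  shutil.move(src, dst)   # same /mnt/user path before and after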
  14. You typically need to change the application's logging to go to stdout and stderr for it to show up in Docker's log management. Minimal example below.
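      A minimal Python example of the idea; the key point is that nothing is written to a log file inside the container:

      # Log to stdout so `docker logs <container>` (and Unraid's log viewer) sees it.
      import logging, sys

      logging.basicConfig(
          stream=sys.stdout,    # or sys.stderr -- just not a file in the container
          level=logging.INFO,
          format="%(asctime)s %(levelname)s %(message)s",
      )
      logging.info("this line shows up in `docker logs`")

      For apps that insist on writing to a file, the usual workaround is to symlink that file to /dev/stdout or /dev/stderr inside the image, which is what the official nginx image does for its access and error logs.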
  15. I created a share on my NVMe pool, set to "prefer", to hold persistent data for a docker container. What happens if I change this to "No"? If I haven't moved the data yet, what is the expected behavior when that docker container writes new data to the share? How does it read data once this is toggled?
      When parity is complete?
      When parity isn't complete?