dhendodong

Members
  • Posts: 11

Everything posted by dhendodong

  1. I'm sorry, I forgot to mention that I am transcoding 24/7 using TDarr. After around 24 hours I max out the second SSD's cache (the SSD's own internal cache, not Unraid's), which brings the second SSD's performance to a crawl. Will the RAID 0 setup suffer the same problem, or will it solve this issue? Thanks, Dhen
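For intuition on why 24/7 writes eventually outrun the SSD's internal (SLC) cache, here is a minimal back-of-envelope sketch; the cache size and throughput figures are illustrative assumptions, not measured 980 Pro specs:

```python
# Rough estimate of how long sustained writes take to exhaust an SSD's
# SLC write cache. The cache fills whenever data arrives faster than the
# drive can fold it back to TLC in the background.
# All figures below are illustrative assumptions, not measured specs.

def hours_until_cache_full(cache_gb: float, write_mb_s: float,
                           fold_mb_s: float) -> float:
    """Hours of continuous writing before the SLC cache fills up,
    given a sustained inflow and a background fold-to-TLC rate."""
    net = write_mb_s - fold_mb_s
    if net <= 0:
        return float("inf")  # drive keeps up; cache never fills
    return cache_gb * 1024 / net / 3600

# Assumed values: ~200 GB dynamic SLC cache, 80 MB/s of sustained
# transcode output, drive folding at 78 MB/s in the background.
print(round(hours_until_cache_full(200, 80, 78), 1))  # ≈ 28.4 hours
```

Under these assumed numbers the cache fills after roughly a day of nonstop writing, which is consistent with the ~24-hour slowdown described above; during idle gaps the drive would flush the cache and reset the clock.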
  2. Hello, I want to ask a performance-related question regarding ZFS. What would be the best cache-pool setup? I have two Samsung 980 Pro 2TB drives, and I want to get the most performance; data protection is not the priority. Should I go with:
     a.) Separating the load: the 1st 980 dedicated to VMs and Docker, the 2nd 980 as array cache for the *Arrs' downloads
     b.) Putting them both in RAID 0
     I am afraid that scenario b.) will increase my access time and latency, so I am currently using a.). I'd like to be enlightened. Thanks, Dhen
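For what it's worth, a rough model of what RAID 0 buys you, assuming Samsung's approximate rated sequential figures for the 2TB 980 Pro (~7000 MB/s read, ~5100 MB/s write); striping scales sequential throughput but does not reduce single-request latency:

```python
# Back-of-envelope comparison of layouts a.) and b.).
# Per-drive figures are vendor-rated sequential speeds (approximate);
# real mixed VM/download workloads will be considerably lower.

SEQ_READ_MB_S = 7000   # 2TB 980 Pro rated sequential read (approx.)
SEQ_WRITE_MB_S = 5100  # 2TB 980 Pro rated sequential write (approx.)

def striped_throughput(per_drive_mb_s: float, n_drives: int) -> float:
    """Ideal RAID 0 sequential throughput: scales roughly linearly
    with drive count. Latency of a single small I/O does NOT improve,
    since each individual request still lands on one drive."""
    return per_drive_mb_s * n_drives

print(striped_throughput(SEQ_WRITE_MB_S, 2))  # → 10200 (ideal writes)
```

So b.) mainly helps large sequential transfers; small random I/O (the VM/Docker side of a.) sees little benefit and no latency penalty either way, at least in this idealized model.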
  3. Well, even with the highest-capacity drives available today, say 20TB, you still can't reach a PB of storage at the 30-drive limit. Say you want to use it in a small business: in my opinion, a single mass pool is better than simplifying the User Share implementation, because it's easier to manage when everything is on a single pool, and at the same time you can maximize ZFS' full potential, e.g. snapshots, RAIDZ, and so on. But that is just my opinion.
  4. Yeah, that's why, like I said, they should only allow us to bypass the 30-drive-per-pool limit if we are not using Unraid's default parity implementation, i.e. only when using OpenZFS' implementation (RAIDZ1, Z2, Z3, and so on). That way we have unlimited potential for drive expansion, and Unraid will be a real NAS competitor. Remember, NOKIA went bankrupt because they did not embrace change and stuck to their comfort zone.
  5. Yes, I think they have a plan to support multiple pools in the future. However, in my opinion, why not just combine everything into a single pool to make the shares easy to manage? When there are more than 30 drives in a single pool, just disable Unraid's built-in parity, which I think is what prevents us from having more than 30 drives in one pool, or only allow more than 30 drives in a single pool when ZFS/Btrfs etc. is the filesystem.
  6. Now that ZFS is almost out, it seems the 30-drive limit is bottlenecking its full potential, especially now that hard drives are more affordable. IMHO the limit should be increased to at least 60, especially since ZFS is imminent and we will have the choice not to rely on Unraid's built-in parity, because we'll have ZFS RAIDZ1/Z2/Z3. But I understand not everyone wants or needs more than 30 drives, so I propose a new license tier beyond Unraid Pro, probably a SOHO edition. That would make sense so Unraid can become a REAL NAS OS competitor beyond its current target market of home users, who will likely switch to a more capable NAS OS once they outgrow Unraid. This proposal could evolve Unraid and let it start being used in small/startup business infrastructures. IMHO it's the drive limit that is bottlenecking both Unraid's and ZFS' full potential, like I already said before.
  7. Hi, I plan to expand my current drive count, and I would like to ask for help with which setup is best, as I don't really know how an expander works. Currently my motherboard has:
     x8 - GPU
     x8 - LSI 9201
     x4 - free
     Here are my plans:
     Plan A:
     x8 - LSI 9201
     x8 - LSI 9201
     x4 - GPU (for display/transcoding only)
     Plan B:
     x8 - GPU
     x8 - LSI 9201
     x4 - LSI 9211-4i, fully linked to an Adaptec AEC-82885T 36-port SAS-3 12Gbps expander, firmware B059 (sold by Art of Server), on a PCIe riser for power
     Plan C:
     x8 - GPU
     x8 - LSI 9201
     x4 - LSI 9211-4i
     What is my best option, and what speed/bandwidth would I get? I think 250MB/s per drive is enough for me. Feel free to suggest a better expander/route where I can get more hard drive expansion. Thanks in advance.
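As a sanity check on the bandwidth side: both the LSI 9201 and 9211 families are PCIe 2.0 cards, so the slot, not the SAS ports, is usually the ceiling. A rough sketch (the ~400 MB/s-per-lane usable figure and the drive counts are approximations/assumptions):

```python
# Rough per-drive bandwidth behind one HBA (or HBA + expander) link.
# PCIe 2.0 is 500 MB/s per lane raw; roughly 400 MB/s is usable after
# 8b/10b encoding and protocol overhead (approximate figure).

USABLE_MB_S_PER_LANE = 400  # PCIe 2.0, approximate

def per_drive_mb_s(pcie_lanes: int, n_active_drives: int) -> float:
    """Usable host bandwidth split evenly across drives that are
    streaming simultaneously behind the same card."""
    return pcie_lanes * USABLE_MB_S_PER_LANE / n_active_drives

# Plan B: 9211 in the x4 slot feeding the expander, e.g. 24 drives:
print(per_drive_mb_s(4, 24))   # ~67 MB/s each if all stream at once
# The same card in an x8 slot with 12 active drives:
print(per_drive_mb_s(8, 12))   # ~267 MB/s each, above the 250 MB/s target
```

The takeaway under these assumptions: an expander hanging off the x4 slot caps the whole chain at roughly 1.6 GB/s, so the 250 MB/s-per-drive target only holds when few drives are active at once.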
  8. In this new "major" update, now that ZFS is supported, is the maximum number of supported drives still 30? Thanks!
  9. Hi! Is the new Unraid kernel that supports Intel Arc still available to test? I just got my Arc today, and I was hoping I could test it. We talked on Reddit several months ago regarding the test build. Thanks!
  10. Got it working with a Windows VM; it's not that hard to implement. The only problem is that it's worse than Samba. Does it matter that I have a passed-through 10G NIC?
      Samba speed: 500-650MB/s
      VirtIO speed: 100-120MB/s
      Large or small files, it seems that's the maximum I can get. Is there any way to tune this? Regards, Dhen
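One way to rule out the copy tool itself: time a sequential read of the same large file over each mount with the same script. A minimal sketch (the paths in the usage comment are placeholders, not real mounts):

```python
import time

def read_throughput_mb_s(path: str, block_size: int = 1 << 20) -> float:
    """Sequentially read `path` in 1 MiB chunks and return MB/s.
    Note: a repeat run may be served from the OS page cache, so use a
    file larger than RAM (or reboot between runs) for honest numbers."""
    total = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while chunk := f.read(block_size):
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return total / (1024 * 1024) / elapsed

# Hypothetical usage -- same file, two mounts:
# read_throughput_mb_s("Z:/bigfile.bin")   # SMB mount
# read_throughput_mb_s("V:/bigfile.bin")   # virtiofs mount
```

If both mounts measure the same with this script as with a file-manager copy, the gap is in the transport itself rather than in Explorer's copy pipeline.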
  11. No matter what I do, I can't access my hpx server in the companion app (Android). I tried using a guest account and a newly created admin account, to no avail. I tried different host formats:
      192.168.1.2
      //192.168.1.2
      http://192.168.1.2
      //nas
      /nas
      nas
      All to no avail. But using the Chrome browser on Android I can access it. What am I missing here?
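If it helps to separate an app problem from a network one, the ports the browser reaches can be probed directly from any machine on the LAN. A small sketch (the IP is the one from the post; ports 80/443 are assumptions about what the web UI listens on):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for port in (80, 443):  # assumed web UI ports
        print(port, port_open("192.168.1.2", port))
```

Since Chrome on the same phone can reach the server, both ports will likely probe open, which would point the finger at how the app parses the host field rather than at the network.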