jonathanselye

Everything posted by jonathanselye

  1. Don't think about it; they've already forgotten about it, even though they've been teasing it since January 2023. What a joke.
  2. Is there no holiday surprise or some fun thing to give back to the community? *ehem* Unraid 6.13 private beta (which they seem to have forgotten about; announced way, way back with no update until now) *ehem* *ehem*
  3. Hi! What is the earliest version in which we will be able to RAID0 XFS on pool devices? Thanks!
  4. I think it will be introduced in 6.13, which they said will be here Sooner™.
  5. Anybody got Tdarr HEVC transcoding to work with the thor kernel?
  6. thor has already released a new kernel with Intel Arc drivers built in.
  7. Any rumors on the street about if/when the 6.13 series might be released? Soon™?
  8. In this release, is it possible to start a pool without a device in the main array?
  9. Yes, it has all the server features built in: 10Gb and rich PCIe on a consumer motherboard. I found the updated post; thanks for updating it. One question, and I'm sorry for the dumb question: how do I get my ideal BAR size? (A quick check sketch follows this list.) And will it work if I have already unbound the GPU from Unraid [using vfio-pci] and I only have the </devices> line; is that the same? UPDATE: Tried it, and I think it is working, but the driver is not yet available. After I did what you instructed, a new device called "NF I2C Slave Device" appeared in Device Manager; when I tried updating the driver, both online and with the latest virtio package, no driver seems to be available yet. So I think it is working; the driver just isn't available yet.
  10. I see; I will try your method later and come back with results. Yes, it benefits greatly from ReBAR even in gaming. My motherboard is an X570 ProArt and the CPU is a Ryzen 5950X, and everything from your instructions is already enabled at the BIOS level, because when I used Proxmox I was able to utilize ReBAR. There are still some transcodes in the queue, so I can't do it now; I will come back later with results. Thanks!
  11. Apologies for my lack of information. I actually did not apply the patch on 6.12 because I might break some things, and I am asking this thread for the go signal on whether it is safe. For the hardware: the GPU is an Intel Arc A770, I am using it in a VM for Tdarr, and ReBAR increases its performance by around 10-30% in transcoding, so it is nice to have. As per page 1 of the thread, it is said that support should be coming in 6.1, so I am wondering if there are just some options I need to enable or commands to type; hence the reply. Thanks for the quick reply!
  12. Actually, the stable is already released, but unfortunately ReBAR is not natively supported.
  13. How do I make this work with Unraid 6.12, which natively has the 6.1 kernel? (A hedged resize sketch follows this list.)
  14. Thank you Limetech for granting our wish of more than 30 devices ❤️ now we can all have a PB Unraid server ❤️ ❤️ ❤️. And why is nobody talking about Linux Multi-Gen LRU? What does it do, and will it improve RAM usage in our use case, specifically XFS file caching to RAM? (A quick MGLRU check sketch follows this list.)
  15. Hooray for RC7! Is it lucky 7? Probably the "real" last one, as Limetech is targeting a mid-June release.
  16. Sweet, thanks! I'm so excited for the stable release.
  17. Tried it with unBALANCE copy/move; the folder it created does not seem to be recognized as a ZFS file system. But now I understand it; how dumb of me to forget to restrict the share to only the disk I want. Thanks, it's all clear to me now. One last question. I know you are not allowed to release more information to the public yet, but is it safe to migrate my data now, or is it better to wait for the stable release? Will there be more changes to the ZFS side? I will understand if you are not allowed to release information yet. Thanks for the great help; you answered my question perfectly!
  18. I wish to transfer data from, e.g., disk1 to disk2. On XFS this can simply be done via unBALANCE. In the new ZFS array, how can this be done if the share (ZFS dataset) does not yet exist on the destination disk? Is there a way to somehow force the GUI to create the empty ZFS share on the destination disk? (A zfs create sketch follows this list.)
  19. Apologies for the dumb question, but how does one create a dataset on a specific disk via the GUI? (When creating via a share, it only exists on one disk.)
  20. Oops, sorry for the confusion. I meant: create the ZFS dataset using the ZFS Master plugin (does that count as GUI?) and then put data into it; will Unraid treat it the same as the ZFS dataset it auto-creates when making a new share? I plan on migrating from XFS to ZFS, moving data disk by disk by creating the dataset via ZFS Master and then moving data into it. I tested this on a test server: when making a share, Unraid automatically creates a ZFS share, and I also tried making a ZFS dataset via ZFS Master on another disk where the share does not exist yet. SHFS seems to work fine with both the auto-created and the manually created ZFS. I just want to be sure about data integrity and that it is fine, especially since I'm migrating a lot of data.
  21. Now that we can use ZFS on the main array, what is the difference between an automatically created ZFS share and a manually created ZFS dataset with the same name on the same disk? Will it affect data integrity, or will they be technically the same?
  22. It's better to wait than to have a rushed, uncooked OS; just trust the Unraid/Limetech team :)) Just like you, I'm also excited for the stable release, but I'm on the RC now on my main server and so far it is rock solid.
  23. It means another RC. Hello, RC6!!
  24. Hi! I am currently using an LSI 9300-16i HBA + Adaptec 82885T expander and maximizing my array slots. My questions are as follows (a link-rate check sketch follows this list):
      1.) In my head, if I keep all the drive connections in the HBA+expander combo, I will get better speed and latency than with the motherboard's built-in SATA ports. Is there any reality to this? My thinking is that if every operation is handled by the HBA, everything stays on PCIe with no need to go through the motherboard's PCH, which I suspect adds a small delay or maybe a performance penalty.
      2.) Now that 6.12 is imminent and multiple-array support is coming, I plan to further expand my array and add another expander (the same model as I have now). What would be the better connection for optimal performance?
      a.) Plug both expanders into the HBA with dual links and balance the HDD SAS connections across both expanders;
      b.) use motherboard SATA plus the "a.)" approach to reduce the load, so I can add more HDDs; or
      c.) use the first expander's (external) SFF-8644 ports to connect to the second expander (dual link), then connect the first expander to the HBA with another dual link.
      Thanks!
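
For the "ideal BAR size" question in item 9, a minimal check sketch from the Unraid shell, assuming a hypothetical GPU address of 0000:03:00.0 (substitute your own from lspci); this is my own sketch, not an official procedure from the thread:

```bash
#!/bin/bash
# Hypothetical PCI address of the Arc A770; find yours with: lspci | grep -i -e vga -e display
GPU=0000:03:00.0

# Show the Resizable BAR capability and the currently negotiated BAR sizes.
lspci -vvs "$GPU" | grep -A4 -i 'resizable bar'

# On kernels that expose the resize interface, each resizable BAR has a
# resourceN_resize file; reading it returns a hex bitmask of supported
# sizes, where bit n means 2^n MB (e.g. bit 14 = 16 GB).
cat /sys/bus/pci/devices/"$GPU"/resource*_resize 2>/dev/null
```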
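
For item 13: the 6.1 kernel in Unraid 6.12 already ships the upstream PCI BAR-resize sysfs interface, so one hedged approach (a sketch only, not a confirmed Unraid procedure; the device address and BAR number are assumptions) is to resize the BAR while the GPU is unbound:

```bash
#!/bin/bash
# Sketch only: assumes the GPU is at 0000:03:00.0 and not bound to any driver.
GPU=/sys/bus/pci/devices/0000:03:00.0

# Supported sizes as a bitmask (bit n = 2^n MB).
cat "$GPU"/resource1_resize

# Writing the bit position resizes that BAR: 14 requests a 16 GB BAR.
# The write fails with EBUSY while a driver is still bound to the device.
echo 14 > "$GPU"/resource1_resize
```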
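
For the Multi-Gen LRU question in item 14, a quick check sketch using the standard upstream sysfs knobs (not verified against any specific Unraid build):

```bash
#!/bin/bash
# 0x0000 means MGLRU is disabled; 0x0007 means the main features are on.
cat /sys/kernel/mm/lru_gen/enabled

# Toggle the whole feature set on ('n' turns it off again).
echo y > /sys/kernel/mm/lru_gen/enabled

# Minimum time-to-live (ms) protecting the youngest generation's
# working set from eviction; 0 disables that protection.
cat /sys/kernel/mm/lru_gen/min_ttl_ms
```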
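
For the missing destination dataset in items 18-20, a sketch using stock zfs and rsync commands; "media" is a made-up share name, and the diskN pool names follow Unraid's array-disk convention (test on scratch data first):

```bash
#!/bin/bash
# On a ZFS-formatted array disk, each disk is its own pool named diskN.
# Create the dataset on the destination disk so the share exists there.
zfs create disk2/media

# Copy the data, preserving attributes; verify before deleting the source.
rsync -avh --progress /mnt/disk1/media/ /mnt/disk2/media/

# Confirm the new dataset mounted where Unraid/SHFS expects it.
zfs list -o name,mountpoint | grep '^disk2'
```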
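
For the topology question in item 24, one way to sanity-check what each path actually negotiates is to read the SAS phy link rates and the HBA's PCIe link from sysfs (standard Linux paths; the HBA address is an assumption):

```bash
#!/bin/bash
# Negotiated rate of every SAS phy (HBA ports and expander ports).
for phy in /sys/class/sas_phy/phy-*; do
    echo "$(basename "$phy"): $(cat "$phy"/negotiated_linkrate)"
done

# PCIe link speed/width of the HBA itself; find the address with:
#   lspci | grep -i -e lsi -e sas
HBA=0000:01:00.0
cat /sys/bus/pci/devices/$HBA/current_link_speed \
    /sys/bus/pci/devices/$HBA/current_link_width
```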