competent-bailout3425

Members
  • Posts: 21
  • Joined
  • Last visited

competent-bailout3425's Achievements

Noob (1/14)

Reputation: 0

  1. Given their team's development capacity, automated caching probably isn't happening, though borrowing ZFS's ARC is at least conceivable. Also, the problem I described is not at all the one he is talking about: in my experience Unraid's IO performance is at least no worse than a single disk (ZFS bugs aside). What I described is that shfs filesystem traversal is unexpectedly slow. When I traverse a directory tree that is stored only on the SSD, going through the /mnt/user node is more than twenty times slower than traversing /mnt/cache directly, and that is with ZFS and a huge amount of memory (a 192 GB ARC), where the ARC caches the directory tree so the traversal generates no disk IO at all (a traversal-timing sketch follows at the end of this list).
  2. What hardware problems? Unraid is a fairly hardware-friendly system as these things go. Apart from a couple of years back, when 2.5G NICs had just come out and Unraid's kernel version didn't support them, hardware should be supported just fine.
  3. The last version and the ones before it were actually quite stable, but the stability and usability of the newly added ZFS feature are very worrying. When I migrated to ZFS I also hit Unraid's ZFS limit on maximum filename length, which meant some of my files could not be copied over correctly, and then there was that nasty ZFS bug a while back. The developers really should label the ZFS feature as experimental; what makes it worse is that https://unraid.net/blog/zfs-guide even gives ZFS a fairly glowing write-up. Everything else just about reaches a barely-usable level.
  4. Let me also explain: to you, quietly fixing it in the next y release and committing to fix it in the next y release may look like the same thing. But this is a bug that can potentially be worked around, because back when I used btrfs the problem was not nearly as severe (it could still manage about 1 GB/s of throughput), and xfs most likely doesn't have the problem at all. If they fix it in the next z release, I will wait; if they won't even fix it in the next y release, I will reformat everything back to xfs. That is why I care so much about a committed fix date.
  5. The reason is simple: this is a small company, short on both engineering capability and manpower. Home users don't pay enough to sustain a large, technically strong team, so the company can only cobble together a workable solution with simple approaches. Unraid's current cache is exactly that: usable on a home setup after appropriate configuration. To be fair, this approach does cheaply hit most home users' needs, for example the non-striped layout means the worst case only loses the data on the failed disk, something enterprise users don't need at all but which is very practical at home. As for the playing-dead issue, I don't understand how you can take such a forgiving attitude. Suppose you bought something, it broke, you contacted the seller for after-sales service, and after receiving the return the seller said: yes, it is defective, just wait, I'll fix it whenever I find the time and then ship it back. Maybe you would find that an acceptable reply; I certainly wouldn't.
  6. At least from my perspective as a software developer, both issues come down to poorly written code. The "IO performance loss" refers to the lack of striping, so performance doesn't scale up with more disks; it doesn't mean more disks make it slower. The filesystem traversal problem could obviously be solved by keeping an shfs directory tree in memory, and even allowing for memory-usage concerns, traversing every disk's filesystem on each access shouldn't produce such a large gap. The code most likely issues synchronous IO to each disk in turn, and switching to asynchronous IO would probably also mitigate it (with 8 data + 2 parity disks, a 20x performance gap is clearly not an expected result; see the sequential-vs-concurrent sketch after this list). As for the thread I linked: at the company I work for, every task goes through a scheduling process, and even low-priority tasks get an explicit target release rather than being dropped from the roadmap. Not giving a fix date is, to me, no different from playing dead.
  7. Once ZFS raidz dynamic expansion is released, I may well just switch to TrueNAS SCALE.
  8. For example this one: https://forums.unraid.net/bug-reports/stable-releases/unraid-zfs-hybrid-mode-read-slow-read-multipe-disk-r2833/?tab=comments#comment-27175. The developers are just playing dead. I have found more than one performance problem in Unraid; I also found earlier that the filesystem driver behind the user directory apparently does no caching, so every directory traversal has to hit every disk's filesystem, which makes traversing files through the /mnt/user/ node extremely slow.
  9. Running for a long time doesn't make it stable; it's riddled with bugs.
  10. It seems that when Unraid uses ZFS as the filesystem for a disk in an array, it automatically creates a subvolume (dataset) with the same name as each top-level directory and mounts it onto that directory. For example, if disk5 is using ZFS, it will have a subvolume named "asd" mounted at /mnt/disk5/asd; similarly, /mnt/disk5/qwe will be mounted from the "qwe" subvolume of disk5. What is the purpose of this behavior? It causes problems when trying to move files across top-level directories or create hard links (a small demonstration of the cross-mount failure follows after this list).
  11. The original page is here: https://forums.unraid.net/topic/152052-unraid-zfs-hybrid-mode-read-slow-read-multipe-disk/. I was advised to report the bug here.
  12. I have 9 ZFS-formatted data disks and 2 parity disks in an Unraid array. When I read data from the 9 data disks simultaneously through /mnt/diskX, the overall throughput is very low, only about 300 MB/s, barely higher than the throughput of a single disk (260 MB/s). At the same time, my 32-core 7D12 CPU is almost fully loaded in the background. Using "top", I can see that processes like unraidd1 and unraidd2 are consuming a significant amount of CPU. I have confirmed that my disk bandwidth is sufficient for all disks to read at full speed simultaneously: during a parity check, all 11 disks run above 250 MB/s. Some tests: 3 disks, 9 disks (a parallel-read measurement sketch follows after this list). It is highly likely that Unraid assigns a separate kernel thread to each disk to handle IO requests but misuses spin locks across those threads (more disks, lower throughput, but higher CPU usage).
  13. I don't believe that checksum verification is the main culprit here. Nowadays even a single CPU core has computing power measured in GOPS (giga-operations per second), and a data flow of less than 1 GB/s should not cause significant CPU usage (a quick single-core checksum measurement follows after this list). Additionally, "top" shows that the CPU-consuming processes are Unraid kernel threads. It is highly likely that Unraid assigns a separate kernel thread to each disk to handle IO requests, and misused spin locks make this happen (more disks, lower throughput, but higher CPU usage).
  14. I have 9 ZFS-formatted data disks and 2 parity disks in an Unraid array. When I read data from the 9 data disks simultaneously through /mnt/diskX, the overall throughput is very low, only about 300 MB/s, barely higher than the throughput of a single disk (260 MB/s). At the same time, my 32-core 7D12 CPU is almost fully loaded in the background. Using "top", I can see that processes like unraidd1 and unraidd2 are consuming a significant amount of CPU. I have confirmed that my disk bandwidth is sufficient for all disks to read at full speed simultaneously: during a parity check, all 11 disks run above 250 MB/s.
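
For the traversal slowdown described in items 1 and 8, a minimal timing sketch, assuming a share such as appdata that lives only on the cache pool (the path names are assumptions, not taken from the posts): it walks the same tree through /mnt/cache and through /mnt/user and prints how long each takes.

    import os
    import time

    def walk_count(root):
        """Walk the whole tree under root and count directories and files."""
        ndirs = nfiles = 0
        for _, dirnames, filenames in os.walk(root):
            ndirs += len(dirnames)
            nfiles += len(filenames)
        return ndirs, nfiles

    for root in ("/mnt/cache/appdata", "/mnt/user/appdata"):  # assumed paths
        t0 = time.monotonic()
        ndirs, nfiles = walk_count(root)
        dt = time.monotonic() - t0
        print(f"{root}: {ndirs} dirs, {nfiles} files in {dt:.2f}s")

Running it a second time shows the cached case: once the directory tree sits in the ARC or page cache, the /mnt/cache walk produces essentially no disk IO, which is what makes the /mnt/user gap stand out.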
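
For the sequential-versus-asynchronous IO point in item 6, a minimal sketch of the idea, with the disk count and the share name "media" as purely illustrative assumptions: it lists the same directory on every data disk one after another, then again with all requests in flight at once, which is roughly the difference between per-disk synchronous IO and an asynchronous fan-out.

    import os
    import time
    from concurrent.futures import ThreadPoolExecutor

    DISK_DIRS = [f"/mnt/disk{i}/media" for i in range(1, 9)]  # assumed 8 data disks

    def list_dir(path):
        """List one directory, skipping disks that do not hold the share."""
        try:
            return os.listdir(path)
        except FileNotFoundError:
            return []

    t0 = time.monotonic()
    for p in DISK_DIRS:                      # one disk at a time, like synchronous IO
        list_dir(p)
    print(f"sequential: {time.monotonic() - t0:.3f}s")

    t0 = time.monotonic()
    with ThreadPoolExecutor(len(DISK_DIRS)) as pool:   # all disks in flight at once
        list(pool.map(list_dir, DISK_DIRS))
    print(f"concurrent: {time.monotonic() - t0:.3f}s")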
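
For the per-share subvolumes described in item 10, a minimal sketch of the failure mode, assuming the example paths from that post exist: a hard link (and a plain rename) across two top-level directories fails with EXDEV, because each directory is a separate mounted dataset.

    import errno
    import os

    src = "/mnt/disk5/asd/example.txt"   # assumed existing file
    dst = "/mnt/disk5/qwe/example.txt"   # different top-level directory/dataset

    try:
        os.link(src, dst)                # os.rename() fails the same way
        print("hard link created: both paths are on one filesystem")
    except OSError as e:
        if e.errno == errno.EXDEV:
            print("EXDEV: the shares are separate mounted datasets,"
                  " so hard links and plain renames cannot cross them")
        else:
            raise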
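
For the multi-disk read test in items 12 and 14, a minimal sketch of how such a measurement could be reproduced, assuming one large test file per data disk (the paths are assumptions): it reads all files in parallel and reports aggregate throughput. Dropping the page cache first, or using files larger than RAM, keeps the numbers honest.

    import time
    from concurrent.futures import ThreadPoolExecutor

    FILES = [f"/mnt/disk{i}/bench/big.bin" for i in range(1, 10)]  # assumed test files
    CHUNK = 4 * 1024 * 1024  # 4 MiB reads

    def drain(path):
        """Read one file to the end and return the number of bytes read."""
        total = 0
        with open(path, "rb", buffering=0) as f:
            while True:
                buf = f.read(CHUNK)
                if not buf:
                    return total
                total += len(buf)

    t0 = time.monotonic()
    with ThreadPoolExecutor(len(FILES)) as pool:
        total = sum(pool.map(drain, FILES))
    dt = time.monotonic() - t0
    print(f"{total / dt / 1e6:.0f} MB/s aggregate over {len(FILES)} disks")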
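
For the checksum argument in item 13, a minimal sketch, using zlib.crc32 as a stand-in for whatever checksum the array actually computes (an assumption): it shows that a single core can checksum data far faster than the roughly 300 MB/s observed, so checksumming alone should not saturate a 32-core CPU.

    import time
    import zlib

    buf = bytes(256 * 1024 * 1024)  # 256 MiB of zeros, checksummed on one core
    t0 = time.monotonic()
    zlib.crc32(buf)
    dt = time.monotonic() - t0
    print(f"single-core crc32: {len(buf) / dt / 1e9:.1f} GB/s")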