JorgeB

Moderators
  • Posts

    67,125
  • Joined

  • Last visited

  • Days Won

    703

Everything posted by JorgeB

  1. They are very common on lower-end boards from all manufacturers since they're cheap; higher-end Asus boards usually use Intel NICs.
  2. On the GUI, upper right corner, or by typing mc on the console/terminal.
  3. You can use the built-in terminal, and you don't need to run any commands; mc is like Norton Commander for DOS, if you ever used that.
  4. Yeah, but like I said it doesn't make sense that you're getting full 10GbE with iperf, i.e., 1GB/s (assuming the test was done with a single transfer, not with something like 10 simultaneous transfers), and then can't even transfer at 100MB/s. Whatever the problem is, I'm sorry, but I'm out of ideas.
  5. If iperf uses the full 10GbE bandwidth and even a direct transfer only copies at 65MB/s, that would imply the network isn't the problem; either the read speed on the source or, more likely, the write speed on the destination server is the bottleneck, unless you can transfer faster from the desktop to the same server. If that's the case, though, your problem doesn't make much sense, and you'll need to do some testing to rule things out.
  6. Try deleting them directly on the server using Midnight Commander (mc on the console).
  7. It can depend on the hardware used (NICs and switch), but it's perfectly normal. You should transfer directly from one server to another; you can use, for example, the Unassigned Devices plugin to mount one server's share(s) on the other and use mc or Krusader to transfer directly.
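A minimal sketch of the server-to-server approach described above; the IP address, share name, and mount points are hypothetical, and in practice the Unassigned Devices plugin does the mounting for you from the GUI:

```shell
# Mount the other server's SMB share on this server
# (normally done via the Unassigned Devices plugin GUI):
mkdir -p /mnt/remotes/backup
mount -t cifs -o username=youruser //192.168.1.50/backup /mnt/remotes/backup

# Then copy server-to-server without the desktop in the path,
# e.g. with rsync instead of mc/Krusader:
rsync -av --progress /mnt/user/media/ /mnt/remotes/backup/media/
```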
  8. Are you using your desktop to do the transfer, or is it going directly from one server to another, i.e., without the desktop being involved? If you're using your desktop, and the desktop isn't 10GbE, those are perfectly normal speeds, since the data goes from one server to the desktop and then from the desktop to the other server, limited by the gigabit NIC on the desktop; worse, since the same NIC is receiving and sending data simultaneously, it will never even reach full gigabit speeds.
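To put a number on the gigabit ceiling mentioned above, a quick back-of-envelope calculation:

```shell
# Line-rate ceiling of a gigabit NIC, in decimal MB/s:
# 1 Gb/s = 1,000,000,000 bits/s / 8 = 125,000,000 B/s = 125 MB/s.
echo $(( 1000000000 / 8 / 1000000 ))   # prints 125
# Real-world TCP/SMB transfers top out around ~110-117 MB/s after
# protocol overhead, and lower still when the same NIC is relaying
# (receiving from one server while sending to the other).
```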
  9. Sorry, I read that when you originally posted but forgot; re-reading your post, this is not clear to me: is your desktop also on 10GbE, and if you transfer directly from one Unraid server to the other using the cache pool on both, do you get the expected speeds?
  10. Use iperf to test network bandwidth.
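A basic iperf3 run, in case it helps anyone following along (the IP address is hypothetical):

```shell
# On the destination server, start iperf3 in server mode:
iperf3 -s
# On the source server, run a single-stream test against it:
iperf3 -c 192.168.1.50
# Add -P 4 for parallel streams, or -R to test the reverse direction.
```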
  11. It can be. IMO, while working with btrfs, and while fsck is practically non-functional, everyone using it needs to be prepared to back up, restore, and re-format when corruption happens; very few situations can be fixed with btrfs repair, and it should only be used as a last resort or if a btrfs maintainer tells you to. On the other hand, the data recovery options, like mounting read-only or btrfs restore, work most of the time, so it's not all bad.
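The read-only recovery options mentioned above look roughly like this, in order of preference; the device name and mount points are hypothetical, so adjust them to your pool:

```shell
# Try mounting read-only with older tree roots first:
mount -o ro,usebackuproot /dev/sdX1 /mnt/recovery

# If the filesystem won't mount at all, copy files off it instead:
btrfs restore -v /dev/sdX1 /mnt/restore_target

# 'btrfs check --repair' is the last resort the post above warns about.
```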
  12. No, sorry, soon™ is a running joke in the community, it can mean a few days, or a few years.
  13. Sorry, I can't point to a specific post, but Tom mentioned several times that multiple cache pools are on the roadmap, though I can't tell if it's going to happen in the next release or v8, but it should be soon™.
  14. Agree, not something I would likely use but there are potentially enough users, especially those running Unraid mainly as a hypervisor, to make it worthwhile.
  15. Thanks for clarifying, that's how I thought it worked, and I believe that explanation is on the forum somewhere (though likely from before dual parity, so this one is more complete), but it's very difficult to search the forum for it.
  16. I don't remember if Tom ever clarified that, but unless you're using a >10-year-old single-core CPU there won't be any performance impact from the parity calculation; even with larger arrays and a modest dual-core CPU you can enable turbo write and the parity calculations don't cause any performance penalty, i.e., you can write as fast as your disks/controllers/bus can handle, so for write performance it will act as a mirror. On the other hand, reads are still done only from the data disk, unlike other true RAID1 scenarios that read from both mirrors to improve performance.
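To illustrate why parity acts as a mirror with a single data disk: parity is the XOR of all data disks, so with one data disk it reduces to P = D1, a bit-identical copy. A tiny sketch with sample bytes:

```shell
# Parity P = D1 XOR D2 XOR ...; with a single data disk, P = D1:
D1=0xA7                       # a sample data byte
P=$(( D1 ^ 0 ))               # XOR over one data disk
[ "$P" -eq "$D1" ] && echo "parity == data (mirror)"

# With two data disks the parity diverges from either one:
D2=0x3C
P=$(( D1 ^ D2 ))              # 0xA7 ^ 0x3C = 0x9B
```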
  17. When using more than one data disk, try with turbo write enabled; you should get closer to 100MB/s.
  18. Sure: https://forums.unraid.net/topic/51362-pq-or-dual-parity-in-detail/?do=findComment&comment=506258
  19. Yes, when there's only a single data drive it works as raid1, not parity raid.
  20. Yes, with only 2 drives Unraid uses them like a mirror automatically; the same with 2 parity drives and a single data drive, which will work as a three-way mirror.
  21. This is an old issue; I reported it over two years ago during the 6.2 beta cycle, though it doesn't happen with all disk brands/models, e.g., on my main server I currently have one each of the WD 6TB Green, Blue, and Red and they don't have this issue, but most Toshibas do, and looking at your screenshot it looks like some WD models do too. An easy giveaway is that the disk icon stays grey but the temperature starts showing, and it won't spin down again on its own. P.S. Diags from this server are useless for this, but if the OP doesn't post his, I will post later from one of my other servers.
  22. There are reports that the issue remains even with the newer SAS3 models, like the 9300i, 9305, etc.
  23. Yes, at least the checksum part isn't needed, though it can still be used for, e.g., file list export.
  24. Main disadvantages are that it's not as stable as XFS and it lacks a functioning fsck, though I myself use btrfs on all my Unraid servers, as IMO the pros outweigh the cons.