bubbaQ

Moderators
  • Content Count

    3488
  • Joined

  • Last visited

Community Reputation

5 Neutral

About bubbaQ

  • Rank
    Advanced Member


  1. I run 10GbE on two workstations, each connected directly (i.e. no switch) to one of two 10GbE cards in unRAID. Both workstations have large NVMe and RAID0 SSD storage. My cache in unRAID is three 4TB SSDs in a btrfs RAID0 pool, for a total of 12TB, so in theory the hardware will support 10GbE speeds. Spinners in unRAID top out at about 200MB/sec. Transfers between the workstations and cache are very fast (though I need to turn off realtime virus scanning on the workstations to get the absolute fastest performance) and get within 80% of wireline. Workstation backups go to cache, and certain datasets I need fast access to are kept only on cache. For ransomware protection, the entire server is read-only except for "incoming" data on cache. Anything I want to copy to the server goes to cache first; then I manually log into the server and move it from cache to its ultimate destination on the array. Cache is rsynced to a 12TB data drive (spinner) in the array periodically, after confirming the data on cache is valid.
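The verify-then-move step described above can be sketched in Python. This is my own illustration of the idea, not unRAID tooling; the function names and paths are hypothetical (in practice the author uses rsync plus a manual move):

```python
import hashlib
import os
import shutil

def sha256sum(path, chunk_size=1 << 20):
    """Hash a file in 1 MiB chunks so multi-TB files don't exhaust RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verified_move(src, dst):
    """Copy src to dst, confirm the checksum matches, then delete src."""
    want = sha256sum(src)
    shutil.copy2(src, dst)
    if sha256sum(dst) != want:
        os.remove(dst)  # don't leave a corrupt copy behind
        raise IOError(f"checksum mismatch copying {src} -> {dst}")
    os.remove(src)  # only remove the cache copy once the array copy is verified
    return want
```

The point of the design is that the source copy is never deleted until the destination copy has been re-read and re-hashed.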
  2. Added some drives (xfs encrypted) in slots 10, 11, and 12. Disk 7 was the highest slot in use; FWIW, slots 4, 8, and 9 were empty. After a reboot, the user shares tab in the GUI did not list any drives above disk 7 as available for inclusion in a share. So I stopped the array, moved one of the new drives back to an empty lower slot (disk4), and restarted the array; the shares GUI saw it and I created a share on it. I even tried manually editing one of the share .cfg files as a test to include one of the new drives (disk12), and the GUI still would not see it. I added an unencrypted test drive at disk13, formatted it, created some files/directories on it, and restarted the array: same problem. Image and diagnostics attached. Suggestions?
  3. By "haste" I meant abandoning the project permanently. I completed my tests and all the hashes match, so even on files over a TB there was no corruption.
  4. Don't be hasty.... I'm running hashes against them now to compare to backups. You might just need to use long integers and recompile it.
  5. Have you tested it with *large* files? I get overflows with large files. See attached.
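For context on why *large* files trip this: a tool that accumulates byte counts or offsets in a 32-bit integer silently wraps at 4 GiB, which is why "use long integers and recompile" is the suggested fix above. A toy illustration of that failure mode (not the actual tool's code):

```python
import ctypes

# A byte counter kept in a signed 32-bit int wraps modulo 2**32,
# so a 5 GiB file is reported as only 1 GiB...
five_gib = 5 * (1 << 30)
print(ctypes.c_int32(five_gib).value)   # 1073741824 (1 GiB)

# ...and a 1 TiB file (an exact multiple of 2**32) is reported as 0 bytes.
one_tib = 1 << 40
print(ctypes.c_uint32(one_tib).value)   # 0

# The same sizes held in a 64-bit ("long long") integer survive intact.
print(ctypes.c_uint64(one_tib).value)   # 1099511627776
```

This also explains why the corruption only shows up in testing with genuinely large files: anything under 2 GiB fits a signed 32-bit counter and hashes clean.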
  6. The "multi-stream" changes are really killing me. I had to revert to 6.7 because I lost 40% of my throughput when copying disk to disk in the array. Can we get back the ability to set reconstruct write ON and have it *stay* on?
  7. How can this NOT be a bug? Both the setting in the GUI and the command line option are broken if you can't set it to always use turbo write.
  8. What did you do to get it running?
  9. Reverted to Version 6.7.2 and turbo-write seems to be working better. Still some oddness though -- it seems to read for several seconds from the source with no writing to the destination, then stop reading from the source while writing to the destination and reading from the other data disks to reconstruct parity.
  10. Had to downgrade back to Version 6.7.2 due to a bug in turbo-write not working properly in 6.8.x.
  11. Is there a thread somewhere that explains this bug?
  12. I don't think so. I tried setting the write mode to r/m/w and got disk activity very DIFFERENT from turbo-write... in fact, the same pattern I normally see with r/m/w: the same amount of reads and writes on the target and parity disks, reads with no writes from the source disk, and no activity on all other disks.
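As background on the two disk-activity patterns in this thread: r/m/w updates parity from the old data block and the old parity block (reads on target and parity, then writes to both), while reconstruct ("turbo") write instead reads every *other* data disk and writes the new data plus freshly computed parity. A toy XOR model (my own sketch, not unRAID code) shows both modes yield identical parity:

```python
import functools
import os

def xor(a, b):
    """Byte-wise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Toy array: four data "disks" of 16 bytes each plus a single parity disk.
disks = [os.urandom(16) for _ in range(4)]
parity = functools.reduce(xor, disks)

new_block = os.urandom(16)
target = 2  # the disk being written

# Read/modify/write: read old data + old parity, P' = P ^ D_old ^ D_new.
rmw_parity = xor(xor(parity, disks[target]), new_block)

# Reconstruct ("turbo") write: read every *other* data disk instead.
others = [d for i, d in enumerate(disks) if i != target]
turbo_parity = functools.reduce(xor, others + [new_block])

assert rmw_parity == turbo_parity  # both modes compute the same parity
```

The trade-off, which matches the symptoms above: r/m/w touches only two disks but costs an extra read on each, while turbo write keeps every spindle busy reading and is much faster when all disks are already spun up.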
  13. You are correct... I corrected the OP to reflect xfs. My bad... I was working on some btrfs issues on another server the last few days and had googled btrfs issues a million times.
  14. I first stopped and restarted the copy and got the same results. Then I restarted the server, and got the same results. It's been running for several hours and the results have been the same throughout the copy.