
glennv

Members
  • Content Count

    148
  • Joined

  • Last visited

Community Reputation

8 Neutral

About glennv

  • Rank
    Advanced Member

Converted

  • Gender
    Male
  • URL
    https://posttools.tachyon-consulting.com
  • Location
    Amsterdam

  1. I have two of those (the Back-UPS Pro 1500VA, BR1500GI version from the same range) and they work great out of the box with Unraid.
  2. Yeah, I am with you. The only thing you could theoretically do is use some hardware RAID under a normal XFS cache for redundancy, but it is not really the most elegant way and has its own issues. Guess we had better stick with btrfs for now and wait it out until ZFS gets there. Eventually it will have to.
  3. Nope, ZFS is only for unassigned devices. Don't mess with the cache!! The cache is part of the array. Keep that btrfs if you want a fault-tolerant option, either raid1 or, with more drives, raid10 (rough sketch below). It was on my wishlist (1st sentence).
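     A rough sketch of what I mean, assuming the cache pool is mounted at /mnt/cache (the usual Unraid path); the GUI normally handles this, so treat it as illustration only:

        # Show the current data/metadata raid profiles of the cache pool
        btrfs filesystem df /mnt/cache

        # Convert data and metadata to raid1 (use raid10 with 4+ drives)
        btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache

        # Check how far the balance has got
        btrfs balance status /mnt/cache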
  4. With copy-on-write filesystems and read-only snapshots it is pretty hard for ransomware to affect old snapshots, as even deleting all currently accessible data still follows the copy-on-write rules and does not actually delete your old data. But it is possible for sure when your host is fully compromised with root access. To be totally safe you can do what I do and have a second, mostly isolated system that exposes/shares NO data via any sharing protocol (SMB/NFS/etc.) and "pulls" snapshots (deltas via zfs/btrfs send/receive over a secure SSH connection) from the primary every night, for example. Don't mistakenly "push" data to it from the primary, because if your primary is compromised, a hacker or hacker software has access to the stored credentials you use on that primary to access the backup host and will just use those to jump to it. A minimal sketch of the pull setup follows below.
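     A minimal sketch of that pull setup, run from the isolated backup box; the host, dataset and snapshot names are placeholders, and it assumes ZFS on both ends with key-based SSH from backup to primary (never the other way around):

        #!/bin/bash
        # Pull an incremental ZFS snapshot stream from the primary.
        # The backup host initiates the connection, so the primary
        # never stores credentials for the backup host.
        SRC_HOST="primary"          # primary Unraid box (placeholder)
        SRC_DS="tank/appdata"       # source dataset (placeholder)
        DST_DS="backup/appdata"     # local destination (placeholder)
        PREV="auto-2024-05-01"      # last snapshot already received
        NEW="auto-2024-05-02"       # tonight's snapshot to pull

        ssh "$SRC_HOST" "zfs send -i ${SRC_DS}@${PREV} ${SRC_DS}@${NEW}" \
          | zfs receive -u "$DST_DS"

        # Keep the received copy read-only on the backup side
        zfs set readonly=on "$DST_DS"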
  5. As snapshots are already possible with btrfs and ZFS (if using the plugin; I have been doing it for a while with just scripts and the command line, including send/receive to a remote Unraid box, and it works great), I vote for ZFS support on cache and array. It would unlock more supported and rock-solid ZFS raid features on the cache than the current limited (stable) options on btrfs, and would finally allow ZFS features for the array itself. btrfs is nice and all, and I dabbled with it for a while, but after the 3rd unrecoverable corruption issue I moved to unassigned devices and the ZFS plugin for all docker, appdata, VMs etc. and have never looked back, nor ever had an issue since. Lastly, and maybe most important of all, ZFS is extremely user friendly and btrfs is not, especially when stuff goes wrong. That is "the" moment where you need simplicity, not complexity. Anyone who has had to deal with cache corruption issues on btrfs will know this by now; typically it is "sorry, just rebuild the cache." You have to almost nuke ZFS from orbit to get to that point, while a simple cable hiccup or bad shutdown on a raided btrfs easily busts you up. I did dozens of destructive tests and would not trust my critical data to btrfs anymore; unfortunately, for a multi-SSD cache it is my only option currently. Multiple cache pools is a close second. The kind of snapshot script I mean is sketched below.
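     The scripts are nothing fancy; something along these lines, with the dataset name and retention as placeholders:

        #!/bin/bash
        # Nightly recursive snapshot plus a simple retention cleanup.
        DATASET="tank/appdata"     # placeholder dataset
        KEEP_DAYS=30
        STAMP="$(date +%Y-%m-%d)"

        # ZFS snapshots are read-only by nature
        zfs snapshot -r "${DATASET}@auto-${STAMP}"

        # Destroy auto snapshots older than the retention window
        CUTOFF="$(date -d "-${KEEP_DAYS} days" +%Y-%m-%d)"
        zfs list -H -t snapshot -o name -r "$DATASET" | grep '@auto-' |
        while read -r SNAP; do
          [[ "${SNAP##*@auto-}" < "$CUTOFF" ]] && zfs destroy -r "$SNAP"
        done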
  6. Nah, moving back again. Still a 200 MB/s drop in read speed for my shares when accessing cached data (raid10 SSDs), from 800+ MB/s to about 600 MB/s. So far 6.7.2 is the end of the line for me.
  7. Good to hear. I had been putting off 6.8 because of the massive drop in 10G-based share performance and stability. Maybe I will try again.
  8. Sorry, but I could not resist laughing a bit at the "NFS is outdated" comment. I beg to differ. Millions of servers in datacenters around the world rely totally on it, including the multi-billion-dollar companies I work with, which run insanely fast NFS-attached storage clusters for their entire enterprise systems. Old, yes; ideal, surely not; outdated, absolutely not, and it is there to stay for a while. It does require skill to tune for specific workloads and get it all right, for sure (the kind of client-side knobs sketched below).
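     For what it is worth, the tuning I mean is mostly client-side mount options like these; the server, export and values are placeholders that depend entirely on the workload and network:

        # Larger I/O sizes and NFSv4.2; nconnect needs kernel 5.3+
        mount -t nfs -o vers=4.2,rsize=1048576,wsize=1048576,nconnect=8,noatime,hard \
          nas01:/export/data /mnt/data

        # Verify what was actually negotiated with the server
        nfsstat -m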
  9. What I love most? Wow, that's a hard one. I would say STABILITY. A server that you can rely on to be there every second of every day for years to come, hosting your valuable data, serving you with dockers and VMs galore, and doing it without being noticed. Love it.... Wishes: ZFS support for the cache pool. I love the ZFS plugin to death, but it would be great if I could use ZFS instead of btrfs for the cache.
  10. P.S. Thanks for your little testing spreadsheet, which at least confirmed I am not the only one seeing huge performance drops when accessing the cache over the network (10G in my case) between 6.7.2 and 6.8. Did a gazillion tests (roughly like the sketch below) and they all came back the same. There is something fishy in 6.8; still no clue what it is, but it definitely involves SMB and does not hit everyone. So in your case also stick with 6.7.2. There are other examples of people reporting SMB performance degradation as well.
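     Roughly, each test looked like this from a Linux client with the share mounted at /mnt/remote (placeholder path), dropping the client page cache first so the read really goes over the wire:

        sync && echo 3 > /proc/sys/vm/drop_caches
        dd if=/mnt/remote/testfile-8g of=/dev/null bs=1M status=progress
        # Run the identical read against 6.7.2 and 6.8 and compare MB/s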
  11. I put a small Noctua blowing on the dual 10G in my Supermicro so it stays between 45-50°C during load. They can get crazy hot fast and even shut down when too hot. Upwards of 70°C is pretty toasty. Not sure at what temp I saw it shut down.
  12. Not a direct yes or no answer, but check this forum, as it is an often-asked question. Some time ago I looked into it myself for my X9 board and could not find anything other than some mention of a custom BIOS etc., but I have not looked for a while. You likely have a better chance of finding someone with experience on that subject there, as it is not Unraid specific: https://forums.servethehome.com/index.php?search/43257472/&q=supermicro+x9+bifurcation&o=relevance
  13. @Tucubanito07 Nope, you can, like I have, use a large SSD-based cache and set your share(s) to cache=yes/only. Then, after the initial memory cache, the writes go to the fast SSDs (or NVMes like Johnnie has). You can then, at a convenient time, offload to spindles (mover) or keep it in the cache, whatever suits your workflow.
  14. I am not sure why we are even discussing your code when you clearly state that the same directory-list calls are much slower on the new release than on the old. If it is a slow call due to a large directory, fine, so be it, but then it should be just as slow on the old release. I see a similar issue on my OS X clients, and on top of that read performance has dropped about 200+ MB/s over my 10G, with the only difference being an upgrade or downgrade, so I also moved back. There is definitely something there that affects some but clearly not all people. When you do the exact same thing and the only difference is the release, we should focus on what has changed that can cause the changed behavior, and not question the code, other than for its great capability to identify a weak spot that was not there before. Even a timed listing as simple as the sketch below shows it.
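     Timing the same listing on both releases, from the same client (the share path is a placeholder), already makes the difference visible:

        sync && echo 3 > /proc/sys/vm/drop_caches
        time ls -la /mnt/remote/large-dir > /dev/null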