mytime34

Members · 10 posts

  1. Hello, is there any update on whether Unraid will support InfiniBand through Mellanox ConnectX-3 cards?
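While waiting on an official answer, a minimal sketch for checking what the kernel already sees, assuming a stock Unraid shell. The mlx4 module names are the standard Linux drivers for ConnectX-3; whether Unraid's kernel build includes the InfiniBand side is exactly the open question.

    # Confirm the card shows up on the PCIe bus
    lspci -nn | grep -i mellanox

    # ConnectX-3 is handled by the mlx4 driver family on Linux
    lsmod | grep mlx4

    # Try loading each personality; a failure suggests the kernel was built without it
    modprobe mlx4_core
    modprobe mlx4_en    # Ethernet mode
    modprobe mlx4_ib    # InfiniBand mode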
  2. I have tried running the tests individually and simultaneously, with the same results: bandwidth is always capped on these two drives. My two other SAS SSD drives do not cap out; they run without issue.
  3. To all: I am running 4x 7.68 TB enterprise SAS 12 Gb/s SSD drives, and when I run DiskSpeed I get the following message: "Bandwidth was capped on the following drives: Cache (sdm), Cache 2 (sdn), test (sdo), test2 (sdp)". Two of these drives hit 700 MB/s, but the other two only hit 350 MB/s. Is there a way to uncap the bandwidth? These drives are capable of 900-1100 MB/s each and have been tested on a Windows machine. Any thoughts or fixes would be great. Thanks
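A minimal sketch for ruling the DiskSpeed plugin in or out: read each drive raw with the page cache bypassed and compare against the plugin's numbers. /dev/sdm is taken from the post above; substitute each of the four devices in turn. If the raw reads also stall around 350 MB/s, the cap most likely sits in the SAS topology (negotiated link rate, expander oversubscription, or cabling) rather than in Unraid.

    # Sequential read of 4 GiB straight from the device, bypassing the cache
    dd if=/dev/sdm of=/dev/null bs=1M count=4096 iflag=direct

    # Kernel-level read timing as a cross-check
    hdparm -t /dev/sdm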
  4. Hello, I am currently running the following setup and plan to upgrade:

     • Threadripper 2920X (12 cores / 24 threads, watercooled)
     • 128 GB RAM
     • Dual-port 10 Gb fiber NIC
     • 1600 W PSU
     • Avago 6 Gb/s external SAS controller
     • LSI 12 Gb/s internal SAS controller
     • Cache: 3x 2 TB Intel 660p NVMe drives
     • Array (NetApp DS4246 enclosure): 6x 6 TB HGST 12 Gb/s SAS drives (2 parity, 4 array)
     • ZFS array: 9x 3 TB HGST SATA drives (3x3x3 RAIDZ1)

     I do IT work from home, along with streaming, gaming, and messing around. The current setup works great and I get really good transfer speeds: 1200 MB/s read/write to the cache, and 90-130 MB/s read/write to the regular array. The ZFS array is mainly for testing, and I get 1100 MB/s read and 900 MB/s write.

     Now onto the upgrade. I recently came across some deals I could not pass up, so I bit the bullet:

     • 1x Samsung 1635a SAS 12 Gb/s 6.4 TB SSD drive (very limited power-on hours and almost no data transferred to it)
     • 2x HPE (Toshiba) RM5 7.68 TB SAS 12 Gb/s SSD drives (some data usage and almost a year powered on)

     I want to use these new drives as my array with the following config:

     • 7.68 TB parity drive
     • 7.68 TB array drive
     • 6.4 TB array drive (or cache drive)
     • 6x 6 TB drives in a ZFS array that is only powered on once a month for a full backup to spinners (enclosure powered off otherwise)

     Or another config:

     • 7.68 TB, 7.68 TB, and 6.4 TB drives in a cache setup (all apps, VMs, and data stored here)
     • 6x 6 TB drives in the array (2 parity, 4 array) for long-term storage that can be turned off

     I know a lot of people will be talking about cost, but cost is not the problem here; I want speed, Unraid features, low power usage, and long-term backup. Since these are enterprise drives from 2018, they should have built-in TRIM/garbage collection, so that should not be an issue (a quick local check is sketched below). I am getting more details from Samsung and Toshiba on this right now. Thank you
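On the TRIM/garbage-collection question above, a quicker check than waiting on Samsung and Toshiba is to ask the kernel directly. A minimal sketch; /dev/sdX is a placeholder for each SSD, and sg_vpd comes from sg3_utils, which may need to be installed separately:

    # Non-zero DISC-GRAN / DISC-MAX means the kernel will issue discard (TRIM/UNMAP)
    lsblk --discard /dev/sdX

    # On SAS SSDs, UNMAP support is advertised in the Logical Block Provisioning VPD page
    sg_vpd --page=lbpv /dev/sdX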
  5. How is the server doing? Any failed SSDs yet?
  6. I am running into an issue with accessing the ZFS share from Windows. I am able to see the path to the ZFS share, but it says I do not have permission to create/delete, etc. I get the following error when I try to enable the SMB share:

     root@Pughhome:~# zfs set sharesmb=on dumpster
     cannot share 'dumpster': smb add share failed
     cannot share 'dumpster/test': smb add share failed

     Here is my SMB config:

     [global]
     ...
     usershare path = /dumpster
     usershare max shares = 100
     usershare allow guests = yes
     usershare owner only = no

     [data]
     path = /dumpster
     browseable = yes
     guest ok = yes
     writeable = yes
     write list =
     read only = no
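zfs set sharesmb=on depends on Samba usershares being usable by the ZFS tooling, which frequently fails on Unraid builds. A common workaround is to skip sharesmb and define a static share over the mountpoint instead. A minimal sketch, assuming the pool from the post above; on Unraid, custom share stanzas are normally placed in /boot/config/smb-extra.conf so they survive a reboot:

    # Let Samba own the share definition instead of ZFS
    zfs set sharesmb=off dumpster

    # Make the mountpoint writable by Samba's guest account
    chown -R nobody:users /dumpster
    chmod -R 775 /dumpster

    # Contents of /boot/config/smb-extra.conf -- a static share over the mountpoint:
    #   [data]
    #       path = /dumpster
    #       browseable = yes
    #       guest ok = yes
    #       read only = no
    #       force user = nobody

    # Reload Samba to pick up the change
    /etc/rc.d/rc.samba restart

The force user line maps every connecting Windows user onto the account that owns the dataset, which sidesteps the create/delete permission errors at the cost of per-user ownership.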
  7. Process to use ZFS in Unraid (thanks to Wendell from Level1Techs for the baseline).

     Unraid needs to be running with at least 1 data drive, preferably 1 parity and 1 data drive. Install Unraid, set it up how you want, and back up the data from Unraid. Everything below has to be done from the GUI with a monitor connected to the Unraid server; do not try to do it over a remote session.

     Install the ZFS plugin via Unraid's Install Plugin option:

     https://raw.githubusercontent.com/Steini1984/unRAID6-ZFS/master/unRAID6-ZFS.plg

     Check which drives are available under "Unassigned Devices" that are not parity, data, or cache drives. In my case I am using 9 drives, /dev/sdb through /dev/sdj.

     In the Unraid terminal, list all the hard drives in your system:

     lsblk

     Create the zpool (use raidz1, raidz2, or raidz3 depending on how many drives you have; this may take a few seconds):

     zpool create dumpster raidz1 /dev/sdb /dev/sdc /dev/sdd /dev/sde

     View the zpool status:

     zpool status
       pool: dumpster
      state: ONLINE
       scan: none requested
     config:
             NAME        STATE     READ WRITE CKSUM
             dumpster    ONLINE       0     0     0
               raidz1-0  ONLINE       0     0     0
                 sdb     ONLINE       0     0     0
                 sdc     ONLINE       0     0     0
                 sdd     ONLINE       0     0     0
                 sde     ONLINE       0     0     0
     errors: No known data errors

     Your zpool is set up.

     Using the GUI, go to https://slackware.pkgs.org/14.2/slackonly-x86_64/fio-3.7-x86_64-1_slonly.txz.html, scroll down, and click the binary package fio-3.7-x86_64-1_slonly.txz. Make sure you save this file to the ROOT folder (no other folder will work). Then install it from the terminal:

     upgradepkg --install-new ./fio-3.7-x86_64-1_slonly.txz

     Now onto setting up the ZFS dataset (file system and folder creation):

     zfs create dumpster/test -o casesensitivity=insensitive -o compression=off -o atime=off -o sync=standard

     Verify the dataset:

     zfs list
     NAME            USED  AVAIL  REFER  MOUNTPOINT
     dumpster       32.0G  7.65T   140K  /dumpster
     dumpster/test  32.0G  7.65T  32.0G  /dumpster/test

     Now onto testing the zpool:

     fio --direct=1 --name=test --bs=256k --filename=/dumpster/test/whatever.tmp --thread --size=32G --iodepth=64 --readwrite=randrw --sync=1
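The walkthrough stops at benchmarking; one step worth bolting on is a periodic scrub, so silent corruption is found and repaired from redundancy rather than discovered on read. A minimal sketch using the pool name from the guide:

    # Read and verify every block in the pool, repairing from parity where possible
    zpool scrub dumpster

    # Watch progress and any accumulated read/write/checksum errors
    zpool status -v dumpster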
  8. Threadripper 2920X (12 cores / 24 threads), 32 GB RAM, 10 TB NVMe cache, 33 TB of spinners (24-drive array on a SAS HBA: 8x 3 TB and 6x 2 TB), 10 Gb Ethernet, watercooled CPU (and soon the video cards). Soon to be added: 2x GTX 1070 FTW cards for VMs and rendering.