j44rs4

Members · 25 posts

  1. Not sure if I understand your question. Do you want to mount your existing QNAP RAID because you have files on it, and run UR as the OS? In that case, the answer is NO. I just set up my TVS-871U-RP with UR, and it runs flawlessly. I added an extra PCIe card with an M.2 drive as cache, and also upgraded the RAM and CPU. I will try to run UR from the DOM only, since the DOM is a USB drive. Does anyone know if these DOMs have unique UIDs?
  2. UPDATE: After many rounds of testing, I figured out that none of the suggestions mentioned in this thread had any impact on the system. I also tried a PERC H730 to see if that could make any difference, since it has a PCIe 3.0 bus, confirmed by Dell, and supports 12Gb/s bandwidth. That test turned out to have no impact either. I did some more testing, reset my BIOS to factory settings, and still nothing. Solution: I made some minor changes in the BIOS, as you can see from the posted pics:
     1. System BIOS Settings -> System Profile Settings -> Performance
     2. System BIOS Settings -> Processor Settings -> Dell Controlled Turbo - ENABLED
     3. System BIOS Settings -> Integrated Devices -> Memory Mapped I/O above 4GB - ENABLED
     Setting number 3 made the biggest impact.
  3. If you already get 112MB/s read/write, you will gain nothing from a cache disk, as your network is the bottleneck here: 112MB/s is roughly the practical limit of gigabit Ethernet (see the throughput sketch after this list).
  4. You could flash your RAID controller to IT mode, but make sure you have a backup of your files before you do that. https://fohdeesha.com/docs/perc/
  5. Have you tried connecting directly to your UR, without going through the switch? Normally you have to configure your switch to handle jumbo frames, or the transfer will not work optimally (a quick end-to-end check is sketched after this list). If your switch is layer 3, I'd put the jumbo-frame devices on another VLAN than the default VLAN 1. Because if you have even one device on your network that does not support jumbo frames (network printer, SIP phone, etc.), then you cannot use jumbo frames at all; otherwise you won't be able to talk to that device.
  6. Sorry, no. I'm working out a new way to test the speed. I've figured out that the PERC H710p controller only supports PCIe 2.0, even though they say, "When you flash your H710p to IT mode, it will start to work as PCIe 3.0." Maybe that's for the PCIe adapter card and not the internal module. https://www.dell.com/support/article/no-no/sln292279/list-of-poweredge-raid-controller-perc-types-for-dell-emc-systems?lang=en Also important to note: this controller only supports 6Gb/s, so there is no way this controller can do 10Gb/s alone (rough link-ceiling numbers are sketched after this list). As for the M.2 disk I inserted, I used a PCIe adapter, and still no performance gain. I will try to add an H730P PCIe card to my server, which is indeed PCIe 3.0 and handles 12Gb/s. I have 4x 2.5" SAS SSDs that also support 12Gb/s, and will try and see if that makes any impact on my performance. I will report back when I'm done testing, so it might take a while. Hopefully before X-mas.
  7. You are describing exactly the same problem I'm facing. Please tell me more about your server/system: what kind of RAID controller are you using, and how much RAM, which CPU, which motherboard, etc.?
  8. Why would you need that, when all you have to do is get an M.2 drive as cache? I already have a 10GbE card installed in an ASUS PRIME Z390M-PLUS with 64GB of RAM and a 512GB M.2 cache drive. I can transfer a 40GB file to my UR at >1GB/s without the speed dropping. I would also like to have TB3 support, since I already have a QNAP QNA-T310G1S Thunderbolt 3 to 10GbE adaptor and would like to set up a new NAS built on an ASRock Fatal1ty Z370 Gaming-ITX/ac.
  9. Replaced my Intel Xeon E5-2670v2 2.5GHz with an Intel Xeon E5-2637v2 3.5GHz, and the write speed is still the same. Any other suggestions? Anyone?
  10. 64 gigs of RAM. And that was a 20GB file. But it's still not the same as I get on another system with cache and 10GbE.
  11. I'm talking megabytes (MB). Hmmm, let me post you some pics then. Pictures 1-3 show a copy to the share "Test" without a cache drive, on 10Gb/s Ethernet. As you can see in pic 2, I've disabled writing to cache. If I enable cache on the share "Test", I get lower speed.
  12. Sorry, I forgot to mention that I did the test without parity disks. You are totally correct. With parity I'm down to 276MB/s.
  13. Another test done this moment: writing to the share without a cache drive gives me a speed of 450-500MB/s, while activating the cache gives me only 260MB/s. Can someone explain to me why writing to the share with cache enabled is slower than writing directly to the share? (A simple local write benchmark is sketched after this list.) I have 3 different cache disks I've been playing with: the first is an NVMe WDS250G3X0C, the second is a SATA SSD from Dell, MZ7LH960HBJR0D3, and the third is a 300GB 15,000RPM SAS disk. All these disks top out at 260MB/s as cache. For now, my system is slower with a cache disk than without.
  14. @Tekminute Sorry, no fix so far. The HW seems to handle the speed, as you can see, but not when writing to cache. The only bottleneck I can think of must be the CPU (Intel Xeon CPU E5-2670 v2 @ 2.50GHz). I have 64 gigs of RAM, so that should be plenty. But I still don't understand why writing to the share through the cache causes such a problem, while writing directly to all the other disks gives me full speed.
  15. Is there no way I can tune the config/hardware?
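
A back-of-the-envelope for the gigabit ceiling mentioned in post 3 (my own sketch, not from the thread; the overhead figures are the standard Ethernet/IP/TCP header sizes):

    # TCP goodput over gigabit Ethernet with a standard 1500-byte MTU.
    line_rate = 1_000_000_000 / 8   # 125 MB/s raw line rate
    payload   = 1500 - 40           # MTU minus IP (20) + TCP (20) headers
    on_wire   = 1500 + 38           # + Ethernet framing, preamble, inter-frame gap
    goodput   = line_rate * payload / on_wire
    print(f"{goodput / 1e6:.0f} MB/s")  # ~119 MB/s; ~112 MB/s is typical in practice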
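
A quick way to confirm that jumbo frames survive the whole path, as suggested in post 5 (a sketch assuming Linux iputils ping; "nas.local" is a placeholder host, not from the thread):

    # Ping with the don't-fragment flag and a jumbo-sized payload:
    # 8972 = 9000-byte MTU minus 20 (IP header) and 8 (ICMP header) bytes.
    import subprocess

    result = subprocess.run(
        ["ping", "-M", "do", "-s", "8972", "-c", "3", "nas.local"],
        capture_output=True, text=True,
    )
    if result.returncode == 0:
        print("jumbo frames pass end to end")
    else:
        print("a hop is fragmenting or dropping 9000-byte frames")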
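
Some rough per-link ceilings behind the controller discussion in post 6 (approximate figures after encoding overhead; my own illustration, not Dell's numbers):

    # Illustrative bandwidth ceilings, in MB/s, after encoding overhead.
    pcie2_x8 = 8 * 500    # PCIe 2.0: ~500 MB/s per lane (8b/10b encoding)
    pcie3_x8 = 8 * 985    # PCIe 3.0: ~985 MB/s per lane (128b/130b encoding)
    sas_6g   = 600        # one 6 Gb/s SAS lane (8b/10b encoding)
    ten_gbe  = 1250       # roughly what it takes to saturate a 10GbE link
    print(sas_6g < ten_gbe)  # True: a single 6 Gb/s SAS lane can't feed 10GbE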
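
Finally, a simple local write benchmark for the cache-vs-direct question in post 13: it takes the network out of the equation by timing a sequential write on the server itself (a minimal sketch; "/mnt/cache/testfile" is a placeholder path):

    # Time a 4 GiB sequential write and report MB/s, including an fsync
    # so the flush-to-disk time is counted.
    import os, time

    path  = "/mnt/cache/testfile"       # placeholder: point at the disk under test
    chunk = b"\0" * (4 * 1024 * 1024)   # 4 MiB per write call
    total = 4 * 1024**3                 # 4 GiB test file

    start = time.monotonic()
    with open(path, "wb") as f:
        for _ in range(total // len(chunk)):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())
    elapsed = time.monotonic() - start
    print(f"{total / elapsed / 1e6:.0f} MB/s")
    os.remove(path)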