bmac6996

Members
  • Content Count: 5
  • Joined
  • Last visited

Community Reputation: 6 Neutral

About bmac6996
  • Rank: Newbie
  1. Update: I was able to install DSM 6.2.2-24922 as a VM. This is what I did:
     - Used bootloader 1.03b.
     - Used DS3615.
     - Installed DSM version 6.2.2-24922 (I also tried DSM 6.2.1-23824 Update 6 and that worked too!).
     - Created the VM as "CentOS". Used SeaBIOS and the highest machine version (Q35-3.0).
     - After I configured the bootloader, I loaded the img as the first bootable primary vdisk, but set its type to "USB". (Doing it this way, as "USB", DSM won't see the bootloader image file as a hard drive and will show only your other attached vdisks.)
     - I created a vdisk and used qcow (other types could probably be used, but I didn't try). I found it must be at least 5 GB; anything less won't be seen and/or will fail during setup.
     - The main part!! Manually go into the XML and change the NIC to "e1000e". This worked for me on my Supermicro X9SRH-7F with an E5-2690 v2. YMMV. Good luck.
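For the NIC step above, the edit lands in the VM's libvirt domain XML (on Unraid: VM page, Edit, then toggle to XML view). A minimal sketch of the relevant `<interface>` element; the MAC address and bridge name here are placeholders, so keep whatever values your VM already has and change only the `<model>` line:

```xml
<!-- Placeholder MAC/bridge values; only the model type needs to change. -->
<interface type='bridge'>
  <mac address='52:54:00:aa:bb:cc'/>
  <source bridge='br0'/>
  <model type='e1000e'/>
</interface>
```

Note that re-saving the VM from the form view can overwrite manual XML edits, so re-check the model type after any later changes made through the GUI.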
  2. What bootloader version did you use, and for which Synology hardware device? What DSM version did you originally install? Did you update to 6.2.1 Update 4 from a lower version, or were you already there when you originally installed it? Having the original MAC and serial combo could be one of those X factors, though; can't rule it out. I know people on 6.2.2 are having issues, so I definitely wouldn't jump to that. It could also be the hardware we're using that explains why some can get it working and some cannot. This is what I'm running:
     M/B: Supermicro X9SRH-7F/7TF
     CPU: Intel® Xeon® CPU E5-2690 v2 @ 3.00GHz
     Memory: 32 GB Multi-bit ECC
  3. Anyone got Jun's Loader v1.04b DS918+ working? I can get 1.02b with DS3615xs working with the settings stated in the earlier threads, but if I use the same settings for v1.04b, find.synology.com won't pick anything up and the VM isn't picking up an IP from DHCP. Thoughts? Edit: This information is really good to know for figuring out whether something may or may not work on your Unraid server: https://xpenology.com/forum/topic/13333-tutorialreference-6x-loaders-and-platforms/ Another edit: I've been trying to install for the past few hours, and this is what I found:
     - On my hardware, only loader DS3617 will actually boot and install (e1000/SeaBIOS), and only with DSM at version 6.2 exactly. On 6.2.1 it won't finish the install after the first reboot.
     - From what I've read, DS3617 seems to be more compatible. YMMV.
     - 6.2.1 and higher just won't work under Unraid as far as I can tell (they're now at 6.2.2, and I tried that with DS918 with no luck). Even if I installed 6.2 and upgraded from within, it fails after the first reboot.
  4. Hi, I've been reading through this thread and I can't find an exact answer to my situation: I have a legacy PCI USB 2.0 card I want to pass through to a VM. I've tried different methods to separate the PCI card into its own IOMMU group, with no luck: ACS override, the downstream option, vfio-pci, and combinations of all of them, rebooting between all those changes, and the VIA USB controller in group 11 will not move. The CPU/mobo is compatible. Version 6.5.3. Will this ever work, or is this a limitation because it's legacy PCI? Does the PCI card need to be separated from the group in order to attempt the OP's workaround? Thoughts/ideas?
     IOMMU group 11:
     [8086:244e] 00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev a6)
     [1106:3038] 07:00.0 USB controller: VIA Technologies, Inc. VT82xx/62xx UHCI USB 1.1 Controller (rev 61)
     [1106:3038] 07:00.1 USB controller: VIA Technologies, Inc. VT82xx/62xx UHCI USB 1.1 Controller (rev 61)
     [1106:3104] 07:00.2 USB controller: VIA Technologies, Inc. USB 2.0 (rev 63)
     [8086:1010] 07:02.0 Ethernet controller: Intel Corporation 82546EB Gigabit Ethernet Controller (Copper) (rev 03)
     [8086:1010] 07:02.1 Ethernet controller: Intel Corporation 82546EB Gigabit Ethernet Controller (Copper) (rev 03)
     [102b:0532] 07:04.0 VGA compatible controller: Matrox Electronics Systems Ltd. MGA G200eW WPCM450 (rev 0a)
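To double-check grouping after each ACS/vfio change, you can enumerate IOMMU groups straight from sysfs rather than relying on the GUI. A minimal sketch, assuming a Linux host; it prints a fallback message when the IOMMU is disabled or unsupported:

```shell
#!/bin/sh
# List every IOMMU group and the PCI addresses inside it.
list_iommu_groups() {
  base=/sys/kernel/iommu_groups
  if [ -d "$base" ] && [ -n "$(ls -A "$base" 2>/dev/null)" ]; then
    for dev in "$base"/*/devices/*; do
      group=${dev#"$base"/}      # strip the leading path...
      group=${group%%/*}         # ...leaving just the group number
      printf 'IOMMU group %s: device %s\n' "$group" "${dev##*/}"
    done
  else
    echo "No IOMMU groups found (IOMMU disabled or not supported)"
  fi
}
list_iommu_groups
```

If the three VIA functions (07:00.0/.1/.2) still print under the same group as the 00:1e.0 bridge, the grouping hasn't changed. A conventional PCI bus behind an 82801 bridge cannot be split further, since the bridge can't distinguish which device behind it originated a transaction, so all devices behind it land in one group regardless of ACS override.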
  5. Hi, I recently added a single 10Gb Mellanox NIC to both my Unraid server and my FreeNAS server. As a test, I'm trying to see if I can hit that mark, and I'm not getting anywhere close to it. This is how I'm testing:
     - On the Unraid server, I have a single SSD cache drive (rated 500 MB/s) connected to a 2308 LSI controller (IT mode), with a Windows 10 VM on it. Network settings in the VM and in Unraid have MTU set to 9000.
     - The FreeNAS server has two Intel SSDs (also rated 500 MB/s) set up striped, connected to a 2308 LSI controller (IT mode), with a share created on it. MTU is set to 9000.
     - I set these tunables per the internet to optimize settings (entered via CLI, and I even rebooted):
       sysctl kern.ipc.somaxconn=2048
       sysctl kern.ipc.maxsockbuf=16777216
       sysctl net.inet.tcp.recvspace=4194304
       sysctl net.inet.tcp.sendspace=2097152
       sysctl net.inet.tcp.sendbuf_max=16777216
       sysctl net.inet.tcp.recvbuf_max=16777216
       sysctl net.inet.tcp.sendbuf_inc=32768
       sysctl net.inet.tcp.recvbuf_inc=524288
       sysctl net.route.netisr_maxqlen=2048
     I tested by copying a big file between the VM on the cache drive and the share on FreeNAS. I maybe hit 500 MB/s according to the Windows copy dialog, but it didn't last very long; on average I'm getting anywhere between 150 and 200 MB/s. Then I tried iperf, with FreeNAS set up as the server and the VM as the client. I ran the tests a bunch of times and got basically the same result. Based on my hardware and setup, is this the max I can get? To reach 10Gb transfer speeds, do I need an NVMe drive in an M.2 slot? Are the SATA SSDs the bottleneck? Thoughts? Thanks in advance!!
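Before tuning further, it's worth sanity-checking the ceilings involved. A back-of-envelope sketch in shell (the 500 MB/s figure is the drives' rated sequential speed from the post; real-world SMB throughput will sit below both numbers due to protocol overhead):

```shell
#!/bin/sh
# Rough ceiling check: 10GbE line rate vs. a single SATA SSD.
# 10 Gbit/s divided by 8 bits-per-byte = 1250 MB/s of raw link bandwidth.
link_mbs=$((10000 / 8))   # 10GbE in MB/s, ignoring protocol overhead
ssd_mbs=500               # rated sequential speed of one SATA SSD (from the post)
echo "10GbE line rate:      ${link_mbs} MB/s"
echo "Single SATA SSD cap:  ${ssd_mbs} MB/s"
```

So a single SATA SSD on either end tops out well below the link; saturating 10GbE over disk copies generally does take NVMe (or a striped pool fast enough to sum past ~1250 MB/s). An iperf run with several parallel streams (e.g. iperf's -P option) takes the disks out of the picture entirely and tells you whether the network path itself can reach line rate.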