bmac6996

Everything posted by bmac6996

  1. Just wanted to add, also having the same issue. I usually wait a while to do the next update, and jumped straight from 6.11.5 to 6.12.4. Within 24 hours, I've had 2 crashes: IP unpingable and can't wake the server up. Have to reboot. I'll try to figure out turning on syslog to see if I can catch anything useful (a quick check is sketched below). If this is the macvlan issue, I'll just have to roll back to 6.11.5. Most of my dockers are running on the host network.
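     Once syslog is up, something like this should show whether macvlan call traces are landing in the log. This assumes the "Mirror syslog to flash" option under Settings > Syslog Server; the exact log path may differ on your setup:

     ```
     # Look for macvlan call traces in the mirrored syslog
     grep -i "macvlan" /boot/logs/syslog | tail -n 20
     ```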
  2. Figured it out. If anyone needs to know, it's a matter of appending the version tag to the end of the docker repository name, like so: "binhex/arch-minecraftbedrockserver:1.18.33.02-01"
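     On the command line the same pin looks like this (the tag here is just an example; use whatever version you need):

     ```
     # Pull a specific tagged version instead of :latest
     docker pull binhex/arch-minecraftbedrockserver:1.18.33.02-01
     ```

     In the Unraid docker template it's the same string, entered in the Repository field.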
  3. Is there a way to set a specific version? I upgraded too quickly and the client hasn't caught up. Need to roll back one version. Thanks
  4. Hello. Just wanted to update this since I had to wipe and redo my install of xpenology, and share what I learned. Most is the same as my original post at: except for some updates and things I tried:
     - 3615xs. My hardware is the same as before, but on version 6.9.2 of Unraid.
     - Changed machine to Q35-5.1. Kept it at SeaBIOS.
     - Because this is a new install, I rebuilt the synoboot from scratch, except I kept the same s/n and MAC in the grub. Version 1.03b of the bootloader.
     - Also created a minimum 5GB vdisk using SATA and qcow (didn't test other types).
     - STILL had to manually change the network model/card to "e1000e" (still the big one; a sketch of that edit is below).
     - Installed version 6.2.3-25426. Tried 6.2.4 and that just didn't work at all; it wouldn't boot. Once on 6.2.3 I couldn't manually update to the minor updates -2 or -3. Every time I uploaded the files, it would say they were "corrupt". Tried to manually update to 6.2.4 and same thing.
     So far everything is working. I'll update if I find something that doesn't. Hope this helps anyone that's still trying to do this in Unraid. Good luck.
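     For anyone wondering what that e1000e change looks like, roughly this. The VM name "xpenology" is just an example, and the existing model line may say "virtio" or "virtio-net" on yours:

     ```
     # Open the VM's libvirt XML (the Unraid GUI's XML view works too)
     virsh edit xpenology

     # Inside the <interface> block, change the model line from e.g.:
     #   <model type='virtio-net'/>
     # to:
     #   <model type='e1000e'/>
     ```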
  5. Update: Was able to install 6.2.2-24922 DSM as a VM. This is what I did:
     - Used bootloader 1.03b.
     - Used DS3615.
     - Installed DSM version 6.2.2-24922 (also tried DSM 6.2.1-23824 Update 6 and that worked too!).
     - Created the VM as "CentOS". Used SeaBIOS and the highest machine version (Q35-3.0).
     - After I configured the bootloader, I loaded the img as the first bootable primary vdisk, but set the type to "USB". (Doing it this way, as "USB", DSM won't see the bootloader image file as a hard drive and will show only your other attached vdisks.)
     - I created a vdisk and used qcow (other types could work, but I didn't try). I found it must be at least 5GB; anything less won't be seen and/or will fail during setup. (A sketch of creating one by hand is below.)
     - The main part!! Manually go into the XML and change the NIC to "e1000e".
     This worked for me on my Supermicro X9SRH-7F with an E5-2690 v2. YMMV. Good luck.
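     If you'd rather make the vdisk by hand instead of through the GUI, something like this works; the path is an example (Unraid normally keeps vdisks under /mnt/user/domains/):

     ```
     # Create a qcow2 vdisk; anything under ~5GB failed during DSM setup for me
     qemu-img create -f qcow2 /mnt/user/domains/xpenology/vdisk1.qcow2 16G
     ```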
  6. What bootloader version did you use? For which Synology hardware device? What DSM version did you originally install? Did you update to 6.2.1 Update 4 from a lower version, or were you already there when you originally installed it? Having the original MAC and serial combo could be one of those X factors though... can't rule it out. I know 6.2.2 people are having issues, so I definitely wouldn't jump to there. And it could all come down to the hardware we're using; that could be why some can get it working and some cannot. This is what I'm running:
     M/B: Supermicro - X9SRH-7F/7TF
     CPU: Intel® Xeon® CPU E5-2690 v2 @ 3.00GHz
     Memory: 32 GB Multi-bit ECC
  7. Anyone got Jun's Loader v1.04b DS918+ working? I can get 1.02b with ds3615xs working with the settings stated in the earlier threads, but if I use the same settings for v1.04b, Find Synology won't pick anything up and the VM isn't picking up an IP from DHCP. Thoughts? Edit: This information is really good to know... to figure out whether something may or may not work on your Unraid server: https://xpenology.com/forum/topic/13333-tutorialreference-6x-loaders-and-platforms/ Another edit: Been trying to install for the past few hours, and this is what I found:
     - On my hardware, only loader ds3617 will actually boot and install (e1000/SeaBIOS), and only using DSM at version 6.2 exactly. On 6.2.1 it won't finish the install after the first reboot.
     - I read that ds3617 seems to be more compatible. YMMV.
     - 6.2.1 and higher just won't work under Unraid as far as I can tell. (They are now at 6.2.2 and I tried that with ds918 with no luck.) Even if I installed 6.2 and upgraded from within, it fails after the first reboot.
  8. Hi, been reading through this thread and I can't find an exact answer to my situation: I have a PCI (legacy) USB 2.0 card I want to pass through to a VM. I've tried different methods to separate the PCI card into its own IOMMU group. No luck. ACS Override, the downstream option, vfio-pci, and combinations of all of them. Rebooted between all those changes, and the VIA USB controller in group 11 will not move. CPU/Mobo is compatible. Version 6.5.3. Will this ever work, or is this a limitation because it's PCI (legacy)? Does the PCI card need to be separated from the group in order to attempt the OP's workaround? Thoughts/ideas?
     IOMMU group 11:
     [8086:244e] 00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev a6)
     [1106:3038] 07:00.0 USB controller: VIA Technologies, Inc. VT82xx/62xx UHCI USB 1.1 Controller (rev 61)
     [1106:3038] 07:00.1 USB controller: VIA Technologies, Inc. VT82xx/62xx UHCI USB 1.1 Controller (rev 61)
     [1106:3104] 07:00.2 USB controller: VIA Technologies, Inc. USB 2.0 (rev 63)
     [8086:1010] 07:02.0 Ethernet controller: Intel Corporation 82546EB Gigabit Ethernet Controller (Copper) (rev 03)
     [8086:1010] 07:02.1 Ethernet controller: Intel Corporation 82546EB Gigabit Ethernet Controller (Copper) (rev 03)
     [102b:0532] 07:04.0 VGA compatible controller: Matrox Electronics Systems Ltd. MGA G200eW WPCM450 (rev 0a)
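     In case anyone wants to compare their own grouping, this is the standard sysfs walk for dumping every PCI device by IOMMU group (run from the Unraid console; nothing Unraid-specific about it):

     ```
     #!/bin/bash
     # Print each PCI device, grouped by its IOMMU group number
     for d in /sys/kernel/iommu_groups/*/devices/*; do
         g=${d#/sys/kernel/iommu_groups/}  # strip the sysfs prefix
         g=${g%%/*}                        # keep just the group number
         printf 'IOMMU group %s: ' "$g"
         lspci -nns "${d##*/}"             # describe the device at that address
     done
     ```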
  9. Hi, I recently added a single 10Gb Mellanox NIC to my Unraid server and my FreeNAS server. As a test I'm trying to see if I can hit that mark. I'm not getting anywhere close to it. This is how I'm testing: On the Unraid server, I have a single SSD cache drive (rated 500 MB/s). The SSD is connected to a 2308 LSI controller (IT mode), with a Windows 10 VM on it. Network settings in the VM and in Unraid have MTU set to 9000. The FreeNAS server has two Intel SSDs (also rated 500 MB/s) set for striping. The SSDs are connected to a 2308 LSI controller (IT mode). Created a share on it. MTU is set to 9000. Set these tunable options per the internet to optimize settings (entered via CLI and even rebooted):
     sysctl kern.ipc.somaxconn=2048
     sysctl kern.ipc.maxsockbuf=16777216
     sysctl net.inet.tcp.recvspace=4194304
     sysctl net.inet.tcp.sendspace=2097152
     sysctl net.inet.tcp.sendbuf_max=16777216
     sysctl net.inet.tcp.recvbuf_max=16777216
     sysctl net.inet.tcp.sendbuf_inc=32768
     sysctl net.inet.tcp.recvbuf_inc=524288
     sysctl net.route.netisr_maxqlen=2048
     I tested by copying a big file between the VM on cache and the share on FreeNAS. Maybe I hit 500 MB/s according to the Windows copy dialog, but it didn't last very long. On average I'm getting anywhere between 150-200 MB/s. Then I tried iperf: set up FreeNAS as the server and the VM as the client (a sketch of that run is below). Ran the tests a bunch of times and got basically the same results. Based on my hardware and setup, is this the max I can get? In order to get 10Gb transfer speeds, do I need an NVMe drive on an M.2 to get that type of transfer? Are the SATA SSDs the bottleneck? Thoughts? Thanks in advance!!
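     The iperf run was roughly this (iperf3 syntax shown, and the server IP is an example; -P 4 runs parallel streams, since a single TCP stream often can't fill a 10Gb link on its own):

     ```
     # On the FreeNAS box (server side):
     iperf3 -s

     # On the Windows 10 VM (client side), pointing at the FreeNAS IP:
     iperf3 -c 192.168.1.50 -P 4 -t 30
     ```

     If iperf shows well above what the file copy does, the disks are the bottleneck; if iperf itself tops out around the same number, it's the network path.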