dnLL

Members
  • Content Count: 159
  • Joined
  • Last visited

Community Reputation: 5 Neutral

About dnLL
  • Rank: Advanced Member


  1. Worked perfectly for me with an X11SCH-LN4F motherboard; I didn't even have to reboot or edit syslinux.cfg. The pfSense VM immediately recognized both ports (be careful to use i440fx: with Q35, the ports are only detected if they have an active link, which is not exactly what we want for pfSense). I haven't rebooted yet, so I'm unsure whether Unraid will try to take the ports back for itself; if so, I'll do some syslinux.cfg magic (sketched below), but I don't think that will be necessary at this point.
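     For reference, a minimal sketch of what that syslinux.cfg magic would look like, assuming the stock Unraid boot entry and the I210's 8086:1533 vendor:device ID (adjust to your hardware). The extra parameter binds matching NICs to vfio-pci at boot so Unraid never claims them:

     ```
     label Unraid OS
       menu default
       kernel /bzimage
       append vfio-pci.ids=8086:1533 initrd=/bzroot
     ```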
  2. Sorry if this has already been discussed (I couldn't find it in the original post), but my motherboard has 4 Intel I210 controllers, each with its own port. With lspci -n I see this:

     ```
     02:00.0 0200: 8086:1533 (rev 03)
     03:00.0 0200: 8086:1533 (rev 03)
     04:00.0 0200: 8086:1533 (rev 03)
     05:00.0 0200: 8086:1533 (rev 03)
     ```

     Now, if I add 8086:1533 to my syslinux.cfg, will Unraid still be able to use the first 2 ports (02:00.0 and 03:00.0, which I'm currently using in bond0) while I pass 04:00.0 and 05:00.0 through to a pfSense VM?
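     Probably not as written: all four ports share the same 8086:1533 ID, and vfio-pci.ids= binds every device that matches it. A per-port alternative is to bind by PCI address instead; a sketch, assuming the /boot/config/vfio-pci.cfg convention used by Unraid's VFIO-PCI Config plugin:

     ```
     # /boot/config/vfio-pci.cfg - stub only the 3rd and 4th I210 ports for passthrough
     BIND=0000:04:00.0 0000:05:00.0
     ```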
  3. I have an X11SCH-LN4F, and as soon as I type modprobe i915 in the console, I lose video output there. I didn't have that issue on my older ASRock Rack motherboard, but that one had a separate onboard GPU chip on top of the CPU's IGP. Anyway, I personally don't need the console image once Unraid is booted up, so it's not too much of an issue. If you really want a workaround, I guess it would be to get a dedicated GPU and keep both the IGP and the GPU enabled, with the IGP used for console video output.
  4. Found the issue:

     ```
     Aug 8 19:25:45 server root: Starting go script
     Aug 8 19:25:45 server kernel: Linux agpgart interface v0.103
     Aug 8 19:25:45 server kernel: i915 0000:00:02.0: enabling device (0000 -> 0003)
     Aug 8 19:25:45 server kernel: i915 0000:00:02.0: can't derive routing for PCI INT A
     Aug 8 19:25:45 server kernel: i915 0000:00:02.0: PCI INT A: not connected
     Aug 8 19:25:45 server kernel: [drm] VT-d active for gfx access
     Aug 8 19:25:45 server kernel: checking generic (91000000 300000) vs hw (80000000 10000000)
     Aug 8 19:25:45 server kernel: [drm] Replacing VGA console driver
     Aug 8 19:25:45 server kernel: [drm] Supports vblank timestamp caching Rev 2 (21.10.2013).
     Aug 8 19:25:45 server kernel: [drm] Driver supports precise vblank timestamp query.
     Aug 8 19:25:45 server kernel: i915 0000:00:02.0: BAR 6: can't assign [??? 0x00000000 flags 0x20000000] (bogus alignment)
     Aug 8 19:25:45 server kernel: [drm] Failed to find VBIOS tables (VBT)
     Aug 8 19:25:45 server kernel: [drm] Initialized i915 1.6.0 20180719 for 0000:00:02.0 on minor 0
     Aug 8 19:25:45 server kernel: [drm] Cannot find any crtc or sizes
     Aug 8 19:25:45 server kernel: [drm] Cannot find any crtc or sizes
     Aug 8 19:25:45 server kernel: [drm] Finished loading DMC firmware i915/kbl_dmc_ver1_04.bin (v1.4)
     ```

     Which comes from:

     ```
     root@server:~# cat /boot/config/go
     #!/bin/bash
     #Set up drivers for HW transcoding in Plex
     modprobe i915
     chmod -R 777 /dev/dri
     # Start the Management Utility
     /usr/local/sbin/emhttp &
     ```

     Might be useful for someone else: if you enable modprobe i915 in your go file and then suddenly swap hardware, it can become a problem. Now I'll need to figure out how to do hardware transcoding with this new server's E-2278G IGP, but that's for another thread ;-).
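     A quick sanity check that the IGP actually came up for transcoding after the modprobe (a sketch; the device names assume the standard i915 DRI layout):

     ```
     modprobe i915
     ls -l /dev/dri          # expect card0 and renderD128 once the iGPU initializes
     chmod -R 777 /dev/dri   # same permissive step as in the go file above
     ```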
  5. I can still type in the console; CTRL+ALT+DEL works just fine. I copied my config from an older one I was using on another server, and that's when the issue started. I highly suspect something in there is wrong; I'll keep looking.
  6. Pretty much the title: on a new Supermicro X11SCH-LN4F motherboard, I do everything from the HTML5 or Java console, and it works fine through POST. I get the blue Unraid bootloader, then I see the drivers loading in and... boom, no more video signal. The WebUI works just fine, but ideally I'd like to keep my console working. Any ideas? It does work fine if I plug a monitor directly into the motherboard (there's no discrete GPU). At the same time, the console works just fine with other OSes.
  7. For some reason the mover didn't move everything: some 10 GB of data (mostly appdata and VM disks that weren't in use at the time) wouldn't be moved, so I used Unbalance to move the rest, stopped the array, made a new config without the cache drives, and started it. Parity is currently syncing, so it's kind of slow, but the VMs are all working (including the ones I moved with Unbalance), so it seems fine. I had a backup just in case. So Sunday morning I will turn off the server, take all 4 HDDs and the USB flash stick, put them in the new server, start the array as-is, and check that everything is fine. Then I will pre-clear the 2 new SSDs and the 2 other new HDDs I'm adding to the array, and stop the array again once that's done so I can add all 4 disks (2 to the cache pool, 1 as data and 1 as a second parity drive). That should keep the offline time to a minimum. Hopefully.
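     If the mover ever stalls like that again, a quick way to see what it left behind and to kick it off manually (a sketch, assuming the stock Unraid mover path; disks belonging to running VMs won't move until the VM is stopped):

     ```
     find /mnt/cache -type f -exec ls -lh {} + | head -n 20   # whatever is still sitting on the pool
     /usr/local/sbin/mover                                    # run the mover outside its schedule
     ```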
  8. I'm eventually replacing it. Basically I'm changing my whole server; the plan is just to take the HDDs, put them in the new server with the same flash drive, and hope all goes well. Because the new motherboard supports 2x M.2 NVMe SSDs, I also ordered new SSDs and will get rid of the old SATA ones. So I can't replace it in the current server, since it doesn't support M.2 drives... and after I switch every disk over to the new server, I want it up and running as soon as possible with minimal downtime, since I'm running a couple of websites and other things that require maximum uptime. So I figured my easiest option would be to empty the cache, remove it from the pool, and check that everything is still fine; then put all my HDDs in the new server, boot up, and check that everything is still fine; and then add a new cache pool with the 2 new SSDs.
  9. Hi, I'm replacing the 2 drives currently configured in RAID1 as my cache pool. There was around 60 GB on the cache, so I changed the cache setting on my shares from "Prefer" or "Only" to "Yes" where applicable and started the mover. It's currently moving everything off the cache. After that, can I just make a new config without the 2 cache drives and be done?
  10. I did some more tests: with or without cache, I'm getting around 115 MB/s read and write in CrystalDiskMark. That's pretty much gigabit's practical limit. I've read that WD Red drives can do more than that, so I guess I'm just being bottlenecked by the network, and that's all. Which pretty much means my cache is only useful for things like VM disk loading times.
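     The line-rate arithmetic behind that ceiling, for anyone curious (pure bandwidth conversion; the remaining gap down to ~110-115 MB/s is SMB/TCP overhead):

     ```
     # 1 Gbit/s expressed in MiB/s: 10^9 bits / 8 bits-per-byte / 2^20 bytes-per-MiB
     echo $(( 1000**3 / 8 / 1024**2 ))   # -> 119
     ```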
  11. Those SSDs are rated for over 500 MB/s read/write. Are you saying that Unraid could be bottlenecking my drives to the point where they can't even saturate gigabit? What about turbo write, should I disable it since I have a cache? And what about the drives' internal write cache that we can also enable, any reason not to use it considering I do have a UPS?
  12. Raw numbers: with cache, 83.04 MB/s; without cache, 109.57 MB/s. This is on gigabit, so I'm pretty much hitting the limit of the network (which would be 125 MB/s theoretical for gigabit). Also note that I timed the transfer myself, as I don't really trust anything else. Oh, and the two SSD cache drives are mirrored... maybe that's the issue? TRIM runs daily at 1:00 am.
  13. I have an array with 4x 4TB WD Red drives (CMR) and 2x 500GB WD Blue SSDs, all hooked up to the same onboard SATA controller. Here is my "issue": it's faster to write to shares with the cache disabled. For example, I have a share named qbittorrent with cache enabled and another named www with cache disabled. Copying the same ~2GB file to both, starting with the cache-off share, the cache-off copy finished 25% faster than the cache-on one.

     ```
     root@server:~# ls -l /mnt/*/qbittorrent/myfile*
     -rw-rw-rw- 1 nobody users 2910110155 Jan 2 2016 /mnt/cache/qbittorrent/myfile.zip
     -rw-rw-rw- 1 nobody users 2910110155 Jan 2 2016 /mnt/user/qbittorrent/myfile.zip
     root@server:~# ls -l /mnt/*/www/myfile*
     -rw-rw-rw- 1 nobody users 2910110155 Jan 2 2016 /mnt/disk3/www/myfile.zip
     -rw-rw-rw- 1 nobody users 2910110155 Jan 2 2016 /mnt/user/www/myfile.zip
     -rw-rw-rw- 1 nobody users 2910110155 Jan 2 2016 /mnt/user0/www/myfile.zip
     ```

     What's going on? I do have md_write_method set to reconstruct write (turbo write), which is specifically why I'm running these tests: I'd like to maybe try reverting it to the default to spare my drives a little.
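     If you want to flip that setting from the command line rather than the GUI, a sketch assuming Unraid's mdcmd interface (the same tunable lives under Settings > Disk Settings as "Tunable (md_write_method)"):

     ```
     /usr/local/sbin/mdcmd status | grep md_write_method   # check the current mode
     /usr/local/sbin/mdcmd set md_write_method 0           # 0 = default read/modify/write, 1 = reconstruct write (turbo)
     ```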
  14. I currently have 4x WD Red drives, 1 acting as parity and the other 3 as data. I'm planning to double that array up to a total of 8x 4TB, probably going from 1 to 2 parity drives. I will buy more WD Red drives for my data while the CMR models are still available (WD40EFRX, not WD40EFAX). At the same time, I was wondering if I would be better off putting the parity on 7200 rpm drives, or just on "better" drives in general? I was thinking WD Red Pro or WD Gold. The way I see it, and correct me if I'm wrong: every time I write to a data drive, I'm also writing to the parity drive; every time I read from a drive, I'm only reading from that drive. So the parity drive sees fewer reads overall (except during parity checks) but more writes than any other single drive. That's until a drive fails; then every drive is stressed on every read to reconstruct the missing bits.
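     That reasoning matches the standard single-parity arithmetic. In the default read/modify/write mode, an in-place update is:

     ```
     P_new = P_old XOR D_old XOR D_new   # 2 reads + 2 writes per logical write, one of each on the parity drive
     ```

     so the parity drive absorbs the write half of every update across all data drives combined, while reads touch only the one data drive holding the file.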