Everything posted by dnLL

  1. I'm still learning when it comes to networking (I started this pfSense project from scratch). What I currently have is my LAN network on 10.1.1.0/24, plus a couple of VLANs on different /24 subnets. All of my "safe" LAN devices (such as my desktop, my server and most of its VMs and dockers) are in that same subnet. In Unraid, I have eth0 with "VLAN number" set to 2 because I have one VM using br0.2 instead of br0. That VM is in its own separate VLAN. When it comes to the routing table however, I only have what I consider to be the default settings; I'm not sure what to add/edit exactly. I wouldn't mind the docker being in the 10.1.3.0/24 subnet, but I guess it would require an additional route and/or something on the pfSense side, since that subnet just doesn't exist for pfSense currently (rough sketch below). I guess this is more of a general Unraid and routing question that has nothing to do with this thread at this point.
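     For what it's worth, a minimal sketch of what I think that would take, assuming a new VLAN 3 is defined both in Unraid (giving a br0.3 sub-interface) and in pfSense (as a VLAN interface acting as gateway 10.1.3.1); the network name "vlan3" is just a placeholder:

        # create a macvlan Docker network on the VLAN sub-interface so the
        # container gets its own IP in 10.1.3.0/24, with pfSense doing the routing
        docker network create -d macvlan \
          --subnet=10.1.3.0/24 \
          --gateway=10.1.3.1 \
          -o parent=br0.3 \
          vlan3

     With that, no extra route should be needed on the Unraid side; inter-VLAN routing stays on pfSense.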
  2. Ah, and why is that? It works with the other containers. I guess I'm just going to put it in a separate VLAN then. What if I change the LAN network to a smaller subnet? I'm not using the default bridge because it's easier for me to monitor the container when it has a different IP address than the server (I use Check_MK to monitor), and also because I would have other dockers all trying to claim port 8080, which would be a problem as well (since most templates aren't designed for modifying that setting even when it's there); see the remap example below.
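     To illustrate the port clash: on the default bridge every container shares the host's IP, so each one's internal 8080 has to be remapped by hand to a unique host port (the image and container names here are just placeholders):

        # expose this container's internal 8080 as host port 8081
        docker run -d --name qbt2 -p 8081:8080 some/qbittorrent-image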
  3. I disabled the rule just to be safe. It didn't fix the issue. I guess I'm gonna run some Wireshark diagnostics next...
  4. FWIW, 10.1.1.54 responds to ping. Here are the requested screenshots; I removed the user/pass again: https://imgur.com/a/ccdLClH
  5. If I put ENABLE_VPN to false, the webUI works, which made me think it isn't a firewall issue. I don't have a Pi-hole; I do use pfSense as my router, however (and pfBlocker-NG is disabled). 10.1.1.54 and 10.1.1.102 are in the same VLAN, so they don't go through pfSense at all; it shouldn't be a firewall issue (especially considering it works with VPN disabled in the docker settings). However, the DNS settings I'm really not sure about, as I do have a rule that redirects all traffic on port 53 to pfSense itself (quick test below).
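     One quick way I can think of to check whether that port-53 redirect is grabbing DNS queries (the local hostname is just a placeholder for something only pfSense's resolver knows):

        # ask 1.1.1.1 directly for a LAN-only name, from any host on the LAN;
        # if the redirect intercepts port 53, pfSense answers and this resolves,
        # otherwise the query really reaches 1.1.1.1 and comes back empty/NXDOMAIN
        dig @1.1.1.1 pfsense.localdomain +short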
  6. Right. So I installed the new gen image and it did fix the error; in fact I have no errors at all anymore in my log... but the webUI won't work from the local network (trying to access 10.1.1.54:8080 from 10.1.1.102). Here is the full log, with user/pass removed: https://hastebin.com/yorasuyoxe.swift
  7. This docker has been a source of frustration forever because of my inability to make the VPN part of it work. I gave up multiple times in the past but would like to try again and make it work this time. So, I installed the docker from scratch... I added the crt, pem and ovpn files from PIA in the /config/openvpn folder. I tried Toronto and Montreal, both supporting port forwarding. The docker template settings I have:

        Network Type: br0
        Fixed IP: 10.1.1.54
        Privileged: On
        Host Ports: all default
        VPN_ENABLED: yes
        VPN_PROV: pia
        STRICT_PORT_FORWARD: yes
        LAN_NETWORK: 10.1.1.0/24
        NAME_SERVERS: 1.1.1.1
        DEBUG: true

     The error:

        2020-10-13 14:55:24,366 DEBG 'start-script' stdout output:
        [info] PIA endpoint 'ca-toronto.privateinternetaccess.com' is in the list of endpoints that support port forwarding
        ...
        2020-10-13 14:55:54,612 DEBG 'start-script' stdout output:
        [warn] Unable to download json for dynamically assigned port, exiting script...
        [info] Port forwarding failure, creating file '/tmp/portfailure' to indicate failure...

     The portfailure file is empty. I would really like to make this work eventually.
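     In the meantime, here's how I'd poke at it from the host (the container name is a placeholder, and this assumes the image ships ip and ping, which I haven't verified):

        # does the tunnel interface even come up?
        docker exec -it binhex-qbittorrentvpn ip addr show tun0
        # can anything be reached through it?
        docker exec -it binhex-qbittorrentvpn ping -c 3 -I tun0 1.1.1.1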
  8. Worked perfectly for me with an X11SCH-LN4F motherboard; I didn't even have to reboot, and there was no need to edit syslinux.cfg. The pfSense VM immediately recognized both ports (be careful to use i440fx: with Q35, the ports are only detected if they have active connections, which is not exactly what we want for pfSense). I haven't rebooted yet, so I'm unsure whether Unraid will try to take the ports back for itself; if so I'll do some syslinux.cfg magic, but I don't think that will be necessary at this point.
  9. Sorry if this has already been discussed (I couldn't find it in the original post), but my motherboard has 4 Intel I210 controllers, each with its own port. With lspci -n I see this:

        02:00.0 0200: 8086:1533 (rev 03)
        03:00.0 0200: 8086:1533 (rev 03)
        04:00.0 0200: 8086:1533 (rev 03)
        05:00.0 0200: 8086:1533 (rev 03)

     Now, if I add 8086:1533 to my syslinux.cfg, will Unraid still be able to use the first 2 ports (02:00.0 and 03:00.0, which I'm currently using in bond0) while I pass 04:00.0 and 05:00.0 through to a pfSense VM?
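     My understanding is that it won't: binding by vendor:device ID grabs all four identical NICs. Binding by PCI address instead should leave the first two alone; on recent Unraid builds that's a one-line file (the exact format may vary by version, this is the style I've seen quoted for 6.7/6.8):

        # /boot/config/vfio-pci.cfg: stub only the two ports meant for the VM
        BIND=0000:04:00.0 0000:05:00.0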
  10. I have an X11SCH-LN4F, and as soon as I type modprobe i915 in the console, I lose video output on the console. I didn't have that issue on my older ASRock Rack motherboard, but that one had a separate GPU chip onboard on top of the CPU's IGP. Anyway, personally I don't need the console image once Unraid is booted up, so it's not too much of an issue for me. If you really want a workaround, I guess it would be to get a dedicated GPU and have both the IGP and the GPU enabled, with the IGP used for the console video output.
  11. Found the issue.

        Aug 8 19:25:45 server root: Starting go script
        Aug 8 19:25:45 server kernel: Linux agpgart interface v0.103
        Aug 8 19:25:45 server kernel: i915 0000:00:02.0: enabling device (0000 -> 0003)
        Aug 8 19:25:45 server kernel: i915 0000:00:02.0: can't derive routing for PCI INT A
        Aug 8 19:25:45 server kernel: i915 0000:00:02.0: PCI INT A: not connected
        Aug 8 19:25:45 server kernel: [drm] VT-d active for gfx access
        Aug 8 19:25:45 server kernel: checking generic (91000000 300000) vs hw (80000000 10000000)
        Aug 8 19:25:45 server kernel: [drm] Replacing VGA console driver
        Aug 8 19:25:45 server kernel: [drm] Supports vblank timestamp caching Rev 2 (21.10.2013).
        Aug 8 19:25:45 server kernel: [drm] Driver supports precise vblank timestamp query.
        Aug 8 19:25:45 server kernel: i915 0000:00:02.0: BAR 6: can't assign [??? 0x00000000 flags 0x20000000] (bogus alignment)
        Aug 8 19:25:45 server kernel: [drm] Failed to find VBIOS tables (VBT)
        Aug 8 19:25:45 server kernel: [drm] Initialized i915 1.6.0 20180719 for 0000:00:02.0 on minor 0
        Aug 8 19:25:45 server kernel: [drm] Cannot find any crtc or sizes
        Aug 8 19:25:45 server kernel: [drm] Cannot find any crtc or sizes
        Aug 8 19:25:45 server kernel: [drm] Finished loading DMC firmware i915/kbl_dmc_ver1_04.bin (v1.4)

     Which comes from:

        root@server:~# cat /boot/config/go
        #!/bin/bash
        #Set up drivers for HW transcoding in Plex
        modprobe i915
        chmod -R 777 /dev/dri
        # Start the Management Utility
        /usr/local/sbin/emhttp &

     Might be useful for someone else: if you enabled modprobe i915 and then suddenly swap hardware... it might become a problem. Now I will need to figure out how to do hardware transcoding with this new server based on the E-2278G IGP, but that's for another thread ;-).
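     A quick sanity check I'd add after the modprobe (just a suggestion, not something the stock go file does):

        # if i915 actually bound the iGPU, the device nodes show up here
        ls -l /dev/dri
        # expected: card0 and renderD128 (or similar)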
  12. I can still type from the console; CTRL+ALT+DEL works just fine. I copied my config from an older one I was using on another server, and that's when the issue started. I highly suspect there is something wrong in there; I will keep looking.
  13. Pretty much what the title says: new Supermicro X11SCH-LN4F motherboard, and I do everything from the HTML5 or Java console. It works fine through POST, I get the blue Unraid bootloader, then I see the drivers loading in and... boom, no more video signal. The webUI works just fine, but I'd ideally like to keep my console working. Any ideas? It does work fine if I plug a monitor directly into the motherboard (no discrete GPU). At the same time, the console works just fine with other OSes.
  14. For some reason, the mover didn't move everything; some 10GB of data (mostly appdata and VM disks that weren't currently in use) wouldn't be moved, so I used Unbalance to move the rest, stopped the array, made a new config without the cache drives and started it. Parity is currently syncing so it's kinda slow, but the VMs are all working (including the ones I moved with Unbalance), so it seems fine. I had a backup just in case. So Sunday morning I will turn off the server, take all 4 HDDs and the USB flash stick and put them in the new server, start the array as-is and check that everything is fine. Then I will preclear the 2 new SSDs and the 2 other new HDDs that I'm adding to the array, and stop the array again once that's done so I can add all 4 disks (2 to the cache pool, 1 as data and 1 as a second parity drive). That should keep the offline time to a minimum. Hopefully.
  15. I'm replacing them eventually. Basically I'm changing my whole server; the plan is just to take the HDDs, put them in the new server with the same flash drive and hope all goes well. Because the new mobo supports 2x M.2 NVMe SSDs, I also ordered new SSDs and will get rid of the old SATA ones. So I can't replace them on the current server, since it doesn't support M.2 drives... and after I switch every disk to the new server I want it up and running as soon as possible with minimum downtime, since I'm running a couple of websites and other stuff that require maximum uptime. So I thought my easiest option would be to empty the cache, remove it from the pool and check that everything is still fine; then put all my HDDs in the new server, boot up and check that everything is still fine; and then add a new cache pool with the 2 new SSDs.
  16. Hi, I'm replacing the 2 drives that are currently configured in RAID1 in my cache pool. There was around ~60GB on the cache, so I changed the cache setting on my shares from "Prefer" or "Only" to "Yes" where applicable and started the mover. It's currently moving everything off the cache. Can I just make a new config after this without the 2 cache drives and be done?
  17. I did some more tests; with or without cache I'm getting around 115 MB/s read and write with CrystalDiskMark. That's pretty much gigabit's practical limit. I read that WD Red drives can do more than that, so I guess I'm just bottlenecked by the network and that's all. That pretty much means my cache is only useful for loading times on VM disks and such.
  18. Those SSDs are rated for over 500 MB/s read/write. Are you saying that Unraid could be bottlenecking my drives to the point where they can't even saturate gigabit? What about turbo write: should I disable it since I have a cache (toggle sketch below)? And what about the drives' internal write cache that we can also write to, any reason not to use it considering I do have a UPS?
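     For reference, turbo write can apparently be flipped at runtime as well as in Settings > Disk Settings; the 0/1 values below are what I've seen quoted on the forum, so double-check them for your version:

        /usr/local/sbin/mdcmd set md_write_method 1   # reconstruct write ("turbo")
        /usr/local/sbin/mdcmd set md_write_method 0   # read/modify/write (default)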
  19. Raw numbers:

        With cache:    83.04 MB/s
        Without cache: 109.57 MB/s

     This is on gigabit, so I'm pretty much hitting the limit of the network (the theoretical maximum is 125 MB/s for gigabit). Also note that I timed the transfer myself, not really trusting anything else. Oh, and the two SSD cache drives are mirrored... maybe that's the issue? TRIM runs daily at 1:00am.
  20. I have an array with 4x 4TB WD Red drives (CMR) and 2x 500GB WD Blue SSDs, all hooked up to the same onboard SATA controller. Here is my "issue": it's faster to write to shares with cache disabled. For example, I have a share named qbittorrent with cache enabled and another named www with cache disabled. Copying the same ~2GB file to both, starting with the cache-off share, it copied 25% faster than on the share with cache on.

        root@server:~# ls -l /mnt/*/qbittorrent/myfile*
        -rw-rw-rw- 1 nobody users 2910110155 Jan 2 2016 /mnt/cache/qbittorrent/myfile.zip
        -rw-rw-rw- 1 nobody users 2910110155 Jan 2 2016 /mnt/user/qbittorrent/myfile.zip
        root@server:~# ls -l /mnt/*/www/myfile*
        -rw-rw-rw- 1 nobody users 2910110155 Jan 2 2016 /mnt/disk3/www/myfile.zip
        -rw-rw-rw- 1 nobody users 2910110155 Jan 2 2016 /mnt/user/www/myfile.zip
        -rw-rw-rw- 1 nobody users 2910110155 Jan 2 2016 /mnt/user0/www/myfile.zip

     What's going on? I do have md_write_method set to reconstruct write (turbo write), which is specifically why I'm doing these tests, as I'd like to try reverting it to the default to spare my drives a little bit. A local test that takes the network out of the equation is sketched below.
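     To rule the network and SMB out, I'd also compare local direct writes (the test file name is a placeholder; /mnt/disk3 bypasses the user-share layer but still goes through the parity-protected md driver):

        # write 2GB straight to the cache pool, then to a data disk, bypassing RAM caching
        dd if=/dev/zero of=/mnt/cache/qbittorrent/ddtest bs=1M count=2048 oflag=direct
        dd if=/dev/zero of=/mnt/disk3/www/ddtest bs=1M count=2048 oflag=direct
        rm /mnt/cache/qbittorrent/ddtest /mnt/disk3/www/ddtest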
  21. I currently have 4x WD Red drives, 1 acting as parity and the other 3 as data. I'm planning to double that array up to a total of 8x 4TB, probably going from 1 to 2 parity drives. I will buy more WD Red drives for my data while the CMR drives are still available (WD40EFRX, not WD40EFAX). At the same time, I was wondering if I would be better off putting the parity on 7200rpm drives? Or just "better" drives in general? I was thinking WD Red Pro or WD Gold. The way I see it, and correct me if I'm wrong: every time I write to a data drive, I'm also writing to the parity drive; every time I read from a drive, I'm only reading from that drive. So the parity drive gets fewer reads overall (except for parity checks) but more writes than any other single drive. That's until a drive fails; in that case every drive is stressed on every read to reconstruct the missing bits. (A tiny illustration of the parity update is below.)
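     The math behind "every data write also writes parity", for a single byte: new parity = old parity XOR old data XOR new data, which is why a read/modify/write costs the parity drive one read plus one write. Toy illustration with made-up values:

        old_parity=$(( 0xA5 )); old_data=$(( 0x3C )); new_data=$(( 0xF0 ))
        new_parity=$(( old_parity ^ old_data ^ new_data ))
        printf 'new parity byte: 0x%02X\n' "$new_parity"   # prints 0x69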
  22. It was a USB drive that was causing the "error". I don't think it's a particularly big issue either, but at the same time I hate ignoring error messages, so I decided to post here just in case. Check_MK by default pokes every host every minute; I changed it to 2 minutes. Yes, I like having detailed information, very often, about what's happening with my VMs, dockers and physical hardware such as my server, my pfSense router, my printer and so on. The big advantage of Check_MK is data aggregation: it produces charts that go back 400 days with a very minimal amount of disk space needed. It's just not as "live" as some other monitoring solutions, so I can miss CPU spikes within those 2 minutes, but it's accurate enough for my needs. It also scans my logfiles... so Check_MK is the reason I saw that error in syslog and why I'm here now. I could just filter it out in Check_MK itself; instead I added a conf file for rsyslog to discard the useless messages, and a User Scripts job recreates the conf file on every start and restarts rsyslog (sketched below). Anyway, that's beyond the scope of this thread.
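     Roughly what that User Scripts entry looks like (the filter strings and file name are just what happens to fit my setup, and this assumes the stock rsyslog.conf includes /etc/rsyslog.d/*.conf):

        #!/bin/bash
        # /etc on Unraid lives in RAM, so recreate the filter at every array start
        cat > /etc/rsyslog.d/01-discard-noise.conf <<'EOF'
        :msg, contains, "xinetd" stop
        :msg, contains, "rpcbind" stop
        EOF
        # reload rsyslog so the filter takes effect
        /etc/rc.d/rc.rsyslogd restart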
  23. I can do a test tonight: mount the drive, do some writes, unmount it and wait 5 minutes before unplugging. I did unplug it like 10 seconds after unmounting, so yeah, maybe syncing before unmounting would help; I don't mind doing some tests with you if need be. As for Check_MK... well, it's my monitoring solution. I should probably tune syslog to filter out these info messages, but it's not a Check_MK problem per se, it's just the way syslog is configured on Unraid with xinetd. It can definitely be filtered, but I never really saw any reason to do it; logrotate does its job and I can just egrep -v 'xinetd|rpcbind' if that bothers me that much while looking at the logs. With that said, I'll look into it since it is, like you said, useless logging =).