greg2895

Members · 49 posts

Everything posted by greg2895

  1. I do have mcelog installed on my machine. This is what cron reports:
     cron for user root /usr/bin/run-parts /etc/cron.daily 1> /dev/null
     sh: -c: line 0: unexpected EOF while looking for matching ``'
     sh: -c: line 1: syntax error: unexpected end of file
  2. Attached is my diagnostics file. Is this something I should be worried about? servernas-diagnostics-20180519-1828.zip
  3. Get the Fix Common Problems plugin and run in troubleshooting mode.
  4. None of my VMs that are set to use br0 appear to be getting my default gateway 192.168.1.1. I'm not sure if I have something configured wrong. It worked fine in the past and then suddenly stopped working. Any advice would be great. VM XML file:
     <interface type='bridge'>
       <mac address='52:54:00:bd:36:ab'/>
       <source bridge='br0'/>
       <model type='virtio'/>
       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
     </interface>
  5. Writing to the cache with multiple SSDs or an NVMe drive is the only way you will see close to 10Gbit speeds. Most likely what you are seeing is the transfer caching to RAM at the beginning, then slowing down once it hits the array. Try iperf3 to check your network link speed.
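As a rough yardstick for those iperf3 runs, it helps to know the theoretical ceiling of the link; a minimal sketch (the iperf3 flags shown are standard usage, and the server address is a placeholder):

```shell
# A clean 10 Gbit/s link tops out near 1250 MB/s of payload, so iperf3
# numbers far below that point at the network rather than the disks.
link_gbit=10
max_mbs=$(( link_gbit * 1000 / 8 ))   # Gbit/s -> MB/s (decimal)
echo "$max_mbs MB/s theoretical ceiling"
# Typical iperf3 invocation (run on the real hosts, not here):
#   server:  iperf3 -s
#   client:  iperf3 -c <server-ip> -P 4    # -P 4 uses parallel streams
```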
  6. If rc17 was the last version you had before 6.4, you can roll back. Just pop the USB stick into another computer and copy the contents of the folder named "previous" to the root of the drive. I suggest making a copy of the drive before doing this.
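The rollback copy described above, sketched against a scratch directory standing in for the flash drive (the file name below is an example only; actual contents of "previous" vary by release):

```shell
# Demo: restore the contents of "previous" to the drive root.
# USB is a local stand-in; on the real stick it would be the mount point.
USB=./usb_demo
mkdir -p "$USB/previous"
printf 'old release\n' > "$USB/previous/bzimage"   # example file only
cp -a "$USB"/previous/* "$USB"/                    # copy back to the root
ls "$USB"
```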
  7. My Windows VM is telling me that it cannot get a valid IP address, nor will a static address at 192.168.1.x work when I assign br0 to the VM. Is the routing table correct? What else should I look for?
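One quick way to answer the routing-table question inside the guest is to look for a default route; a minimal sketch (the sample line below mimics `ip route` output, it is not captured from the VM):

```shell
# A healthy guest routing table has a default route via the LAN gateway.
# Parse a sample line the way you would eyeball real `ip route` output:
route_line='default via 192.168.1.1 dev eth0'
gw=$(printf '%s\n' "$route_line" | awk '/^default/ {print $3}')
echo "gateway: $gw"
# In the VM itself:  ip route
# No "default via ..." line means DHCP never reached the guest over br0.
```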
  8. I have found no difference between RAID 10 and RAID 1 with my 4 SSDs.
  9. After the last 2 updates I have not been able to get my container to run. Nothing else has changed to my knowledge besides the update. I am getting the following from my log:
     [cont-init.d] 10-adduser: exited 0.
     [cont-init.d] 20-config: executing...
     [cont-init.d] 20-config: exited 0.
     [cont-init.d] 30-keygen: executing...
     using keys found in /config/keys
     [cont-init.d] 30-keygen: exited 0.
     [cont-init.d] 50-config: executing...
     2048 bit DH parameters present
     SUBDOMAINS entered, processing
     Only subdomains, no URL in cert
     Sub-domains processed are: -d xxxxxxxxx.com
     E-mail address entered: xxxxxxxxx.com
     Different sub/domains entered than what was used before. Revoking and deleting existing certificate, and an updated one will be created
     usage: certbot [SUBCOMMAND] [options] [-d DOMAIN] [-d DOMAIN] ...
     Certbot can obtain and install HTTPS/TLS/SSL certificates. By default, it will attempt to use a webserver both for obtaining and installing the certificate.
     certbot: error: argument --cert-path: No such file or directory
     Generating new certificate
     Saving debug log to /var/log/letsencrypt/letsencrypt.log
     Plugins selected: Authenticator standalone, Installer None
     Obtaining a new certificate
     Performing the following challenges:
     Client with the currently selected authenticator does not support any combination of challenges that will satisfy the CA.
     IMPORTANT NOTES:
     - Your account credentials have been saved in your Certbot configuration directory at /etc/letsencrypt. You should make a secure backup of this folder now. This configuration directory will also contain certificates and private keys obtained by Certbot so making regular backups of this folder is ideal.
     ERROR: Cert does not exist! Please see the validation error above. The issue may be due to incorrect dns or port forwarding settings.
     Please fix your settings and recreate the container
     [cont-finish.d] executing container finish scripts...
     [cont-finish.d] done.
     [s6-finish] syncing disks.
     [s6-finish] sending all processes the TERM signal.
     [s6-finish] sending all processes the KILL signal and exiting.
  10. I have had this issue for months now. 10GbE transfer speeds are under 350 MB/s. Both server and client show 10Gb links. I have tried the following and saw no changes: removed the switch and used a point-to-point connection, a different 10Gb NIC on the client (made it even worse, 200 MB/s), server direct I/O, jumbo frames (9000) on server and client, and transmit/receive buffer sizes. I have been using iperf3 to rule out storage bottlenecks. The server has 4 SSDs in RAID 10 with a built-in 10GbE NIC, and the client has an NVMe SSD with an Asus 10GbE NIC (I also tried an Intel one). Is there anything else I can try that I am forgetting?
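A couple of link-level checks worth adding to that list (the ethtool and iperf3 flags shown are standard; eth0 stands in for the actual 10GbE interface name):

```shell
# Commands to run on the real hosts:
#   ethtool eth0                   # negotiated "Speed:" should read 10000Mb/s
#   ethtool -S eth0 | grep -i err  # any climbing rx/tx error counters?
#   iperf3 -c <server> -P 4 -R     # -R reverses direction to test both paths
# Sanity-check a reported speed string in the format ethtool prints it:
speed='10000Mb/s'
[ "${speed%Mb/s}" -ge 10000 ] && echo "negotiated at 10GbE"
```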
  11. You are doing better than me! I'm getting 300 MB/s Windows to unRAID and about 400 MB/s unRAID to Windows. I also set up a point-to-point connection to bypass the switch and nothing changed.
  12. I still haven't solved my issue either. I'm transferring from an NVMe drive to 2 Samsung 850 Pro SSDs in RAID 0. iperf3 shows bandwidth around 3 Gbit/s. MTU 9014 is enabled on both sides.
  13. Turned out to be an issue with my network switch. Quick reboot of the switch fixed the issue. Uggh...
  14. Is there a way to stop files in the /etc/ssh directory from being deleted after reboots? I am storing RSA keys there and it is very annoying to reload them after every update or reboot. Thanks
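On unRAID, /etc/ssh lives in RAM, so one common workaround is restoring the keys from the flash drive at each boot. A sketch using local directories as stand-ins (the /boot/config/ssh backup path is an assumption for illustration, not a guaranteed stock location):

```shell
# Stand-in paths for the demo; on a real server they would be
# SRC=/boot/config/ssh (flash backup, assumed) and DST=/etc/ssh.
SRC=./flash_ssh_backup
DST=./etc_ssh
mkdir -p "$SRC" "$DST"
printf 'PRIVATE KEY MATERIAL\n' > "$SRC/ssh_host_rsa_key"   # example only
cp "$SRC"/ssh_host_* "$DST"/       # restore keys after boot wipes /etc/ssh
chmod 600 "$DST"/ssh_host_rsa_key  # sshd refuses world-readable host keys
ls "$DST"
```

Calling a snippet like this from the go script (/boot/config/go) would make the restore happen on every boot.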
  15. Nothing else comes up on the ping with the server unplugged.
  16. No go. I don't think I have any communication to unRAID. If I run ping tests to local machines or websites, they all fail from the console.
  17. I rebooted my server and now I have no network access at all. Not sure what my options are as I have no gui access. Attached is what ifconfig spits out.
  18. I have the exact same setup. I use OpenVPN from pfSense to access my local network when away. IPMI is a great tool for monitoring your system and can be configured on a port, but I strongly suggest not exposing IPMI to the internet. Instead, again, use OpenVPN into your local network and you can access IPMI from there.
  19. I had call traces that turned out to be a bad ram stick. Reboot and run memtest and see if any errors come up.
  20. I have not been able to enable jumbo frames on unRAID. The NIC supports jumbo frames, but the kernel refuses any MTU over 1500. The same NIC works fine with jumbo frames on Windows, and the NIC description claims Linux support. I'm not sure that's the issue, though, because I've heard of other people saturating 10GbE without jumbo frames enabled.
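For reference, the usual Linux commands for raising the MTU, plus the reason a single standard-MTU hop makes jumbo frames moot (the interface name eth0 is an assumption):

```shell
# On the server (fails with "Invalid argument" if the driver refuses):
#   ip link set dev eth0 mtu 9000
#   ip link show eth0              # confirm the new mtu value took
# The effective path MTU is the minimum across server, switch, and client:
server_mtu=9000; switch_mtu=1500; client_mtu=9000
path_mtu=$(( server_mtu < switch_mtu ? server_mtu : switch_mtu ))
path_mtu=$(( path_mtu < client_mtu ? path_mtu : client_mtu ))
echo "effective path MTU: $path_mtu"   # one 1500 hop negates jumbo frames
```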
  21. I currently have 4 Samsung 850 Pro SSDs in btrfs RAID 10. That should equal around 1100 MB/s read/write. I am getting 350 MB/s max over the network.
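The ~1100 MB/s expectation checks out: btrfs raid10 on four drives stripes across two mirrored pairs, so throughput scales with two drives. A quick sanity sketch (the ~550 MB/s per-SSD figure is an assumed typical SATA sequential rate, not a measured one):

```shell
per_ssd=550                 # assumed sequential MB/s for one SATA SSD
pairs=2                     # 4 drives in raid10 = 2 striped mirror pairs
pool=$(( per_ssd * pairs )) # what the cache pool should sustain
net=1250                    # 10GbE ceiling in MB/s
bottleneck=$(( pool < net ? pool : net ))
echo "pool ~${pool} MB/s, network ceiling ${net} MB/s, limit ~${bottleneck}"
```

Since 350 MB/s is well under both numbers, neither the pool nor the raw link speed explains the cap.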
  22. I am having the same issues here. I can't saturate 10GbE, topping out at about 350 MB/s. Direct I/O is giving me call traces, and all Docker apps had to be changed from /mnt/user/appdata to /mnt/cache/appdata to be able to read/write. To top it off, I am still only getting 350 MB/s over 10GbE! I am out of ideas.