DieFalse

Members
  • Posts: 432
  • Days Won: 1

Everything posted by DieFalse

  1. I have attempted to install both Linux and Windows VMs. When creating them, I set 120-240 GB depending on which I want at the time. After loading the client OS installer and selecting the created virtual hard drive, it shows no space available or partitionable. vdisk.img is 56 KB in size. I believe the VM creation tool is not provisioning qcow2 or raw drive space correctly. For clarity, the image from the Win10 install, post driver installation, shows the drive with zero space.
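     A quick way to confirm whether the image was actually provisioned is qemu-img; the paths below are assumptions based on a default Unraid domains share:

       # Inspect the image the VM tool created; "virtual size" should match what was requested
       qemu-img info /mnt/user/domains/Windows10/vdisk1.img

       # Create a sparse 120 GB qcow2 image by hand to compare against
       qemu-img create -f qcow2 /mnt/user/domains/test/vdisk1.qcow2 120G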
  2. Occasionally a Logitech USB receiver for a mouse/keyboard, but during this event, nothing else.
  3. Update: the exclamation mark was my fault. I didn't put the USB drive in the same slot, and the boot order reverted. After correcting the boot order to the USB drive, it booted just fine. The root=sda change did in fact fix the issue; however, it concerns me that it will be needed going forward. Why would it be needed now?
  4. OK, recreated the USB: same issue. Adding root=sda to syslinux.cfg as suggested by Squid: new issue. Now I get a black screen with a cursor and an intimidating exclamation mark only.
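     For reference, the root=sda workaround goes on the append line of the default boot entry in syslinux.cfg on the flash drive; a sketch of a stock Unraid entry with the parameter added:

       label Unraid OS
         menu default
         kernel /bzimage
         append initrd=/bzroot root=sda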
  5. Updated my AMD and Intel boxes to trial 6.9.0-beta25. The Intel box boots fine; the AMD Threadripper 1950X kernel panics and is unable to boot.
  6. Ok, I deleted the network config files (noted below) and was able to restore connectivity to the machine. I have no idea why I would need to delete them, as nothing else changed. I will be trying the Mellanox card tools simply to get the card info and update it if I need to, but I will run on official images, as I would rather have full support instead of just community support. Since this problem was resolved by deleting the network configs (I did check the config in the GUI, and it is identical to how it is set up again now), I wanted to report it. Since it's fixed, this can be closed in case it gets reported by others, unless you want further diagnostics/troubleshooting.
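     For anyone hitting the same thing, these are the files I mean; on Unraid the network configuration lives on the flash drive (standard paths assumed):

       # Stop the array first, then remove the stored network config
       rm /boot/config/network.cfg
       rm /boot/config/network-rules.cfg
       # Reboot; Unraid regenerates a default (DHCP) config on startup
       reboot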
  7. In the Tools menu, I am unable to roll back to Stable directly; I have to step back through intermediate versions. This occurs on both of my servers, Intel and AMD. I created a GIF of the screen recording, to save space, showing what occurs. I used a clean install on one server to test rolling forward and back, and the symptoms are the same. Steps to reproduce: go to Tools > Update OS, click the branch dropdown showing "next", and change it to "stable"; it reverts back to Next every time.
  8. This was either my fault, or the beta enabled a feature I did not expect, causing loss of LAN connectivity due to misrouting. Under Docker Settings (array stopped):
       Host access to custom networks: Enabled
       Preserve user defined networks: Yes
     was corrected to:
       Host access to custom networks: Disabled
       Preserve user defined networks: No
     and now I have access again.
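     If you suspect the same misrouting, comparing the host routing table and Docker's network list before and after toggling the setting can make the extra routes visible (the network name below is an assumption; yours will differ):

       # List kernel routes; host access to custom networks adds routes/interfaces here
       ip route show

       # List Docker networks; user-defined custom networks show up in this output
       docker network ls
       docker network inspect br0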
  9. Ok, I have reloaded both dockers and still cannot access the WebUI for either of them; there is no error indicating why. One box is on 6.9.0-beta24 and the other is on stable. Any ideas what I can look at? I verified the correct port and protocol. I simply cannot access the WebUI, as if it weren't there, even though it shows as started.
  10. I have two DelugeVPN dockers, and I can no longer access the WebUI in either of them, but the run command shows the WebUI started (a couple of checks are sketched below).
      2020-07-09 13:17:30,298 DEBG 'watchdog-script' stdout output: [info] Starting Deluge Web UI...
      2020-07-09 13:17:30,299 DEBG 'watchdog-script' stdout output: [info] Deluge Web UI started
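      A couple of checks that might narrow this down (the container name is an assumption; substitute your own):

        # Confirm the mapped ports on the running container
        docker ps --format '{{.Names}}\t{{.Ports}}'

        # Probe the WebUI directly from the host (8112 is Deluge's default web port)
        curl -v http://localhost:8112

        # Look for errors after the "Web UI started" line
        docker logs --tail 50 binhex-delugevpn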
  11. Upgraded two servers: one is an AMD board and the other is Intel, a Dell R420 server. The R420 no longer recognizes the Mellanox card at all. The PCI riser is working, as the USB drive connected to it is the boot device, but the card itself is a no-go in this version. It worked in 6.9.0-beta22 but not in beta24, and when I rolled back to 22, it still didn't work. I had to roll back to Stable for the card to begin working again.
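      To tell whether a card like this drops off the bus entirely or just loses its driver, lspci is the first stop (the bus address is a placeholder):

        # Check whether the kernel sees the card on the PCI bus at all
        lspci -nn | grep -i mellanox

        # If it enumerates, check which driver (if any) bound to it
        lspci -k -s <bus:dev.fn>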
  12. Jul 6 13:22:43 Arcanine dhcpcd[2752]: route socket overflowed - learning interface state
      Jul 6 13:22:48 Arcanine dhcpcd[2752]: route socket overflowed - learning interface state
      Jul 6 13:22:52 Arcanine dhcpcd[2752]: route socket overflowed - learning interface state
      Jul 6 13:23:03 Arcanine dhcpcd[2752]: route socket overflowed - learning interface state
      Jul 6 13:23:03 Arcanine dhcpcd[2752]: route socket overflowed - learning interface state
      Jul 6 13:23:13 Arcanine dhcpcd[2752]: route socket overflowed - learning interface state
      Jul 6 13:23:18 Arcanine dhcpcd[2752]: route socket overflowed - learning interface state
      Jul 6 13:23:18 Arcanine dhcpcd[2752]: route socket overflowed - learning interface state
      Jul 6 13:23:49 Arcanine dhcpcd[2752]: route socket overflowed - learning interface state
      Jul 6 13:23:49 Arcanine dhcpcd[2752]: route socket overflowed - learning interface state
      Jul 6 13:24:04 Arcanine dhcpcd[2752]: route socket overflowed - learning interface state
      Jul 6 13:24:08 Arcanine dhcpcd[2752]: route socket overflowed - learning interface state
      Jul 6 13:24:14 Arcanine dhcpcd[2752]: route socket overflowed - learning interface state
      Jul 6 13:24:19 Arcanine dhcpcd[2752]: route socket overflowed - learning interface state
      Jul 6 13:24:19 Arcanine dhcpcd[2752]: route socket overflowed - learning interface state
      I am getting spammed with the above error and have been attempting to Google the root cause; nothing has been helpful. I have a 4-port NIC bonded, with the switch also set to 802.3ad, and a single onboard NIC set to DHCP (a fallback to allow management access in case the bond breaks). A couple of state checks are sketched below.
      arcanine-diagnostics-20200706-1330.zip
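      A couple of places to inspect the bond and dhcpcd state directly (interface names are assumptions based on a default Unraid bond):

        # Verify the 802.3ad bond negotiated correctly with the switch
        cat /proc/net/bonding/bond0

        # Confirm dhcpcd is only managing the interface you intend
        ps aux | grep dhcpcd
        ip -br addr show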
  13. I found the culprit that was writing to docker.img. Apparently an update to Deluge set downloads to /home/nobody/ instead of /mnt/downloads/ (a way to spot this is sketched below). Now to sort out the corruption on the cache and the segfaults.
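      For anyone chasing a similar culprit: writes that land inside the container layer (rather than a mapped volume) end up in docker.img, and docker can show them (the container name is an assumption):

        # Files created/changed inside the container's own layer;
        # anything large here is being written to docker.img, not a volume
        docker diff binhex-delugevpn

        # Rank containers by writable-layer size
        docker ps --size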
  14. I had set the 100 GB back when I had one docker verbose-logging and writing to the docker image incorrectly. Once fixed, I left the 100 GB image and never downsized it; I had the extra space, so I left it that way. Nothing was supposed to be writing anything to the image itself, only to /mnt/user and /appdata, so I am unsure what was writing incorrectly. Unless I missed something a while back, I don't recall anything writing to the docker image at all in over a year. I will dig into the CSRF error later, after the rest is settled. I am now also getting segfaults:
      Jun 20 17:23:31 Arcanine kernel: vnstati[15525]: segfault at 20 ip 0000000000407f7a sp 00007ffddc1149d0 error 4 in vnstati[400000+16000]
  15. Well, I'm in a world of hurt now. My docker.img decided at 3 AM to have something write to it until it was full, corrupting it, so I have to rebuild all my dockers (thank you, CA Previous Apps, for helping make this easier!). I am now also getting these in my logs:
      Jun 20 14:15:46 Arcanine root: error: /update.php: missing csrf_token
      Jun 20 14:15:46 Arcanine root: error: /update.php: missing csrf_token
      **** Docker image file is getting full (currently 100 % used) ****
      **** Unable to write to Docker Image ****
      Jun 20 14:16:26 Arcanine emhttpd: shcmd (259): /usr/local/sbin/mount_image '/mnt/user/docker/docker.img' /var/lib/docker 600
      Jun 20 14:16:26 Arcanine root: /mnt/user/docker/docker.img is in-use, cannot mount
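      For keeping an eye on how full the image actually is, something like this helps (paths per a stock Unraid install):

        # How much of the loop-mounted image is in use
        df -h /var/lib/docker

        # Per-category breakdown: images, containers, local volumes, build cache
        docker system df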
  16. Hi Johnnie, thanks for confirming that. It is odd that all my SSDs on that controller are trimming except 512SSD-TOP, which I believe is a different model of SSD. I think there is corruption on my cache pool after all of this (a scrub, sketched below, should confirm it).
      Jun 19 12:47:35 Arcanine root: mount: /var/lib/docker: mount(2) system call failed: File exists.
      Jun 19 12:47:35 Arcanine root: mount error
      Jun 19 12:47:35 Arcanine emhttpd: shcmd (478): exit status: 1
      Jun 19 12:47:35 Arcanine kernel: BTRFS warning (device loop2): duplicate device fsid:devid for 5a56f8e9-9eec-4ee0-9bb4-9d88a7c04293:1 old:/dev/loop2 new:/dev/loop3
      Jun 19 12:47:35 Arcanine kernel: BTRFS warning (device loop2): duplicate device fsid:devid for 5a56f8e9-9eec-4ee0-9bb4-9d88a7c04293:1 old:/dev/loop2 new:/dev/loop3
      Jun 19 13:00:01 Arcanine speedtest: Internet bandwidth test started
      Jun 19 13:00:01 Arcanine speedtest: Host:
      Jun 19 13:00:01 Arcanine speedtest:
      Jun 19 13:00:01 Arcanine speedtest: Internet bandwidth test completed
      Jun 19 13:08:45 Arcanine kernel: BTRFS warning (device sdah1): csum failed root 5 ino 22976309 off 48177983488 csum 0x98f94189 expected csum 0x3fe1c9c2 mirror 1
      Jun 19 13:08:45 Arcanine kernel: BTRFS warning (device sdah1): csum failed root 5 ino 22976309 off 48177983488 csum 0x98f94189 expected csum 0x3fe1c9c2 mirror 1
      Jun 19 13:08:45 Arcanine kernel: BTRFS warning (device sdah1): csum failed root 5 ino 22976309 off 48177983488 csum 0x98f94189 expected csum 0x3fe1c9c2 mirror 1
      Jun 19 13:08:50 Arcanine kernel: BTRFS warning (device sdah1): csum failed root 5 ino 22976366 off 6451195904 csum 0x98f94189 expected csum 0xce9bfe79 mirror 1
      Jun 19 13:08:50 Arcanine kernel: BTRFS warning (device sdah1): csum failed root 5 ino 22976366 off 6451195904 csum 0x98f94189 expected csum 0xce9bfe79 mirror 1
      Jun 19 13:08:50 Arcanine kernel: BTRFS warning (device sdah1): csum failed root 5 ino 22976366 off 6451195904 csum 0x98f94189 expected csum 0xce9bfe79 mirror 1
      Jun 19 13:08:50 Arcanine kernel: BTRFS warning (device sdah1): csum failed root 5 ino 22976366 off 6451195904 csum 0x98f94189 expected csum 0xce9bfe79 mirror 1
      Jun 19 13:09:36 Arcanine kernel: BTRFS warning (device sdah1): csum failed root 5 ino 22976083 off 24481468416 csum 0x98f94189 expected csum 0xa7fe654f mirror 1
      Jun 19 13:09:36 Arcanine kernel: BTRFS warning (device sdah1): csum failed root 5 ino 22976083 off 24481468416 csum 0x98f94189 expected csum 0xa7fe654f mirror 1
      Jun 19 13:09:36 Arcanine kernel: BTRFS warning (device sdah1): csum failed root 5 ino 22976083 off 24481468416 csum 0x98f94189 expected csum 0xa7fe654f mirror 1
      Jun 19 13:09:38 Arcanine crond[2763]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
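      Given the csum failures, a scrub and a pass over the device error counters would confirm whether the corruption is real (mount point per a stock Unraid cache pool):

        # Kick off a scrub on the cache pool and watch its progress
        btrfs scrub start /mnt/cache
        btrfs scrub status /mnt/cache

        # Per-device error counters (read/write/flush/corruption/generation)
        btrfs device stats /mnt/cache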
  17. I wanted to rule everything out, so I swapped to a PCI controller with all-new cables and shifted the drives in the chassis. I am still getting errors, and have added a new one; I guess my PCI controller doesn't fully handle TRIM (the discard check below would tell).
      fstrim: /mnt/disks/512SSD-TOP: the discard operation is not supported
      arcanine-diagnostics-20200618-1448.zip
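      Whether discards can actually pass through a given controller shows up in lsblk's discard columns (the device node is an assumption):

        # Non-zero DISC-GRAN / DISC-MAX means the kernel can issue discards to this device
        lsblk --discard /dev/sdo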
  18. I will try new cables this evening (good thing I ordered extras). I will pull one off another drive that isn't erroring and put that drive on this port with a new cable. As for BTRFS, the only BTRFS filesystems I can recall in my system are the cache drives, so something is occurring with those as well as the unassigned drive.
  19. Hi Johnnie, that is an unassigned drive; I'm wondering if it's the drive itself. I use that drive for SQL databases and imgs. It is a Marvell-based SSD. I know previous versions had issues trimming Marvell-based drives, but I thought that was resolved. The SATA cable, housing, and backplane are all new. The only remaining suspects are the drive and the SATA port itself on the motherboard (checks sketched below).
      Jun 18 11:23:35 Arcanine unassigned.devices: Mount of '/dev/sdo1' failed. Error message: mount: /mnt/disks/250GB_BAY2: wrong fs type, bad option, bad superblock on /dev/sdo1, missing codepage or helper program, or other error.
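      Before blaming the hardware, the partition signature and drive health can be checked directly (device node taken from the error above):

        # What the kernel thinks is on the partition
        blkid /dev/sdo1
        file -s /dev/sdo1

        # Overall SMART health verdict for the drive itself
        smartctl -H /dev/sdo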
  20. Hi Johnnie, this seems to happen every time my cron calls fstrim (hourly, along the lines sketched below) or the mover runs. This has been occurring for about 7 days now. I have replaced all the components above trying to track down the root cause.
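      For context, the hourly trim is just a cron entry along these lines (a sketch; my exact entry may differ):

        # Hourly: trim all mounted filesystems that support discard
        0 * * * * /sbin/fstrim -a &> /dev/null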
  21. Hello, I have recently had some CRC errors pop back up, and I am beginning to think it's drive or motherboard controller related. I have replaced all the SATA cables (the only ones erroring are SATA direct to the motherboard, not on my expander cards), the enclosure (5.25" to 6x2.5" hot-swap bays), and the trays. I will be ordering an 8-port PCI card and trying it soon. I wanted to check some things here, as I am getting errors I do not fully understand:
      fstrim: /var/lib/docker: FITRIM ioctl failed: Input/output error
      Jun 18 10:33:30 Arcanine kernel: print_req_error: I/O error, dev loop2, sector 21048920
      Jun 18 10:33:30 Arcanine kernel: BTRFS warning (device loop2): failed to trim 30 block group(s), last error -5
      Jun 18 10:33:30 Arcanine kernel: BTRFS warning (device loop2): failed to trim 1 device(s), last error -5
      I have my array started, but all dockers and VMs are off at the moment. I think I may have some corruption going on. I am also still getting other cron job errors I can't figure out. I have uninstalled and reinstalled TinC, and the error persists with or without it:
      error: stat of /var/log/tinc.* failed: No such file or directory
      Do I just need to make this directory (see the note below)? An alternative to changing the 6x2.5" SSDs to a PCI controller is moving the cache to 2x8TB drives I have available in the existing 42-bay enclosure. Any ideas? Diags attached.
      arcanine-diagnostics-20200618-1041.zip
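      On the TinC error: that "stat of /var/log/tinc.*" message is the form logrotate prints when a configured glob matches nothing, so creating an empty log file (path from the error) should quiet it until tinc writes its own:

        # Satisfy the logrotate glob with an empty file (the filename is an assumption)
        touch /var/log/tinc.log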
  22. To expand on my answer: if #6 / #7 happen, my two server licences will instantly be joined by 6 more, so I can convert my entire server farm from ESXi to Unraid. I would also likely add more servers if expansion worked.