joedotmac

Members
  • Content Count

    79
  • Joined

  • Last visited

Community Reputation

0 Neutral

About joedotmac

  • Rank
    Advanced Member

  1. I figured it out. Added the drive back to the cache slot; the grayed-out selection for the number of slots then became selectable. Changed it to one slot, and xfs is now available as a format option. Thanks.
  2. I ran into the issue described in the thread where TRIM operations do not occur on a btrfs-formatted cache drive, which results in the cache drive filling up. The thread mentions a fix is possible with a kernel update that hasn't yet been implemented; the current workaround is to change the cache disk's format from btrfs to xfs. I have a full backup of the data: I performed an rsync -av of everything on the cache drive to an unassigned drive. I'm hung up around steps 5 and 6 and the result: I'm unable to get the system to offer the xfs option for the cache drive. How does someone eliminate the two empty slots in the cache drive configuration so that xfs is available as a format selection? I can change the number of slots for the array disks, but the cache section has three slots and the option to change it to a single cache drive is grayed out. The selection currently indicates three, and I'm using a single cache drive.
  3. I followed these instructions exactly. Now, upon restarting the array, the system complains "Unmountable: No file system (no btrfs devices)". I stop the array and go into the drive config, and xfs is non-existent as a format option; the only options are auto, btrfs, and btrfs-encrypted. xfs is nowhere to be found. I'm hesitant to remove the drive from the config and re-add the blank xfs-formatted cache drive.
  4. Experiencing inconsistent results after configuring Unraid with a bond interface and a standalone management interface, both on the same LAN, 192.168.199.0/24:
bond0 (br0), MAC addr 29:30, IP 192.168.199.111 (eth0, eth1, eth2)
eth4, MAC addr 29:34, IP 192.168.199.100
It seems the interface associations indicated in the GUI are not consistent with what's being advertised at layer 2. From a client machine on the same 192.168.199.0/24 LAN, displaying the ARP cache shows both the .111 and .100 IPs using the same MAC address (29:30). Then I can initiate a file transfer to .111 and it will use eth3, not the expected bond interface. Are there any files I can inspect, or methods to force which MAC address is advertised to the switch out of a physical ethernet interface?
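A quick way to check the GUI's claims against reality is to read each interface's MAC straight from the kernel. This is a generic sketch (nothing in it is specific to this box); the bonding note in the comments is a suggestion to check, not a confirmed diagnosis:

```shell
#!/bin/sh
# List every network interface and the MAC it actually carries,
# to compare against the GUI and against the client's ARP cache.
for dev in /sys/class/net/*; do
    printf '%-10s %s\n' "$(basename "$dev")" "$(cat "$dev/address")"
done
# On the Unraid box, the bonding driver's status file (if the bond
# exists) shows each slave's permanent hardware address:
#   grep -E 'Slave Interface|Permanent HW addr' /proc/net/bonding/bond0
# From a client on the LAN, compare with the ARP cache:
#   arp -an | grep 192.168.199
```

One common Linux cause of two IPs resolving to one MAC is "ARP flux": by default the kernel may answer ARP requests for any local IP on any interface. The net.ipv4.conf.*.arp_ignore and arp_announce sysctls control this behavior and may be worth checking here.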
  5. Was able to figure out why my 6 Gbps SATA SSD was resetting. I was plagued for months: replacing cables, trying different SATA ports, adding a PCI SATA card. Regardless, the drive would eventually reset down to 1.5 Gbps and would still reset multiple times a day. The solution: the power supply. A 450-watt supply wasn't enough to drive the six drives, four fans, and water pump. Swapping in a 550-watt supply did the trick to resolve these messages:
Apr 9 11:36:32 unraid kernel: ata5.00: exception Emask 0x10 SAct 0x0 SErr 0x4090000 action 0xe frozen
Apr 9 11:36:32 unraid kernel: ata5.00: irq_stat 0x00400040, connection status changed
Apr 9 11:36:32 unraid kernel: ata5: SError: { PHYRdyChg 10B8B DevExch }
Apr 9 11:36:32 unraid kernel: ata5.00: failed command: READ DMA EXT
Apr 9 11:36:32 unraid kernel: ata5.00: cmd 25/00:20:88:b5:86/00:00:17:01:00/e0 tag 9 dma 16384 in
Apr 9 11:36:32 unraid kernel: res 50/00:00:97:ab:5f/00:00:d1:00:00/e0 Emask 0x10 (ATA bus error)
Apr 9 11:36:32 unraid kernel: ata5.00: status: { DRDY }
Apr 9 11:36:32 unraid kernel: ata5: hard resetting link
Apr 9 11:36:37 unraid kernel: ata5: SATA link up 1.5 Gbps (SStatus 113 SControl 310)
Apr 9 11:36:37 unraid kernel: ata5.00: ACPI cmd ef/10:06:00:00:00:00 (SET FEATURES) succeeded
Apr 9 11:36:37 unraid kernel: ata5.00: ACPI cmd f5/00:00:00:00:00:00 (SECURITY FREEZE LOCK) filtered out
Apr 9 11:36:37 unraid kernel: ata5.00: ACPI cmd b1/c1:00:00:00:00:00 (DEVICE CONFIGURATION OVERLAY) filtered out
Apr 9 11:36:37 unraid kernel: ata5.00: NCQ Send/Recv Log not supported
Apr 9 11:36:37 unraid kernel: ata5.00: ACPI cmd ef/10:06:00:00:00:00 (SET FEATURES) succeeded
Apr 9 11:36:37 unraid kernel: ata5.00: ACPI cmd f5/00:00:00:00:00:00 (SECURITY FREEZE LOCK) filtered out
Apr 9 11:36:37 unraid kernel: ata5.00: ACPI cmd b1/c1:00:00:00:00:00 (DEVICE CONFIGURATION OVERLAY) filtered out
Apr 9 11:36:37 unraid kernel: ata5.00: NCQ Send/Recv Log not supported
Apr 9 11:36:37 unraid kernel: ata5.00: configured for UDMA/33
Apr 9 11:36:37 unraid kernel: ata5: EH complete
Apr 9 13:00:34 unraid kernel: ata5.00: exception Emask 0x10 SAct 0x0 SErr 0x4090000 action 0xe frozen
Apr 9 13:00:34 unraid kernel: ata5.00: irq_stat 0x00400040, connection status changed
Apr 9 13:00:34 unraid kernel: ata5: SError: { PHYRdyChg 10B8B DevExch }
Apr 9 13:00:34 unraid kernel: ata5.00: failed command: READ DMA EXT
Apr 9 13:00:34 unraid kernel: ata5.00: cmd 25/00:18:e8:0e:c2/00:00:af:00:00/e0 tag 2 dma 12288 in
Apr 9 13:00:34 unraid kernel: res 50/00:00:df:0e:c2/00:00:af:00:00/e0 Emask 0x10 (ATA bus error)
Apr 9 13:00:34 unraid kernel: ata5.00: status: { DRDY }
Apr 9 13:00:34 unraid kernel: ata5: hard resetting link
Apr 9 13:00:39 unraid kernel: ata5: SATA link up 1.5 Gbps (SStatus 113 SControl 310)
Apr 9 13:00:39 unraid kernel: ata5.00: ACPI cmd ef/10:06:00:00:00:00 (SET FEATURES) succeeded
Apr 9 13:00:39 unraid kernel: ata5.00: ACPI cmd f5/00:00:00:00:00:00 (SECURITY FREEZE LOCK) filtered out
Apr 9 13:00:39 unraid kernel: ata5.00: ACPI cmd b1/c1:00:00:00:00:00 (DEVICE CONFIGURATION OVERLAY) filtered out
Apr 9 13:00:39 unraid kernel: ata5.00: NCQ Send/Recv Log not supported
Apr 9 13:00:39 unraid kernel: ata5.00: ACPI cmd ef/10:06:00:00:00:00 (SET FEATURES) succeeded
Apr 9 13:00:39 unraid kernel: ata5.00: ACPI cmd f5/00:00:00:00:00:00 (SECURITY FREEZE LOCK) filtered out
Apr 9 13:00:39 unraid kernel: ata5.00: ACPI cmd b1/c1:00:00:00:00:00 (DEVICE CONFIGURATION OVERLAY) filtered out
Apr 9 13:00:39 unraid kernel: ata5.00: NCQ Send/Recv Log not supported
Apr 9 13:00:39 unraid kernel: ata5.00: configured for UDMA/33
Apr 9 13:00:39 unraid kernel: ata5: EH complete
Apr 9 13:11:07 unraid kernel: ata5: exception Emask 0x10 SAct 0x0 SErr 0x4090000 action 0xe frozen
Apr 9 13:11:07 unraid kernel: ata5: irq_stat 0x00400040, connection status changed
Apr 9 13:11:07 unraid kernel: ata5: SError: { PHYRdyChg 10B8B DevExch }
Apr 9 13:11:07 unraid kernel: ata5: hard resetting link
Apr 9 13:11:12 unraid kernel: ata5: SATA link up 1.5 Gbps (SStatus 113 SControl 310)
Apr 9 13:11:12 unraid kernel: ata5.00: ACPI cmd ef/10:06:00:00:00:00 (SET FEATURES) succeeded
Apr 9 13:11:12 unraid kernel: ata5.00: ACPI cmd f5/00:00:00:00:00:00 (SECURITY FREEZE LOCK) filtered out
Apr 9 13:11:12 unraid kernel: ata5.00: ACPI cmd b1/c1:00:00:00:00:00 (DEVICE CONFIGURATION OVERLAY) filtered out
Apr 9 13:11:12 unraid kernel: ata5.00: NCQ Send/Recv Log not supported
Apr 9 13:11:12 unraid kernel: ata5.00: ACPI cmd ef/10:06:00:00:00:00 (SET FEATURES) succeeded
Apr 9 13:11:12 unraid kernel: ata5.00: ACPI cmd f5/00:00:00:00:00:00 (SECURITY FREEZE LOCK) filtered out
Apr 9 13:11:12 unraid kernel: ata5.00: ACPI cmd b1/c1:00:00:00:00:00 (DEVICE CONFIGURATION OVERLAY) filtered out
Apr 9 13:11:12 unraid kernel: ata5.00: NCQ Send/Recv Log not supported
Apr 9 13:11:12 unraid kernel: ata5.00: configured for UDMA/33
Apr 9 13:11:12 unraid kernel: ata5: EH complete
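The reasoning above (six drives plus fans and a pump outrunning a 450 W supply) can be sanity-checked with rough arithmetic. Every wattage below is an assumed typical figure, not a measurement from this build:

```shell
#!/bin/sh
# Back-of-the-envelope power-budget estimate. All per-device wattages
# are assumptions -- check your components' datasheets. Drive spin-up
# draw on the 12 V rail is the usual worst case, so that's what we use.
DRIVES=6;  DRIVE_SPINUP_W=25   # assumed peak per drive at spin-up
FANS=4;    FAN_W=3             # assumed per case fan
PUMP_W=15                      # assumed water pump
BOARD_W=150                    # assumed motherboard + CPU under load
TOTAL=$(( DRIVES * DRIVE_SPINUP_W + FANS * FAN_W + PUMP_W + BOARD_W ))
echo "Estimated peak draw: ${TOTAL} W"
```

Note that a nominal total well under 450 W can still brown out if the 12 V rail can't deliver its share during spin-up or load spikes, which would be consistent with PHYRdyChg link resets clearing after a PSU swap.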
  6. The problem is now solved. Not certain what actually solved it; removing and reinstalling all the dockers and upgrading to 6.3.5 are possibilities.
  7. Wasn't sure how to properly migrate. I was under the assumption that a cache drive was truly a cache, like one would find on a CPU or similar: a buffer of high-speed flash or the like. That assumption was wrong. The result of removing it the way I did was a server config with hostnames and the like over a year old. Was able to recover by removing the docker and VM configurations, adding "new" VM definitions in the GUI using the *.img disks, and re-applying dockers as new.
  8. Too late; removed it. Looked in syslog and found the current and native sizes:
Jun 30 12:23:37 unraid kernel: ata8.00: HPA detected: current 468860015, native 468862128
Then used hdparm to remove the HPA:
hdparm -N p468862128 /dev/sdi
hdparm -N /dev/sdi
That removed the HPA. Though I munged something else, as the GUI shows no dockers, and the VMs listed in the GUI appear to be a set I was running a year ago. Going to search the forum to see about finding a similar symptom.
  9. I have a single cache drive in Unraid. HPA is enabled, and I would like to disable it. What would be the recommended method? I tried:
hdparm -N /dev/sdi
The command line comes back and tells me HPA is enabled. Any ideas? Could I simply remove it from being selected as the cache drive, preclear the drive, and re-select it as the cache drive? Thanks.
  10. Full snapshot and syslog since May 3rd. Thanks. unraid-syslog-20170505-0955.zip
  11. I had never set up the email notification settings. The default email notification settings appear to select Gmail, but no address or credentials of any type were configured. The notification settings were selected only for browser notifications; email wasn't checked to engage on any of the message levels. Regardless, I updated the settings to use my own email domain. Have to see if anything changes; I didn't see a test button or the like. Thanks for the insight. Will be looking to see if there are any changes.
  12. Another interesting find in the errors is this: it seems that when the Dynamix "monitor" script fails, it tries to call out to a Google page, possibly to post event details to a developer support page? The credentials used for the connection attempt aren't valid, so it barfs even on that.
Apr 30 05:05:01 unraid kernel: monitor[17472]: segfault at 0 ip 00000000005f42ad sp 00007fff178379e0 error 4 in php[400000+724000]
Apr 30 05:05:16 unraid crond[1737]: exit status 139 from user root /usr/local/emhttp/plugins/dynamix/scripts/monitor &> /dev/null
Apr 30 05:05:16 unraid sSMTP[17495]: Creating SSL connection to host
Apr 30 05:05:16 unraid sSMTP[17495]: SSL connection using ECDHE-RSA-AES128-GCM-SHA256
Apr 30 05:05:16 unraid sSMTP[17495]: Authorization failed (535 5.7.8 https://support.google.com/mail/?p=BadCredentials l46sm5351894ota.0 - gsmtp)
Apr 30 05:06:01 unraid crond[1737]: exit status 139 from user root /usr/local/emhttp/plugins/dynamix/scripts/monitor &> /dev/null
Apr 30 05:06:01 unraid sSMTP[17560]: Creating SSL connection to host
Apr 30 05:06:01 unraid sSMTP[17560]: SSL connection using ECDHE-RSA-AES128-GCM-SHA256
Apr 30 05:06:01 unraid sSMTP[17560]: Authorization failed (535 5.7.8 https://support.google.com/mail/?p=BadCredentials w6sm5454754ota.19 - gsmtp)
Apr 30 05:06:16 unraid crond[1737]: exit status 127 from user root /usr/local/emhttp/plugins/dynamix.system.stats/scripts/sa1 1 1 &>/dev/null
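A possible reading of the log above (an assumption, not confirmed in this thread): the outbound Google connection is sSMTP, Unraid's mail relay, trying to send the notification e-mail after the monitor script segfaults, and the support.google.com URL is just part of Gmail's 535 rejection text. A sketch to pull both failure types out of a syslog file; LOG defaults to a standard syslog path and can be overridden:

```shell
#!/bin/sh
# Surface monitor segfaults and the sSMTP auth failures that follow them.
# LOG is overridable so the same one-liner works on a saved log copy.
LOG="${LOG:-/var/log/syslog}"
grep -E 'monitor.*segfault|sSMTP.*Authorization failed' "$LOG"
```

If the two failure types always appear paired with the same timestamps, that would support the sSMTP-relay reading rather than the monitor script itself phoning home.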
  13. Thanks Frank. I made the suggested change to the syslog view to narrow down any indication of flash drive issues. I'm not seeing anything that would indicate a flash drive problem. Haven't had much of any change in mitigating the issues: VMs and plugins stop randomly, and CPU percentages on the dashboard still hang after 30 minutes or so of system up-time. Current snapshot of errors and warnings over the last 24 hours attached. I keep researching, looking for a release note or some glimmer of hope, for even a small success that may lead to something more considerable. syslog050417-errors.txt
  14. Thanks Frank. Will this effectively make greater use of available RAM, lessening IO impact on the drives? I made the suggested change, then reloaded Unraid for a fresh boot and initialization of all processes, dockers, and VMs. Would anything here be indicative of the Unraid USB flash drive's integrity being questionable?
  15. Mover finishes. Ten minutes later the VM stops for reasons not established. The last error in syslog was six hours previous.
May 3 03:41:43 unraid root: mover finished
May 3 03:52:08 unraid kernel: br0: port 2(vnet0) entered disabled state
May 3 03:52:08 unraid kernel: device vnet0 left promiscuous mode
May 3 03:52:08 unraid kernel: br0: port 2(vnet0) entered disabled state
May 3 03:52:08 unraid kernel: br0: port 3(vnet1) entered disabled state
May 3 03:52:08 unraid kernel: device vnet1 left promiscuous mode
May 3 03:52:08 unraid kernel: br0: port 3(vnet1) entered disabled state