aim60

Members
  • Posts: 135
  • Joined
  • Last visited
  • Days Won: 1

Posts posted by aim60

  1. Thought it might be related to the NICs being virtio, so I changed them to e1000, but no difference.

     

    root@Tower-VM:~# ifconfig -s
    Iface      MTU    RX-OK RX-ERR RX-DRP RX-OVR    TX-OK TX-ERR TX-DRP TX-OVR Flg
    br0       1500      607      0      0 0           384      0      0      0 BMRU
    br1       1500        2      0      0 0            32      0      0      0 BMRU
    eth0      1500      607    242      0 0           499      0      0      0 BMPRU
    eth1      1500        2      1      0 0            61      0      0      0 BMPRU
    lo       65536        2      0      0 0             2      0      0      0 LRU

  2. I'm running the RC series in an unRAID VM, but I don't think that matters.

     

    I did a clean install of RC4, and Interface Rules was not an option in the GUI. No change after upgrading to RC5.  There was no network-rules.cfg in /config, so I brought over a copy from a 6.3.3 system and edited it to reflect the MAC addresses in the VM.  That caused the GUI to show Interface Rules.  However, after rebooting RC5, network-rules.cfg was gone.  So I assumed that Limetech had removed the feature.
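    For anyone following along, the file I copied over is essentially a set of udev persistent-net rules, something like this (the MAC addresses below are made up; the real ones come from your NICs or, in my case, the VM definition):

      SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="52:54:00:aa:bb:01", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
      SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="52:54:00:aa:bb:02", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"

    Swapping the NAME values (or the MAC addresses) is how eth0 and eth1 get reassigned.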

  3. UEFI support makes running unRAID in a VM easier.  Prepare the USB key as you normally would, and set the VM BIOS to OVMF.  No need for a disk image of the USB key.

    I was even able to upgrade from RC4 to RC5 via the plugin.

     

    On every boot, I'm dropped into the UEFI shell, and have to kick things off with "fs0:/EFI/boot/bootx64.efi".  I added a boot entry using bcfg, but it gets wiped out on boot.  Was this by design to keep people out of trouble? 
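    For reference, this is roughly what I type at the shell on each boot, and the kind of entry I tried to add with bcfg (the "unRAID" description string is arbitrary):

      Shell> fs0:/EFI/boot/bootx64.efi

      Shell> bcfg boot dump
      Shell> bcfg boot add 0 fs0:\EFI\boot\bootx64.efi "unRAID"

    A startup.nsh in the root of the flash drive containing that first line should also auto-run at the shell prompt, though I haven't checked whether unRAID leaves it alone across boots.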

  4. Looks like support for Interface Rules got dropped somewhere in 6.4.  I found it useful.

     

    My motherboard has 4 NICs.  eth0 and eth1 were physically harder to get to than eth2 and eth3, so I swapped them around.

     

    IMHO, if this was done to reduce clutter in the GUI, it's OK that little-used features are only available by editing files.

  5. 7 hours ago, bonienl said:

     

    Do you have a reference?

     

    Not sure if it is possible, but if it is, then it can be added.

    I don't know enough to say whether these are kludges or solutions good enough for a production system.

     

    This is one example:

    https://www.furorteutonicus.eu/2013/08/04/enabling-host-guest-networking-with-kvm-macvlan-and-macvtap/

    A Google search of "assign macvlan to host" comes up with several hits.

     

    Hopefully, a solution can be implemented so that dockers with their own IPs, VMs, and the host can all talk.
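    As a rough illustration of what those articles describe (I haven't tried this on unRAID, and the interface names and addresses are just examples), the idea is to give the host its own macvlan interface on the same parent NIC the containers use, so host and containers can reach each other:

      # host-side macvlan interface on the same parent NIC as the containers
      ip link add mac0 link eth0 type macvlan mode bridge
      ip addr add 192.168.1.250/24 dev mac0
      ip link set mac0 up
      # optionally route a specific container's IP via the new interface
      ip route add 192.168.1.200/32 dev mac0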

  6. On 6/5/2017 at 7:28 AM, bonienl said:

    br0 must have an IP address either static or by DHCP

    There's a use case for the host not having an IP address on a bridge.  A second NIC connecting the WAN interface of a firewall Docker or VM to the outside world is a good example.

     

    It's also useful at times to have a bridge with no host interface.  An example is a test environment where Dockers and/or VMs can talk, with no access to the outside world.  I needed to start a Windows VM with no network access.  Since I couldn't find a way to disable the virtual NIC, I created a bridge with no interface and edited the VM's XML file to use this bridge.
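    For anyone wanting to do the same, this is roughly what it amounts to (the bridge name is just an example):

      # create an isolated bridge with no physical NIC attached
      brctl addbr br-isolated
      ip link set br-isolated up

    Then point the VM's network interface at it in the XML (virsh edit):

      <interface type='bridge'>
        <source bridge='br-isolated'/>
        <model type='virtio'/>
      </interface>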

  7. Request:

    Please implement Safe Powerdown so it can be activated from the console/telnet session.

     

    Reason:

    For the past several years I have been using only CyberPower PFC UPSes.  They are reasonably affordable and designed for PFC power supplies.  Unfortunately, they don't work correctly with APCUPSD when "Power Down UPS After Shutdown" is set to Yes.  I would like the ability to run CyberPower's UPS software in a Docker container or VM, and have it initiate a power down via a scripted telnet session to UnRAID (see the sketch at the end of this post).

     

    Thinking ahead.  Still on UnRAID 5.
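    Something along these lines is what I have in mind - an expect script run from the container/VM (the host name, password, and powerdown command are placeholders; it assumes a clean-powerdown script is installed on the unRAID side):

      #!/usr/bin/expect -f
      set timeout 30
      spawn telnet tower
      expect "login:"
      send "root\r"
      expect "Password:"
      send "secret\r"
      expect "#"
      send "powerdown\r"
      expect eof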

     

  8. I really would be much happier if the system didn't automatically turn every top-level directory into a user share

    +1

     

    Feature Request - The option for either Automatically Created (the way it works now) or Manually Created Only user shares.

     

    Would rather not have to start directory names with a "." to have unRaid ignore them.

     

    Why not put those directories on a disk that is excluded from user shares?

    I keep the contents of each disk categorized by function.  It's not a big deal either way, but the sysadmin in me would like to have the choice.

     

    Been wanting to throw this one out there for years.  Just chimed in now because I discovered that I'm not the only one who would like this.

     

    Tom, there are many other things with higher priority.  I'm just requesting that you keep this on your list of things to consider for a rainy day when you have nothing else to do.


  9. dgaschk

      One disk has 4 pending sectors, but they're the same 4 that have been there for years.  And the parity check (with all new cables) shows no hardware errors.

     

    garycase

      I will definitely look into check-summing the files.  A backup server is also worth thinking about, as is segregating the really important files so a copy can be taken off-site.

     

    I've also been slowly coming to the conclusion that the only way forward is to do a correcting parity check.  And I've realized that 3000 blocks with errors is only about 3MB of data, so damage may be minimal.

     

    Thanks guys for the input.

  10. As a result of Black Friday purchases, I’ve been upgrading disks, and retiring the oldest ones.  I’ve had a few disk problems in the last 9 months, which turned out to be sata cable related.  My plan was to disturb things as little as possible, do all of the disk upgrades, and when things were stable replace all of the sata cables. Bad decision.

     

    I replaced disk6, a ST31500341AS 1.5TB, with a 2TB WDC_WD20EARX-00PASB0, and initiated a rebuild.  The result was many disk read errors on disk2.  I canceled the rebuild.  From the errors in the syslog I concluded that all of the errors were related to the sata cables.  I replaced all of the older sata cables.  While I was in the case, I also noticed that the power connector to disk2 was not fully seated, and fixed it.

     

    Before continuing, I successfully ran smartctl short disk tests on all disks.

     

    I re-initiated the rebuild on disk6.  This time the result was many read errors on disk5, and unraid marked disk5 as missing.  The syslog again indicated to me that the errors were cable/power related.  Disk5 still had one of the older sata cables, and in hindsight, it was on the loose side.  So I replaced the remaining sata cables in the system.

     

    At this point, I needed to establish confidence in the hardware.  I re-installed the original disk6, and replaced super.dat with the one from before the first disk6 replacement.  The array was set to not auto-start, and I powered up the hardware.

     

    I successfully read over 1GB from each disk with

      dd if=/dev/sdx of=/dev/null bs=65536 count=20000

    and then initiated a nocorrect parity check.

     

    The hardware seems stable. The results of the parity check were:

    49 sync errors within 1 second (housekeeping area?)

      1 sync error sometime later

        3000+  sync errors after sector 2930245632

     

    If my calculations are correct, the 3000+ errors all occurred within 16GB of the end of a 1.5TB drive (disk6).  An fdisk of disk6 is attached.

     

    My Question - Since the parity disk reflects the rebuild of a 1.5TB disk6 onto a 2.0TB disk6, might the 3000+ errors all reflect the reiserfs housekeeping of increasing the size of the disk? Or do I have corrupted data?

     

    In other words, can I run a correcting parity check and be reasonably confident that I have no data corruption?  I have no backups and would like, as much as possible, to avoid further corruption.

     

    Any suggestions on how to proceed would be greatly appreciated. 

     

    I’m thinking that once things are stable, I’ll run a reiserfsck on all of the data drives (a rough sketch of the commands is at the end of this post).

     

    An observation – anyone running a server without removable drive bays who does a fair amount of moving/replacing drives should strongly consider replacing their sata cables regularly.  The ones I just installed are Monoprice sata3 cables, and they seem more secure than any other cables I’ve used.

     

     

    5.0.2RC1, C2SEE, Celeron 1400, 4GB, Corsair VX450, (1) SIL3132 PCIx SATA controller, Intel PCI NIC, 7 drives in total.

    Syslogs_etc.zip
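    On the reiserfsck plan mentioned above, this is roughly what I have in mind for each data disk, with the filesystem unmounted (device names are examples; on unRAID, checking the /dev/mdX device rather than /dev/sdX1 keeps parity in sync if anything gets fixed):

      # read-only consistency check first
      reiserfsck --check /dev/md1

      # only if --check reports fixable corruptions
      # reiserfsck --fix-fixable /dev/md1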

  11. For the people who can resolve the server by host name when it is using DHCP, but not when it is static, this is expected behavior.  Your router is able to resolve the host name as long as the DHCP lease is active.

     

    The following works for me using a DD-WRT router, but it may not work for everybody:

     

    See if your router will allow you to assign a static DHCP lease.  That's enough for DD-WRT to permanently resolve the host name.  Since you now know what IP will always be assigned to your NIC by DHCP, you can go ahead and assign the address statically.

  12. Hi Joe, found a quirk.  Running unRaid 5.0-rc16c and preclear 1.13.

     

    In the unRaid disk settings, set Enable Auto Start to No, then reboot.  Run preclear_disks.sh -l.

     

    All disks, whether assigned to the array or not, are listed as available for clearing.

     

    Start the array.  Only the correct disks are listed for clearing.

     

    Stop the array.  The correct disks are still listed.

     

  13. If you are planning on using the APCUPSD plugin with a CyberPower UPS, read this.

     

      http://lime-technology.com/forum/index.php?topic=13411.msg127182#msg127182

     

    The plugin shuts down unraid without a problem.  Suggest setting "Power Down UPS after shutdown" to NO, and dedicating the UPS to unraid.

     

    On my CP1500PFCLCD, if you set it to YES, the UPS doesn't shut down when you would expect it to.  At power fail, it sets an internal 60-minute timer and shuts itself down when the timer expires.  And the timer doesn't reset if utility power is restored!  If you power up the server before the timer expires, be prepared for an unexpected server crash.  A manual power cycle of the UPS will clear the timer.

     

    There's an alternative to APCUPSD that seems to have Cyberpower compatibility.  No idea what it would take to get it running under unraid.

     

      http://www.networkupstools.org
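    For what it's worth, CyberPower USB units are listed under NUT's usbhid-ups driver, so a minimal ups.conf would probably look something like this (the section name is arbitrary, and I haven't tried it under unRAID):

      [cyberpower]
          driver = usbhid-ups
          port = auto
          desc = "CP1500PFCLCD"

    The driver would then be started with upsdrvctl start and queried with upsc cyberpower.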