mattkhan's Achievements


  1. No comment from them on when they will upgrade openvpn-as to openvpn 2.4, but it seems it is not necessary anyway, as the config options to avoid this are available now. There are two options. The first is client-side only:

         reneg-bytes 64000000

     Alternatively, if you control both the server and the client, then you can set this directive in both the server and client configs (via the Advanced VPN page):

         cipher AES-256-CBC

     It doesn't seem there is a way to set this via the CLI or in a config file, so I don't suppose there is anything you can do to set this in the container. I suppose you could add something to the setup docs though. FWIW, further reading suggests a few more directives; set the following on both server and client for a reasonably hardened config:

         cipher AES-256-CBC
         auth SHA512
         tls-cipher TLS-DHE-RSA-WITH-AES-256-GCM-SHA384:TLS-DHE-RSA-WITH-AES-256-CBC-SHA256:TLS-DHE-RSA-WITH-AES-128-GCM-SHA256:TLS-DHE-RSA-WITH-AES-128-CBC-SHA256

     This seems to work fine for me. The support guy also commented on the possibility of an attack via the embedded twisted web server, which I'll just post here for reference.
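A full client profile using those directives might look roughly like this (a sketch only: the remote host, port, and the commented fallback placement are my assumptions, not from the post):

```conf
# client.ovpn -- hardening directives discussed above (remote/port are placeholders)
client
dev tun
proto udp
remote vpn.example.com 1194          # placeholder server address
cipher AES-256-CBC                   # 128-bit block cipher, avoids SWEET32
auth SHA512
tls-cipher TLS-DHE-RSA-WITH-AES-256-GCM-SHA384:TLS-DHE-RSA-WITH-AES-256-CBC-SHA256:TLS-DHE-RSA-WITH-AES-128-GCM-SHA256:TLS-DHE-RSA-WITH-AES-128-CBC-SHA256
# if you only control the client, force a rekey before ~64 MB instead:
# reneg-bytes 64000000
```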
  2. I can't find a public repo for openvpn-as, and it seems they use a trac instance on their site instead of github for issues, so I logged a support ticket.
  3. I've added this to my setup recently, v easy to get going, so thanks for providing it. I'm using a 2.4.3 openvpn client and I notice it complains:

         WARNING: INSECURE cipher with block size less than 128 bit (64 bit). This allows attacks like SWEET32. Mitigate by using a --cipher with a larger block size (e.g. AES-256-CBC).

     The container logs indicate this is a 2.3.17 server:

         2017-08-04 17:14:46+0100 [-] OVPN 0 OUT: 'Fri Aug 4 17:14:46 2017 OpenVPN 2.3.17 x86_64-pc-linux-gnu [SSL (OpenSSL)] [LZO] [EPOLL] [MH] [IPv6] built on Jun 27 2017'

     2.3.17 is the old stable version, 2.4.3 is the current stable, and moving to 2.4 seems to be the fix (e.g. picking some other random docker openvpn container as a comparison). I notice that your dependency is for ubuntu 16 and it's not immediately obvious how this relates to the openvpn version. Do you have a plan to close this gap?
  4. Thanks, it seems that would be a solid/safe choice then.
  5. I need to get a UPS for my unraid box, but I know basically nothing about UPSes. Searching tells me pure sine wave + AVR (automatic voltage regulation) = good, but I don't know what it means if I don't have those things, nor do I know how big a UPS I would need, or whether there are other ancillary features that would be nice to have. I'm in the UK, btw, in case of any product-specific suggestions. Thanks!
  6. why would you need to copy the entire server over again instead of just rsync'ing from the main share to the backup share?
  7. bts buffer is the branch trace store buffer; google indicates there was a patch last year to reduce the frequency of such logs, and whether it happens or not is a function of memory fragmentation. I would think that seeing this is just a sign that this plugin is working your CPU hard. Any other problems noted, or just that warning?
  8. Fair point, the diagnostics I posted earlier cover the same time period.
  9. I have an array that is currently all reiser, made up of 6 disks including parity. There is 1 user share that spans all disks and a few shares that are pinned to a single disk. I would like to achieve 3 things in 1 go: convert to XFS, add some capacity, and consolidate some drives. I would like to keep the user shares available during this work, and I do have a complete backup on a separate machine on the same network. I was thinking of tackling this as follows. Are there any flaws in this plan? Will it keep the user shares available throughout? What happens to a user share when there are files on 2 underlying drives?

     Current state
     -------------
     Disk1 3T
     Disk2 3T
     Disk3 2T
     Disk4 2T
     Disk5 2T

     1) Add drives: Disk6 4T, Disk7 4T
     2) Remove parity & put shares into read-only mode
     3) Execute transfers using rsync from /mnt/diskX to /mnt/diskY

        round  source        dest   notes
        -----  ------        ----   ------------
        1      Disk4, Disk5  Disk6  2 * 2T -> 4T
        2      Disk1         Disk7  3T to 4T
        3      Disk2         Disk1  3T to 3T
        4      Disk3         Disk2  2T to 3T

     4) Create new config
        - Remove Disks 3-5
        - Move Disk 6 to Disk 3
        - Move Disk 7 to Disk 4
        - Fix disk to user share mappings
     5) Rebuild parity
  10. Fair point, I was thinking of it syncing a disk at a time which, as you say, it doesn't. Well, that would explain it then: preclear's zeroing is constantly reading from urandom to generate data to write to the disk, so attempting to sync is doomed to sit there forever, i.e. sync is trying to flush memory to disk while another process is constantly generating data in memory to write to disk.
  11. FWIW I checked the logs this evening and can see that zeroing the drive completed at 09:50 this morning:

          # stat /tmp/zerosdh
            File: ‘/tmp/zerosdh’
            Size: 231873   Blocks: 456   IO Block: 4096   regular file
          Device: 2h/2d   Inode: 123856   Links: 1
          Access: (0666/-rw-rw-rw-)   Uid: ( 0/ root)   Gid: ( 0/ root)
          Access: 2016-01-06 20:43:32.097103305 +0000
          Modify: 2016-01-06 09:50:50.200307361 +0000
          Change: 2016-01-06 09:50:50.200307361 +0000

      and at the same time in /var/log/syslog we see:

          Jan 6 09:50:49 zalaga-unraid emhttp: shcmd (122): rm -f /boot/config/plugins/dynamix/mover.cron
          Jan 6 09:50:49 zalaga-unraid emhttp: shcmd (123): /usr/local/sbin/update_cron &> /dev/null
          Jan 6 09:50:49 zalaga-unraid emhttp: Unmounting disks...
          Jan 6 09:50:49 zalaga-unraid kernel: mdcmd (131): stop
          Jan 6 09:50:49 zalaga-unraid kernel: md1: stopping
          Jan 6 09:50:49 zalaga-unraid kernel: md2: stopping
          Jan 6 09:50:49 zalaga-unraid kernel: md3: stopping
          Jan 6 09:50:49 zalaga-unraid kernel: md4: stopping
          Jan 6 09:50:49 zalaga-unraid kernel: md5: stopping
          Jan 6 09:50:49 zalaga-unraid emhttp: shcmd (124): rmmod md-mod |& logger
          Jan 6 09:50:49 zalaga-unraid kernel: md: unRAID driver removed
          Jan 6 09:50:49 zalaga-unraid emhttp: shcmd (125): modprobe md-mod super=/boot/config/super.dat slots=24 |& logger

      This looks pretty conclusive that the array-shutdown sync runs on all disks in the system, not just array disks.
  12. ok thanks, I'll go that route in future then.
  13. Seems unfortunate that preclear affects stopping the array (and then makes the web UI completely unresponsive to boot). Is there any reason why preclear has to be run on the unraid host as opposed to some random linux box? I've read through the script and it seems to just make use of a few unraid config files in a few places, but that would be easy enough to stub.