dannen

Everything posted by dannen

  1. I had the same problem. Removed the plugin as recommended and installed it again. Now everything works as it should. Thanks!
  2. This is a really nice addition to unraid! I'm so happy to be able to get rid of OpenVPN. I noticed that I must add the DNS server manually on the client (computer/phone). Would it be possible to add an option to enter a DNS server in the GUI when configuring a client, so no additional configuration is needed on the client side?
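For reference, what I mean is the DNS line in the client config; right now I add it by hand, something like this (the keys and addresses below are just placeholders, not what the GUI generates):

[Interface]
PrivateKey = <client private key>
Address = 10.253.0.2/32
DNS = 192.168.1.1          # the line I currently have to add manually

[Peer]
PublicKey = <server public key>
Endpoint = myserver.example.com:51820
AllowedIPs = 0.0.0.0/0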
  3. Is the file corrupt? It complains about the md5:

plugin: updating: unRAIDServer.plg
plugin: downloading: https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer-6.6.4-x86_64.zip ... done
plugin: downloading: https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer-6.6.4-x86_64.md5 ... done
wrong md5
plugin: run failed: /bin/bash retval: 1
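In case it helps with debugging, I figure the checksum can be checked by hand from the console too; a rough sketch (assuming md5sum is available and both files are downloaded to the working directory):

wget https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer-6.6.4-x86_64.zip
wget https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer-6.6.4-x86_64.md5
md5sum unRAIDServer-6.6.4-x86_64.zip    # compare against the value in the .md5 file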
  4. I used the Update Assistant before upgrading to pre-release/next, so I think that adding the dynamix plugin to the equation (or updating dynamix to remedy this) would be great for this release.
  5. Thanks, that did the trick! I had SSD Trim and Statistics installed; no idea which one will stop working, but as long as unraid boots I'm happy at this point. Thanks again!
  6. After updating to the pre-release from the latest stable (all plugins were updated before upgrading) I get this screen: I don't really know what to do; I've rebooted the server (I can log in to it via SSH). I've tried Safari and Firefox and I've cleared the browser cache. I see this on the server's screen via iLO: This is also spammed into the syslog:

Sep 1 20:18:37 mandarin vsftpd[6242]: connect from 127.0.0.1 (127.0.0.1)
Sep 1 20:18:47 mandarin vsftpd[6276]: connect from 127.0.0.1 (127.0.0.1)
Sep 1 20:18:57 mandarin vsftpd[6310]: connect from 127.0.0.1 (127.0.0.1)
Sep 1 20:19:07 mandarin vsftpd[6343]: connect from 127.0.0.1 (127.0.0.1)
Sep 1 20:19:17 mandarin vsftpd[6377]: connect from 127.0.0.1 (127.0.0.1)
Sep 1 20:19:27 mandarin vsftpd[6411]: connect from 127.0.0.1 (127.0.0.1)
  7. Hi, I just checked in Firefox and this is shown: "The character encoding of the plain text document was not declared. The document will render with garbled text in some browser configurations if the document contains characters from outside the US-ASCII range. The character encoding of the file needs to be declared in the transfer protocol or file needs to use a byte order mark as an encoding signature."
  8. Hi, I got some strange errors just after the scheduled TRIM job was started:

Feb 23 07:02:42 mandarin kernel: ata5.00: status: { DRDY }
Feb 23 07:02:42 mandarin kernel: ata5.00: failed command: WRITE FPDMA QUEUED
Feb 23 07:02:42 mandarin kernel: ata5.00: cmd 61/28:68:40:6d:24/00:00:0c:00:00/40 tag 13 ncq dma 20480 out
Feb 23 07:02:42 mandarin kernel: res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Feb 23 07:02:42 mandarin kernel: ata5.00: status: { DRDY }
Feb 23 07:02:42 mandarin kernel: ata5.00: failed command: WRITE FPDMA QUEUED
Feb 23 07:02:42 mandarin kernel: ata5.00: cmd 61/20:70:d8:75:24/00:00:0c:00:00/40 tag 14 ncq dma 16384 out
Feb 23 07:02:42 mandarin kernel: res 40/00:01:00:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
Feb 23 07:02:42 mandarin kernel: ata5.00: status: { DRDY }
Feb 23 07:02:42 mandarin kernel: ata5.00: failed command: WRITE FPDMA QUEUED
Feb 23 07:02:42 mandarin kernel: ata5.00: cmd 61/60:78:10:7c:24/00:00:0c:00:00/40 tag 15 ncq dma 49152 out
Feb 23 07:02:42 mandarin kernel: res 40/00:01:01:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
Feb 23 07:02:42 mandarin kernel: ata5.00: status: { DRDY }
Feb 23 07:02:42 mandarin kernel: ata5.00: failed command: WRITE FPDMA QUEUED
Feb 23 07:02:42 mandarin kernel: ata5.00: cmd 61/08:80:f8:75:24/00:00:0c:00:00/40 tag 16 ncq dma 4096 out
Feb 23 07:02:42 mandarin kernel: res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Feb 23 07:02:42 mandarin kernel: ata5.00: status: { DRDY }
Feb 23 07:02:42 mandarin kernel: ata5: hard resetting link
Feb 23 07:02:43 mandarin kernel: ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
Feb 23 07:02:43 mandarin kernel: ata5.00: supports DRM functions and may not be fully accessible
Feb 23 07:02:43 mandarin kernel: ata5.00: supports DRM functions and may not be fully accessible
Feb 23 07:02:43 mandarin kernel: ata5.00: configured for UDMA/133
Feb 23 07:02:43 mandarin kernel: ata5: EH complete
Feb 23 07:02:45 mandarin root: /mnt/cache: 163.1 GiB (175116275712 bytes) trimmed

This one also showed up:

Feb 23 07:01:09 mandarin kernel: ata5.00: exception Emask 0x0 SAct 0x7fc00003 SErr 0x0 action 0x6 frozen

I haven't seen this before, and I've also changed the cables to the disk in question (SSD) twice. Is this perhaps normal log output under certain circumstances and nothing to worry about? The TRIM is scheduled once a week and I've never seen this exact output before. Any guesses?
  9. Hi, I'm using an SSD as cache and 5400 rpm disks in my array (single parity). When the mover script runs, the write speed is roughly 10MB/sec (10MB/sec on the data drive and 10MB/sec on the parity drive). The cache drive is connected to the ODD SATA port and the array disks to the integrated B120i controller (AHCI mode) on my Microserver Gen8. The disks in the array are pretty old, but should manage a bit higher transfer speed, I think. No errors in the logs. Anyone else with a similar setup, and what transfer speeds do you get from cache -> array?
  10. Thanks, I just tried that but see the same behaviour. What I did was change /data to point at /mnt and use that for processing, so both the 'to' and 'from' settings in the couchpotato renamer start from the same mount point called /data. After download and extraction are complete in SAB and couchpotato kicks in with the rename and move, it's still pushing high IO and I see writes around 25MB/sec. I see several of these processes in 'iotop' taking around 50% IO each, flapping back and forth. I don't know what it means, but it seems to be an Unraid-specific process:

shfs /mnt/user -disks 15 20000000000 -o noatime,big_writes,allow_other,use_ino -o remember=330
  11. I've been having problems with the unraid GUI and dockers becoming unresponsive when big files are downloaded. I seem to have narrowed it down to Couchpotato doing post-processing (renaming and moving files) after they've been downloaded by sabnzbd. sabnzbd downloads to a folder on my SSD cache drive (/mnt/cache/temp_download) and couchpotato then takes the file from that same location and moves it to a share on my array with cache set to 'yes'. It seems like couchpotato is not simply moving the file the way I expect, since it takes a lot of disk IO for quite some time. Since SAB and Couchpotato run in separate dockers, perhaps the move job does not know that it should be done within the same disk? The source folder is /temp_download and the destination is /media, so they're different mount points (see the sketch below). Anyone know how to solve this?
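My rough understanding of what happens, sketched from the console (the container paths are the ones from my setup above):

# A rename() only works within a single filesystem/mount point; across two
# different mounts, 'mv' falls back to copy-then-delete, which is the IO I see.
# Quick check -- if the device IDs differ, the paths are on different filesystems:
stat -c %d /temp_download /media

# If both paths were mapped from the same host mount, the device IDs would match
# and the move would be a near-instant rename instead of a full copy.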
  12. If anyone else has the same problem, I found this excellent docker FAQ: http://lime-technology.com/forum/index.php?topic=40937.0 It suggests adding --cpuset-cpus=0,1 to limit a container to certain cores and --memory=4G to limit RAM. These are added under 'extra parameters' in the edit section for the specific container, which can be found in advanced view mode. I limited mine to just one core and 1G of RAM and will see if that fixes it.
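For anyone running containers outside the GUI, the 'extra parameters' appear to be plain docker run flags; a minimal sketch (the container name and image are placeholders):

# --cpuset-cpus pins the container to specific CPU cores,
# --memory caps how much RAM it may use.
docker run -d --name=sabnzbd --cpuset-cpus=0 --memory=1g some/sabnzbd-image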
  13. Problem solved, the errors disappeared after TRIM had run twice. But just for info: my SSD at that time was a 120GB Samsung EVO 750, but I have since replaced it with a 250GB Samsung EVO 750. It's connected to the ODD SATA port. I had the same strange error the first time TRIM was run on the new disk, but it has also disappeared since. I'll upload a diagnostics file if I see the errors again, since they're gone now.
  14. Thanks for the help, I'll try BRiT's suggestion. The strange thing is I didn't have any problems with my old server (7 years old), which had an old Atom CPU, ran Debian and had 6x2TB disks in RAID6. Sure, the unrar took longer to finish, but I didn't notice any interruption to the applications running. I will look into whether it's possible to limit CPU for a Docker app; otherwise I could try to run this in a VM and limit it to 1 vCPU. Decompression will take longer, but perhaps that way it won't slow down the whole system.
  15. Ahh ok, thanks, so basically my SSD is unable to read/write the data quickly enough? I'm trying to figure out how that could be mitigated. Perhaps downloading to another share on a separate disk with caching turned off, and then extracting the files to a share with cache turned on? I still don't understand why the Unraid GUI becomes unresponsive, though.
  16. Did you ever manage to solve this? I've got a 2-core Xeon with hyper-threading and I'm using an SSD for cache. I've got the same issue when sab starts to unpack (from/to the same SSD): the unraid GUI and all other docker GUIs become unresponsive. Below are the figures from 'top' while the GUI was unresponsive. I also ran 'atop' but forgot to copy the stats. The CPU normally stays well under 20% during unrar, if I remember correctly.

--------------------------------------------------------------------------------------
top - 19:08:17 up 4 days, 50 min,  1 user,  load average: 16.17, 9.68, 4.73
Tasks: 298 total,   3 running, 295 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.3 us,  2.0 sy,  0.8 ni,  0.0 id, 97.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem :  4072456 total,   279136 free,  1147720 used,  2645600 buff/cache
KiB Swap:        0 total,        0 free,        0 used.  2091124 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
53892 nobody    30  10   18096   3948   3104 D   6.0  0.1   0:20.64 unrar
54293 root      20   0       0      0      0 R   2.3  0.0   0:00.88 kworker/u128:9
31683 root      20   0       0      0      0 S   1.3  0.0   0:02.13 kworker/u128:11
  587 root      20   0       0      0      0 S   1.0  0.0   2:37.12 kswapd0
54292 root      20   0       0      0      0 S   1.0  0.0   0:00.24 kworker/u128:8
35986 nobody    20   0  166284  54080    460 S   0.7  1.3   0:06.95 mongod
 1562 root      20   0    9908   2796   2112 S   0.3  0.1  16:43.91 cpuload
16321 root      20   0       0      0      0 S   0.3  0.0   0:02.66 kworker/u128:14
47670 root      20   0   98920  12784    116 S   0.3  0.3   1:03.39 supervisord
50362 root      20   0       0      0      0 S   0.3  0.0   9:52.91 kworker/1:1
55749 root      20   0   24980   3172   2468 R   0.3  0.1   0:00.05 top
    1 root      20   0    4372   1536   1436 S   0.0  0.0   0:10.61 init
------------------------------------------------------------------------------------------
  17. Just an update: the error didn't appear when TRIM ran this morning. I'll check the logs over the next couple of days, but hopefully it's automagically "fixed".
  18. I changed the SATA cable last night. I just looked at the logs and the same error still appears during the TRIM operation. It actually lasts more than 40 seconds; what I attached yesterday was only a portion of the log. Attached is the full syslog starting from when the TRIM operation began (the start doesn't seem to be indicated in the log, but the completion is) until it ended, so it's more like 2 min 20 sec. I've never seen these errors/warnings before, so it must be related to TRIM; hopefully these errors can be ignored. I'll see if I can schedule my dockers to stop before TRIM and start again after, say, 5 minutes, to make sure nothing is writing to the cache during TRIM (rough sketch below). But then again, perhaps these errors aren't "dangerous". log.txt
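Roughly what I have in mind, as a sketch (the container names are placeholders, and I'm assuming the plugin ultimately runs fstrim on the cache, which matches the '/mnt/cache: ... trimmed' line in my syslog):

docker stop sabnzbd couchpotato    # quiesce anything writing to the cache
fstrim -v /mnt/cache               # trim the cache device
sleep 300                          # wait ~5 minutes
docker start sabnzbd couchpotato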
  19. Thanks, I replaced the cable, fingers crossed!
  20. I installed the dynamix TRIM plugin yesterday, scheduled to start at 05:00. At the same time a parity check was running on my array. When I looked in the syslog after getting home from work, I saw a lot of disk errors from this morning. Does anyone know what this might be about, and should I be concerned? I really don't know if the errors are due to the parity check or the TRIM plugin, but it's about the same time as the TRIM is scheduled to run. The ata5 drive is my cache drive, which is the only SSD in my server.

-------
Jan 11 05:02:51 mandarin kernel: ata5: hard resetting link
Jan 11 05:02:52 mandarin kernel: ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
Jan 11 05:02:52 mandarin kernel: ata5.00: supports DRM functions and may not be fully accessible
Jan 11 05:02:52 mandarin kernel: ata5.00: supports DRM functions and may not be fully accessible
Jan 11 05:02:52 mandarin kernel: ata5.00: configured for UDMA/133
Jan 11 05:02:52 mandarin kernel: ata5: EH complete
Jan 11 05:03:23 mandarin kernel: ata5.00: NCQ disabled due to excessive errors
Jan 11 05:03:23 mandarin kernel: ata5.00: exception Emask 0x0 SAct 0x7f007fff SErr 0x0 action 0x6 frozen
Jan 11 05:03:23 mandarin kernel: ata5.00: failed command: WRITE FPDMA QUEUED
Jan 11 05:03:23 mandarin kernel: ata5.00: cmd 61/20:00:88:93:f2/00:00:01:00:00/40 tag 0 ncq 16384 out
Jan 11 05:03:23 mandarin kernel: res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Jan 11 05:03:23 mandarin kernel: ata5.00: status: { DRDY }
Jan 11 05:03:23 mandarin kernel: ata5.00: failed command: WRITE FPDMA QUEUED
Jan 11 05:03:23 mandarin kernel: ata5.00: cmd 61/10:08:a8:93:f2/00:00:01:00:00/40 tag 1 ncq 8192 out
Jan 11 05:03:23 mandarin kernel: res 40/00:01:00:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
Jan 11 05:03:23 mandarin kernel: ata5.00: status: { DRDY }
Jan 11 05:03:23 mandarin kernel: ata5.00: failed command: WRITE FPDMA QUEUED
Jan 11 05:03:23 mandarin kernel: ata5.00: cmd 61/10:10:b8:93:f2/00:00:01:00:00/40 tag 2 ncq 8192 out
Jan 11 05:03:23 mandarin kernel: res 40/00:01:01:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
Jan 11 05:03:23 mandarin kernel: ata5.00: status: { DRDY }
Jan 11 05:03:23 mandarin kernel: ata5.00: failed command: WRITE FPDMA QUEUED
Jan 11 05:03:23 mandarin kernel: ata5.00: cmd 61/10:18:e0:92:f2/00:00:01:00:00/40 tag 3 ncq 8192 out
Jan 11 05:03:23 mandarin kernel: res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Jan 11 05:03:23 mandarin kernel: ata5.00: status: { DRDY }
Jan 11 05:03:23 mandarin kernel: ata5.00: failed command: WRITE FPDMA QUEUED
Jan 11 05:03:23 mandarin kernel: ata5.00: cmd 61/00:20:f0:c9:62/04:00:00:00:00/40 tag 4 ncq 524288 out
Jan 11 05:03:23 mandarin kernel: res 40/00:ff:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Jan 11 05:03:23 mandarin kernel: ata5.00: status: { DRDY }
Jan 11 05:03:23 mandarin kernel: ata5.00: failed command: WRITE FPDMA QUEUED
Jan 11 05:03:23 mandarin kernel: ata5.00: cmd 61/c0:28:20:fb:50/02:00:00:00:00/40 tag 5 ncq 360448 out
Jan 11 05:03:23 mandarin kernel: res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Jan 11 05:03:23 mandarin kernel: ata5.00: status: { DRDY }
Jan 11 05:03:23 mandarin kernel: ata5.00: failed command: WRITE FPDMA QUEUED
Jan 11 05:03:23 mandarin kernel: ata5.00: cmd 61/c0:30:20:fb:30/02:00:00:00:00/40 tag 6 ncq 360448 out
Jan 11 05:03:23 mandarin kernel: res 40/00:ff:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Jan 11 05:03:23 mandarin kernel: ata5.00: status: { DRDY }
Jan 11 05:03:23 mandarin kernel: ata5.00: failed command: WRITE FPDMA QUEUED
Jan 11 05:03:23 mandarin kernel: ata5.00: cmd 61/28:38:b8:92:f2/00:00:01:00:00/40 tag 7 ncq 20480 out
Jan 11 05:03:23 mandarin kernel: res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Jan 11 05:03:23 mandarin kernel: ata5.00: status: { DRDY }
Jan 11 05:03:23 mandarin kernel: ata5.00: failed command: WRITE FPDMA QUEUED
Jan 11 05:03:23 mandarin kernel: ata5.00: cmd 61/08:40:b0:92:f2/00:00:01:00:00/40 tag 8 ncq 4096 out
Jan 11 05:03:23 mandarin kernel: res 40/00:01:00:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
Jan 11 05:03:23 mandarin kernel: ata5.00: status: { DRDY }
Jan 11 05:03:23 mandarin kernel: ata5.00: failed command: WRITE FPDMA QUEUED
Jan 11 05:03:23 mandarin kernel: ata5.00: cmd 61/10:48:a0:92:f2/00:00:01:00:00/40 tag 9 ncq 8192 out
Jan 11 05:03:23 mandarin kernel: res 40/00:01:01:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
Jan 11 05:03:23 mandarin kernel: ata5.00: status: { DRDY }
Jan 11 05:03:23 mandarin kernel: ata5.00: failed command: WRITE FPDMA QUEUED
Jan 11 05:03:23 mandarin kernel: ata5.00: cmd 61/08:50:98:92:f2/00:00:01:00:00/40 tag 10 ncq 4096 out
Jan 11 05:03:23 mandarin kernel: res 40/00:00:00:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
Jan 11 05:03:23 mandarin kernel: ata5.00: status: { DRDY }
Jan 11 05:03:23 mandarin kernel: ata5.00: failed command: WRITE FPDMA QUEUED
Jan 11 05:03:23 mandarin kernel: ata5.00: cmd 61/00:58:98:90:f2/02:00:01:00:00/40 tag 11 ncq 262144 out
Jan 11 05:03:23 mandarin kernel: res 40/00:01:00:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
Jan 11 05:03:23 mandarin kernel: ata5.00: status: { DRDY }
Jan 11 05:03:23 mandarin kernel: ata5.00: failed command: SEND FPDMA QUEUED
Jan 11 05:03:23 mandarin kernel: ata5.00: cmd 64/01:60:00:00:00/00:00:00:00:00/a0 tag 12 ncq 512 out
Jan 11 05:03:23 mandarin kernel: res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Jan 11 05:03:23 mandarin kernel: ata5.00: status: { DRDY }
Jan 11 05:03:23 mandarin kernel: ata5.00: failed command: WRITE FPDMA QUEUED
Jan 11 05:03:23 mandarin kernel: ata5.00: cmd 61/00:68:98:8e:f2/02:00:01:00:00/40 tag 13 ncq 262144 out
Jan 11 05:03:23 mandarin kernel: res 40/00:ff:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Jan 11 05:03:23 mandarin kernel: ata5.00: status: { DRDY }
Jan 11 05:03:23 mandarin kernel: ata5.00: failed command: WRITE FPDMA QUEUED
Jan 11 05:03:23 mandarin kernel: ata5.00: cmd 61/18:70:f0:92:f2/00:00:01:00:00/40 tag 14 ncq 12288 out
Jan 11 05:03:23 mandarin kernel: res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Jan 11 05:03:23 mandarin kernel: ata5.00: status: { DRDY }
Jan 11 05:03:23 mandarin kernel: ata5.00: failed command: WRITE FPDMA QUEUED
Jan 11 05:03:23 mandarin kernel: ata5.00: cmd 61/50:c0:08:93:f2/00:00:01:00:00/40 tag 24 ncq 40960 out
Jan 11 05:03:23 mandarin kernel: res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Jan 11 05:03:23 mandarin kernel: ata5.00: status: { DRDY }
Jan 11 05:03:23 mandarin kernel: ata5.00: failed command: WRITE FPDMA QUEUED
Jan 11 05:03:23 mandarin kernel: ata5.00: cmd 61/10:c8:08:94:f2/00:00:01:00:00/40 tag 25 ncq 8192 out
Jan 11 05:03:23 mandarin kernel: res 40/00:01:00:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
Jan 11 05:03:23 mandarin kernel: ata5.00: status: { DRDY }
Jan 11 05:03:23 mandarin kernel: ata5.00: failed command: WRITE FPDMA QUEUED
Jan 11 05:03:23 mandarin kernel: ata5.00: cmd 61/20:d0:d8:93:f2/00:00:01:00:00/40 tag 26 ncq 16384 out
Jan 11 05:03:23 mandarin kernel: res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Jan 11 05:03:23 mandarin kernel: ata5.00: status: { DRDY }
Jan 11 05:03:23 mandarin kernel: ata5.00: failed command: WRITE FPDMA QUEUED
Jan 11 05:03:23 mandarin kernel: ata5.00: cmd 61/10:d8:c8:93:f2/00:00:01:00:00/40 tag 27 ncq 8192 out
Jan 11 05:03:23 mandarin kernel: res 40/00:ff:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Jan 11 05:03:23 mandarin kernel: ata5.00: status: { DRDY }
Jan 11 05:03:23 mandarin kernel: ata5.00: failed command: WRITE FPDMA QUEUED
Jan 11 05:03:23 mandarin kernel: ata5.00: cmd 61/10:e0:f8:93:f2/00:00:01:00:00/40 tag 28 ncq 8192 out
Jan 11 05:03:23 mandarin kernel: res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Jan 11 05:03:23 mandarin kernel: ata5.00: status: { DRDY }
Jan 11 05:03:23 mandarin kernel: ata5.00: failed command: WRITE FPDMA QUEUED
Jan 11 05:03:23 mandarin kernel: ata5.00: cmd 61/20:e8:58:93:f2/00:00:01:00:00/40 tag 29 ncq 16384 out
Jan 11 05:03:23 mandarin kernel: res 40/00:01:00:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
Jan 11 05:03:23 mandarin kernel: ata5.00: status: { DRDY }
Jan 11 05:03:23 mandarin kernel: ata5.00: failed command: WRITE FPDMA QUEUED
Jan 11 05:03:23 mandarin kernel: ata5.00: cmd 61/10:f0:78:93:f2/00:00:01:00:00/40 tag 30 ncq 8192 out
Jan 11 05:03:23 mandarin kernel: res 40/00:01:01:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
Jan 11 05:03:23 mandarin kernel: ata5.00: status: { DRDY }
Jan 11 05:03:23 mandarin kernel: ata5: hard resetting link
Jan 11 05:03:23 mandarin kernel: ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
Jan 11 05:03:23 mandarin kernel: ata5.00: supports DRM functions and may not be fully accessible
Jan 11 05:03:23 mandarin kernel: ata5.00: supports DRM functions and may not be fully accessible
Jan 11 05:03:23 mandarin kernel: ata5.00: configured for UDMA/133
Jan 11 05:03:23 mandarin kernel: ata5: EH complete
Jan 11 05:03:30 mandarin root: /mnt/cache: 77.6 GiB (83277766656 bytes) trimmed
  21. Thanks, I'm not running any VMs at the moment, but I have a handful of dockers running. None of them are particularly performance-demanding though, apart from Sabnzbd. It seems to happen just after unpacking is done. At first I thought it might be due to high IO load, but the unraid web GUI runs from RAM or the USB stick, I guess.
  22. Hi, sometimes when downloading, unpacking or moving files, the unraid webgui becomes unresponsive, and the web GUIs of running dockers seem to 'freeze' at the same time. SSH also becomes unresponsive (very slow output). I've looked at the syslog and cannot see any events logged at those particular times. Has anyone noticed this behaviour? These are my server specs:

Unraid version 2.6.4
HP Microserver Gen8
1x Intel Xeon E3-1220L
4GB RAM
1x 120GB SSD for cache
1x 2TB drive for parity
3x 2TB drives in array

CPU load is not above 50% and RAM is approx. 55% used around those times.
  23. Thanks! Just wondering, since the OpenVPN process uses 20% CPU in 'top' when I ssh into the server. I don't know if that's 20% of a hyper-threaded core, a physical core, or the whole CPU though (I have a 1220L (non-V2), so that's 2 physical cores = 4 threads incl. hyper-threading).
  24. Does anyone know if this docker takes advantage of Intel AES-NI for OpenVPN, assuming one has a CPU which supports it?
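Not an answer, but a way I figure it could be checked from the console (the container name is a placeholder, adjust to whatever yours is called):

grep -m1 -o aes /proc/cpuinfo                          # prints 'aes' if the CPU exposes AES-NI
docker exec openvpn-as openssl speed -evp aes-256-cbc  # high throughput suggests AES-NI is being used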