dannen

Members
  • Posts: 26
  • Joined
  • Last visited

Converted

  • Gender: Undisclosed


dannen's Achievements

Noob (1/14)

Reputation: 1

  1. I had the same problem. I removed the plugin as recommended and installed it again. Now everything works as it should. Thanks!
  2. This is a really nice addition to Unraid! I'm so happy to be able to get rid of OpenVPN. I noticed that I must add the DNS server manually on the client (computer/phone). Would it be possible to add an option to enter a DNS server in the GUI when configuring a client, so that no additional configuration is needed on the client side?
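     For reference, this is roughly what a client-side tunnel config looks like today; keys and addresses are placeholders, and the DNS line is the part that currently has to be typed in by hand on every client:

       [Interface]
       PrivateKey = <client private key>
       Address = 10.253.0.2/32
       # the line below is what I'd love the GUI to generate for me:
       DNS = 192.168.1.1

       [Peer]
       PublicKey = <server public key>
       Endpoint = myserver.example.com:51820
       AllowedIPs = 0.0.0.0/0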
  3. Is the file corrupt? It complains about the md5:

       plugin: updating: unRAIDServer.plg
       plugin: downloading: https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer-6.6.4-x86_64.zip ... done
       plugin: downloading: https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer-6.6.4-x86_64.md5 ... done
       wrong md5
       plugin: run failed: /bin/bash retval: 1
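     I guess one way to double-check by hand would be something like this (same URLs as in the log above; wget and md5sum are already on the box):

       cd /tmp
       wget https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer-6.6.4-x86_64.zip
       wget https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer-6.6.4-x86_64.md5
       # compare the computed sum against the published one:
       md5sum unRAIDServer-6.6.4-x86_64.zip
       cat unRAIDServer-6.6.4-x86_64.md5
       # if they differ, the zip really is corrupt or truncated and needs a re-download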
  4. I used the Update Assistant before upgrading to pre-release/next, so I think adding the dynamix plugin to the equation (or updating dynamix to remedy this) would be great, for this release that is.
  5. Thanks, that did the trick! I had SSD Trim and Statistics; no idea which one will stop working, but as long as Unraid boots I'm happy at this point. Thanks again!
  6. After updating to the pre-release from the latest stable (all plugins were updated before upgrading) I get this screen: Don't really know what to do; I've rebooted the server (I can log in to it via SSH). I've tried Safari and Firefox and I've cleared the browser cache. I see this on the server screen via iLO: This is also spammed into the syslog:

       Sep 1 20:18:37 mandarin vsftpd[6242]: connect from 127.0.0.1 (127.0.0.1)
       Sep 1 20:18:47 mandarin vsftpd[6276]: connect from 127.0.0.1 (127.0.0.1)
       Sep 1 20:18:57 mandarin vsftpd[6310]: connect from 127.0.0.1 (127.0.0.1)
       Sep 1 20:19:07 mandarin vsftpd[6343]: connect from 127.0.0.1 (127.0.0.1)
       Sep 1 20:19:17 mandarin vsftpd[6377]: connect from 127.0.0.1 (127.0.0.1)
       Sep 1 20:19:27 mandarin vsftpd[6411]: connect from 127.0.0.1 (127.0.0.1)
  7. Hi, I just checked in Firefox and this is shown: "The character encoding of the plain text document was not declared. The document will render with garbled text in some browser configurations if the document contains characters from outside the US-ASCII range. The character encoding of the file needs to be declared in the transfer protocol or file needs to use a byte order mark as an encoding signature."
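     As I understand it, the warning goes away once the server declares a charset in the response headers. Easy to check (the URL here is just a placeholder for the affected page):

       curl -sI http://tower/affected-page.txt | grep -i content-type
       # fixed when this returns something like:
       # Content-Type: text/plain; charset=utf-8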
  8. Hi, I got some strange errors just after the scheduled TRIM job started:

       Feb 23 07:02:42 mandarin kernel: ata5.00: status: { DRDY }
       Feb 23 07:02:42 mandarin kernel: ata5.00: failed command: WRITE FPDMA QUEUED
       Feb 23 07:02:42 mandarin kernel: ata5.00: cmd 61/28:68:40:6d:24/00:00:0c:00:00/40 tag 13 ncq dma 20480 out
       Feb 23 07:02:42 mandarin kernel: res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
       Feb 23 07:02:42 mandarin kernel: ata5.00: status: { DRDY }
       Feb 23 07:02:42 mandarin kernel: ata5.00: failed command: WRITE FPDMA QUEUED
       Feb 23 07:02:42 mandarin kernel: ata5.00: cmd 61/20:70:d8:75:24/00:00:0c:00:00/40 tag 14 ncq dma 16384 out
       Feb 23 07:02:42 mandarin kernel: res 40/00:01:00:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
       Feb 23 07:02:42 mandarin kernel: ata5.00: status: { DRDY }
       Feb 23 07:02:42 mandarin kernel: ata5.00: failed command: WRITE FPDMA QUEUED
       Feb 23 07:02:42 mandarin kernel: ata5.00: cmd 61/60:78:10:7c:24/00:00:0c:00:00/40 tag 15 ncq dma 49152 out
       Feb 23 07:02:42 mandarin kernel: res 40/00:01:01:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
       Feb 23 07:02:42 mandarin kernel: ata5.00: status: { DRDY }
       Feb 23 07:02:42 mandarin kernel: ata5.00: failed command: WRITE FPDMA QUEUED
       Feb 23 07:02:42 mandarin kernel: ata5.00: cmd 61/08:80:f8:75:24/00:00:0c:00:00/40 tag 16 ncq dma 4096 out
       Feb 23 07:02:42 mandarin kernel: res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
       Feb 23 07:02:42 mandarin kernel: ata5.00: status: { DRDY }
       Feb 23 07:02:42 mandarin kernel: ata5: hard resetting link
       Feb 23 07:02:43 mandarin kernel: ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
       Feb 23 07:02:43 mandarin kernel: ata5.00: supports DRM functions and may not be fully accessible
       Feb 23 07:02:43 mandarin kernel: ata5.00: supports DRM functions and may not be fully accessible
       Feb 23 07:02:43 mandarin kernel: ata5.00: configured for UDMA/133
       Feb 23 07:02:43 mandarin kernel: ata5: EH complete
       Feb 23 07:02:45 mandarin root: /mnt/cache: 163.1 GiB (175116275712 bytes) trimmed

     This one also showed up:

       Feb 23 07:01:09 mandarin kernel: ata5.00: exception Emask 0x0 SAct 0x7fc00003 SErr 0x0 action 0x6 frozen

     I haven't seen this before and I've also changed the cables twice to the disk in question (SSD). Is this perhaps normal log output under certain circumstances and nothing to worry about? The TRIM job is scheduled once a week and I've never seen this exact output before. Any guesses?
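     For what it's worth, the 'trimmed' line in my log looks like plain fstrim -v output, so I assume the errors can be reproduced outside the schedule by trimming manually while watching the syslog:

       fstrim -v /mnt/cache       # prints e.g.: /mnt/cache: 163.1 GiB (...) trimmed
       tail -f /var/log/syslog    # watch for the ata5 resets while it runs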
  9. Hi, I'm using an SSD as cache and 5400rpm disks in my array (single parity). When the mover script runs, the write speed is roughly 10MB/sec (10MB/sec on the data drive and 10MB/sec on the parity drive). The cache drive is connected to the ODD SATA port and the array disks to the integrated B120i controller (AHCI mode) on my MicroServer Gen8. The disks in the array are pretty old, but should manage a bit higher transfer speed, I think. No errors in the logs. Anyone else with a similar setup, and what transfer speeds do you get from cache -> array?
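     To rule out the old disks themselves, I figured I'd measure raw sequential read speed per drive first (sdX/sdY are placeholders for the actual devices; the array should be idle while testing). I know parity writes are read-modify-write, so writes will always be slower than raw reads, but 10MB/sec still feels low:

       hdparm -t /dev/sdX    # cache SSD
       hdparm -t /dev/sdY    # array data disk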
  10. Thanks, I just tried that but see the same behaviour. What I did was change /data to point at /mnt and use that for processing, so both the 'to' and 'from' settings in the CouchPotato renamer start from the same mount point called /data. After download and extraction complete in SAB and CouchPotato kicks in with the rename and move, it's still pushing high IO and I see writes around 25MB/sec. I see several of these processes in 'iotop', each taking around 50% IO, flapping back and forth. Don't know what it means, but it seems to be some Unraid-specific process:

       shfs /mnt/user -disks 15 20000000000 -o noatime,big_writes,allow_other,use_ino -o remember=330
  11. I've been having problems with the Unraid GUI and dockers becoming unresponsive when big files are downloaded. I seem to have narrowed it down to this: it happens when CouchPotato does post-processing (renames and moves files) after they've been downloaded by SABnzbd. SABnzbd downloads to a folder on my SSD cache drive (/mnt/cache/temp_download) and CouchPotato then takes the file from that same location and moves it to a share on my array with cache set to 'yes'. It seems like CouchPotato is not simply moving the file the way I expect, since it takes a lot of disk IO for quite some time. Since SAB and CouchPotato run in separate dockers, perhaps the move job doesn't know that it should be done within the same disk? The source folder is /temp_download and the destination is /media, so they're different mount points. Anyone know how to solve this?
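     My theory in concrete terms: a mv between two different mount points inside a container degrades to copy+delete, while a mv within one mount point is just a rename. So mapping a single parent path into both containers might fix it; a sketch of the idea (image names and host path are placeholders, and in Unraid these mappings go in each container's volume settings):

       docker run -d --name sabnzbd     -v /mnt/user:/data <sabnzbd-image>
       docker run -d --name couchpotato -v /mnt/user:/data <couchpotato-image>
       # download to /data/temp_download, rename to /data/media:
       # same mount point inside both containers, so the move can be a rename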
  12. If anyone else has the same problem, I found this excellent Docker FAQ: http://lime-technology.com/forum/index.php?topic=40937.0 It suggests adding --cpuset-cpus=0,1 to limit a container to certain cores and --memory=4G to limit RAM. These are added under 'Extra Parameters' in the edit section for the specific container, which can be found in advanced view mode. I limited mine to just one core and 1G of RAM and will see if that fixes it.
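     For anyone reading along, outside of Unraid's 'Extra Parameters' field the equivalent plain docker invocation would look roughly like this (image name is just an example):

       # --cpuset-cpus pins the container to the listed cores; --memory caps its RAM
       docker run -d --name sabnzbd --cpuset-cpus=0 --memory=1G <sabnzbd-image>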
  13. Problem solved, the errors disappeared after TRIM had run twice. But just for info: my SSD at the time was a 120GB Samsung 750 EVO, but I have since replaced it with a 250GB Samsung 750 EVO. It's connected to the ODD SATA port. I had the same strange error the first time TRIM ran on the new disk, but it too has since disappeared. I'll upload a diagnostics file if I see the errors again, since they're gone now.
  14. Thanks for the help, I'll try BRiT's suggestion. The strange thing is I didn't have any problems with my old server (7 years old), which had an old Atom CPU running Debian with 6x2TB disks in RAID6. Sure, the unrar took longer to finish, but I didn't notice any interruption to running applications. I'll look into whether it's possible to limit CPU for a Docker app; otherwise I could try running this in a VM limited to 1 vCPU. Decompression will take longer, but perhaps that way it won't slow down the whole system.