About SiNtEnEl
  1. True, I'm considering a multi-staged build for it, from the official image down to a single image. I just have to figure out how to handle updates: the latest version changed some things in the decryption command, which could break the database, and I need to figure out how to prevent that. The official images have some issues as well.
  2. I'm glad LSIO holds back releases; for Unifi I check the forums before I even press the update button. The list of annoyances I've had with Unifi is quite long, and the work I've put into restoring it in the past makes the wait worth it. If you want to go bleeding edge, you should build your own image. So LSIO, thank you for your effort and for keeping us safe! As for memory usage, my Unifi unstable instance has been stable at around 500 MB these past days.
  3. @dj_sim Are you planning to publish and support it on Unraid? I have it on my list as well.
  4. @Poprin Are you not confusing used with cached? You could run the following in the console: "top -o %MEM" (without the quotes); it will sort by memory usage. With "e" you can cycle through unit sizes (KB, MB, GB). The values under RES correspond with %MEM and are an accurate representation of how much physical memory a process is actually consuming.

       PID USER   PR NI    VIRT    RES   SHR S %CPU %MEM     TIME+ COMMAND
     19019 nobody 20  0 2272.0m 908.7m 16.8m S  1.7  5.7 136:40.05 mono
     11112 nobody 20  0 2335.4m 436.6m 41.1m S  0.3  2.8   1:19.72 Plex Media Serv
     12899 nobody 20  0 2412.6m 372.0m  6.7m S  0.3  2.3   8:15.03 python2
      4878 nobody 20  0 3751.8m 316.2m 19.9m S  0.0  2.0   0:25.01 java

     In my case Radarr and Plex are the top consumers, but most of it is cache.
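To double-check what RES means, you can compare it against VmRSS in /proc; a minimal sketch (using the shell's own PID as the example process, not any particular container):

```shell
#!/bin/sh
# VmRSS in /proc/<pid>/status is the same resident-set figure top shows
# under the RES column. Here we just inspect the current shell's own
# status file as an illustration.
grep VmRSS "/proc/$$/status"
```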
  5. Filebot has a lot of scripts, including AMC, that you can customize to your needs. Some examples you can find here: https://www.filebot.net/forums/viewtopic.php?f=4&t=215. I don't know your exact setup, so it's hard to give further advice on that. Unrarall is quite simple and offers fewer options than Filebot. Basically you have to execute a command on the host along the lines of "docker exec -t <container name> unrarall --clean=all <directory>" (--clean=all removes the rar files and leaves the clean extract). This can run as a cronjob in Unraid or from anything else that triggers the command. Best is to use an event that signals that the sync between your host and seedbox is done. You could also make a filewatcher script, but if the sync is not complete your extract will fail.
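For the cron route, a crontab entry on the Unraid host could look something like this (a sketch only: the container name "unrarall" and the download path are illustrative assumptions, not from the original post):

```shell
# Crontab entry (config fragment): once an hour, extract any finished
# downloads and remove the rar files afterwards. Container name and
# path are assumptions; adjust to your own setup.
0 * * * * docker exec -t unrarall unrarall --clean=all /downloads/complete
```

As the post notes, an event fired after the seedbox sync completes is a better trigger than a fixed schedule, since extracting a half-synced rar set will fail.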
  6. The IcyDocks are by default in a pull configuration: they draw fresh air in from the front of the computer case, through the trays and over the disks, into the case. Swapping it around means you would be blowing warmed air from inside the case over the disks; I wouldn't suggest doing this, as it's likely to increase the heating issue. The rear fan and the CPU fans should be facing outwards, towards the back of the computer. A top fan could add more positive pressure to get the warm air out.
  7. In your use case you could go with either Filebot (available in CA) or unrarall https://github.com/arfoll/unrarall (not available in CA).
  8. I'd rather not have passwords for external services stored in the clear on the server, but that's personal. The file gives enough convenience, but using variables makes it easier for novice users to get around with the docker template, especially if you add it to Community Applications (CA) later on. On the other hand, using secrets is not too novice-friendly either, so variables are a good start, and maybe add it to CA as well.
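As a sketch of the usual pattern, a container entrypoint can prefer a mounted docker secret and fall back to an environment variable (the names service_password / SERVICE_PASSWORD are illustrative assumptions, not from this template):

```shell
#!/bin/sh
# Prefer the docker secret file if one is mounted; otherwise fall back
# to the environment variable. Both names here are illustrative.
if [ -f /run/secrets/service_password ]; then
    SERVICE_PASSWORD="$(cat /run/secrets/service_password)"
fi
echo "${SERVICE_PASSWORD:-no password set}"
```

This keeps the template working for novice users with a plain variable, while letting swarm/compose users mount a secret without changing the image.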
  9. Interesting, I gave this docker a spin; it works as intended. Are you planning support for docker secrets and variables to store the credentials? Thanks for creating and sharing.
  10. The RC of 6.7.0 gives sensor information out of the box that could be useful for analyzing your issue. Or you could install the Dynamix System Temperature plugin on 6.6 and see what the other temps are. If the motherboard temperatures are OK, then I'm pretty sure it's the airflow in the docks. The older ones had better airflow in my opinion; I have used some newer ones and didn't like their cooling. My disks are running at 32c and my mainboard at about 28.5c with a 24c air temperature, running fully quiet. The only time the fans on my array run faster is during my monthly 16-hour parity check. So if your mainboard is around 30c (at least if you have a sensor that measures it), I'm pretty sure the airflow inside your case is alright, and thus your docks are hot inside. If your mainboard is in the 40c range, then hot air is not getting out of your case, which adds extra heat to the docks, etc.
  11. My experience with the Icy Dock is that airflow is very restricted with 5 drives in them. If I understand correctly, you had the same 2 drive bays running at 30c in a different enclosure? About your current setup: you either go positive or negative pressure with your case and fans. I would go positive pressure, since with the big mesh on your side panel negative would be hard. So basically the front fans on your drive bays need to push in more air volume than your rear fan exhausts. If the PSU draws from the bottom of the case, it will not pull air into the case and will exhaust out the rear. But my guess is that the front bay fans don't get enough air into the case. Are they on max RPM?
  12. In low-memory situations, Linux tries to kill processes that are consuming lots of memory. There are some further conditions that are checked before the process to be killed is selected (I won't sum them up). This happened 6 times to a docker instance, in your case the one running Sonarr:

      Line 3560: Jan 10 19:59:13 ffs2 kernel: Killed process 17778 (mono) total-vm:3574628kB, anon-rss:2057924kB, file-rss:0kB, shmem-rss:4kB
      Line 3778: Jan 14 21:09:07 ffs2 kernel: Killed process 15431 (mono) total-vm:3158360kB, anon-rss:2034704kB, file-rss:0kB, shmem-rss:4kB
      Line 4029: Jan 17 21:53:00 ffs2 kernel: Killed process 6146 (mono) total-vm:3016800kB, anon-rss:2049280kB, file-rss:0kB, shmem-rss:4kB
      Line 4260: Jan 21 19:31:30 ffs2 kernel: Killed process 25086 (mono) total-vm:3102052kB, anon-rss:2041428kB, file-rss:0kB, shmem-rss:4kB
      Line 4479: Jan 24 17:25:16 ffs2 kernel: Killed process 13894 (mono) total-vm:3638768kB, anon-rss:2048868kB, file-rss:0kB, shmem-rss:4kB
      Line 4619: Jan 27 00:18:22 ffs2 kernel: Killed process 24288 (mono) total-vm:3546380kB, anon-rss:2043724kB, file-rss:0kB, shmem-rss:4kB

      There are plenty of complaints on the internet about the mono + Sonarr combination eating up memory. You could try restarting the docker instance on a regular basis, before the system runs out of memory or mono hogs too much of it. Trying a different Sonarr docker template could be an option too. It could also be a bug in Sonarr / mono itself that only happens with your configuration or content.
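To see how often the OOM killer has fired and which processes it killed, something like this against the syslog works (the default log path is an assumption; adjust it, or pass your own log file, for your system):

```shell
#!/bin/sh
# Count OOM-killer victims per process name in a syslog. The default
# path is an assumption; pass your own log file as the first argument.
LOG="${1:-/var/log/syslog}"
grep -o "Killed process [0-9]* ([^)]*)" "$LOG" \
    | awk '{print $NF}' | sort | uniq -c | sort -rn
```

Run against the log above, this would show the count of kills attributed to each command name, making the repeat offender obvious at a glance.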
  13. Docker Compose would be a nice feature, but if Unraid is not going that way, I would vote for "grouping functionality" in the docker web interface.
  14. I found this a good read: https://access.redhat.com/security/vulnerabilities/L1TF-perf
  15. Jul 8 18:45:22 DeusVult kernel: Call Trace:
      Jul 8 18:45:22 DeusVult kernel: local_pci_probe+0x3c/0x7a
      Jul 8 18:45:22 DeusVult kernel: pci_device_probe+0x11b/0x154
      Jul 8 18:45:22 DeusVult kernel: driver_probe_device+0x142/0x2a6
      Jul 8 18:45:22 DeusVult kernel: __driver_attach+0x68/0x88
      Jul 8 18:45:22 DeusVult kernel: ? driver_probe_device+0x2a6/0x2a6
      Jul 8 18:45:22 DeusVult kernel: bus_for_each_dev+0x63/0x7a
      Jul 8 18:45:22 DeusVult kernel: bus_add_driver+0xe1/0x1c6
      Jul 8 18:45:22 DeusVult kernel: driver_register+0x7d/0xaf
      Jul 8 18:45:22 DeusVult kernel: ? 0xffffffffa02dc000
      Jul 8 18:45:22 DeusVult kernel: do_one_initcall+0x89/0x11e
      Jul 8 18:45:22 DeusVult kernel: ? kmem_cache_alloc+0x9f/0xe8
      Jul 8 18:45:22 DeusVult kernel: do_init_module+0x51/0x1be
      Jul 8 18:45:22 DeusVult kernel: load_module+0x1854/0x1e3f
      Jul 8 18:45:22 DeusVult kernel: ? SyS_init_module+0xba/0xe0
      Jul 8 18:45:22 DeusVult kernel: SyS_init_module+0xba/0xe0
      Jul 8 18:45:22 DeusVult kernel: do_syscall_64+0x6d/0xfe
      Jul 8 18:45:22 DeusVult kernel: entry_SYSCALL_64_after_hwframe+0x3d/0xa2
      Jul 8 18:45:22 DeusVult kernel: RIP: 0033:0x14bb588958aa
      Jul 8 18:45:22 DeusVult kernel: RSP: 002b:00007ffeacc548c8 EFLAGS: 00000246 ORIG_RAX: 00000000000000af
      Jul 8 18:45:22 DeusVult kernel: RAX: ffffffffffffffda RBX: 0000000000625e50 RCX: 000014bb588958aa
      Jul 8 18:45:22 DeusVult kernel: RDX: 0000000000629be0 RSI: 00000000002202e8 RDI: 0000000000f1fa80
      Jul 8 18:45:22 DeusVult kernel: RBP: 0000000000629be0 R08: ffffffffffffffe0 R09: 00007ffeacc52a68
      Jul 8 18:45:22 DeusVult kernel: R10: 0000000000623010 R11: 0000000000000246 R12: 0000000000f1fa80
      Jul 8 18:45:22 DeusVult kernel: R13: 0000000000625f80 R14: 0000000000040000 R15: 0000000000000000
      Jul 8 18:45:22 DeusVult kernel: Code: 00 e8 6a c7 e7 ff 8b 83 28 06 00 00 83 e8 16 83 e0 fd 0f 84 16 02 00 00 48 c7 c6 aa cf 4c a0 48 c7 c7 a7 c9 4c a0 e8 d4 32 c6 e0 <0f> 0b e9 fc 01 00 00 66 3d 00 a3 75 4e 48 c7 c2 49 d0 4c a0 be
      Jul 8 18:45:22 DeusVult kernel: ---[ end trace 3f2b0267604ec704 ]---

      I'm also getting call traces on the i915, noticed since the 6.5.3 series. I made a post here: https://lime-technology.com/forums/topic/72427-call-trace-on-653-series/ It looks like the i915 driver / module is funky after 6.5.2 for more users on unraid than just me. Since hardware transcoding is still working properly, I muted the error.