quietwalker

Members
  • Posts: 28
  • Joined
  • Last visited


quietwalker's Achievements

Noob (1/14)

2 Reputation

  1. This fixed the issue of ASPM being disabled on the Realtek card. It is now shown as enabled (see the first sketch after this list for a quick way to verify it): echo 1 > /sys/bus/pci/devices/0000:02:00.0/link/l1_aspm Unfortunately powertop still shows
  2. Hi people! My setup is:
     - Mobo: ASRock H670M-ITX/ax
     - RAM: 2x16GB
     - PSU: picoPSU 150W
     - CPU: 13th Gen Intel® Core™ i3-13100 @ 3366 MHz
     - 2 NVMe drives as cache: Crucial P3
     - 2 SSDs: 1 Samsung 870 QVO and 1 Crucial BX500
     - 2x12TB HDDs: 1 parity and 1 data disk
     Unfortunately I'm not able to go lower than C3. The ASPM output shows ASPM disabled for 2 devices. I've enabled all the ASPM, P-core and C-state settings suggested in the first @mgutt post, but I still can't understand why I cannot enable ASPM for those 2 devices shown in the output above (a sketch for listing the devices that still report ASPM as disabled is appended after this list). These are some of the settings I edited in the BIOS:
     - CPU C-state support: enabled
     - Enhanced Halt State (C1E): enabled
     - CPU C6/C7 state support: enabled
     - Package C-state support: enabled
     - CFG lock: enabled (don't really know if it's necessary or not)
     - C6 DRAM: enabled
     - PCIe native control: enabled
     - PCIe ASPM support: L0sL1
     - PCH PCIe ASPM support: L1 (the only value available)
     - DMI ASPM support: enabled
     - SATA aggressive link power management: enabled
     - Onboard WAN device: disabled
     - Onboard audio: disabled
     I also performed the actions suggested here, using another SSD formatted with Windows 11 only to perform those steps. My power draw from the wall is actually not bad at idle (it fluctuates between 18 W and 30 W, especially after I did the steps from the Reddit post above; before that, the power draw was pretty stable at 20 W at idle), but I think this mobo could consume less if it could reach lower C-states.
     EDIT: with Ubuntu desktop on the same setup and the disks spun down it consumes 16 W, but powertop shows a different interface where C-states are only listed as C1_ACPI, C2_ACPI, C3_ACPI and RC6pp, and it stays in C3_ACPI 97% of the time (I don't know if this helps). In addition, lspci reports the ASPM status as enabled for all devices. Does anyone have any suggestions for me?
  3. Sure, if it may help you. Alternatively, you can install the GPU Statistics plugin (this is the package URL: https://raw.githubusercontent.com/b3rs3rk/gpustat-unraid/master/gpustat.plg). Have you added the following to the Extra Parameters setting of the Jellyfin container (you have to enable the advanced view)? --device=/dev/dri If you have done it, check on the host and inside the docker container (with docker exec -it <container name> bash, or by using the Unraid interface to open a terminal session in the container) whether that device is available (see the sketch after this list). jellyfin-template.xml
  4. The template for the official one was removed recently because its maintainer was no longer able to maintain it. I guess you can just select one of the templates available in the App Center and use the official docker image in the "Repository" field (it is jellyfin/jellyfin). Of course, you should adapt the container's internal paths (a rough example of running the official image is included after this list).
  5. After some hours the same problem occurred again: on the Docker page I see 3 containers with the string "not available". The containers are:
     - Binhex-QBitTorrentVPN
     - Firefly III
     - PhotoPrism
  6. After the CA upgrade of the containers and after the Unraid upgrade (and reboot) it looks good now. The Docker page shows all the containers as up to date and no failures are shown. I will monitor the situation over the next few days, hoping the issue was solved by the upgrade (or the reboot, who knows 😆).
  7. I've discovered something really strange: in CA I saw the Action Center alert icon and found that there were 3 container updates: PhotoPrism, binhex-qbittorrent and Firefly. At this point I don't know why the Docker page is not able to show those updates as well. In the meantime I also updated the Unraid release to v6.12.0.
  8. @dlandon have you found something which may help me understand the cause of the issue? Thanks in advance.
  9. Sure, this is the diagnostics archive. Thank you! homeserver-diagnostics-20230614-1413.zip
  10. Hello, I'm having issues upgrading 3 of my containers in my Unraid setup. I read a lot of comments/posts regarding the misbehavior of the container upgrade feature when Pi-hole is used as DNS, so, since I use Pi-hole, I decided to use a different DNS server. I've tried multiple DNS servers (Quad9 and Cloudflare, just to mention 2) but the problem still persists. The strange thing is that this issue only happens for 3 containers (I currently have 21 running containers), so the question is: how can I troubleshoot this issue? (Some basic checks I could run are sketched after this list.) The impacted containers are Firefly III, VaultWarden and PhotoPrism. Thanks in advance.
  11. Hello, I've recently installed the Nextcloud AIO container on my Unraid system. Immediately after the installation I noticed something strange: the browsing experience in Nextcloud was painfully slow. After some time the Unraid GUI started to send warning messages about the increasing disk utilization of docker.img. So here my journey started... I noticed that even though I provided 2 dedicated shares to Nextcloud (of type "cache only", because I want to keep all the Nextcloud data and its DB on SSDs), it seems to totally ignore them and created all the docker volumes inside the docker.img loop device, as you can see from the image below. This could explain why the browsing experience is so slow and also why the docker.img loop device is so bloated. So the question now is: how can I move all these docker volumes to my Nextcloud-dedicated cache pool? Should I recreate those volumes manually, binding them to my Nextcloud-dedicated cache pool? Is there any better/automated alternative? (A sketch for locating the volumes is included after this list.) Thanks in advance!
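
Regarding post 1: a minimal shell sketch for applying the same change and confirming it from lspci. The PCI address 0000:02:00.0 is the Realtek NIC mentioned in the post; it is only an example and will differ on other systems.

    # Enable ASPM L1 on a specific device and verify it (run as root)
    DEV=0000:02:00.0                   # address taken from the post; adjust for your hardware
    echo 1 > /sys/bus/pci/devices/$DEV/link/l1_aspm
    lspci -s $DEV -vv | grep -i aspm   # LnkCtl should now report "ASPM L1 Enabled"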
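
Regarding post 2: a small sketch, not from the original post, for listing which PCI devices still report ASPM as disabled, plus the standard kernel-wide ASPM policy knob. It assumes pciutils is installed and the commands are run as root.

    # List the PCI devices whose link control still reports "ASPM Disabled"
    lspci -vv 2>/dev/null | awk '/^[0-9a-f]+:[0-9a-f]+\.[0-9a-f]/ {dev=$0} /ASPM Disabled/ {print dev}'
    # Current kernel-wide ASPM policy; the active value is shown in brackets
    cat /sys/module/pcie_aspm/parameters/policy
    # Optionally switch the policy at runtime (may be rejected if the firmware forbids OS control)
    echo powersupersave > /sys/module/pcie_aspm/parameters/policy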
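
Regarding post 3: a quick check, on the host and inside the container, that the Intel iGPU render node is actually visible to Jellyfin. The container name "jellyfin" is an assumption; use whatever name the container has on your system.

    # On the Unraid host: the render node should exist once the i915 driver is loaded
    ls -l /dev/dri
    # Inside the Jellyfin container (container name assumed to be "jellyfin")
    docker exec -it jellyfin ls -l /dev/dri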
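
Regarding post 4: a minimal sketch of running the official jellyfin/jellyfin image directly, in case no App Center template fits. The host paths and port mapping are assumptions based on a typical Unraid layout, not values from the removed template; adapt them to your own shares.

    # Assumed paths: /mnt/user/appdata/jellyfin for config, /mnt/user/media for the library.
    # --device=/dev/dri is optional and only needed for Intel iGPU hardware transcoding.
    docker run -d --name jellyfin \
      -p 8096:8096 \
      -v /mnt/user/appdata/jellyfin:/config \
      -v /mnt/user/media:/media \
      --device=/dev/dri \
      jellyfin/jellyfin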
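
Regarding post 10: a few rough checks, run from the Unraid shell, to separate DNS problems from registry problems during the update check. The image name fireflyiii/core is an assumption about which image the Firefly III container uses; substitute the images your containers actually reference.

    # Does the currently configured DNS resolve Docker Hub's registry?
    nslookup registry-1.docker.io
    # Is the registry reachable? An HTTP 401 response here is expected and means connectivity is fine
    curl -sI https://registry-1.docker.io/v2/ | head -n 1
    # Try a manual pull of one affected image to see the real error message
    docker pull fireflyiii/core:latest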
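
Regarding post 11: a small sketch for locating the Nextcloud AIO volumes inside docker.img and measuring how much space they take, before deciding how to relocate the data. The nextcloud_aio_* volume names follow the pattern AIO normally uses, but check the exact names on your system.

    # List the AIO-managed volumes and where they live inside docker.img
    docker volume ls --format '{{.Name}}' | grep nextcloud_aio
    docker volume inspect nextcloud_aio_nextcloud --format '{{ .Mountpoint }}'
    # How much space each volume consumes inside the loop device
    du -sh /var/lib/docker/volumes/nextcloud_aio_*/_data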