unr41dus3r

Everything posted by unr41dus3r

  1. Another interesting find. If I create the snapshot as cache-mirror@borgmatic and the snapshot gets stuck, I receive the error I described in the RC7 thread when stopping the array / rebooting:
     Jun 13 15:25:22 tower emhttpd: shcmd (9223): /usr/sbin/zpool export cache-mirror
     Jun 13 15:25:22 tower root: cannot unmount '/mnt/cache-mirror': pool or dataset is busy
     Jun 13 15:25:22 tower emhttpd: shcmd (9223): exit status: 1
     If I create the snapshot as cache-mirror/appdata@borgmatic and the snapshot gets stuck, the array can still be stopped normally.
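     For clarity, these are the two variants I am comparing (a minimal command sketch; cache-mirror is my pool, "borgmatic" is just the snapshot name I use):
     # snapshot on the pool-root dataset - a stuck snapshot here blocks 'zpool export' on array stop
     zfs snapshot cache-mirror@borgmatic
     # snapshot on the child dataset - a stuck snapshot here does not block stopping the array
     zfs snapshot cache-mirror/appdata@borgmatic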
  2. Could you explain what you mean by a child dataset? I don't exactly know how to do this. About the symlink-loop error message inside the container: is this expected, or is there something wrong?
  3. Yes, I did a complete snapshot of the appdata dataset, but that only freezes the state; all new files are written on top of it. With the send | receive command it copies the complete frozen state into a new dataset, so I have the cache-mirror/appdata dataset incl. snapshot and now a new dataset cache-mirror/borgmatic. I don't know exactly how ZFS handles compression and dedup in this situation, but it now uses twice the space, and it performed a more or less full copy operation, judging by the saturation of my SSDs.
     Edit: With this method I don't get the symlink-loop error or the "dataset is busy" error when trying to destroy. But the problem remains that it is too much overhead, as I understand it. Example:
     cache-mirror/appdata = 150GB
     cache-mirror/appdata@borgmatic = the snapshot itself only uses some MB, maybe 1GB
     Now I start a backup of the snapshot (the frozen state at that specific point in time). When I issue the send | receive command, I need much more space and time/performance:
     cache-mirror/appdata = 150GB
     cache-mirror/appdata@borgmatic = 1GB (for example)
     cache-mirror/borgmatic = 150GB
     So I need double the space, and it takes time to copy the data. I hope it is understandable what I mean.
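     To make the overhead concrete, this is roughly the flow and what the space check looks like afterwards (a sketch; the sizes are my example numbers from above, not real output):
     # replicate the frozen snapshot state into a new dataset
     zfs send cache-mirror/appdata@borgmatic | zfs receive cache-mirror/borgmatic
     # check space usage afterwards (snapshots included via -t all)
     zfs list -t all -r -o name,used,refer cache-mirror
     # NAME                             USED   REFER
     # cache-mirror/appdata             150G   150G
     # cache-mirror/appdata@borgmatic     1G   150G   <- only the delta since the snapshot
     # cache-mirror/borgmatic           150G   150G   <- full second copy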
  4. Hmm, I'm afraid this is not what I want to achieve? zfs send cache-mirror/appdata@borgmatic | zfs receive cache-mirror/borgmatic The process is now running, but I'm afraid it is copying all data as of the snapshot time into a cache-mirror/borgmatic dataset? I don't want to clone the complete folder; my appdata folder has 150GB ^^ This is counterproductive: the backup itself would only copy the delta, and now I would clone the complete directory for every backup. I will still test whether it works this way; it just needs some time to copy the data.
  5. OK, I tested the following.
     Working:
     Create snapshot
     Start container
     Stop container
     Destroy snapshot
     Not working:
     Create snapshot
     Start container
     ls /mnt/cache-mirror/appdata/.zfs/snapshot/borgmatic
     ls: /mnt/cache-mirror/appdata/.zfs/snapshot/borgmatic/: Symbolic link loop
     Stop container
     Destroy snapshot
     cannot destroy snapshot cache-mirror/appdata@borgmatic: dataset is busy
     Trying to fix the problem:
     Disable Docker - still unable to delete the snapshot
     Stop array
     Start array (Docker still disabled)
     Destroy is working
     Looks like a process is stuck? Next step I will try your suggestion.
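     The failing sequence written out as commands (a sketch; "borgmatic" as the container name is just a placeholder for my backup container):
     zfs snapshot cache-mirror/appdata@borgmatic
     docker start borgmatic
     ls /mnt/cache-mirror/appdata/.zfs/snapshot/borgmatic
     # ls: /mnt/cache-mirror/appdata/.zfs/snapshot/borgmatic/: Symbolic link loop
     docker stop borgmatic
     zfs destroy cache-mirror/appdata@borgmatic
     # cannot destroy snapshot cache-mirror/appdata@borgmatic: dataset is busy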
  6. Hi everyone, I don't know if this is an Unraid, Docker or Borgmatic problem (using RC8). I want to do the following:
     Create a ZFS snapshot of the appdata dataset for borgmatic (cache-mirror/appdata): zfs snapshot cache-mirror/appdata@borgmatic
     Mount the complete cache-mirror pool into the borgmatic container: /mnt/cache-mirror to /mnt/cache-mirror
     Back up /mnt/cache-mirror/appdata/.zfs/snapshot/borgmatic
     Problem: If the container is started BEFORE the snapshot is created, I receive this error when I try to access the folder: dir_open: [Errno 40] Symbolic link loop: 'borgmatic' - this happens with the 'ls' command and with borgmatic.
     Solution: If the container is started AFTER the snapshot is created, it works fine.
     Next problem: Every time the container is started AFTER the snapshot is created (the "solution" above), I can't delete the snapshot afterwards. It doesn't matter whether the borgmatic container is stopped or not: cannot destroy snapshot cache-mirror/appdata@borgmatic: dataset is busy
     Sometimes (I couldn't reproduce it every time), when the container was started BEFORE the snapshot was created (i.e. when access to the snapshot (.zfs folder) from inside the container is not working), destroying the snapshot is possible.
     It is possible I am using the snapshot command wrong or should do this another way, but because it's ZFS, maybe there is another problem in the RCs. @JorgeB sorry for pinging you directly, but we already communicated in the RC7 thread and maybe this is related to the other "dataset is busy" problems.
     Edit: Can you tell me how I could force-remove the snapshot or unmount it? Every time the error happens I reboot, because the force unmount and force destroy snapshot commands don't work for me.
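     The workflow I am aiming for, as commands (a sketch; the container path mapping is configured in the Docker template, paths as in my setup):
     # 1. snapshot the appdata dataset
     zfs snapshot cache-mirror/appdata@borgmatic
     # 2. the borgmatic container maps /mnt/cache-mirror -> /mnt/cache-mirror,
     #    so inside the container the frozen state is visible read-only at
     #    /mnt/cache-mirror/appdata/.zfs/snapshot/borgmatic
     # 3. after the backup has finished, drop the snapshot again
     zfs destroy cache-mirror/appdata@borgmatic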
  7. I want to report something, maybe it is already known. I use a cache pool formatted as ZFS. All newly created shares get their own dataset. The problem is that existing shares/folders that live only on the pool (for example appdata) will never become datasets, because the "folder" stays a plain directory on the pool. A new share has to be created for a dataset to be created. I have now migrated all of my shares/folders to datasets. I don't know if this could be automated, or whether it should at least be mentioned somewhere.
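     A quick way I used to see which shares are real datasets and which are still plain folders (a sketch; cache-mirror is my pool name):
     # datasets ZFS knows about on the pool
     zfs list -r -o name cache-mirror
     # everything visible at the mountpoint, datasets and plain directories alike
     ls /mnt/cache-mirror
     # any share that shows up in ls but not in 'zfs list' is still a plain folder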
  8. Sorry, I am hijacking this now, but I think the comments about RC7 are over. I think I found the problem. I tried to shut down the server and had a problem unmounting cache-mirror. I found out a snapshot is stuck and busy and can't be destroyed:
     cannot destroy snapshot cache-mirror@backup1: dataset is busy
     I create this snapshot with "zfs snapshot cache-mirror@backup1" for my backup Docker container and use "zfs destroy cache-mirror@backup1" to destroy it, but then I receive the above error. At the moment I don't know why this happens. I use this snapshot to back up my appdata folder.
     NAME USED AVAIL REFER MOUNTPOINT
     cache-mirror@backup1 8.26G - 167G -
     I will debug it next, maybe you have an idea.
     Edit: It is possible this is a snapshot from an older RC. I started with RC5 and it could be from that version. The Docker service is of course disabled.
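     A few things I plan to check before the next reboot (just my debugging sketch, assuming standard ZFS tooling, not a known fix):
     # is anything formally holding the snapshot?
     zfs holds cache-mirror@backup1
     # is the snapshot auto-mounted somewhere under .zfs?
     mount | grep backup1
     # is any process still using the pool mountpoint?
     lsof +D /mnt/cache-mirror 2>/dev/null | head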
  9. @JorgeB Sadly, this night I received the old error again.
     rserver shfs: /usr/sbin/zfs create 'cache-mirror/share1' |& logger
     rserver root: cannot create 'cache-mirror/share1': dataset already exists
     rserver shfs: command failed: 1
     I am on RC8 now, but the dataset was probably created with RC7. I can see it tried to delete the dataset but couldn't, because it was busy. Before this, a forced mover schedule was running. Log of the failed destroy (maybe a second attempt after some time could be implemented?):
     tower shfs: /usr/sbin/zfs destroy -r 'cache-mirror/share1' |& logger
     tower root: cannot destroy 'cache-mirror/share1': dataset is busy
     tower shfs: error: retval 1 attempting 'zfs destroy'
     Some hours after this error I got the message that the new dataset cache-mirror/share1 could not be created. I did the following: in /mnt/cache-mirror/ the share1 folder is missing, but with "zfs list" I can see the share1 dataset. "zfs mount -a" mounted the dataset correctly at /mnt/cache-mirror/share1. After a mover run, the folder and the dataset share1 were correctly removed. As I wrote above, I am on RC8 now, BUT the dataset was created with RC7 as far as I remember. I will report back in the RC8 thread or create a new bug report if the error occurs again.
  10. You are completely right! It was a script I had. Thanks! It looks like the dataset errors are also gone, so your idea of recreating the dataset with RC7 should fix the problem. Will update to RC8 next.
  11. I thought so; still, it appears as an error in the notification area. Edit: In the syslog itself it shows as a white (normal) message, but I receive an email saying it is an alert.
  12. @JorgeB For now the massive create errors are gone (before, the errors were spammed every minute when something was wrong), so it looks like it helped. But over the last hours I got a few single errors in the syslog, although they are not shown directly as a red error in the log:
     shfs: set -o pipefail ; /usr/sbin/zfs create 'cache-mirror/share1' |& logger
     some hours later:
     shfs: set -o pipefail ; /usr/sbin/zfs destroy -r 'cache-mirror/share1' |& logger
     No further information is found in the syslog. A share1 dataset did exist at some point, but when I checked my datasets with zfs list after receiving the above destroy error, there was no share1 dataset. Maybe it was destroyed on a second attempt, but I don't see anything about that in the log.
  13. Status now: I ran "zfs mount -a" and after that I saw that all folders/datasets (zfs list) were on the cache drive. I started the Mover and now the folders and the datasets are removed (checked with zfs list again). It was possible to copy something onto the drive and the dataset was created correctly, so the error is gone for the moment. I will report back if it happens again.
  14. I will do this later, but I found these 2 bug reports and it looks like they are the same as mine. When the above error
     shfs: set -o pipefail ; /usr/sbin/zfs create 'cache-mirror/share1' |& logger
     root: cannot create 'cache-mirror/share1': dataset already exists
     shfs: command failed: 1
     occurs, I can't write anything to "mnt/user/share1/testfolder". It does not matter whether I create files from my PC over the SMB share or use the web GUI file browser to create a folder in "mnt/user/share1/". share1 has cache-mirror set as its cache drive. After creating the folder "mnt/cache-mirror/share1" I can copy files and folders into "mnt/user/share1" normally.
  15. The problem is that this error occurs when there is no folder, and it happens with all shares. After I created the folder, the errors don't appear again for the moment. I think after a mover action the errors appear again; then the "share folder" is missing until I create the folder manually.
     Edit: I received this destroy error 1 hour before the create error appeared in the syslog:
     shfs: set -o pipefail ; /usr/sbin/zfs destroy -r 'cache-/m..r/.../share1' |& logger
     root: cannot destroy 'cache-/m..r/.../share1': dataset is busy
     shfs: error: retval 1 attempting 'zfs destroy'
  16. Hey everyone, I switched from a BTRFS cache pool to a ZFS cache pool. Nothing changed in the config; I only created the new pool, copied the files, deleted the old pool and renamed the new one. I now receive this error in the log from time to time:
     shfs: set -o pipefail ; /usr/sbin/zfs create 'cache-mirror/share1' |& logger
     root: cannot create 'cache-mirror/share1': dataset already exists
     shfs: command failed: 1
     When I run "mkdir /mnt/cache-mirror/share1", the folder is created AND I can see that there are files in it. The problem is then fixed for some time. I had this problem with RC6 as well. Does anybody else experience this error?
  17. Hey everyone, as I understand it, the estimated finish time of a parity check is based on the current speed of the HDDs. As the HDDs get slower, the estimated finish time is miscalculated most of the time. My idea: when the array configuration has not changed since the last run, use the last run's time for the estimate. For a slightly more accurate figure, you could additionally combine the elapsed time and the % progress of the last run and of the currently running job. A really simple example formula:
     (time the last run needed to complete) - (time the last run needed to reach x%) + (time the current run needed to reach x%) = estimated total time for the current run
     Example at the 25% mark:
     Last run (finished after 10 hours): reached 25% after 2 hours.
     Current run (to be estimated): reached 25% after 2:30 hours.
     10 hours - 2 hours + 2:30 hours = 10:30 hours estimated total runtime for the current run, i.e. roughly 8 hours remaining from the 25% mark (and since the current run is already 30 minutes slower, it will likely take at least that long).
     If the current run is faster, it works the same way. I know that if the array is heavily used while the check is running, the time differs again, but at the moment the estimated finish time is worse, I would say.
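     A tiny sketch of the calculation in shell, using the example numbers above (all values hypothetical, times in minutes):
     #!/bin/bash
     last_total=600       # last run finished after 10 hours
     last_at_pct=120      # last run reached 25% after 2 hours
     current_at_pct=150   # current run reached 25% after 2:30 hours

     # estimated total runtime of the current run, per the formula above
     est_total=$(( last_total - last_at_pct + current_at_pct ))
     # time still remaining, measured from the 25% mark
     est_remaining=$(( est_total - current_at_pct ))
     printf 'estimated total: %dh%02dm, remaining: %dh%02dm\n' \
       $(( est_total / 60 )) $(( est_total % 60 )) \
       $(( est_remaining / 60 )) $(( est_remaining % 60 ))
     # prints: estimated total: 10h30m, remaining: 8h00m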
  18. Hi, thanks for the reply, I completely missed it. After a lot of searching I figured out that it was the Docker image. I recreated the image and the error was gone; in the meantime I have switched to Docker Folder anyway.
  19. Will the @Squid Docker Folder plugin be fixed for 6.12? I use RC6, and the problem is that I can't sort my containers within the folders. This would be necessary for dependencies. Thanks for your work.
  20. Hi everyone, quick question: I am thinking about adding a graphics card to my Unraid server. It would be a 7900 XT, and as far as I know its idle power consumption is high. Would that still be the case if the VM is not running and the card is disconnected from the Unraid system (only in use while the VM is running)? Best regards
  21. Hi everyone, I have a small problem. I noticed in Netdata that a BTRFS pool apparently shows errors. I then ran a scrub on both of my BTRFS pools (RAID 1 with 2 disks, plus a single-disk pool), which found no errors. In the next step I noticed that Netdata shows 4 BTRFS pools, although I only have 2 in use. I suspect this is an old pool from a disk that caused problems; I already deleted that cache pool and removed the disk. When I check the folder "/sys/fs/btrfs/" with ls, I also see 4 "folders" there, including the problematic one. Since I am not completely sure what is going on, I wanted to ask whether I can remove such a folder, or how I can verify that it really is no longer in use.
     Status: Pool 1 - 2 disks = UUID: b5f48a8e-3cfb-436e-bbc8-233cb498d28c
     Status: Pool 2 - 1 disk = UUID: fcadba9d-cb01-40a0-b31c-51cf51e33406
     Present in the btrfs folder according to "ls":
     427bd3c5-3a21-493b-9da1-f283c337c00e/ b5f48a8e-3cfb-436e-bbc8-233cb498d28c/ features/ acc67e5b-c774-46f7-9faa-029b2ebc0e00/ fcadba9d-cb01-40a0-b31c-51cf51e33406/
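     To cross-check which UUIDs are actually still in use, I would compare the kernel's view with the active mounts (a sketch, assuming standard btrfs-progs):
     btrfs filesystem show          # filesystems on currently registered devices
     ls /sys/fs/btrfs/              # what the kernel exposes in sysfs
     grep btrfs /proc/mounts        # what is actually mounted right now
     A UUID that appears in sysfs but in neither of the other two outputs is not in use anymore; whether that sysfs entry can then simply be removed is exactly what I am unsure about.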
  22. So, I have now installed the ASM1166, and the controller as well as an additional AMD PCIe device now show up as Enabled in the ASPM overview. In the powertop overview I only see C3 at most, but I believe that is the powertop bug with recent AMD CPUs. My idle power consumption with spun-down disks now drops to ~27W, which is great; I am completely satisfied. With the disks running (without load) it is 40W.
  23. @mgutt I have now looked into loading the amd_pstate driver. It does get loaded now, but my ASPM states are worse than before, with more links disabled. Can that be?
     Currently, with driver amd-pstate:
     00:02.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Renoir/Cezanne PCIe GPP Bridge (prog-if 00 [Normal decode])
       LnkCap: Port #2, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 <32us
       LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
     00:02.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Renoir/Cezanne PCIe GPP Bridge (prog-if 00 [Normal decode])
       LnkCap: Port #1, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
       LnkCtl: ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk+
     00:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Renoir Internal PCIe GPP Bridge to Bus (prog-if 00 [Normal decode])
       LnkCap: Port #0, Speed 8GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
       LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
     01:00.0 USB controller: Advanced Micro Devices, Inc. [AMD] 500 Series Chipset USB 3.1 XHCI Controller (prog-if 30 [XHCI])
       LnkCap: Port #0, Speed 8GT/s, Width x4, ASPM L0s L1, Exit Latency L0s <2us, L1 <32us
       LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
     01:00.1 SATA controller: Advanced Micro Devices, Inc. [AMD] 500 Series Chipset SATA Controller (prog-if 01 [AHCI 1.0])
       LnkCap: Port #0, Speed 8GT/s, Width x4, ASPM L0s L1, Exit Latency L0s <2us, L1 <32us
       LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
     01:00.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] 500 Series Chipset Switch Upstream Port (prog-if 00 [Normal decode])
       LnkCap: Port #0, Speed 8GT/s, Width x4, ASPM L0s L1, Exit Latency L0s <2us, L1 <32us
       LnkCtl: ASPM Disabled; Disabled- CommClk+
     02:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43ea (prog-if 00 [Normal decode])
       LnkCap: Port #0, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
       LnkCtl: ASPM Disabled; Disabled- CommClk-
     02:04.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43ea (prog-if 00 [Normal decode])
       LnkCap: Port #4, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
       LnkCtl: ASPM L1 Enabled; Disabled- CommClk+
     02:08.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43ea (prog-if 00 [Normal decode])
       LnkCap: Port #8, Speed 8GT/s, Width x1, ASPM L1, Exit Latency L1 <64us
       LnkCtl: ASPM Disabled; Disabled- CommClk+
     02:09.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43ea (prog-if 00 [Normal decode])
       LnkCap: Port #9, Speed 8GT/s, Width x1, ASPM L1, Exit Latency L1 <64us
       LnkCtl: ASPM L1 Enabled; Disabled- CommClk+
     04:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller PM9A1/PM9A3/980PRO (prog-if 02 [NVM Express])
       LnkCap: Port #0, Speed 16GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
       LnkCtl: ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk+
     pcilib: sysfs_read_vpd: read failed: No such device
     05:00.0 SATA controller: Marvell Technology Group Ltd. 88SE9215 PCIe 2.0 x1 4-port SATA 6 Gb/s Controller (rev 11) (prog-if 01 [AHCI 1.0])
       LnkCap: Port #0, Speed 5GT/s, Width x1, ASPM L0s L1, Exit Latency L0s <512ns, L1 <64us
       LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
     06:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8125 2.5GbE Controller (rev 05)
       LnkCap: Port #0, Speed 5GT/s, Width x1, ASPM L0s L1, Exit Latency L0s unlimited, L1 <64us
       LnkCtl: ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk+
     07:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller PM9A1/PM9A3/980PRO (prog-if 02 [NVM Express])
       LnkCap: Port #0, Speed 16GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
       LnkCtl: ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk+
     08:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Cezanne [Radeon Vega Series / Radeon Vega Mobile Series] (rev d9) (prog-if 00 [VGA controller])
       LnkCap: Port #0, Speed 8GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
       LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
     08:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Renoir Radeon High Definition Audio Controller
       LnkCap: Port #0, Speed 8GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
       LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
     08:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 10h-1fh) Platform Security Processor
       LnkCap: Port #0, Speed 8GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
       LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
     08:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Renoir/Cezanne USB 3.1 (prog-if 30 [XHCI])
       LnkCap: Port #0, Speed 8GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
       LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
     08:00.4 USB controller: Advanced Micro Devices, Inc. [AMD] Renoir/Cezanne USB 3.1 (prog-if 30 [XHCI])
       LnkCap: Port #0, Speed 8GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
       LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
     Previous post (before the driver change):
     00:02.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Renoir/Cezanne PCIe GPP Bridge (prog-if 00 [Normal decode])
       LnkCap: Port #2, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 <32us
       LnkCtl: ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk+
     00:02.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Renoir/Cezanne PCIe GPP Bridge (prog-if 00 [Normal decode])
       LnkCap: Port #1, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
       LnkCtl: ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk+
     00:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Renoir Internal PCIe GPP Bridge to Bus (prog-if 00 [Normal decode])
       LnkCap: Port #0, Speed 8GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
       LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
     01:00.0 USB controller: Advanced Micro Devices, Inc. [AMD] 500 Series Chipset USB 3.1 XHCI Controller (prog-if 30 [XHCI])
       LnkCap: Port #0, Speed 8GT/s, Width x4, ASPM L0s L1, Exit Latency L0s <2us, L1 <32us
       LnkCtl: ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk+
     01:00.1 SATA controller: Advanced Micro Devices, Inc. [AMD] 500 Series Chipset SATA Controller (prog-if 01 [AHCI 1.0])
       LnkCap: Port #0, Speed 8GT/s, Width x4, ASPM L0s L1, Exit Latency L0s <2us, L1 <32us
       LnkCtl: ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk+
     01:00.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] 500 Series Chipset Switch Upstream Port (prog-if 00 [Normal decode])
       LnkCap: Port #0, Speed 8GT/s, Width x4, ASPM L0s L1, Exit Latency L0s <2us, L1 <32us
       LnkCtl: ASPM L1 Enabled; Disabled- CommClk+
     02:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43ea (prog-if 00 [Normal decode])
       LnkCap: Port #0, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
       LnkCtl: ASPM Disabled; Disabled- CommClk+
     02:04.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43ea (prog-if 00 [Normal decode])
       LnkCap: Port #4, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
       LnkCtl: ASPM L1 Enabled; Disabled- CommClk+
     02:08.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43ea (prog-if 00 [Normal decode])
       LnkCap: Port #8, Speed 8GT/s, Width x1, ASPM L1, Exit Latency L1 <64us
       LnkCtl: ASPM Disabled; Disabled- CommClk-
     pcilib: sysfs_read_vpd: read failed: No such device
     02:09.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43ea (prog-if 00 [Normal decode])
       LnkCap: Port #9, Speed 8GT/s, Width x1, ASPM L1, Exit Latency L1 <64us
       LnkCtl: ASPM L1 Enabled; Disabled- CommClk+
     03:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)
       LnkCap: Port #0, Speed 5GT/s, Width x8, ASPM L0s, Exit Latency L0s <64ns
       LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
     pcilib: sysfs_read_vpd: read failed: No such device
     04:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller PM9A1/PM9A3/980PRO (prog-if 02 [NVM Express])
       LnkCap: Port #0, Speed 16GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
       LnkCtl: ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk+
     06:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8125 2.5GbE Controller (rev 05)
       LnkCap: Port #0, Speed 5GT/s, Width x1, ASPM L0s L1, Exit Latency L0s unlimited, L1 <64us
       LnkCtl: ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk+
     07:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller PM9A1/PM9A3/980PRO (prog-if 02 [NVM Express])
       LnkCap: Port #0, Speed 16GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
       LnkCtl: ASPM L1 Enabled; RCB 64 bytes, Disabled- CommClk+
     08:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Cezanne [Radeon Vega Series / Radeon Vega Mobile Series] (rev d9) (prog-if 00 [VGA controller])
       LnkCap: Port #0, Speed 8GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
       LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
     08:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Renoir Radeon High Definition Audio Controller
       LnkCap: Port #0, Speed 8GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
       LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
     08:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 10h-1fh) Platform Security Processor
       LnkCap: Port #0, Speed 8GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
       LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
     08:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Renoir/Cezanne USB 3.1 (prog-if 30 [XHCI])
       LnkCap: Port #0, Speed 8GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
       LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
     08:00.4 USB controller: Advanced Micro Devices, Inc. [AMD] Renoir/Cezanne USB 3.1 (prog-if 30 [XHCI])
       LnkCap: Port #0, Speed 8GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
       LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
     Addendum: What just occurred to me: earlier I also swapped the Dell PERC for the cheap Marvell SATA controller; could that be the reason? Nothing critical is attached to that controller and it will be replaced in 3 days.
     Addendum 2: Yes, I can hereby confirm it is most likely the Marvell chip. I disabled the AMD driver again and the ASPM status is still just as "bad". I will install the ASM1166 controller in the next few days and report back whether it is better.
  24. @DataCollector @mgutt Thanks to you both for digging all of that up. I have decided on the ASM1166 from Amazon; it costs 10€ more than on eBay/AliExpress, but it ships quickly. (There is also the problem that many China sellers don't ship to Austria.) https://amzn.to/42bVKRZ
  25. Thank you, I will take the one from eBay; it takes a while, but there is nothing on Amazon (that ships to me) and it should be sufficient. Many thanks! I removed the Micron SSD and the Dell PERC H310, and with the 4 HDDs and the Intenso SSD I am now at 50W on average. I think that's already pretty cool.