jamikest

Members
  • Posts: 7

  1. I was doing some maintenance today and updated to 6.12.8. Afterwards I noticed my power consumption had increased by about 8W (direct measurement at an inline meter). I checked powertop and confirmed my package C3 residency is now only about 18%; it used to be around 55%. All tunables are "Good", all wakeups are disabled, and ASPM is enabled everywhere. Edit: I have removed FCP and rebooted, and the C3 state still went from 55% to 18% about ten minutes after startup, so the observation below was just a coincidence. Nothing else changes in the syslog at that moment. Here is the strange thing I noticed after several reboots and checks: the system starts at 55% package C3, and the exact moment Fix Common Problems Version 2024.02.22 shows up in the logs, C3 drops to 18% and power consumption jumps by about 8W. Since this plugin was recently updated (two days ago), I am not sure whether the C3 drop came with Fix Common Problems or with the system update I did today. Any thoughts as to what in FCP (or the system update) could be affecting my C3 state?
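     In case anyone wants to compare, this is roughly how I am capturing residency at two points after boot; the sample length and file paths are just examples, and the report sections may be labeled slightly differently depending on the powertop version:

     # 60-second powertop sample shortly after the array comes up
     powertop --time=60 --csv=/boot/logs/powertop_boot.csv
     # repeat ~15 minutes later, after all plugins have loaded
     powertop --time=60 --csv=/boot/logs/powertop_15min.csv
     # then compare the package C-state rows in the Idle Stats section of the two reports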
  2. I just wanted to say thanks to you and unr41dus3r. I had this same issue with my forced mover. Hopefully a permanent fix can be implemented.
  3. Adding to the growing number of users for whom this fix does not resolve the issue. I have an array of 10 drives, four 8 TB and six 12 TB, running from an LSI card / expander combo. I have four ST8000VN004 drives and have disabled EPC and low power spin up on them. Two of the ST8000VN004 drives continue to throw errors: the logs show they are not spinning up within 15 seconds, and then the read/write errors occur. All four have the same firmware (60). My latest attempt to solve this: I have removed the expander card and run the 8TB drives directly from the motherboard, leaving only the 12TB drives on the LSI card. Fingers crossed!
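     For anyone tracking the same thing, a quick way I check which drives are still hitting the 15-second spin-up timeout is to grep the syslog for the same messages I quoted in my earlier post below (this assumes your controller logs the same "attempting task abort" text mine does):

     # every spin-up request since boot
     grep "spinning up /dev/sd" /var/log/syslog
     # every 15-second timeout / task abort, to cross-reference against the spin-ups
     grep "attempting task abort" /var/log/syslog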
  4. I believe I may have solved my previous question: I had added another SSD via USB to replace a failing unassigned drive. I had never rebooted the system, and the array kept working. During this process I rebooted, and the array would not start; the USB-attached SSD had pushed me over my license key's device limit! I upgraded to an unlimited license and was then able to use unBALANCE to move the data.
  5. Hey there! I have seen this question multiple times in this forum, yet it has never been definitively answered in the two years that it keeps popping up. unBALANCE is stating:

     unBALANCE needs exclusive access to disks, so disable mover and/or any dockers that write to disks, before running it. Also note that transfer speed may be affected by disk health.

     I have disabled dockers and still cannot "scatter" a drive that I wish to upgrade. The options are all grayed out (plan, move, copy), and there is no option to disable mover. Any ideas how to proceed? Previous (unsolved) questions on this from this and other forums:

     https://forums.unraid.net/topic/43651-plug-in-unbalance/page/62/
     https://www.reddit.com/r/unRAID/comments/umnwxo/unbalance_not_workingloading/
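     For reference, these are the only checks I can think of to rule out something still holding the disk open; the disk number is just an example and the commands assume the standard Unraid mount points:

     # stop any containers that are still running
     docker stop $(docker ps -q)
     # see which processes, if any, still have files open on the disk I want to scatter
     fuser -vm /mnt/disk3
     lsof /mnt/disk3 2>/dev/null | head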
  6. Just to add to this a bit further: I am running both 4TB ST4000VN008 and 8TB ST8000VN004 Ironwolf drives. I have not had a single error on my five 4TB drives since I started my build ~5 months ago. As soon as I added an 8TB Ironwolf to my array, the errors started. One more thing I find interesting: I swapped an 8TB Ironwolf into my parity slot ~6 weeks ago and have had no errors on that parity drive. I am not sure why the parity drive behaves differently. I disabled EPC and low power spin up on all the 8TB drives (parity and array) and left the 4TB drives as is.
  7. I just wanted to give a quick THANK YOU for this post. I was receiving multiple errors when my 8TB Ironwolf drive would spin up. I went through new cables, relocating the drive on the controller, and finally trying a new 8TB Ironwolf drive. The issue persisted through all of these measures. Digging a bit deeper, I found this post. I tried the SeaChest commands and the spin-up errors are resolved. For anyone else searching the forum, here is the syslog output any time a drive would spin up (sometimes with read errors in Unraid, sometimes with no read errors, as in the example below):

     Apr 17 11:03:37 Tower emhttpd: spinning up /dev/sdc
     Apr 17 11:03:53 Tower kernel: sd 7:0:1:0: attempting task abort!scmd(0x000000009175e648), outstanding for 15282 ms & timeout 15000 ms
     Apr 17 11:03:53 Tower kernel: sd 7:0:1:0: [sdc] tag#1097 CDB: opcode=0x85 85 06 20 00 00 00 00 00 00 00 00 00 00 40 e3 00
     Apr 17 11:03:53 Tower kernel: scsi target7:0:1: handle(0x0009), sas_address(0x4433221101000000), phy(1)
     Apr 17 11:03:53 Tower kernel: scsi target7:0:1: enclosure logical id(0x5c81f660d1f49300), slot(2)
     Apr 17 11:03:56 Tower kernel: sd 7:0:1:0: task abort: SUCCESS scmd(0x000000009175e648)
     Apr 17 11:03:56 Tower emhttpd: read SMART /dev/sdc

     After disabling low current spin up and EPC, here is the result of spinning up the same drive (no errors!):

     Apr 17 12:08:42 Tower emhttpd: spinning up /dev/sdc
     Apr 17 12:08:51 Tower emhttpd: read SMART /dev/sdc
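     For anyone finding this later, the commands I ran were along these lines; the exact tool and flag names vary between SeaChest/openSeaChest releases, and /dev/sg3 is just an example, so check each utility's --help output against your version before running anything:

     # list the attached drives and their sg handles
     SeaChest_Info --scan
     # disable the Extended Power Conditions (EPC) feature on the problem drive
     SeaChest_PowerControl -d /dev/sg3 --EPCfeature disable
     # disable low current spin up on the same drive
     SeaChest_Configure -d /dev/sg3 --lowCurrentSpinup disable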