cholzer

Everything posted by cholzer

  1. Guys, you should really open a new report in the correct subforum because mine is closed. Hardly anyone from the dev team will read closed reports.
  2. Hi! One of my Seagate IronWolf 8TB drives shows 14 read errors. How concerned should I be about that? The drive is 13 months old and I did pre-clear it. Should I replace it and try an RMA? ST8000VN004-2M2101_WKD17TMQ-20210301-0717.txt
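     On Seagate drives, raw values of attributes like Raw_Read_Error_Rate are packed counters and can look alarmingly huge even on healthy disks; the attributes that usually matter for an RMA decision are Reallocated_Sector_Ct, Current_Pending_Sector and Offline_Uncorrectable. A minimal sketch of pulling just those out of `smartctl -A` output - the table below is sample data, not the values from this particular drive:

```shell
# Extract the failure-relevant SMART attributes from a `smartctl -A` table.
# The excerpt below is sample data, NOT taken from the drive in question.
cat <<'EOF' > /tmp/smart.sample
  1 Raw_Read_Error_Rate     0x000f   081   064   044    Pre-fail  Always       -       115243984
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
EOF
# On a live system you would feed this from: smartctl -A /dev/sdX
# Column 2 is the attribute name, the last column is the raw value.
awk '$2 ~ /Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable/ {print $2, $NF}' /tmp/smart.sample
```

     If all three raw values stay at 0, the huge Raw_Read_Error_Rate number alone is usually not a reason to RMA.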
  3. Tried unplugging the network cable overnight, then reconnecting in the morning and checking the log for spin-ups? That way you can rule out devices on your network waking the drives. Just for the record: since I replaced my failing HBA, my drives have stopped spinning up randomly.
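     One way to make that overnight comparison concrete is to bucket the spin-up evidence in the syslog by hour. A rough sketch, demonstrated on a canned excerpt; on a live server you would point it at /var/log/syslog, assuming (as on my box) that emhttpd logs a "read SMART" line when a spun-down disk wakes:

```shell
# Canned syslog excerpt standing in for /var/log/syslog.
cat <<'EOF' > /tmp/syslog.sample
Jan 12 03:02:13 NAS emhttpd: read SMART /dev/sde
Jan 12 03:02:32 NAS emhttpd: read SMART /dev/sdd
Jan 12 04:01:19 NAS emhttpd: spinning down /dev/sdd
Jan 12 04:07:29 NAS emhttpd: read SMART /dev/sde
EOF
# "read SMART" marks a wake-up; bucket those events by hour of day so the
# cable-unplugged night can be compared against a normal night.
grep 'read SMART' /tmp/syslog.sample | awk '{split($3, t, ":"); print t[1] ":00"}' | sort | uniq -c
```

     If the wake-ups vanish during the unplugged window, something on the LAN is the trigger; if they continue, look at local hardware or plugins instead.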
  4. Yeah, mine was broken; I even got random disconnects of entire disks later on. RMA in progress. In the meantime the SAS2008 is working fine with no further issues.
  5. My HBA is 7 months old, and it started to fail shortly after I upgraded to RC2. So just because your hardware is new does not mean it is okay. The only way to make sure that RC2 is to blame is to downgrade.
  6. Increased CPU load every 10 seconds!? Intel Xeon E5-2620 v3; my Unraid is bored 90% of the day. There are no dockers, no VMs, and no one on the LAN accesses the shares. With 6.8.3, prior to upgrading to RC2, the CPU load was mostly at 0-1%, with an occasional spike to 5% on a core. Now with RC2 I see multiple CPU cores spike to 15% every 10 seconds (all array disks spun down, no one using the server for anything). nas-diagnostics-20210122-0730.zip
  7. If the issue goes away when you downgrade to the latest stable, then yeah - it would appear that RC2 is the problem. But just because you don't use the same HBA as I do does not mean that it is software related. In my case the HBA began to die; that is what caused my issue. Your onboard SATA controller can malfunction just as my HBA did. So unless downgrading Unraid to the latest stable fixes your issue, you cannot rule out a hardware fault. In my case the issue started after the upgrade, which is why I too initially thought RC2 was to blame, when it was actually the HBA.
  8. In my case it was caused by the LSI 3008 controller, which began to act up in other ways as well. This issue was, in my case, just one more symptom of the HBA failing. If you use plugins, VMs or dockers, then you should try disabling all of them and see if the disks stay spun down.
  9. Changed Status to Closed. I am no longer able to reproduce the issue since I replaced my potentially failing LSI 3008 with an LSI 2008.
  10. I had to replace my LSI 3008 HBA with an LSI 2008 HBA because I suspected the 3008 was failing. Since I did that, I can no longer reproduce this issue. Disks remain spun down now!
  11. I have not tried that. I only have my array disks, a cache SSD, and one disk I share through Unassigned Devices for backups. That entry in the log just keeps showing up throughout the day.
  12. Anyone else getting this error in their log?
  13. Already changed the HBA and all cables. I still get this error in the log several times throughout the day. I do not recall getting this error prior to the upgrade to 6.9.2 RC2. What does it even mean? What is this task it tries to abort? *edit* Recently replaced the LSI 3008 with an LSI 2008 (including the cables); this error message still shows up.
      Jan 18 07:23:17 NAS kernel: sd 3:0:4:0: attempting task abort!scmd(0x000000006d823564), outstanding for 15380 ms & timeout 15000 ms
      Jan 18 07:23:17 NAS kernel: sd 3:0:4:0: [sdf] tag#789 CDB: opcode=0x85 85 06 20 00 00 00 00 00 00 00 00 00 00 40 e5 00
      Jan 18 07:23:17 NAS kernel: scsi target3:0:4: handle(0x000d), sas_address(0x4433221106000000), phy(6)
      Jan 18 07:23:17 NAS kernel: scsi target3:0:4: enclosure logical id(0x590b11c05321f300), slot(5)
      Jan 18 07:23:20 NAS kernel: sd 3:0:4:0: task abort: SUCCESS scmd(0x000000006d823564)
      Jan 18 07:23:20 NAS kernel: sd 3:0:4:0: Power-on or device reset occurred
      Jan 18 07:23:20 NAS emhttpd: read SMART /dev/sde
      Jan 18 07:23:20 NAS emhttpd: read SMART /dev/sdf
      Jan 18 07:23:37 NAS kernel: sd 3:0:5:0: attempting task abort!scmd(0x00000000191fe81f), outstanding for 15232 ms & timeout 15000 ms
      Jan 18 07:23:37 NAS kernel: sd 3:0:5:0: [sdg] tag#787 CDB: opcode=0x85 85 06 20 00 00 00 00 00 00 00 00 00 00 40 e5 00
      Jan 18 07:23:37 NAS kernel: scsi target3:0:5: handle(0x000e), sas_address(0x4433221105000000), phy(5)
      Jan 18 07:23:37 NAS kernel: scsi target3:0:5: enclosure logical id(0x590b11c05321f300), slot(6)
      Jan 18 07:23:39 NAS kernel: sd 3:0:5:0: task abort: SUCCESS scmd(0x00000000191fe81f)
      Jan 18 07:23:39 NAS emhttpd: read SMART /dev/sdg
      Jan 18 07:23:39 NAS emhttpd: read SMART /dev/sdb
      Jan 18 07:36:08 NAS kernel: sd 3:0:4:0: Power-on or device reset occurred
      nas-diagnostics-20210118-0830.zip
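     For what it's worth, opcode 0x85 in that CDB appears to be ATA PASS-THROUGH(16), the wrapper through which SMART and power-state queries reach a SATA disk behind a SAS HBA, so the "task" being aborted is most likely a status poll that the drive or controller failed to answer within the 15 s timeout. To see whether the timeouts cluster on one SCSI target or spread across the whole HBA, you can tally them; a sketch over sample lines modeled on the log above (the third line is invented for the demo):

```shell
# Tally "attempting task abort!" events per SCSI target (field 7 of the
# kernel line, e.g. "3:0:4:0:"). Sample data modeled on the report's log.
cat <<'EOF' > /tmp/abort.sample
Jan 18 07:23:17 NAS kernel: sd 3:0:4:0: attempting task abort!scmd(0x000000006d823564), outstanding for 15380 ms & timeout 15000 ms
Jan 18 07:23:37 NAS kernel: sd 3:0:5:0: attempting task abort!scmd(0x00000000191fe81f), outstanding for 15232 ms & timeout 15000 ms
Jan 18 07:36:08 NAS kernel: sd 3:0:4:0: attempting task abort!scmd(0x00000000deadbeef), outstanding for 15100 ms & timeout 15000 ms
EOF
# On a live system: grep 'attempting task abort' /var/log/syslog | ...
grep 'attempting task abort' /tmp/abort.sample | awk '{sub(/:$/, "", $7); print $7}' | sort | uniq -c
```

     Aborts confined to one target point at that drive or its cable; aborts across many targets point at the HBA itself, which matches how this turned out for me.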
  14. So if I get this one, does it come with IT mode, not IR mode? https://www.broadcom.com/products/storage/host-bus-adapters/sas-9300-8i
  15. Ah, so if I had not bought an LSI SAS 3008 and instead gone straight for a 9300-8i, I would not have had to flash the IT BIOS? Cool, great to know! So you would still go for a 9300-8i in 2021, right?
  16. I am building an Unraid system for a friend and was wondering which HBA to choose. It has to support (up to) 7 HDDs and 1 SSD (cache). If possible I'd like to avoid the stressful experience I had with switching/flashing my SAS 9300-8i to IT mode 😅
  17. After the update to RC2 everything seemed fine, but then I started to notice that Unraid would wake up disks even when data was copied to the *cache*, not the array. I also noticed that throughout the day some disks get spun up for no apparent reason - at times when everyone is asleep. There are no VMs or dockers; Unraid is used as a plain NAS.
  18. The same 2 disks (sdb is the parity disk) seem to get spun up to read SMART (?) throughout the day. The times below are AM; everyone was asleep, and no one accessed the NAS. There are no VMs and no dockers in Unraid; I use it as a "simple" NAS. Fusion-MPT 12GSAS SAS3008 PCI-Express in IT mode.
      The following plugins are installed: CA User Scripts, Community Applications, Dynamix Cache Dirs, Dynamix Schedules, Dynamix SSD Trim, openVMTools_compiled, Recycle Bin, Tips and Tweaks, Unassigned Devices, Unassigned Devices Plus (Addon)
      Jan 12 03:02:13 NAS emhttpd: read SMART /dev/sde
      Jan 12 03:02:32 NAS emhttpd: read SMART /dev/sdd
      Jan 12 04:01:19 NAS emhttpd: spinning down /dev/sdd
      Jan 12 04:01:21 NAS emhttpd: spinning down /dev/sde
      Jan 12 04:07:29 NAS emhttpd: read SMART /dev/sde
      Jan 12 04:27:05 NAS emhttpd: read SMART /dev/sdd
      Jan 12 05:31:27 NAS emhttpd: spinning down /dev/sdd
      Jan 12 05:31:27 NAS emhttpd: spinning down /dev/sde
  19. I just have the spindown delay set to 30 minutes. That works fine for me using an LSI SAS 2008 in IT mode.
  20. Generally speaking, disks spin up/down fine for me in RC2, but there is one use case where I have noticed unnecessary spin-ups. Steps to reproduce:
      1. Create a share which is set to "cache only".
      2. Wait for Unraid to spin down all disks.
      3. Access that "cache only" share (in my case from a Windows 10 PC where it is mapped as a network drive).
      4. Copy a file to that "cache only" share (while all other array disks are spun down!).
      Expected behaviour: the file gets copied to the cache drive and all array disks stay spun down.
      Result: (some) disks spin up, and the log shows "read SMART" entries for those array disks.
      Jan 6 05:53:07 NAS emhttpd: read SMART /dev/sde
      Jan 6 05:53:26 NAS emhttpd: read SMART /dev/sdd
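     While running the steps above, you can watch the log live for spin-up evidence. A minimal helper, assuming the standard /var/log/syslog location and the emhttpd message formats shown in this report; demonstrated here on a canned excerpt:

```shell
# Keep only spin-related emhttpd lines from a log stream (stdin or files).
filter_spin_events() {
  grep --line-buffered -E 'emhttpd: (read SMART|spinning (up|down))' "$@"
}

# Live use while repeating the copy test (assumes Unraid's default log path):
#   tail -f /var/log/syslog | filter_spin_events
# Demo on a canned excerpt - only the two emhttpd lines should pass through:
filter_spin_events <<'EOF'
Jan  6 05:53:07 NAS emhttpd: read SMART /dev/sde
Jan  6 05:53:26 NAS emhttpd: read SMART /dev/sdd
Jan  6 05:54:00 NAS sshd[1234]: Accepted publickey for root
EOF
```

     Any "read SMART" line that appears during the cache-only copy is an array disk waking that should have stayed asleep.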
  21. Thank you! I was only looking at the log next to the disk in UD, which did not show anything. Looking at the correct log, I guess there is indeed something wrong with that partition. I did shut down the PC as usual, though, before I removed it. Well... let's investigate.
      Dec 30 22:17:36 NAS unassigned.devices: Mount warning: The disk contains an unclean file system (0, 0). Metadata kept in Windows cache, refused to mount. Falling back to read-only mount because the NTFS partition is in an unsafe state. Please resume and shutdown Windows fully (no hibernation or fast restarting.)
  22. RC2 with 2020.12.19. For a long time I used UD to share a 6TB drive, which was previously used in a Windows PC and only contains a single NTFS partition. That drive/SMB share still works nicely! 👍 However, today I added a 512GB SSD (using the same settings in UD as the 6TB drive). When I access that share from a Windows PC, it tells me that it is "write protected"!? Even though "read only" is not enabled in UD. The settings are identical to the 6TB drive, and the same SMB user is used as for the 6TB drive. Another thing I just noticed is that the "Change Disk UUID" dropdown list in the UD settings is empty.
  23. Upgraded my Unraid server (which runs inside ESXi) to RC2 about a day ago. Everything is working nicely so far, including spin-down/up of disks. (Note: I do not use any dockers or VMs in Unraid; it is a pretty simple setup with an LSI 2008 in IT mode, a 5x 8TB HDD array, 1x 500GB cache SSD, and 1x 6TB share via Unassigned Devices.)
  24. Thx, I just read your reply in the 6.9 RC1 thread! But I guess I will wait for RC2, which fixes some spin-up/down issues remaining in RC1?