thingie2

Members
  • Content Count: 22
  • Joined
  • Last visited

Community Reputation
  1 Neutral

About thingie2
  • Rank: Newbie

  1. I can try that; however, of the 6 data drives in my server, I only have 2 different models of drive (4x 4TB WD SE drives & 2x 8TB WD Red drives). The drive that keeps coming up with an issue is one of the 4TB drives, and I also have another drive of that model on the same expansion card. If it were a compatibility issue with the controller, wouldn't the other drive of the same type on the same controller have the same problem (or am I oversimplifying)? I think I'm going to wait until (if) the drive fails again, so I can determine whether the change of PWR cable has any effect.
  2. Interestingly, I've just updated the battery date & it's now showing an updated runtime of ~50 mins with a 12% load. Looks like that's sorted it (a rough apctest sketch of the steps is included after this list). It seems a little strange that this is needed, but I guess it makes sense: the UPS knows a battery won't get any better as it ages, so it won't report an improved runtime unless it sees the pack as new.
  3. Hmm, didn't think of changing the battery date etc. I'll try that & hopefully it'll work.
  4. I set one to run overnight last night. The test has completed successfully; see the attached log. tower-smart-20210102-1200.zip
  5. I've got an APC Back-UPS XS 1400U UPS that I have had running for a few years now. Over the last few months I've seen very low available runtimes (it used to report ~90 mins, now it's <10 mins), so I decided the batteries were probably dead & needed replacing. I have replaced the batteries & the reported runtime left has not changed. I have run an automatic calibration (with the new batteries) using the apctest command; the % remaining it reported seemed to approximately match the time displayed (circa 10 mins). I have run a manual calibration (disconnected APC from
  6. I'm thinking more & more that it's an issue with the drive. I've just had another look through the logs after noticing something at machine startup today. I get the following error during boot: ata8: link is slow to respond, please be patient (ready=0) Then the following errors just prior to the disk errors yesterday: Dec 31 14:54:54 Tower kernel: ata8.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen Dec 31 14:54:54 Tower kernel: ata8.00: failed command: FLUSH CACHE EXT Dec 31 14:54:54 Tower kernel: ata8.00: cmd ea/00:00:00:00:00/00:00:00:00:00/a0 tag 22 Dec 31 14:54:
  7. I've just finished checking connections & changing the PWR cable for the drive. All connections seemed fine. Here's the SMART report for the drive now that it's available; it doesn't look like there's anything to worry about in it (a smartctl sketch for pulling the report is included after this list). tower-smart-20210101-1352.zip
  8. I thought I read somewhere that if a drive is disabled, you can't get the SMART report for that drive until the array has been restarted, which is why I didn't worry about there not being a SMART report in that log. I'm going to open it up, check connections & swap the HDD PWR cable today; it just seems odd to me that, if it is a loose connection, it's the same drive coming loose every time.
  9. So, a follow-up on this. I've tried swapping SATA cables & changing the SATA port the drive is connected to, but I've still had errors a couple of times since trying these. The only thing I haven't tried yet is a different PWR cable; there are other drives on the same chain, so I think it's unlikely to be that, but I'm going to try to determine for sure. I've attached the diagnostics for the last couple of times it's had the problem. Do these give any further idea of what might be the issue? tower-diagnostics-20201231-2033.zip tower-diagnostics-20201227-1150.zip
  10. Thanks, that's what I was hoping, but wasn't sure how to determine/check. I'll add the drive back into the array & rebuild, then swap the cables round/replace with others & hopefully that'll prevent it happening again.
  11. Yesterday morning I had one of my drives get disabled due to errors. How can I determine the cause of the errors, and whether this means there's an issue with the drive or an issue elsewhere? The disabled drive is fairly old, so I wouldn't be surprised if it's on its way out, but I want to be sure before doing something about it. I've attached my diagnostics zip, but it's missing the SMART data for the disabled drive (due to it being disabled). I've now uploaded the SMART result from the failed drive after a restart. The SMART test has passed. tower-diagn
  12. For anyone else who finds this with similar issues, I found a solution. The problem comes down to the operations that the rclone filesystem supports. Once I realised this and could look into rclone's capabilities a bit more, I found the vfs-cache-mode parameter. I've since set this to write, and it has resolved my issues (an example mount command is included after this list).
  13. Did you ever get this resolved? I'm having a similar issue, but with OneDrive. It's finding the folder fine and loading some .tmp files in there (up to a few 100 kB), but then throwing up sync errors (both illegal seek & operation not permitted). An example of the errors: 2020-07-19 21:36:57 Puller (folder "5870" (5870), item "DSC_3452.JPG"): truncate /onedrive/pictures/.syncthing.DSC_3452.JPG.tmp: operation not permitted 2020-07-19 21:37:00 Puller (folder "5870" (5870), item "DSC_3453.JPG"): save: write /onedrive/pictures/.syncthing.DSC_3453.JPG.tmp: illegal
  14. I think I spoke too soon! The folder is being seen with the correct size, and I can access it fine from everywhere other than syncthing... It seems to be a permissions issue, but I think it's on the syncthing side, rather than rclone. Time to head over to the syncthing docker support topic & hope someone can help there.
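
The UPS runtime fix mentioned in item 2 (and the calibrations in item 5) go through apcupsd's apctest utility. Below is a rough sketch of the steps on an Unraid/Slackware-style install; the service path is assumed, and the exact menu wording and numbering vary by UPS driver and firmware, so treat it as illustrative only.

    /etc/rc.d/rc.apcupsd stop   # apctest needs exclusive access to the UPS (path assumed for Unraid)
    apctest                     # interactive menu follows
    # - pick the "View/Change battery date" entry and enter today's date, so the
    #   UPS treats the pack as new and re-learns its runtime estimate
    # - optionally pick "Perform battery calibration" (the UPS needs a reasonable
    #   load attached and will run the battery down once)
    /etc/rc.d/rc.apcupsd start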
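For the SMART reports attached in items 4, 7, and 11, commands along these lines would produce them once the disabled disk is visible again; the device name /dev/sdf is only a placeholder for whichever device the drive comes back as.

    smartctl -a /dev/sdf            # full SMART attributes plus the self-test log
    smartctl -t long /dev/sdf       # start the extended (overnight) self-test
    smartctl -l selftest /dev/sdf   # check the result once the test finishes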
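The rclone fix described in item 12 amounts to mounting the remote with the VFS write cache enabled. A minimal sketch is below; the remote name (onedrive:) and mount point are guesses based on the paths quoted in the OneDrive sync-error post, so substitute your own.

    # Buffer writes locally so operations like truncate & seek work on the mount,
    # and allow other users (e.g. a syncthing container) to access it.
    rclone mount onedrive:pictures /onedrive/pictures \
        --vfs-cache-mode writes \
        --allow-other \
        --daemon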