LEKO's Achievements
  1. @Squid : Oh! Thanks. I never noticed those in the past. But the warranty period/field is "fixed" and not tweakable. I'll probably have to work around that to have the real end-of-support date shown.
  2. Mostly a quality-of-life improvement. It would be nice to add warranty information related to our disks directly in the Unraid GUI. Nothing complex/fancy:
     1. Date bought field.
     2. Warranty expiration date field.
     3. Bought from field.
     4. Order ID/number field.
     5. Attach an order file.
     All these fields could be optional. At the moment, I use the Comments field to keep track of the warranty expiration date. Having this information readily available could help when one of our drives has an issue, or when we want to replace an existing drive: replacing the oldest, or the one(s) whose warranty expired first.
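Until fields like these exist in the GUI, the idea can be sketched in a few lines: keep the warranty data in a small table and sort out the drives whose warranty has already expired, oldest expiry first, to pick replacement candidates. All drive names, dates, shops, and order IDs below are made-up examples, not real data.

```python
# Minimal sketch of the requested warranty fields, kept as a table.
# Every value here is a made-up example.
from datetime import date

drives = [
    # (disk, date_bought, warranty_end, bought_from, order_id)
    ("parity", date(2021, 10, 1), date(2024, 10, 1), "ShopA", "A-1001"),
    ("disk1",  date(2017, 3, 15), date(2019, 3, 15), "ShopB", "B-0042"),
    ("disk2",  date(2013, 6, 2),  date(2015, 6, 2),  "ShopB", "B-0007"),
]

today = date(2021, 11, 1)  # fixed date so the example is reproducible

# Drives already out of warranty, oldest expiry first.
expired = sorted((d for d in drives if d[2] < today), key=lambda d: d[2])
for name, _, end, *_ in expired:
    print(f"{name}: warranty expired {end}")
```

Sorting by warranty end date directly gives the "replace this one first" ordering the post asks for.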
  3. ATA Error Count: 1
     CR = Command Register [HEX]
     FR = Features Register [HEX]
     SC = Sector Count Register [HEX]
     SN = Sector Number Register [HEX]
     CL = Cylinder Low Register [HEX]
     CH = Cylinder High Register [HEX]
     DH = Device/Head Register [HEX]
     DC = Device Command Register [HEX]
     ER = Error register [HEX]
     ST = Status register [HEX]
     Powered_Up_Time is measured from power on, and printed as DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes, SS=sec, and sss=millisec. It "wraps" after 49.710 days.

     Error 1 occurred at disk power-on lifetime: 25635 hours (1068 days + 3 hours)
     When the command that caused the error occurred, the device was active or idle.
     After command completion occurred, registers were:
     ER ST SC SN CL CH DH
     -- -- -- -- -- -- --
     40 53 00 ff ff ff 0f  Error: UNC at LBA = 0x0fffffff = 268435455

     Commands leading to the command that caused the error were:
     CR FR SC SN CL CH DH DC  Powered_Up_Time   Command/Feature_Name
     -- -- -- -- -- -- -- --  ----------------  --------------------
     60 00 00 ff ff ff 4f 00  20d+10:15:07.577  READ FPDMA QUEUED
     60 00 40 ff ff ff 4f 00  20d+10:15:07.577  READ FPDMA QUEUED
     60 00 40 ff ff ff 4f 00  20d+10:15:07.577  READ FPDMA QUEUED
     60 00 40 ff ff ff 4f 00  20d+10:15:07.577  READ FPDMA QUEUED
     60 00 40 ff ff ff 4f 00  20d+10:15:07.577  READ FPDMA QUEUED

     ^^^^ I get this odd error message on one of my drives, and Unraid keeps telling me the drive has an "error". I tried to find a bit more detail on that specific SMART log, but I can't find anything obvious. I reseated all SATA cables and did a couple of power cycles over the last few days, but the error seems to "stick", so I just acknowledged it in the Unraid GUI. In the past, when I experienced SMART errors on an HDD it always meant the drive was dying, and there were symptoms like slow reads or odd errors reported by the OS. At the moment I'm clueless, because the drive performs well and gives no sign of being unreliable.

     Drive info: Seagate BarraCuda 3.5 (CMR) ST3000DM008-2DM166, Power-on hours: 34122 (3y, 10m, 21d, 18h), Head-flying hours: 14990h+46m+24.279s
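One detail worth noticing in a log like this: the reported LBA 0x0fffffff, together with the all-0xff register values, is the largest value a 28-bit LBA field can hold, so it may well be a saturated placeholder rather than the address of a real failing sector. A small sketch checking that arithmetic, plus the power-on-lifetime conversion from the log:

```python
# Decode two values from the SMART error log above.

# The error-log registers only expose a 28-bit LBA; 0x0fffffff is the
# largest value that field can represent, which suggests a saturated
# placeholder rather than the real failing sector.
lba = 0x0FFFFFFF
print(lba)                 # matches "UNC at LBA = 0x0fffffff = 268435455"
print(lba == (1 << 28) - 1)  # i.e. the maximum 28-bit LBA

# Sanity-check the power-on lifetime reported for the error:
days, rem = divmod(25635, 24)
print(days, rem)           # 1068 days + 3 hours, as printed in the log
```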
  4. I stumbled upon this thread because my parity drive now has a Current Pending Sector count of 2 and I could not find any meaningful information about this particular S.M.A.R.T. metric. Based on your post @John_M, it would be nice to have a S.M.A.R.T. counters reference document/post with tips on how to fix or work around those errors. Based on what you said, I'll run GRC's SpinRite at Level 2 on my parity drive to force a read/write and hopefully clear this pending sector counter. Oddly enough, my parity drive is one of the newest drives in my array. My other drive reports 1 uncorrected error; I'll probably also do a pass with SpinRite to ensure there is no other underlying problem. tower-smart-20211029-0736.zip tower-smart-20211029-0734.zip
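For keeping an eye on this counter between SpinRite passes, the raw value can be pulled out of `smartctl -A` output with a few lines of parsing. The sample text below is a trimmed, hypothetical excerpt of that output, not taken from the attached reports:

```python
# Sketch: extract the Current_Pending_Sector raw value from
# `smartctl -A /dev/sdX` output. `sample` is a hypothetical excerpt.
sample = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       2
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
"""

def pending_sectors(smart_output: str) -> int:
    """Return the raw Current_Pending_Sector count, or 0 if absent."""
    for line in smart_output.splitlines():
        fields = line.split()
        if len(fields) >= 10 and fields[1] == "Current_Pending_Sector":
            return int(fields[-1])  # RAW_VALUE is the last column
    return 0

print(pending_sectors(sample))  # 2
```

A value that drops back to 0 after a full read/write pass means the sectors were remapped or were transient; a value that keeps climbing is the worrying case.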
  5. I used to pay for Genie Timeline as simple backup software for Windows. Any recommendations for good Windows software that would simply dump a copy of my local PC's important stuff onto my Unraid array auto-magically? I'm even considering some "home-brewed" solutions, but a simple one (read: install and forget) would be my preferred choice.
  6. That's right. A one-click way to remove a disk from an array. This is exactly what I want to do with the oldest and smallest disk in my array, which has started to log errors. I don't want to replace it, I just want it gone. It's a 640Gb drive while the others are 3Tb+.
  7. /Bump This is the major thing I find missing from Unraid: a "trend/projection" graph based on capacity available vs. used vs. rate of usage. It would be nice to have 1-week, 1-month, etc. trends. At the moment, I often find myself logging in to Unraid just to check how much capacity is still available, and I "guess" my rate of usage instead of just looking at trends. This addition would be awesome to help plan disk replacements, array size increases, etc.
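The projection part of this request is just a linear fit: sample array usage periodically, fit a line through the samples, and extrapolate to full capacity. A minimal sketch with made-up weekly snapshots (day number, TB used):

```python
# Sketch of the requested trend/projection: least-squares line through
# (day, used_tb) samples, extrapolated to the array's capacity.
# The sample data is made up.
def days_until_full(samples, capacity_tb):
    """samples: list of (day, used_tb). Returns days from the last
    sample until usage reaches capacity, at the fitted rate."""
    n = len(samples)
    mean_x = sum(x for x, _ in samples) / n
    mean_y = sum(y for _, y in samples) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in samples)
             / sum((x - mean_x) ** 2 for x, _ in samples))  # TB per day
    last_x, last_y = samples[-1]
    return (capacity_tb - last_y) / slope

usage = [(0, 4.0), (7, 4.2), (14, 4.4), (21, 4.6)]  # weekly snapshots
print(round(days_until_full(usage, 6.0)))  # days left on a 6 TB array
```

Feeding this from a scheduled job that logs the array's used space once a day would give exactly the "buy HDDs in advance" warning described above.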
  8. Thanks for the advice. If the error persists, I'll have good next steps and things to look into. I'll double-check the checksum thing; I think I have it set up, but I'm not 100% sure. As for frequency, I'm OK with doing it once a week. I use Unraid purely for backup, so I don't mind "exercising" the CPU/drives/memory once a week.
  9. Is there any easy way to get a bit more detail and context on the error(s) found in Unraid? In the screenshot (attached to this post), I noticed I had a parity error a couple of days ago, but I don't know exactly how to diagnose/interpret it. My guess at the moment is this:
     1. The parity check ran.
     2. An error was found and probably fixed (how can I know/tell for sure?).
     3. The notification is reporting that.
     4. The next parity check will run (next week).
     5. If no errors are found, the message will change to "no error found" (or something similar).
     But what if I would like to find out more about that error? What if parity errors always come from the same drive? What if it's "never" fixed and/or I start to run into bad sectors? Basically, I would like to know how I could interpret error messages and logs and make them actionable if necessary. Any tips/ideas?
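One way to make such errors actionable is to scan the syslog for parity-related lines and keep the sector numbers, so repeats on the same sectors across checks stand out. The log lines below are hypothetical examples written for this sketch; actual Unraid wording may differ:

```python
# Sketch: collect corrected-sector numbers from parity-check log lines.
# The syslog excerpt below is hypothetical, not guaranteed Unraid wording.
import re

syslog = """\
Oct 24 02:00:01 Tower kernel: md: recovery thread: check P ...
Oct 24 03:12:44 Tower kernel: md: recovery thread: P corrected, sector=123456
Oct 24 06:00:09 Tower kernel: md: sync done. time=14408sec
"""

corrected = re.findall(r"P corrected, sector=(\d+)", syslog)
print(corrected)  # the same sector recurring across checks hints at a real problem
```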
  10. I tried to find the data in the current software release and I don't see it. I also did not find this information/data in the plugins I'm currently using. It would be nice to have a prediction/trend graph of our array usage: a graph over time of the total array size and percentage used, with a "trend" line. The idea would be to preemptively resize the array. If we saw that the array would fill up (at the current rate) in a couple of weeks, we could buy HDDs in advance and plan a maintenance window to resize the array. That would be a nice and simple addition; I feel like it's something that is missing from Unraid. There is not a lot of data/graphing at the moment... or the information is not readily available. Still loving my Unraid setup: the perfect "install and forget" SAN solution for me.
  11. I'm glad to hear that the software I just purchased is well maintained. Thanks for the quick patch and your full and immediate disclosure.
  12. I'll look into those. Thanks for the tip and the welcome message. I must admit that my professional IT experience helps me a lot. I don't directly manage storage at work, but I work closely with the admins that manage petabytes of SSD SAN. Those rigs are not really comparable to Unraid.
  13. First, I'll introduce myself. I've been working in the IT/telecom industry for almost 25 years now. I worked for an ISP/carrier for almost 20 years and moved to the AAA gaming industry almost 4 years ago. My primary focus is networking at large scale: BGP, MPLS, big data center fabrics, etc. My official title is Network Administrator, but I wear many hats: DevOps (automation through Ansible, Python), network engineer and architect for many projects, and still doing ops/maintenance/deployment.

      I spend so much time at work on big problems/challenges that I don't want to waste any time at home managing my devices. Having 3 children and a ton of connected devices/PCs, I started to feel the urge to "upscale" my data storage/backup solution. I used to have an (old) LaCie Wireless Space (1Tb NAS/router), but it was getting old and only supported SMB 1.0, which is now deprecated in most modern OSes. Time to move on!

      I looked into multiple potential solutions and ended up choosing Unraid for a few main reasons:
      - Simplicity of installation/operations.
      - Ease of upgrading/scaling the array.
      - Based on Linux; works on "off-the-shelf" x86 hardware.

      So, my journey to Unraid started with the upgrade of my main PC from an Intel i7-3770 to a Ryzen 5 3600 based system. Then, I scavenged all the SATA HDDs I could find at home, ranging from 640Gb to 3Tb. During the Unraid first run, I found that 3 of my HDDs were dead or almost dead (thanks to Unraid's very good reporting/logging).

      So here is my current Unraid setup:
      CPU: Intel i7-3770
      Mobo: Gigabyte P67A-D3-B3
      Memory: 12Gb DDR3 (2x2Gb + 2x4Gb)
      GPU: GeForce GT 710 (only used for initial setup; will I ever use it in the future?)
      Boot USB stick: 32Gb (no-brand: I know, it will probably die soon; I have 5-6 other sticks ready to take over)

      Current storage:
      Parity: Seagate 3Tb
      Disk1: WD 1Tb
      Disk2: WD 640Gb (I don't trust it... Unraid reported a couple of SMART errors)
      Disk3: Seagate 1Tb 3.5 inches (too many errors; it was slowing the Unraid array by a lot)
      Disk4: Seagate 1Tb 2.5 inches (dead, won't spin up)

      I will receive a WD Purple 4Tb by the end of the day, so I'll do a bit of shuffling:
      Parity: WD Purple 4Tb
      Disk1: WD 1Tb
      Disk2: WD 640Gb (I don't trust it... Unraid reported a couple of SMART errors)
      Disk3: Seagate 3Tb (old parity disk)

      I want to keep the 640Gb in the array to "experiment" with an old, soon-to-die drive. My Unraid array is used for backup only; no mission-critical data and/or applications depend on it. So far, I have had a good experience with Unraid; the learning curve is not too steep. Some stuff would probably benefit from some more "newbie friendly" attention, but overall I feel the platform is solid.

      Side note: to have some fun, I spun up a CentOS 8 VM to check how it would behave and to learn a bit about VM management with Unraid. It is straightforward; I ran into some issues, but I think they are mostly related to CentOS, not Unraid.

      TL;DR: Unraid team, you did a good job!
  14. Feb 8 15:55:49 Tower bunker: warning: SHA256 hash key mismatch (updated), /mnt/disk1/domains/CentOS/vdisk1.img was modified
      Feb 8 15:58:50 Tower bunker: error: SHA256 hash key mismatch, /mnt/disk1/system/docker/docker.img is corrupted
      Feb 8 15:58:55 Tower bunker: warning: SHA256 hash key mismatch (updated), /mnt/disk1/system/libvirt/libvirt.img was modified
      Feb 8 15:58:55 Tower bunker: warning: 2 corrections made, export file needs to be updated
      ^^^^ I just got these logs. On first check, I thought I had a problem. Then I noticed it is normal for these hashes to mismatch, because VM image files change quite a bit. Would it be possible to change the default behaviour of this plugin to not check the integrity of those files? At the least, users should be warned that checking the integrity of all files can generate false positives.
  15. Same problem on my freshly installed Unraid. I can't install this plugin because of the MD5 hash issue.