engin33rh3r3

Everything posted by engin33rh3r3

  1. I am having the same issue. How did you point it to sync to the correct location instead of the appdata folder?
  2. Does anyone have any good tutorials for this? I can't get my receiving server to place the files in the correct location; for some reason it is putting them in the appdata/syncthing folder.
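     A minimal sketch of the kind of path mapping that usually explains this behavior, assuming a linuxserver-style Syncthing container on Unraid (the /mnt/user/sync share and the /data mapping are illustrative, not from the post): if a Syncthing folder's path points inside the /config volume, files land under appdata; it has to point at a container path that is mapped to the intended share.
        # Container path mappings (hypothetical share names):
        #   /config -> /mnt/user/appdata/syncthing   (Syncthing's own config/database)
        #   /data   -> /mnt/user/sync                (where synced files should land)
        docker run -d --name=syncthing \
          -v /mnt/user/appdata/syncthing:/config \
          -v /mnt/user/sync:/data \
          -p 8384:8384 -p 22000:22000 \
          lscr.io/linuxserver/syncthing
        # In the Syncthing web UI, set each shared folder's path to /data/<folder>
        # rather than a path under the config directory, so received files are
        # written to the mapped share instead of the appdata volume.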
  3. If you were to build this today, what would you change? I built a 14900K with 128GB DDR5 ECC 5400 MHz on an ASUS W680-ACE IPMI, but it leaves me without a good way to add U.2 SSDs. I'm finding U.2 SSDs are about half the price of M.2, can endure roughly 10x more writes, and have PLP. My other option would be to shell out big on a bifurcating PCIe card. Thoughts? Running about 20 mechanical HDDs and a couple of 8TB Samsung QVOs, and I have a handful of U.2 drives I'd like to use so I can move away from SATA and M.2 drives.
  4. What did you end up settling on? I built a 14900K with 128GB DDR5 5600 ECC, but I don't have enough lanes to run all the U.2 SSDs. Now looking at Xeon, AMD, or a bifurcation card for the 14900K.
  5. Looking to have 20 hard drives, a couple of SATA SSDs, and possibly up to four U.2 or U.3 drives. I have a Supermicro 20-bay server with dual Xeons running now and I hate how loud it is. I also have two TS440s, but they don't hold enough drives for what I want to do. Recommendations? I considered a Meshify 2 XL, but the thought of losing the hot-swap drive caddies for easily locating failing drives seems like a nightmare. I do have an unused Lenovo SA120 I might be able to use to augment a smaller chassis too. Also, what HBA card is recommended? The 9200 series is quite old, and I understand the 9300 series is power hungry. Would something like a SAS3224-24I be more modern and able to handle my needs?
  6. Any updates on this board? I was set on the ASUS Pro WS W680-ACE, but after reading this I am not sure.
  7. Have you completed this build yet? I am in a similar situation. Not sure if I want to use a 14600K, 14700K, or 14900K; I would likely undervolt it. I was going to go with a 14500 but saw they actually use more power than the 14600K due to voltage settings; I think the 14500 is based on the 12th-gen die. Looking at a W680-ACE mobo and 64GB DDR5 ECC to start.
  8. But your original post clearly indicated it was a hang on an Intel e1000e. I started having the same issue a few years back, and ethtool -K eth0 tso off seems to have fixed it. Very curious to understand more about what happened here.
  9. Thank you immensely to @smeehrrr for sharing this crucial workaround. For several years, I've been grappling with persistent network issues on my Unraid server, pushing me to the brink of frustration. Implementing the suggested command has finally brought relief. This problem began around 2018-2019, and my system had been functioning smoothly prior to that. The resolution provided here has been a game changer, and I'm deeply grateful for the shared knowledge and support. I'm curious about the underlying mechanics of this fix. Is there a more permanent solution that can be implemented? I'm keen to understand why this particular command was effective, especially considering the issue's persistence from 2018-2019 to the present day (Unraid version: 6.12.6). Here are some specifics of my setup for context:
     Server model: LENOVO ThinkServer TS440
     BIOS version: FBKTDIAUS (dated Thu 16 Sep 2021)
     Processor: Intel® Xeon® CPU E3-1275L v3 @ 2.70GHz
     Any additional insights or suggestions for a long-term resolution would be greatly appreciated!
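     A minimal sketch of one way to make that workaround persist across reboots, assuming the affected interface is eth0 (check which NIC actually logs the hangs) and using Unraid's /boot/config/go script, which runs at every boot:
        #!/bin/bash
        # /boot/config/go - Unraid executes this script at startup
        /usr/local/sbin/emhttp &

        # Workaround from this thread: disable TCP segmentation offload on the
        # e1000/e1000e NIC that logs "Detected Tx Unit Hang" (eth0 is an assumption).
        ethtool -K eth0 tso off
     After a reboot, ethtool -k eth0 | grep tcp-segmentation-offload should report the feature as off; whether this is the right long-term fix for the underlying driver issue is a separate question.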
  10. Anyone have an update on this? Looking to upgrade my SSDs but have concerns after reading about issues with newer Samsung SSDs.
  11. Update: Guess it was a permission issue that Unraid's 'permission tool' couldn't fix. I issued these commands via PuTTY.
     This fixed Plex:
     Stop Docker
     ls -la /mnt/user/appdata/PlexMediaServer (view permissions before)
     sudo chmod -R 777 /mnt/user/appdata/PlexMediaServer/
     (wait a couple of minutes, then check with)
     ls -la /mnt/user/appdata/PlexMediaServer (view permissions after)
     Start Docker and wait a couple of minutes before attempting to access Plex (it does some sort of cleanup routine).
     This fixed Tautulli:
     ls -la /mnt/user/appdata/tautulli (view permissions before)
     sudo chmod -R 777 /mnt/user/appdata/tautulli
     (wait a moment, then check with)
     ls -la /mnt/user/appdata/tautulli (view permissions after)
     Start Docker
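     The same steps, consolidated into a single sketch that could be run from the Unraid console (the container names 'plex' and 'tautulli' are assumptions; chmod -R 777 is the blunt workaround described above, not necessarily the permissions you want long-term):
        #!/bin/bash
        # Stop the containers first (names assumed to match this setup).
        docker stop plex tautulli

        for APPDIR in /mnt/user/appdata/PlexMediaServer /mnt/user/appdata/tautulli; do
            ls -la "$APPDIR"          # view permissions before
            chmod -R 777 "$APPDIR"    # loosen permissions, as in the workaround above
            ls -la "$APPDIR"          # view permissions after
        done

        # Restart the containers and give Plex a couple of minutes to finish its
        # startup/cleanup routine before trying to access it.
        docker start plex tautulli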
  12. Has anyone found a way other than deleting and re-installing? I was running fine, then updated to 6.10.3, and the Plex Docker stopped working. I have tried running the permissions tool with no luck. This is my Plex log:
     Jul 08, 2022 16:13:53.863 [0x147433d75b38] INFO - /usr/lib/plexmediaserver/Plex Media Server
     Jul 08, 2022 16:13:54.001 [0x14743755b0d0] INFO - Running migrations. (EPG 0)
     Jul 08, 2022 16:13:54.005 [0x14743755b0d0] INFO - Running forward migration 20211027132200.
     Jul 08, 2022 16:13:54.005 [0x14743755b0d0] ERROR - SQLITE3:0x80000001, 8, statement aborts at 17: [INSERT INTO schema_migrations (version) VALUES (20211027132200)] attempt to write a readonly database
     Jul 08, 2022 16:13:54.005 [0x14743755b0d0] ERROR - Exception inside transaction (inside=1) (/data/jenkins/server/3535212772/Library/DatabaseMigrations.cpp:295): sqlite3_statement_backend::loadOne: attempt to write a readonly database
     Jul 08, 2022 16:13:54.009 [0x14743755b0d0] ERROR - Exception thrown during migrations, aborting: sqlite3_statement_backend::loadOne: attempt to write a readonly database
     Jul 08, 2022 16:13:54.435 [0x14743755b0d0] ERROR - SQLITE3:0x80000001, 8, statement aborts at 23: [update activities set finished_at=started_at where finished_at is null] attempt to write a readonly database
     Jul 08, 2022 16:13:54.435 [0x14743755b0d0] ERROR - Database corruption: sqlite3_statement_backend::loadOne: attempt to write a readonly database
  13. I am having a similar issue and have even tried different USB drives, but no luck; same error.
  14. Thanks. After more research and talking to people, I am now stuck on the fence between Unraid and StableBit + SnapRAID. Each has its own pros and cons, and they are significant in their own ways. Almost everything has shown up now and I have made a few changes since I posted this. Now it will be:
     Supermicro 1U server X9DRI-LN4F+ with 2x E5-2680 v2, 32GB, 4x trays, rails
     164GB DDR3 ECC RAM
     6x 12TB WD Gold 7200 RPM
     1x 12TB Seagate Enterprise Capacity 7200 RPM
     24x 8TB Reds & White Label mix
     2x 500GB Samsung 860 SSDs (2x cache drives, dual parity)
     2x 2TB Micron
     2x Dell LSI SAS 9207-8i PCI-E 3.0
     Supermicro 846E16-R1200B 4U server, BPN-SAS2-846EL1, 24x trays
     I also got a Rosewill 4U server chassis (RSV-L4500) that I considered moving the 1U Supermicro into to give me more bays without using the SA120; I just don't know whether I want to use something like an HP 24-bay 3Gb SAS expander card with the drives in it, or add another two Dell LSI SAS 9207-8i PCI-E 3.0 cards. I am also assuming that I want the cache drives and parity drives connected directly to the motherboard and not to the expander or the 846E16-R1200B JBOD? Also, will the 846E16 not choke if I try to access too many of the 24 drives? I really like that StableBit + SnapRAID runs on Windows Server 2016 and that I am the one who controls when the redundancy information is updated. Also, Unraid can't identify silent data corruption. Then on top of that, Windows Server 2016 can now also run Windows Docker containers, support for Linux containers is currently in preview/coming soon, and Plex, the tool I am primarily building this for, now supports GPU hardware acceleration, which is significant... However, I really, really enjoy how reliable and straightforward Unraid has been for me for the past couple of years, and I feel that MANY people in IT overlook the value of this, especially with big data... I have never built a single FreeNAS/NAS4Free box or Windows Server that has given me anything even close to the order of magnitude of reliability my Unraid rig has given me... That is a big deal because of the size of the data set... Also, I don't feel like I have to keep re-learning things, and it just always works with almost no headaches... Any words of wisdom here? I have more time, effort, and money invested in this and would rather not mess it up lol...
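     On the silent-corruption point, a minimal sketch of one common mitigation on Unraid: build and periodically re-verify a checksum manifest for a share (the /mnt/user/media and /mnt/user/backups paths are placeholders, not from the post):
        #!/bin/bash
        # Build a checksum manifest for a share (paths are examples).
        cd /mnt/user/media || exit 1
        find . -type f -print0 | xargs -0 md5sum > /mnt/user/backups/media.md5

        # Later, verify the files against the manifest; any line reported as
        # FAILED indicates the file's contents changed (bit rot or modification).
        cd /mnt/user/media && md5sum -c /mnt/user/backups/media.md5 | grep -v ': OK$'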
  15. This is not a joke. I am an engineer for a Fortune 500 company and wanted to use my bonus to go balls out, especially since I am out of space and don't have a backup for my original Unraid/Plex server.
  16. I was going to use an SA120 + a 12-bay Norco, but the SA120 is so loud and it still wasn't enough space. Now I am looking for a chassis to support this many drives, as well as a CPU + RAM + mobo combo. My primary Unraid server is a 14x 8TB TS440 with extra drive caddies, running 32GB ECC and a Xeon 1275L CPU. The new mega-Unraid server will allow me to shift my original server to being a backup. My CPU + RAM + mobo + chassis budget is less than $2k.
     Already have:
     28x 8TB Reds and 2x 12TB WD Golds
     2x Dell LSI SAS 9207-8i PCI-E 3.0
     2x 1TB Samsung 860 SSDs
     And if there is a way to use what I have on hand, ideas would be appreciated:
     SA120 w/ drive caddies (currently empty)
     TS440 w/ 12 empty drive caddies
     64GB of DDR4 2400MHz Corsair Dominator Platinum RAM (*not using - from previous main rig)
     64GB of DDR3 ECC RAM and a Xeon 1275L v3 on a TS440 mobo
     Dell PowerEdge T30, 8GB UDIMM 2400MT/s, Intel Xeon E3-1225 (*not using - from a failed project)
     Lots of rack space
     What is the most efficient build? I would like to have current-generation equipment, unless I have enough lanes with the TS440 or need to go to the latest generation. I also have a spare 7700K, or could steal the 8700K from my new rig, but I'm unsure about not using ECC RAM.
  17. Is the only solution really to reinstall with SeaBIOS rather than OVMF?
  18. Any update on this? Big business right now, and I would love to throw some GPUs in my idle Unraid server...
  19. Did you get this to work? I have tried many things and the VM can see it but it can't see the temps or clock speeds so none of my mining programs will run :-(...
  20. Something obviously changed.... No other updates or changes were made to server in the past several days and it was fine to the very moment I performed the update... "May 19 19:44:09 Tower kernel: e1000: eth2 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX May 19 19:44:09 Tower kernel: br2: port 1(eth2) entered blocking state May 19 19:44:09 Tower kernel: br2: port 1(eth2) entered forwarding state May 19 19:44:09 Tower dhcpcd[1703]: br2: soliciting a DHCP lease May 19 19:44:11 Tower kernel: e1000 0000:04:00.1 eth2: Detected Tx Unit Hang May 19 19:44:11 Tower kernel: Tx Queue <0> May 19 19:44:11 Tower kernel: TDH <0> May 19 19:44:11 Tower kernel: TDT <1> May 19 19:44:11 Tower kernel: next_to_use <1> May 19 19:44:11 Tower kernel: next_to_clean <0> May 19 19:44:11 Tower kernel: buffer_info[next_to_clean] May 19 19:44:11 Tower kernel: time_stamp <10018941f> May 19 19:44:11 Tower kernel: next_to_watch <0> May 19 19:44:11 Tower kernel: jiffies <100189a40> May 19 19:44:11 Tower kernel: next_to_watch.status <0> May 19 19:44:13 Tower kernel: e1000 0000:04:00.1 eth2: Detected Tx Unit Hang May 19 19:44:13 Tower kernel: Tx Queue <0> May 19 19:44:13 Tower kernel: TDH <0> May 19 19:44:13 Tower kernel: TDT <1> May 19 19:44:13 Tower kernel: next_to_use <1> May 19 19:44:13 Tower kernel: next_to_clean <0> May 19 19:44:13 Tower kernel: buffer_info[next_to_clean] May 19 19:44:13 Tower kernel: time_stamp <10018941f> May 19 19:44:13 Tower kernel: next_to_watch <0> May 19 19:44:13 Tower kernel: jiffies <10018a240> May 19 19:44:13 Tower kernel: next_to_watch.status <0> May 19 19:44:14 Tower dhcpcd[1703]: br2: probing for an IPv4LL address May 19 19:44:14 Tower kernel: e1000 0000:04:00.1 eth2: Reset adapter May 19 19:44:14 Tower kernel: br2: port 1(eth2) entered disabled state May 19 19:44:15 Tower dhcpcd[1703]: br2: carrier lost May 19 19:44:18 Tower dhcpcd[1703]: br2: carrier acquired May 19 19:44:18 Tower kernel: e1000: eth2 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX May 19 19:44:18 Tower kernel: br2: port 1(eth2) entered blocking state May 19 19:44:18 Tower kernel: br2: port 1(eth2) entered forwarding state May 19 19:44:19 Tower dhcpcd[1703]: br2: soliciting a DHCP lease May 19 19:44:20 Tower kernel: e1000 0000:04:00.1 eth2: Detected Tx Unit Hang May 19 19:44:20 Tower kernel: Tx Queue <0> May 19 19:44:20 Tower kernel: TDH <0> May 19 19:44:20 Tower kernel: TDT <1> May 19 19:44:20 Tower kernel: next_to_use <1> May 19 19:44:20 Tower kernel: next_to_clean <0> May 19 19:44:20 Tower kernel: buffer_info[next_to_clean] May 19 19:44:20 Tower kernel: time_stamp <10018bab2> May 19 19:44:20 Tower kernel: next_to_watch <0> May 19 19:44:20 Tower kernel: jiffies <10018c000> May 19 19:44:20 Tower kernel: next_to_watch.status <0> May 19 19:44:22 Tower kernel: e1000 0000:04:00.1 eth2: Detected Tx Unit Hang May 19 19:44:22 Tower kernel: Tx Queue <0> May 19 19:44:22 Tower kernel: TDH <0> May 19 19:44:22 Tower kernel: TDT <1> May 19 19:44:22 Tower kernel: next_to_use <1> May 19 19:44:22 Tower kernel: next_to_clean <0> May 19 19:44:22 Tower kernel: buffer_info[next_to_clean] May 19 19:44:22 Tower kernel: time_stamp <10018bab2> May 19 19:44:22 Tower kernel: next_to_watch <0> May 19 19:44:22 Tower kernel: jiffies <10018c801> May 19 19:44:22 Tower kernel: next_to_watch.status <0> May 19 19:44:24 Tower dhcpcd[1703]: br2: probing for an IPv4LL address May 19 19:44:24 Tower kernel: e1000 0000:04:00.1 eth2: Reset adapter May 19 19:44:24 Tower kernel: e1000: eth2 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: 
RX May 19 19:44:24 Tower kernel: e1000 0000:04:00.1 eth2: Detected Tx Unit Hang May 19 19:44:24 Tower kernel: Tx Queue <0> May 19 19:44:24 Tower kernel: TDH <0> May 19 19:44:24 Tower kernel: TDT <1> May 19 19:44:24 Tower kernel: next_to_use <1> May 19 19:44:24 Tower kernel: next_to_clean <0> May 19 19:44:24 Tower kernel: buffer_info[next_to_clean] May 19 19:44:24 Tower kernel: time_stamp <10018bab2> May 19 19:44:24 Tower kernel: next_to_watch <0> May 19 19:44:24 Tower kernel: jiffies <10018d000> May 19 19:44:24 Tower kernel: next_to_watch.status <0> May 19 19:44:24 Tower kernel: e1000: eth2 NIC Link is Down May 19 19:44:24 Tower kernel: e1000 0000:04:00.1 eth2: Reset adapter May 19 19:44:25 Tower kernel: br2: port 1(eth2) entered disabled state May 19 19:44:26 Tower dhcpcd[1703]: br2: carrier lost May 19 19:44:29 Tower dhcpcd[1703]: br2: carrier acquired May 19 19:44:29 Tower kernel: e1000: eth2 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX May 19 19:44:29 Tower kernel: br2: port 1(eth2) entered blocking state May 19 19:44:29 Tower kernel: br2: port 1(eth2) entered forwarding state May 19 19:44:29 Tower dhcpcd[1703]: br2: soliciting a DHCP lease May 19 19:44:31 Tower kernel: e1000 0000:04:00.1 eth2: Detected Tx Unit Hang May 19 19:44:31 Tower kernel: Tx Queue <0> May 19 19:44:31 Tower kernel: TDH <0> May 19 19:44:31 Tower kernel: TDT <1> May 19 19:44:31 Tower kernel: next_to_use <1> May 19 19:44:31 Tower kernel: next_to_clean <0> May 19 19:44:31 Tower kernel: buffer_info[next_to_clean] May 19 19:44:31 Tower kernel: time_stamp <10018e28d> May 19 19:44:31 Tower kernel: next_to_watch <0> May 19 19:44:31 Tower kernel: jiffies <10018e840> May 19 19:44:31 Tower kernel: next_to_watch.status <0> May 19 19:44:33 Tower kernel: e1000 0000:04:00.1 eth2: Detected Tx Unit Hang May 19 19:44:33 Tower kernel: Tx Queue <0> May 19 19:44:33 Tower kernel: TDH <0> May 19 19:44:33 Tower kernel: TDT <1> May 19 19:44:33 Tower kernel: next_to_use <1> May 19 19:44:33 Tower kernel: next_to_clean <0> May 19 19:44:33 Tower kernel: buffer_info[next_to_clean] May 19 19:44:33 Tower kernel: time_stamp <10018e28d> May 19 19:44:33 Tower kernel: next_to_watch <0> May 19 19:44:33 Tower kernel: jiffies <10018f040> May 19 19:44:33 Tower kernel: next_to_watch.status <0> May 19 19:44:34 Tower dhcpcd[1703]: br2: probing for an IPv4LL address May 19 19:44:35 Tower kernel: e1000 0000:04:00.1 eth2: Detected Tx Unit Hang May 19 19:44:35 Tower kernel: Tx Queue <0> May 19 19:44:35 Tower kernel: TDH <0> May 19 19:44:35 Tower kernel: TDT <1> May 19 19:44:35 Tower kernel: next_to_use <1> May 19 19:44:35 Tower kernel: next_to_clean <0> May 19 19:44:35 Tower kernel: buffer_info[next_to_clean] May 19 19:44:35 Tower kernel: time_stamp <10018e28d> May 19 19:44:35 Tower kernel: next_to_watch <0> May 19 19:44:35 Tower kernel: jiffies <10018f840> May 19 19:44:35 Tower kernel: next_to_watch.status <0> May 19 19:44:37 Tower kernel: e1000 0000:04:00.1 eth2: Detected Tx Unit Hang May 19 19:44:37 Tower kernel: Tx Queue <0> May 19 19:44:37 Tower kernel: TDH <0> May 19 19:44:37 Tower kernel: TDT <1> May 19 19:44:37 Tower kernel: next_to_use <1> May 19 19:44:37 Tower kernel: next_to_clean <0> May 19 19:44:37 Tower kernel: buffer_info[next_to_clean] May 19 19:44:37 Tower kernel: time_stamp <10018e28d> May 19 19:44:37 Tower kernel: next_to_watch <0> May 19 19:44:37 Tower kernel: jiffies <100190040> May 19 19:44:37 Tower kernel: next_to_watch.status <0> May 19 19:44:39 Tower kernel: e1000 0000:04:00.1 eth2: Detected Tx Unit Hang May 19 19:44:39 Tower 
kernel: Tx Queue <0> May 19 19:44:39 Tower kernel: TDH <0> May 19 19:44:39 Tower kernel: TDT <1> May 19 19:44:39 Tower kernel: next_to_use <1> May 19 19:44:39 Tower kernel: next_to_clean <0> May 19 19:44:39 Tower kernel: buffer_info[next_to_clean] May 19 19:44:39 Tower kernel: time_stamp <10018e28d> May 19 19:44:39 Tower kernel: next_to_watch <0> May 19 19:44:39 Tower kernel: jiffies <100190840> May 19 19:44:39 Tower kernel: next_to_watch.status <0> May 19 19:44:39 Tower kernel: e1000 0000:04:00.1 eth2: Reset adapter May 19 19:44:39 Tower kernel: br2: port 1(eth2) entered disabled state "
  21. Reseating the card allowed it to work again for about 30 minutes; now it is back to doing it again... The card was seated tightly, and this server has been running fine for about a year with every other update except this one...
  22. Immediately after upgrading and rebooting making no other changes my e1000 ethernet adapter has gone into a frenzy resetting itself. "May 19 21:19:31 Tower kernel: e1000 0000:04:00.1 eth2: Detected Tx Unit Hang May 19 21:19:31 Tower kernel: Tx Queue <0> May 19 21:19:31 Tower kernel: TDH <0> May 19 21:19:31 Tower kernel: TDT <1> May 19 21:19:31 Tower kernel: next_to_use <1> May 19 21:19:31 Tower kernel: next_to_clean <0> May 19 21:19:31 Tower kernel: buffer_info[next_to_clean] May 19 21:19:31 Tower kernel: time_stamp <1006fd4c8> May 19 21:19:31 Tower kernel: next_to_watch <0> May 19 21:19:31 Tower kernel: jiffies <1006fe200> May 19 21:19:31 Tower kernel: next_to_watch.status <0> May 19 21:19:32 Tower dhcpcd[1703]: br2: probing for an IPv4LL address May 19 21:19:33 Tower kernel: e1000 0000:04:00.1 eth2: Detected Tx Unit Hang May 19 21:19:33 Tower kernel: Tx Queue <0> May 19 21:19:33 Tower kernel: TDH <0> May 19 21:19:33 Tower kernel: TDT <1> May 19 21:19:33 Tower kernel: next_to_use <1> May 19 21:19:33 Tower kernel: next_to_clean <0> May 19 21:19:33 Tower kernel: buffer_info[next_to_clean] May 19 21:19:33 Tower kernel: time_stamp <1006fd4c8> May 19 21:19:33 Tower kernel: next_to_watch <0> May 19 21:19:33 Tower kernel: jiffies <1006fea00> May 19 21:19:33 Tower kernel: next_to_watch.status <0> May 19 21:19:35 Tower kernel: e1000 0000:04:00.1 eth2: Detected Tx Unit Hang May 19 21:19:35 Tower kernel: Tx Queue <0> May 19 21:19:35 Tower kernel: TDH <0> May 19 21:19:35 Tower kernel: TDT <1> May 19 21:19:35 Tower kernel: next_to_use <1> May 19 21:19:35 Tower kernel: next_to_clean <0> May 19 21:19:35 Tower kernel: buffer_info[next_to_clean] May 19 21:19:35 Tower kernel: time_stamp <1006fd4c8> May 19 21:19:35 Tower kernel: next_to_watch <0> May 19 21:19:35 Tower kernel: jiffies <1006ff200> May 19 21:19:35 Tower kernel: next_to_watch.status <0> May 19 21:19:37 Tower kernel: e1000 0000:04:00.1 eth2: Detected Tx Unit Hang May 19 21:19:37 Tower kernel: Tx Queue <0> May 19 21:19:37 Tower kernel: TDH <0> May 19 21:19:37 Tower kernel: TDT <1> May 19 21:19:37 Tower kernel: next_to_use <1> May 19 21:19:37 Tower kernel: next_to_clean <0> May 19 21:19:37 Tower kernel: buffer_info[next_to_clean]"
  23. I read almost every page of this post but can't seem to find anyone who addresses a solution other than pre-clearing and then rebuilding. One of my data disks, an 8TB V2 Seagate, was disabled by Unraid because of 'read errors', but the drive then passed both the short and long SMART tests. How do we re-enable the drive without pre-clearing and rebuilding? I am thinking it was just a loose connection in the hot-swap bay.
     ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
     1 Raw read error rate 0x000f 114 099 006 Pre-fail Always Never 70716432
     3 Spin up time 0x0003 091 091 000 Pre-fail Always Never 0
     4 Start stop count 0x0032 100 100 020 Old age Always Never 134
     5 Reallocated sector count 0x0033 100 100 010 Pre-fail Always Never 0
     7 Seek error rate 0x000f 084 060 030 Pre-fail Always Never 258740560
     9 Power on hours 0x0032 096 096 000 Old age Always Never 4142 (5m, 19d, 14h)
     10 Spin retry count 0x0013 100 100 097 Pre-fail Always Never 0
     12 Power cycle count 0x0032 100 100 020 Old age Always Never 49
     183 Runtime bad block 0x0032 100 100 000 Old age Always Never 0
     184 End-to-end error 0x0032 100 100 099 Old age Always Never 0
     187 Reported uncorrect 0x0032 100 100 000 Old age Always Never 0
     188 Command timeout 0x0032 100 100 000 Old age Always Never 0
     189 High fly writes 0x003a 100 100 000 Old age Always Never 0
     190 Airflow temperature cel 0x0022 068 053 045 Old age Always Never 32 (min/max 28/39)
     191 G-sense error rate 0x0032 100 100 000 Old age Always Never 0
     192 Power-off retract count 0x0032 100 100 000 Old age Always Never 1646
     193 Load cycle count 0x0032 098 098 000 Old age Always Never 4359
     194 Temperature celsius 0x0022 032 047 000 Old age Always Never 32 (0 15 0 0 0)
     195 Hardware ECC recovered 0x001a 114 099 000 Old age Always Never 70716432
     197 Current pending sector 0x0012 100 100 000 Old age Always Never 0
     198 Offline uncorrectable 0x0010 100 100 000 Old age Offline Never 0
     199 UDMA CRC error count 0x003e 200 200 000 Old age Always Never 0
     240 Head flying hours 0x0000 100 253 000 Old age Offline Never 2739 (24 10 0)
     241 Total lbas written 0x0000 100 253 000 Old age Offline Never 62043713366
     242 Total lbas read 0x0000 100 253 000 Old age Offline Never 242636153850
     SMART Self-test log structure revision number 1
     Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
     # 1 Extended offline Completed without error 00% 4114 -
     # 2 Short offline Completed without error 00% 4098
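     For reference, a sketch of the smartctl commands that produce the health data quoted above, assuming the disk appears as /dev/sdX (placeholder device name):
        # Full SMART attributes and self-test log (the same data shown in the post)
        smartctl -a /dev/sdX

        # Kick off the short and extended (long) self-tests, then check the results
        smartctl -t short /dev/sdX
        smartctl -t long /dev/sdX
        smartctl -l selftest /dev/sdX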