
Whoa, where did these parity errors come from?


TyantA


So... unRAID has been trudging along pretty well (granted, getting more and more full) without a reboot for the past 60+ days.

 

Last week I did a parity check in anticipation of receiving my first 6TB drive, headed for the parity slot. The check finished without sync errors; however, tonight I noticed the parity drive is reporting 123 errors. This isn't necessarily representative of a drive issue, is it? More likely software?

 

2017-12-11_2017.png

 

My plan was to pull the current 3TB parity drive, plop in the 6TB drive and begin the parity rebuild as soon as its 3rd pre-clear cycle completes. From there, I would bump a 2TB drive from the array, replacing it with the old parity drive.

 

Are these errors cause for concern? Should I do another parity check before the swap? Diagnostics attached.  

diagnostics-20171211-1945.zip

Edited by TyantA
Link to comment

Read errors on the parity disk.

Try running an extended SMART test on that disk. IIRC the option is available through the dashboard, in the drive properties.
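
If you'd rather do it from the command line, here's a minimal sketch using smartctl from smartmontools (which should be available on unRAID). /dev/sdl is the device name taken from the syslog below, so confirm it on your box first, and it needs to run as root:

# Rough command-line alternative to the GUI, using smartmontools' smartctl.
import subprocess

DEV = "/dev/sdl"  # parity disk as reported in the kernel log -- verify before running

# Kick off the extended (long) self-test; smartctl prints the expected duration.
subprocess.run(["smartctl", "-t", "long", DEV], check=True)

# Once the test has had time to finish, pull the attribute table and look at
# the counters that matter for a failing surface.
out = subprocess.run(["smartctl", "-A", DEV], capture_output=True, text=True).stdout
for line in out.splitlines():
    if any(a in line for a in ("Reallocated_Sector_Ct",
                               "Current_Pending_Sector",
                               "Offline_Uncorrectable")):
        print(line)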

 

Nov  6 03:41:35 Slipstream kernel: ata11.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0
Nov  6 03:41:35 Slipstream kernel: ata11.00: irq_stat 0x40000001
Nov  6 03:41:35 Slipstream kernel: ata11.00: failed command: READ DMA EXT
Nov  6 03:41:35 Slipstream kernel: ata11.00: cmd 25/00:d8:b0:ed:a3/00:03:2e:01:00/e0 tag 13 dma 503808 in
Nov  6 03:41:35 Slipstream kernel:         res 51/40:60:28:f1:a3/00:00:2e:01:00/0e Emask 0x9 (media error)
Nov  6 03:41:35 Slipstream kernel: ata11.00: status: { DRDY ERR }
Nov  6 03:41:35 Slipstream kernel: ata11.00: error: { UNC }
Nov  6 03:41:35 Slipstream kernel: ata11.00: configured for UDMA/133
Nov  6 03:41:35 Slipstream kernel: sd 11:0:0:0: [sdl] tag#13 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
Nov  6 03:41:35 Slipstream kernel: sd 11:0:0:0: [sdl] tag#13 Sense Key : 0x3 [current] 
Nov  6 03:41:35 Slipstream kernel: sd 11:0:0:0: [sdl] tag#13 ASC=0x11 ASCQ=0x4 
Nov  6 03:41:35 Slipstream kernel: sd 11:0:0:0: [sdl] tag#13 CDB: opcode=0x88 88 00 00 00 00 01 2e a3 ed b0 00 00 03 d8 00 00
Nov  6 03:41:35 Slipstream kernel: blk_update_request: I/O error, dev sdl, sector 5077462448
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462384
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462392
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462400
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462408
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462416
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462424
Nov  6 03:41:35 Slipstream kernel: ata11: EH complete
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462432
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462440
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462448
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462456
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462464
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462472
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462480
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462488
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462496
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462504
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462512
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462520
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462528
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462536
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462544
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462552
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462560
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462568
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462576
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462584
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462592
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462600
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462608
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462616
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462624
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462632
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462640
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462648
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462656
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462664
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462672
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462680
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462688
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462696
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462704
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462712
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462720
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462728
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462736
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462744
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462752
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462760
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462768
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462776
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462784
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462792
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462800
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462808
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462816
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462824
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462832
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462840
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462848
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462856
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462864
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462872
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462880
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462888
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462896
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462904
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462912
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462920
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462928
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462936
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462944
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462952
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462960
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462968
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462976
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462984
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077462992
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077463000
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077463008
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077463016
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077463024
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077463032
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077463040
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077463048
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077463056
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077463064
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077463072
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077463080
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077463088
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077463096
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077463104
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077463112
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077463120
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077463128
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077463136
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077463144
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077463152
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077463160
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077463168
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077463176
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077463184
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077463192
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077463200
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077463208
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077463216
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077463224
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077463232
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077463240
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077463248
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077463256
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077463264
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077463272
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077463280
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077463288
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077463296
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077463304
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077463312
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077463320
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077463328
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077463336
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077463344
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077463352
Nov  6 03:41:35 Slipstream kernel: md: disk0 read error, sector=5077463360

Edit: how do I make the code windows expandable? o.O

Edited by Fireball3
Link to comment

Adding some background information:

It happened while the mover was running. The write process is read-modify-write: the existing parity bits are read, modified according to the new data being written, and then the updated bits are written back to the parity drive.
In this case the read failed at some sectors. I assume the parity is still calculated and written to the parity drive regardless of the read error; during a parity check unRAID will do the same.
A read error on any disk is corrected using the redundant information available: either by rebuilding parity, if the error is on the parity drive, or by rebuilding from parity, if the error is on a data drive.

There were no write errors, otherwise the drive would have dropped out.
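
For anyone who wants to see the arithmetic, here's a minimal sketch of the single-parity update (plain XOR); the byte values are made up for illustration:

# Read-modify-write for single parity: to update one block on a data disk,
# only the old data block and the old parity block need to be read:
#
#   new_parity = old_parity XOR old_data XOR new_data
#
# If the read of old_parity fails (as in the syslog above), the parity for
# that stripe can still be recalculated from all the data disks and written back.

old_data   = bytes([0b10110010])   # made-up example values
new_data   = bytes([0b01100111])
old_parity = bytes([0b11011000])

new_parity = bytes(p ^ od ^ nd for p, od, nd in zip(old_parity, old_data, new_data))
print(f"{new_parity[0]:08b}")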

 

In a single-drive scenario, this issue would have been data corruption.

I wonder why the errors are not counted in the smart log.

 

Considering that you ran a parity check a few days ago and no errors manifested, you should keep an eye on this drive. If it keeps throwing read errors, you should consider replacing it.

 

Replacing the parity drive is fine, as the data drives seem to be healthy.
But if you had this kind of issue on a data drive and then replaced the parity drive, you would risk a rebuild error.

Link to comment
4 hours ago, johnnie.black said:

Those are read errors, and it was a disk issue; there are no pending sectors, so you should run an extended SMART test.

 

Sigh, of course they are. 

 

4 hours ago, Fireball3 said:

Read errors on the parity disk.

Try running an extended SMART test on that disk. IIRC the option is available through the dashboard, in the drive properties.

 

 

Thanks. Yep, with the drive spun up, it's an option when clicking the (in my case) green thumbs-up beside SMART Status, under the drive's attributes. I was also able to get there from the Main tab by clicking on the drive.

 

3 hours ago, Fireball3 said:

Adding some background information:

It happened while the mover was running. The write process is read-modify-write: the existing parity bits are read, modified according to the new data being written, and then the updated bits are written back to the parity drive.
In this case the read failed at some sectors. I assume the parity is still calculated and written to the parity drive regardless of the read error; during a parity check unRAID will do the same.
A read error on any disk is corrected using the redundant information available: either by rebuilding parity, if the error is on the parity drive, or by rebuilding from parity, if the error is on a data drive.

There were no write errors, otherwise the drive would have dropped out.

 

In a single-drive scenario, this issue would have been data corruption.

I wonder why the errors are not counted in the smart log.

 

Considering that you ran a parity check a few days ago and no errors manifested, you should keep an eye on this drive. If it keeps throwing read errors, you should consider replacing it.

 

Replacing the parity drive is fine, as the data drives seem to be healthy.
But if you had this kind of issue on a data drive and then replaced the parity drive, you would risk a rebuild error.

 

Thanks for the bg info. 

Yes, it's "good" that it's happening to my parity drive, given I have a replacement just finishing up its 3rd pre-clear post-read... but I was counting on the added space I'd get by re-introducing this drive as a data drive. 

 

I have started an extended smart test. We'll see what it comes back with. You're right - odd that the smart status was still showing good. 

 

Perhaps as a complete coincidence, I went to watch a TV series (Sunday, maybe) that had existed on the array for some time. Plex wouldn't play it. I thought it could be an issue with the file, so I tried to play the last episode we had watched; it wouldn't play either. I figured it was an issue on the Plex player end, but the error was something like "make sure the disk is properly mounted". Last night I dug around a bit and noticed that the episodes were in fact "missing" from where they should be.

 

I found them in another folder on the same drive. (A while back I had been moving some series around). It's almost like... they were moved without parity knowing? Then when the parity check was run it "corrected" their location back to where they were. Is that possible?  

Link to comment
7 minutes ago, TyantA said:

I found them in another folder on the same drive. (A while back I had been moving some series around). It's almost like... they were moved without parity knowing? Then when the parity check was run it "corrected" their location back to where they were. Is that possible? 

Not impossible, but very, very unlikely.
The more plausible scenario is that Plex didn't notice that the files were moved.

 

If the parity check had fixed something like that, it would have reported errors, and according to your opening post there were no sync errors.
More likely the Plex scanner ran in the meantime and updated the database.

Link to comment
8 minutes ago, TyantA said:

I was counting on the added space I'd get by re-introducing this drive as a data drive.

You can try a preclear on that drive once you have replaced it. That may reveal more issues.

If it stays stable, you can still use it as a data drive, but keep an eye on it and maybe put less valuable data on it.

 

Link to comment
11 minutes ago, Fireball3 said:

If it stays stable, you can still use it as a data drive, but keep an eye on it and maybe put less valuable data on it.

Keep in mind if it's an array drive, you are relying on it to accurately rebuild any other failed drive. Don't keep dodgy drives in the array, period. Once bitten, twice shy.

 

Maybe use it as offline 2nd tier backup of less valuable data, but not an array drive.

Link to comment

Home from work. The extended SMART test completed without error. (See attachment.)

 

So since there were no sync errors last week, should I be good to swap in the new parity drive, given that its 3rd preclear cycle finished successfully?

2017-12-12_1729.png

 

Edit: back to the Plex thing - it's still weird because Plex knew where it was in the new location. It's like the data reverted to the old location on the same drive and Plex wasn't made aware of that. But I haven't made changes recently, and as mentioned, I had watched episodes within the last few days that suddenly weren't there anymore. Weird.

Edited by TyantA
Link to comment

Bah! I went to grab diagnostics once more before powering down. I left for ~5 minutes, then came back to a new window and requested a shutdown. Now I'm stuck at "retry unmounting disk shares" over and over. How do I get it to shut down properly? The GUI seems locked up.

 

Edit: eventually the GUI became unresponsive. The only option was to log in via SSH and issue a reboot command.

 

Edit 2: New drive is in place! First 6TB drive in the array. I may need to invest in some faster controllers soon... estimating 2 days, 11 hrs @ 28MB/s :(.
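
Sanity-checking that ETA (assuming it's simply capacity divided by the current speed):

size_bytes = 6_000_000_000_000   # 6 TB parity disk (decimal)
speed_bps  = 28_000_000          # ~28 MB/s reported by the GUI

seconds = size_bytes / speed_bps
print(f"{seconds / 86400:.2f} days")   # prints ~2.48 days, i.e. about 2 days 11-12 hours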

 

Edit 3: is there any reason NOT to start the pre-clear on the 3TB I pulled out now? I mean, the data on the data drives should be sound. I guess the risk is, if one of them checks out while parity is being synced/built on the new parity drive, I'm sunk. Whereas worst case if I still have the 3TB I'd be able to rebuild. 

 

Flip side, if I start it now (and nothing goes wrong) I'll be ready to drop it in, in place of a 2TB drive (assuming all checks out) when the parity sync is done.

Edited by TyantA
Link to comment
9 hours ago, TyantA said:

I guess the risk is, if one of them checks out while parity is being synced/built on the new parity drive, I'm sunk. Whereas worst case if I still have the 3TB I'd be able to rebuild. 

In order to do so you would need a copy of your old config - afaik.

It's always a good idea to pull a copy of the unRAID stick before changing the setup!

 

9 hours ago, TyantA said:

I guess the risk is, if one of them checks out while parity is being synced/built

Exactly.
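
If it helps anyone, here's a rough sketch of pulling a copy of the flash before a config change. It assumes the stick is mounted at /boot as usual; the destination share is just an example path, adjust to taste.

# Copy the unRAID flash drive (normally mounted at /boot, with the array
# config under /boot/config) to a dated folder on a user share.
import shutil, time

src = "/boot"
dst = f"/mnt/user/backups/flash-{time.strftime('%Y%m%d')}"  # assumption: a 'backups' share exists

shutil.copytree(src, dst)
print(f"flash copied to {dst}")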

Link to comment

Original PCI was 133 MB/s, or 267 MB/s for 64-bit cards or for 32-bit cards running at 66 MHz (PCI v2.1).

 

PCIe, by comparison, manages 250 MB/s per lane for version 1, increased to 500 MB/s for version 2 and just under 1 GB/s for version 3.

 

So the interface type, and the supported version of that interface standard, makes a huge difference in RAID solutions where many disks need to be accessed concurrently.
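
A quick back-of-the-envelope using those per-lane numbers (the card widths below are just example assumptions, and protocol overhead is ignored):

# Total interface bandwidth divided across disks read concurrently,
# e.g. during a parity check or rebuild.
configs = {
    "PCI 32-bit / 33 MHz (whole shared bus)": 133,       # MB/s
    "PCIe 1.x x4 card":                       4 * 250,   # MB/s per direction
    "PCIe 2.0 x8 card":                       8 * 500,
    "PCIe 3.0 x8 card":                       8 * 985,   # just under 1 GB/s per lane
}

disks = 8  # e.g. a fully populated 8-port HBA
for name, total in configs.items():
    print(f"{name}: ~{total} MB/s total, ~{total // disks} MB/s per disk across {disks} disks")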

Link to comment
4 hours ago, Fireball3 said:

In order to do so you would need a copy of your old config - afaik.

It's always a good idea to pull a copy of the unRAID stick before changing the setup!

 

Exactly.


 

Right. That would have been smart. Been so long since I've made changes to anything!

 

Yep, I have two old controllers in there. And yes, it's never gone much over 30MB/s. I've begun plans to build a second unRAID box for selective backups of the first. I'll poke around the forums and see which newer controllers will serve my needs, because 2+ day parity checks won't do.

 

20% - whoop whoop!

Link to comment
13 hours ago, Fireball3 said:

 

Convenient. 

 

I just realized I actually have an empty slot in my case still! :o I've been limited by the 6 onboard ports + 2x 4-port legacy controllers. 14 ports total. The case has 14 bays but one is an SSD, so if I had more ports, I could stick one more 3.5" and could cram a few more SSDs in too. 

 

Too bad there weren't any good 6-port controller options. Two of those + my motherboard and I'd be set. 16 port seems excessive (given my case) and is likely expensive. 

 

This has me thinking... 

 

I have a P9X79 Pro in my workstation and a vanilla P9X79 in my unRAID box. If I swap the two... I'll lose Bluetooth, better audio and a few other niceties, but I'd gain two onboard SATA ports. Eight onboard ports (not to mention 2x eSATA) plus a nice new 8-port controller would work out nicely.

 

There are a lot of "not yet tested" and crossflash-required entries on that list. Looks like LSI might be the best to look at. Maybe the LSI SAS 9201-8i HBA? If I'm going to make the jump, I should probably do so to a PCIe 3.0 card...

 

Whoa nelly! ~$400+ CAD range for these things!? 

 

Edit: Interesting. Just checked in on my parity rebuild. It's jumped up to ~160MB/s and will be done in 2 hours. Woot! I guess that makes sense, as there are no data drives above 3TB in the array just yet, so past that point the sync isn't held back by reads from the data drives. Whelp, I'm going to jump the gun a bit and start pre-clearing the old parity drive I just pulled out.

Edited by TyantA
Link to comment

So that's what I was hoping to see!

 

"Parity disk returned to normal operation" and "Parity sync finished (0 errors)."

 

Now if I had started the pre-clear of the old parity drive sooner, I could have decided its fate this morning: either reintroducing it to the main array or relegating it to my potential backup build.

 

As it stands, it's 62% done zeroing, so that'll be tonight's decision. 

Edited by TyantA
Link to comment
10 hours ago, TyantA said:

There are a lot of "not yet tested" and crossflash-required entries on that list. Looks like LSI might be the best to look at. Maybe the LSI SAS 9201-8i HBA? If I'm going to make the jump, I should probably do so to a PCIe 3.0 card...

 

Whoa nelly! ~$400+ CAD range for these things!? 

Right, the LSI SAS2008-based cards can be recommended. They can handle state-of-the-art SATA III disks to their full potential.

The only issue might be availability.

In the EU, those cards and the rebranded ones (the ones that need crossflashing) can be picked up on eBay for ~40€.

Dell and Fujitsu cards are often on eBay in the EU; LSI and IBM, not so much.

You'll have to check what the situation is where you are.

 

The LSI SAS 9201-16i is also often cited around here.

 

PCIe 3.0 cards can handle SSDs.

Link to comment

I found the LSI SAS9211-8i for ~$150 CAD, and an "OEM" 9300-8i PCIe 3.0 card on eBay from China for about the same price.

 

Is the latter a knockoff? I would much rather have the PCIe 3.0 version, all other things being equal. Do these cards usually come with the SAS-to-SATA breakout cables, or not? I guess I should expand my search to cards that need to be flashed. Sounds like there are price breaks to be had!

 

In other news, the former parity drive not only completed the extended SMART test but also successfully finished a pre-clear cycle. I'm dropping it back into the array in place of a 2TB drive that just threw its first reallocated-sector error.

Link to comment
  • 3 weeks later...

In hindsight, I ended up looking at the used market in Montreal AFTER I was there over the holidays. I managed to find someone selling a Dell H200 with cables for $80 CAD, which is less than I thought I'd have to spend. Had I looked while I was there, I could have had it with me for this weekend's hardware swap, but instead I either have to wait for relatives to come visit or get 'em to package it up real well and ship it. Anyway, I'm happy to have found a faster card!

Link to comment
