Ducky

Members
  • Posts: 36
  • Joined
  • Last visited

  1. Forgot to mention, mine was fine on Monday; it was just Sunday that wasn't working!
  2. As Johnnie said, whilst it 'may' be possible, in reality you should brace yourself for a rebuild. I had the same issue after swapping to a JBOD card; in the end I started again and restored my data from a backup.
  3. I've got a parity check scheduled for once a week, but the system does a check every day at the same set time. Not sure if this is a bug or not, as no errors are detected.
  4. I think on mine, when I changed the card, the drives then showed the model/serial in the Unraid interface using JBOD. In Raid0, they showed a generic name (provided by the raid card).
  5. Yup, it does. If I run it in the GUI it starts the test and then throws back the 'error' message, but if I check the progress in the CLI it shows it still running:

     -----------------
     Self-test execution status:  52% of test remaining
     SMART Self-test log
     Num  Test              Status                     segment  LifeTime  LBA_first_err [SK ASC ASQ]
          Description                                  number   (hours)
     # 1  Background short  Self test in progress ...     6       NOW                - [-   -    -]
     # 2  Background short  Completed                     -     24740                - [-   -    -]
     # 3  Background short  Completed                     -     24655                - [-   -    -]
     ------------------

     and then eventually completed:

     ------------------
     Num  Test              Status                     segment  LifeTime  LBA_first_err [SK ASC ASQ]
          Description                                  number   (hours)
     # 1  Background short  Completed                     -     24748                - [-   -    -]
     # 2  Background short  Completed                     -     24740                - [-   -    -]
     # 3  Background short  Completed                     -     24655                - [-   -    -]
     -------------------

     There is a bit that says 'non-medium error count' and this goes up by one each time, whatever that means!
  6. This is what I get using 'smartctl -a' against one of the hard drives. It does look like SMART data is available, but whenever you run the test in the GUI, it flags up errors (is it referring to the ones in the 'counter log' bit, maybe)? thx

     ----------------------
     smartctl 6.6 2017-11-05 r4594 [x86_64-linux-4.14.49-unRAID] (local build)
     Copyright (C) 2002-17, Bruce Allen, Christian Franke, www.smartmontools.org

     === START OF INFORMATION SECTION ===
     Vendor:               HP
     Product:              EG0900FBVFQ
     Revision:             HPDE
     User Capacity:        900,185,481,216 bytes [900 GB]
     Logical block size:   512 bytes
     Rotation Rate:        10020 rpm
     Form Factor:          2.5 inches
     Logical Unit id:      0x5000cca02211f1e0
     Serial number:        KPV9VY4F
     Device type:          disk
     Transport protocol:   SAS (SPL-3)
     Local Time is:        Wed Sep 19 09:21:49 2018 BST
     SMART support is:     Available - device has SMART capability.
     SMART support is:     Enabled
     Temperature Warning:  Enabled

     === START OF READ SMART DATA SECTION ===
     SMART Health Status: OK
     Current Drive Temperature:     27 C
     Drive Trip Temperature:        60 C
     Manufactured in week 01 of year 2013
     Specified cycle count over device lifetime:  50000
     Accumulated start-stop cycles:  84
     Specified load-unload count over device lifetime:  600000
     Accumulated load-unload cycles:  1092
     Elements in grown defect list: 0

     Vendor (Seagate) cache information
       Blocks sent to initiator = 11520304325066752

     Error counter log:
                Errors Corrected by           Total   Correction     Gigabytes    Total
                    ECC          rereads/     errors  algorithm      processed    uncorrected
                fast | delayed   rewrites  corrected  invocations   [10^9 bytes]  errors
     read:          0   2532168         0         0          0     128424.213           0
     write:         0   5717982         0   5717982          0      43665.787           0

     Non-medium error count:     1127

     SMART Self-test log
     Num  Test              Status                     segment  LifeTime  LBA_first_err [SK ASC ASQ]
          Description                                  number   (hours)
     # 1  Background short  Completed                     -     24740                - [-   -    -]
     # 2  Background short  Completed                     -     24655                - [-   -    -]

     Long (extended) Self Test duration: 7308 seconds [121.8 minutes]
  7. Evening. Following on from another post, I switched out the 410 raid controller on my HP DL380 G8 for an LSI 9207-8i, in the hope I would have more control of the drives and no longer need to do the 'Raid0' thing for each drive. Long story short, I eventually rebuilt the system in the end, as the drives were coming up as unknown (no big issue to restore everything). However, whilst I was hoping I would then have access to the SMART data, I find that Unraid is unable to read the capabilities of the SAS drives, though the SATA drives are fine. Each time I try running the SMART tests I get an error, or the system says the drive needs to be spun up (I think they are all spinning anyway, as I thought it wasn't possible for Unraid to spin down a SAS drive?). Was it a wasted upgrade? thx
  8. I picked up a Dell LSI 9207-8i, which I temp tested in another box with Unraid; it works fine, so the next stage is to try one of the drives and see if the Raid0 volume is passed through OK or not... Will try as soon as I can; I'm away this weekend.
  9. I use a Milesight PTZ camera (C2961S) on an old i3 desktop running Windows 10, which saves to a NAS share on Unraid. I had a lot of issues trying to get the system to detect the camera under Unraid, running in a VM; eventually I gave up and stuck it on a desktop. I've not tried any other software except Milesight VMS Pro, mostly because I need the PTZ options, but the software doesn't feel very user friendly; I think it's aimed at a more professional market. If it helps, I don't notice any network issues (1 Gb), but the camera has a 100 Mb link only.
  10. Yeah, I've heard horror stories about the Chinese stuff! Will have a nosey for any 'server pulled' ones. No, it's a stock server (in the garage), so noise wasn't an issue! lol
  11. That should be feasible; the only issue I have is that my cache drives are 23+24 (14 drives on each connector, not 16), but I could get rid of them while I make the switch. Are you thinking of moving the drives to the other side of the backplane, connected to the HBA, and basically doing the same as you mentioned earlier? I need to buy the HBA card before I proceed. I'm a bit wary about the cheap ones from China on eBay - are they dodgy? I've seen a UK seller with a genuine one for twice the amount; not too fussed, for peace of mind.
  12. The current controller connects to the SAS backplane (two cables), each controlling 16 drives, so I guess the above isn't going to be possible, sadly... Agreed on not spinning down drives to improve their lifespan; it was just for power reasons... but currently I have half the drives ejected as they're not in use, so I could do the same approach on JBOD. It was just a query really. cheers :)
  13. Quick update on the 'Disk Logs?' thread: I've re-enabled Sync and I'm not seeing the issues, so I think it was most likely what 'itimpi' had suggested, and there was a large file on the cache drive which the mover was trying to copy back every morning. Since I deleted the VM file I've not had the problem.
  14. So realistically, whilst it works, it's not ideal and the JBOD option would be best. Do you know if Unraid is still unable to spin down SAS drives? I can get SMART info, but only if I drop into the HP diag menus, which isn't very practical... Wonder if I should just bite the bullet and move everything to an LSI 9211. I do have a second server I could set up as a trial Unraid, I guess, and then copy everything across, rebuild the original with a 9211, and then migrate it all back... Thanks for your input btw.
  15. I have two R0 volumes assigned as parity drives, so potentially I'm safe and there's no need to bother moving to a JBOD setup (unless I want the SMART reporting etc.)?
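A footnote on the raw smartctl output quoted in posts 5 and 6: if you'd rather script the check than eyeball the log each time, here is a minimal Python sketch for pulling the useful bits out of the text that smartctl prints for a SCSI/SAS drive. These are my own hypothetical helpers (not part of Unraid or smartmontools), and the field names in the parser are my own labels; they just follow the column order visible in the output above.

```python
import re

def selftest_in_progress(log_text: str) -> bool:
    """True if the SCSI self-test log reports a background test still running.
    smartctl marks a running test with 'Self test in progress' in the Status
    column, as seen in post 5."""
    return "Self test in progress" in log_text

def remaining_percent(log_text: str):
    """Extract the '52% of test remaining' figure, or None if absent."""
    m = re.search(r"(\d+)% of test remaining", log_text)
    return int(m.group(1)) if m else None

def parse_error_counter_row(line: str) -> dict:
    """Parse one 'read:'/'write:' row of the SCSI error counter log.
    Column order, per the output in post 6: ECC fast, ECC delayed,
    rereads/rewrites, total errors corrected, correction algorithm
    invocations, gigabytes processed, total uncorrected errors.
    Field names below are hypothetical labels of my own."""
    parts = line.split()
    fields = ["ecc_fast", "ecc_delayed", "rewrites", "total_corrected",
              "algorithm_invocations", "gigabytes_processed",
              "total_uncorrected"]
    values = [float(p) if "." in p else int(p) for p in parts[1:8]]
    return {"op": parts[0].rstrip(":"), **dict(zip(fields, values))}
```

For example, feeding the 'read:' row from post 6 into `parse_error_counter_row` gives `total_uncorrected == 0`, which is consistent with the drive being healthy despite the GUI's complaint; the large `ecc_delayed` figure is corrected errors, not failures.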