Posts posted by luisv

  1. 3 hours ago, trurl said:

    Just make sure they aren't too short. If a cable barely reaches, then it is going to have forces acting on it, which can affect the connection.

     

    Current cables are 18" long, going with 8 - 10" instead, but yes, I'll make sure there's no strain on the connectors.  Thanks!

  2. 21 hours ago, craigr said:

     

    This is what I was just thinking; maybe all cables are tied together or are running too closely to other components or power wires.

     

    I misread the original post and thought it was one CRC error on one drive, not one on all four.  You could have not-so-great SATA cables or a marginal controller on the motherboard.

     

    Once you get the array built, I would run two or three parity checks to make sure the numbers don't rise.

     

    Also, if the SATA power cable terminator is not so great or is poorly connected, or if you don't have a large enough power supply, you can sometimes get these errors.

     

    I once racked up thousands of CRC errors on four drives connected to the same 4-way power splitter... and discovered the splitter was not plugged all the way into the power supply's SATA power connector.  I unplugged and replugged it and the problem was gone.  Many years later, the CRC counts have never increased.

     

    craigr

     

    It's a small form factor ITX build, so yes, cable management is tight, with SATA and power cables in close proximity to one another.  I used a SATA power cable adapter as it was easier to route than the PSU's SATA cable, but I can remove it and reroute things.  I also ordered shorter SATA data cables to help with the rat's nest.  I'm using an EVGA G3 650W power supply.

     

    The preclear completed successfully and no additional errors were found; however, here are the UDMA CRC error counts:

    • Parity - 2
    • Disk 1 - 3
    • Disk 2 - 2
    • Disk 3 - 1

    After two parity checks, the CRC error counts remained the same... no other errors found.
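    For anyone wanting to re-check numbers like these, the counts come from SMART attribute 199 and can be pulled straight out of smartctl output.  A minimal sketch (the helper name is mine; /dev/sdb is only an example device):

```shell
# Print the raw UDMA CRC error count (SMART attribute 199) from
# `smartctl -A` output. Hypothetical helper; /dev/sdb is an example device.
crc_count() {
  awk '$2 == "UDMA_CRC_Error_Count" { print $NF }'
}
# Usage (as root):  smartctl -A /dev/sdb | crc_count
```

    Since the attribute is cumulative and never resets, a number that stays flat between parity checks is just history; a number that keeps climbing points at cabling or power.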

  3. 17 hours ago, CHBMB said:

    I think thanks should be directed at all those members that have been dealing with the mess the last few months.  @dlandon and @Squid have been working behind the scenes to try and clean up, and obviously @Frank1940 has put a great deal of work in here.  There are many others too.

     

    The unsung heroes imho....

     

    This has highlighted how fragile things are.  Those of us that maintain plugins/containers have a responsibility to keep maintaining them, or at least to state that life has taken over, as PhAzE did when he wound things down.

     

    For sure, many thanks and kudos to @dlandon @Squid @Frank1940 and those behind the scenes that worked on this!  

  4. Just noticed this in the log. 

    Mar 30 09:29:40 MiniVault kernel: ACPI Error: Method parse/execution failed \_SB.PCI0.SAT0.PRT0._GTF, AE_NOT_FOUND (20170728/psparse-550)
    Mar 30 09:29:40 MiniVault kernel: ACPI Error: [DSSP] Namespace lookup failure, AE_NOT_FOUND (20170728/psargs-364)
    Mar 30 09:29:40 MiniVault kernel: ACPI Error: Method parse/execution failed \_SB.PCI0.SAT0.PRT3._GTF, AE_NOT_FOUND (20170728/psparse-550)
    Mar 30 09:29:40 MiniVault kernel: ACPI Error: [DSSP] Namespace lookup failure, AE_NOT_FOUND (20170728/psargs-364)
    Mar 30 09:29:40 MiniVault kernel: ACPI Error: Method parse/execution failed \_SB.PCI0.SAT0.PRT0._GTF, AE_NOT_FOUND (20170728/psparse-550)
    Mar 30 09:29:40 MiniVault kernel: ata4.00: configured for UDMA/133
    Mar 30 09:29:40 MiniVault kernel: ata4: EH complete
    Mar 30 09:29:40 MiniVault kernel: ata3: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
    Mar 30 09:29:40 MiniVault kernel: ACPI Error: [DSSP] Namespace lookup failure, AE_NOT_FOUND (20170728/psargs-364)
    Mar 30 09:29:40 MiniVault kernel: ACPI Error: Method parse/execution failed \_SB.PCI0.SAT0.PRT2._GTF, AE_NOT_FOUND (20170728/psparse-550)
    Mar 30 09:29:40 MiniVault kernel: ata1.00: configured for UDMA/133
    Mar 30 09:29:40 MiniVault kernel: ata1: EH complete
    Mar 30 09:29:40 MiniVault kernel: ACPI Error: [DSSP] Namespace lookup failure, AE_NOT_FOUND (20170728/psargs-364)
    Mar 30 09:29:40 MiniVault kernel: ACPI Error: Method parse/execution failed \_SB.PCI0.SAT0.PRT2._GTF, AE_NOT_FOUND (20170728/psparse-550)
    Mar 30 09:29:40 MiniVault kernel: ata3.00: configured for UDMA/133
    Mar 30 09:29:40 MiniVault kernel: ata3: EH complete
    Mar 30 09:29:41 MiniVault kernel: ACPI Error: [DSSP] Namespace lookup failure, AE_NOT_FOUND (20170728/psargs-364)
    Mar 30 09:29:41 MiniVault kernel: ACPI Error: Method parse/execution failed \_SB.PCI0.SAT0.PRT1._GTF, AE_NOT_FOUND (20170728/psparse-550)
    Mar 30 09:29:41 MiniVault kernel: ata2.00: configured for UDMA/133
    Mar 30 09:29:41 MiniVault kernel: ata2: EH complete
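    For what it's worth, these look like the kernel asking the board's ACPI tables for a _GTF method that isn't there while it re-probes the SATA links.  To see whether they keep recurring, and on which ports, a quick tally sketch (the helper is my own; /var/log/syslog is where unRAID keeps the live log):

```shell
# Tally the ACPI _GTF lookup failures per SATA port in a syslog file.
# Hypothetical helper; on unRAID the live log is /var/log/syslog.
gtf_errors() {
  grep -o 'SAT0\.PRT[0-9]*\._GTF' "$1" | sort | uniq -c | sort -rn
}
# Usage:  gtf_errors /var/log/syslog
```

    If the same ports show up only around boot or link resets, and the drives still configure for UDMA/133 as above, there may be nothing to chase.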

     

  5. Just now, johnnie.black said:

    The fact that it's new doesn't mean there isn't a problem, but wait; as long as you don't get more, you're OK.

     

    For sure, totally understandable.  Just wanted to indicate that the parts weren't repurposed.

  6. 3 minutes ago, johnnie.black said:

    A single error is not a problem, but since it happened on all disks you might get more, and that would mean there's trouble.  I would say 4 bad SATA cables is not likely, so maybe the controller, or something shared like an enclosure.

     

    Everything being used in this second, smaller system is brand new: no drive enclosure, no additional controller card, just the built-in controller of the ASRock Z370M-ITX motherboard.

     

  7. 13 minutes ago, craigr said:

    Probably a SATA cable issue, or even low on power.

     

    craigr

     

    Hmm... new cables and a new 650W PSU.  Safe to assume there's nothing to worry about at this point in time?

  8. 14 hours ago, ninthwalker said:

     

    When you ran the test report, did a notification in the web UI say it was sending or sent successfully?

     

    Besides the docker's log, there is also a logs directory in the config folder with additional logs.  Does the nowshowing.log file there say anything, or the main docker log?

     

    I was able to figure it out.  I noticed an error in the docker log with the server IP and port listed twice, so I went back to the Plex tab and saw that I had mistakenly entered the port number along with the Plex server's IP address... I'm just used to typing in the port number, so I did it out of habit.  Once I removed the port number and tried again, it ran successfully and the email was sent.  It took another minute or two for the web page to update.

     

    Screen Shot 2018-03-29 at 7.38.07 AM.png

    Screen Shot 2018-03-29 at 7.38.19 AM.png

  9. 5 hours ago, CHBMB said:

    You can remove the container and image, it's only the appdata that shouldn't be removed.

    You could remove it, then stop Tautulli, rename the plexpy appdata folder to tautulli, and then edit your Tautulli template and point it to the renamed folder.

    Sent from my LG-H815 using Tapatalk
     

     

    Worked perfectly, thanks!
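    For anyone else doing this, the rename step can be sketched from the command line.  Assumptions on my part: the default unRAID appdata location (/mnt/user/appdata) and a container named tautulli; adjust both for your setup.

```shell
# Sketch of the appdata rename described above. Hypothetical helper;
# the default unRAID appdata path is assumed.
migrate_appdata() {
  appdata=${1:-/mnt/user/appdata}
  mv "$appdata/plexpy" "$appdata/tautulli"
}
# Stop the container first:  docker stop tautulli
# Then run:                  migrate_appdata
# Finally, edit the Tautulli template so /config points at the renamed folder.
```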

  10.  

    On 3/10/2018 at 7:32 PM, CHBMB said:

    Project has been renamed from Plexpy to Tautulli.

     

    To migrate:

     

    1.  Stop the current plexpy container.

    2.  Use Community Applications to install Tautulli, utilising your existing Plexpy appdata folder for /config.

    On 3/10/2018 at 11:06 PM, luisv said:

     

    Great, very easy process to migrate.  Should I leave the PlexPy docker in place but not running, or can it be removed?

     

    Any thoughts on what I should do?  I migrated without an issue and Tautulli runs perfectly, but should I delete/remove the PlexPy docker?  I ask as the migration instructions said to use the existing PlexPy appdata folder for /config, so I assume that if I remove PlexPy it would delete that PlexPy appdata folder and break Tautulli.

     

     

  11. 3 hours ago, CHBMB said:

    Project has been renamed from Plexpy to Tautulli.

     

    To migrate:

     

    1.  Stop the current plexpy container.

    2.  Use Community Applications to install Tautulli, utilising your existing Plexpy appdata folder for /config.

     

    Great, very easy process to migrate.  Should I leave the PlexPy docker in place but not running, or can it be removed?

  12. 1 hour ago, mgworek said:

    Awesome updates to the app. Love the folders, takes care of the tabs I asked for.

     

    You should have links to the web, support, and issues pages in the settings section of the app to make it easier to find info.  And maybe a link to release notes.  I didn't even know about the folders feature till yesterday.

     

    Folders?  Where do you set that?

  13. 7 minutes ago, johnnie.black said:

    There are multiple ways and utilities for testing disks, including preclear, but all are optional.  Clearing a disk is only mandatory when it's added as a new disk to a parity-protected array, and unRAID has cleared the disk in the background since v6.2.  If you want to burn in the disk, you can use preclear, badblocks, or the disk's own testing tools like WD Lifeguard, SeaTools, etc.

     

    2 minutes ago, bonienl said:

     

    And because it is optional, you will see two camps. There is no right or wrong here, you decide what works best for you.

     

     

    For sure, and I understand it's optional, with no right or wrong.  But some folks might not have another system to attach multiple drives to for tests like WD Lifeguard, so within the system that will be (or is) running unRAID, what are those options?  Since the preclear plugin or script was the optional standard and recommendation, what's the new one?

     

  14. 15 minutes ago, wgstarks said:

    Opinions on this issue are split.

     

    One opinion is that preclears are the best way to test a drive: put severe read/write stresses on it and see if it survives.  The drawback is that the 3 cycles (the usual recommended minimum) of preclears will take a long time.  IIRC the last 8TB drive I precleared took almost 2 weeks for 3 cycles; can't remember for sure, but more than a week anyway.  This would mean that an existing array with a failed drive could be waiting for a replacement, and at risk of data loss for an extended period while the new drive was being precleared, if you don't keep a spare drive on standby (hot spare).

     

    The other opinion is that preclears aren’t needed since unRAID will now clear new drives as part of the process of adding them to the array. UnRAID supports SMART and an extended SMART test will reveal any defects with a new drive. This process is much faster than preclearing (still probably measured in days though for large drives) and can all be done while the array is online just like preclearing.

     

    If you search the forum you’ll find a few discussions of preclear vs smart. I’m not qualified to offer any opinion on that. In the end it’s your decision which is best suited for your needs. In my case if I were building a new machine and stocking it with new drives I’d probably go ahead and preclear them since there wouldn’t be a rush to get the array online. This would be especially true if I was shucking the drives from external enclosures since many of them still don’t support SMART and once shucked you won’t be able to RMA them. If I had an existing array with a failed drive I’d probably skip the preclear to save time in this unprotected condition and just trust SMART.

     

    Totally understandable and agreed.  I'm not qualified to accurately articulate the differences between a preclear and SMART, nor to say which is better, but shouldn't there be some guidelines or recommendations?  Just as it was highly recommended to run the preclear script or plugin, shouldn't there be a replacement recommendation for new users that wish to perform some sort of test?
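    In the meantime, for anyone who goes the SMART route described above, the extended self-test is a couple of standard smartctl calls.  A sketch (/dev/sdb is an example device; the helper function is my own):

```shell
# Extended (long) SMART self-test; it runs on the drive itself, so the
# array can stay online. /dev/sdb is an example device.
#   smartctl -t long /dev/sdb        # start the test (hours on big drives)
#   smartctl -l selftest /dev/sdb    # check the self-test log when done
# Hypothetical helper: did the most recent self-test complete cleanly?
selftest_passed() {
  grep -q '^# 1.*Completed without error'
}
# Usage:  smartctl -l selftest /dev/sdb | selftest_passed && echo OK
```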
