Posts posted by jbuszkie

  1. 1 minute ago, landS said:

    Howdy folks

     

An odd behaviour has cropped up recently.  After some period of time, when I go to the WebUI, CrashPlan requires me to log in.

     

Is this just a change by CP to how the application is accessed from the (docker'ed) desktop, or should I be concerned that backups are being impacted?

     

    Thanks!   

     

I'm seeing this too..  I remember something being sent to me about them requiring more logins.

     

This is what I found in my e-mail:

    Quote

    Security of your organization’s data is paramount at Code42. With the recent increases in ransomware attacks on the internet, we have implemented product changes to help you better protect your data.

    Effective March 3, 2020, Code42 will require all CrashPlan® for Small Business customers to enter their password to access the CrashPlan for Small Business desktop app. This change will help to further ensure the security of your CrashPlan data.
     

     

  2. Some history...

     

I had an 8T drive fail (red ball) recently in a slot we'll call A.  I had a spare 8T unassigned device, so I swapped it in and it rebuilt fine.

I ran pre-clear on the "failed" drive in slot A and it failed in the zeroing phase..  OK, maybe it was a bad drive.

So I bought another 8T drive to keep as a spare.  I shucked it and threw it into slot B.  Slot B had held an old 4T drive that I had taken out of service for some reason.

I ran pre-clear on the new 8T drive in slot B and it ran fine.  No issues other than a couple of UDMA CRC errors.  It was only like 4 or 5..  So I scratched my head and continued.  I want to remove three 2T disks, so I put my "spare" in slot B into the array.  I moved everything from one of the 2T disks onto the 8T drive in slot B.

That ran fine and came up with maybe one or two more CRC errors.  I zeroed out that 2T drive and emptied the 2nd 2T drive onto the 8T in slot B.  That went fine...  but a couple more CRC errors.  So I'm thinking maybe slot B has a cable issue?

     

The first 2T was ready to be removed from the system, so I removed it, created a new config, and trusted parity.  That went fine.  I started a parity check and it was working fine, so I stopped it.  I shut down the array and was going to try the new disk in a different slot.  I moved the new 8T into slot A and removed the "failed" 8T.

I started the system and copied more files from the 2nd 2T disk that I wanted to empty.  After that was done, I noticed that the new disk in slot A had red-balled!!!  It had a bunch of errors (like 6k).  Shoot!  WTF!  So now I've put the new 8T back in slot B, and it's currently being rebuilt!  It's moving along at 7% with no disk errors.

It's up to 14 CRC errors, but I think that's what it was at when the rebuild started.

     

So do I have a bad drive or bad slot(s)?  The slots are in two different 5-bay Norco drive cages.

Trying to figure out how to proceed..  Looking to the experts here..

     

     

     

3. Is there a way to just back up the flash drive?  The appdata is (supposedly) getting backed up by CrashPlan, but the flash drive can no longer be backed up via CrashPlan.

So I'd like to use this to just back up the flash drive (then CrashPlan will back up the backup).  I don't really want to stop all the dockers every time I want to back up the flash drive.
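If all you need is the flash drive, a one-shot tar of /boot into a share that CrashPlan already watches gets you there without touching the dockers.  A minimal sketch -- the destination share is my assumption, and the demo uses temp directories standing in for /boot and the share so it can run anywhere:

```shell
# Sketch: archive the flash drive into a share CrashPlan already backs up.
# On the real server SRC would be /boot and DEST something like
# /mnt/user/Backup/flash (both assumptions); the demo uses temp dirs.
SRC=$(mktemp -d)                  # stand-in for /boot
DEST=$(mktemp -d)                 # stand-in for the backup share
echo "config" > "$SRC/go"         # fake flash content

STAMP=$(date +%Y%m%d)
tar -czf "$DEST/flash-backup-$STAMP.tar.gz" -C "$SRC" .
ls "$DEST"
```

A cron entry (or the User Scripts plugin) can run something like this nightly; the flash drive is only about 1GB, so keeping full dated copies is cheap.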

4. This Stack Overflow post was helpful.  I was able to kill the process this way.  We'll see if I can stop/restart the array gracefully.  I need to wait for a disk zeroing to finish first before I can try...

     

    Quote

All the docker start | restart | stop | rm --force | kill commands may not work if the container is stuck.  You can always restart the docker daemon.  However, if you have other containers running, that may not be an option.  What you can do is:

    
ps aux | grep <<container id>> | awk '{print $1, $2}'

The output contains:

<<user>> <<process id>>

    Then kill the process associated with the container like so:

    
    sudo kill -9 <<process id from above command>>

    That will kill the container and you can start a new container with the right image.

     
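The kill-by-PID pattern from the quote is easy to rehearse with a throwaway process before aiming it at a real container.  A sketch (the sleep stands in for the container's stuck process; with a real container, `docker inspect --format '{{.State.Pid}}' <container>` is another way to get the PID):

```shell
# Rehearse the kill-by-PID pattern on a harmless stand-in process.
sleep 300 &                   # stand-in for the stuck container process
PID=$!

kill -0 "$PID" && echo "running as PID $PID"   # confirm it exists

kill -9 "$PID"                # the same kill you'd aim at the container
wait "$PID" 2>/dev/null       # reap it; its exit status reflects the kill
echo "killed $PID"
```

After the `kill -9`, a `kill -0 "$PID"` probe fails, which is how you confirm the process (and with it, the container) is really gone.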

5. I have a docker container that won't stop.  I've tried manually killing it (docker kill containername) and I can't.

    I tried to kill the process in the container that was still running, but I got an error....

    My guess is I won't be able to shutdown/restart the array until the whole docker subsystem is happy.

     

    How can I kill/restart the whole docker processes?

     

    Thanks,

     

    Jim

6. As I looked back into my old e-mail..  I've been running Unraid for over 10 years now!!!  I upgraded to Pro back in April of 2009!  I can't find the original purchase..  but it must have been before that!  Anyway..  I'm still using the original flash drive I started with.  I do have a second key and drive..  (It's probably not up to date - must add that to the todo list!  And I'm not sure where it is, LOL)

     

When should I think about preemptively replacing the flash drive?  It's a 1GB drive that's about half full..

     

And big kudos to Tom and the Limetech guys for such a great product, and kudos for such a super great community!

     

    Jim

     

  7. 2 minutes ago, Squid said:

    Don't run that tool against all the drives or all the shares.  If it touches the appdata share, funky things may happen with your docker apps.  Hence why there's a docker safe new permissions tool right there also.  It will not allow you to run against appdata

    I didn't see the other one..  I'll have to look when the current one finishes.  I'm just running it on specific shares that got flagged.  So I should be safe?

     

    Thanks,

     

    Jim

8. I'm trying to empty a drive using unBALANCE and it warns me about file permissions..  I ran the extended test in Fix Common Problems and I get a TON of these:

    The following files / folders may not be accessible to the users allowed via each Share's SMB settings.  This is often caused by wrong permissions being used on new downloads / copies by CouchPotato, Sonarr, and the like:
    
    /mnt/user/Backup  nobody/users (99/100)  0770
    /mnt/user/Backup/acer_aspire  nobody/users (99/100)  0770
    /mnt/user/Backup/acer_aspire/post_factory.tib  nobody/users (99/100)  0660
    /mnt/user/Backup/acer_aspire/pre_install_image.tib  nobody/users (99/100)  0660
    /mnt/user/Backup/acer_aspire/Pre_office2.tib  nobody/users (99/100)  0660
    /mnt/user/Backup/acer_aspire/Pre_office.tib  nobody/users (99/100)  0660
    /mnt/user/Backup/Browser_bookmarks  nobody/users (99/100)  0770

Wasn't there a "FIX" button for this?  What does it want me to change the permissions to?
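For reference, the fix amounts to resetting everything under the share to nobody:users with open modes.  A hand-rolled sketch -- the symbolic chmod below is my understanding of what the New Permissions script applies (treat it as an assumption, and prefer the Docker Safe New Permissions tool on a real server, where you would also run `chown -R nobody:users` as root); the demo recreates the flagged layout in a temp tree so it runs anywhere:

```shell
# Recreate the flagged layout in a temp tree, then normalize it.
SHARE=$(mktemp -d)
mkdir -p "$SHARE/acer_aspire"
touch "$SHARE/acer_aspire/post_factory.tib"
chmod 0770 "$SHARE/acer_aspire"                      # flagged dir mode
chmod 0660 "$SHARE/acer_aspire/post_factory.tib"     # flagged file mode

# Normalize: copy the owner bits to group/other and re-add execute only
# on directories, so dirs end up 0777 and files 0666.
chmod -R u-x,go-rwx,go+u,ugo+X "$SHARE"

stat -c '%a %n' "$SHARE/acer_aspire" "$SHARE/acer_aspire/post_factory.tib"
```

The capital `X` is the trick: it grants execute only where something is a directory (or already executable), so data files stay non-executable.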

  9. Ok.. I'll reply to my own message! 🙂

I guess if I just click on the error thing, I can acknowledge the error.  But I'm still guessing it will live in the SMART report forever...

     

    Maybe I should open up the case and replace the cables in a couple of slots....

     

    Jim 

10. I have 4 drives that have low UDMA CRC error counts.  They are not really increasing.  Do I have to live with these errors forever?

[screenshot: SMART attributes showing the UDMA CRC error counts]

Unfortunately, one of these drives is brand new.  It's installed in a drive cage, so the connections inside haven't been touched in years.

     

    Is there no way to clear them out?  My guess is no...

     

    Jim

     

     

  11. 5 minutes ago, itimpi said:

This is probably the way to go if you want to remain parity protected throughout.

     

You have got that backwards :(   If you only have parity1 present, then you can reorder the disks without affecting parity.  This is not true if parity2 is present, as the calculations for parity2 take disk position as one of the inputs.

    Crap..  That was a typo..  It should have said "I thought I read somewhere that with 2 parity drives you can't shuffle disk numbers around?"  I will correct the original post! 🙂

12. I want to upgrade to 6.8.2.  I see some folks have had issues with some disk temps not showing up, but that's not a big deal to me.

I'm currently on 6.7.2.  Is there a way to upgrade to 6.8.2 from the GUI, or am I going to have to manually download and copy the files and reboot?  I like the fact that the GUI keeps track of the previous revision and can restore it that way.

13. OK.  I had a drive die on me recently.  Bummer, it was one of my newer 8T drives! (grrr...)  But I had a spare and I'm up and running again.

But it got me motivated to clean some things up and do some drive upgrades.

     

    Right now I have a single 8T parity drive.  I want to go with 2 parity drives.  That's the first upgrade.  I have a 10T precleared and ready to go.

I also have 3x 2T drives I'd like to get rid of.  I have a precleared 8T drive ready for that!

     

I'd like to remain as parity protected as possible.

     

Should I add the second parity drive first?  That way, when I replace one of the 2T's with the 8T, it can rebuild and I'm still protected if another drive dies while this is going on.  But when I empty the other two 2T drives and remove them from my system (shrinking the array using dd), will it have to rebuild the second parity anyway?  Even if I do the dd, zero out the other drive, and then do a trust parity?

     

Or do I stay single parity, add the new 8T to the array, copy everything off the 2T's to the 8T, and then zero out the 2T's one at a time, remove them, and trust parity?
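Either way, the removal trick is the same: once a disk is completely zeroed, it contributes nothing to parity, so it can be pulled with a New Config plus trust parity.  The zeroing itself is one dd against the emptied disk's md device -- /dev/md3 below is a made-up example, and the demo writes to a scratch file instead of a real disk so it can run anywhere:

```shell
# On the server the target would be the md device of the emptied disk, e.g.:
#   dd if=/dev/zero of=/dev/md3 bs=1M status=progress
# (device name is an example -- triple-check it before running!)
# Demo: zero a 64KB scratch file standing in for the disk.
DISK=$(mktemp)
dd if=/dev/urandom of="$DISK" bs=1024 count=64 2>/dev/null   # "old data"

dd if=/dev/zero of="$DISK" bs=1024 count=64 conv=notrunc 2>/dev/null

# Verify nothing but zero bytes remains.
[ "$(tr -d '\0' < "$DISK" | wc -c)" -eq 0 ] && echo "all zeros"
```

Writing through the md device (rather than the raw sdX device) is what keeps parity updated as the zeros go down, which is why the trust-parity step afterwards is safe.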

     

    I thought I read somewhere that with 2 parity drives you can't shuffle disk numbers around?

     

Long term, the plan is to add another 10T to replace the first 8T.  I also want to add another NVMe SSD cache drive.  But the NVMe takes up two SATA ports, so I have to consolidate some drives to free up ports.  Right now I only have 2 free SATA ports.  I like to keep one always free for a spare and another for a pre-clear slot.

     

    I feel like such a noob!  I don't usually muck with my unraid as it just works! (Thanks limetech!)  So I have to relearn every time I have to do something with the system! 🙂

     

    Thanks!

     

    Jim

14. It's been a while since I looked at it..  and I vaguely remember that in the past it was a no-no..  but can I live hot-plug my SATA drives in my Norco 5-bay enclosures?  I hate having to power down the server each time I want to swap some drives in and out.  It's not a big deal..  but if I can safely hot swap, it would save me a little time...  Also, I think I saw someone on YouTube do it..

     

    The searches here all come up with USB hot plug topics...

     

    Jim

  15. 2 hours ago, Djoss said:

I'm not aware of such functionality.

But you can do it yourself with the command "du -sh <path to directory>".

    Yeah...  The cloudberry guys said that it wasn't available too..

    What I need is a way to total random directories.  I can do it the hard way...  But I was hoping for something easier...
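Totaling random directories is actually one du invocation if you pass them all and add -c for a grand total.  A sketch (the demo directories and files are created on the fly so the example runs anywhere):

```shell
# -s: one summary line per argument, -c: append a grand total,
# -h: human-readable sizes.
BASE=$(mktemp -d)
mkdir -p "$BASE/movies" "$BASE/photos"
dd if=/dev/zero of="$BASE/movies/a.bin" bs=1024 count=100 2>/dev/null
dd if=/dev/zero of="$BASE/photos/b.bin" bs=1024 count=50 2>/dev/null

du -csh "$BASE/movies" "$BASE/photos"
```

The last line of the output is the combined size, labelled `total` -- handy for estimating what a per-GB backup selection will cost.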

  16. Hello..  Coming over from crashplan to test out..

    This is a gui question..  Since I'll have to pay per GB...   Is there a way to see how many GB there are selected in the 

    backup before it actually starts backing up?  I'm trying to gauge how much it will cost me..

     

    I did a test backup and it only told me the size after I started the backup...

     

    Thanks,

     

    Jim

  17. 2 hours ago, jbuszkie said:

    Man this is a real bummer...

    I can rename the extension and it seems like CP will back it up...

    So this will work for backups for computers that don't exist anymore. (Why do I need them??? stupid hoarder of backups! :-) )

    But for active backups, I'll have to do something  more creative...

    I'll have to try softlinked tib files to renamed extension files for active backups...

Well, the soft-linked files may work.  I just tried to validate an Acronis backup set using soft links, and Acronis validated the backup just fine.

We'll see if it will add to the incremental backup...  I will have to keep up with it, as every time the backup set updates, the update file will have to be renamed and then linked back if I want to be completely protected..

    What a pain...

     

    Maybe I will move to backblaze...   I'll have to see what I really *need* to be backed up...
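For the record, the rename-plus-symlink scheme looks like this (filenames are made up; the idea is that CrashPlan backs up the renamed copy, which no longer has the excluded .tib extension, while Acronis keeps finding its expected .tib name through the link):

```shell
DIR=$(mktemp -d)
echo "backup data" > "$DIR/laptop_full.tib"           # pretend Acronis archive

mv "$DIR/laptop_full.tib" "$DIR/laptop_full.tib.bak"  # CrashPlan backs this up
ln -s "laptop_full.tib.bak" "$DIR/laptop_full.tib"    # Acronis follows the link

cat "$DIR/laptop_full.tib"    # reading through the link yields the data
```

Using a relative link target keeps the pair working even if the directory is later moved; the catch remains that every new incremental file needs the same mv-then-ln treatment.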

     
