snowboardjoe

Members
  • Posts: 207

Everything posted by snowboardjoe

  1. Issue fixed. I don't think it could deal with a USB2 connection. Both drives kept hanging on me over and over again. Spun up some spare hardware where I could plug in both hard drives to a real SATA interface. Preclear blasted right through and is almost done with the post-read now.
  2. I triggered this plugin and it deleted what I thought was old data for my CrashPlan PRO. It wasn't. It was my local client configuration. Suggestions on how to get this working again? Is this a known issue?
  3. Was doing some maintenance and upgraded to 6.5.0 this evening. No problems; running smoothly there. Updated CrashPlan PRO to the latest version and that worked fine. Then I triggered a plugin I have, "CA Cleanup Appdata", thinking it was cleaning up an old, abandoned version of CP. I went to verify CP was still working and the GUI shows the local client as "Initial backup not started". What!? I think it just wiped the local client data. Grrrrr. I've stopped the container for now. Next steps? Can I readopt my previous client? I don't want to do a full backup of 12TB again.
  4. Tried to clear one of my new 6TB WD Red drives. This is a used drive I got off eBay. I'm out of SATA ports, so I had to use my USB connection to do this. It got 83% through the pre-clear read and then stopped. When I found it this morning, the drive was already spun down and idle. Nothing logged. Suggestions other than trying it again? It takes forever over USB (days) for a single pass. I'm going to start it again now, and I'm headed out of town for a few days. If it fails again, I may fire up a spare PC and use DBAN to verify the disk.
  5. Ok, had no idea that button was down there. Assumed the page refreshed that automatically with each load. All containers updated including CP and it's synchronizing now. Thanks for the quick help there. Just haven't run into that before.
  6. I'm currently running 4.9.0. I don't know what version it's trying to upgrade to. Attaching a snippet from the logs in the CP application itself (I don't know how to copy the contents from the Web UI). I don't know where the Tools > History is located or what you are referring to here.
  7. That was the first thing I looked at. It's already at the latest version.
  8. Getting an error from CP that my system has not backed up in 5 days. Went to the console and I have errors on the Web GUI stating, "CrashPlan PRO failed to apply an upgrade and will try again automatically in one hour". Appears the client keeps trying to do an internal upgrade and failing, and it has been doing this for days. It looks like it will never attempt another backup until it can get past this upgrade problem.
  9. I'm still trying to see if I can take over my existing 11TB of data. CPPro still shows the data is there after the migration, and I've been struggling to get my new client to attach to it to keep from spending days/weeks backing it up all over again. I went through the takeover steps, but it still wants to start over from scratch. When it first initialized, it showed all my shares as missing, and I'm guessing this was due to the path changes (using shares instead of storage and flash instead of boot). The instructions say to just reselect the shares (not remove the old ones), but it still insists on backing up everything all over again. Is there any way around this? Is there something I should be mapping differently to make it all work? I thought the client would have figured out that the data had moved and would not back it up again.
  10. Was starting to review the steps needed to migrate to PRO and was so happy to see this thread here. A few questions, though: 1. I'm backing up 11TB, and Code42 has already indicated it can't migrate anything over 5TB and will need to perform a full backup from scratch. I assume there is no need for me to migrate any data then? Or do I still need to copy "/mnt/user/appdata/CrashPlan" to CrashPlanPRO? 2. Long ago I had to modify the code/config to avoid deduplication by CP, as it bogged down multi-terabyte backups. Will I need to make that modification again with PRO? I need to find in my notes how I did this long ago. With dedupe turned on, my backup never would have finished, as it was too aggressive.
  11. I just expanded my system from 12TB to 18TB a few weeks ago by adding another SATA controller and two 3TB drives. This morning at 4:40am unRAID reported that it lost contact with those new drives. Syslog shows it keeps trying to hard reset the connection, but getting nowhere. It was a little odd that the main web GUI still showed the drives as green?!? I rebooted the system to see if it would clear the error. Now it comes up and complains that it can't reach those two new drives. Even tried powering down, waiting a minute, and firing it up again. Still no joy. Pretty sure that controller is just dead in the water (IOCrest SI-PEX40064). I guess I will head out to Fry's this evening to see if they have something so that I can get this back online quickly. I'm hoping that once I resolve the controller issue the drives will be back and things will be happy again. Will definitely run a parity scan once that is stable. I took a snapshot of the screen with the devices and their locations. Anything else I should be on the lookout for when bringing this back online?
  12. Ah, good point. I could also just ssh into the system and leave a running tail of the log open on my desktop. I'll try both of those out in case that crash happens again.
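Something like this is what I have in mind for the desktop tail. A sketch only ("tower" is a placeholder hostname), with the tee pattern also shown locally so it can be tried anywhere:

```shell
# Mirror the server's in-memory syslog to a file on the desktop, so a
# crash doesn't take the evidence with it. "tower" is a placeholder
# hostname; adjust to your setup. (Sketch -- not tested on unRAID.)
# ssh root@tower 'tail -f /var/log/syslog' | tee -a ~/unraid-syslog.txt

# The same tail-into-tee pattern, demonstrated on a local file:
printf 'line1\nline2\n' > /tmp/demo.log
tail -n 2 /tmp/demo.log | tee -a /tmp/demo-copy.log
```

tee both prints the lines and appends them to the copy, so you can watch live and still keep a record.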
  13. OK, I got things going now. It's still in the middle of synchronizing block information and it will take a while before I start seeing some real results. Here are some notes on my confusion and how I got things running.
  o Got Docker initialized fine, but it helps to read some of the unRAID information on how it works first.
  o Make sure you put your Docker container in the right place.
  o Do not use the instructions at the beginning of this thread; they're outdated as far as I can tell.
  o You will need the Community Applications plugin to get access to the CrashPlan containers (server and desktop client).
  o Verify you have good mappings to your devices (I had to map /boot to /boot to back up my flash drive).
  o Volume mappings will be different compared to your old CrashPlan configuration.
  o Add the new path(s) to your backup sets and let CrashPlan sort out the new mappings without having to back everything up over again (don't delete any old ones yet, even if they say they are "missing").
  o Once you have a complete backup you can remove the "missing" entries.
  o Mac: to access the client you need CoRD, connecting to port 3389 (don't use Remote Desktop Connection on Mac).
  Hope that helps some other new people.
  14. I had the same problem today with an unexpected server crash. I got the notice from another system plugged into the same UPS and it reported it lost contact with the apcups daemon on the unRAID server. So, I know there was not a power problem and the UPS did not fail since the other system was up and running fine. I created a separate thread to report the problem too. Hope a pattern emerges here.
  15. Had a crash today at 1:23pm PDT. I got the alert from another system that reported it lost contact with the apcups daemon, but it came back after a minute or two. I then got notification from unRAID that it was running a parity check. When I got home I verified the system rebooted today. I've read a few other threads here and looks like some people are having the same problem? I upgraded to 6.0.1 over the weekend, so I've not been on this version very long. Never had a crash before. I have Docker enabled, but I had the CrashPlan containers shutdown while I was still doing some work on them. No other customizations have been made to the system. Since syslog is written to memory, there's no way to get any information on why the crash happened. Wish there was a way to have logs sent somewhere else that was not so volatile.
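If unRAID's syslogd honors the classic syslog.conf format (an assumption on my part; I haven't tried this), forwarding everything to another box would be a one-line entry, e.g. with a listener at 192.168.1.10 (placeholder address):

```
*.*    @192.168.1.10
```

Since /etc lives in RAM on unRAID, the entry would need to be re-applied from the go script on each boot.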
  16. I'm still stuck myself and having a hard time finding documentation and information on this new setup. It finally dawned on me from this thread that I must use RDC over port 3389 to connect? I tried this, but all I get is a black screen. Tried restarting the CrashPlan Docker, but still nothing. No errors in the logs either. Any further suggestions on how to diagnose this problem? I used to be able to connect my Mac client by modifying the ui.properties file, but that has no effect and it always connects to localhost now. Any other suggestions on how to get this working? I guess I would prefer the RDC method if I can get that working.
  17. I just upgraded to unRAID 6.0.1 this evening. The main application I need to get running again is CrashPlan. I'm new to Docker here as well. I followed the directions from the first post in this thread:

docker run -d -h laffy --name=crashplan -v /mnt/user/crashplan:/config -v /mnt/user:/data -v /etc/localtime:/etc/localtime:ro -p 4242:4242 -p 4243:4243 gfjardim/crashplan

Watched it install the components successfully, but then got the following error:

root@laffy:~# docker logs crashplan
*** Running /etc/my_init.d/config.sh...
mv: cannot move '/etc/localtime.dpkg-new' to '/etc/localtime': Device or resource busy
*** Running /etc/rc.local...
*** Booting runit daemon...
*** Runit started as PID 57

So, I guess it's running, but I'm not sure. My Mac client is unable to connect to it. When I fire it up after modifying ~/Library/Application Support/Crashplan/ui.properties (as I've done many times in the past for years), the client keeps going back to my local Mac client. I have no idea why that's happening. Perhaps it defaults to that if it can't connect to the specified IP? I tried to review the settings in the container, but when I go to the Docker tab in the unRAID interface, I don't see any screens like the screenshots I'm seeing here. So, I have no idea what is wrong, and there is very little information to go on in troubleshooting this issue.

UPDATE: I removed the entire Docker configuration and started over again. This time I used the Community Applications plugin and got that much working. At least everything is installed properly now and the interface is working. I'm still unable to connect my Mac client, though, and that's baffling. The starting post of this thread probably needs a note that those instructions will not work for a new 6.0.1 installation.
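For anyone comparing notes, the ui.properties change I've used over the years looks like this (key names as I remember them from the old CrashPlan headless guides, so treat them as unverified; the IP is a placeholder for the unRAID box):

```
serviceHost=192.168.1.5
servicePort=4243
```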
  18. I reseated the cables and reorganized some of the wiring in case there were interference issues going on. My preclear finished for both drives, and the errors did pop up at random a few times during each run for the two drives. Preclear results were clean and passed (3 passes). I decided to press on with the drive replacements by stopping the array, changing the assignment for one drive, restarting, and letting it rebuild the first new disk from parity. That was successful last night, and I'm running another parity check now. There were no errors during the parity rebuild or the following check. So, I'm calling all clear at this point. I have one more drive to replace and will check the logs for any more errors after that. I'm guessing the preclear just drives the hard drive in a unique way that pushes the IOCrest card to its limits. If any more errors do show up, then I'll simply remove it and find an alternative. At this point, I think I'm good. It appears from the error that it could be a large variety of things: loose cable, bad cable, cable crosstalk, controller issues, controller/motherboard issue, power issue..., the list goes on. As long as the error is not coming up in normal operations, I'm pretty sure I'm good here.
  19. Running on v5.0 here, and have been for a long time. There is a preclear running on a new disk, but that's about it. This was a bit alarming today, as I use NFS between the Ubuntu VMs that access unRAID. While performing some updates on Plex I had it refresh the library, and it proceeded to remove tons of media from its database. When I logged into the VM to check the NFS mounts, the data for several subdirectories was unreadable, yet I could see it when logged into unRAID. After I listed the directories on the unRAID server, the NFS client could see them again. It's really weird and scary to see this kind of inconsistency from NFS. Here's a good example of what happened. A show that went missing showed up like this on the client:

morris@plex01:/unraid/shows/Battlestar Galactica (2003)$ ls
ls: cannot access Extras: No such file or directory
ls: cannot access Season 00: No such file or directory
ls: cannot access Season 01: No such file or directory
ls: cannot access Season 02: No such file or directory
ls: cannot access Season 03: No such file or directory
ls: cannot access Season 04: No such file or directory
Extras  Season 00  Season 01  Season 02  Season 03  Season 04

morris@plex01:/unraid/shows/Battlestar Galactica (2003)$ ls -l
ls: cannot access Extras: No such file or directory
ls: cannot access Season 00: No such file or directory
ls: cannot access Season 01: No such file or directory
ls: cannot access Season 02: No such file or directory
ls: cannot access Season 03: No such file or directory
ls: cannot access Season 04: No such file or directory
total 0
d??? ? ? ? ? ? Extras
d??? ? ? ? ? ? Season 00
d??? ? ? ? ? ? Season 01
d??? ? ? ? ? ? Season 02
d??? ? ? ? ? ? Season 03
d??? ? ? ? ? ? Season 04

I then listed that same directory on the unRAID server:

root@laffy:/mnt/user/shows/Battlestar Galactica (2003)# ls -l
total 7
drwxr-xr-x 1 nobody users 5800 2013-09-15 13:48 Extras/
drwxrwxrwx 1 nobody users  112 2014-02-09 03:57 Season\ 00/
drwxrwxrwx 1 nobody users   48 2014-02-09 04:48 Season\ 01/
drwxrwxrwx 1 nobody users  752 2014-02-06 06:32 Season\ 02/
drwxrwxrwx 1 nobody users  496 2014-02-07 06:46 Season\ 03/
drwxrwxrwx 1 nobody users  112 2014-02-07 08:20 Season\ 04/

I then go back to my NFS client to list the contents again:

morris@plex01:/unraid/shows/Battlestar Galactica (2003)$ ls -l
total 7
drwxr-xr-x 1 99 users 5800 Sep 15 13:48 Extras
drwxrwxrwx 1 99 users  112 Feb  9 03:57 Season 00
drwxrwxrwx 1 99 users   48 Feb  9 04:48 Season 01
drwxrwxrwx 1 99 users  752 Feb  6 06:32 Season 02
drwxrwxrwx 1 99 users  496 Feb  7 06:46 Season 03
drwxrwxrwx 1 99 users  112 Feb  7 08:20 Season 04

I'm finding many of my subdirectories in this state and I have no idea why. Everything is consistent now, but..., wow, what the heck is going on here?
  20. Still getting errors trickling in...

root@laffy:/var/log# grep exception syslog
Feb 22 12:35:09 laffy kernel: ata8.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
Feb 22 12:35:38 laffy kernel: ata8.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
Feb 22 12:36:20 laffy kernel: ata7.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
Feb 22 12:37:13 laffy kernel: ata8.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
Feb 22 12:38:03 laffy kernel: ata8.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
Feb 22 12:38:26 laffy kernel: ata7.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
Feb 22 12:39:33 laffy kernel: ata8.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
Feb 22 12:41:03 laffy kernel: ata8.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
Feb 22 12:43:06 laffy kernel: ata7.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
Feb 22 12:47:13 laffy kernel: ata8.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
Feb 22 13:47:12 laffy kernel: ata8.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
Feb 22 16:03:04 laffy kernel: ata7.00: exception Emask 0x0 SAct 0x1cf SErr 0x0 action 0x0
Feb 22 16:03:13 laffy kernel: ata7.00: exception Emask 0x0 SAct 0x7f SErr 0x0 action 0x0
Feb 22 16:47:13 laffy kernel: ata8.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
Feb 22 19:47:13 laffy kernel: ata8.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
Feb 23 08:31:32 laffy kernel: ata7.00: exception Emask 0x0 SAct 0xff SErr 0x0 action 0x0
Feb 23 08:31:35 laffy kernel: ata7.00: exception Emask 0x0 SAct 0xff SErr 0x0 action 0x0
Feb 23 08:32:18 laffy kernel: ata7.00: exception Emask 0x0 SAct 0x7fffffff SErr 0x0 action 0x0
Feb 23 08:32:22 laffy kernel: ata7.00: exception Emask 0x0 SAct 0x7fff0ff SErr 0x0 action 0x0
Feb 23 08:32:26 laffy kernel: ata7.00: exception Emask 0x0 SAct 0x3fffff SErr 0x0 action 0x0
Feb 23 08:32:30 laffy kernel: ata7.00: exception Emask 0x0 SAct 0x1fffff SErr 0x0 action 0x0
Feb 23 08:32:36 laffy kernel: ata7.00: exception Emask 0x0 SAct 0xfffff SErr 0x0 action 0x0
Feb 23 08:32:49 laffy kernel: ata7.00: exception Emask 0x0 SAct 0x7ffff SErr 0x0 action 0x0
Feb 23 08:32:52 laffy kernel: ata7.00: exception Emask 0x0 SAct 0x3ffff SErr 0x0 action 0x0
Feb 23 08:33:00 laffy kernel: ata7.00: exception Emask 0x0 SAct 0x1ffff SErr 0x0 action 0x0
Feb 23 08:33:03 laffy kernel: ata7.00: exception Emask 0x0 SAct 0x1ffff SErr 0x0 action 0x0
Feb 23 08:33:08 laffy kernel: ata7.00: exception Emask 0x0 SAct 0xffff SErr 0x0 action 0x0
Feb 23 08:33:12 laffy kernel: ata7.00: exception Emask 0x0 SAct 0x7fff SErr 0x0 action 0x0
Feb 23 08:33:16 laffy kernel: ata7.00: exception Emask 0x0 SAct 0x3fff SErr 0x0 action 0x0
Feb 23 08:33:20 laffy kernel: ata7.00: exception Emask 0x0 SAct 0x1fff SErr 0x0 action 0x0
Feb 23 08:33:25 laffy kernel: ata7.00: exception Emask 0x0 SAct 0xfff SErr 0x0 action 0x0
Feb 23 08:33:28 laffy kernel: ata7.00: exception Emask 0x0 SAct 0x7ff SErr 0x0 action 0x0
Feb 23 08:33:34 laffy kernel: ata7.00: exception Emask 0x0 SAct 0x3ff SErr 0x0 action 0x0
Feb 23 08:33:39 laffy kernel: ata7.00: exception Emask 0x0 SAct 0x1ff SErr 0x0 action 0x0
Feb 23 08:33:45 laffy kernel: ata7.00: exception Emask 0x0 SAct 0xff SErr 0x0 action 0x0
Feb 23 08:33:53 laffy kernel: ata7.00: exception Emask 0x0 SAct 0x7f SErr 0x0 action 0x0
Feb 23 08:34:00 laffy kernel: ata7.00: exception Emask 0x0 SAct 0x7f SErr 0x0 action 0x0
Feb 23 08:34:40 laffy kernel: ata7.00: exception Emask 0x0 SAct 0x7f SErr 0x0 action 0x0
Feb 23 08:35:01 laffy kernel: ata7.00: exception Emask 0x0 SAct 0x3f SErr 0x0 action 0x0
Feb 23 08:35:55 laffy kernel: ata7.00: exception Emask 0x0 SAct 0x3f SErr 0x0 action 0x0
Feb 23 08:36:02 laffy kernel: ata7.00: exception Emask 0x0 SAct 0x1f SErr 0x0 action 0x0
Feb 23 13:47:12 laffy kernel: ata8.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
Feb 23 13:47:23 laffy kernel: ata8.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
Feb 24 05:47:12 laffy kernel: ata8.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
Feb 24 10:47:12 laffy kernel: ata8.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
Feb 24 10:47:23 laffy kernel: ata8.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
Feb 24 14:47:13 laffy kernel: ata8.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen

This is very odd. ata7 is the one I'm still pre-clearing. ata8 has had no work since 22 Feb, so I'm baffled how I'm still getting errors there. What do ata7 and ata8 have in common? They're both on that SATA controller I just added. No other drives have ever reported this problem.
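To see at a glance which links are throwing errors, a quick tally of the grep output works. A sketch using a small inline sample so it runs anywhere (on the server the input would be /var/log/syslog; field 6 is the ataN.NN: token in these lines):

```shell
# Count exceptions per ATA link to spot a shared controller.
printf '%s\n' \
  'Feb 22 12:35:09 laffy kernel: ata8.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen' \
  'Feb 22 12:36:20 laffy kernel: ata7.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen' \
  'Feb 22 12:37:13 laffy kernel: ata8.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen' \
  > /tmp/syslog.sample
grep exception /tmp/syslog.sample | awk '{print $6}' | sort | uniq -c | sort -rn
```

If the top counts are all ports on the new card, that points at the controller rather than any one drive.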
  21. Right now "Force NCQ disabled" is set to "Yes". I've never adjusted the disk settings. Pre-clear is now on the last stage for the first disk, and there have been no errors since yesterday evening. Will see how the other passes go and then move on to the next disk. Memory is completely fine, so no memory exhaustion here (the biggest addon is CrashPlan at 300MB and very little beyond that).
  22. BIOS was updated about 6 months ago, so it's not that far out of date. Maybe the new controller I added? I'll see if there are any updates needed there. Could be there are some configuration settings that need to be tweaked on that new controller too. Regarding NCQ, are you saying that, looking at the logs, you can see NCQ is off? It sort of looks like it's turned on?
  23. I just installed two 3TB WD Reds in my existing unRAID system running 5.0. unRAID is up and running normally, so no issues there. As part of this addition I needed to add a new controller card, an IOCrest PCIe x1 4-port SATA 6G (SI-PEX40064). Listed the available drives to pre-clear and it looks good...

root@laffy:/boot# ./preclear_disk.sh -l
====================================1.13
Disks not assigned to the unRAID array
(potential candidates for clearing)
========================================
/dev/sdi = ata-WDC_WD30EFRX-68EUZN0_WD-WMC4N1954014
/dev/sdh = ata-WDC_WD30EFRX-68EUZN0_WD-WMC4N1975277

I executed a pre-clear in two different shells to get them both pre-cleared at the same time. In each window I used the following commands:

./preclear_disk.sh -A -c 3 /dev/sdh   (window 1)
./preclear_disk.sh -A -c 3 /dev/sdi   (window 2)

Both looked fine and were busy processing at about 125MB/s each. However, I happened to have a running tail on my syslog and started seeing this (full syslog since boot attached)...

Feb 22 12:38:03 laffy kernel: ata8.00: NCQ disabled due to excessive errors
Feb 22 12:38:03 laffy kernel: ata8.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
Feb 22 12:38:03 laffy kernel: ata8.00: failed command: IDENTIFY DEVICE
Feb 22 12:38:03 laffy kernel: ata8.00: cmd ec/00:01:00:00:00/00:00:00:00:00/00 tag 0 pio 512 in
Feb 22 12:38:03 laffy kernel: res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Feb 22 12:38:03 laffy kernel: ata8.00: status: { DRDY }
Feb 22 12:38:03 laffy kernel: ata8: hard resetting link
Feb 22 12:38:03 laffy kernel: ata8: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Feb 22 12:38:03 laffy kernel: ata8.00: configured for UDMA/133
Feb 22 12:38:03 laffy kernel: ata8: EH complete

I figured there was some contention going on with having two pre-clears running, so I stopped the second one. Those errors appear to have subsided except for this...

Feb 22 12:47:13 laffy kernel: ata8.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
Feb 22 12:47:13 laffy kernel: ata8.00: failed command: SMART
Feb 22 12:47:13 laffy kernel: ata8.00: cmd b0/d1:01:01:4f:c2/00:00:00:00:00/00 tag 0 pio 512 in
Feb 22 12:47:13 laffy kernel: res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Feb 22 12:47:13 laffy kernel: ata8.00: status: { DRDY }
Feb 22 12:47:13 laffy kernel: ata8: hard resetting link
Feb 22 12:47:13 laffy kernel: ata8: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Feb 22 12:47:13 laffy kernel: ata8.00: configured for UDMA/133
Feb 22 12:47:13 laffy kernel: ata8: EH complete

But it's been quiet for the last 16 minutes, so I'm assuming it has stabilized and pre-clear continues to chug along. Anything to worry about here? Bad idea to run two pre-clears in parallel?

syslog.20140222-1256.txt
  24. For capacity, I would spec a UPS where you don't push it past 50% of its limit. That allows plenty of shutdown time without significantly draining the batteries. CyberPower and APC are good brands to go with, but try to stick with models with AVR, which tend to have better electronics and be more reliable. As for pure sine wave: yes, many UPSs do generate a square sine wave, but only while running on battery. When on commercial power it delivers the same sine wave to your host. There are certain electronics that don't like this square sine wave, but most modern power supplies have no issues, and it's not long term (as in hours). I'll let some others from the UK suggest models available across the pond from us. Here is one that I found (UK versions look different than the US versions)... http://www.amazon.co.uk/CyberPower-Value-800EILCD-800VA-Interactive/dp/B00BUJCERC/ref=sr_1_2?ie=UTF8&qid=1392930467&sr=8-2&keywords=cyberpower+ups
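The 50% rule in numbers, using an assumed 180 W measured draw (example figure only):

```shell
# Size the UPS so the measured load stays under half its rating.
watts=180                      # example measured draw at the wall
min_ups_watts=$(( watts * 2 )) # the 50% headroom rule from above
echo "$min_ups_watts"          # -> 360 (minimum UPS wattage)
```

So a server pulling 180 W wants at least a 360 W unit; round up to the next common rating.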
  25. I've used APCs for the past 25 years, both at home and in business. The Back-UPS and Smart-UPS models were very reliable, and I made a point to change the batteries about every 3 years, which is reasonable. About 5+ years ago I purchased some of the compact/desktop units that had no display or controls; they looked like an engorged power strip. Those started to fail on me a year or two after purchase. New batteries did not help either. The electronics were inferior compared to the heavier-duty line. I switched to CyberPower and I've been pretty happy. They have good electronics and fit the budget better. I would still consider the non-desktop APC UPSs good products, but CyberPower is excellent competition IMHO. When I purchase UPSs I spec out models that will not exceed 50% of their capacity. Unless I need to extend the time, I configure attached computers to shut down after being on battery for just 5 minutes (if power ain't restored by then, it ain't comin' back for a long time). Lead-acid batteries are not intended to be drained, and a few cycles of that will definitely shorten their lives. Several have mentioned heat, which also plays a factor (our home sits at about 70°F year round here in Seattle).