Everything posted by tr0910

  1. Oops, after running for 10 minutes, we are still on Disk 1. I don't think it will ever move to Disk 2. I don't have all my disks set up in sequence: I have a Disk 1, Disk 3, Disk 5, Disk 8, and Disk 9. Is that the problem?

         Performance testing /dev/sda (Disk 1) at -1618340 GB (hit end of disk) (100%)
         Performance testing /dev/sda (Disk 1) at -1618350 GB (hit end of disk) (100%)
         Performance testing /dev/sda (Disk 1) at -1618360 GB (hit end of disk) (100%)
         Performance testing /dev/sda (Disk 1) at -1618370 GB (hit end of disk) (100%)
         Performance testing /dev/sda (Disk 1) at -1618380 GB (hit end of disk) (100%)
  2. Is the following true? Bitrot checking works best on static files, and not so well on working files that get edited frequently. If a text file has been edited in a text editor, the hash will fail on the next check, and you won't know whether it failed because of a simple edit or because of a deeper, darker problem in your system. The only way to make sure hash failures are not false alarms would be for the operating system to generate a new hash upon every save of the file (a sketch of the idea follows this post).
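     A minimal sketch of that idea (my own illustration, not how bunker or bitrot actually work; the paths are hypothetical): record the file's mtime alongside its hash, so a later mismatch can be attributed to an edit rather than to corruption.

         FILE=/mnt/disk1/notes.txt              # hypothetical path
         sha256sum "$FILE" > /tmp/hash.txt      # record the hash...
         stat -c %Y "$FILE" > /tmp/mtime.txt    # ...and the mtime

         # Later: a hash failure is only a bitrot suspect if the file
         # was NOT modified since the hash was taken.
         if ! sha256sum -c /tmp/hash.txt >/dev/null 2>&1; then
             if [ "$(stat -c %Y "$FILE")" -eq "$(cat /tmp/mtime.txt)" ]; then
                 echo "hash changed but mtime did not: possible bitrot"
             else
                 echo "file edited since hashing: generate a new hash"
             fi
         fi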
  3. I have been running the latest bunker on 6b14b and noticed this error on one run; subsequent runs are fine.

         bunker -a /mnt/disk3
         Scanning for new files... \
         awk: cmd. line:1: fatal: division by zero attempted
         Finished. Added 1 files.

     I'm also trying to understand the differences between bunker and jbartlett's bitrot. I find only these differences.
     Bunker can't, but bitrot can:
       • Recover lost+found files by matching the SHA key with an exported list: bitrot.sh --recover -p /mnt/disk1/lost+found -f /tmp/shakeys_disk1.txt
       • Verify SHA on a specific file: bitrot.sh -v -p /mnt/user/Documents -m modifiedword.doc
     Bitrot can't, but bunker can:
       • Check SHA keys from a user-defined input file, no path required: bunker -c -f /tmp/disk1_keys.txt
     Are there other differences I've missed? (A rough sketch of the recover idea follows this post.)
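     A rough sketch of the recover idea (my own illustration, not jbartlett's actual script; the key-file format and hash flavor are assumptions): hash each orphan in lost+found and look its key up in the exported list to get the original name back.

         KEYS=/tmp/shakeys_disk1.txt    # assumed format: <sha key>  <original path>
         for f in /mnt/disk1/lost+found/*; do
             [ -f "$f" ] || continue
             h=$(sha256sum "$f" | awk '{print $1}')
             orig=$(awk -v h="$h" '$1 == h {print $2}' "$KEYS")
             [ -n "$orig" ] && echo "$f -> $orig"    # a real script would rename/move here
         done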
  4. Presently doing 2×2TB RAID 0 as parity on one of those bargain Areca cards. It could definitely scale to 4×2TB this way. I haven't tried RAID 5 yet, but it should be doable. Check the recent Areca activity.
  5. Ugh, it looks like the service log has lots of errors like:

         [04.13.15 20:21:50.386 ERROR QPub-BackupMgr backup42.service.backup.BackupController] OutOfMemoryError occurred...RESTARTING!

     The zipped log file was too large to attach. I'll just up the memory size in run.conf and reboot the server unless you want to try something else (a sketch of the change follows this post).
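     A sketch of the change, based on the option string the service is actually running with (see the ps output in the next post); on this install run.conf sits under /boot/packages/crashplan-install/scripts:

         # run.conf: raise the service JVM's heap ceiling (was -Xmx512m).
         # The stock line carries more -D options; only -Xmx needs to change.
         SRV_JAVA_OPTS="-Dfile.encoding=UTF-8 -Dapp=CrashPlanService -Xms20m -Xmx1024m"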
  6. Here is the wider output and I've attached the correct file you referenced. I did get -Xmx512m increased to 1024m. Maybe I need to go higher?

         root@Server1:~# ps -ww -fp 24542
         UID        PID  PPID  C STIME TTY   TIME     CMD
         root     24542     1  1 08:26 ?     00:03:32 /usr/local/crashplan/jre/bin/java -Dfile.encoding=UTF-8 -Dapp=CrashPlanService -DappBaseName=CrashPlan -Xms20m -Xmx1024m -Djava.net.preferIPv4Stack=true -Dsun.net.inetaddr.ttl=300 -Dnetworkaddress.cache.ttl=300 -Dsun.net.inetaddr.negative.ttl=0 -Dnetworkaddress.cache.negative.ttl=0 -Dc42.native.md5.enabled=false -classpath /usr/local/crashplan/lib/com.backup42.desktop.jar:/usr/local/crashplan/lang com.backup42.service.CPService
         root@Server1:~# run.conf
  7. Do either of these give you what you want?

         root@Server1:~# ps l 24542
         F   UID   PID  PPID PRI  NI     VSZ    RSS WCHAN  STAT TTY  TIME COMMAND
         0     0 24542     1  39  19 1279948 124444 futex_ SNl  ?    3:10 /usr/local/
         root@Server1:~#
         root@Server1:~# ps -F 24542
         UID        PID  PPID  C     SZ    RSS PSR STIME TTY  STAT TIME CMD
         root     24542     1  1 319987 124476   0 08:26 ?    SNl  3:11 /usr/local/cr
         root@Server1:~#

     Is this the right file (see attached run.conf) from /boot/packages/crashplan-install/scripts? Wrong file noted, removed.
  8. I had reduced the backup size from 1,341 GB to 907 GB, and from 221,000 files to 142,000 files. That didn't help. I just now changed the compression option to none and deduplication to minimal; we'll see if that helps. I have your Docker running on 6beta14b on a test server, and I will have 16 GB RAM once I change over to v6. I wonder if your Docker version will have more scalability than the v5 method? Do you know of any users of your Docker with very large backups?
  9. I have had CrashPlan running for over a year just fine, but in the last 3 months it has been making my server groan, and it's not backing up any new files. It just sits trying to back up, but no new files get sent to CrashPlan Central, and it has about 1,000 new files to back up. I have 4 GB RAM in the v5.05 server, and it was installed per: https://lime-technology.com/wiki/index.php/CrashPlan The memory available to CrashPlan has been increased via: http://support.code42.com/CrashPlan/Latest/Troubleshooting/Adjusting_CrashPlan_Settings_For_Memory_Usage_With_Large_Backups Here is top reporting CrashPlan taking 144% of the CPU. It's not always that large, but it's always around 100.

         Tasks: 139 total,   1 running, 138 sleeping,   0 stopped,   0 zombie
         Cpu(s):  0.0%us,  0.8%sy, 35.0%ni, 64.3%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
         Mem:   4136248k total,  3076572k used,  1059676k free,   265212k buffers
         Swap:        0k total,        0k used,        0k free,  2158016k cached

           PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
         12328 root      39  19 1200m  73m 9.9m S  144  1.8  0:05.04 java
         12283 root      20   0  2472 1004  756 R    1  0.0  0:00.03 top
            10 root      20   0     0    0    0 S    0  0.0  0:35.33 rcu_sched

     I am soon ready to move it to v6 and Docker, but I want to get it working 100% before I cut over to v6.
  10. Are both of those motherboards "Tams" data center pulls? I have both the Intel and the AMD versions, though I'm only running v5. And I don't see any difference in parity-check performance compared with this AMD server.
  11. What is it about Docker and AMD that you don't like? "I've just found that on my H8DME-2 (AMD-based) motherboard, parity check (not sync) is going ridiculously slow. All things being equal, the X7SBE (Intel-based) motherboard provides a 2-2.5× faster parity check. I blame this on a kernel-level inability to properly (re)distribute load between the CPUs/CPU cores." I've just started a non-correcting parity check on my AMD server running a few Dockers, and it is running at between 110 and 120 MB/s. This is a bit slower than the initial parity generation, but not the kind of numbers you are referring to. Shutting off all the Dockers boosts it by about 10 MB/s, but depending on how hard the Dockers are working, the parity check will slow down; Intel will too. The only difference I notice between my Sandy Bridge Intel Xeon server and this one is that the AMD draws about 30 watts more power. The Dockers are Transmission, Plex, OwnCloud, Tonido, and MariaDB. It has an Areca 1280 with 2×2TB parity and 9 other 3TB and 4TB drives passed through; nothing is connected to the motherboard SATA ports at present.
      Other specs:
        Motherboard: Gigabyte GA-880GA-UD3H
        CPU: AMD Phenom II X6 1055T @ 2800 MHz
        Memory: 16384 MB (max. installable capacity 16 GB)
        Network: eth0: 1000 Mb/s, full duplex
  12. "Screen -d" should detach from one terminal session and "screen -r" to reattach from another. This assumes you are using screen which allows pre clears to keep running if the terminal session is lost. Check out screen.
  13. A new drive has successfully rebuilt as Drive 20, and now we try a parity sync and see what happens. Will it red-ball? I will try your reseating suggestions, and I will order a new power supply for good measure once we have the results from the parity check.
  14. "I thought he moved disk20 from one row to another. I am pretty sure it is the same disk." No, they are different disks. A total of 4 different 3TB drives have been used in this role, and all have red-balled. The power supply is an Antec Neo Eco 620 that has served well for over 2 years powering this array of 13 (2TB and 3TB) disks. I have had some other strange things going on with this server lately as well. Every once in a while the beeper on the motherboard goes crazy while the server is running, with a DEE/DAH - DEE/DAH sound; this is the CPU overheat warning on the X9SCM motherboard. In the last 6 months it has come on at every boot, and randomly thereafter (the CPU fan is running normally and no dust has built up). The server has a Xeon 1220 and 4 GB RAM. I have just pulled all the plugins and add-ons off the server, rebooted, and am rebuilding onto the 5th disk now. I will then do a parity check tomorrow naked (no plugins) to eliminate that possibility.
  15. No power splitters are in use. The last rebuild moved the disk to a completely different row on the 4224, so a different breakout cable would have been used.
  16. Wow, you are fast. I just attached the syslog. Yes, I thought cabling too, but a different disk and a different slot for the rebuild should have eliminated that, don't you think? I'll post a SMART report tomorrow when I am at that location.
  17. I then rebuilt disk20 without issue onto a spare disk in a different slot in the Norco 4224, and the server runs fine until I do another parity check. This has happened 4 times; I've included a syslog that shows the last 2 occurrences. Since a new disk and a new slot were used, I can't understand the cause of this issue. Thankfully I have a backup of disk20, but what is going on here? (unRAID 5.05 on a Supermicro X9SCM with two 1015 disk controllers) syslog.zip
  18. Is HTTPS possible? TJ, what is your preferred security model?
  19. Has anybody tried it and OwnCloud to the point that they can compare and contrast the two? What I see so far: OwnCloud supports more granular user-based management, while Tonido seems to be a simpler product, easier to set up and access remotely. But I'm sure there are many more differences.
  20. I agree. I don't mind spending money, but I prefer hundreds, not thousands. Are you using the enterprise version of SynaMan? It looks like their Pro version is fully featured for $99.
  21. I am playing with OwnCloud right now and would be interested in your opinion; it sounds like you are familiar with the pros and cons. I am attempting to set up a global sharing server for my business. I need security and business features. Some replication will be required, but mainly just secure global networking that I can control very carefully and log completely. Drive mapping will be required for Windows and Mac clients. Other levels of security, like VPN, will be deployed as necessary.
  22. I feel dense. Like this? I'm still being given the red no-go signs...
  23. On the Docker setup page? Here is what mine looks like.