
landS

Members
  • Posts: 824
  • Joined
  • Last visited

Everything posted by landS

  1. Well... despite hoping to no longer need the command line, I find myself in the middle of a file migration for a ReiserFS-to-XFS swap on all disks, and unBalance is not cooperating... First - great plugin! Second - how the heck do we resume screen after closing the browser window? I believe the old command, now not working, was: screen -r Thanks!
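For anyone else hitting this, a minimal sketch of reattaching a detached screen session (the session name `1234.unbalance` below is hypothetical; `screen -ls` shows the real ones):

```shell
# List sessions; prints "No Sockets found" when none exist
screen -ls || true
# With exactly one detached session, a plain reattach works:
#   screen -r
# With several, pick one by the PID/name shown in `screen -ls`:
#   screen -r 1234.unbalance
# If the session still shows "Attached" (browser closed uncleanly),
# detach it from the dead terminal and reattach here:
#   screen -rd 1234.unbalance
```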
  2. On my high-power server, just re-built and re-exported - this took less than 1 minute for 3 nearly full 4 TB disks... likely no need for earlier-version users to manually create the file. Now all green checks, very nice!
  3. Fantastic info here - also clears up a bit of ignorance on my part. Good idea, but I need to work out how to do this in a reliable way... perhaps write a CSV file to the flash that is read each time the GUI page is accessed? Disk#, InitialBuild, InitialExport. 3, 4, 5 don't necessarily have anything to do with converting, depending on your specific situation. I was able to convert all my data disks to XFS without removing any drives or rebuilding parity, just by moving things off each drive and onto others with free space, then formatting the empty drive, then repeating with the others. There is a lot in that first thread you linked. Cheers, I am going through these links now! LOTS of info here! I do not have enough room on the current array to free up a disk, so I will need to add a fresh XFS disk to the array in order to move files. I do not want to leave the new disk in the array, as it will have a home as a second parity drive (hopefully soon).
  4. Nope... sorry for the poorly worded panic posting. I ran the File Integrity plugin on a machine with all ReiserFS disks. 1 - Should I kill the plugin, convert to XFS, then rerun the plugin? 2 - Any risk of data loss on the ReiserFS disks now that the plugin has been run?
  5. POO! My primary/low-power server is still on ReiserFS... I was hoping to keep it ReiserFS, as the backup server uses XFS and it is nice knowing that the stable ReiserFS recovery tools exist. Perhaps THIS is what locked up the server, and not the 100% CPU utilization when trying 3 disks at once? Any risk of DATA loss with ReiserFS now? I just finished triple pre-clearing another 4TB HGST for the hoped-for dual parity. Would the advised course of action be: 1 - Disable automatic checking 2 - Switch each disk's filesystem to XFS as per http://lime-technology.com/forum/index.php?topic=37490.msg346739#msg346739 ... only using unBalance rather than screen rsync http://lime-technology.com/forum/index.php?topic=39707.0 3 - Remove the empty disk from the array 4 - New Config 5 - Build fresh parity 6 - Redo the initial integrity build per disk Thanks
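The copy step in the linked conversion procedure boils down to moving everything off the ReiserFS disk onto the empty XFS disk and verifying before formatting the source. A self-contained sketch using throwaway directories in place of the real /mnt/diskN paths (the linked thread uses rsync; `cp -a` is shown here only so the demo runs anywhere):

```shell
# Stand-ins for the ReiserFS source disk and the freshly formatted XFS target
SRC=$(mktemp -d)   # e.g. /mnt/disk1 on the real server
DST=$(mktemp -d)   # e.g. /mnt/disk5 on the real server
echo "some data" > "$SRC/example.file"

# Copy everything off the source disk
# (on the server, the thread's rsync equivalent would be along the lines of
#  rsync -avX /mnt/disk1/ /mnt/disk5/ run inside a screen session)
cp -a "$SRC/." "$DST/"

# Verify the copy before formatting the now-empty source disk
diff -rq "$SRC" "$DST" && echo "copy verified"
```

Only after the verify passes would the source disk be formatted and the next disk started.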
  6. Again, fantastic plugin Bonienl! My request is based on the need to do each disk separately due to the slow Atom CPU... each 4TB disk takes about 20 hours to complete. If I had more disks, remembering which one I was on could be a bit bothersome. I am also thinking of the flag being a nice feature for when I add a disk in the future and, 2 months later, wonder if I did the initial build on the disk or not.
  7. Working great! Thank you. SMALL feature request for future bumps... a flag indicating an initial build and a flag indicating an export has been accomplished, per disk.
  8. If you started all three disks simultaneously, it can put a full load on your processor. Alternatively you can build one disk at a time to lower the load, but it will take longer to complete all disks. Perhaps adding a nice/ionice tunable to the process to keep it from dragging down the system? 2016.01.04a, running on a single disk: takes 67% of the dual-core hyperthreaded CPU with no other tasks currently running. So far, so good.
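A sketch of the nice/ionice idea. Here `md5sum` merely stands in for whatever hashing command the plugin actually invokes (an assumption on my part), and the file path is just a placeholder:

```shell
# Lowest CPU priority: the hash pass only gets cycles nothing else wants
nice -n 19 md5sum /etc/hostname

# Where util-linux ionice is available, demote the I/O class to idle too:
#   ionice -c 3 nice -n 19 md5sum /mnt/disk1/some/large.file
```

With `nice -n 19` the scheduler deprioritizes the hashing process, so a parity check or media stream running at normal priority should no longer be starved by it.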
  9. Just to confirm: after the initial build & export, hashing of new or date-modified files is done automatically on all disks (daily) with no configuration required, and the scheduled disk hash rechecks all the files, disk by disk, for potential corruption... Will this use more than 1 thread per disk? It appears that an Atom D525 - 1.8 GHz dual core - can handle 1 disk check plus 1 additional task (writing data, streaming data, or a parity check) before being overloaded and locking up. Cheers,
  10. Really excited for this - initial check nearly done on my i5 server (3x 4 TB data disks). Is it normal for a lower-spec'd server to become fully unresponsive during the initial check (specs in signature)?
  11. Last Update: 2015 Dec 21
      Version: 6.1.6
      Original Build: Spring 2014
      System Name: tower2
      Case: Fractal Design Define R3
      CPU: Intel i5 2500 - VT-d capable
      Motherboard: Asus P8H77-V LE
      RAM: Corsair Vengeance LP 16 GB DDR3
      PSU: SeaSonic X Series X650 Gold
      HBA/SAS Card: Supermicro AOC-SASLP-MV8 flashed to Firmware_3.1.0.21 - data drives only
      Drives: 4x 4 TB HGST RAID NAS 7200 rpm; 1x 500 GB SSD EVO cache - on mobo
      GPU: XFX AMD HD 7750 Single Slot FX-775A-ZNP4 - tested for Windows 8.1 XEN pass-through, currently out of machine, have yet to test KVM. Virtualized server with passthrough: http://lime-technology.com/forum/index.php?topic=33396.0
      PCI: Rosewill NEC 4+1 internal USB RC-101 - chipset NEC 720101
      PCI: Syba SD-SATA-4P flashed to b5500 BIOS (IDE), SATA 4-port for ODD - chipset Sil3114
      PCIe 1: StarTech SATA 2-port for ODD - chipset ASM1062
      NIC(s): Realtek 8169-based on-board NIC (bridge for all VMs)
      Plugins: Powerdown Package, Dynamix Cache Dirs, Dynamix Active Streams, Community Applications, Preclear Disks
      Dockers: Crashplan, Crashplan GUI, Guacamole, Unifi
  12. THANK YOU! LJM42 & gfjardim. The backup itself has been running trouble-free, but I have been checking this thread daily as the need to add 2 new folders was exploding!
  13. thanks for the great work! This functionality is FANTASTIC
  14. @dlandon - Great plugin but would it be possible to issue updates via the Plugin 'update' button as opposed to remove/reinstall? Thanks!
  15. I LOVE this plugin! Thanks for the fantastic work. Similar to the preview status... any chance a SMART attributes snapshot icon/popup can be added?
  16. Dockers should not be affected by the changes from 6.0 to 6.1, only some plugins. Just updated from 6.0.1 to 6.1.2... it forced me into an update of Crashplan to 4.3 from 4.2. Great work here! Thanks. I have the advanced extra parameter in both the Crashplan and desktop dockers... is this correct? It appears to be working.
  17. I have both Crashplan and Crashplan-Desktop working flawlessly on 6.0.1 Any reason to NOT upgrade to 6.1/6.1.1? Thanks!
  18. If your Crashplan is writing when you wake the Mac (likely because you are pointing to the Mac as one of the sources), then you will have at least 1 disk spin up to accommodate the write. If your Crashplan is scanning through all the files to back up, to read modified dates/created dates/etc., then this should help. When using cache_dirs: my Crashplan is set up to only check every 6 hours. My cache disk spins up when I write something out (mover runs once a day), and my cache disk spins up (when files are on it) when Crashplan hits a 6-hour mark. When not using cache_dirs, all my disks spin up every 6 hours.
  19. Install Dynamix Cache Directories / Cache_dirs
  20. note... after the 4.2 GUI upgrade I had to reset the 0.0.0.0 in the XML. Works great now
  21. Yup, I messed this one up. My guess is this has more to do with messing up the GUID than with moving from 4.xx back to 3.xx. Not really wanting to prove out chicken-or-egg ATM. Do you accept donations? The Crashplan dockers are the most useful expanded functionality in Unraid 6 for me. My Windows machine is almost completely defunct. I still have one for firing up a Steam game stream that doesn't run under Wine/Mono. Heck, even my .NET programming/compiling is now done in Ubuntu. IF this happens again I will know to swap over (for as long as the OS is needed for other duties). This would be a cool feature.
  22. Same problem as others: it looks like when an update is found on the Docker page, clicking upgrade moves the Crashplan version to 4.2. If you uninstall the Crashplan docker and re-install 3.7, this wipes out your saved cloud files and starts the backup process from the beginning. If you restart your server and manually start the docker, it appears to force the update if an update is available. app.log in the Crashplan docker folder: CPVERSION = 4.2.0 - 1425276000420 (2015-03-02T06:00:00:420+0000) - Build: 61. How do we obtain a 4.2 GUI Mate docker? ***edit, decreased annoying font size