Posts posted by Rebel

  1. Hi All,

    I currently have a separate Proxmox server that I'd like to power off for a while (UK electricity prices are a killer and going up again in April, so I'm trying to consolidate). I have a few quite important VMs that I want to keep running but don't want to fully migrate into Unraid's KVM engine (for a few reasons). So how practical would it be to back up the VMs from the old machine, spin up a new cache pool with a pair of mirrored SSDs for the VM storage, and run Proxmox as a guest inside Unraid? (Quick nested-virtualisation check sketched below.)

    I have no worries about CPU capacity, and I think I can squeeze the RAM down to fit with some rationalisation between the existing Proxmox VMs and what I'm moving over. If not, I can transplant a couple of memory sticks, although I'd take a performance hit on the motherboard by doing so (and the other RAM sticks are smaller). Thoughts?
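
    A rough sketch of what I'd check on the Unraid host first (this assumes an Intel CPU; these are the generic KVM knobs rather than anything Unraid-specific):

        # is nested virtualisation already on? (Y or 1 means yes)
        cat /sys/module/kvm_intel/parameters/nested

        # if not, reload the module with nesting enabled (AMD hosts use kvm_amd)
        modprobe -r kvm_intel
        modprobe kvm_intel nested=1

        # set the VM's CPU mode to Host Passthrough in the Unraid template, then
        # check from inside the Proxmox guest that the vmx/svm flag is visible
        egrep -c '(vmx|svm)' /proc/cpuinfo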

  2. 16 hours ago, FDM80 said:

    I was fiddling with my Plex container the other day and noticed that the template did in fact change. The part of the template with the directions about switching to advanced view to add the extra parameter, etc. is no longer present. The NVIDIA_VISIBLE_DEVICES environment variable field now needs to be added manually.

    I will try adding it as a manual variable; I just wanted to make sure I wasn't losing my marbles.
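
    For reference, this is roughly what the manual setup boils down to, shown as the docker run equivalent of the template fields. The image name and GPU UUID here are placeholders, not taken from the thread:

        # list the GPUs the Unraid Nvidia build can see and note the UUID
        nvidia-smi -L

        # Extra Parameters: --runtime=nvidia, plus the two variables below
        docker run -d --name=plex \
          --runtime=nvidia \
          -e NVIDIA_VISIBLE_DEVICES=GPU-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx \
          -e NVIDIA_DRIVER_CAPABILITIES=all \
          linuxserver/plex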

  3. Hi,

      I installed the container using the app install (having migrated from the old CrashPlan Home plugin that stopped working last week). It starts up, but whenever I try to click the "replace existing" option to adopt the backup, it just goes back to the first screen after login.

     

    Running Unraid 6.6.0

     

    Edit: I've tried the built-in console on Firefox and Chrome, and connecting via VNC directly.

     

    (Screenshot attached: 2018-10-03 00_51_29-CrashPlan for Small Business.png)

  4. On 21/02/2017 at 7:59 PM, John_M said:

    External caddies? No free interfaces? How are they connected - USB? It looks like the Preclear plugin is seeing 3 - 2.2 = 0.8 TB where 2.2 TB is a known limitation. If you're using a USB connection it's probably a limitation of the bridge chip in the caddy. Do you have any more information on this caddy?

    Yeah, it's connected via a two-port USB caddy. It's interesting, as I've had large disks in there before, though not on the NAS, and I honestly can't remember if I'd tried a disk this big. All the SATA ports are in use, as these disks were purchased as upgrades / spares and I was planning on preclearing them before putting them in the cupboard for when the oldest disks die. (There's a quick sum on the 2.2 TB limit at the end of this post.)

     

     

    On 21/02/2017 at 8:19 PM, jonp said:

    This really belongs in plugin support as the pre-clear plugin you are referencing isn't built into the OS natively.

    Sorry, you are right; I've been using it for so long I forgot it wasn't a native v6 tool.
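
    For anyone curious, the sum behind that 2.2 TB limit, assuming the caddy's bridge chip only handles 32-bit LBA with 512-byte sectors:

        # 32-bit LBA ceiling: 2^32 sectors x 512 bytes per sector
        echo $(( 2**32 * 512 ))                    # 2199023255552 bytes, ~2.2 TB

        # what a 3 TB disk looks like once the sector count wraps around
        echo $(( 3000592982016 - 2199023255552 ))  # 801569726464 bytes, ~0.8 TB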

  5. Just acquired some 3 TB disks, but the Preclear script only sees them as 806 GB. Anyone got any insights?

    Wondering if it's because they're in an external caddy (no free interfaces; these disks are meant as upgrades / future spare replacements).

     

    Drive information (from smartctl):
    Model family: Toshiba 3.5" MG03ACAxxx(Y) Enterprise HDD
    Device model: TOSHIBA MG03ACA300
    Serial number: 25FEKEVKF
    LU WWN device id: 5 000039 61bd81414
    Firmware version: FL1A
    User capacity: 3,000,592,982,016 bytes [3.00 TB]
    Sector size: 512 bytes logical/physical
    Rotation rate: 7200 rpm
    Form factor: 3.5 inches
    Device: In smartctl database [for details use: -P show]
    ATA version: ATA8-ACS (minor revision not indicated)
    SATA version: SATA 3.0, 6.0 Gb/s (current: 1.5 Gb/s)
    Local time: Tue Feb 21 19:50:34 2017 GMT
    SMART support: Available - device has SMART capability.
    SMART support: Enabled
    SMART overall-health: Passed

     

     

    (Screenshot attached: PDScript.PNG)

  6. Some recent Unraid updates had significant changes that required this of most previously installed dockers.

     

    Tell me about it. A couple of weeks ago I had to rebuild CouchPotato for what seems like the 20th time after the DB got corrupted AGAIN following an update (apparently DB corruption is an issue with it anyway, even before you factor in running it as a Docker container on Unraid :( )

  7. How big are your backup sets? The default is set to a gig of RAM, which as a rule of thumb covers about 1 TB of backups / 1 million files. (A rough sketch of how to raise the Java heap is at the end of this post.)

     

    http://crashplan.probackup.nl/remote-backup/support/q/keeps-stopping-and-starting.en.html

     

    Thanks for the idea! Yes, it seems it was that! After adding more RAM to Crashplan it seems to work fine again! Thank you!! :)

     

    My pleasure. I've got 10+ years of DSLR photos covering several TB, and I hit the Java memory limit long ago (actually before migrating over to Docker).

     

    I gave mine 4gb of ram and nothing :( I still get the same error as I did before.

    Not wanting to be THAT guy, but have you tried removing the container and the appdata files and just starting the config from scratch? You can adopt the old PC from the cloud and it will pick up where it left off, so it's not a total loss (again, you probably knew that, but I'm just covering bases).
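
    In case it helps anyone else, this is the sort of thing I mean by giving CrashPlan more RAM. It's only a rough sketch: the container name and the run.conf path are assumptions and differ between images:

        # open a shell inside the CrashPlan container
        docker exec -it CrashPlan bash

        # the engine's Java heap is the -Xmx value in run.conf
        grep Xmx /usr/local/crashplan/bin/run.conf

        # raise it from the 1 GB default to 4 GB, then restart the container
        sed -i 's/-Xmx1024m/-Xmx4096m/' /usr/local/crashplan/bin/run.conf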

     

  8. Just an insight into how I do it in the house here: we use a common "backup" share, but everyone uses CrashPlan (free for PC-to-PC use) and everyone has a different archive key, so everything is encrypted in such a way that only the data owner can see / recover their files.

     

    It was a simple choice for me, though, as I run CrashPlan on Unraid anyway; I carry a subscription to back up the Unraid servers to their cloud, covering 10+ years of DSLR photos, videos I've made, and plain work-file backups as an offsite copy.
