Posts posted by Rebel
-
16 hours ago, FDM80 said:
I was fiddling with my plex container the other day and noticed that the template did in fact change. The part of the template with the directions about switching to advanced view to add the extra parameter, etc. is no longer present. The NVIDIA_VISIBLE_DEVICES environment variable field now needs to be added manually.
I will try to add it as a manual variable, I just wanted to make sure I wasn't losing my marbles.
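For anyone hitting the same template change, here is a rough sketch of what the manual variable amounts to on the docker command line (container and image names are just examples, not the actual template; in the unraid UI it's "Add another Path, Port, Variable or Device" with the key NVIDIA_VISIBLE_DEVICES):

```shell
# Hypothetical invocation -- adjust image/name/options for your own install.
docker run -d --name plex \
  --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES=all \
  -e NVIDIA_DRIVER_CAPABILITIES=compute,video,utility \
  linuxserver/plex
```

You can also set NVIDIA_VISIBLE_DEVICES to a specific GPU UUID from `nvidia-smi -L` instead of `all` if you have more than one card.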
-
20 hours ago, Djoss said:
This is a known issue with the latest version. See https://github.com/jlesage/docker-crashplan-pro/issues/134#issuecomment-425216067 for the workaround.
Thank you sir, you are a god amongst us mere mortals.
Disappointing that Code42 just throws that out and doesn't fix it; we can't be the only people in the world using custom keys.
-
Hi,
I installed the container via the app install (having migrated from the old home plug-in that stopped working last week). It starts up, but whenever I click the "replace existing" option to adopt the backup it just goes back to the first screen after login.
Running Unraid 6.6.0.
Edit: I've tried the built-in console in both Firefox and Chrome, and VNC directly.
-
On 21/02/2017 at 7:59 PM, John_M said:
External caddies? No free interfaces? How are they connected - USB? It looks like the Preclear plugin is seeing 3 - 2.2 = 0.8 TB where 2.2 TB is a known limitation. If you're using a USB connection it's probably a limitation of the bridge chip in the caddy. Do you have any more information on this caddy?
Yeah, it's connected via a 2-port USB caddy. It's interesting, as I've had large disks in there before, though not on the NAS, and honestly I can't remember if I'd tried a disk this big. All the SATA ports are in use; these disks were purchased as upgrades / spares and I was planning on preclearing them before putting them in the cupboard for when the oldest disks die.
On 21/02/2017 at 8:19 PM, jonp said:
This really belongs in plugin support as the pre-clear plugin you are referencing isn't built into the OS natively.
Sorry, you are right. I've been using it for so long I forgot that it wasn't a native v6 tool.
-
Just acquired some 3TB disks, but preclear disk only sees them as 806G; anyone got any insights?
Wondering if it's because they are in an external caddy (no free interfaces; these disks are for upgrades / future spare replacements).
Title Information
Model family:     Toshiba 3.5" MG03ACAxxx(Y) Enterprise HDD
Device model:     TOSHIBA MG03ACA300
Serial number:    25FEKEVKF
LU WWN device id: 5 000039 61bd81414
Firmware version: FL1A
User capacity:    3,000,592,982,016 bytes [3.00 TB]
Sector size:      512 bytes logical/physical
Rotation rate:    7200 rpm
Form factor:      3.5 inches
Device:           In smartctl database [for details use: -P show]
ATA version:      ATA8-ACS (minor revision not indicated)
SATA version:     SATA 3.0, 6.0 Gb/s (current: 1.5 Gb/s)
Local time:       Tue Feb 21 19:50:34 2017 GMT
SMART support:    Available - device has SMART capability.
SMART support:    Enabled
SMART overall-health: Passed
-
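For what it's worth, the 2.2 TB figure John_M mentions is exactly the 32-bit LBA wrap with 512-byte sectors, and the leftover lines up with what preclear is seeing. A quick sanity check using plain shell arithmetic and the capacity from the smartctl output above:

```shell
# 2^32 sectors * 512 bytes = the classic 2.2 TB USB-bridge limit
echo $(( 2**32 * 512 ))                      # 2199023255552 bytes (~2.2 TB)

# What a wrapped 32-bit bridge would report for this 3 TB Toshiba:
echo $(( 3000592982016 % (2**32 * 512) ))    # 801569726464 bytes (~0.8 TB)
```

801,569,726,464 bytes is roughly 0.8 TB, the same ballpark as the 806G the plugin reported.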
Can you mount the NAS in unraid as a UD SMB mount then pass that over to crashplan running on the unraid server itself?
-
Some recent Unraid updates had significant changes that required this of most previously installed dockers.
Tell me about it; a couple of weeks ago I had to rebuild CouchPotato for what seems like the 20th time after the DB got corrupted AGAIN after an update (apparently DB corruption is an issue with it anyway, before you factor in running it as a docker on unraid).
-
How big are your backup sets? The default is a gig of RAM, which as a rule of thumb covers about 1TB of backups / 1m files.
http://crashplan.probackup.nl/remote-backup/support/q/keeps-stopping-and-starting.en.html
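If it helps anyone size this, here's a rough back-of-the-envelope based on that rule of thumb (the 1 GB per 1 TB ratio is the community guideline above, not an official Code42 number):

```shell
# ~1 GB of Java heap per 1 TB backed up (or per ~1 million files)
backup_tb=4                      # size of your backup set, in TB
heap_mb=$(( backup_tb * 1024 ))  # heap to give CrashPlan, in MB
echo "-Xmx${heap_mb}m"           # prints -Xmx4096m
```

So a 4 TB backup set would want something like a 4 GB Java heap.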
Thanks for the idea! Yes, it seems it was that! After adding more RAM to Crashplan it seems to work fine again! Thank you!!
My pleasure. I've got 10+ years of DSLR photos covering several TB, and I hit the Java memory limit long ago (actually before migrating over to docker).
I gave mine 4GB of RAM and nothing; I still get the same error as I did before.
Not wanting to be THAT guy, but have you tried removing the container and the appdata files and starting the config from scratch? You can adopt the old backup from the cloud and it will pick up from where it left off, so it's not a total loss (again, you probably knew, but I was just covering bases).
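In case it's useful, roughly what "start from scratch" looks like from the command line — the container name and appdata path here are guesses, so match them to your own install before running anything:

```shell
# WARNING: this wipes the LOCAL CrashPlan config only; the cloud
# archive is untouched and can be adopted again on next sign-in.
docker stop CrashPlanPRO
docker rm CrashPlanPRO
rm -rf /mnt/user/appdata/CrashPlanPRO   # hypothetical appdata path

# Reinstall the container from Community Applications, then choose
# the adopt/replace-existing-device option when you first log in.
```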
-
Stupid question, I've had an issue in the past where connecting by IP instead of hostname worked, just try that and see if it works for you.
-
Or if you want the VM data to be stored on the array put it onto a share that doesn't use the cache disk, but your write speeds will suffer.
-
What does windows say when you try and get to a samba share?
-
Just an update: it would appear to have been the SNMP on unmenu (even though it wasn't configured). Thank you Trurl for your help, you seem to have solved it.
-
Hmm, I can't update; it just keeps saying the repo is not tagged with latest.
-
Curious: if you are wanting to use unraid mostly as a VM platform and not a storage platform, why don't you just go Proxmox?
-
Thanks, I will give it a try and let you know in a couple of hours
-
Worth noting that if you have changed the network space you will need to update the default gateway to whatever you set your cisco router to.
-
True but then you will know for sure if your disk actually has a problem or if it's controller / cable / 1000 other issues.
-
Just an insight into how I do it in the house here: we use a common "backup" share, but everyone uses CrashPlan (free for PC-to-PC use) and everyone has a different archive key, so everything is encrypted in such a way that only the data owner can see / recover their files.
It was simply the obvious choice for me, as I run CrashPlan on unraid anyhow; I carry a sub to back up the unraid servers to their cloud — 10+ years of DSLR photos, vids I've made, and plain work file backups as an offsite copy.
-
You also get that sort of behaviour if the key file gets corrupt / missing...
-
The logs will say why it's failing; sometimes you can get data errors that flag the disk even though the drive's own SMART doesn't alarm on them.
Can you pull the disk and run something like spinrite on it?
Migrating Proxmox onto Unraid
in VM Engine (KVM)
Posted · Edited by Rebel
Hi All,
I currently have a separate Proxmox server that I would like to power off for a while (UK electricity prices are a killer and going up again in April, so I am trying to consolidate). I've got a few quite important VMs that I want to keep running but don't want to fully migrate into unraid's KVM engine (there are some reasons). So how practical would it be to back up the VMs from the old machine, spin up a new cache pool with a pair of mirrored SSDs for the VM storage, and run Proxmox as a guest inside unraid?
I have no worries about CPU capacity, and I think I can squeeze the RAM down to fit with some rationalisation between the existing Proxmox VMs and what I am moving over. If not, I can transplant a couple of memory sticks, although I'd take a performance hit on the mobo to do so (and the other RAM is smaller). Thoughts?
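One thing worth checking before trying this: Proxmox-inside-KVM needs nested virtualisation enabled on the unraid host, or its own guests won't start. A sketch of the checks involved (Intel shown; AMD uses kvm_amd instead of kvm_intel — verify the exact steps against your unraid version's docs):

```shell
# Is nested virt already on? (Y or 1 means yes)
cat /sys/module/kvm_intel/parameters/nested

# If not, enable it and reload the module (with no VMs running):
modprobe -r kvm_intel
modprobe kvm_intel nested=1

# The Proxmox guest also needs the host CPU features (vmx) exposed,
# e.g. in its libvirt XML:  <cpu mode='host-passthrough'/>
```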