jtroberts

Members
  • Posts: 31
  • Joined
  • Last visited

  1. @binhex @jonathanm I'm having trouble with the local client. No issues with the Docker container; that all works fine. I ran the script from here. I can see the binary created on the Unraid server in /usr/local/bin and I can execute it, but I'm just not sure what parameters need to be included. PS: my client is not discoverable in the Docker GUI.

     /usr/local/bin# ./urbackupclientctl start -c 172.16.1.66 -f
     Error starting backup. No backup server found.

     I should also add that I have a dual-homed system:
     eth0.10 = private LAN to VPN outbound (default system route)
     eth1 = Plex and a few other Dockers on the user LAN
     I control each Docker's access by choosing its network. UrBackup is on br0.10 (172.16.1.0/24). Thanks in advance.

     [edit] Fixed. Allowed host access to custom networks in the Docker setup.

     [How-to]
     1. Install and start the UrBackup client:
        TF=$(mktemp) && wget "https://hndl.urbackup.org/Client/2.4.11/UrBackup%20Client%20Linux%202.4.11.sh">
     2. cd to the UrBackup client path:
        cd /usr/local/bin
     3. Add directories to be backed up:
        urbackupclientctl add-backupdir -d /mnt/user/Videos
        urbackupclientctl add-backupdir -d /mnt/user/Pictures
     4. Start the backup:
        urbackupclientctl start -f
  2. Sorry for the late reply. That's exactly what I did: changed the cable for disk 7, rebooted, and disk 7 came online. Then I ran a parity check. All good.
  3. My parity disk went offline and I could not bring it back online. A reboot didn't fix it, so I removed the failed parity drive and put in a new one. On power-up, a data drive was now offline too, leaving me with no parity AND a failed data drive. Ouch!

     So I tried putting the original parity drive back in with a new SATA connector and, fantastic, the drive came up. I chose the old parity drive in the GUI, but the system thinks this drive is "new", and because there is also a failed data drive I cannot start the array.

     So the question is this: how do I get Unraid to use the old/original parity drive to bring the array up, then use the parity data to rebuild the failed data drive? I have made ZERO changes other than what's described. Please help. Thanks.
  4. I'm trying to log in to @sparklyballs' mythtv container. I need to upgrade MythTV to 0.29. I have run docker exec -it several times on other containers and it works fine, and I ALWAYS run hostname first just to make sure I'm working in the container and not on the Unraid OS. So I have two issues:

     1. I cannot use the container name. For example, "docker exec -it sparkly-mythtv /bin/bash" doesn't work, so I always use the container ID instead: "docker exec -it f0bcfb2831c1 /bin/bash". Any idea why I can't use the container name like every example on the internet shows?

     2. When I run "docker exec -it f0bcfb2831c1 /bin/bash" I get no error; the root@Plex-TRW:~# prompt just changes from green to white. (FYI, my Unraid hostname is Plex-TRW.) I immediately run hostname and it returns "Plex-TRW", so I'm hesitant to run any command for fear it will run locally and not in the mythtv container. Any ideas? I have seen cases before where bash doesn't work (I assume it's not installed in the container) and /bin/sh is used instead, but that gives the same result when I run hostname. I have also run "docker exec -it f0bcfb2831c1 ifconfig", which unfortunately shows me the local host's (Plex-TRW) ifconfig, not the container's. Please help. Thanks.
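A quick sanity check for the situation above: confirm the container's exact name before exec'ing into it, since `docker exec` needs the name from the NAMES column verbatim (the GUI label can differ). The name "sparkly-mythtv" below is taken from the post and is an assumption.

```shell
# Sketch: verify the exact container name, then exec by name with a
# /bin/sh fallback for images that don't ship bash.

# The NAMES column is what `docker exec` matches against:
docker ps --format '{{.ID}}  {{.Names}}'

# Exec by name; if bash is missing in the image the first command
# fails and the second one runs instead:
docker exec -it sparkly-mythtv /bin/bash || docker exec -it sparkly-mythtv /bin/sh

# Confirm you are inside the container, not on the host:
docker exec sparkly-mythtv hostname
```

If `hostname` still returns the host's name, the container may be running with --net=host, in which case it shares the host's hostname and interfaces by design.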
  5. You are correct, I did mix up MBps and Mbps when comparing LAN/disk speeds and movie bitrates. Blu-ray today is 40Mbps max. I swear I read somewhere that H.265 will max out around 200Mbps, but the only thing I could find quickly was a max of 50Mbps for UHD Blu-ray. Sorry for the confusion, everyone.

     Physical media is dying, yes, but there are simply too many homes that don't have, and may never have, speeds fast enough to watch HD, let alone 4K. For that reason physical media will not die anytime soon; that is my opinion.

     True 4K, let's explain what that is: 60fps with 4:4:4 chroma. Will we actually see 60fps 4:4:4? I have my doubts. I have no idea what the size of a two-hour movie would be at those specs, but I'd guess we're approaching 100GB and maybe more. Thanks, everyone.
  6. Good feedback @johnnie.black. I never thought about the all-cache-pool option. Now, do I trust RAID 5 on btrfs?

     @Helmonder I have set up CrashPlan before, but I guess I liked the simplicity of a raw copy of the data, with no other software or layers between me and recovering my data. Your solution is 100% accurate and does ultimately solve the root problem, although it still doesn't solve the "true" 4K streaming issue, i.e. streaming at greater than 125MB/s. To be fair, when or if that will ever be realised is anyone's guess. Not to mention the Plex client will either need large caches and a delayed start of the movie to stream anything greater than 125MB/s, or the client hardware will need 10G. I don't see the Nvidia Shield getting a 10G NIC. 10G USB3? I guess that problem is wayyyy off in the future, if ever.

     Is my research correct that the btrfs RAID 5/6 issues as of kernel 3.19 are only relevant on a dirty shutdown? https://btrfs.wiki.kernel.org/index.php/RAID56 Thanks guys!
  7. I've been happily using Unraid for 2.5 years, but I'm running into an issue I need to solve: faster backups. More precisely, I cannot exceed 125MB/s over the LAN. I have over 20TB of data and this takes FOREVER to back up over a gigabit LAN. I should clarify: my backup target is a separate server, also running Unraid. The simple solution was to add a 10Gb NIC, but that doesn't solve the problem, because a single disk cannot read any faster than 130-150MB/s, give or take.

     So I've been looking at several other software solutions, but nothing really solves the issue without using traditional md-raid. ZFS and FreeNAS are promising, but having to add new vdevs in sets of 3 or 4 disks at a time isn't friendly to the wallet; my system works best when I can add a single disk at a time. So for me the question is simple: how can I continue to use Unraid AND stripe my data across all disks to greatly increase read/write performance, similar to traditional RAID 5/6? I really, really LOVE the Dockers and the Docker support community at Unraid.

     Now, I already know a lot of the answers I'm going to receive, so let me answer why I don't like them:

     1. Cache pool. This really only improves data being written into Unraid. If I want to read 20TB of data from Unraid to the backup server, it's done file by file from a single disk. Yes, multi-threading the backup is an option; any tool/utility recommendations? I've been using BTSync. NOTE: this doesn't solve the 125MB/s 4K LAN issue below.
     2. Yeah, I know the name says UNraid, but that doesn't mean read performance should suffer.
     3. A separate issue that has come to the foreground: I need a solution to bit-rot, i.e. ZFS or btrfs. NOTE: I'm not afraid to use btrfs.
     4. Future-proofing. TRUE 4K (not the Netflix BS we see today) at 4:4:4 60fps will greatly exceed the 125MB/s max of a gigabit LAN, so I need a solution that can grow.

     Can I create a btrfs pool/volume that stripes the data across all disks and writes parity to the 1 or 2 parity disks? I know there are some native issues with btrfs RAID 5/6, but most of those have been addressed since kernel 3.19, assuming you have a good UPS and don't have dirty shutdowns. I would love to continue to ride the Unraid train, but not striping the data just sucks.

     Here's another question: the Linus Tech Tips 48TB NAS pushing SMB at over 1GB/s, how was this done? I just don't see how it is possible when a single file is written to one and only one physical disk. They talk about "tweaks" with the Unraid team but don't share what they were.

     My setup:
     - 7 disks ranging from 3-5TB
     - 120GB x 2 disks btrfs cache pool
     - 16GB RAM
     - 8-core AMD
     - User shares defined as Movies, Music, Pictures, Home-Videos, etc. (NOTE: I did not create a single share with sub-dirs; not really sure why I did it this way)

     How can I improve performance? Thanks.
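On the multi-threaded backup question in point 1 above, one common approach is to run several rsync streams in parallel, one per share, so reads hit several array disks at once. This is a sketch only: the share names, destination host, and paths are assumptions, not from the post.

```shell
# Sketch: one rsync per top-level share, up to four running at once,
# so sequential reads come from multiple array disks in parallel.
# Share names and the destination host/path are assumptions.
SRC=/mnt/user
DEST=backup-server:/mnt/user

# -P 4 caps concurrency at four rsync processes; -a preserves
# permissions/times, --partial resumes interrupted transfers:
printf '%s\n' Movies Music Pictures Home-Videos |
  xargs -P 4 -I{} rsync -a --partial "$SRC/{}/" "$DEST/{}/"
```

This parallelizes reads across disks, but each individual stream is still bounded by one disk's speed, and the aggregate is still capped by the gigabit link unless both ends have 10Gb NICs.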
  8. Perfect, worked exactly as expected. Thanks.
  9. I would like to make "unRAID OS GUI Mode" the default boot option, i.e. no prompt, just the default. I'm a little unsure how to do it and fear I'll make my system unbootable if I screw up. So do I simply modify syslinux.cfg?

     Current syslinux.cfg:

     default /syslinux/menu.c32
     menu title Lime Technology, Inc.
     prompt 0
     timeout 50
     label unRAID OS
       menu default
       kernel /bzimage
       append pci-stub.ids=8086:105e initrd=/bzroot
     label unRAID OS GUI Mode
       kernel /bzimage
       append pci-stub.ids=8086:105e initrd=/bzroot,/bzroot-gui
     label unRAID OS Safe Mode (no plugins, no GUI)
       kernel /bzimage
       append initrd=/bzroot unraidsafemode
     label Memtest86+
       kernel /memtest

     Is this correct? Would I simply move the line "menu default" from "label unRAID OS" to "label unRAID OS GUI Mode", like below?

     default /syslinux/menu.c32
     menu title Lime Technology, Inc.
     prompt 0
     timeout 50
     label unRAID OS
       kernel /bzimage
       append pci-stub.ids=8086:105e initrd=/bzroot
     label unRAID OS GUI Mode
       menu default
       kernel /bzimage
       append pci-stub.ids=8086:105e initrd=/bzroot,/bzroot-gui
     label unRAID OS Safe Mode (no plugins, no GUI)
       kernel /bzimage
       append initrd=/bzroot unraidsafemode
     label Memtest86+
       kernel /memtest

     Thanks in advance.
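The same "move the menu default line" edit can be done with sed, which avoids hand-editing on the flash drive. This is a sketch: the /boot/syslinux path is an assumption (check where your syslinux.cfg actually lives), and taking a backup copy first means a typo can't leave the drive unbootable.

```shell
# Sketch: move "menu default" under the GUI Mode label with sed.
# The /boot/syslinux path is an assumption -- verify it on your box,
# and keep the .bak copy until you've confirmed a clean boot.
CFG=/boot/syslinux/syslinux.cfg
cp "$CFG" "$CFG.bak"

# Delete the existing "menu default" line, then re-insert it right
# after the "label unRAID OS GUI Mode" line:
sed -i -e '/menu default/d' \
    -e '/^label unRAID OS GUI Mode/a\  menu default' "$CFG"
```

If the result is wrong, `cp "$CFG.bak" "$CFG"` restores the original.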
  10. OK, so I found the issue; hopefully this will help others. I was saving copies of the ./nginx/site-confs/default file in the same directory. You can't do this: every file in that directory gets read by nginx because of this line, "include /config/nginx/site-confs/*;". So I moved all files except "default" to a different directory, ./nginx/config_backups, restarted, and all worked. OM!%^&$#$G!!! Sometimes computers really piss me off. I thought I was doing a good thing by making a copy before editing.
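The lesson above, as a small sketch: keep backup copies OUTSIDE the directory that the wildcard include loads, and they can never be picked up as live config. The paths follow the post's appdata layout and are assumptions.

```shell
# Sketch: back up a site config outside the include glob before
# editing it, since "include /config/nginx/site-confs/*;" loads
# EVERY file in site-confs. Paths are assumptions from the post.
CONF_DIR=/config/nginx/site-confs
BACKUP_DIR=/config/nginx/config_backups

mkdir -p "$BACKUP_DIR"
# Date-stamped copy lands outside the include path; the live file
# in site-confs is then safe to edit:
cp "$CONF_DIR/default" "$BACKUP_DIR/default.$(date +%Y%m%d)"
```

The same pitfall applies to editor droppings like `default~` or `default.save`; the glob matches those too.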
  11. OK, found this in /var/log:

      2016/08/25 17:32:43 [emerg] 120#120: unexpected "E" in /config/nginx/site-confs/default.all_domains_not_working:38

      I'll google it. Anyone have an idea? Thanks.
  12. First off, thanks for the Docker and the help so far, Aptalca. I know when I get this working it'll be golden! So I've made a lot of progress: my cert issue is resolved and I've moved on to the proxy. I got the proxy to work with the config below. I then edited it, creating a new server section for each subdomain; at the same time I edited the allowed "ssl_ciphers" to basically remove support for old browsers etc. (there was a post in this thread that I followed). Anyhow, after modifying the ciphers nginx won't start; it simply says "FAIL". So I restored the original unmodified default file in ./nginx/site-confs, but nginx still won't start. So, two questions:

      1. Where is more detailed logging? I can't find anything in the appdata dir or inside the container.
      2. Any idea what I need to repair/replace/edit to get it working again? I don't want to remove it and start over for fear Let's Encrypt will blacklist me. Thanks.

      server {
        listen 443 ssl;
        server_name movies.mydomain.com;
        ssl_certificate /config/keys/fullchain.pem;
        ssl_certificate_key /config/keys/privkey.pem;
        ssl_dhparam /config/nginx/dhparams.pem;
        ssl_prefer_server_ciphers on;
        client_max_body_size 0;
        location / {
          proxy_pass http://192.168.XXX.XXX:5050;
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_connect_timeout 150;
          proxy_send_timeout 100;
          proxy_read_timeout 100;
          proxy_buffers 4 32k;
          client_max_body_size 8m;
          client_body_buffer_size 128k;
        }
      }
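On the "where is more detailed logging" question above, two commands usually give a real error message instead of a bare "FAIL". This is a sketch: the container name "letsencrypt" and the log path are assumptions, so substitute whatever `docker ps` shows for your proxy container.

```shell
# Sketch: get nginx's actual complaint. Container name and log path
# are assumptions -- adjust for your setup.

# nginx -t validates the config and prints the offending file and
# line number of any syntax error:
docker exec letsencrypt nginx -t

# The error log inside the container (path can vary by image):
docker exec letsencrypt tail -n 50 /var/log/nginx/error.log
```

Running `nginx -t` before every restart catches cipher-string typos and stray files in site-confs without taking the proxy down.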
  13. Perfect, ty. I did read it but forgot what I read.