Everything posted by bertrandr

  1. Ahh - thank you @itimpi! So the key is that any old HDD CAN be moved into a "new" array so long as there is no existing parity in the target array - that makes sense. So I will wait to create the parity in the new array until I can decommission the old array. Thanks, Bertrand.
  2. Based on this, I can NOT move my disks to a new server unless I also move the existing USB drive - correct? Adding data drives from an existing server to a NEW server / USB forces the drives to get cleared? Is there no way to do a physical drive move / import? I have close to 8TB of mixed data on the current drives... Thanks, BR
  3. I know this has been asked before, but I am doing something a little different. I am building a new server from scratch with a NEW cache, larger parity drive, CPU, etc., to replace a 5-year-old system. Question - I have 4 HDDs in my old server I'd like to transfer into the new one along with all the data intact. Can I simply re-add the folders & data from the old drives to the "new" shares (same name) on the new server? OR do I need to transfer my USB key and all the settings? I'm trying to start fresh where I can. I was planning to use a new USB key and request a license transfer. The new server is currently on a trial license. I have no encryption or anything unusual... I plan to rebuild all my dockers and restore my VMs from backups. Thanks in advance, BR
  4. I am about to upgrade my 5+ year old Unraid server and would love some feedback on my build ideas. I will be using the new server as my home NAS, a shared Plex server with HW transcoding (GTX 1050), a VM host for my primary Win 10 PC using GTX 1060 passthrough, and a long list of Dockers and smaller VMs. The new build will be based on an Asus X99 Rampage Extreme with a reliable Intel 5960X 8c/16t and 32GB RAM. I am planning on adding a new 1TB NVMe M.2 drive as my new cache, plus a large number of existing HDDs from my current server and some new data drives, including standalone SATA SSDs for torrents and VM drives. My questions: I am not really concerned about cache "data protection" and creating a cache pool, BUT if I wanted to, can I mix NVMe and SATA in the cache pool? Any pros / cons? Any pros / cons to keeping my discrete SSDs OUT of the cache pool to be used as dedicated workload targets? One SSD will be dedicated to my torrent docker, one to my W10 VM... OR do I put all the SSDs, including the NVMe drive, into the cache pool and just let all the data live in the default locations? I don't recall if there is any sort of load balancing in the cache pool... When migrating from my old server, can I (should I) simply put the current USB flash boot drive in the new system and start everything up? OR should I start fresh and build up the new server with a NEW boot flash, migrate my dockers, VMs, and existing data drives under a trial license, then move the license over to the new server? Not sure if this is possible... Thanks in advance! Bertrand.
  5. Hi All - sorry if this has been asked; I did search and did not see an answer... How can I reduce logging or enforce log rotation? After only a few weeks, NginxProxyManager is generating and keeping a LOT of log files, especially in "appdata\NginxProxyManager\log\letsencrypt\". Thanks, BR
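For anyone with the same problem: one common approach is a logrotate rule on the host pointing at the container's log directory. This is only a sketch - the path below mirrors the appdata location mentioned above and the retention numbers are arbitrary examples, not anything NginxProxyManager ships with:

```
/mnt/user/appdata/NginxProxyManager/log/letsencrypt/*.log {
    weekly          # rotate once a week
    rotate 4        # keep four old copies, delete older ones
    compress        # gzip rotated logs
    missingok       # don't error if a log disappears
    notifempty      # skip empty logs
}
```

Dropped into the host's logrotate configuration, this caps growth regardless of what the container itself does; whether the container also needs a restart/signal after rotation depends on how it holds its log file handles.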
  6. Thanks, my little ITX-based server board is already maxed out for internal drives, hence the reason I use the great UD plugin. I will soon upgrade my cache drive to 512 GB or even 1 TB since that hosts most of the busy workloads on my server. Is there anything I can/should be doing to make sure the USB3 connection is using the fastest / most stable config under Unraid? Cheers, BR
  7. Hi All, Unraid 6.6.6, UD plugin 2018.12.30a, mounting an external 8TB USB3 HDD ("STEB8000100"), freshly formatted the entire 8TB as XFS. I've noticed "lately" (I think since the 6.6.6 update?) that I often lose my UD drive mount. Rebooting the entire server seems to bring it back, but it is not an ideal solution. I use the UD mount for 3 main operations. 1.) A target for the latest "CA Backup / Restore Appdata" plugin - backups go into separate folders. 2.) A folder share for my "LibreELEC_Kodi" VM where I store & play very large (40-60GB) UHD/4K movies. 3.) A backup file path / location for my "binhex-rtorrentvpn" docker - mounted with the SLAVE R/W option. I have a single small 250GB cache SSD where most of my active torrents seed from; however, when seeding several large torrents I sometimes move them to the UD folder to free up space on my SSD. My UD mount can have anywhere from 5 to several hundred open files, depending on the torrents I might be seeding. In general, performance from the UD mount is awesome - no problem streaming 50GB 4K MKVs to the Kodi VM while simultaneously seeding a few torrents. My question is: Is there a practical limit to what sort of load I should expect from a decent 7200 RPM SATA USB3 drive mounted via Unassigned Devices? Is having a few hundred open files across the USB3 link too many? And maybe causing the plugin to crash? In general, if I leave it alone, UD seems fine. But if I do a large file copy to/from my W10 PC, it seems to suddenly dismount and things go sideways... Thanks for all the great work. Bertrand.
  8. Fantastic work! Followed the instructions to a "T" and now my moderate home network is ad-free!!! Thank you, BR
  9. Nope, but LE 8.2 is easy to install from scratch. I just upgraded to a passive GTX 1030 in my Unraid server; rebuilding the VM from scratch took less than 5 min. Follow these steps: http://lime-technology.com/forum/index.php?topic=44683.0 and this video: https://www.youtube.com/watch?v=SMTU7Ufm9Bw - you don't need to mess with the XML file anymore if you follow the general idea in the video. Butter-smooth video and complete HD audio pass-through. The only thing I noticed was that idle VM CPU cycles seem to be higher than the old 6.02 OpenElec I was running before. Cheers, BR
  10. Great docker! Thanks for the hard work - I much prefer it over CouchP... Questions: 1) Can an indexer be weighted or preferred? When two very similar results are found, I would prefer to download from my primary private tracker (IPTorrents). 2) If I have my DL client (ruTorrentVPN docker by Binhex) configured to "move completed DLs via hardlink", which copy does Radarr import from? The original (still seeding) location in my /incomplete folder, or the moved & completed copy in my /completed folder? Ideally, after Radarr grabs a movie, I want to move (hard-link) it off my smallish SSD cache drive to a "completed" share on Unraid, and from there import the movie into Radarr & my library... The goal is to download active torrents on the SSD cache drive, then move & continue seeding the completed torrent from a regular Unraid share. (Beer fund donation sent!) BR
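Side note on the "move via hardlink" part of the question above: a hard link gives the same file data a second name, so the "moved" completed copy and the still-seeding original are literally one file on disk. A minimal sketch (the file names are made up for illustration):

```python
import os
import tempfile

# Simulate an "incomplete" download and a hardlink "move" to a completed folder.
workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "incomplete_movie.mkv")
dst = os.path.join(workdir, "completed_movie.mkv")

with open(src, "w") as f:
    f.write("movie data")

os.link(src, dst)  # hard link: both names now point at the same inode

# Both paths share one copy of the data, so no extra disk space is used.
same_inode = os.stat(src).st_ino == os.stat(dst).st_ino
print(same_inode)

# Deleting one name leaves the data intact under the other name.
os.remove(src)
with open(dst) as f:
    print(f.read())
```

The catch: hard links only work within a single filesystem, so a hardlink "move" from a cache SSD to a separate array disk isn't possible - in that case clients typically fall back to a real copy.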
  11. I noticed there is a new version - any specific updates? FYI - I have started to use the Web GUI -much easier than the VNC session. BR
  12. SMTP settings? Maybe I'm just blind, but where do we put the SMTP server for email notifications?.... BR
  13. Bad news - changing the advanced settings makes little / no difference. Backups still "stop" randomly. I've opened another ticket with CloudBerry and they quickly responded: "Hello Bertrandr, Thank you for reporting. That is a known issue. We are working on the fix and will let you know once it's ready. We apologize for inconvenience caused." I'm still on the trial version (which I like), but until this issue is fixed I'm not buying... BR
  14. Yea, my backups to B2 seem to randomly stop as well. I was hoping this recent update would fix it, but my backup stalled again this morning. It seems to happen with very large files over 1GB (home video archives). I have since been tweaking the options under the advanced settings. I read somewhere that Backblaze B2 stores in 100 MB chunks or blocks, and the recommendation was to try and match the incoming "chunk size" from CloudBerry to that of Backblaze... But you also need to make sure that CloudBerry has enough RAM assigned to support the number of worker threads and the chunk size. The formula to calculate the minimum RAM allocation is <threadcount*2*chunksize>. Personally I am using 3 threads, a 100 MB chunk and 700 MB of RAM allocated. Read this: https://www.cloudberrylab.com/blog/slow-backup-how-to-speed-up/ So far so good - I have been running a very large backup (87 GB) with many large files for over 3 hours without stalling. I'll report back tomorrow... Cheers, BR
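The minimum-RAM rule of thumb quoted above (threads × 2 × chunk size) is easy to sanity-check; the numbers below are the ones from this post:

```python
def min_ram_mb(threads: int, chunk_size_mb: int) -> int:
    """Minimum RAM for the upload buffers: threads * 2 * chunk size (in MB)."""
    return threads * 2 * chunk_size_mb

# 3 threads at a 100 MB chunk size => 600 MB minimum,
# so the 700 MB allocated in this setup leaves some headroom.
print(min_ram_mb(3, 100))
```

Running it confirms the 3-thread / 100 MB setup needs at least 600 MB, comfortably under the 700 MB allocation.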
  15. Forced an update to pull down the latest image without any issues. Still using the CloudBerry trial key and Backblaze B2 storage over my 15 Mbps (upload) internet connection (Shaw 150 unlimited). The test backup is a 33GB folder on my Unraid server with a couple thousand JPEGs. The update looks VERY promising!!!! CPU utilization is negligible (5-6% overall) even with encryption enabled - as it should be. Upload speed is maxing out my internet link at a steady 1950+ KB/s (1.95 MB/s). Running for 20 min with little fluctuation in CPU or upload speed. The next test will be to see if it stalls after an hour like the previous version. I'll report back when the test finishes... Thanks for the quick turnaround Djoss! BR
  16. Not sure if it helps you, but I just heard back from CloudBerry support - apparently the <just posted today> Linux version 2.1.0.81 has a fix?
  17. Yes I do; turning encryption on/off made no difference. I also experimented with enabling / disabling the bandwidth throttling feature, but it made no difference either. I restarted the docker and backup job after each change. Without throttling, encryption or compression, I'm not sure why the app is so CPU hungry if all it is doing is copying JPEG files into Backblaze. I have opened a ticket directly with CloudBerry and apparently it is a "known issue" and a "fix is on the way"? I'm open to suggestions and willing to try other ideas! I know I could pin the docker to a single CPU thread, but that seems more like a band-aid - I would rather help find a fix. Cheers, BR
  18. Anyone else seeing very high CPU utilization during backup? I have a decent quad core E3-1220 v3 @ 3.10GHz in my server, and this CB docker is pushing all 4 cores to 75-100% while running backups. I have tried adjusting the Threads, Chunk size and Memory settings, but they don't seem to make much of a difference. My current "test" backup is mostly jpg files, encryption is on, compression off, backing up to Backblaze B2 buckets. Bandwidth throttle set to 1024 KByte/s on a 15 Mbps upload internet connection. Cheers, BR
  19. My 2c, Sometime in the past 24hrs Deluge VPN stopped being able to resolve "nl.privateinternetaccess.com" (discovered with DEBUG logging enabled) -Without it, Deluge VPN won't properly start and allow access to the Web console unless you turn the VPN function off. I ended up hard coding the current IP for nl.privateinternetaccess.com as 109.201.135.220 and everything started working. I consider this a temporary fix, as the name resolution is critical when the NL PIA gateway changes IPs.... I wonder if this had something to do with the DDoS attack today on some of the Internet's DNS servers?... I actually like the fact that Deluge VPN shuts down when it can't maintain a secure VPN - better safe than sorry! Cheers, BR
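The temporary fix described above boils down to "try DNS, fall back to a pinned IP". A hedged sketch of that pattern - the fallback address is the one quoted in the post and will go stale as PIA rotates gateways:

```python
import socket

def resolve_with_fallback(hostname: str, fallback_ip: str) -> str:
    """Resolve hostname via DNS; return the pinned fallback IP if lookup fails."""
    try:
        return socket.gethostbyname(hostname)
    except OSError:
        # DNS broken or host unknown - fall back to the last known address.
        return fallback_ip

# ".invalid" is a reserved TLD that never resolves, so this always
# exercises the fallback path regardless of network conditions.
print(resolve_with_fallback("gateway.invalid", "109.201.135.220"))
```

As the post says, pinning the IP is only a stopgap: when the VPN provider moves the gateway, the pinned address silently points at the wrong endpoint, which is exactly why name resolution matters here.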
  20. FYI - Just updated from 6.1.9 and I like what I see so far! The file system update was smooth; I was, however, bit by the Docker bug. My image file was on my main cache SSD, but all my config files were on an externally mounted USB SSD via Unassigned Devices. I ended up deleting the docker.img, copying all my config folders into the new appdata path, and recreated my 10 different dockers in a new image file with updated appdata paths. All my docker configs and any associated data were saved and restored except for Plex - I needed to rebuild my Plex library from scratch... It looks like my custom (non-template) OpenELEC VM might need to be rebuilt as well; I get the following error popup when I try to start it. Not a big deal, I rebuild it for fun every once in a while... was thinking about trying LibreELEC... << Error Starting OpenELEC VM >> Execution error internal error: process exited while connecting to monitor: 2016-09-21T04:31:19.353644Z qemu-system-x86_64: -device vfio-pci,host=01:00.0,bus=pcie.0,multifunction=on,x-vga=on: vfio: error opening /dev/vfio/1: Operation not permitted 2016-09-21T04:31:19.353662Z qemu-system-x86_64: -device vfio-pci,host=01:00.0,bus=pcie.0,multifunction=on,x-vga=on: vfio: failed to get group 1 2016-09-21T04:31:19.353668Z qemu-system-x86_64: -device vfio-pci,host=01:00.0,bus=pcie.0,multifunction=on,x-vga=on: Device initialization failed Cheers, BR
  21. Thank you so much for the great comparison! I have tried two of the 3 - the OpenELEC template & John's full install (my current setup), and Windows 7 + Kodi... I too have noticed very random audio dropouts (2-5 per movie) using OE with TrueHD or DTS-HD pass-through audio in either setup. But not in Windows! Some users may not care, but I do. After the first audio drop I stop enjoying the movie as much and find I am listening / waiting for the next one... I didn't spend over $2k on a surround setup to not have the audio Blu-ray perfect... I have also noticed that both versions of OE crash if left running but not connected to HDMI for several days... It seems to pin the vCPU for extended times until I force reboot the VM, or connect the HDMI and it reboots on its own... I will try the KodiBuntu next - how much RAM/CPU did you assign the VM? And only an 8GB vdisk? My Linux kung-fu is average at best so I have avoided it up to now, but this inspires me to try it! Thanks, BR
  22. I have the same problem running Chrome on Win7 - I cannot control the cursor <with my mouse> over the CrashPlan app in the VNC window... ("Local Cursor" is grayed out within the VNC settings.) But using IE on the same PC it works fine... (Local Cursor is enabled within the VNC settings.) I've isolated the issue: it seems to be related to my laptop having a touchscreen. I can "touch" the CrashPlan app via noVNC just fine using my laptop's touchscreen in the Chrome browser, but as soon as I use the mouse it stops! Something about using a mouse on a touchscreen-enabled device with noVNC in the Chrome browser... breaks. My 2c.
  23. +1 This type of solution is very much needed. Having the ability to choose which application uses the VPN provides a great amount of control and flexibility. As an example, VPNing docker applications like Sickbeard, Sickrage, Sonarr, Couchpotato, Sabnzbd, to name a few, but not VPNing Plex Server, since it has an issue with losing its connection once a VPN connection is established. Hope to see this type of solution come to fruition in the future. OR... You can use your external firewall / router. Load a DD-WRT or Tomato firmware on your router and, using the OpenVPN client & the active routing policies, you can "route" any number of specific internal source IPs out through the VPN. Keeps the networking on the edge, and it is actually quite easy to set up (if I can do it! ;-)) My 2c,
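For the curious, the router-side approach described above is plain Linux policy routing under the hood. A rough sketch of the idea - the interface name, table number and LAN address below are illustrative examples, not taken from any real config:

```
# Assume tun11 is the router's OpenVPN client interface and 192.168.1.50
# is the one LAN host whose traffic should go out through the VPN.
ip route add default dev tun11 table 100   # VPN default route in its own table
ip rule add from 192.168.1.50 table 100    # only this source IP uses that table
ip route flush cache                       # apply the new policy immediately
```

Every other LAN host keeps using the normal default route, which is exactly the "per-application by source IP" behavior the post describes.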
  24. "im not sure which is cheaper but i can tell you PIA are pretty darn cheap if you go for a year, i think im paying around £3 per month for unlimited usage, speeds are ok, not lightning fast but adequate for me (approx 800 KB/s to 1.3MB/s DL on a 20Mb/s line)" +1 for PIA. I am in Canada and pay $6.96 USD / month for PIA. Speeds are great; my current PIA endpoint for Deluge is in the Netherlands and I still reach 2-4 MB/s (Bytes, NOT bits!!) on well-seeded torrents. I also create a separate tunnel to the US on my Tomato router for Netflix using source-based routing and OpenVPN - also fast and reliable! Download caps (800 GB/mo) are set by my ISP - PIA doesn't care... My 2c, BR
  25. I've been running OE on a standalone HTPC for years and avoided AMD-based cards & iGPUs. I specifically wanted bit-streamed passthrough HD audio in my movies; as soon as I switched to Nvidia GPUs everything just worked with less fuss and headaches. Anything from the GT 720, GT 730 & GT 610 series has had success from other OE & Unraid forum users. With my personal hardware setup, I even had compatibility issues with the ASUS-branded GT 720 cards (same reset issue as you!), so I switched to a cheapo low-power passive EVGA GT 720 and now everything is perfect! For the record, I am using this card: http://www.newegg.ca/Product/Product.aspx?Item=N82E16814487062 It would be nice if the hardware compatibility was a little more forgiving, but we are seriously pushing the envelope by running a virtualized HTPC within a low-cost home/SMB NAS solution on consumer-grade hardware... My 2c, Bertrand