snowboardjoe

Everything posted by snowboardjoe

  1. Tight, but possible. However, I would only be able to use four of the ports since they’re stacked on top of each other in the same orientation. I bit the bullet and purchased a new Rosewill server chassis. Plenty of space and room to add more drives (I have 8 now and can add 4 more easily). I could have bought the HBA and cables, but for about $125 more I’ll have an expandable system and no headaches.
2. Was working on my upgrade this past weekend and it failed miserably. The 8 SATA ports built into the board (one of the primary reasons for buying it) face toward the front of my chassis, in direct conflict with my two Rosewill 3x4 cages. There is no space to plug in my cables there and route them to my drives. Extremely frustrating. I tried looking for other cases that might be deeper and accommodate my existing 3x4 cages, and I've come up with nothing (either not enough 5.25" external bays to install the cages, or unavailable). Still reviewing options right now. Looking at getting a SATA controller and connecting the drives that way, never using the built-in SATA ports--seems like such a waste. Wondering if anyone has run into similar problems.
3. It should not make any difference in how it operates with the cache setting. It should be set to "Only" for performance, though, and allow the other disk(s) storing data to spin down. Worst case, the application would be sluggish or pause while waiting on a data drive to spin up and provide the data it needs.
4. Did some more research into this and I'm now realizing how complicated it can be. The choice of Ryzen CPU, motherboard and memory is very tricky. I found the QVL document for what is supported on the Asus X370-Pro: https://www.asus.com/uk/Motherboards/PRIME-X370-PRO/HelpDesk_QVL/
     It's a long PDF and takes some time to understand how things are sorted there. The goal is to find the maximum speed supported by all components and your budget. So far, this appears to fit my budget and it's available (can run at 3000 with a Ryzen 5 2600): https://www.newegg.com/Product/Product.aspx?Item=N82E16820232498
     G.SKILL TridentZ RGB Series 16GB (2 x 8GB) 288-Pin DDR4 SDRAM DDR4 3000 (PC4 24000) Memory (Desktop Memory) Model F4-3000C16D-16GTZR
     Still digging around some more, but it appears this may be the one. The QVL document is still damn confusing.
  5. What command are you running? It should look something like this: sudo -u abc php /config/www/nextcloud/occ files:scan --all
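     If only one account needs a rescan, the same command takes a user ID instead of --all (a sketch; "youruser" is a placeholder for the actual account name):
     sudo -u abc php /config/www/nextcloud/occ files:scan youruser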
6. From what I see in the specs for the RAM I chose, the speed is 2400. I'm assuming a lower speed won't be an issue for the CPU, but will be patient and wait for others to chime in as well here. And, yes, only unRAID and docker containers. No interest in running any VMs here. I have other hardware I could use for that if needed, but would still likely stick with docker.
7. Hmmm, ok, will look at that again. That was the recommended speed from Crucial. I saw the clock speeds in the specs here: 4 x DIMM, max. 64GB DDR4 3200(O.C.)/2933(O.C.)/2666/2400/2133 MHz, ECC and non-ECC, un-buffered memory. From that information I take it I could use 2666 (ignoring the overclocking speeds), but looking at the CPU specs I see they recommend 2933, which I had not considered yet. What speed should I use at this point? What's the significance of using 2933 when the motherboard lists 2933 as an overclocking speed? Maybe I'm misinterpreting something here. I've been doing IT for decades, but rarely get into fine details like this on custom PC systems.
8. Time to upgrade as I'm running out of RAM and want to add a few more Docker containers. The current motherboard only supports 8GB. It's been a great system for 6 years, but time to move on now. The current system only runs unRAID and 14 Docker containers. The new system will do all of that plus a few more containers and a Plex server (decommissioning the existing, dedicated Plex server). Not running any other virtualization and don't plan to.
     Current configuration
     Mobo: Asus M5A78L-M LX Plus (6 years old)
     CPU: AMD FX-4100 Quad-core
     RAM: 8GB (at capacity and the main reason for upgrading)
     SATA: Syba SI-PEX40064 4 port SATA III (mobo has 6 SATA ports)
     Storage: 6 x 6TB WD Red (parity and data), 2 x 1TB WD Red (cache, mirrored)
     Proposed configuration
     Mobo: Asus X370-Pro
     CPU: AMD Ryzen 5 2600
     RAM: Crucial 16GB Kit (2 x 8GB) DDR4-2400 UDIMM - CT2K8G4DFD824A (may double this)
     Video: EVGA GeForce 8400 GS DDR3 (reusing an old card since the Ryzen CPU has no integrated graphics)
     SATA: No expansion cards since the mobo has 8 ports on board
     Storage: No change
     Any thoughts or comments about this new configuration?
9. I have tentatively chosen an Asus X370-Pro motherboard (still researching and verifying specs). That comes with 8 SATA ports, so I could ditch the need for a controller card completely. The old Syba card did cause some issues a few years ago when I first installed it, when it would sometimes stall out on me. I think a later version of unRAID resolved that, as there have been no problems for a long time now. Would rather be rid of it regardless.
10. Working on a project to upgrade my existing unRAID system (new motherboard, new CPU, new RAM). The trigger was RAM, as I've maxed out what the motherboard supports. Here's the current setup:
      Mobo: Asus M5A78L-M LX Plus (6 years old)
      CPU: AMD FX-4100 Quad-core
      RAM: 8GB (at capacity and the main reason for upgrading)
      SATA: Syba SI-PEX40064 4 port SATA III (mobo has 6 SATA ports)
      Storage: 6 x 6TB WD Red (parity and data), 2 x 1TB WD Red (cache, mirrored)
      When looking at motherboards, should I try to procure one that has 8 SATA ports built in, or rely more on a separate controller? Maybe use an 8-port card exclusively instead of the onboard SATA ports? Just keep using the existing controller in the new system? Does it make any difference in performance?
  11. Go to unRAID dashboard. Go to Docker. Select CrashPlanPRO. Scroll to bottom and expand "Show more settings...". Locate Maximum Memory setting (the default is 1024M). If you've got plenty of RAM to spare, I would bump up to 2048M. Click Apply at bottom and container will be restarted with new settings.
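      For reference, the same thing outside the unRAID GUI would be setting the container's memory variable directly. A sketch, assuming the jlesage/crashplan-pro image, where the Maximum Memory field maps to the CRASHPLAN_SRV_MAX_MEM environment variable (the appdata path is an assumption too; match it to your template):
      # sketch only: image name, variable, and paths are assumptions; adjust to your setup
      docker run -d --name=CrashPlanPRO \
        -e CRASHPLAN_SRV_MAX_MEM=2048M \
        -v /mnt/user/appdata/CrashPlanPRO:/config \
        jlesage/crashplan-pro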
  12. The amount of RAM allocated to that docker is not enough for what the application needs. Change the Maximum Memory setting in the docker config for CrashPlan provided you have enough free RAM for it (but keep it reasonable as you can always increase it later as needed).
  13. Well, son of a b..., there it is. I had no idea that was an area that scrolls. Feel like an idiot too. Thanks!
  14. Attached here. I've also opened a support request with Code42. The flash is present, but my storage directory is not present (yet, it is backed up nightly without any issue and verified). Very strange to see this inconsistency. Can't add/remove/modify backups of any share right now.
15. I haven't looked at my CP config in a while (nothing has changed in structure in ages). Was just doing some verification today and noticed I can't manage my media files anymore. CP still shows I have 13TB backed up, which is on the low side. All I see are the local configuration files and the flash drive. The container config shows /shares is mapped to /mnt/user. However, there is no /shares directory in the WebUI. Not sure how long it's been in this state. I logged into the console and I see my mappings there, but not in the WebUI. Suggestions on where to look for why my mappings aren't showing up in the WebUI? It just seems like an incomplete list and does not match what I see from the console (some system items are there, but random). EDIT: Did some more checking. Clicked on Restore and I see up-to-date information there, including all of my media files (last updated last night). So it's really weird that I can't manage things from the WebUI to add and remove them.
16. Ahh, sounds like the environment is not getting passed and/or paths are not getting set up. Try using the console connection for the container.
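      From a host shell, the console connection is just docker exec (the container name "nextcloud" is an assumption; use whatever yours is called):
      docker exec -it nextcloud bash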
17. I've not run into this problem. I get incoming connections from peers just fine. At the same time, I've also verified all traffic is being tunneled through NordVPN (they only see my VPN IP). Due to the nature of how these programs work, how else would you store the credentials? The applications must know what the passwords are to submit them to the services you're subscribed to. If you encrypted them, the application would have no way to decrypt them to submit the credential to the service on every startup of the container. The best thing to do is to make sure such files are readable only by the application that needs them, as in the sketch below.
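      A minimal sketch of that last point, with a hypothetical credentials path (use wherever your container actually stores them):
      chmod 600 /config/openvpn/credentials.conf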
  18. Something is not right with your Docker container then. This is for the nextcloud container from the console?
  19. Programs run as abc--not www-data. Pretty sure you need to specify the PHP interpreter too. So, it should look like this: sudo -u abc php /config/www/nextcloud/occ db:convert-filecache-bigint
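      If I remember right, the Nextcloud docs suggest wrapping that conversion in maintenance mode; same user and interpreter convention as above (a sketch):
      sudo -u abc php /config/www/nextcloud/occ maintenance:mode --on
      sudo -u abc php /config/www/nextcloud/occ db:convert-filecache-bigint
      sudo -u abc php /config/www/nextcloud/occ maintenance:mode --off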
20. Make sure your copy and paste is accurate. Here were my results:
      root@cfeed5230df2:/$ find /config/nginx -name \*.conf -exec grep -H client_max_body_size {} \;
      /config/nginx/nginx.conf: client_max_body_size 0;
      /config/nginx/proxy-confs/nextcloud.subdomain.conf: client_max_body_size 0;
      /config/nginx/proxy.conf:client_max_body_size 0;
      root@cfeed5230df2:/$
      Sounds like something got truncated after the -exec parameter. It's possible it found no *.conf? That would be weird.
21. You can find all files that use that setting, and the value assigned in each, with this command from the console of the letsencrypt docker:
      find /config/nginx -name \*.conf -exec grep -H client_max_body_size {} \;
      This will spit out every matching file and the value assigned. The Nginx config sets this option in more than one place, and each occurrence parsed later overrides the previous one (last one wins).
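      Another way to see the occurrences in the order Nginx actually parses them (so the last one listed wins) is to dump the merged config from inside the container (if your nginx uses a non-default config path, point at it with -c):
      nginx -T | grep client_max_body_size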
22. There is more than one place (file) where client_max_body_size is set. I believe there are at least two. Set both and restart the service, or restart the docker. It's a redundancy in the config and causes some confusion. I would provide more details, but I'm on vacation right now, away from a real computer.
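      A sketch of doing both edits and a reload in one shot, assuming the file locations from the letsencrypt container (verify them first with the find command in the post above):
      sed -i 's/client_max_body_size.*/client_max_body_size 0;/' /config/nginx/nginx.conf /config/nginx/proxy.conf
      nginx -s reload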
23. If you're trying to do this as admin, you need to log in as that user. Then you can set email address, phone, address, etc. Admin can't do that (at least, not in v13).
24. Unless you changed any of the default settings (mine is customized as I use MariaDB for several applications), it's possible a fresh install with the default config.php will restore access. You will still need to add in the settings for letsencrypt, but there's documentation on how to modify it. Don't give up hope that you've lost everything. It should all be there.
25. You should only have to rebuild the file config.php from scratch--not NextCloud itself. That, and you need to reset the admin password to a known one. A little tedious, but once you have the right config.php, everything else should be fine.
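      For the password reset piece, occ can handle it from the container console (same convention as the other occ commands here; it prompts for the new password):
      sudo -u abc php /config/www/nextcloud/occ user:resetpassword admin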