joshz

Everything posted by joshz

  1. That's the problem with BTRFS: it has no graceful recovery from problems the way a production file system should. All software works great when there are no problems. Good software is differentiated from bad software by how it handles issues without crapping the bed, and BTRFS craps the bed at the slightest provocation, as evidenced here. The fact that I have to literally blow out the whole RAID, reformat, and recreate it is indicative of it not being ready for prime time. Is there any way to switch my cache drive to something more stable? Anyway, here is the requested output:
        [/dev/sde1].write_io_errs 0
        [/dev/sde1].read_io_errs 0
        [/dev/sde1].flush_io_errs 0
        [/dev/sde1].corruption_errs 0
        [/dev/sde1].generation_errs 0
        [/dev/sdf1].write_io_errs 0
        [/dev/sdf1].read_io_errs 0
        [/dev/sdf1].flush_io_errs 0
        [/dev/sdf1].corruption_errs 0
        [/dev/sdf1].generation_errs 0
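     For reference, those counters come from btrfs device stats. A minimal sketch of how to pull and reset them, assuming the pool is mounted at the usual Unraid location of /mnt/cache:

        # Print per-device error counters for the pool mounted at /mnt/cache
        btrfs device stats /mnt/cache

        # The counters are cumulative; -z clears them after a device has been repaired or replaced
        btrfs device stats -z /mnt/cache

     The counters persist across reboots, so a non-zero value points at a device that misbehaved at some point, not necessarily one failing right now.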
  2. Hmm... well, BTRFS strikes again. It's a real shame you're forced into using it. It's not production ready and I really detest it as a file system for production use. It's great for hobbyists and testing, but it's garbage when it comes to a live environment. Any idea why Unraid forces the cache drives to be btrfs?
  3. I rebalanced with the parameters --dconvert=raid10 --mconvert=raid10 and it's showing 1 TB now. It most definitely did not do it automatically, though; it required a forced rebalance. All appears to be well at this point, but the additional drives still don't show up in the diagnostics or listing.
     newmediaserver-diagnostics-20171214-1634.zip
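     For anyone searching later, the full command was along these lines. A sketch, assuming the pool is mounted at /mnt/cache (the single-dash filter syntax is the standard btrfs-progs spelling):

        # Convert both data and metadata chunks to the raid10 profile across all pool members
        btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt/cache

        # Check progress and confirm the new profiles afterwards
        btrfs balance status /mnt/cache
        btrfs filesystem df /mnt/cache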
  4. Which diagnostics do you mean? Apparently, I have to rebalance the cache pool? Upon reading more about btrfs raid 5, it seems it's unstable and prone to data loss. Is that correct? So doing a raid 5 with btrfs is a bad idea?
  5. I guess I'm not understanding what's going on under the hood in Unraid. I have 2x 500 GB Samsung Evo SSDs in my cache pool right now. They have been operating as expected and total capacity is 500 GB. I bought 2 more 500 GB Samsung Evo SSDs and put them in. I expected the cache pool to expand to at least 1 TB, if not 1.5 TB... but that's not the case. The cache pool is still only using 2 devices and the size remains the same. All 4 cache drives are listed under Cache Devices, labeled Cache, Cache 2, Cache 3, Cache 4. There's no configuration option to change things up... so I'm not really sure what I should do at this point to capture that additional space.
     Ideally, this would be running as a RAID 5 so I could use 1.5 TB and maintain some redundancy. But if it needs to run RAID 1 without any option to do RAID 5, that's OK, as it should give me 1 TB of space at least. Can anyone help? Thanks
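     A quick way to see how btrfs has actually laid the pool out (a sketch, assuming the default Unraid mount point of /mnt/cache):

        # List every device in the pool and how much of each is allocated
        btrfs filesystem show /mnt/cache

        # Show the data/metadata profiles (raid1, raid10, ...) and the usable space they yield
        btrfs filesystem usage /mnt/cache

     With the raid1 profile btrfs keeps two copies of every chunk, so four 500 GB devices work out to roughly 1 TB usable.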
  6. Ok, so for anyone stuck like I was, here's how to get Rocket.Chat up and running with a MongoDB docker image:
       • Install and start the MongoDB docker image.
       • Install the Rocket.Chat docker image.
       • Change MONGO_URL to reflect your host IP address (example: mongodb://192.168.1.10:27017/rocketchat). Just be sure to use your host IP and the port you assigned to the MongoDB docker.
       • Change the ROOT_URL variable to also point to your host IP address (example: http://192.168.1.10:3000/rocketchat).
       • Click Apply to create the docker and it should start on its own.
     There doesn't appear to be any particular need to link the containers, but you could do that if you wanted and fiddle with the two variables above to reflect that.
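     For reference, the plain docker equivalent of those template settings looks roughly like this. A sketch only: the image names, ports, and the 192.168.1.10 host address are placeholders, so substitute your own values:

        # Start MongoDB, publishing its default port on the host
        docker run -d --name mongodb -p 27017:27017 mongo

        # Start Rocket.Chat, pointed at the host-published MongoDB port
        docker run -d --name rocketchat \
          -p 3000:3000 \
          -e MONGO_URL=mongodb://192.168.1.10:27017/rocketchat \
          -e ROOT_URL=http://192.168.1.10:3000/rocketchat \
          rocketchat/rocket.chat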
  7. Trying to get this up and running, but it doesn't work. Your instructions:
        Add in the attribute "Extra Parameters" the value "--link "MongoDB:db"". MongoDB is the name of the MongoDB docker container.
     don't really make any sense. There's no place to add "attributes", only variables, and variables require 3 parameters: Name, Key, and Value, and the Value field cannot contain quotations. So the above has to be done when creating the container under "Advanced options", which isn't shown by default. Well and good, but even adding that doesn't help. It still fails to start:
        MongoError: failed to connect to server [mongo:27017] on first connect [MongoError: getaddrinfo ENOTFOUND mongo mongo:27017]
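     In case it helps anyone else: with docker's legacy --link flag, the linked container is only reachable inside the other container under the alias you give it, so the hostname in MONGO_URL has to match that alias. A rough sketch of the linked setup on the plain docker command line (container names and URLs here are illustrative, not the exact template values):

        # "--link MongoDB:db" makes the container named MongoDB resolvable as the hostname "db",
        # so MONGO_URL must reference "db"; a URL pointing at "mongo" fails with getaddrinfo ENOTFOUND mongo
        docker run -d --name rocketchat \
          --link MongoDB:db \
          -p 3000:3000 \
          -e MONGO_URL=mongodb://db:27017/rocketchat \
          -e ROOT_URL=http://192.168.1.10:3000/rocketchat \
          rocketchat/rocket.chat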
  8. Ok, fair enough. Do I need to manually set it up for private key authentication vs passwords, or is there a GUI switch for that?
  9. Hi Frank,
     Thanks for getting back to me. The problem with root ended up being that root login was turned off in the sshd config file for some reason. Turning it on solved that particular issue... but the problem with users still remains. I can fix all these problems via the CLI, but the questions remain:
        1. Why is unRaid preventing standard user logins over SSH?
        2. How do I allow them via the GUI?
     I have no problem doing everything via the CLI if that's what's required (I've been a Linux system admin for 20+ years now), but I would rather use the GUI... not because I prefer it, but because I don't want to get the unRaid configurations and settings out of sync with the OS, so I would prefer to let the unRaid interface handle whatever needs to be handled so all the various switches and levers get flipped when they should.
     I would like to turn off root logins from SSH and require user-level logins with su as needed. I would, in fact, like to turn off passwords altogether and use my U2F key or Yubikey as my sole authentication mechanism for CLI access.
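     For reference, these are the standard OpenSSH sshd_config directives that cover all of the above (a sketch of the relevant settings only; on Unraid the persistent copy of the ssh config normally lives under /boot/config/ssh, but treat that path as an assumption for your version):

        # Forbid direct root logins over SSH (or "prohibit-password" to allow key-only root)
        PermitRootLogin no

        # Disable password logins entirely and rely on public keys
        PasswordAuthentication no
        ChallengeResponseAuthentication no
        PubkeyAuthentication yes

     Restart sshd after editing for the changes to take effect.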
  10. I have changed the root password in the GUI, and I am able to log in to the GUI with the new root password without a problem... but I am now unable to connect via SSH with the new root password; it just says it's invalid. Does anyone have any ideas how I can get access to the CLI now? I have my regular user (and I created a test user); both of them can log in with their respective passwords, but they are immediately disconnected without being given a shell. Not really sure what to do at this point.