snowboardjoe
Members · 261 posts

Everything posted by snowboardjoe

  1. So, this was a Nextcloud update? Maybe to version 14? I'm still on 13 here and have been trying to figure out how to get to 14. Anyway, if that's the case, your data should be fine, since that's stored in the database and in your share. You'll need to recover the credentials to gain access again. That may mean some work to locate the database, reset the password for that account so it's known again, and rebuild your config.php from scratch (rough sketch below). Hate that the upgrade lost your original config. I've added backing up my /appdata/ to my own to-do list too.
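     Something along these lines might get you back in. It's only a sketch: the container name "mariadb", the database user "nextcloud", and the password are placeholders for whatever your setup actually uses.
        # Placeholder names -- adjust "mariadb", "nextcloud" and the password to your setup.
        docker exec -it mariadb mysql -u root -p
        #   inside the MariaDB/MySQL prompt:
        #   ALTER USER 'nextcloud'@'%' IDENTIFIED BY 'new-db-password';
        #   (on older MariaDB: SET PASSWORD FOR 'nextcloud'@'%' = PASSWORD('new-db-password');)
        #   FLUSH PRIVILEGES;

        # Then put the same value into 'dbpassword' (and check 'dbhost', 'dbname', 'dbuser')
        # in config/config.php under your Nextcloud appdata share, and restart the container.
        docker restart nextcloud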
  2. Yep! As an example, server.domain.com:444 is the standard format.
  3. Would you let the rest of the forum know which proxy setting resolved the issue?
  4. That's still syntactically incorrect. You have unbalanced quotes in your value: the opening single quote before "mysite.duckdns..." is missing. The line should follow the form 'overwritehost' => 'server.domain.com:444'
  5. I'm trying to get transmission-openvpn up and running with NordVPN. Right now I just get this in the log:

        Using OpenVPN provider: NORDVPN
        Supplied config default.ovpn could not be found.
        Using default OpenVPN gateway for provider nordvpn
        Setting OPENVPN credentials...
        adding route to local network 192.168.0.0/24 via 172.17.0.1 dev eth0
        Options error: In [CMD-LINE]:1: Error opening configuration file: /etc/openvpn/nordvpn/default.ovpn

     There were no ovpn files included. I cloned them from the repo and have them in place at /mnt/user/appdata/Transmission_VPN/openvpn/nordvpn/. I'm not sure how to point the container to this new location. I also fixed a bad symlink for default.ovpn. I'm guessing I need to add a mount for the /etc/openvpn/nordvpn path?

     UPDATE: I got further along. I added a mapping for /etc/openvpn/nordvpn to /mnt/user/appdata/Transmission_VPN/openvpn/nordvpn. Now it can find my default.ovpn file and I got connected to NordVPN. I can only map to one file and would like the option to include multiple files, but at least that much is working. Still can't get the Transmission UI to load, though; it just times out on port 9091. So, still digging here.

     UPDATE 2: Got the GUI to load. I missed an item on the local network and had the wrong subnet. RTFM. I have some tests to do, but I'm wondering how to better manage the .ovpn files. I just have it using default.ovpn and change the symbolic link as needed. It would be nice if I could put a list in there, but I'm not sure how to do that with OPENVPN_CONFIG, which appears to be customized for PIA. Would I just remove that variable and create my own where I could list multiple items? Let me know if I'm violating any best practices here.
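     For reference, here's roughly what I'm picturing, assuming the haugene/transmission-openvpn image. This is only a sketch: the config names in OPENVPN_CONFIG are made-up examples, the credentials are placeholders, and I haven't confirmed which versions of the container accept a comma-separated list there, so treat it as something to verify rather than a known-good setup.
        # OPENVPN_CONFIG values are hypothetical file names (no .ovpn extension); credentials are placeholders
        docker run -d \
          --cap-add=NET_ADMIN \
          -v /mnt/user/appdata/Transmission_VPN/openvpn/nordvpn:/etc/openvpn/nordvpn \
          -e OPENVPN_PROVIDER=NORDVPN \
          -e OPENVPN_CONFIG="us1234,us5678" \
          -e OPENVPN_USERNAME=myuser \
          -e OPENVPN_PASSWORD=mypass \
          -e LOCAL_NETWORK=192.168.0.0/24 \
          -p 9091:9091 \
          haugene/transmission-openvpn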
  6. No devices found after an earlier unRAID crash this evening. I've mapped all of the /dev/sg* entries. However, /dev/sr0 is no longer present, so I removed that from the config since it would cause an execution error. MakeMKV starts, but no drives are found. I've been pulling my hair out every time I modify a device on my system, as that changes the entries and I have to modify the MakeMKV container over and over again. This time, the lack of /dev/sr0 appears to be the issue following the crash, so no idea what to do next now.

     EDIT: Fixed it. When I moved the device to a different USB port, it initialized as /dev/sg11 and /dev/sr0 appeared again. Updated the container with the new device and it's good again. It is maddening how my system keeps remapping devices if there is even a slight change.
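     For anyone else hitting this, the passthrough I'm describing boils down to device mappings along these lines; the device numbers are whatever your system happens to assign, and the image name here is just a stand-in for whichever MakeMKV container you run.
        # Illustrative only -- substitute the /dev/sr*/sg* nodes your system actually shows
        docker run -d \
          --device /dev/sr0:/dev/sr0 \
          --device /dev/sg11:/dev/sg11 \
          your-makemkv-image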
  7. Server crashed this evening while I was away. There were some storms in the area, but a second server attached to the same UPS remained online. I know the main logs are written to a volatile filesystem. Is there a way to have one or more of those logs written to my flash drive, or is that too much write activity for it? Anyway, the system recovered on its own and another parity check was fired off. The only thing running at the time was a preclear (disk erase) on an old, unassigned drive.
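     What I had in mind is something as simple as periodically copying the syslog to flash. This isn't a built-in unRAID feature as far as I know; the schedule and destination below are just an assumption (the flash drive is normally mounted at /boot):
        mkdir -p /boot/logs
        # example cron entry -- every 30 minutes, copy the volatile syslog to flash;
        # infrequent copies keep write wear on the USB stick low
        # */30 * * * * cp /var/log/syslog /boot/logs/syslog-latest.txt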
  8. Got it! While the array was still stopped:
       • Removed the second cache drive.
       • Set cache slots to 1.
       • Set cache slots back to 2.
       • Assigned the second cache drive (blue icon!).
       • Started the array.
     I now see R/W activity for both devices.
  9. Unassigned and reassigned. Still green. Unassigned, formatted and reassigned. Still green.
  10. Stopped Docker. Migrated all data off the cache and verified it was empty. Shut down the array. Ran the wipefs commands as stated above: /dev/sdg1 was not found (not surprised there), and wiping /dev/sdg produced no output (returned a 0 exit code). Refreshed the Main screen. Cache 2 still has a green dot (it never went blue). Suggestions on next steps? Remove, format, and re-add?
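      For the record, the wipefs step amounted to something like the following (I'm assuming the usual -a "wipe all signatures" form; double-check the device letters before running anything like this, since it is destructive):
        wipefs -a /dev/sdg1   # partition signature -- reported "not found" here
        wipefs -a /dev/sdg    # whole-device signatures -- exited 0 with no output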
  11. Sorry, that was not clear. Your message implied that I needed to shut down if the device name had changed (and it hadn't). There is a lot about the cache process that is complicated and not documented. Wish there were a better way to evaluate the config with a command before firing up the array. I'll go through the process again and see how that goes later this morning or possibly tomorrow. Which process should I follow? Follow your original steps and wipe it again while the array is stopped?
  12. Oh? I was never told to stop the array. I did not do that. I will try it again later this morning using the procedure you described.
  13. Found this post as well, which explains things in a little more detail. The warning message needs some tweaking to be less confusing.
  14. Sig updated. Sorry, it's been a while and I forgot about it. OK, will delete and recreate again once I'm certain the cache issue is resolved. Will also look for that Previous Apps feature. Would be nice to understand this issue and why it keeps referencing a beta version of unRAID 6.
  15. Did some work last week upgrading the cache pool. After I went through that work and brought Docker back online, I got the following:

        Your existing Docker image file needs to be recreated due to an issue from an earlier beta of unRAID 6. Failure to do so may result in your docker image suffering corruption at a later time. Please do this NOW!

      I did as it said after some brief research and everything was fine. I'm debugging a cache issue again today (covered in a separate thread) and needed to migrate everything off of the cache and back, including shutting down Docker just beforehand. Following that work, I have the same warning again. I really don't want to reload all of my images yet again. What problem is persisting here? Something with migrating docker.img off of the cache and back? Is it a valid warning? I'm on unRAID 6.5.3 here.
  16. Shut down Docker. Migrated all data off of the cache drive. Ran the wipefs commands on sdg1 and sdg successfully. Moved the data back to the cache drive. Restarted Docker. Reset the disk statistics on the Main screen. Still no write operations to the second cache drive.
  17. Thanks! I'll review those steps later this afternoon. Will have to move some Docker stuff off of there to be safe and will attempt that. Will report back on findings. How should the main screen look when redundancy is working?
  18. Attached. Thanks for taking a look at this. laffy-diagnostics-20180901-0944.zip
  19. Last week I upgraded my cache from a single cache drive to a dual cache drive (snapshot included). That view makes it look like the second drive is idle and spun down. Is that just showing misleading information? I don't know how to verify I'm truly operating as RAID 1. These are two 1TB drives, and df shows it as a 1TB filesystem:

        root@laffy:/mnt/cache# df -h .
        Filesystem      Size  Used Avail Use% Mounted on
        /dev/sdb1       932G   47G  884G   5% /mnt/cache
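      In case it helps anyone answering: my understanding is that the btrfs tools can confirm the pool profile from the command line, something like the following (my assumption of the right invocations, not output from this box):
        btrfs filesystem show /mnt/cache   # should list both pool members
        btrfs filesystem df /mnt/cache     # "Data, RAID1" / "Metadata, RAID1" would confirm mirroring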
  20. Yeah, I understand that. The only way I can get it to boot unRAID is to make sure all disk slots are occupied and then reconfigure the BIOS to point back at the USB drive. If any disk slot is open, the BIOS gets confused and won't boot from USB.
  21. I think this is my motherboard doing this (ASUS M5A78L-M LX PLUS AM3+ AMD 760G Micro ATX AMD). Had it for years as I've been building my unRAID server from 3 to 8 drives over time. I'm trying to physically remove one of my data drives (already removed cleanly from the unRAID config). This occupied /dev/sdb. Any attempt to leave that slot vacant results in an unbootable system. I've tinkered with the BIOS settings and boot priority, and it keeps changing so that it won't boot from the USB drive. When I reset the BIOS config again and re-insert the old drive, it will boot into unRAID (otherwise it's just a blank screen). I'm using Rosewill drive cages, so not sure if that's a factor here too, but I don't think so. Anyone else run into this with an ASUS motherboard? What the hell is happening here? Why does /dev/sdb need to be attached to a hard drive to boot?
  22. Yeah, who knows what exactly happened, but turned out to be nothing serious. If I try to run the app again it does not see it as abandoned anymore. All is well.
  23. All of my drives were formatted as reiserfs when I started using unRAID. When I added drives about two years ago, it formatted them as xfs. I'm in the middle of a project of replacing all of my 3TB drives with 6TB drives. Already did the parity and just replaced the first data drive this morning. I expected it would be formatted as xfs, but it ended up maintaining reiserfs for that slot. Not worried, but just unexpected. Rebuild in progress and everything else is normal. Is this expected behavior? I thought everything new going forward would be xfs, but I may have missed a memo on that.
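      (For anyone auditing their own mix of filesystems, the mount table shows what each data disk is actually formatted as; this is just the generic check, nothing unRAID-specific:)
        mount | grep /mnt/disk   # lists each data disk with its filesystem type (reiserfs, xfs, btrfs)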
  24. Yep, all good this morning. Back to normal state and backup is in sync. It was the combination of upgrading the plugin at the same time that threw me off. Thanks!