ehawman

Members
  • Posts: 23
  • Joined
  • Last visited
Everything posted by ehawman

  1. Unraid 6.12.3. So I followed this guide and got dnsmasq set up on my server. Except I couldn't install or launch dnsmasq, because it was already running. Running `ps -feww | grep dnsmasq` yielded this:
     UID    PID   PPID  C STIME TTY TIME     CMD
     nobody 10470 1     0 22:34 ?   00:00:00 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper
     root   10471 10470 0 22:34 ?   00:00:00 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper
     So I'm guessing that in the intervening 14 years, Unraid has added its own dnsmasq instance. Printing the contents of default.conf reveals that libvirt generates these files automatically. All this to say: can I use my Unraid tower as a DHCP/DNS server with dnsmasq, or would attempting to do so break libvirt? If I can, what is the most correct way to go about it?
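     From what I've read, the usual way to run a second dnsmasq alongside libvirt's is to bind each instance to its own interface, so they never fight over the same sockets. A minimal sketch of what a LAN-facing config might look like (the interface name br0 and the DHCP range are assumptions for my network, not anything Unraid or libvirt ships):

     ```
     # LAN-facing dnsmasq instance; leave virbr0 to libvirt's own copy
     interface=br0
     bind-interfaces
     except-interface=virbr0
     dhcp-range=192.168.1.100,192.168.1.200,12h
     ```

     With bind-interfaces set, this instance only listens on br0, while libvirt's auto-generated instance keeps virbr0 to itself.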
  2. Oh derp. I saw "sha256 hash", saw the hashes below, and immediately skipped the rest of the post thinking it was irrelevant. My bad.
  3. Apologies if this has already been asked, but the default Docker page always shows the containers as having an available update. This causes Fix Common Problems to have a hissy fit. I can ignore those warnings if I have to, but is there any way for the Compose Up / Update Stack processes to tell Unraid's Docker implementation that they've done an update? If not, is there a way (in your opinion) that the problem could be addressed from the Unraid side? Thanks!
  4. I have a few requests for your consideration. First, I would really like to have the latest fish shell version included. I'm using the version from Masterwishx because they're a hero, but it would be even better if this was integrated. Second, a couple of packages I like that aren't on the Modern Unix list:
     croc: a tool that allows any two computers to simply and securely transfer files and folders.
     micro: a terminal-based text editor that aims to be easy to use and intuitive, while also taking advantage of the capabilities of modern terminals. It comes as a single, batteries-included, static binary with no dependencies; you can download and use it right now!
     neovim: "Aggressively refactored vim"
     Finally, let me list the contents of the (IMO) excellent Modern Unix list, with my personal thoughts in (parens). Items that are already in NerdTools are omitted. Of these, I would call bat, fd, ripgrep, sd, and zoxide my "essential" packages.
     bat: A cat clone with syntax highlighting and Git integration.
     exa: A modern replacement for ls.
     lsd: The next-gen file listing command. Backwards compatible with ls. (Personally I prefer lsd to exa, but they both step into the same role.)
     delta: A viewer for git and diff output.
     dust: A more intuitive version of du, written in Rust.
     duf: A better df alternative.
     broot: A new way to see and navigate directory trees.
     fd: A simple, fast and user-friendly alternative to find. (This one beats the pants off of find in many cases.)
     ripgrep: An extremely fast alternative to grep that respects your gitignore.
     ag: A code-searching tool similar to ack, but faster.
     mcfly: Fly through your shell history. Great Scott!
     choose: A human-friendly and fast alternative to cut and (sometimes) awk.
     jq: sed for JSON data.
     sd: An intuitive find & replace CLI (sed alternative). (Holy smackerel, this thing slaps.)
     cheat: Create and view interactive cheatsheets on the command line.
     tldr: A community effort to simplify man pages with practical examples. (There are actually a number of implementations of this, tldr++ and tealdeer being some of the best IMO.)
     bottom: Yet another cross-platform graphical process/system monitor.
     glances: Glances, an Eye on your system. A top/htop alternative for GNU/Linux, BSD, macOS and Windows.
     gtop: System monitoring dashboard for the terminal.
     hyperfine: A command-line benchmarking tool.
     gping: ping, but with a graph.
     procs: A modern replacement for ps, written in Rust.
     httpie: A modern, user-friendly command-line HTTP client for the API era.
     curlie: The power of curl, the ease of use of httpie.
     xh: A friendly and fast tool for sending HTTP requests. It reimplements as much as possible of HTTPie's excellent design, with a focus on improved performance.
     zoxide: A smarter cd command, inspired by z. (I LOVE THIS ONE. Seriously, any z-jump implementation would be incredible, especially since we can define the database location to be on a persistent share. Makes your life so much easier.)
     dog: A user-friendly command-line DNS client. dig on steroids.
  5. The goal: run a Firefox container which uses the qbittorrentvpn container as its network. My stack: https://pastebin.com/E6Naetvt
     Result: qbittorrentvpn works perfectly, and the Firefox container successfully hooks into its network and has an internet connection (confirmed via CLI), but attempting to connect to port 3000 times out (accessing from a different machine on the same network).
     Change: manually edited iptables in the qbittorrent container via CLI:
     iptables -I INPUT 11 -i eth0 -p tcp -m tcp --dport 3000 -j ACCEPT
     iptables -I INPUT 12 -i eth0 -p udp -m udp --dport 3000 -j ACCEPT
     iptables -I OUTPUT 11 -o eth0 -p tcp -m tcp --sport 3000 -j ACCEPT
     iptables -I OUTPUT 12 -o eth0 -p udp -m udp --sport 3000 -j ACCEPT
     Tested, and this works just as I expected. So my two questions are:
     1. Is what I did "safe", both from a network-security and a stability perspective? What entity set the other rules (like the ones regarding port 8080)?
     2. What is the "correct" way to change these iptables rules so I don't have to touch this manually in the future? I can't find a way to do it through the compose file.
     Thanks for all the amazing work you do!
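     In case it helps anyone landing here: the four rules reduce to a simple pattern, so a tiny helper can generate them for any port. This is just a sketch; the function name is mine, and whether the container image offers a startup hook to feed these into depends on the image, so check its docs.

     ```shell
     # Emit the four ACCEPT rules for a given WebUI port. This only prints
     # the commands; running them (inside the container, as root) is up to you.
     gen_rules() {
       port="$1"
       for proto in tcp udp; do
         echo "iptables -I INPUT -i eth0 -p $proto -m $proto --dport $port -j ACCEPT"
         echo "iptables -I OUTPUT -o eth0 -p $proto -m $proto --sport $port -j ACCEPT"
       done
     }
     gen_rules 3000
     ```

     Printing rather than executing keeps it safe to run anywhere, and makes the rules easy to paste into whatever init mechanism the image supports.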
  6. @JorgeB @trurl THANK YOU! Y'all saved my bacon So glad to be off the external drive and in compliance at last. Y'all rock.
  7. @JorgeB To clarify, rebuilding is doing a New Config with "Parity is already valid" checked, right?
  8. @trurl @JorgeB Here they are tower-diagnostics-20221005-2252.zip
  9. @trurl Thanks for the advice. I'll try to take a look at it soon but A) It's my birthday and B) I'm heading out of town tomorrow so my time is extremely squeezed. Apologies for stringing y'all along like this. I really appreciate you taking the time.
  10. @JorgeB Will do. Unfortunately I can't seem to use "Local access" via My Servers (I'm at work at the moment). It's showing ERR_CONNECTION_TIMED_OUT. However My Servers says it's online and correctly identifies the array as stopped? Weird.
  11. @trurl Here you go! tower-diagnostics-20221005-0038.zip
  12. @JorgeB @trurl Parity complete. What's the next step?
  13. @JorgeB @trurl Running parity now. Estimated 1d10h
  14. @trurl Correct. I thought it didn't make a difference since parity was building from the same drive. Cool, I'll wait. At the end of the day I can redownload stuff manually; annoying, but not the end of the world. Note that this is two runs, first without and then with the -L flag. Not sure how I'd even make a backup at this point. Again, losing the data would be quite annoying, but I wouldn't be heartbroken. xfs_repair.txt
  15. @trurl I shucked the drive, did the New Config, but now it's saying "Unmountable: Unsupported partition layout". I ran an xfs_repair and that didn't solve the problem. Attaching diagnostics tower-diagnostics-20221002-1709.zip
  16. I'm transferring an external to internal (assuming typo?). Are you saying that when I shuck an external data drive and convert it to an internal data drive, it gets cleared (zeroed)? If so, what is my best option for cloning this disk, given that I don't have an extra 14TB drive sitting around? I'm currently using slightly under 3TB, so maybe I could warehouse it somewhere else and copy it back? How do I ensure that Unraid sees it as the same drive/data upon copying back? (Or is that even relevant?) EDIT: Perhaps I could use the new 14TB as the shelter, shuck the external, then set *that* as the parity drive?
  17. I was working on a 14TB external drive (shame on me) along with an extra 2TB drive, and no parity (double shame on me). I have purchased a 14TB internal drive, designated it as parity, and am waiting for the parity process to complete. Once done, I intend to shuck the external and transfer it inside. What concerns, if any, should I have during this process? I'm guessing that the shucking will change drive identity in some way. Will this cause hiccups in Unraid?
  18. @stephn311 I think you'll have much more success starting your own thread. This appears to be a different issue. Are you even using My Servers? I don't see any reference in servers.conf.
  19. Problem solved! TL;DR: Looked in /etc/nginx/conf.d/servers.conf and found the correct URL to be using. Once in, I updated the My Servers plugin, refreshed the page, signed in, rebooted, and voila! Shows up in the Dashboard and everything
  20. Background: the server was hiccupping its way through life since I was on a 14TB external hard drive (bad, shame on me). I didn't have money for a new drive, so I've let it sit idle for about a month and change. I have since purchased an internal 14TB drive and wish to use it as the parity so I can shuck the external and make everything right.
     So I boot it up, and My Servers reports it is signed out. I can't access the GUI via the local address because of networking rules that I didn't have the slightest idea about at the time of setup (and honestly barely have a better grasp of now). I can, however, SSH in, which is how I discovered that my sda1 drive needed an xfs_repair done. Did it. Cool. It mounts now. I can access the shared drive from my desktop! GUI? Nope.
     At this point I'm quite stumped. The Unraid tower can talk to local devices and the web. I can ping it, I can SSH into it, I can see it starting up services and whatnot. Currently on 6.10.3, but will be upgrading once everything is sorted. Attached is the syslog from my machine (I've been fighting off a cold or something, which is why the long time gap). Thanks in advance to the tech wizards on here.
     EDIT: Breakthrough! I ran cat /etc/nginx/conf.d/servers.conf and yoinked the server_name URL. It took me to the login screen, and I'm now seeing the GUI. *However*, My Servers is still showing me as logged out. On my Unraid server, clicking on my profile in the top right pops up a scary warning. So yeah, I'm scared that if I sign out I'll never be able to sign back in. Going to try updating all my plugins and hope that's the issue... syslog.txt
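     For anyone else hitting this, here's roughly what the "yoink" amounts to, as a sketch. The helper function and the sample config line are mine, purely for illustration; the real file on Unraid is /etc/nginx/conf.d/servers.conf, and its actual contents will differ.

     ```shell
     # Pull the first server_name value out of an nginx conf file
     extract_server_name() {
       grep -m1 'server_name' "$1" | awk '{print $2}' | tr -d ';'
     }

     # Demo against a throwaway sample that mimics the file's shape
     sample=$(mktemp)
     printf 'server {\n    server_name abc123.unraid.net;\n}\n' > "$sample"
     extract_server_name "$sample"   # prints abc123.unraid.net
     rm -f "$sample"
     ```

     On the server itself you'd point it at the real path, e.g. extract_server_name /etc/nginx/conf.d/servers.conf, and browse to whatever URL comes back.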
  21. That plus an xfs_repair did the trick! Thank you very much. I'll be using an internal drive once I have more to back this one up (requiring me to be less poor)
  22. Aug 12 21:58:00 Tower emhttpd: error: hotplug_devices, 1719: No such file or directory (2): Error: tagged device WD_Elements_25A3_59354B5A58424243-0:0 was (sda) is now (sdg)
     This is my biggest drive, with most of the content on it. How do I safely get it to identify as sda again? tower-diagnostics-20220813-0015.zip
  23. First, I am muddling my way through this, so please excuse the jargon I'm sure I'm abusing. I'm attempting to create a tool that will generate packages for NerdPack (and DevPack) automatically, given very little input from the user (ideally just URLs for slackbuilds.org and the source-code tar.gz, along with which version(s) of Unraid to generate packages for). There's a process that I'm ironing out with dmacias72 to actually build the package. NerdPack packages are grouped by Unraid version: you have an Unraid install of version X, you build your package for it, you save that package to the appropriate folder in NerdPack's structure, and you restart the whole process with a different version of Unraid. Rinse and repeat until you've covered all the Unraid versions you want to support. My questions to y'all are: A) realistically, do we need to be generating a new package for each X.Y.Z version? Just X.Y? Just X? And B) what is the best/cleanest way to go about this? I haven't worked extensively with Docker containers before (certainly not with creating them), but I *think* that's the right way to go here. My thinking is that Docker has a lot of flexibility with the CLI, which my app can take advantage of, and I can also map volumes to circumvent SSH shenanigans almost altogether. What would y'all recommend?
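     To make question A concrete: if packages only need to track X.Y rather than every X.Y.Z, a one-liner could map a full Unraid version string onto its package folder. The name pkg_dir is made up, just to illustrate the grouping idea.

     ```shell
     # Map a full Unraid version string to an X.Y package-folder name
     pkg_dir() {
       echo "$1" | cut -d. -f1,2
     }
     pkg_dir 6.12.3   # prints 6.12
     pkg_dir 6.11.5   # prints 6.11
     ```

     The build tool could then drop every 6.12.x build into one 6.12 folder instead of maintaining a folder per patch release, assuming packages really are compatible across patch versions.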