Posts posted by ehawman

  1. I haven't used this computer in around a year. I wanted to tinker with it, so I booted it up. Unraid Connect shows it as offline. I have a custom HTTPS port, but trying to reach it over my local network lands on an nginx 404 page. I'm fairly sure I have the correct HTTP port as well, but that gets ERR_CONNECTION_REFUSED, and I seem to recall having disabled plain HTTP.

     

    I am able to SSH in from another machine on my network and I have root creds so there's a foothold. I can also ping public IPs from it so it definitely has internet access.

     

    I see suggestions on this forum to boot into safe mode, but the only instructions on how to do that use the WebUI.

     

    I would like to just update my current config rather than restart from scratch. How can I get into the WebUI from here?
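    One thing I did find while poking around over SSH: Unraid records the webGUI ports in /boot/config/ident.cfg, so at least I can confirm what I should be hitting. A sketch of reading it — the sample values written below are hypothetical stand-ins; on the server you'd source the real file:

```shell
# Unraid's webGUI ports live in /boot/config/ident.cfg, which is plain
# KEY="value" shell syntax, so it can be sourced directly over SSH.
# The sample written here is a hypothetical stand-in for the real file.
cfg=/tmp/ident.cfg                 # on the server: /boot/config/ident.cfg
cat > "$cfg" <<'EOF'
USE_SSL="yes"
PORT="80"
PORTSSL="8443"
EOF
. "$cfg"                           # pulls PORT, PORTSSL, USE_SSL into the shell
echo "HTTP port: $PORT  HTTPS port: $PORTSSL  SSL: $USE_SSL"
```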

     

    Thanks!

  2. Unraid 6.12.3

     

    So I followed this guide and got dnsmasq set up on my server.

     

    Except I couldn't install or launch dnsmasq because... it was already running.

     

    Running `ps -feww | grep dnsmasq` yielded this:

     

    UID PID PPID C STIME TTY TIME CMD
    nobody 10470 1 0 22:34 ? 00:00:00 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper
    root 10471 10470 0 22:34 ? 00:00:00 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper

     

    Ok, so I'm guessing that in the 14 years since that guide was written, Unraid has added its own dnsmasq instance. Printing the contents of default.conf reveals that libvirt generates these files automatically.

     

    So all this to say: Can I use my Unraid tower as a DHCP/DNS server with dnsmasq or would attempting to do so break libvirt? If I can, what is the most correct way to go about that?
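    My current guess at a non-conflicting config, untested: libvirt's instance only serves its NAT bridge (virbr0), so a second, system-wide dnsmasq should be able to coexist as long as it never binds that interface. The interface name and DHCP range below are placeholders for illustration:

```
# /etc/dnsmasq.conf sketch: bind only to explicitly listed interfaces so we
# don't collide with libvirt's dnsmasq, which serves virbr0
bind-interfaces
interface=br0                               # Unraid's main bridge (adjust)
except-interface=virbr0                     # leave libvirt's NAT network alone
dhcp-range=192.168.1.100,192.168.1.200,12h  # example LAN range
```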

     

  3. Apologies if this has already been asked, but the default Docker system always shows the containers as having an available update. This causes Fix Common Problems to have a hissy fit. I can ignore those warnings if I have to, but is there any way for the Compose Up / Update Stack processes to tell Unraid's Docker implementation that they've done an update?

     

    If not, is there a way (in your opinion) that the problem could be addressed from the Unraid side?

     

    Thanks!

     

  4. I have a few requests for your consideration.

     

    First, I would really like to have the latest fish shell version included. I'm using the version from Masterwishx, who is a hero, but it would be even better if this were integrated.

     

    Secondly, a few packages that I like which aren't on the Modern Unix list:

    • croc: A tool that allows any two computers to simply and securely transfer files and folders.
    • micro: A terminal-based text editor that aims to be easy to use and intuitive, while also taking advantage of the capabilities of modern terminals. It comes as a single, batteries-included, static binary with no dependencies.

    • neovim: "Aggressively refactored vim"

     

    Finally, let me list the contents of the (IMO) excellent Modern Unix list, with my personal thoughts in (parens). Items that are already in NerdTools are omitted. Of these, I would call bat, fd, ripgrep, sd, and zoxide my "essential" packages.

    • bat: A cat clone with syntax highlighting and Git integration.
    • exa: A modern replacement for ls.
    • lsd: The next gen file listing command. Backwards compatible with ls. (Personally I prefer lsd to exa, but they're both stepping into the same role)

    • delta: A viewer for git and diff output.

    • dust: A more intuitive version of du written in rust.

    • duf: A better df alternative.

    • broot: A new way to see and navigate directory trees

    • fd: A simple, fast and user-friendly alternative to find. (This one beats the pants off of find in many cases)

    • ripgrep: An extremely fast alternative to grep that respects your gitignore.

    • ag: A code searching tool similar to ack, but faster.

    • mcfly: Fly through your shell history. Great Scott!

    • choose: A human-friendly and fast alternative to cut and (sometimes) awk.

    • jq: sed for JSON data.

    • sd: An intuitive find & replace CLI (sed alternative). (Holy smackerel this thing slaps)

    • cheat: Create and view interactive cheatsheets on the command-line.

    • tldr: A community effort to simplify man pages with practical examples. (There are actually a number of implementations of this, tldr++ and tealdeer being some of the best imo)

    • bottom: Yet another cross-platform graphical process/system monitor.

    • glances: Glances an Eye on your system. A top/htop alternative for GNU/Linux, BSD, Mac OS and Windows operating systems.

    • gtop: System monitoring dashboard for terminal.

    • hyperfine: A command-line benchmarking tool.

    • gping: ping, but with a graph.

    • procs: A modern replacement for ps written in Rust.

    • httpie: A modern, user-friendly command-line HTTP client for the API era.

    • curlie: The power of curl, the ease of use of httpie.

    • xh: A friendly and fast tool for sending HTTP requests. It reimplements as much as possible of HTTPie's excellent design, with a focus on improved performance.

    • zoxide: A smarter cd command inspired by z. (I LOVE THIS ONE. Seriously any z jump implementation would be incredible, esp since we can define the database location to be on a persistent share. Makes your life so much easier.)

    • dog: A user-friendly command-line DNS client. (dig on steroids)
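    One follow-up on zoxide and the persistent database: its data location is controlled by the _ZO_DATA_DIR environment variable, so pointing it at a persistent share is all it takes to keep jump history across reboots of Unraid's RAM-backed rootfs. A sketch — the share path is a placeholder, and /tmp stands in for it here:

```shell
# Point zoxide's database at persistent storage so jump history survives
# reboots. The path is a hypothetical example; on Unraid something like
# /mnt/user/appdata/zoxide would be the real target.
export _ZO_DATA_DIR=/tmp/appdata/zoxide    # e.g. /mnt/user/appdata/zoxide
mkdir -p "$_ZO_DATA_DIR"
echo "zoxide data dir: $_ZO_DATA_DIR"
```

    This would go in the shell's startup file (or the go file) so it's set before zoxide initializes.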

  5. The goal: Run a Firefox container which uses the qbittorrentvpn container as its network.

     

    My stack: https://pastebin.com/E6Naetvt

     

    Result: qbittorrentvpn works perfectly, the Firefox container successfully hooks in to its network and has an internet connection (confirmed via CLI), but attempting to connect to port 3000 times out (accessing from a different machine on the same network). 

     

    Change: Manually edited iptables in the qbittorrent container via CLI 

     

    iptables -I INPUT 11 -i eth0 -p tcp -m tcp --dport 3000 -j ACCEPT
    iptables -I INPUT 12 -i eth0 -p udp -m udp --dport 3000 -j ACCEPT
    iptables -I OUTPUT 11 -o eth0 -p tcp -m tcp --sport 3000 -j ACCEPT
    iptables -I OUTPUT 12 -o eth0 -p udp -m udp --sport 3000 -j ACCEPT

     

    Tested and this works just as I expected.

     

    So my two questions are:

    1. Is what I did "safe", both from a network security and a stability perspective? What set the other rules (like the ones for port 8080)?

    2. What is the "correct" way to change these iptables so I don't have to manually touch this in the future? I can't find a way to do this through the compose file.
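    For now, to avoid fat-fingering the manual step, I've wrapped it in an idempotent helper using iptables -C, which checks whether a rule already exists before inserting it. Dry-run sketch below: the stub function just echoes instead of calling the real binary, so delete the stub inside the container to apply the rules for real. (I've also dropped the explicit rule positions for simplicity; -I without an index inserts at the top of the chain.)

```shell
# Dry-run stub so this sketch is safe to run anywhere. Remove it inside the
# container to use the real iptables. -C returns non-zero when the rule is
# absent, which is what we simulate here.
iptables() { [ "$1" = "-C" ] && return 1; echo "iptables $*"; }

add_rule() {
  # only insert the rule if it is not already present
  iptables -C "$@" 2>/dev/null || iptables -I "$@"
}

add_rule INPUT  -i eth0 -p tcp -m tcp --dport 3000 -j ACCEPT
add_rule INPUT  -i eth0 -p udp -m udp --dport 3000 -j ACCEPT
add_rule OUTPUT -o eth0 -p tcp -m tcp --sport 3000 -j ACCEPT
add_rule OUTPUT -o eth0 -p udp -m udp --sport 3000 -j ACCEPT
```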

     

    Thanks for all the amazing work you do!

  6. @trurl 

     

    Quote

    Doesn't look like you let parity build complete.

     

    Correct. I thought it didn't make a difference since parity was building from the same drive.

     

    Quote

    I think the way forward is going to be to build parity, then rebuild the disk from parity, but let's have @JorgeB take a look. 

     

    Cool. I'll wait.

     

    At the end of the day I can redownload stuff manually. Annoying, but not the end of the world. Note that the attached log covers two runs: first without, then with, the -L flag.

     

    Not sure how I'd even make a backup at this point. Again, losing the data would be quite annoying but I wouldn't be heartbroken.

    xfs_repair.txt

  7. I'm transferring an external to an internal (assuming that was a typo?).

     

    Are you saying that when I shuck an external data drive and convert it to an internal data drive, it gets cleared (zeroed)? If so, what is my best option for cloning this disk, given that I don't have an extra 14TB drive sitting around? I'm currently using slightly under 3TB, so maybe I could warehouse the data somewhere else and copy it back? How do I ensure that Unraid sees it as the same drive/data upon copying back? (Or is that even relevant?)

     

    EDIT: Perhaps I could use the new 14TB as the shelter, shuck the external, then set *that* as the parity drive?

  8. I was working on a 14TB external drive (shame on me) along with an extra 2TB drive, and no parity (double shame on me). I have purchased a 14TB internal drive, designated it as parity, and am waiting for the parity process to complete. Once done, I intend to shuck the external and transfer it inside.

     

    What concerns, if any, should I have during this process? I'm guessing that the shucking will change drive identity in some way. Will this cause hiccups in Unraid?

  9. Background: The server has been hiccupping its way through life because I was running it off a 14TB external hard drive (bad, shame on me). I didn't have money for a new drive, so I let it sit idle for about a month and change. I have since purchased an internal 14TB drive and wish to use it as parity so I can shuck the external and make everything right.

     

    So I boot it up, and My Servers reports it is signed out.

     

    [screenshot: My Servers showing the server signed out]

     

    I can't access the GUI via the local address because of networking rules that I didn't have the slightest idea about at the time of setup (and honestly barely have a better grasp of now).

     

    I can, however, SSH in, which is how I discovered that my SDA1 drive needed an xfs_repair done. Did it. Cool. It mounts now. I can access the shared drive from my desktop!

     

    GUI? Nope.

     

    At this point I'm quite stumped. Unraid tower can talk to local devices and the web. I can ping it, I can SSH into it, I can see it starting up services and whatnot.

     

    Currently on 6.10.3 but will be upgrading once everything is sorted.

     

    Attached is the syslog from my machine (been fighting off a cold or something which is why the long time gap). Thanks in advance to the tech wizards on here.

     

    EDIT: Breakthrough! I ran `cat /etc/nginx/conf.d/servers.conf` and yoinked the server_name URL. That took me to the login screen, and I'm now seeing the GUI. *However*, My Servers still shows me as logged out. On my Unraid server, clicking my profile in the top right pops up a scary warning.
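    For posterity, that yoink can be scripted, since nginx's generated config is machine-parseable. A sketch against a hypothetical sample of servers.conf; on the server, point it at /etc/nginx/conf.d/servers.conf directly:

```shell
# Pull the server_name out of Unraid's generated nginx config. The file
# written here is a hypothetical sample; on the server, read
# /etc/nginx/conf.d/servers.conf instead.
conf=/tmp/servers.conf
cat > "$conf" <<'EOF'
server {
    listen 443 ssl;
    server_name abc123.unraid.net;
}
EOF
name=$(awk '$1 == "server_name" { gsub(";", "", $2); print $2; exit }' "$conf")
echo "webGUI at: https://$name"
```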

     

    [screenshot: sign-out warning dialog]

     

    So yeah I'm scared that if I sign out I'll never be able to sign back in. Going to try updating all my plugins and hope that's the issue...

    syslog.txt

  10. Firstly, I am muddling my way through this, so please excuse the jargon I'm sure I'm abusing.

     

    I'm attempting to create a tool that will generate packages for NerdPack (and DevPack) automatically, given very little input from the user (ideally just URLs for slackbuilds.org and source code tar.gz, along with which version(s) of Unraid to generate packages for).

     

    There's a process that I'm ironing out with dmacias72 to actually build the package.

     

    NerdPack packages are grouped by Unraid version. You have an Unraid install of X version, you build your package for it, you save that package to the appropriate folder in NerdPack's structure, you restart the whole process with a different version of Unraid. Rinse and repeat until you've covered all the Unraid versions you want to do.

     

    My questions to y'all are: A) Realistically, do we need to be generating a new package for each X.Y.Z version? Just X.Y? Just X? And B) what is the best/cleanest way to go about this? I haven't worked extensively with Docker containers before (certainly not with creating them), but I *think* that's the right way to go here. My thinking is that Docker has a lot of flexibility with the CLI, which my app can take advantage of, and I can also map volumes to circumvent SSH shenanigans almost altogether.
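    To make question B concrete, here is the shape of the loop I have in mind. Every name in it is hypothetical; the real build step would be a docker run against a per-version builder image, with a volume mounted so the finished package lands outside the container:

```shell
# Sketch of a per-Unraid-version build loop. Every name here is
# hypothetical; touch stands in for the real containerized build so the
# sketch runs anywhere.
versions="6.9 6.10 6.11"
pkg=fish-3.6.1
outdir=/tmp/nerdpack-out
for v in $versions; do
  mkdir -p "$outdir/$v"
  # placeholder for something like:
  #   docker run --rm -v "$outdir/$v":/out unraid-builder:$v /build.sh $pkg
  touch "$outdir/$v/$pkg-x86_64-1.txz"
done
ls "$outdir"
```

    The volume mount is what lets the tool collect packages per version without any SSH copying.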

     

    What would y'all recommend?
