JokesOnYou77

Members

Posts: 20
Everything posted by JokesOnYou77

  1. If this can be done in an OS update, it would be awesome. But I don't have that power, unless there's something I'm missing in the code on GitHub. What I can do to solve this problem for myself and others in the immediate term is write about 50 lines of code/JSON, with instructions, on my GitHub. By no means am I proposing that we mark this topic as closed/solved: building this into the OS would be great, but I think that's a much longer discussion around the right scope, user experience, and implementation, which means it's likely to take a while. To get that discussion started, let me share some of what I've learned so far (I lost the source links, sorry): 1. The change can't be made through CSS. There seem to be a few contributing factors here, but the big one is that the drawing is all done in Canvas. 2. ttyd is started once and has to be restarted to change the server-side configuration. In addition to the above, I have observed from my own behavior that I access the web terminal from different devices with different screen sizes at different times, I re-attach to the same tmux/screen/byobu session each time, and I'd like a different font size on each client machine. Moreover, if restarting ttyd kills the running multiplexer session because ttyd is the parent process, that would not be a great user experience (not to mention other losses, like command history, depending on how users have that configured). Both the screen-size and multiplexer issues make me think that a client/browser-side solution would be preferable to restarting ttyd on the server side. That said, a server-side "base" configuration might not be a bad idea either. I don't have the time this instant, but a good next step might be to run a test: start a tmux session, start htop or something, then restart ttyd and see if tmux is still running.
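The test described above could be sketched roughly like this. Everything here is hypothetical: it assumes ttyd is on the PATH and port 7681 is free, and the session name persist-test is arbitrary; on a real UNRAID box ttyd is managed by the OS, so the restart step would differ.

```shell
# Hypothetical persistence test: does a tmux session survive a ttyd restart?

# 1. Start a detached tmux session running htop.
tmux new-session -d -s persist-test htop

# 2. Start ttyd serving a shell, then kill and restart it.
ttyd -p 7681 bash &
TTYD_PID=$!
sleep 1
kill "$TTYD_PID"
ttyd -p 7681 bash &

# 3. tmux runs its own server process, so the session should still exist:
tmux has-session -t persist-test && echo "tmux survived the ttyd restart"

# Cleanup.
kill %% 2>/dev/null
tmux kill-session -t persist-test
```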
  2. The modified move command makes sense to me and I can change my commands accordingly. But I seem to be getting conflicting information here in that I absolutely am getting duplicate files and I can delete the one on the cache without removing the one on the array disk (checked and double-checked but I'll check again later today to confirm).
  3. Hi all, I have two shares, downloads and Media. Downloads uses a cache pool and Media does not. When a download finishes, I ssh in and mv downloads/some_dir/ /mnt/user/Media/. But this results in two copies of the data, one in /mnt/cache/Media/some_dir/ and the other in /mnt/disk1/Media/some_dir/. It seems like this is some kind of conflict between the default Linux mv behavior, which simply renames the directory entry when moving data within the same filesystem, and the UNRAID behavior that follows caching rules. Is this expected/desired behavior? It seems very counter-intuitive to me, and it results in a warning from Common Problems because a share that doesn't use a cache gets a path on a cache drive. It also raises questions for me about how reads will behave and whether there are potential race conditions (unlikely, but possible).
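One workaround sketch, under the assumption that copying (rather than renaming) into /mnt/user lets the target share's cache policy decide where the data lands: copy first, then delete the source. The paths are the ones from the post above.

```shell
# Copy into the user share so the share's cache settings place the files,
# then remove the source. --remove-source-files deletes files but leaves
# empty directories behind, which the find afterwards cleans up.
rsync -a --remove-source-files /mnt/user/downloads/some_dir/ /mnt/user/Media/some_dir/
find /mnt/user/downloads/some_dir/ -depth -type d -empty -delete
```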
  4. @ljm42 My PHP isn't very good, but it doesn't look like there's a place to inject logic for this in the UNRAID webGUI on github. That leaves browser plugins or greasemonkey scripts as the best way to distribute this in an easy to use fashion. If I were to build such a thing, what would be the right forum to announce it in?
  5. I'd also like to see the web terminal font size made configurable. As it is, I can't increase the terminal font size without also zooming the main UI page, which blows it up and makes it difficult to have the terminal and GUI both open and readable. Has anyone done the work to see how we could actually use xterm-webfonts? Maybe we can make a browser addon to inject it? UPDATE: I spent some time messing with it and reading docs and made a little progress. Entering the following in the browser JavaScript console gets the desired behavior: term.setOption("fontSize", 20). It just needs to be packaged as a browser addon to distribute. I'm unlikely to have a ton of time to do that, so I'll probably run it manually for the time being, but I hope this helps others. In the future, it looks like ttyd has options to override the client-side options on initial start: https://github.com/tsl0922/ttyd/wiki/Client-Options#basic-usage I suspect that UNRAID users could be given a config file with options they could add to the ttyd call at some point, if that's determined to be a desirable direction.
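For the server-side "base" configuration idea, ttyd's -t/--client-option flag (documented on the wiki page linked above) passes xterm.js options through at startup. A minimal sketch, assuming port 7681 is the one in use:

```shell
# Start ttyd with a larger default font; -t passes key=value pairs
# through to the xterm.js client (see the ttyd Client-Options wiki page).
ttyd -p 7681 -t fontSize=20 bash
```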
  6. I'd like to request the addition of the pixz compression tool. It's a wonderful extension to the standard xz (LZMA) compression tool that allows random access to files in the archive. For large tarballs, it provides the ability to extract any arbitrary file, or show the archive index, without reading the rest of the files in the archive (more or less constant time). I deal with 2-3 GB archives in my day-to-day work, and I'm guessing I'll be dealing with a lot more as I get into UNRAID, so it would be really nice to have random access to my compressed/archived files.
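For anyone unfamiliar with the tool, the random-access workflow looks roughly like this (foo.tar and path/to/file are placeholders):

```shell
# Compress an existing tarball; pixz writes foo.tpxz with a member index.
pixz foo.tar

# List the archive index without decompressing the member contents.
pixz -l foo.tpxz

# Extract a single member in roughly constant time using the index.
pixz -x path/to/file < foo.tpxz | tar x path/to/file
```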
  7. Packaging as a Docker container is a great idea; I do that at work to make executables. If pixz didn't need pthreads I'd even be tempted to use scratch as the base image. I'll start with a petition to Nerdpack though. For future reference: https://quip-amazon.com/sH7iAO0uaX82?tracking_code=1fFk5sAtuCu2A#NDA9DAoFFTp
  8. I've installed the Nerdpack plugin but it's missing a command I'd like to have: pixz. I've seen other posts suggesting that UNRAID isn't a "full" Linux distribution, and I'm guessing that means I need to download and compile from source; I'm comfortable exploring that if that's the route to go, but if there's a better/recommended way I'd like to try that first. To address questions I assume will come: yes, I know xz is included, and I know xz now supports blocked archives and parallelism. It's just not the same as the ease of the random access to files provided by the index with pixz (try it for yourself and you'll see what I'm talking about). I also saw that pigz is in Nerdpack, which is a nice substitute but still not the same. I'm sure I could make do with alternatives, but as I explore UNRAID during my trial period I want to see what I can and can't do to the system (with the help of the community).
  9. Ok, I would not say that my tests were definitive, but I am confident enough to say that this was user error (my fault). The ultimate issue sprang from how I made my key. I tried to concatenate a few binary files to make a file key and, for whatever reason, the key that actually registered was only the first of the concatenated files (I made sure to test with a combined keyfile size of less than 8 MB). While I don't have a complete, in-depth understanding of what happened, I think I have enough information for me to keep going with my build.
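For anyone hitting the same wall, a sketch of a simpler approach than concatenating binaries: generate one random keyfile, and inspect the key slots directly. The paths and /dev/sdX1 are placeholders, not UNRAID specifics.

```shell
# Generate a single 4 KiB random keyfile instead of concatenating files
# (LUKS reads keyfiles as raw bytes, up to an 8 MiB limit).
dd if=/dev/urandom of=/root/array.key bs=512 count=8
chmod 600 /root/array.key

# See which LUKS key slots are actually populated on a device
# (/dev/sdX1 is a placeholder for an encrypted array partition).
cryptsetup luksDump /dev/sdX1
```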
  10. Hi all, I'm pretty new here, just starting my UNRAID journey, and I just noticed there is a polls section on the website. I took a look at the RAM poll and saw the first page was from 2014 and many people were using 2 GB of RAM. I thought it would be interesting to look at how the poll results have changed over time and maybe share that back to the community. So I thought I should ask a few questions to start with: Has this kind of thing been done before? I.e. looking at forum poll results over time. Are there approved methods for scraping the forums or downloading a dump/snapshot? Are there politeness rules or access rules for scraping the forum? -- Jokes
  11. If I plug in a USB drive and directly copy data from it to a local share, or cp a local file from somewhere else to a share, is the data parity protected? E.g.: cp -r /mnt/disks/External/Movies /mnt/user/Media/. I looked at system stats and drive I/O and it looks like the answer is yes, but I didn't want to assume, since I'm new to this and don't have a good mental model for how the parity operations are integrated into the system (kernel? user-space program? something else?).
  12. I tried a bunch of things that didn't appear to fix the problem (wipefs, write the first 4 MB with zeros). That made me more sure it's something I'm doing wrong. I'll post again when I can confirm. @JorgeB Thank you for your help and prompt replies. I appreciate it.
  13. Not strictly relevant to fixing my issue, but shouldn't UnRAID have prevented me from making an array with a new key if it wasn't going to work? Is there a command I can run to quickly test if a disk will take a new key so that I can "fail fast"?
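On plain LUKS there is such a fail-fast check; whether it matches what UNRAID does under the hood is an assumption on my part (/dev/sdX1 and the keyfile path are placeholders):

```shell
# Exit status 0 means the key unlocks the LUKS header; nothing is
# mounted or modified. (Device and keyfile paths are placeholders.)
cryptsetup open --test-passphrase --key-file /path/to/keyfile /dev/sdX1 \
  && echo "key accepted"
```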
  14. I erased and formatted when adding to a new array in the GUI. And the new disks were both brand new and pre-cleared. I can try putting them on a different machine and doing a "quick erase" with /dev/zero and dd and then come back to the unRaid box.
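The quick-erase step mentioned above might look like the following. It is destructive, so the device name must be double-checked first; /dev/sdX is a placeholder.

```shell
# DANGER: destroys all signatures/data on the target disk.
# /dev/sdX is a placeholder -- verify the device name before running.
wipefs -a /dev/sdX
dd if=/dev/zero of=/dev/sdX bs=1M count=4 conv=fsync
```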
  15. Hi all, I just finished setting up my first unRAID server, or at least I thought I did. Before my final setup I took a few days to play with a test configuration on my final hardware, with fewer (smaller) drives and some play data, to make sure everything would work as desired. I tested a bunch of things, including array encryption with a dummy keyfile. Then I shut down, put in my shiny new 8 TB drives, booted up, used the "New Config" tool to start over, and started my new configuration with my SSD cache and my new strong keyfile. Then, when I got to the step in my setup procedure, "Full power cycle to make sure everything works", it didn't work. I couldn't decrypt my array. I had the fancy new keyfile that I used when creating the new array, but no matter how many times I tried, it didn't unlock. On a whim, I tried the dummy keyfile and voila, it worked. I'm guessing that what's going on here is that I didn't do the New Config before the shutdown, and I may have unlocked the old array while setting up the new one, which led to the LUKS master key not being reset by New Config. I would think that creating a new array with a different keyfile should either raise an error and fail loudly, or add a new key to the LUKS key list (wildly insecure if the user is unaware), so I think this is either a bug (silent failure/insecure default behavior) or I just have no idea what's going on (a definite possibility). It also looks like the drives are still associated with a particular mount point after running New Config, and I can't quite tell how to format them in the GUI without adding them to an array, so maybe this is related? If this is a bug, fixing it is definitely important, but the purpose of this post is to get me back on track with my server build. So how do I reset the encryption keys?
I haven't paid for the full version yet, so if I can just wipe and re-make the USB drive with the trial that's fine with me (do I need to copy the trial key first?), but I'd prefer not to invalidate the current USB key if possible. I'm also comfortable in the terminal and am happy to execute commands for a fix (though I'm really a Debian/Ubuntu/RHEL guy, not a Slackware person), but as this is a matter of security, it's essential that this be a clean fix, not a workaround that may add an attack surface in some way. Thoughts? P.S. My money is on formatting the USB drive or a new USB drive, but I'm interested to hear and learn from the community.
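For reference, the plain-LUKS way to swap keys on a single device is below; whether this is safe to run against UNRAID-managed array disks is exactly the kind of thing I'd want the community to confirm (device and keyfile paths are placeholders):

```shell
# Replace the old (dummy) key with the strong one, per encrypted partition.
# --key-file supplies the existing key; the new keyfile is the argument.
cryptsetup luksChangeKey /dev/sdX1 /root/strong.key --key-file /root/dummy.key

# Confirm the key-slot layout afterwards.
cryptsetup luksDump /dev/sdX1
```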
  16. Great project, really made it easy to set up metrics. How do I configure the polling rate and the retention strategy for the data? I don't have any prior experience with this stack, but I am hoping there is some way to configure, for example, polling drive usage only every 5 minutes while polling CPU use every second and CPU load every minute, and then configuring a retention strategy such that 5s metrics are available for 1 week, then replaced with 30s metrics (aggregated by mean or some other reduction) available for a month, and so on. I'd like to monitor the long-term behavior of my system so I can see the effects of changes I make. Also, I'm pretty new here so please excuse my ignorance, but how do I tell where the actual data/database files are being stored? Cache vs array? I see the data in /mnt/disks/appdata/Grafana-Unraid-Stack/ so I guess UnRAID is just magically presenting me with a unified filesystem and the appdata/ share's cache/array strategy is doing its work? I'm used to LUKS and whatnot, but the cache pool thing is taking some getting used to.
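In case it helps anyone answering: if the stack uses InfluxDB 1.x (an assumption on my part), retention and downsampling would look roughly like this; the database name "telegraf" and the measurement names are placeholders. The polling interval itself would live in the collector's config (e.g. Telegraf's [agent] interval setting) rather than in the database.

```shell
# Sketch for InfluxDB 1.x; database and measurement names are placeholders.
influx -execute 'CREATE RETENTION POLICY "one_week" ON "telegraf" DURATION 7d REPLICATION 1 DEFAULT'
influx -execute 'CREATE RETENTION POLICY "one_month" ON "telegraf" DURATION 30d REPLICATION 1'

# Downsample raw CPU samples to 30s means into the longer-lived policy.
influx -execute 'CREATE CONTINUOUS QUERY "cq_cpu_30s" ON "telegraf" BEGIN SELECT mean(*) INTO "telegraf"."one_month"."cpu_30s" FROM "cpu" GROUP BY time(30s), * END'
```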
  17. Glad I could be helpful. I'll definitely continue to keep an eye out for documentation issues as I progress on my build.
  18. Sorry if this isn't the right place for this, but it seemed like the best option. Issue: The descriptions of Share access controls differ between different parts of the official documentation. Specifically, the descriptions of the Private and Secure controls appear to be swapped between the Storage Management docs and the Security Best Practices guide. Given the desire to make security best practices clear and easy to follow, this discrepancy could make it hard for new users to understand what they should do. Suggested fix: update the docs to be correct, and find a way to have a single source of truth for the future (maybe through templating?). I'm pretty new to the UnRAID community, so I apologize if I've missed any posting norms. -- Jokes
  19. Hi all, It looks like the default storage configuration that most people are using is to put the InfluxDB data on a cache pool. I'd like to know if anyone has tried using the data array for data storage or found some way to split the database files across cache and array (e.g. historical data on the array, recent data on the cache). It seems like storage needs should be pretty minimal--monitoring 16 metrics stored as 64-bit floats once per minute is only about 5.3 MB per month--but it's hard to tell how much space I'll actually need, and I have a pretty small SSD cache, so I'd like to at least consider alternative configurations.
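The back-of-envelope figure above can be checked directly:

```shell
# 16 metrics x 8 bytes x 1 sample/min x (60*24*30) minutes/month
bytes_per_month=$((16 * 8 * 60 * 24 * 30))
echo "$bytes_per_month bytes"            # 5529600 bytes
echo "$((bytes_per_month / 1024)) KiB"   # 5400 KiB, i.e. about 5.3 MiB
```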