Everything posted by jumperalex

  1. Be sure you do not have any old plugins installed, like the dynamix.plg that was a beta test version. I would start in Safe Mode and see if that works, and then start re-installing plugins to see if one is breaking unRAID. "I tried this already, but I was not able to start Docker with 6.3. I uninstalled all Docker plugins and deactivated Docker, then upgraded to 6.3, and then I tried to create a docker image and start it. Creating was possible, but no start. Did it all from scratch several times, but no luck; as soon as 6.3 is on, Docker can't be started." Start 6.3, attempt to start the docker, post logs. If no docker is even shown for you to be able to start, then post logs. Either way, we need logs or we can't help.
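     A quick way to pull the relevant logs from the console; the container name plex below is only an example, substitute your own:
         # list all containers, including ones that failed to start
         docker ps -a
         # show the log output of one container (name is an example)
         docker logs plex
         # check the unRAID system log for docker service errors
         tail -n 100 /var/log/syslog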
  2. How can I do a restart command? Dashboard -> click on Plex to show the context menu -> Restart -> Profit!
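     The same thing works from the command line, assuming the container is named plex (yours may differ):
         docker restart plex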
  3. FWIW, plex wasn't accessible, despite the docker "running", until I issued a restart command for it. Then all was good. This was after the first reboot after the update.
  4. "I seem to have the same issue. Did you erase the appdata openvpn folder and start over fresh? I have all my devices set up and would hate to have to recreate it all. Thanks" Yes, that is what I had to do. I only have one user and two devices, so I wasn't too concerned about retaining settings. You might get away with just moving the bin folder out of there and seeing if it recreates correctly. Or go the other way: start with a fresh appdata folder, and then see if you can import your device connection settings?
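     A sketch of the move-the-bin-folder approach from the console; the paths assume a default unRAID appdata layout and are examples only:
         # stop the container first, then move the bin folder aside
         docker stop openvpn-as
         mv /mnt/user/appdata/openvpn-as/bin /mnt/user/appdata/openvpn-as-bin.bak
         # start it again and see if the folder is recreated cleanly
         docker start openvpn-as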
  5. hmmm I just tonight told it to update from the Docker tab. I take it I just need to delete the whole thing? So I removed the container and the image, re-pulled from CA, and I'm still looking at 2.1.2. Would it be because I reused my appdata folder without emptying it? EDIT: [sigh] yup, that was it. Cleared out appdata. I'm sure somewhere in this thread it is mentioned.
  6. sooo openvpn-as is now up to version 2.4, but I only seem to grab 2.1.2 .... am I missing something?
  7. So, just to bring my own thread back OT, I feel like we've got a few open questions concerning the real threat surface of:
     - The mover script / mv being at risk from malformed file names; I can't imagine this is easy or lucrative enough for a black hat to even try
     - Docker apps being granted R/W access to shares where they only need R/O; fixing this can at least reduce the impact of an attack
     - Docker itself and how to mitigate its compromise; I think the answer is backups ... anyone else?
     - Compromise of any media player, like Plex's custom ffmpeg, via a malformed media file posted to torrent sites; could be lucrative if it's possible. Let's say, a poisoned GoT torrent hmmmm
     - Ditto for a malicious RAR and the tool being used to extract it
     I mean, these are the server-side attacks that I can think of for a box like mine, but the way I see it, all except a Docker exploit can only hit my media/torrents, which is less of a concern than my PC backups, which hold actual important data. Still, it would be nice to discuss ways to further prevent an attack and limit the extent of the damage should it happen.
  8. Seems to be a bit of a consistency challenge in these comments :) Hey, discussing, researching, and evaluating various security best practices are well within my job description.
  9. Ohhh now settle down, old man, I'm working. I am just doing "other things" while awaiting my scripts to run. Well, at least that is what I tell the guy sitting behind me. now git awf MA lern
  10. Well, remember: Mover and Docker do not access shares via SMB. They access them directly through /mnt. So rework your entire thinking with that in mind. So yes, Mover can access ALL file shares, not least because it is run as root (99% sure of that at least, hehe). But Windows has no way of making Mover do anything, because its only toehold into the server is via SMB. That means 1) dropping a malicious file into a share, and 2) an application with root (or elevated to root through chicanery) executing that file (everything I said about ruTorrent, rar, and Plex above). But the Mover doesn't execute files per se. I suppose a really malformed filename might make the mover script or mv crash and execute arbitrary code. But I mean, I haven't heard about that happening, and what would that filename even look like? I have the same thoughts about cron scripts. Docker is, as I said above, a whole other animal. The Docker service runs with deep hooks into the system, so anything that can break out of it is cause for severe concern, but there is little we can do about it other than "physical" isolation of backups. Again, just my musings as I avoid work long enough to just leave. I hope someone comes in and corrects me, because learning will occur
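     On the malformed-filename question: the classic trap is less about crashing mv and more about a filename that begins with a dash being parsed as an option. Defensive scripting guards against it; a minimal sketch (the paths are examples, not the actual mover script):
         # '--' ends option parsing, so a file named '-rf' is treated as data
         mv -- "$src" "$dst"
         # null-delimited find/xargs survives spaces and newlines in names
         find /mnt/cache/Movies -type f -print0 | xargs -0 -I{} mv -- {} /mnt/disk1/Movies/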
  11. So the three places that come immediately to mind for my server are .torrents, rar files, and media files (which might or might not have been rar'ed in the first place). Looking at each file type and its ability to cause arbitrary code execution / privilege escalation if maliciously crafted:
     - .torrents would need to trigger ruTorrent
     - rar files get auto-extracted after download, so the unrar application (I think bundled with ruTorrent or part of the Docker image) would need to be exploited
     - media files get processed by Plex (specifically its instance of ffmpeg) sometimes, but are more often than not direct-streamed to my Roku, so the malicious file would need to trigger Plex's ffmpeg or the Roku to execute arbitrary code or elevate privileges
     Now, I have no idea how to audit that, so I can only look into how we do our best to isolate those dockers so they can do no harm. First and foremost is the rule of least privilege: don't give the docker access to more than it needs, and don't give it R/W access if it doesn't need it. Right now, as best I can tell, a Docker app can only access the /mnt paths we give it access to. Plex of course has access to all media, but it really only NEEDS read access, unless we want to allow it to delete from the GUI (I actually like that for deleting things after watching them). I'm not at my machine; can we limit dockers to R/O mounts? (see the sketch below) ruTorrent would require a safe space to write torrents to that is regularly swept into an R/O area. In my case that is easy enough, though it means double storage. In the case of rar'ed files I have that already anyway (keeping the rar files for seeding), so all I'd need to do is sweep the extracted file; for non-rar'ed files I'd be making a second copy. That seems fair enough. So anything that can compromise the application within the container should be limited to damaging whatever the container has R/W access to. Then of course there is the worst case: breaking out of the container. In my book, that is what backups are for, because there is a limit to what I can do to prevent exploiting Docker beyond keeping my server up to date with Limetech. Just my thoughts as I avoid work ;-)
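     To answer my own question: Docker does support read-only bind mounts via a :ro suffix on the volume mapping (as far as I can tell, the unRAID template's per-path settings expose the same idea). A sketch; the container name, image, and paths are examples only:
         # media mounted read-only; only the download area stays writable
         docker run -d --name plex \
           -v /mnt/user/Movies:/media/movies:ro \
           -v /mnt/user/Downloads:/downloads:rw \
           plexinc/pms-docker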
  12. ooff, for that kind of money I'll just pay for enough cloud storage to also protect myself from theft / catastrophe. Then again, my data isn't THAT important hahaha. My firebox is one of the better ones, but of course still not up to your standard. I've also considered upgrading my thumbdrive to something a bit more robust, able to tolerate higher temps inside any enclosure; like, all metal and waterproofed. Any worries about malware triggering the WOL? I don't know enough to know if that is a silly question. Either way, to do that would require a second backup-only server, which is an option, but I just haven't committed yet.
  13. And I agree with you. There are better ways to handle this that all require large changes in how the share system works in unRAID. But even there the plugin has its place, though one of its major shortcomings is that the more options you enable in it, the more likely an inadvertent, innocent trip of the plugin becomes. Yeah, honestly, that was some of it: having to deal with inadvertent triggers and fine-tuning it. Since my system was set to all PUBLIC before this weekend, I figured that was the bigger bang for the buck in my case. For sure. And that's why I might eventually add it if I can't make one of my other ideas work. Basically a R/W "drop box" for every share that is emptied into its paired share periodically / on an inotify trigger (a sketch of that idea is below), so that the final destination never needs to be R/W except when I want to delete over SMB ... and right now I do that even less than I currently write over SMB. For sure. I'm about as locked down as I think I can get, with FF running NoScript and uBlock, my SO running FF with AdBlock+, and both of us running Windows Defender (not always the best in tests, but not always the worst). I'll probably convert her to uBlock soon, but her tolerance for breakage is much lower than mine ... well, MY tolerance for dealing with her when there is web breakage is low, but that is also why I made an effort to isolate our backups from each other, so that no single machine can take down the whole system. Also, neither of our machines is actually sharing with the other in any way, but I need to research more about how to prevent malware from crawling my network. But as with everything networked, there is no ONE most important thing; it is all defense in depth and creating firebreaks where possible, because it is impossible to secure anything 100%
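     A minimal sketch of that drop-box sweep, assuming inotify-tools is available; the share names are examples:
         #!/bin/bash
         # watch the writable drop box and sweep finished files into the
         # read-only share (root can still write to it directly via /mnt)
         inotifywait -m -e close_write --format '%w%f' /mnt/user/Movies_dropbox |
         while read -r f; do
             mv -- "$f" /mnt/user/Movies/
         done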
  14. Yeah, that was one thing I wasn't sure about. Are you saying that, for the most part, a hidden, unmapped SMB share is ransomware-safe? My gut says it should be, since how would any ransomware even know to find it? I mean, if they're REALLY good, they'd write something that sits and waits for a few days to collect info on hidden shares, but is any ransomware doing that?
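     For anyone wondering what "hidden" means under the hood: as far as I can tell, the unRAID export toggle maps to Samba's browseable flag, so the share still answers if you know its name but never shows up when browsing. A hedged smb.conf sketch; the share and user names are examples:
         [Alex_PC_Backup]
             path = /mnt/user/Alex_PC_Backup
             browseable = no       ; hidden from network browsing
             valid users = alex_backup_user
             read only = no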
  15. An excellent point. For sure I will be self-auditing my own behavior over time to see if I find myself constantly flipping any shares to/from R/W. If I do, I'll certainly look into other options, such as more robust backups, to mitigate the risk of an open share being hit. One thing I did do was put in a feature request http://lime-technology.com/forum/index.php?topic=54196.0 looking for better visibility, auditing, and bulk control of share permissions (technically RobJ asked for that last bit, but I was trying to take baby steps). That would make it easier to avoid mistakes. FWIW, over time I've been unmapping and hiding many shares as a test, and I've found myself not missing them. So that was sort of my first test to see if I could go this far. Honestly, no. I applaud the author for his efforts, and I might even add the tool myself. But I consider it a line of defense AFTER appropriate application of permission policies and good backups. I also think there are more robust solutions possible, but they require Limetech's buy-in to make them happen. They've been discussed in various ransomware threads. Yeah, I wish I could have a portable anything at work, but that isn't an option. You're right about the firebox. Worse, it sits 3 ft from the server. I'd move it, but it's literally in the safest place in the house (basement, surrounded by concrete, least amount of timber overhead). I should probably buy another and put it in the shed that is 50 ft from the house. Of course, the better option is to settle on a cloud solution for my really important files (not that numerous) and consider an external spinner for my media collection (which really isn't critical, but would be annoying to lose). A single 8TB drive would do me well for a while, especially since I don't hoard media; I delete 90% of it once watched. I'd consider a second unRAID with rsync (sketch below) if I had a geographically separated place to put it, but I don't, so an HDD placed outside the house is my best bet without paying for large cloud storage. Then again, honestly, cost isn't the factor, other than hating to waste money and wanting control over my data, so the right solution is still something I'm looking for.
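     If I ever do go the second-box route, the core of it is one command; a sketch, with the hostname and paths as pure examples:
         # mirror the important share to a remote box over ssh
         rsync -avz --delete /mnt/user/Important/ backupbox:/mnt/user/Important_mirror/
     Worth noting: --delete will happily mirror encrypted files over good ones, so the receiving end needs versioning or snapshots to be a real ransomware defense.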
  16. hahaha yeah, I had almost all those ideas after I posted and while I continued to re-engineer my system, but I didn't want to sound greedy. I was really jazzed by the idea of a scheduled R/W period for my backups. And I still would be. But for now my solution is documented in the thread below, where I'm seeking advice on / an audit of my method. https://lime-technology.com/forum/index.php?topic=54210.0
  17. Hey everyone. With the new spate of social media ransomware attacks, and an SO who loves her social media, I thought I'd check in to see if the illuminati had any thoughts on my system lockdown. So here is my setup.
     Media Shares
     - All media shares are SECURE and all users are set to READ-ONLY
     -- These are written to, typically, by a torrent docker, which of course can access them without needing SMB permission.
     -- In the occasional scenario where I need to add something over SMB, I can set my PC's user (jumperalex) to R/W for the few minutes it takes to copy over, and then revert.
     -- I've considered creating a new user, "jumperalex_rw" (with R/W access to all media shares), that I can enter into the Windows Network Credentials dialog pop-up when I try to access a media share (see the sketch after this post). So long as I don't select "Remember my credentials", I feel like this should prevent ransomware on any external machine from writing to the SMB shares.
     Data Backup Shares
     - I created backup shares for each computer, "Alex_PC_Backup" and "Lisa_PC_Backup", and set them to PRIVATE, but not yet hidden
     - I created two new users, "alex_backup_user" and "lisa_backup_user", and granted each R/W access to only its respective backup share; all other users have NO ACCESS to these shares
     - I set up the backup process on each machine to send the backups to its respective share, entering the credentials for the share's respective user when prompted ... this was all in an admin-elevated session.
     -- For reference, I did this using Windows 10's "Backup and Restore (Windows 7)" on my machine and Acronis True Image 2013 on Lisa's
     - At this point I set the backup shares to HIDDEN
     - I tested whether both the standard user and the admin could access the backup shares, and they could not; not their own and not the "other" machine's. Heck, they couldn't even see them once set to HIDDEN.
     - I then, of course, tested the backup creation, and all worked as hoped on both machines.
     So as best I can tell, if any machine gets hit by ransomware, it will itself of course get got. But it can't touch any media shares. Similarly, as best I can tell, my backup shares will all be protected, unless the malware is able to grep the machine's backup_user credentials from wherever they were stored when I typed them in to allow the backup program to access the share. But that storage seems to be segregated from normal, or even admin, user access, as evidenced by the fact that I couldn't access the shares that way. Also, the stricken machine has never seen the credentials for the other machine's backup share, which isn't even visible. For the record, my REALLY important, non-recreatable data is also regularly synced to a thumbdrive and stored in a firebox. My next goal is something like peer-to-peer CrashPlan, but last I checked it was still a fight every time they updated something. In the meantime, what are anyone's thoughts? What have I missed / assumed incorrectly? What else should I test?
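     On the jumperalex_rw idea: the same one-shot connection can be made from a Windows command prompt. A sketch; the server, share, and user names are examples (the * prompts for the password, and /persistent:no keeps the mapping from being restored at logon):
         net use \\TOWER\Movies /user:jumperalex_rw * /persistent:no
         :: ... copy the files over ...
         net use \\TOWER\Movies /delete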
  18. Hello, I've just finally gotten on the security train, with all the talk of ransomware lately and a housemate who just loves social media. Anyway, I know I can look at the share listing and see each share's status as Public|Secure|Private, but what I'd also like to see are a few other things:
     1) An indicator showing whether ANY user of the share is granted write privileges
     2a) a hover-over to show who those users are, or
     2b) a button to expand a share's row to display all users, their permission level, and the ability to change their share permissions right there
     - Visually this could look much like what is done when we "compute" a share's size
     3) On the current main share page, color-code read-only and read/write (with ADA-compliant icons) to make it a quicker visual scan to find users with write privileges
     The current way of doing things is fine with just a few shares and a few users, but it quickly becomes annoying / mistake-prone to manage as things get more complicated. And complicated is what happens as you add more shares and more users to provide the granular control needed to create firebreaks against ransomware, such as only allowing each computer access to its own backup share with its own backup user, rather than granting access to the standard user or, worse, guest access. Relatedly, it would be very useful, when looking at a user (or maybe even in the Users tab), to have a list of that user's share permissions, to easily audit who has access to what. Again with color / icon cues.
  19. To be clear, unRAID needs very little RAM, and if you are running out of 4 GB in two days, you'll run out of 8 GB in four days. Find the hog or leak before throwing RAM at it. Then add RAM if it really is justified. Sent from my Nexus 7 using Tapatalk
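     Finding the hog is usually a two-command job from the console; a sketch:
         # overall memory picture (Linux using RAM for cache is normal, not a leak)
         free -m
         # top ten processes by resident memory
         ps aux --sort=-%mem | head -n 11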
  20. I don't see why that matters, though. Of course different algorithms produce different hashes. But the user doesn't have to change algorithms when they change hardware. They can, but they don't have to. It won't be the fastest choice from that point on, but right now it isn't the fastest choice either. So I see that as a wash, with the benefit of having the fastest choice on first use and the option to start over if it is important enough on new hardware.
  21. meh, I think if different machines produced different results we'd have heard about it by now. Blake and Blake2 are hardly untested. https://tools.ietf.org/html/rfc7693 You might as well judge SHA harshly because SHA-256 makes different hashes than SHA-512. I admit it would have been nicer if single vs. parallel made the same hash, but from what I've read of the technical articles, it is a different way of constructing the hash, and a different result is impossible to avoid, or it wouldn't be as much of a speedup. Blake2s vs Blake2b not making the same hash also makes sense since, well, they aren't even the same length. So I'm not sure what would lead you to make the leap from different algorithms creating different hashes to the same algorithm creating different hashes on different machines? Still, I don't see portability as a concern. The user can choose which Blake2 algorithm to use, just like they choose to use Blake2 over SHA or MD5. It is no more or less portable than any of the other hash functions that produce different hashes. Also, should a verification check generate a metric sh!t ton of failures, that might indicate the wrong algorithm is being used, so maybe double-check against the other options. It should be fairly obvious, and it just needs to be a user option when verifying. If nothing else, the hash-check code can cut the options in half just by looking at the length of the hash; b is longer than s (a sketch of that below). So really what should probably happen is that one algorithm has to be chosen on install (and that might be overly restrictive), by the user, and then a warning issued if the user changes it. Practically speaking, that is only going to happen for anyone moving from something without AVX to something with it. After that they'd literally have no reason to change ever again, until another faster, better algorithm is created. Just like when everyone moved from MD5 to SHA-256 to SHA-512. And they don't even HAVE to change (i.e., start over), because all that will happen is they will not be running the fastest, most secure hash for their system. A fact already true right now. As for saturating the system, I sure hope the OS is smart enough to load-balance, but even if it isn't, that is why you run it at night when nothing else is going on. Other than the implied ask of you doing more work, I don't see the problem with offering user choice.
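     That length check is trivial, since a default Blake2b/Blake2bp digest is 512 bits (128 hex characters) while Blake2s/Blake2sp is 256 bits (64 hex characters). A sketch, assuming the stored hash is a plain hex string in $hash:
         # guess the Blake2 family from the digest length
         case ${#hash} in
             128) echo "blake2b or blake2bp" ;;
             64)  echo "blake2s or blake2sp" ;;
             *)   echo "not a default-length Blake2 digest" ;;
         esac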
  22. I'm just popping in while in an Uber to celebrate life events, but anyone worried about excessive email notifications etc. MUST look into Pushbullet. It is amazing. And if you happen to not be allowed to bring your phone into your work, but can install a Firefox or Chrome extension, then you can get SMS at work too. I frankly judge myself for waiting so long to hop on the train.
  23. Some quick testing on my Windows machine (because I am lazy) showed the following timed results for a 1.32 GB file on an SSD with an AMD Phenom II X4-975:
     - -h blake2b: 4.39 s (I can't tell what this is optimized for, other than 64-bit)
     - -h blake2s: 6.79 s (optimized for 8- to 32-bit)
     - -h blake2bp: 2.04 s (optimized for 4-way parallel)
     - -h blake2sp: 2.83 s (optimized for 8-way parallel)
     I don't know which parameter is being used by this plugin, but clearly, even on a "modern" CPU, it can make a large difference. If I had to guess, my 8-core unRAID CPU would probably do better on the 8-way-optimized version, and would probably get even more benefit from the fact that it implements a more modern instruction set, including AVX, FMA4, and XOP, which my desktop does not. Makes me wonder if your C2750, being an 8-core, would benefit from using either the blake2bp or even the blake2sp hash function?
  24. Right from the webpage https://blake2.net/ There is also this document https://131002.net/data/papers/NA12a.pdf which describes the benefits of the AVX2 instructions, which Atoms don't appear to have. So it might be worth it to test the other versions and use the fastest ... maybe dynamically at install (a rough sketch below).
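     A rough sketch of "test and pick the fastest at install", assuming a b2sum build that selects the algorithm with -a (the BLAKE2 reference implementation's b2sum does; flag syntax varies by build, and the sample path is an example):
         #!/bin/bash
         # time each Blake2 variant on a sample file and keep the fastest
         sample=/mnt/user/some_large_file
         best=""; best_t=999999
         for alg in blake2b blake2s blake2bp blake2sp; do
             t=$( { time -p b2sum -a "$alg" "$sample" >/dev/null; } 2>&1 | awk '/^real/{print $2}' )
             echo "$alg: ${t}s"
             if awk "BEGIN{exit !($t < $best_t)}"; then best=$alg; best_t=$t; fi
         done
         echo "fastest: $best"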