Posts posted by gundamguy

  1. Jumperalex and Helmonder, that was actually very similar to the idea I had in mind.

     

    Like Helmonder said, it would be easy to include the ignore-existing flag in mover, then have a second move step that moves the remaining files to a "Jail" of sorts awaiting positive confirmation (or some variation with incremental backups or something; lots of options here).

     

    The problem as far as I can tell is actually getting a share to be Read & Write but forcing ALL writes to the cache. I'm not sure if that's something that can be done, and even if it could, I'm not sure that solves the problem of how ransomware actually works.
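    As a sketch of the ignore-existing half of that idea (the function name is mine; the /mnt paths in the comment are the usual unRAID ones, assumed rather than verified):

```shell
# move_new_only SRC DEST -- copy only files that do not already exist
# at DEST; anything with a name collision (e.g. an encrypted rewrite
# of an existing file) is left behind on the source for inspection.
move_new_only() {
  rsync -a --ignore-existing "$1/" "$2/"
}

# Typical mover-style usage on unRAID (paths assumed):
#   move_new_only /mnt/cache /mnt/user0
```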

     

  2. With a "Read & Append" share, 99% of it could be done (safely) directly to the destination share.

     

    I think the major hurdle here is that "Read & Append" or "Read and New Writes Only" isn't a permission set that exists in Linux currently... so it would take significant effort to make that happen.

     

    Also, BubbaQ, I'm doing the same thing as you, except in a few shares like iTunes which other computers need to log into...

     

    Question about securing root though: by default unRAID ships with root having no password. If you add a password, does it prompt you every time you bring up the webGUI? (It would be sweet if there was a way to avoid that but still secure root.) Also, how did you set up your non-root maintainer?

  3. Awesome, thanks so much.

     

    I will look around and see if I can find an APC UPS for a decent price. :)

     

    I think most modern CyberPower UPSes also work with APCUPSD... oddly, like Squid said, APCUPSD doesn't have a great list of supported UPSes.
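    For reference, pointing apcupsd at a USB-connected unit (APC or a compatible CyberPower) usually comes down to a few lines in apcupsd.conf; the shutdown thresholds below are illustrative examples, not recommendations:

```
## /etc/apcupsd/apcupsd.conf (minimal USB setup)
UPSCABLE usb
UPSTYPE usb
# leave DEVICE blank for USB; apcupsd auto-detects the unit
DEVICE

# shut the server down at 10% battery or 5 minutes of runtime left
BATTERYLEVEL 10
MINUTES 5
```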

  4.  

    How is this possible if he formats the USB drive?

     

    Following along with this and reading @trurl, what is the way to make a backup of the flash disk?

     

    trurl gave a bit more clarity, but I believe that even if you don't have the super.dat file, unRAID will see the disks as properly formatted and let you add them without formatting them.

     

    A backup of your USB isn't really required (other than the key file) to restore your array from a clean start.

     

    At least that's the impression I've been given. If this is wrong I'd really like to know, because it would mean I need to change a few things.

  5. Alternatively, keep a copy of the config/super.dat and you won't have to assign your drives. Also, you can copy any config/*.cfg files you want to keep the settings from and also the config/shares/*.cfg files you want to keep user share settings from.

     

    Be careful about holding on to a super.dat file for too long: if you've changed your array or parity since you backed up that super.dat, restoring it could result in data loss. This has mostly happened to people who moved a parity disk into the array after upgrading the parity disk.

     

    In this case it works fine, because you aren't planning to make changes to your array, just to clean out the junk on your boot USB.

     

    Also, not sure anyone has said this yet: back up your key file. You are going to need it to register your copy again.
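    A minimal way to take that backup from the command line (the function name and backup destination are my own; /boot is where unRAID mounts the flash drive):

```shell
# backup_flash FLASH DEST -- keep a dated copy of the flash config:
# super.dat (drive assignments), *.key (registration), *.cfg (settings).
backup_flash() {
  stamp=$(date +%Y%m%d)
  mkdir -p "$2/flash-$stamp"
  cp -a "$1/config" "$2/flash-$stamp/"
}

# Typical usage (paths assumed):
#   backup_flash /boot /mnt/user/backups
```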

  6. Question:

     

    Does the samba recycle bin plugin give any protection against ransomware? Or would that be bypassed?

    I would be extremely surprised if the recycle bin is not bypassed. Ransomware rewrites files, it does not delete them.

     

    Yeah that is what I suspect as well.

     

    A couple of random thoughts.

     

    A lot of users have a setup where they have a cache drive and cache-enabled shares.

     

    Which in effect results in reading from one directory, say /mnt/user/movies, and writing to another directory, /mnt/cache/movies. I believe this is a product of our FUSE setup.

     

    My question is this: if one modifies a file in /mnt/user/movies, is it modified in place in /mnt/user/movies, or is the modified file placed in /mnt/cache/movies? I suspect the former, but it would be great if it were the latter.

     

    If it works such that a file that is opened over SMB and modified is moved to /mnt/cache/movies, then using rsync with the --ignore-existing option would allow all modified files to remain on the cache rather than being written over the existing files. Then you could use a second rsync command to park those files in some sort of quarantine, where they await user action (confirmation) before they are moved to the array.

     

    This of course will only work if there is a way to disable modify-in-place and force the creation of a working file in an alternative directory.

  7. Ok so keeping disk+users set to secure should be tight enough security?

     

    I can't help you with your other question, but to answer your first question.

     

    It depends on what your goals and concerns are. The major concern in this thread is that a Windows (or, less likely, a Mac) machine gets infected with ransomware that traverses the shares it has access to and encrypts the data on your Linux machine. If you don't give that samba share write permission, ransomware running on the Windows machine can't modify the files on your Linux machine.

     

    This does not protect against your Linux machine itself getting infected with ransomware. Whether that should be a concern depends on a lot of factors, but for most users it's far less of a concern than an external machine with access causing problems.
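    In Samba terms, the protection described above comes down to one line in the share definition (the share name and path here are examples, not anyone's actual config):

```
[movies]
    path = /mnt/user/movies
    browseable = yes
    # clients can open and read files, but nothing on a client --
    # ransomware included -- can modify or encrypt them over SMB
    read only = yes
```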

     

    I see there has been some discussion on this already... but given the changes in 6.2 that are coming down the pike, I think this plugin should entirely strip out the zeroing and post-read verification steps (basically all of the clearing...) and not worry about having a completely clear disk when it's done its stress testing.

     

    This should let you run more stress-test cycles in a shorter time, since you don't have that whole 100% write and 100% verification pass that takes huge swaths of time....

     

    Edit: Addition: currently if you want to run this 3 times on a disk, it runs the clear and post-read 3 whole times... it would be cool IMO if you could pick the number of times each test runs (with options).

     

    AKA badblocks 3 times, pre-SMART 1, post-SMART 1, etc... so you could mix and match instead of having to run through the whole script every single time... (if this was already an option... my bad... I really didn't know that)

  9. If you don't open up SSH to the outside via a Port Forward, "DMZ Host Forward", or some other means then your risk is fairly low that you would have attackers.

    Denyhosts monitoring then becomes, as you imply, one more thing to clean up, monitor, ignore, ... 

     

    This may come off a bit "tin-foil hat", but one thing to keep in mind is that our IoT (Internet of Things) devices are notoriously bad about security.  At some point they will likely become beach-heads or botnet-"infected" devices.  If you want to control your light bulbs from your phone, you should consider adding them and all other IoT devices to their own network.

    </tin-foil>

     

    Adding the SSH plugin may be something you want to consider, if for no other reason than it helps with setting up public-key auth.  It sucks to have to type a complicated password for my unRAID when I'm on my tablet. :-)
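    For anyone following along, the public-key setup the plugin helps with boils down to two commands (shown here in a scratch directory; the "tower" hostname is just an example):

```shell
# Demo in a scratch directory; in real use you'd keep the key in ~/.ssh.
keydir=$(mktemp -d)

# Generate an ed25519 key pair (no passphrase here for brevity; add one
# if the client machine itself might be compromised or stolen).
ssh-keygen -t ed25519 -f "$keydir/id_ed25519" -N "" -q

# Install the public half on the server ("tower" is an example host);
# afterwards SSH logins no longer prompt for the root password:
#   ssh-copy-id -i "$keydir/id_ed25519.pub" root@tower
```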

     

    hth,

     

    doc..

     

    These are good points, which is why I asked. I'm typically the kind of guy many would call overly cautious... so this might be a good plugin anyway.

     

    Also good point about the SSH plugin.

    I wonder if there is a way to use rsync's (read: mover's...) --existing, --ignore-existing and --link-dest options in such a way that you could prevent modified files (read: encrypted) from being moved over top of the existing files, instead keeping both the original and the modified file... (I know that last part is possible, as there are guides to using rsync and --link-dest to create Time Machine-like backups.) That would kind of "vaccinate" you against ransomware, at the cost of having to manually delete older versions of files when you intentionally modify them...

     

    I really wish I had a test server right now... maybe I'll hook up a Raspberry Pi and mess around with rsync and see what I can get it to do...

    you could just cron a script that copies over daily like mover does

     

    Yeah, that's not a bad idea. That's actually a pretty good idea... but I think it breaks some of the functionality... I'll look into it and see what I come up with.
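    For the cron half of that idea, a crontab line like the one below would run a custom mover-style script nightly (the script path is hypothetical):

```
# crontab -e: run the custom mover nightly at 03:40
40 3 * * * /boot/custom/safe-mover.sh >> /var/log/safe-mover.log 2>&1
```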

    If nothing else, this thread has done me a service in getting me to audit my share settings and rethink who really needs write permission. After thinking about it some, I fit the category of users who very rarely write to their system (mostly video files that I've ripped for Plex / Emby).

     

    I do have one automatic backup that Windows does via... whatever Windows calls its Time Machine knock-off... every night, but I could make all the other shares read-only and have a "DropBox" share for purposes of moving data onto the array. This would only be a minor inconvenience for me, but if it improves security it sounds like a good idea.

  13. There isn't much demand for virus scanners on Linux platforms because of how Linux works.

    No offense, but that comment is somewhat akin to Steve Jobs publicly announcing that Macs were immune to viruses, unlike Windows, and then having to retract that statement a month later.

     

    The difference on Linux is that the source for the OS (and for many of the add-ons) is all open-source and free for the world to see.  But that does not mean the OS is not vulnerable.

     

    But likely the OP is more concerned with a virus being downloaded, say through deluge, and then being executed on a Windows / Mac platform, rather than executed directly on unRAID.  In that case, if you're using Windows, just mapping a share to a drive through Explorer should have your anti-virus automatically check it, if configured to do so.

     

    You aren't wrong, and no I wasn't trying to suggest there is no risk of a virus. Just saying there is limited demand for scanners due to the nature of Linux. 

     

    I agree with you that it's more likely what his goal is, and there are some Linux based scanners that are designed to scan for Windows / Mac viruses.

     

    Additionally, note that for some reason Bitdefender doesn't really like scanning the mapped shares... is there a trick to this?

  14. So to answer your actual question. Yes if the shares have different names you could do a command like mv /mnt/user/TVShows /mnt/user/TV-Shows

    One thing to note is that I am reasonably sure that if you use the mv command then files and folders remain on the same disks they are currently located on (I think that under the covers mv will just do a rename).  This will only be an issue if you thought they might be redistributed by such an action.

     

    ???HUH???

     

    If the share I'm copying from is restricted to disks 4, 5, 6 and 7, and the share I'm copying to is restricted to disks 10, 11, 12 and 13, that would move the files from the first set of drives to the second... right?

     

    It should, as far as I am aware... mv is move, but since the path is a different share it should actually change the file location, not just the path... right?

     

    You could do a copy and delete just to be safe. Or use Rsync.

     

    Wildcards should be OK, but I'm not sure why you need them when you can (should?) be moving directories and all sub-directories.

  15. As long as the shares don't have the same name (and you set the excludes and includes for your RFS vs XFS correctly) you should be ok.

     

    A user share, say "TV-Shows", is simply an aggregation of all the "TV-Shows" directories on all of your disks, including the cache. From a Linux perspective, it takes every /mnt/diskN/TV-Shows directory (for each disk on which a TV-Shows directory exists, plus the cache drive) and creates a location /mnt/user/TV-Shows that is the sum of all their contents.

     

    Caution: NEVER MIX your user paths with disk paths when using commands that move, write, or sync data between two directories. This will result in data corruption.

     

    Note: When you start looking under /mnt/ you'll see Disk1-X, User, and User0.  User0 is the same as User but excludes the cache drive.

     

    So to answer your actual question. Yes if the shares have different names you could do a command like mv /mnt/user/TVShows /mnt/user/TV-Shows

  16. I believe excluding a drive from the GLOBAL setting would cause that drive to not participate in user shares (SHFS) at all, for reading, writing, or overwriting of files. But I have never tried it, and, in fact, have never heard any user exclude a disk in this manner. But if it works as I say, excluding a disk globally would make files on that disk immune to the user share copy bug.

     

    If someone has time and interest they could try and confirm.

     

    I don't recommend using this method, as it seems like playing with a live hand grenade. But just trying to explain what the OP read about global share settings, which I think was based on something I wrote.

     

    I didn't test this, but I would assume that if it works like the normal exclude, it would prevent all writing and overwriting but wouldn't prevent reading... likely for the same reason that excluding a disk doesn't prevent existing files / folders on that disk from showing up in your user share... to avoid "losing" data that isn't lost.

     

    If it does prevent reading from that disk and removes it from the aggregation, that could lead to some duplication issues (not really an issue as much as a waste of space), and it would be good to know if that's the actual behavior, because it is different than the lower-level exclude/include.

  17. This has been discussed at length. There are good reasons - rfs is an aging filesystem and not being enhanced. We've seen a couple of file corruption bugs creep into rfs that have caused silent data corruption in some 6.0 betas - even for files that are not updated!  Many have had performance problems with larger drives with rfs, especially as the drive gets fuller. Xfs is widely used. It is definitely what you should use for any new disks. Some people here have argued for leaving rfs in place for old drives but I don't agree. A bunch of annoying hangs and timeouts have gone away since I switched. I am glad it is off my system. And one more reason - the author is in jail for brutally murdering his wife. I am happy not contributing to any popularity of his invention.

     

    All that is good, but one additional reason is that RFS doesn't really support the Dynamix File Integrity plugin, which many people are interested in running. (This is arguably a design flaw in that plugin... but the fact remains that it's the way it is...)
