Leaderboard

Popular Content

Showing content with the highest reputation on 09/10/19 in all areas

  1. Today's update to CA Auto Update will automatically apply the fix for this issue on affected systems (whether or not the plugin is even enabled). You will, though, have to check for updates manually once to clear out the old update-available status. If you are running @ljm42's patch script, you can safely remove it, as Auto Update will not install the patch once 6.8+ is released.
    4 points
  2. After the rebuild of Disk 3 on the new(er) 2TB drive, I powered the system off, opened the side of the case and realized I didn't know which drive was which. I booted the server back up and it came up without any CRC errors. I took a couple of quick pics so I knew which serial number I needed to pull, powered down, swapped out the drive and have rebooted again. Still no CRC errors. It's currently rebuilding the newest and last upgraded drive. My guess is that one (or more?) of the SATA cables just wasn't seated fully. I did press them all in against the drives while I had the box open, so I'm going to cross my fingers and assume that was the issue. The drive reconstruction is positively screaming along at 85MB/s right now (now that everything is 2TB or greater, that increased speed does make some sense). I'll put this issue to bed now and only be back if something else goes wonky at some point in the future.
    1 point
  3. Different-capacity devices are not very good for RAID configs; you won't be able to use the full capacity unless you choose the single profile (with raid1 everything is mirrored, so usable space is capped by the smaller device). You can check here in more detail: http://carfax.org.uk/btrfs-usage/ Another consequence of using different-size devices: you need to free up some space on the cache and then convert.
    1 point
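The capacity difference the linked calculator illustrates can be sketched with a little arithmetic. Everything here is hypothetical (two devices, sizes in GiB); the balance command in the trailing comment is the usual way to convert an existing pool's profile after freeing up space:

```shell
# Hypothetical two-device pool, sizes in GiB.
big=500
small=250

# single profile: data is allocated across both devices, so (almost) the
# full combined capacity is usable.
single_usable=$(( big + small ))

# raid1 profile: every block is mirrored onto two distinct devices, so
# for a two-device pool usable space is capped by the smaller device.
raid1_usable=$small

echo "single: ${single_usable} GiB usable, raid1: ${raid1_usable} GiB usable"

# Converting an existing pool after freeing up space (run on the server),
# e.g. to the single data profile while keeping metadata mirrored:
#   btrfs balance start -dconvert=single -mconvert=raid1 /mnt/cache
```

So on a 500+250 pool, raid1 leaves 250 GiB unusable; that gap is what the calculator shows for other profile/size combinations.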
  4. Thanks for that @Harro. Looks like it's the cables (I didn't reseat everything, I just fiddled with the Drive 3 & Drive 4 connectors, since those were the two that didn't show up when I powered back up). Or it's the power supply (I'm running the same Corsair CX430, but I've only got 6 drives and an SSD cache). Or the drives actually are dying. Well, the drive rebuild seems to be proceeding at a reasonable pace for this box (35MB/s isn't reasonable in absolute terms, but this is really old hardware (an AMD FX4100, anyone?), and I plugged one drive back in via a USB2.0<->SATA connector for the time being). I'll let it finish the rebuild, then take the box down again when it's done tomorrow, pull and reseat all the cables, and see what happens. To be continued...
    1 point
  5. I did run into this situation at first, but thought I had it worked out in the posted fix (the trick was supplying both MIME types in preference order). I use one of the containers you mentioned as having the issue post-patch, but I don't see the behavior here. Did you edit the file manually, or did you use the script @ljm42 posted? If you edited it manually, would you mind pasting the exact line after the edit? It's very possible a typo in the right spot could cause this.
    1 point
  6. Yep, or if the data is corrupted during the process, like with bad RAM or a bad SATA card, Unraid has no way of knowing; it will quite happily emulate corrupt data. The disk is only one part of a long chain of places the data goes through. That sounds like the best option. While you're at it, confirm that your SATA cables have the correct retention method. The link I posted shows what to look for: in a nutshell, if the latch on the outside of the cable doesn't firmly engage a piece of plastic directly, the cable must have 2 bumps that pinch inside the SATA slot. The internal 2-bump cable is actually preferred, as all drives should have the corresponding notches behind the conductors at the edge. Yep, a write failure means the drive is marked with a red X and no longer participates in any way.
    1 point
  7. I hope the 6.8 changes won't break the ability to GUI-mount an encrypted drive (with or without retyping the passphrase) using Unassigned Devices after array start.
    1 point
  8. Thanks, limetech, for your thorough answer. I have to reiterate that I still don't want my password written in plaintext to disk. This reduces security by unnecessarily introducing an additional attack vector. Now my secret password is not just vulnerable to keyloggers, shoulder surfing, etc., but also to all the vulnerabilities of a keyfile. Yes, the password is also available in memory, network buffers, etc. But to retrieve it from those locations requires precise timing and more skill than most computer-literate users possess. Again, no system is 100% secure, so this is about making it too difficult to be worth the effort, and files are ridiculously easy to obtain, even by my computer-illiterate 75-year-old mother. I'm confident that I could teach her how to get the password from the plaintext keyfile in a few minutes, but I don't think I could ever teach her how to grab the password from a network buffer. As I said, and as Xaero demonstrated with an example, LUKS does not require writing the secret password to a file. To me, this looks like lazy programming so Unraid can use the same unlock method for either a keyfile or a password. And Xaero, I'm still not on board with your idea of different keys per drive. If it's formulaic, like salting with the license key or drive serial, that can be easily replicated. If you're relying on a cipher system, you're adding another attack vector (the cipher system, its recovery mode, its formula, its storage method, etc.), which, I should note, will be written by the same company that thinks it's OK to write passwords in plaintext to the file system. No thank you. Plus, I would much rather know the exact password(s) to my encrypted drives than rely on the added complexity, insecurity, and incompatibilities of an external program. A potential solution could come when LimeTech implements multiple keys.
Assuming they allow users to assign unique keys per drive (e.g., so you can hand over a single drive to someone), that could be used to optionally leverage multiple passwords to start the array. This is tricky to implement because most people will want the same password for all drives and will not appreciate having to repeatedly type the same password for every drive in their array. They'll want one blank to enter one password. So how do you implement multiple passwords without frustrating the majority of users? Maybe try that one password on each drive, and prompt for an additional password if that one doesn't decrypt all the drives, then repeat until all drives are decrypted. (It would only take a few seconds to check even for a huge array.) Or maybe a multi-line text box to enter a different password on each line, then try each password on each drive until one decrypts it. Also, these techniques allow for one set of drives all containing related information to be keyed with the same password, and another set to be keyed with a different password. All certainly possible, but all depends on how LimeTech implements multiple keys.
    1 point
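The "try that one password on each drive, then prompt again" idea above can be sketched in shell. The device and volume names are hypothetical; the only real tool involved is cryptsetup, which is fed the passphrase on stdin (the same mechanism Unraid uses with /root/keyfile):

```shell
# try_unlock DEV PASS -> succeeds if PASS opens DEV. In real use this
# wraps cryptsetup; --key-file=- reads the passphrase bytes from stdin.
try_unlock() {
  printf '%s' "$2" | cryptsetup open "$1" "luks_$(basename "$1")" --key-file=- 2>/dev/null
}

# unlock_all PASS DEV... -> prints the devices PASS could not open.
unlock_all() {
  local pass=$1 dev
  shift
  local still=()
  for dev in "$@"; do
    try_unlock "$dev" "$pass" || still+=("$dev")
  done
  echo "${still[@]}"
}

# Prompt loop: one password box, re-prompting only while drives remain
# locked, so single-password users type their passphrase exactly once.
# locked="/dev/sdb1 /dev/sdc1 /dev/sdd1"
# while [ -n "$locked" ]; do
#   read -r -s -p "Passphrase (${locked} still locked): " pass; echo
#   locked=$(unlock_all "$pass" $locked)
# done
```

Since a wrong-passphrase check takes well under a second per drive, trying each entered passphrase against every remaining drive stays fast even for large arrays.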
  9. First, a great deal of thought, design, and effort went into the Unraid OS encryption feature. However, we do think there are improvements we should make in the interest of securing the server as much as possible. To clear some things up: if you use a passphrase, whatever you type is written to /root/keyfile; if you upload a key file, the contents of that file are written to /root/keyfile. Hence we always pass "--key-file=/root/keyfile" to cryptsetup when opening encrypted volumes. The Delete button's action is to delete /root/keyfile. I can see where this leads to confusion in the UI. /root/keyfile is definitely not automatically recreated upon system boot, but it is needed every time you Start the array if there are encrypted volumes. You only see the passphrase/keyfile entry fields if /root/keyfile does not exist, which is the case upon reboot. The default action, after luksOpen'ing all the encrypted volumes, is to leave /root/keyfile present. This is because often, especially during initial configuration, one might Start/Stop the array several times, and it's a pain in the neck to have to type that passphrase each time. At present, unlike traditional Linux distros, Unraid OS is essentially a single-user system (root) on a trusted LAN (your home). Thus we didn't think there was much risk in leaving /root/keyfile present, and besides, there is a way to delete it, though granted you have to remember to do so. The primary purpose of encryption is to safeguard against physical theft of the storage devices. Someone who does this is not going to know to first snoop in a webGUI and find an encryption key; they are going to grab the case and run. Each storage device has its own unique volume (master) key. The keyfile is used to decrypt this master key, and it's the master key that is actually used to encrypt/decrypt the data. Unraid uses only one of the 8 possible key slots.
We intend to add the ability to assign additional passphrases, for example to change your passphrase, or to add another one if you want to give someone else a storage device and not reveal your unique passphrase. But of course this is very easily done with a simple command. Having read through the topic, we will make these changes: change the default action of array Start to shred /root/keyfile after opening all the encrypted volumes; add an additional configuration variable, probably under Settings/Disk Settings, to change this default action if someone wants to, with clearer help text that explains what's happening (though the current Help text does explain it); and add the ability to change the passphrase.
    1 point
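The keyfile flow described above can be sketched as follows. /dev/sdX1 is a placeholder device, and a temporary file stands in for /root/keyfile so the file-handling parts can run without root:

```shell
# A temp file stands in for /root/keyfile in this sketch.
keyfile=$(mktemp)

# 1. Whatever the user types (or the uploaded key file's contents) is
#    written verbatim to the keyfile.
printf '%s' 'correct horse battery staple' > "$keyfile"

# 2. Every encrypted volume is then opened with that one keyfile:
#      cryptsetup open /dev/sdX1 md1 --key-file="$keyfile"

# 3. An additional passphrase occupies another of the 8 LUKS key slots,
#    e.g. before handing a drive to someone else ("a simple command"):
#      cryptsetup luksAddKey /dev/sdX1 --key-file="$keyfile"

# 4. The proposed new default for array Start: shred the keyfile once the
#    volumes are open, so the passphrase isn't left in plaintext on disk.
shred -u "$keyfile"
```

Because the master key for each device lives in the LUKS header (wrapped by each key slot), shredding the keyfile doesn't affect already-open volumes.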
  10. It's used by a USB controller, should be fine to ignore.
    1 point
  11. I'm on holiday with my family. I have tried to compile it several times, but there are some issues that need working on. It will be ready when it's ready; a week for something that is free is no time at all. We're not releasing the source scripts for reasons I outlined in the original script, but if someone isn't happy with the timescales we work on, then they are more than welcome to compile and create this solution themselves and debug any issues. The source code is all out there. I've made my feelings about this sort of thing well known before, but I will outline it again: we're volunteers with families, jobs, wives, and lives to lead. Until the day arrives when working on this stuff pays our mortgages, feeds our kids, and allows us to resign our full-time jobs, things happen at our pace and our pace only. We have a Discord channel that people can join, and if they want to get involved then just ask; but strangely, whenever I offer, the standard reply is that people don't have enough free time. If that is the case, fine, but don't assume any of us have any more free time than you. We don't; we just choose to dedicate what little free time we have to this project.
    1 point
  12. NOTE: I understand this is more detail than you need, as it involves things you have already done and have working. I am including this detail in case it is useful for others in the future. I don't use the Plexinc Plex docker, as I prefer LSIO dockers whenever possible. However, I understand LSIO consulted with Plex to help Plex set up their docker, so there are likely many similarities. I think most of your confusion stems from the fact that you may not fully understand docker volume mappings, which are used to map docker container paths to host paths. I see jonathanm answered your question about why things appear differently in Krusader. Again, the disconnect has to do with the way you may have mapped things on the container side (docker) vs. the host side (unRAID file system). You will be better off focusing on dealing with user shares rather than disk shares, and never, ever confuse the two in Krusader, MC, etc. Moving files between disk and user shares could (and often does) result in data loss. Always work disk-to-disk or user-to-user share and do not mix the two. FYI - I, and many others, have Plex set to transcode in RAM. Yes, Plex once stated officially that they would no longer support RAM transcodes, but I don't know if they ever really followed through on that, as it works fine. Here are the steps for mapping a container path to a host path for transcoding purposes and then specifying the container path in the Plex server transcoder settings. /tmp is RAM on the unRAID server; it is not part of the unRAID user shares/disk shares file system. In my case I have /transcode mapped to /tmp. Edit the Plex docker settings and do the following:
     1. Edit the appropriate host path variable (or add a new one) for your transcode mapping.
     2. Enter the desired container path name, edit the configuration, and enter a host path of /tmp if you want to transcode to RAM. This is a docker volume mapping where you are associating a container path (/transcode) with a host path (a physical location in the unRAID file system), which in this case happens to be a location that exists only in RAM and is not part of the unRAID user/disk shares.
     3. Click the Save button in Edit Configuration.
     4. Click the Apply button in the Docker Edit screen.
     5. Open the Plex docker WebUI.
     6. Click on Settings.
     7. Select Server Settings.
     8. Select Transcoder.
     9. Enter the name of the container path you specified in step two as the Transcoder temporary directory. This tells Plex to transcode videos to the /tmp directory on the host, which is in RAM. This could also be done to a directory on your cache drive by specifying a host path of something like /mnt/user/cache/plex/transcode in step two. The important thing is that in the Plex transcode settings you specify the name of the container path variable as the Transcoder location.
     10. Click the Save Changes button.
Now let's look at mapping media locations on the unRAID server to Plex Libraries. The concept is the same: you map a container path that exists only within the docker config to a physical host path in the unRAID filesystem. Here are my media mappings in Plex: in the case of movies, the docker container path is /movies, which is mapped to the physical share path of /mnt/user/Movies (the Movies share in unRAID). Open the Plex UI, go to the Libraries section, and add a Movies Library. When adding the folder, select the name of the docker container path you created for Movies (/movies) and click Add. This tells Plex to look for movie media in the physical host path mapped to /movies, which is /mnt/user/Movies. Repeat for other media container/host paths you may have. Note that in my host path configuration I also have a /config mapping, which specifies where the Plex docker config files will be physically located. In my case that is /mnt/user/appdata/Plex, which is on the cache drive. The recommendation is to specify /mnt/cache/appdata/Plex instead if you want docker config files stored in appdata on cache. I have never had the issues others have had with using the user mapping as opposed to the cache mapping, so I have never bothered to change Plex and other dockers mapped this way. If I browse my cache drive, I find all the Plex docker/config files stored in the appdata/Plex folder. Once you wrap your head around how docker volume mapping works, it is really quite easy to configure dockers properly.
    1 point
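The mappings described above could be expressed as the equivalent `docker run` flags (Unraid builds the same mapping from the template UI; the image name and exact share paths here are illustrative):

```shell
# Host path on the left of each colon, container path on the right.
docker_args=(
  -v /mnt/user/appdata/Plex:/config  # where Plex keeps its database/settings
  -v /mnt/user/Movies:/movies        # library source: point Plex libraries at /movies
  -v /tmp:/transcode                 # /tmp is RAM on Unraid, so transcodes never touch disk
)

# Shown rather than run, since it needs the docker daemon and the image:
echo docker run -d --name plex "${docker_args[@]}" plexinc/pms-docker
```

Inside the container, Plex only ever sees /config, /movies, and /transcode; which physical disks (or RAM) sit behind them is entirely determined by these mappings.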