
Posts posted by Opawesome

  1. I have a Supermicro X11SSH-LN4-F with an LSI SAS 9201-16i. It has been working flawlessly for a year, with 10 HDDs attached to it.

     

    I bought it from this guy: https://www.ebay.fr/usr/ac-tech for USD 150.00 plus shipping.

     

    The card does get quite hot, so I installed a fan in the PCI-E slot right below the card, blowing air directly onto the SAS card heatsink. The bracket used to mount the fan is like this one: https://www.amazon.com/Bracket-Three-Mount-Video-Cooling/dp/B00UJ9JSBY

  2. It is possible that the fingerprint of one of the two machines changed when the issues you mentioned were resolved. In that case, scripts that rely on automated SSH connections stop working until the new fingerprint is added to the "known_hosts" file on the SSH client machine. This is done simply by connecting once manually from a console.
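
    For example (with a placeholder hostname), you would clear the stale entry on the client and then reconnect once to accept the new fingerprint:

    # Remove the old fingerprint for that host from the client's known_hosts (replace the hostname)
    ssh-keygen -R backupserver.example

    # Connect once manually and answer "yes" to record the new fingerprint
    ssh root@backupserver.example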

  3. 55 minutes ago, johnwhicker said:

    Well  SNAP.  Before I didn't have web gui nor ssh root passwd.  After I set it by mistake now I am hosed.  There is gotta be a way to get back to what I had?

    May I ask why you want to connect via SSH without a password? You can always log in with RSA keys if you don't want to type passwords.
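
    For reference, key-based login is set up roughly like this (a sketch; the hostname is just an example):

    # Generate a key pair on the client machine (if you don't already have one)
    ssh-keygen -t rsa -b 4096

    # Copy the public key into the server's authorized_keys
    ssh-copy-id root@tower.example

    # Subsequent logins no longer prompt for a password
    ssh root@tower.example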

  4. For what it's worth, I have a Supermicro X11SSH-LN4-F with an LSI SAS 9201-16i. It has been working flawlessly for a year. I had it connected to the PCI-E 3.0 x8 (in x16) slot first, but decided to move it to the 2nd PCI-E 3.0 x8 (in x8) slot afterwards, with a fan installed in the last PCI-E slot blowing air directly onto the SAS card heatsink.

     

    The bracket used to install the fan is like this one: https://www.amazon.com/Bracket-Three-Mount-Video-Cooling/dp/B00UJ9JSBY

  5. On 3/3/2021 at 10:30 AM, je82 said:

    Thank you, i am doing a little bit of research if disaster was to strike and to backup these headers seem essential if you do xfs encryption as if the header is broken all your data is lost on that disk because you cannot mount it even with the correct password (as i understand?)

     

    A question to an expert like yourself, is it nessecary to run the header backup as a "cronjob" do have "fresh" header backups or is this only needed to run once as the header never changes?

    I believe there is indeed a chance to lose all your data if the LUKS header becomes corrupt, although I understand that the chance of that happening is lower with LUKS2 than with LUKS1.

     

    I am by no means an expert. I have just been doing some reading and testing on the subject for a week or so. My understanding is that the LUKS headers only change if you perform an operation like changing the password or adding a new key. So based on that understanding, my guess would be that you only need to back up LUKS headers if and when you perform such an operation.

     

    Anyway, as long as you keep your previous backups, I see no harm in scheduling a recurring backup.
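
    For example, a cron entry along these lines would do it (the script path here is hypothetical):

    # Back up the LUKS headers every Sunday at 03:00
    0 3 * * 0 /boot/config/scripts/backup-luks-headers.sh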

  6. On 3/5/2021 at 11:56 AM, Lee Kim Tatt said:

    Hi there, my system locking up randomly with i915 driver (hw transcode in jellyfin) in 6.9.0. Anyone found the solution? 

    Jellyfin + i915, inorder to make it freeze/lock whole system, play some video and make sure the video in hw transcoding, then jumping around the video here and there to make the gpu busy, then it after awhile it will totally lock up the system.

    i tried with 

    touch /boot/config/modprobe.d/i915.conf

    it freeze in hw transcode randomly.

    then i add the following to i915.conf to load the extra firmware

    options i915 enable_guc=2

    still freeze in hw transcode randomly. any solution? 

     

    *rollback to 6.8.3, stability is back. *phew.... 

    Hi @Lee Kim Tatt,

    Your problem was exactly the same as mine (freezing when jumping around the video), except that it occurred on 6.8.3 for me. Would you mind sharing your hardware configuration (MB, CPU, etc.)?

    Best,

    OP

  7.  

    Hi all,

     

    First of all many thanks to @limetech for the continuous development of Unraid and for the new 6.9 version.

     

    I am now preparing to upgrade and want to modify my /config/go file to account for new features such as the inclusion of the Intel i915 driver and the changes made to the SSH configuration.

     

    My attention was caught by the SSH improvements: https://wiki.unraid.net/Unraid_OS_6.9.0#SSH_Improvements , the following part in particular:

    Quote

    In addition, upon upgrade we ensure the config/ssh/root directory exists on the USB flash boot device; and, we have set up a symlink: /root/.ssh to this directory.  This means any files you might put into /root/.ssh will be persistent across reboots.

    Note: if you examine the sshd startup script /etc/rc.d/rc.sshd, upon boot all files from the config/ssh directory are copied to /etc/ssh (but not subdirs).  The purpose is to restore the host ssh keys; however, this mechanism can be used to define custom ssh_conf and sshd_conf files.

     

    Currently, my /config/go file is set up to allow unsupervised SSH connections to/from my remote backup server via RSA keys. I noticed that @Hoopster (whom I suspect might have the same kind of configuration) also noted that upgrading to Unraid 6.9 would require adjustments to the go file: https://forums.unraid.net/topic/103388-unraid-os-version-690-available/?do=findComment&comment=954138 

     

    My /config/go file is currently as follows: 

     

    [...]
    
    # Copy RSA files to /root/.ssh folder and set permissions for files (not for Unraid 6.9+)
    
    # 1. Create .ssh folder for root
    mkdir -p /root/.ssh
    
    # 2. Copy private RSA key to localhost as id_rsa
    cp /boot/config/sshroot/server.key /root/.ssh/id_rsa
    
    # 3. Add authorized public RSA key to authorized_keys
    cat /boot/config/sshroot/client1.pub >> /root/.ssh/authorized_keys
    
    # 4. Add authorized public RSA key to authorized_keys
    cat /boot/config/sshroot/client2.pub >> /root/.ssh/authorized_keys
    
    # 5. Copy known_hosts to localhost to allow unsupervised connections
    cp /boot/config/sshroot/known_hosts /root/.ssh/known_hosts
    
    # 6. Apply correct permissions to .ssh folder's content
    chmod g-rwx,o-rwx -R /root/.ssh
    
    [...]

     

    My understanding is that I can simply remove all of the above from the /config/go file and run the following commands once, before updating to Unraid 6.9:

     

    mkdir -p /boot/config/ssh/root
    cp /root/.ssh/id_rsa /boot/config/ssh/root
    cp /root/.ssh/authorized_keys /boot/config/ssh/root
    cp /root/.ssh/known_hosts /boot/config/ssh/root
    chmod g-rwx,o-rwx -R /boot/config/ssh/root

     

    Is that correct?
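
    If it is, I suppose I could then verify the result after the upgrade with something like this (just a sanity check, relying on the symlink described in the release notes):

    # /root/.ssh should now be a symlink pointing at the persistent location on the flash drive
    readlink /root/.ssh

    # The key files copied above should be present there
    ls -l /boot/config/ssh/root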

     

    Many thanks,

    OP

     

  8. Hi @kimifelipe

    Hi all

     

    Maybe I did not understand correctly, but it does not seem safe to me to replace the parity disk while you are already having problems with one data drive, especially with only one parity disk and without knowing for sure that the issue is caused by the new data drive rather than by something else like a cable, backplane, HBA adapter, or motherboard issue.

     

    What if you put your old drive back and it becomes corrupt because of faulty hardware? I guess you would be glad not to have removed your parity drive in that event.

     

    So unless I misunderstand the situation, I would recommend the following:

     

    1. check your cables and put the old data drive back in the array

    2. run a parity check and make sure parity is valid

    3. back up all your data (if not already done)

    4. test the new data drive, outside of the array, with something like the preclear plugin and an extended SMART test (see the example commands after this list)

    5. if the tests are successful, retry replacing the old data drive and rebuild its data from parity

    6. use your server and stress the new data drive for a couple of weeks to check that no new errors occur on that drive / slot (this may seem unnecessary to many, but I would personally do it if I had only one parity drive)

    7. when you are confident that the server is stable, upgrade the parity disk
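
    For step 4, the extended SMART self-test can be started from the command line like this (an illustration only; replace sdX with the actual device):

    # Start an extended (long) SMART self-test on the new drive
    smartctl -t long /dev/sdX

    # Review the results once the test has completed
    smartctl -a /dev/sdX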

     

    Best,

    OP

     

  9. 39 minutes ago, itimpi said:

    that is true if the recovery software you use knows how to recover encrypted volumes.  Not sure this is always the case.

     

    That I don't understand. My understanding was that with LUKS: (i) encryption/decryption is done at the kernel level, and (ii) LUKS creates a device-mapper device, so software applications are not even aware that they are dealing with an encrypted partition.
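
    To illustrate my understanding (a sketch with example names, not specific to Unraid): once the container is unlocked, everything else only ever sees a regular block device:

    # Unlocking the LUKS container creates a plain block device under /dev/mapper (example name "data1")
    cryptsetup open /dev/sdX1 data1

    # From here on, any tool works on /dev/mapper/data1 as if it were an ordinary partition
    mkdir -p /mnt/data1
    mount /dev/mapper/data1 /mnt/data1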

  10. Hi again @itimpi

     

    Many thanks for your detailed and insightful answer.

     

    I have thought about the various issues/scenarios you mentioned and I have a few remarks:

     

    1ST ISSUE:

     

    Quote

    If the write failed , but the corresponding parity write worked unRaid would disable the drive where the write failed and subsequently detect that the parity is out of step with the drive.  You now have to decide if parity is right and rebuild the drive to match parity (normal action) or decide to rebuild parity to match the drive (which means parity now reflects any file system level corruption).

     

    This sounds very logical. I didn't know you could make such a choice, however. In this case, I understand that the only situation where having encryption enabled would make a difference is when: (i) the bad sector is located in the LUKS header; and (ii) Unraid fails to rebuild the corrupted data correctly. Assuming that LUKS headers are checked and fall within the scope of parity protection, I guess such a failure would be noticed immediately upon reboot, because you would not be able to unlock the LUKS partition and start the array. In that case, wouldn't you just have to restore your backed-up LUKS header on the affected data drive to fix the issue? Unless I am missing something, of course.
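
    To illustrate what I mean by restoring the backed-up header (a sketch, with example paths):

    # Restore a previously backed-up LUKS header onto the affected drive
    cryptsetup luksHeaderRestore /dev/sdX1 --header-backup-file /path/to/luks-headers-backup/sdX1-header.bin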

     

    [As a side question (a bit off-topic): wouldn't having 2 parity disks also help in determining which drive (parity1, parity2 or data) contains the incorrect data? And if so, does Unraid make that decision autonomously when checking/correcting parity? I would be curious to know.]

     

    2ND ISSUE:

     

    Quote

    You can also get the case where a software or hardware error causes an incorrect sector to be written to both the drive and the parity drive without any apparent error indication at the hardware level.  In such a case both parity and the drive are in agreement but the file system is corrupt.

     

    This scenario does indeed sound quite bad. I imagine the hardware-caused issue can perhaps be mitigated by using ECC memory. However, I don't really see how disk encryption would make data recovery more difficult. As long as the LUKS headers are intact (or can be restored, as mentioned under the 1st issue above) and you can unlock the LUKS partition by providing the decryption key, won't that partition be treated like a plain block device by the kernel, thus enabling you to use any data recovery tool you would have used anyway?
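
    For instance (a sketch, assuming an XFS file system inside the container), the usual recovery tools would simply run against the mapped device:

    # Unlock the encrypted container, then run the normal XFS checking tool against it
    cryptsetup open /dev/sdX1 recovery
    xfs_repair -n /dev/mapper/recovery    # -n = check only, report problems without modifying anything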

     

    3RD ISSUE:

     

    Quote

    Finally you have the case where you get more disks failing than you have parity drives.  In such a case parity cannot help you, but often the majority of the data can still be recovered off a failing drive as long as it is still working at even a basic level.

     

    Same remark as for 2nd issue above.

     

    CONCLUDING REMARKS

     

    I am by no means an expert, so please feel free to challenge my understanding above. I do understand that encryption adds a layer of potential problems, but I would like to understand the risk exactly, and I don't want to reject encryption if this additional layer of problems can be managed.

     

    In any event, I don't see any scenario where encryption could be a problem if you have a full offsite backup. Does anyone see one, aside from forgetting the password or losing the key for both machines at the same time?

     

    Best,

    OP

  11. On 2/14/2021 at 6:21 PM, 007craft said:

    Thanks for the input.  I think Ill leave encryption off.  All my data is important and I have cloud backups, but I suppose if somebody were to read the data, it wouldnt be a big deal. 

    Not saying that you should encrypt your data by any means, but when one has all their data backed up in a remote location, the risk of losing the locally encrypted data because of some sort of data corruption is, IMO, well mitigated.

  12. In case this helps, I ran some tests and improved an existing script that lets you easily back up LUKS2 headers:
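
    The idea is roughly the following (a simplified sketch, not the exact script; the backup destination is just an example):

    # For every block device that holds a LUKS container, save its header to a backup file
    for dev in /dev/sd* /dev/nvme*; do
        if cryptsetup isLuks "$dev"; then
            cryptsetup luksHeaderBackup "$dev" --header-backup-file "/mnt/user/backups/luks-headers/$(basename "$dev").bin"
        fi
    done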

     

     

    @itimpi, I was considering switching to xfs-encrypted on my server. What exactly do you have in mind when you talk about "file system level corruption"? Something other than corruption in the first 16 MB (i.e. the LUKS header area)?

     

    Best,

    OP

  13. 6 hours ago, Opawesome said:

    I was wondering what was the advantage of using the "dd" command (which is used in the script kindly shared by @golli53), rather than the built-in "cryptsetup luksHeaderBackup" command.

     

    It seems that it is actually recommended to use the built-in command rather than the "dd" command:

     

    Quote

    While you could just copy the appropriate number of bytes from the start of the LUKS partition, the best way is to use command option "luksHeaderBackup" of cryptsetup. This protects also against errors when non-standard parameters have been used in LUKS partition creation.

    (abstract from https://gitlab.com/cryptsetup/cryptsetup/-/wikis/FrequentlyAskedQuestions#6-backup-and-data-recovery)

     

    I also figured that:

     

    Quote

    While the LUKS1 header has a fixed size that is determined by the cipher spec (see Item 6.12), LUKS2 is more variable. The default size is 16MB, but it can be adjusted on creation by using the --luks2-metadata-size and --luks2-keyslots-size options.

    (abstract from https://gitlab.com/cryptsetup/cryptsetup/-/wikis/FrequentlyAskedQuestions#10-luks2-questions)

     

    That means that @golli53's script, which only backs up the first 2 MB of each device, may not be compatible with LUKS2 (which is the version used by Unraid as of the date of this post). In contrast, using "cryptsetup luksHeaderBackup" does create 16 MB header backup files.
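
    A quick way to check this for yourself (an illustrative check, reusing the example file name from my earlier post):

    # A LUKS2 header backup made with luksHeaderBackup should be about 16 MiB by default
    ls -lh /path/to/luks-headers-backup/backed-up-header.bin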

     

    I hope this helps.

     

    Best

    OP

  14. On 12/21/2020 at 9:36 AM, loopback said:

    Currently i have planned the following: [...]

    - Using my hardware firewall to filter out any traffic except the necessary ones (Open ports will be 80/ 443/ 445) [...]

     

    Also is there any other ports unRAID needs to work correctly? [...]

     

    Hi @loopback

     

    Based on what I understand of your use case and your knowledge of security, I would also strongly advise against opening any of the 80/443/445 ports (i.e. the corresponding HTTP, HTTPS and SMB services) to the internet (not that I am an expert myself either).

     

    IMO, the simplest and safest way to remotely access your Unraid server is via VPN. In addition to @trurl's suggestion to use WireGuard, I would also recommend OpenVPN, which has been around (and been audited) for a long time now, and could therefore be seen by some as less likely to suffer from vulnerabilities than WireGuard.

     

    If you really cannot use a VPN because of the need to have a VPN client or a VPN-client-capable router, then @tudalex's suggestion may be the way to go. You would then need to install some sort of web service to access your files (maybe a cloud file service like Nextcloud?).

     

    Then, as an additional mitigation measure, you can avoid using default ports for the services you have opened to the internet and use high-numbered ports instead (like 45299 instead of 443 for your Nginx proxy). I have personally found this to drastically reduce the number of bot attacks on my network. Some will argue that this is "security through obscurity" and therefore bad; others would argue that in some use cases a bit of obscurity is beneficial.

     

    Finally, you could install fail2ban and have it watch for failed attempts to connect to the services running on your server. When a potential attack is detected (i.e. multiple failed connection attempts in a set period of time), fail2ban will ban the IP and prevent it from connecting to your machine. 
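
    As an illustration only (a minimal jail on a generic Linux install; paths, port and jail name will differ depending on how and where you run fail2ban, e.g. inside a Docker container):

    # Example /etc/fail2ban/jail.local: ban an IP for 1 hour after 5 failed SSH logins within 10 minutes
    cat > /etc/fail2ban/jail.local << 'EOF'
    [sshd]
    enabled  = true
    port     = 22
    maxretry = 5
    findtime = 600
    bantime  = 3600
    EOF

    # Reload fail2ban and check that the jail is active
    fail2ban-client reload
    fail2ban-client status sshd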

     

    Please feel free to report back with what you did.

     

    Best,

    OP

  15. Hi all,

     

    I was wondering what the advantage is of using the "dd" command (which is used in the script kindly shared by @golli53) rather than the built-in "cryptsetup luksHeaderBackup" command.

     

    With the built-in command one would just need to run:

    cryptsetup luksHeaderBackup /dev/sdbX --header-backup-file /path/to/luks-headers-backup/backed-up-header.bin
    

    to back up the LUKS header,

     

    and:

    cryptsetup luksHeaderRestore /dev/sdbX --header-backup-file /path/to/luks-headers-backup/backed-up-header.bin

    to restore the backed-up header.

     

    I also see less risk of messing something up than with the "dd" command, which, as I understand it, can be very destructive if not used correctly (the Wikipedia page says that "dd is sometimes humorously called 'Disk Destroyer', due to its drive-erasing capabilities").

     

    The script would then look like:

    for i in {/dev/sd*,/dev/nvme*}; do if cryptsetup luksDump $i &>/dev/null; then cryptsetup luksHeaderBackup $i --header-backup-file $(udevadm info --query=all --name=$i | sed -n 's/.*ID_SERIAL=//p').bin; fi; done

     

    What do you think?

     

    Best,

    OP

     

    More on the dd command:

    https://opensource.com/article/18/7/how-use-dd-linux

     

    Interesting video on LUKS:

    https://youtu.be/5rlZtasM-Pk?t=598
