Posts posted by pwm

  1. 13 hours ago, strahd_zarovich said:

    Yes to ECC, but I didn't look at the bios log. I will check that when I get a chance.

     

    Memtest may stress-test the memory and provoke errors, but ECC will hide any single-bit error, so the Memtest program will not display them. You need to look in the BIOS log to see the actual errors - then you can figure out which memory module is causing them, or whether multiple memory slots have issues. Obviously, you can get memory errors without a bad module if you have overclocked the memory or the memory controller.

  2. 15 hours ago, kevin_h said:

    I am only using 14 out of 24 bays so I need to spread them out in my case to get better airflow over each drive.

     

    In some situations, empty slots can give worse airflow - the air meets less resistance moving through the empty slots than being forced through the narrow channels between the disks.

  3. 2 hours ago, Neo_x said:

    thx guys. i will put a temporary measure on to see if i can drop the temperature.
    currently air is being sucked over the drives and pushed through cpu and then out, so shouldn't be an issue. might be that the fans is a bit low CFM, but will try to manage/upgrade with a controller.

     

    Even slow fans would normally manage better - do all drives get that hot, or do you have some drives whose air circulation is blocked or that aren't in the direct path of the moving air?

  4. 16 minutes ago, demonmaestro said:

    Although something that is making me wonder is I keep having notifications saying udma crc error count returned to normal value.


    This doesn't make much sense - the UDMA CRC error count just keeps incrementing. It never gets reset back to zero. What is the exact text you see in the notifications?

  5. 65°C would be a warranty-breaking temperature for most HDDs.

     

    You need to fix this issue immediately - hard disks don't really produce much heat, but even 10W of heat without moving air will result in a very significant temperature increase. Your fans must move air over the drives - preferably cool air sucked into the case, not air that has already been heated by the CPU, PSU, graphics card etc.

  6. 1 hour ago, scufless said:

    Got it to work.  So what I did was precleared the disk and re formatted it to NTFS and that seemed to work.  I guess Windows 10 doesn't like vdisks 

     

    No virtual machine likes vdisks. vdisks are virtual disks - i.e. containers that are intended to contain an OS-supported file system. The vdisk container isn't for Windows but for the VM host (unRAID), while the contents of the container are for the VM (Windows).

     

    A physical disk, on the other hand, isn't a virtual disk, and when you hand over a physical disk to Windows, the disk itself must contain a partition with an OS-supported file system (NTFS).
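
    For illustration only (the path and size below are made up, not from this thread): a raw vdisk is essentially just a large file on the host, which is why Windows can't use the file itself - only the file system created inside it:

    ```shell
    # Create a sparse 500 GB raw vdisk file on the host.
    # The host sees a file; the guest sees a blank disk
    # that it must partition and format itself.
    truncate -s 500G /mnt/user/domains/win10/vdisk1.img
    ls -lh /mnt/user/domains/win10/vdisk1.img
    ```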

  7. 5 minutes ago, scufless said:

    thanks for the reply.  any reason why I would be still getting this error?

    [screenshot]

     

    You are requesting the storage of a 500 GB disk image file on a raw disk - but unRAID expects the destination to be a partition with a file system.

     

    I'm not sure if this video is up to date, but it relates to handing over physical drives:

    https://www.youtube.com/watch?v=QaB9HhpbDAI

  8. 39 minutes ago, jbonnett said:

    I set them both to 1%  tried a 8GB transfer and I still get silly speeds. Just in case you don't know from the diagnostics I have 32GB RAM.

     

    Copying from /dev/zero can sometimes give silly speeds when actually writing to the drive - a number of SSDs perform on-the-fly compression of data to reduce flash wear, and the compression ratio of the content from /dev/zero is very, very high. So the write speed ends up being the link speed to the drive instead of the actual write speed of the drive.
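
    A rough way to see the effect, using gzip as a stand-in for the SSD controller's compression (just an illustration - not what the controller actually runs):

    ```shell
    # 1 MiB of zeros compresses to roughly a kilobyte...
    head -c 1048576 /dev/zero | gzip -c | wc -c
    # ...while 1 MiB of random data stays at roughly 1 MiB
    head -c 1048576 /dev/urandom | gzip -c | wc -c
    ```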

  9. 8 hours ago, tr0910 said:

    Any file is good so your dinner photo is fine. unRaid will use the first 8mb of the file and ignore the rest (need to confirm 8mb part), so it could even be a huge file.

    I like your dinner photo but Ssh keygen might be more safe from a crypto standpoint.


    The underlying encryption for the disks uses 128-bit or 256-bit keys. No key randomness can give more security than that limit. So the reason that quite a lot of data is processed from the key file is just to cover for the key file potentially containing bad random data.

     

    In the end, the weakest link with a key file is that it's a file. If the user has 1 million files then it's only 2^20 files to test. And the attacker doesn't need encryption skills to figure out which file.

     

    But even without knowing which file to use, selecting the correct file to unlock with is basically 20 bits of security. Requiring two files to be concatenated and used as the key file would improve the security, since it isn't just 2^20 files to test anymore but suddenly 2^40 file combinations. And concatenating and testing arbitrary combinations of two files out of 2^20 represents a huge amount of time. But this concept also fails in reality, because the attacker must be expected to have access to the script that performs the concatenation.
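
    The numbers are easy to sanity-check with shell arithmetic (1 million files rounded to 2^20, written as bit shifts):

    ```shell
    # one guess per candidate file: 2^20 = ~20 bits of security
    echo $(( 1 << 20 ))
    # ordered concatenations of two files: 2^40 = ~40 bits
    echo $(( 1 << 40 ))
    ```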

     

    In the end, no special tool needs to be used. Any reasonably large file that contains some form of randomness is good enough, and photos fulfill these requirements. But the problem is stopping an attacker from figuring out which file it is and how to get access to it. So automatic startup using file data stored outside of the machine is really only protection against a stupid burglar who walks away with the machine and only has access to the data they carried away.

  10. Just now, itimpi said:

    Have you raised a formal request to get the groups file included in the Config folder on the flash drive?   If there are no other ramifications I expect LimeTech would implement this as being a trivial change.   This is obviously a much smaller change than getting full baked-in support for groups but still worth asking for as a small step on the way.

     

    No, I haven't. Most of the time when I need changes to an unRAID machine, I just make the changes I need. But obviously I sometimes have to repair/modify my own patching after unRAID updates - such as my original iptables firewalling of the machines, which had to be modified after unRAID started to use iptables for Docker.

  11. 1 hour ago, itimpi said:

    I thought you might have simply gotten away with copying the /etc/group file onto the flash drive and back into position rather than recreating the users?    This would be simpler than recreating the groups on each boot.    However it may well be more complicated than that and recreating the users each time may be easier to maintain?

     

    Yes, I could. But the problem is that I don't "own" the group file, and so don't know what requirements unRAID might have. If I update and unRAID requires additional system groups that I don't know about, then I will overwrite them. That's why I would have liked unRAID to change behavior and store the group file in the config directory, just like the password file, instead of trying to force a system-supplied file on us.

     

    I feel patching the file is the most compatible way to make use of groups.

     

    I would like LT to spend some time on hardening unRAID - official use of account groups, firewall rules etc. The IoT revolution means people will bring in hundreds of new networked devices with totally unknown security levels - so there are just so many more ways we may get infestations in our local networks.

  12. 1 minute ago, itimpi said:

    Good to hear!

     

    Have you copied the files that get altered (e.g /etc/groups) to the flash drive, and then added entries into the ‘go’ file to copy them back into position during the boot process?    This is needed as unRAID is running from RAM so you need to take positive action to make such changes survive a reboot.

     

    Perhaps at the end you could create a brief ‘How To’ post in case anyone else has similar needs in the future?


    Yes, I'm a bit sad that the groups file isn't represented in /boot/config like the other files.

     

    So the machine needs to recreate custom groups and assign users to them on boot (the 'go' file), like this:

    root@n54l-3:/etc# groupadd -g 1101 pwm_test
    
    root@n54l-3:/etc# usermod -a -G pwm_test fs_cesium
    
    root@n54l-3:/etc# tail -1 group
    pwm_test:x:1101:fs_cesium

    And it's obviously important to reuse the same group ID on every boot - and use an ID that isn't likely to collide with future unRAID versions.

    root@n54l-3:/mnt/disk2# ls -l /mnt/disk2/radium/
    total 0
    drwxrws--- 2 root      pwm_test 112 Jun 28 00:07 test/
    -rwxrwx--- 1 fs_cesium pwm_test   0 Aug 25 12:27 test-pwm_test*
    
    root@n54l-3:/mnt/disk2# ls -l /mnt/user/radium
    total 0
    drwxrws--- 1 root      pwm_test 112 Jun 28 00:07 test/
    -rwxrwx--- 1 fs_cesium pwm_test   0 Aug 25 12:27 test-pwm_test*

    And I like to have:

    chmod 2770 <dirname>
    

    so new content created in the directory will inherit the group instead of getting the primary group of the account adding the content.
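
    Put together, the 'go' file entries for the example above could look like this (the group name, GID, user name and path are just the ones from my example):

    ```shell
    # /boot/config/go additions - recreate custom groups on every boot,
    # since unRAID runs from RAM and /etc/group is rebuilt each time.
    groupadd -g 1101 pwm_test              # fixed GID, chosen to avoid collisions with future unRAID versions
    usermod -a -G pwm_test fs_cesium       # re-add the user to the group
    chmod 2770 /mnt/disk2/radium/test      # setgid bit so new content inherits the group
    ```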

  13. On 8/23/2018 at 5:22 PM, itimpi said:

    Although unRAID is based on Linux, this will not be easily done without a lot of command line work.   You have to work out how to get this to be handled correctly at both the share (samba) and Linux levels


    I have manually (i.e. on the command line) made use of group rights and it works well.

  14. 14 hours ago, bonienl said:

    If the PC I am using to connect to unRAID is hijacked and some key-logger is installed, it really doesn't matter whether HTTP or HTTPS is used, they will "see" anyway what I am doing.

     

     

    A key logger doesn't catch passwords pasted from a password manager, because they aren't entered as key presses.

     

    But DNS poisoning is an attack that just requires access to the network broadcast domain.

     

    12 hours ago, John_M said:

    But even when you have a valid certificate, if it can't be verified as being valid because the signing authority's responder doesn't respond you get an error!

     

    The browser doesn't really need to visit the signing authority - the reason it wants to is to check the certificate revocation status. Most SSL-based applications don't bother with certificate revocation, so they never contact the CA. But depending on the information encoded in the certificate, web browsers normally want that additional step of security.

  15. 5 hours ago, ashman70 said:

    A second array is not a feature of unRAID, not sure if it ever will be.

     

    I really do believe unRAID needs to support multiple arrays. I can't use unRAID for my main machine just because it can't do multiple arrays. And I don't want to run multiple unRAID instances in individual VMs.

     

    But a secondary array isn't meaningful just for storing a backup of the main array so the drives can be removed and stored somewhere else. Any backup to disks that will be disconnected really should be made to normal UD disks, without attempting to introduce any additional parity. Then the user has the option of replacing just a single broken data disk and rebuilding parity, or replacing all the data disks and rebuilding parity.

     

    If the user wants a full array for backup of the main array, then the correct way is to use two machines - that's the only way to make sure the backup array doesn't get destroyed when the PSU in the main machine suffers a catastrophic failure and burns all connected electronics.

  16. 7 hours ago, atconc said:

     

    I'd rather not give up that much capacity if possible, so being able to enable trim, with my eyes open to the security impact would be good.

     

    Note that TRIM only works on unused space, so if you don't have a significant percentage of free space, TRIM will not be efficient. And without a significant percentage of free space, the wear on the flash will greatly increase, because the flash controller has to move data around - erasing a block of data and directly writing back half of its content one more time.

     

    For first-generation SSDs, each flash block could handle 100k erase cycles. Then it became 40k. Then 10k. Then 3k. Most of today's SSDs have flash that can handle 300-400 erase cycles. And if you have a factor-20 write amplification, then you get 400 / 20 - you end up only being able to write the full capacity 20 times before you have put 400 full-disk writes on the actual flash media.
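
    The worst-case arithmetic from above, spelled out:

    ```shell
    erase_cycles=400    # rated erase cycles per flash block on modern SSDs
    write_amp=20        # write amplification on a nearly full drive without TRIM
    # full-capacity writes before the flash is worn out
    echo $(( erase_cycles / write_amp ))
    ```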

  17. 1 hour ago, bonienl said:

    A local third party can not see all traffic your server is sending. A switch relays frames between peers only and unless that third party is connected to a monitor (span) port of the switch, it may catch the occasional broadcast packets,  but nothing sensitive.

     

    Unless DNS is used and an attacker is using DNS poisoning to capture the connection.

  18. 3 hours ago, supermew10 said:
    
    smtp.mailgun.org

    should be correct (got this info from domain information on my mailgun page)

     

    You configured your mail with *.org.

    But did the telnet test with *.com.

    So still no connection test with the hostname you have configured.

  19. 12 hours ago, ashman70 said:

    At the end of the day, unless you are storing ultra classified, or secret stuff you don't want anyone to see, I'd have to ask WHY would you do it?

     

    The main advantage of encrypting the disk is that if you have a warranty issue, you can send in the drive as-is without caring about the content.

     

    An unencrypted disk that fails may be so broken that it can't even be erased.

     

    But as noted by @John_M - most people don't need encryption.

     

    How much does it slow down the system? That depends completely on the amount of processing power. Note that unRAID parity work is done on raw disk sectors, so it isn't affected by the encryption - unRAID does that work below the file system layer.

  20. 46 minutes ago, Marino said:

    Emulation is normally not as good as native.

     

    Correct, if you use an OS and/or file system that performs 512-byte writes.

    But just about all file systems in existence have been using a 4kB cluster size (or larger) for many years.

    So a slowdown only happens if there is an unaligned partition - and the partition editors (and unRAID) understand this.

     

    So for a modern OS, the emulation is just a question of which address the OS should send to access a specific block of data. There is no speed difference involved - just a question of compatibility.

     

    48 minutes ago, Marino said:

    But is there any reason to buy 4Kn when 512e also uses 4k Sectors?

     

    It really doesn't matter which one you buy, as long as you have an OS that supports a 4kB sector size. If you're using an older OS that assumes 512-byte sectors, then you are forced to buy a 512e drive. So x years from now, most 512e drives will vanish from the market, because there isn't a need for them other than as spare parts for ancient machines.

     

    It can confuse users when an HDD that has one broken sector reports 8 broken sectors in SMART. That's what happens with a 512e drive, since the broken 4kB native sector is presented as 8 virtual 512-byte sectors.
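
    The factor of 8 is just the ratio of the two sector sizes:

    ```shell
    # one native 4kB sector maps to 8 emulated 512-byte sectors
    echo $(( 4096 / 512 ))
    ```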

     

     
