itimpi

Community Developer

Everything posted by itimpi

  1. Unraid does not normally care how the disks are connected - it recognizes the disks by their serial numbers. The only time you might have any issues is when the controllers involved report the serial numbers in different formats.
  2. Mine never seems to get too hot to touch so heat does not seem to be a problem for me. I am using an LSI 9201-16i card.
  3. Unraid does not include any support for repairing Bit Rot. If you use BTRFS as the file system on a drive then it has built-in support for detecting corruption. If you use XFS then you can install the File Integrity plugin to help detect this type of issue. In both cases it is assumed that you restore from your backups to correct such an issue. In practice, with modern hardware Bit Rot is rare and there are many other ways to corrupt data that have nothing to do with Bit Rot and are more likely to happen. (There is a conceptual sketch of this sort of checksum-based detection at the end of this list.) Unraid allows a share to span multiple drives as a standard feature. The only restriction is that an individual file has to fit on a single drive (this is a by-product of the fact that each Unraid drive has a discrete file system that in emergencies can be read outside the Unraid array).
  4. I applied the Red version; left it for about 10 minutes; used a cotton bud to clean off any excess left; applied the Gold version; left that for about 5 minutes; plugged the drive back in. If I have any reason to take the system apart I might do the SATA cables as well, and possibly even those inside the hot-swap cages.
  5. I was using the Red version to 'clean' the contacts and then the Gold version to re-plate the contacts. I just did it to the drives (I have hot-swap cages), but I cannot see how using it on the other parts in the connection path can cause an issue as long as you are careful about it.
  6. Just thought it might be worth mentioning that I was getting CRC errors at regular intervals on some drives that stopped happening when I used DeoxIT on the SATA connectors for the drives. Something that is easy to try if swapping cables does not seem to be helping.
  7. This is slightly on the slow side, but not excessively so. If you do not mind all your disks spinning you might want to look at what performance you get using Turbo Write mode. There is an associated plugin to help automate when Turbo Write mode should be used if you do not want to control it manually.
  8. I did not think the RC was even trying to address this problem? Instead it is focused on getting to the bottom of why some users are experiencing SQLite DB corruption. Having said that, I guess the two issues could be related in some way.
  9. It is normally used when you want to reset the array for some reason and not as part of normal day-to-day array operation.
  10. I am not sure how you copied the original folders that gave you multiple User Shares. Did you copy them directly to a ‘diskX’ type share, bypassing the User Share level? It might be worth posting your system diagnostics zip file (obtained via Tools->Diagnostics) so we can see if there is something else going on. Are you comfortable with using the command line if you are given some commands to run at that level to help diagnose your issue and get you to a state where things start working as expected?
  11. It is still worth checking whether your processor supports VT-x as that is enough to run a VM if you do not want to pass through hardware such as a GPU to a VM and are happy to live with emulated versions. Such VMs can still be very useful if you are not worried about the graphics performance of a VM but are intending to use it for some other purpose. Most processors that are 64-bit capable support at least that level of virtualisation. As was mentioned, it does also need virtualisation to be enabled in the motherboard BIOS settings.
  12. Probably too late now, but Squid had said you could use Tools->New Config as the first step in his suggested approach. I suspect you missed that :). It is not part of normal array expansion but an alternative approach that could be used in your particular case. The New Config option invalidates parity (but you have done that anyway) and allows you to make any changes to the data drive selection that you want and then builds parity based on the new selection of data drives. It exploits the fact that as long as a data drive has previously been used by Unraid its contents are left intact when you first start the array and commit the new drive selection.
  13. If you do not format the drive then although it is part of the array it will not be mounted as it does not yet have a file system, so Unraid will not attempt to write to it. It is only after parity has been successfully built, and you decide to go ahead with formatting the drive, that Unraid will create a file system on the drive, mount it, and start writing to it.
  14. It looks as though the different controllers are reporting the disk serial in slightly different formats which is why Unraid is not realising they are the same drive.
  15. I have seen a similar problem on Windows. In that case it is because Windows does not allow two connections at the same time to the same server with different users from a particular client. It just fails the second connection, reporting a failure to authenticate. I wonder if macOS has a similar limitation.
  16. 8.8GB is only about 0.1% of an 8TB drive. That is typical for such a disk with almost no files on it, as roughly that much is the overhead for creating the file system control structures when you format a disk. I suspect you were interpreting the 8.8GB as 8.8TB.
  17. For shares set to Use Cache = Yes, files are moved to the array when mover runs (typically scheduled to run overnight). There is no real-time replication. For Use Cache = Only/Prefer, where the files stay on the cache, you need at least 2 drives in the cache for redundancy on the cache. You also need the cache to use BTRFS RAID1 (which is the default).
  18. It is worth pointing out that you can also write scripts in Unraid using PHP. I have started using this in preference to bash as it provides far more facilities and is closer to traditional programming languages from a syntax perspective. (There is a small example sketch at the end of this list.)
  19. Although technically this is supported, it is not recommended, as Unraid will tend to lose track of an array drive if it momentarily disconnects. It is OK for drives that are not part of the array. Docker compose is not supplied as standard although it is available via a plugin. There is no built-in GUI support for docker compose so you need to drive it via CLI commands. Unraid DOES provide GUI support for running individual containers and this seems to satisfy the vast majority of users. Unraid does not require a dedicated GPU as it is administered via a browser. There are plenty of forum members who pass all GPUs to VMs.
  20. That is because each 'write' operation is not simply a write. It involves:
      - Reading the appropriate sector from both the target drive and the parity drive(s) in parallel.
      - Calculating the new contents of the parity sector(s) based on the changes to the target drive data and the current parity drive(s) data.
      - Waiting for the parity drive(s) and target drive to complete a disk revolution (as this is slower than the previous step).
      - Writing the updated sector to the parity drive(s) and the target array drive.
      In this mode there is always at least one revolution of both the parity drive(s) and the target drive (whichever is slowest) before the 'write' operation completes, and this is what puts an upper limit on the speed that is achievable. The Turbo write mode tries to speed things up by eliminating the initial read of the parity drive(s) and target drive by:
      - Reading the target sector from ALL data drives except the target drive (in parallel). The parity drive(s) are not read at this stage.
      - Calculating the new contents of the parity sector(s) based on the contents of the target drive and the equivalent sectors on all the other data drives (this is the same calculation as that done when initially building parity).
      - Writing the updated sector(s) to the parity drive(s).
      Whether this actually speeds things up is going to vary between systems as it depends on the rotational state of many drives, but it tends to, as it eliminates the need to wait for a full disk revolution on both the parity drive(s) and the target drive, which tends to be the slowest step. In both cases the effective speed will be lower than raw disk performance might suggest. The potential attraction of SSD-only arrays (that some users have been discussing) is that delays due to disk rotation are eliminated, thus speeding up the above processes. (There is a small worked example of the parity arithmetic at the end of this list.)
  21. The array is available during the rebuild but performance will be degraded until the parity build completes. The time has nothing to do with the amount of data on the array, but is determined by the size of the largest parity drive. It will take something like 2-3 hours per TB.
  22. That sounds like a good speed for writing to the parity-protected array. You might get better speeds if you have Turbo write enabled, but that is at the expense of needing all drives to be spinning, which is not the case for the default mode.
  23. Not sure why you are getting the issue. There have been previous reports of some motherboards only booting in GUI mode but I am not sure any root cause has ever been identified. I do not understand the comment about it being difficult to reboot if you boot in GUI mode? I have mine set to default to that even though my Unraid server is normally running headless (just in case I want to plug in a monitor+keyboard) and it does not stop me rebooting in any way.
  24. It is worth pointing out that if you both upgrade the existing drive and add the new one in a single operation your data is not protected until the parity has been rebuilt. Another option (but it takes longer) is to first add the second parity and let that build. When that has completed you can replace the first parity disk and let that build. Using that procedure you are always protected against a single drive failing.
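
The File Integrity plugin mentioned in post 3 works by recording checksums for your files and flagging any later mismatch. Purely as an illustration of that idea (this is not the plugin itself, and the folder and checksum-file paths are made-up examples), a minimal PHP sketch could look like this:

<?php
// Illustration only: record SHA-256 checksums for the files under a folder,
// then compare them on a later run to spot silent corruption.
// Note that a file you have legitimately modified will also show a mismatch,
// so a real tool has to allow for that.

function buildChecksums(string $dir): array
{
    $sums = [];
    $it = new RecursiveIteratorIterator(
        new RecursiveDirectoryIterator($dir, FilesystemIterator::SKIP_DOTS)
    );
    foreach ($it as $file) {
        if ($file->isFile()) {
            $sums[$file->getPathname()] = hash_file('sha256', $file->getPathname());
        }
    }
    return $sums;
}

$share = '/mnt/disk1/Photos';              // example folder to watch
$store = '/boot/config/photos.sums.json';  // example place to keep the baseline

if (!file_exists($store)) {
    // First run: record the baseline checksums.
    file_put_contents($store, json_encode(buildChecksums($share)));
    echo "Baseline recorded for $share\n";
} else {
    // Later runs: report any file whose content no longer matches the baseline.
    $baseline = json_decode(file_get_contents($store), true);
    foreach (buildChecksums($share) as $path => $sum) {
        if (isset($baseline[$path]) && $baseline[$path] !== $sum) {
            echo "Checksum mismatch (restore from backup?): $path\n";
        }
    }
}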
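
Following on from post 18, this is a minimal sketch of the sort of standalone script that can be written in PHP and run on Unraid with the php command. It simply reports free space on the per-disk mount points; treat it as an example rather than anything definitive:

<?php
// Example only: report free space on each mounted array disk.
// Unraid mounts data disks at /mnt/disk1, /mnt/disk2, ... - the [0-9] in the
// pattern avoids matching other /mnt entries.

foreach (glob('/mnt/disk[0-9]*') as $disk) {
    $free  = disk_free_space($disk);
    $total = disk_total_space($disk);
    if ($free === false || $total === false) {
        continue;   // skip anything that could not be queried
    }
    printf("%-12s %7.1f GiB free of %7.1f GiB\n",
        basename($disk),
        $free  / (1024 ** 3),
        $total / (1024 ** 3));
}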
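
To make the description in post 20 more concrete, here is a small worked example of the single-parity XOR arithmetic behind the two write modes. It is purely illustrative (it is not Unraid source code) and uses one byte per drive to stand in for a sector; dual parity adds a second, more involved calculation that is not shown here.

<?php
// Illustration only: the XOR arithmetic behind single parity.

// Read/modify/write (default) mode: only the target drive and the parity
// drive are read, then  new parity = old parity XOR old data XOR new data.
function rmwParity(int $oldParity, int $oldData, int $newData): int
{
    return $oldParity ^ $oldData ^ $newData;
}

// Turbo (reconstruct) write mode: every other data drive is read instead and
// parity is recomputed from scratch, exactly as during an initial parity
// build:  new parity = XOR of the corresponding sector on every data drive.
function turboParity(array $dataSectors): int
{
    $parity = 0;
    foreach ($dataSectors as $sector) {
        $parity ^= $sector;
    }
    return $parity;
}

// Example: three data drives, with drive 0 being updated from 0x12 to 0x99.
$data      = [0x12, 0x34, 0x56];
$oldParity = turboParity($data);          // parity before the write

$viaRmw  = rmwParity($oldParity, 0x12, 0x99);
$data[0] = 0x99;                          // the write to the target drive
$viaTurbo = turboParity($data);

var_dump($viaRmw === $viaTurbo);          // bool(true): both routes agree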