Everything posted by JorgeB

  1. I have one from a different manufacturer but virtually identical, same Marvell 9230 chipset; it works fine, but there can be issues if you use virtualization.
  2. This is a good tip, I use it myself. Just a small correction: the number is a percentage of your free RAM; I usually set it to 99. Edit to add: this tweak should only be used on a server with a UPS.
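If the tip being referred to is the Linux dirty page cache tweak (that's my assumption, the setting isn't named here), a minimal sketch of checking and applying it from the console would be:
      sysctl vm.dirty_ratio                 # show the current value (percentage of RAM allowed to hold unwritten data)
      sysctl -w vm.dirty_ratio=99           # set it to 99 as mentioned above; not persistent across reboots
The UPS warning follows from this: anything still sitting in RAM when the power drops is lost, so only push the ratio this high on a server with a UPS.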
  3. Performance should be similar for HDDs; for SSDs use eSATA.
  4. Didn't see the sig since they don't appear on Tapatalk, but I still believe my impression was right: some users confuse parity disk size with parity-protected array size.
  5. Probably means 9TB total array size; based on the duration I'm guessing the parity disk is 3TB.
  6. v6.2-beta21 uses Samba 4.4 with experimental SMB multichannel support. I can't get it to work for now, but it should work in the near future.
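For reference, and assuming a stock Samba 4.4 setup rather than anything unRAID-specific, multichannel is enabled with a single (still experimental) smb.conf option in the [global] section:
      server multi channel support = yes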
  7. You can improve it a lot by upgrading all the smaller disks to 4TB; you would also need to upgrade the PCIe x1 controller. It's still difficult to go below 8 hours with 4TB disks. I consider anything above 100MB/s a good average speed.
  8. It works. Regarding the disk cache being formatted as fuseblk: the only supported file systems are ReiserFS, btrfs and XFS. This error should only happen if you are setting up a new array and the disk already has data on it. Before proceeding with a fix you should seek assistance in the forums, as the disk may simply be unmountable. Whatever you do, do not hit the format button on the unRAID main screen, as you will then lose data.
  9. Yes and yes. These controllers need some airflow near them or they run very hot; a standard 80mm or 120mm fan blowing some air on the heatsink is enough.
  10. The H310 has to be crossflashed to work with unRAID. A crossflashed H310 = LSI 9211-8i.
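For anyone searching later: the crossflash itself is normally done with LSI's sas2flash utility. This is only a rough sketch, the firmware file names are placeholders from the usual guides, and Dell-branded cards typically need their original firmware wiped first (the megarec steps in those guides) before sas2flash will accept the LSI image:
      sas2flash -listall                          # confirm the controller is detected
      sas2flash -o -f 2118it.bin -b mptsas2.rom   # flash the 9211-8i IT firmware, boot BIOS image optional
Skipping the -b option leaves the card without a boot BIOS, which is fine (and slightly faster at boot) if you don't need to boot from it.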
  11. I believe DD output is also average speed:
      unRAID Server Pre-Clear of disk /dev/sdf
      Cycle 1 of 1, partition start on sector 64.
      Step 1 of 3 - Zeroing in progress: (58% Done)
      ** Time elapsed: 6:19:25 | Write speed: 155 MB/s | Average speed: 154 MB/s
      Cycle elapsed time: 6:19:29 | Total elapsed time: 6:19:29
Current disk speed is <140MB/s
  12. Initial speed was 175MB/s, now @ 50%, disk write speed at this point is <150MB/s.
      unRAID Server Pre-Clear of disk /dev/sdf
      Cycle 1 of 1, partition start on sector 64.
      Step 1 of 3 - Zeroing in progress: (48% Done)
      ** Time elapsed: 5:06:58 | Write speed: 159 MB/s | Average speed: 159 MB/s
      Cycle elapsed time: 5:07:01 | Total elapsed time: 5:07:02
  13. Not really a big deal, but the speeds are not correct: write speed should be lower than the average as the disk goes to the inner tracks, yet they are always similar, so it looks to me like the current write speed is the wrong one.
      unRAID Server Pre-Clear of disk /dev/sdf
      Cycle 1 of 1, partition start on sector 64.
      Step 1 of 3 - Zeroing in progress: (30% Done)
      ** Time elapsed: 3:04:35 | Write speed: 166 MB/s | Average speed: 166 MB/s
      Cycle elapsed time: 3:04:38 | Total elapsed time: 3:04:39
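To illustrate the point: the "Average speed" behaves like total bytes written divided by elapsed time, and the "Write speed" tracks it almost exactly. A quick sanity check from the console, assuming a 6TB disk (an assumption, but it matches the durations above), using the 48% / 5:06:58 figures from the earlier post:
      awk 'BEGIN{elapsed=5*3600+6*60+58; bytes=0.48*6e12; printf "%.0f MB/s\n", bytes/elapsed/1e6}'
This prints roughly 156 MB/s, very close to the 159 MB/s reported, which is why the "current" value looks like just another running average rather than an instantaneous speed.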
  14. Tools -> File Integrity, then check the box for the disk you want to verify and click the Check button. That will check the disk against a previously exported checksum file. I'm not aware of a simple way of forcing the checking of a disk against the checksums stored in extended attributes without resorting to the command line or fiddling with the crontab.
      "Yes, that much I figured out myself. However, I don't export checksums, so it leaves me in limbo."
Do an export before running the check, it should only take a few seconds.
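For anyone who does want to check against the extended attributes by hand, the stored hashes live in user.* attributes on each file; a rough sketch only (the exact attribute name depends on which hash method the plugin was configured with, so treat that as an assumption):
      getfattr -d -m user. /mnt/disk1/path/to/file    # dump all user.* attributes, one of them holds the stored hash
      md5sum /mnt/disk1/path/to/file                  # recompute and compare, e.g. for the MD5 method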
  15. A little adjustment is needed for System Stats with the new Pro limits: although the stats look correct, it only counts up to 26 disks; I'm using 28 array disks in this example.
  16. Can I manually create and use multiple btrfs pools?
Multiple cache pools are supported since v6.9, but for some use cases this can still be useful: you can also use multiple btrfs pools with the help of the Unassigned Devices plugin. There are some limitations, and most operations for creating and maintaining the pool will need to be done using the command line, so if you're not comfortable with that, wait for LT to add the feature. If you want to use them now, here's how (a consolidated example follows the "Replace a device" section below):
- If you don't have it yet, install the Unassigned Devices plugin.
- It's better to start with clean/wiped devices, so wipe them or delete any existing partitions.
- Using UD, format the 1st device using btrfs, choose the mount point name and optionally activate auto mount and share.
- Using UD, mount the 1st device; for this example it will be mounted at /mnt/disks/yourpoolpath.
- Using UD, format the 2nd device using btrfs; no need to change the mount point name, and leave auto mount and share disabled.
- Now on the console/SSH add the device to the pool by typing:
      btrfs dev add -f /dev/sdX1 /mnt/disks/yourpoolpath
  Replace X with the correct identifier; note the 1 at the end to specify the partition (for NVMe devices add p1, e.g. /dev/nvme0n1p1).
- The device will be added and you will see the extra space on the 1st disk's free space graph; the whole pool will be accessible at the original mount point, in this example /mnt/disks/yourpoolpath.
- By default the disk is added in single profile mode, i.e., it will extend the existing volume. You can change that to other profiles, like raid0, raid1, etc.; e.g., to change to raid1 type:
      btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/disks/yourpoolpath
  See here for the other available modes.
- If you want to add more devices to that pool just repeat the process above.
Notes:
- Only mount the first device with UD; all other members will mount together despite nothing being shown in UD's GUI. Same for unmounting: just unmount the 1st device to unmount the pool.
- It appears that if you mount the pool using the 1st device, used/free space is correctly reported by UD, unlike if you mount using e.g. the 2nd device. Still, for some configurations the space might be incorrectly reported; you can always check it using the command line:
      btrfs fi usage /mnt/disks/yourpoolpath
- You can have as many unassigned pools as you want. Example of how it looks in UD: sdb+sdc+sdd+sde are part of a raid5 pool, sdf+sdg are part of a raid1 pool, and sdh+sdi+sdn+sdo+sdp are another raid5 pool. Note that UD sorts the devices by identifier (sdX), so if sdp were part of the first pool it would still appear last; UD doesn't reorder the devices based on whether they are part of a specific pool. You can also see some of the limitations, e.g. no temperature is shown for the secondary pool members, though you can see temps for all devices on the dashboard page. Still, it allows you to easily use multiple pools until LT adds multiple cache pools to Unraid.
Remove a device:
To remove a device from a pool type (assuming there's enough free space):
      btrfs dev del /dev/sdX1 /mnt/disks/yourpoolpath
Replace X with the correct identifier; note the 1 at the end. Note that you can't go below the used profile's minimum number of devices, i.e., you can't remove a device from a 2-device raid1 pool; you can convert it to the single profile first and then remove the device. To convert to single use:
      btrfs balance start -f -dconvert=single -mconvert=single /mnt/disks/yourpoolpath
Then remove the device normally like above.
Replace a device:
To replace a device in a pool (if you have enough ports to have both the old and the new device connected simultaneously): you need to partition the new device, so format it using the UD plugin (any filesystem will do), then type:
      btrfs replace start -f /dev/sdX1 /dev/sdY1 /mnt/disks/yourpoolpath
Replace X with the source and Y with the target; note the 1 at the end of both. You can check the replacement progress with:
      btrfs replace status /mnt/disks/yourpoolpath
If the new device is larger you need to resize it to use all the available capacity; you can do that with:
      btrfs fi resize X:max /mnt/disks/yourpoolpath
Replace X with the correct devid, which you can find with:
      btrfs fi show /mnt/disks/yourpoolpath
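Putting the steps above together, a minimal end-to-end example for a two-device raid1 pool could look like this (device names and the mount point are placeholders, and the 1st device is assumed to already be formatted and mounted by UD at /mnt/disks/yourpoolpath):
      btrfs dev add -f /dev/sdY1 /mnt/disks/yourpoolpath                            # add the 2nd device to the pool
      btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/disks/yourpoolpath   # convert data and metadata to raid1
      btrfs fi usage /mnt/disks/yourpoolpath                                        # confirm the profile and free space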
  17. Why is my cache disk(s) unassigned after a reboot? Some users have this issue when using Chrome to add cache disk(s): a single cache disk, or one or more disks from a cache pool, don't stay assigned after a reboot. Use IE or Firefox for this operation, and use it for the complete procedure: assign the cache disk(s), start the array, stop the array and reboot; your assignments should stick now. If using a different browser doesn't work, boot unRAID in safe mode, assign your cache devices, start the array, stop the array and reboot in normal mode; the assignments should now stick.
  18. There are two issues with Chrome that come up again and again (although they only affect some users), in case you want to add them to the FAQ: 1 - changing the number of array disk slots crashes the WebGUI; 2 - adding a cache disk (or adding more disks to the cache pool) doesn't stick after a reboot. Affected users should use a different browser for these operations; for the cache pool issue the whole procedure should be done with it: assign the cache disk(s), start the array, stop the array and reboot.
  19. The time I posted on your other thread was from a rebuild, it's also on this thread somewhere.
  20. Since the disks are already part of the array I would look at the SMART reports; if all looks fine, and assuming by the models that they were previously used with no issues, then do a parity check. If there are doubts, do an extended SMART test before the parity check.
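For reference, assuming smartctl is available on the server and sdX is the disk in question, those checks would be:
      smartctl -a /dev/sdX        # full SMART report
      smartctl -t long /dev/sdX   # start an extended self-test (takes several hours)
      smartctl -a /dev/sdX        # run again later to see the self-test result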
  21. I believe Auto is for future use, for now it works the same as normal writing mode.
  22. This is what caused the rebuild, you could stop it, do a new config and a parity check, but it's easier and safer to just let the rebuild finish.
  23. Did you start the array when you selected both cache slots?
  24. Choosing more than 1 cache slot sets the cache fs to btrfs; change it back to whatever fs your cache is and it should mount again.
  25. Just wanted to add 2 more things I forgot to mention about MHDD: there's a Linux clone, http://whdd.org/demo/, and there's even a Slackware package, but it's 32-bit. While the graphic part of MHDD is pretty, what I really care about are the stats; below are two examples, a good disk and a bad one that at 4% of the scan already has several slow sectors. http://s10.postimg.org/gjso76ort/2016_03_31_09_55_54.jpg http://s10.postimg.org/8guhvv46h/2016_03_31_16_59_10.jpg Good disks usually have very few sectors >10ms, but it's normal to have a handful below 150ms; on a healthy disk there shouldn't be any >150ms.
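If you only care about the latency stats and not the MHDD interface, a crude approximation can be done from the console with dd, timing sequential reads and flagging anything in the >150ms bucket; this is only a sketch (read-only, scans just the first 1000 MiB of a placeholder device, adjust to taste):
      DEV=/dev/sdX
      for i in $(seq 0 999); do
          start=$(date +%s%N)
          dd if=$DEV of=/dev/null bs=1M count=1 skip=$i iflag=direct status=none
          ms=$(( ($(date +%s%N) - start) / 1000000 ))
          [ $ms -gt 150 ] && echo "chunk $i: ${ms}ms"
      done
A healthy disk should print nothing; a disk with slow sectors will list the offending chunks, which is essentially what the MHDD stats above are counting.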