Everything posted by limetech

  1. See also: http://lime-technology.com/forum/index.php?topic=55116.0
  2. Google is your friend, and he's talking about someone physically stealing the storage device.
  3. That's good info, thanks. I'm still a bit worried about Parity updates slowing down the writes even with TRIM. This is because with those DZ_TRIM devices we can treat TRIM like a "WRITE all zeros" and update parity accordingly, but Parity probably will not be all zeros. A refinement would be to check whether the data to be written to Parity is all zeros and, if so, send down a TRIM instead of doing the actual write. I'm not sure how this would affect performance; I think TRIM is one of those commands that causes a queue drain, which may also impact performance.
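     For illustration, here's one quick way to test a block for all zeros from the shell (just a sketch: the block size and file name are placeholders, and the real check would live inside the md/unraid driver):

         # cmp exits 0 only if every one of the first 4096 bytes matches /dev/zero,
         # i.e. the block is all zeros
         cmp -s -n 4096 block.bin /dev/zero && echo "all zeros: could TRIM" || echo "has data: must WRITE"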
  4. A file system issues TRIM commands as a result of deleting a file, to tell the SSD that the set of blocks which previously made up the file are no longer being used. The SSD can then mark those blocks as 'free'. Later, when the SSD's internal garbage collection runs, it knows it doesn't have to preserve the contents of those blocks. This makes garbage collection more efficient. There are lots of articles that explain this. The trouble this causes for parity-based array organizations is that the data returned from a TRIM'ed data block can be indeterminate. This paper is a bit wordy but lays it out on p. 13:

     To boil this down for unRAID: it should work to use SSD's in an unRAID P or P+Q array if TRIM is not used. This is current behavior. However, note that:

     a) Write performance can degrade faster on data disks depending on how many file deletions take place.

     b) The parity disk is also written for each data disk write.

     c) The data disks really should be completely written first, because theoretically a block that was never written (from the point of view of the SSD) can return non-deterministic data. We have not seen this happen, but then again we have not run too many SSD arrays (it would show up as parity sync errors). Pre-writing is a pretty undesirable thing to do, however, since it will guarantee slowing down subsequent writes.

     d) If you don't want to pre-write the disks as above, then only use SSD's that support "DX_TRIM" or "DZ_TRIM", and instead of writing the disks with zeros, simply use the 'blkdiscard' command to TRIM the entire device first (an example follows below).

     You can use the 'hdparm' command to determine if your SSD's have this support:

         hdparm -I /dev/sdX   # substitute X for your ssd device assignment

     Look near the end of the "Commands/features:" section for:

         * Data Set Management TRIM supported

     Following that, you will either see:

         * Deterministic read data after TRIM

     or you will see:

         * Deterministic read zeros after TRIM

     or you won't see either of the above (if this is the case, do not use the SSD in an unRAID P or P+Q array).

     In a future release we do plan to add proper TRIM support to array disks. Here's a heads-up on that: in order to support TRIM in an unRAID P or P+Q array, we must add code to the md/unraid driver, and all SSD's in the array must support either "DX_TRIM" or "DZ_TRIM" mode as described above. In addition, there's a really good chance we will only support SSD's with "DZ_TRIM", since supporting "DX_TRIM" is a lot more work.
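     For example, the full-device TRIM in d) would look something like this (a sketch only: /dev/sdX is a placeholder, and blkdiscard destroys all data on the device):

         hdparm -I /dev/sdX | grep -i trim   # quick way to spot the TRIM lines above
         blkdiscard /dev/sdX                 # TRIM every block on the device (wipes everything!)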
  5. After it shows "expired" in the upper left of the header, nothing actually happens until you Stop the array. You will find that you cannot then Start the array again with an expired key, but in this state, when you go to the Registration page, a button now shows up where you can get an extension. Yes, the way it works right now is a bit confusing and we will be changing it...
  6. That is a documentation error: current versions of unRAID only support NFSv3. Supporting NFSv4 requires a kernel option to be turned on as well as additional user space tools. We will try to get this into the next -rc release.
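     For those curious, the pieces involved are roughly these (a sketch, assuming a standard kernel config plus the usual nfs-utils package):

         CONFIG_NFS_V4=y    # NFSv4 client support in the kernel
         CONFIG_NFSD_V4=y   # NFSv4 server support in the kernel

     plus the user space side from nfs-utils (rpc.idmapd and friends).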
  7. For the record: we here at Lime Technology are agnostic when it comes to file systems and we welcome discussion. However, this is a requirement: no fanboi flame wars. You want to talk about technical advantages/disadvantages, go for it! But if you just want to say, "I read somewhere xxx sucks, don't use it!" well, there are plenty of other places to do that. Here are the main reasons we went with btrfs for the cache pool (vs. other multi-device capable file systems):

     1. Docker support. When we first integrated Docker they didn't offer zfs support, but did offer btrfs, though these days I think they do support zfs.

     2. The h/w requirements to smoothly run zfs are quite onerous for a consumer NAS, though that too is less of an issue these days.

     3. The licensing was/is still an issue, and we didn't feel like paying our lawyer 4 figures to give us the definitive answer of whether we can bundle zfs with unRAID OS, and we didn't want to go the "Guide" route of instructing our customers to download zfs themselves in order to use a fundamental feature of the product.

     4. Questionable linux integration. zfs remains a third-party component which is not updated in step with the linux kernel, which also means it's not tested alongside other kernel components during ongoing kernel development. We never want to get into a situation where we have to update the kernel to address a serious bug or security issue, but can't update because it breaks another key component.

     I guess there are other lesser reasons for using btrfs. For example, I have personally studied quite a bit of the code base, and we are familiar with how it works and how to use the management tools. Probably the way we will approach this moving forward is to develop better plugin/snapin support in unRAID OS to make it easier for many kinds of third-party components to be integrated with unRAID OS.
  8. iSCSI is a block-level protocol. How should an iSCSI LUN be mapped to unRAID storage? The only use case I can see for this is a SAN.
  9. Good question. What's the use case for iSCSI?
  10. There are lots of places this device is discussed and it appears to be problematic. You can try this: https://bbs.archlinux.org/viewtopic.php?id=214293 But you have to disable the IOMMU, which means no VMs. Probably better to get a higher quality device or wait until a kernel comes out with a fix. Are you using unRAID-next? We keep that kernel on the cutting edge.
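      If you do want to try it, disabling the IOMMU is a kernel boot parameter. A sketch of what the append line in syslinux.cfg on your flash might look like (intel_iommu=off on Intel; use amd_iommu=off on AMD):

          append intel_iommu=off initrd=/bzroot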
  11. The reason the mover is so slow is that for each and every file it must check if that file is "in use" before trying to move it from source to destination. This is because there is no concept of "automatic file locking" - it's completely possible for one process to read a file that another process is writing. The "in use" test is pretty expensive.
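      For illustration, the kind of test involved looks something like this (a sketch only; the paths are placeholders and the actual mover script may differ):

          # fuser has to scan all of /proc for open file descriptors, which is why
          # the check is expensive; it exits 0 if any process has the file open
          fuser -s "/mnt/cache/share/file" || mv "/mnt/cache/share/file" "/mnt/disk1/share/file"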
  12. Ok, probably have to do a 'pkill find' as well. Never had to stop the mover before. We'll see about adding some kind of cancel control. Not really recommended though. Why do you want to cancel it?
  13. pkill mover

      That will kill the script, but it's important to let the current copy complete.
  14. It will come with the Linux 4.9 kernel.
  15. Yeah, should work; follow dlandon's first post. Here is the documentation on smb.conf in all its glory: https://linux.die.net/man/5/smb.conf Search in there for "wins" to learn everything you didn't want to know about a WINS server. Post back with your results.
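      The relevant settings look something like this (a sketch only; see the man page above for the full story):

          [global]
             # have Samba itself act as the WINS server...
             wins support = yes
             # ...or point at an existing WINS server instead (address is a placeholder);
             # never set both at once
             # wins server = 192.168.1.5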
  16. I recommend plugging the flash into a PC and making a backup of your 'config' folder. Then reformat the USB flash (volume label UNRAID), then copy over the release from the zip file. Click 'make_bootable' as administrator (don't forget this step), then drag your 'config' folder backup back to the flash.
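      If you'd rather do the reformat from a Linux box, it's something like this (a sketch only: the device name is a placeholder, so double-check it before running):

          mkfs.vfat -F 32 -n UNRAID /dev/sdX1   # FAT32 format with volume label UNRAID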
  17. It appears the Dell-branded controllers use custom firmware (for Dell) and it's not possible to "flash" them into "IT mode". The closest you can get to JBOD is to configure it so that each device is a single-device raid0. But we are still looking into this.
  18. Actually we're working with someone else to try to support that controller. We should have more info on this early next week. Send me an email: [email protected] and we'll try to keep you up to date on this.
  19. We have received your email requests and have replied to them all, and your replacement key was sent. But obviously you are not receiving email from us. Check your PM.
  20. You spotted an actual WINS server in the wild?