  1. RE: unrecover These days I might recommend a used Dell E series laptop. They have an eSATA port, and the E series dock also has an eSATA port. Note that it does not support port multipliers; only the first drive will be seen. In addition, an older used AMD-based HP MicroServer Gen 10 might work, since it doesn't require caddies: four screws and slide in the drive. This is what I am using these days for network-based backups and for transitioning/merging smaller hard drives to larger ones. While the newer HP MicroServer Gen10 Plus is modern and powerful, I personally find it better suited to a set-it-and-forget-it unRAID server due to the delicate front panel and external laptop-style PSU. The older AMD-based HP MicroServer Gen10 is self-contained and better suited for frequent drive swaps. Even then, I acquired a StarTech external trayless eSATA device to make things easier.
  2. WeeboTech

    PC in a Desk

    At the time I was doing recording, it went to a 4-track, then an 8-track, then was mixed down to stereo directly into a sound card with Sound Forge. I haven't played or composed in a very long time.
  3. There are many different methods to clear and/or certify a disk. I use unRAID itself on a refurbished laptop, with badblocks. I do a multi-pass pattern test, and at the end of it I use the preclear script to add the signature. It's an old Dell E series laptop which has an eSATA port, along with an ORICO tool-free USB 3.0 & eSATA to 2.5"/3.5" SATA external hard drive lay-flat docking station. It's a pretty compact temporary solution for getting the job done away from the server. Granted, it is an extra cost, but a refurbished Dell can be scored pretty cheaply on eBay: E6510, E4310, E4300, etc. These models have an eSATA port but do not support PMP (port multipliers). They are useful for diagnostics and/or clearing/certifying a disk. I'm sure a more modern refurbished laptop with USB 3.0 would suffice as well.
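    A multi-pass pattern test like the one described can be run with badblocks directly; a sketch, where /dev/sdX is a placeholder for the disk being certified and -w is destructive:

    ```shell
    # WARNING: -w overwrites the entire disk with test patterns.
    # Double-check the device name before running.

    # Write and verify four patterns (0xaa, 0x55, 0xff, 0x00) across the
    # whole drive: -s shows progress, -v is verbose, -b 4096 uses 4 KiB
    # blocks for speed on modern drives.
    badblocks -wsv -b 4096 /dev/sdX

    # Afterwards, the preclear script mentioned above can write the unRAID
    # signature so the array accepts the drive without clearing it again.
    ```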
  4. Just had one start failing. The issue with these drives is that death is usually sudden and catastrophic. The parity check was good Aug 27th and the array was healthy; at 6:3am I started getting pending-sector warnings. What is odd is that the numbers keep going up even though I wasn't using the array. As I rsync the data over to another drive, the numbers just keep climbing.
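    The pending-sector climb described above can be watched with smartctl (assuming smartmontools is installed; /dev/sdX is a placeholder for the failing drive):

    ```shell
    # Current_Pending_Sector (SMART attribute 197) counts sectors the drive
    # failed to read and is waiting to remap; Reallocated_Sector_Ct (5)
    # counts sectors already remapped.
    smartctl -A /dev/sdX | grep -E 'Pending_Sector|Reallocated'

    # Re-check every minute while rsyncing data off the failing drive;
    # a steadily rising attribute 197 during reads matches the failure
    # pattern described above.
    watch -n 60 "smartctl -A /dev/sdX | grep Pending_Sector"
    ```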
  5. I can't really answer that. It wasn't enough to trigger any alarms in my apartment; then again, it's always cool in my apartment. I am not using the LSI SAS 8212-4i4e in my Gen 8 MicroServer. I am using the internal 'cougar' controller without issue. For my larger Gen 8 server I am using an LSI SAS controller, however it is the 8-port internal model. I am using that in pass-through, although it is rarely used. Last I remember, unRAID was having issues with the USB controllers, and that was resolved in a later release. I have not used that particular server for unRAID in a while. Sorry I can't be more help.
  6. My Gen 8's are using the internal controller without issue. I am using ESXi 5.1 (HP) with the older mechanism of passing the disks through, and unRAID 5 (still), i.e. I am not passing the whole controller through with DirectPath I/O. I believe if you want to do that you will need to use the LSI. I tried, failed, and moved on. The server's been up for 2 years without a reboot.
  7. I have a StarTech ASMedia ASM1061 card on a Supermicro X9SCM-F and it worked right away. I also have it in some HP MicroServers and it worked with unRAID as well.
  8. At one point I had 4GB and then 8GB of RAM with cache_pressure=0 and I would run out of RAM. In my particular case I was rsyncing data against a dated backup using --link-dest=, and I had to back off to cache_pressure=10. There's another kernel parameter to expand the dentry queue; at that time, with that kernel, I could not find an advantage. What I did find to provide an advantage was an older kernel parameter: sysctl vm.highmem_is_dirtyable=1. That changed the caching/flushing behavior of the system; however, I'm not sure how that interacts with cache_pressure vs. cached inodes. It's not just about the inodes themselves: from what I had read in the past, the dentry queue came into play too.
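    For reference, the tunables mentioned above are sysctls; a sketch using the values from this post (not recommendations — and vm.highmem_is_dirtyable only exists on 32-bit kernels with highmem):

    ```shell
    # Read the current value (safe, read-only).
    sysctl vm.vfs_cache_pressure

    # 0 = never reclaim dentries/inodes (can exhaust RAM, as described
    # above); 10 = reclaim, but much more slowly than the default of 100.
    sysctl -w vm.vfs_cache_pressure=10

    # Older 32-bit highmem kernels only: let highmem pages count toward
    # the dirty-page limits, changing the caching/flushing behavior.
    sysctl -w vm.highmem_is_dirtyable=1
    ```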
  9. I cannot remember the outcome with regard to reliability. If I remember correctly, I retired it; there seemed to be weird issues with devices randomly dropping. I went with the 2-port StarTech card with the ASMedia chipset.
  10. I'm not a proponent of a full pause (only). While I'm not an opponent of that feature request, I'm more of a proponent of a throttle option. The older md code had options to set minimum and maximum resync speeds. There was also code to drop to the minimum speed when the subsystem was in use and raise to the maximum when it was idle. If someone were to set that to 0 or a very low but acceptable value, so be it; a full pause might not be the best way to go. What if someone forgot it was paused? Another choice might be incremental parity checks, but that entails keeping one's place (like pause).
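    For comparison, stock Linux md still exposes the throttle described above via /proc and per-array sysfs; a sketch assuming a standard md array at /dev/md0 (unRAID ships a modified md driver, so these exact knobs may not apply there):

    ```shell
    # System-wide resync throttle, in KiB/s.
    cat /proc/sys/dev/raid/speed_limit_min   # floor, used while the array is busy
    cat /proc/sys/dev/raid/speed_limit_max   # ceiling, used while the array is idle

    # Lower the ceiling so a running check barely impacts normal I/O.
    echo 1000  > /proc/sys/dev/raid/speed_limit_min
    echo 20000 > /proc/sys/dev/raid/speed_limit_max

    # Per-array equivalent (md0 is a placeholder array name).
    echo 20000 > /sys/block/md0/md/sync_speed_max
    ```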
  11. This post has already been reported 3 times. "Necroposting" is totally allowed here, the post is on topic, and the link within the post is the same as in this thread, so it is not spam. Possibly the poster is just a drive-by who only registered to make this one post. Maybe he even has some axe to grind. But I am going to let it stand unless some other mod wants to bilge it. I'm also erring on the side of caution, since it is hard drive/unRAID storage related.
  12. From what I gather from various reads, we can't enable TRIM on an array drive. If it's not enabled, the SSD will operate slower over time, yet that should make it safe for unRAID, as long as trim/discards are not enabled. This makes sense to me. I thought this was a good read as far as TRIM goes: http://unix.stackexchange.com/questions/218076/ssd-how-often-should-i-do-fstrim TRIM triggers writes to the blocks directly without the OS knowing what's going on, thus actually erasing blocks and invalidating parity. I think the other part is how garbage collection works. Perhaps we are conflating the two when they should not be: while they work hand in hand to make the SSD efficient, garbage collection may be safe where TRIM definitely is not. According to what I recently read, it is 'discard'-aware if mounted with discard.
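    For context, these are the two usual ways TRIM gets enabled on Linux, both of which would have to stay off for a parity-protected array drive per the reasoning above (/dev/sdX1 and /mnt/disk1 are placeholders):

    ```shell
    # Continuous discard: the filesystem issues TRIM as files are deleted.
    mount -o discard /dev/sdX1 /mnt/disk1

    # Periodic discard: batch-TRIM all free space on a mounted filesystem.
    fstrim -v /mnt/disk1

    # Either way, the SSD erases blocks behind the OS's back, so what is
    # read back from those blocks no longer matches parity.
    ```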
  13. I think this is a very clever idea, but I would be loath to suggest the community take on such an undertaking without LT sponsorship; we would need to be working hand in hand on this. It seems like a lot of work and may be easily prone to error. Perhaps: fill an SSD in the unRAID array, then remove all of the files. Stop the array, trigger a trim or wait for garbage collection to occur. Start the array and trigger a parity check. Given some data on another array drive, replace that drive with a replacement candidate and rebuild the replacement. Validate the new drive either by direct comparison or via hash-sum checking. Those are pretty much the steps that would occur if a drive required replacement.
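    The hash-sum validation step at the end could use ordinary checksum tools; a minimal sketch assuming the drive being rebuilt is mounted at /mnt/disk2 and the flash drive at /boot is writable (both paths are placeholders):

    ```shell
    # Before the rebuild: record a checksum for every file on the drive.
    cd /mnt/disk2
    find . -type f -print0 | xargs -0 sha256sum > /boot/disk2.sha256

    # After the rebuild: re-verify every file against the saved list;
    # -c reports any file whose contents no longer match.
    cd /mnt/disk2
    sha256sum -c /boot/disk2.sha256 --quiet && echo "rebuild verified"
    ```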
  14. Part of me thinks the whole internal block-level reassignment concern is partially FUD. Where there is real data, an internal block move, for whatever reason in the firmware, should still satisfy the same external block request no matter where the firmware puts the data. Just think of how our high-level file systems would show corruption if blocks moved and were reassigned internally as cells decay. The issue is what is returned from a block where the data has been deleted; if trim is not engaged, there shouldn't be an issue. My understanding is that the firmware doesn't really know the file system. It only knows blocks, and if a trim command occurs on a block, it is erased and added to a free-block list. How does garbage collection come into play here? How does the firmware know a block is no longer in use? Does it? If a block is reassigned to a new spot, I would surmise the old block is tagged for garbage collection, yet a request for the original block gets data from the reassigned spot.
  15. Maybe someone with an SSD or a few SSDs in the array could test. Add a bunch of files to the SSD. Do a parity check. Remove some files, add some other files, and do a parity check. Wait a few days or weeks, then do another parity check. This, or possibly a more advanced suite of tests. I would surmise that if people are doing monthly parity checks, this situation would already have reared its ugly head.