bubbaQ

Moderators
  • Content Count

    3471
  • Joined

  • Last visited

Community Reputation

5 Neutral

About bubbaQ

  • Rank
    Advanced Member

Converted

  • Gender
    Undisclosed


  1. To replicate: Size the browser so there is no horizontal scrollbar, then resize it so there is a horizontal scrollbar and scroll right. The banner area at the top does not cover the entire canvas on the right.
  2. Don't touch LSI. I use either Areca or Adaptec SAS controllers with HW RAID and onboard cache. This server is an Adaptec 7000Q series. I'm staying on 6.5.3 ... can't stand the UI in 6.6... makes me want to vomit.
  3. Well, I'm glad I asked. My existing cache drive was 4 PNY 256GB SSDs. hdparm -I says they are RZAT: "Data Set Management TRIM supported (limit 8 blocks)" and "Deterministic read ZEROs after TRIM". So I decided to do some testing: took them out of HW RAID, secure erased them, and set them up in a btrfs RAID0 cache pool. Worked fine, RAID0 was confirmed working. But fstrim refused to run on them... reported "discard operation not supported." Had I not started this thread, I probably would not have proceeded. But in spite of that test, I went ahead and tried a btrfs RAID0 cache pool with 2 Sammy 4TB 860 EVO SSDs. The kicker.... fstrim appears to work on the Samsung array (at least no errors). Before you ask, the Samsung drives are on the same controller (in fact the same ports) as the prior cache pool of PNY SSDs. But I cannot see any relevant difference in trim support between the PNY and Sammy SSDs.
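     For anyone who wants to repeat the check, it was roughly along these lines (device name and mount point are placeholders, not my exact setup):

         # confirm what the SSD itself reports for TRIM support
         hdparm -I /dev/sdX | grep -i trim

         # try a manual trim of the mounted btrfs cache pool
         fstrim -v /mnt/cache
         # on the PNY pool this errored out with "discard operation not supported";
         # on the Samsung pool it reported bytes trimmed with no errors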
  4. That is the correct interpretation. IIRC, LSI interrogates the drive capabilities, and if a drive does not specify the type of trim support (DRAT or RZAT), it will not pass trim commands to the drive. Check an SSD with hdparm -I /dev/sdX. The Sammy 860 EVOs I am using correctly report RZAT.
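     For reference, these are the lines to look for in the hdparm -I output (device name is a placeholder):

         hdparm -I /dev/sdX | grep -i 'after TRIM'
         # RZAT drives report: "Deterministic read ZEROs after TRIM"
         # DRAT drives report: "Deterministic read data after TRIM"
         # if neither line shows up, the drive isn't advertising deterministic trim at all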
  5. I have been running 4 SSDs in hardware RAID 0 as my cache drive for a few years. I need it since I have 10 GbE between my workstation and unRAID and really need the speed, as I work with large (>100GB) files regularly. I also back up large drives to unRAID regularly over 10 GbE and need the speed there too. I leave “hot” files on the cache, since I need read access as fast as possible for a couple of weeks while I work with them. I have good performance, but the drives are “hidden” behind the HW RAID controller. Most of the files are working copies of files I have stored offline, so they just sit on cache till I am done with them, and they never get moved to the array. As expected, performance decreases over time since I can’t trim the RAIDed drives. I have to back up the cache, break the RAID, secure erase the SSDs, reconfigure the RAID, and then restore. I need a much larger cache drive, so I am going to use four 4TB SSDs in RAID 0. But I am thinking about using btrfs to do the RAID instead of the HW controller. The deciding factor will be whether I can use fstrim to trim the drives RAIDed with btrfs without breaking/rebuilding the RAID. There is conflicting info out there as to whether fstrim is effective on drives in a btrfs RAID 0 configuration. Some say you can run it but it won’t actually be effective, and others say it works as expected. Anyone here want to chime in with their $0.02?
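     For context, what I'm weighing looks roughly like this when done by hand outside the unRAID GUI (device names and mount point are placeholders, just a sketch):

         # hypothetical 4-drive btrfs pool: data striped (RAID0), metadata mirrored
         mkfs.btrfs -f -d raid0 -m raid1 /dev/sdb /dev/sdc /dev/sdd /dev/sde
         mkdir -p /mnt/ssdpool
         mount /dev/sdb /mnt/ssdpool    # mounting any member device mounts the whole pool

         # the open question: does this actually trim all four members?
         fstrim -v /mnt/ssdpool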
  6. I have a collision of two needs. I love unRAID because in a failure (of the system itself or of drives), I can mount and access the drives from unRAID on a Windows box. Using XFS Explorer, R-Studio, FTK Imager, X-Ways, etc., depending on the filesystem, I can get to and recover data. I can also recover data from damaged drives with my forensic software/hardware, such as a PC-3000 controller. I have also wanted native encryption in unRAID for a long time. But encryption with LUKS/dm-crypt complicates the ability to do disaster recovery. I would have preferred TrueCrypt or VeraCrypt because of the widespread support, but c'est la vie. Plan B is always to boot up Linux, mount the drive with the passphrase, and make a dd image. This takes (potentially a lot of) time and another hard drive, and time is usually short in a disaster recovery. The vast world of forensic recovery software is almost exclusively Windows, particularly the "good" stuff. So I wanted to start a thread to collect ideas on how to more quickly access encrypted unRAID disks on a Windows box. As of now, I have not tested anything, but have found some candidates: LibreCrypt https://github.com/t-d-k/LibreCrypt FreeOTFE http://download.cnet.com/FreeOTFE/3000-2092_4-10656559.html (no longer maintained) Any other suggestions?
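     For reference, the Plan B workflow mentioned above is basically this from any Linux boot (device, mapper name, and paths are placeholders):

         # open the LUKS container -- prompts for the passphrase
         cryptsetup luksOpen /dev/sdX1 unraid_disk1

         # either image it for the Windows-side forensic tools...
         dd if=/dev/mapper/unraid_disk1 of=/mnt/scratch/disk1.img bs=1M status=progress

         # ...or just mount it read-only and pull what you need
         mount -o ro /dev/mapper/unraid_disk1 /mnt/recover

         # when done
         umount /mnt/recover
         cryptsetup luksClose unraid_disk1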
  7. Just one line changed in default-white.css: table.wide tbody td{padding:.2em 0}
  8. Would you consider adjusting the CSS padding in the main drive display table, or making it configurable (I use 0.2em rather than the default 10px)? This (and a number of other things) is based on pixels rather than ems and doesn't scale well with high-DPI monitors. I've been using a sed script to change it for a long time, but every time the CSS changes I have to redo the script, and as new themes are developed, I have to write yet another sed command if I want to use the new theme. It would be nice going forward if more things could be em-based CSS rather than pixels.
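     The sed I mean is something along these lines; the stylesheet path and the exact original rule vary by unRAID version and theme, so treat it as a sketch:

         # replace the drive-table padding rule in the white theme
         # (path is an assumption; assumes the rule only sets padding, as in the one-liner above)
         sed -i 's/table\.wide tbody td{[^}]*}/table.wide tbody td{padding:.2em 0}/' \
             /usr/local/emhttp/plugins/dynamix/styles/default-white.css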
  9. Don't know if this was a regression in 6.4 or not, but I run emhttp on port 88 rather than 80, and when launching VNC from the VM management page, it does not add the port 88 to the IP, so VNC fails to connect. If I append the :88 to the IP manually, it works fine.
  10. Timestamp: 7/7/2017 5:24:00 PM
      Error: SyntaxError: expected expression, got '*'
      File: DockerSettings.page
      Source Code: if (good && (ippool < ipbase || ippool+2**(32-pool[1]) > ipbase+2**(32-base[1]))) {good = false; swal . . .
  11. DockerSettings.page Line 234..... blows up the Docker configuration page in the UI.
  12. Definitely used to max bang/buck. I've found several dual hex-core systems with plenty of RAM in the $200-275 range plus shipping, but I'd go higher to get more cores. No storage needed.
  13. I need a box for a special project. It is not for unRAID and I don't need any hot-swap bays, but I need some horsepower: dual Xeons with at least 6 cores each (prefer 10 or more) and at least 2.6 GHz. Prefer a Supermicro mobo w/ IPMI but will consider something else if it's a real screamer. Must be 2U with a single (not redundant) PSU (I can't stand the sound of the 40mm fans in the redundant PSUs; I plan to water cool it with an aquarium chiller to minimize noise). RAM is less important, but I need at least 24GB. I prefer a complete system rather than building piecemeal, but will consider anything.
  14. No out-of-the-box OS has "complete" support for Areca HW RAID the way you describe. If you cobble together a build with hardware that uses non-standard interfaces, you have to expect to do some tweaking. If you want to complain, complain to Areca, not the OS devs. And I say this as a long-time (and current) user of Areca HBAs (I have 1882s and 1883s in my servers now).
  15. WAY too much whitespace... not just the fact that the background is BLINDINGLY white, but the line spacing and padding around objects are HUGE, requiring way too much scrolling and wasting way too much screen real estate. Can we turn off avatars? They are annoying.