rheumatoid-programme6086

Members
  • Posts: 19
  • Reputation: 3

  1. root@tiny:~# zfs list -t all
     NAME              USED  AVAIL  REFER  MOUNTPOINT
     cache            48.9G  1.71T   120K  /mnt/cache
     cache/appdata    13.9G  1.71T  13.9G  /mnt/cache/appdata
     cache/cacheTemp  24.4G  1.71T  24.4G  /mnt/cache/cacheTemp
     cache/domains      96K  1.71T    96K  /mnt/cache/domains
     cache/system     10.6G  1.71T  10.6G  /mnt/cache/system
  2. Could you please point me to directions on how to do the ZFS replication? Thanks!
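     (In case it helps anyone who lands here later with the same question: my understanding is that the rough shape of a snapshot-based replication is a recursive snapshot piped from zfs send into zfs receive. This is an untested sketch, not official Unraid steps, and 'cache2' is a placeholder name for a pool created on the new SSDs.)
       zfs snapshot -r cache@migrate                         # recursive snapshot of the whole pool
       zfs send -R cache@migrate | zfs receive -F cache2     # -R carries all child datasets and their properties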
  3. You are correct that the new drives are SSDs ... I was using the term 'disk' generically. The only thing in my cache is appdata. When I first added the cache, I stopped Docker, copied all of the appdata from the array to the cache, pointed all of the Docker containers at the new location, and started Docker back up. A bunch of the containers malfunctioned or lost their settings anyway. Will ZFS pool replication avoid this problem? If so, I could use some help with how to do it. If not, is there a way I can get Unraid to just create the partitions at the same location they're in on the current SSDs?
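     (One thing I suspect in hindsight: if the appdata copy was done with a tool that doesn't preserve ownership and permissions, containers will often appear to lose their settings. A permissions-preserving copy with Docker stopped would look roughly like this; the source path is just an example, not necessarily my exact share layout.)
       rsync -avh /mnt/disk1/appdata/ /mnt/cache/appdata/    # -a preserves ownership, permissions, and timestamps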
  4. Also, here is the fdisk result for the disk that is "too small":
     root@tiny:~# fdisk -l /dev/sdg
     Disk /dev/sdg: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
     Disk model: CT2000MX500SSD1
     Units: sectors of 1 * 512 = 512 bytes
     Sector size (logical/physical): 512 bytes / 4096 bytes
     I/O size (minimum/optimal): 4096 bytes / 4096 bytes
  5. Pool was created on Unraid.
     root@tiny:~# fdisk -l /dev/sdb
     Disk /dev/sdb: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
     Disk model: SSD-PUTA
     Units: sectors of 1 * 512 = 512 bytes
     Sector size (logical/physical): 512 bytes / 512 bytes
     I/O size (minimum/optimal): 512 bytes / 512 bytes
     Disklabel type: dos
     Disk identifier: 0x00000000

     Device     Boot Start        End    Sectors  Size Id Type
     /dev/sdb1          64 3907029167 3907029104  1.8T 83 Linux
     root@tiny:~#
  6. Hello, I am attempting to replace my ZFS pool disks. Presently they are two 2TB USB SSDs, which are slow because they cannot be TRIM'd due to Unraid's lack of UAS support. I am replacing them with 2TB SATA disks. Per lsblk -b, the devices are exactly the same size. In the output below, the disks that I am attempting to replace (one at a time) are sdb and sdc, and I am attempting to replace them with sdd and sdg. Unfortunately, when I assign one of the replacement disks to the pool (using the GUI) and attempt to start the array, I get an error stating "replacement device is too small". Thanks for your help!
     root@tiny:~# lsblk -b
     NAME     MAJ:MIN RM           SIZE RO TYPE MOUNTPOINTS
     loop0      7:0    0       62644224  1 loop /lib
     loop1      7:1    0      345944064  1 loop /usr
     loop2      7:2    0    21474836480  0 loop /var/lib/docker/btrfs
                                                /var/lib/docker
     sda        8:0    1     8032092160  0 disk
     └─sda1     8:1    1     8031043584  0 part /boot
     sdb        8:16   0  2000398934016  0 disk
     └─sdb1     8:17   0  2000398901248  0 part
     sdc        8:32   0  2000398934016  0 disk
     └─sdc1     8:33   0  2000398901248  0 part
     sdd        8:48   0  2000398934016  0 disk
     sde        8:64   0 14000519643136  0 disk
     └─sde1     8:65   0 14000519593472  0 part
     sdf        8:80   0 14000519643136  0 disk
     └─sdf1     8:81   0 14000519593472  0 part
     sdg        8:96   0  2000398934016  0 disk
     sdh        8:112  0 14000519643136  0 disk
     └─sdh1     8:113  0 14000519593472  0 part
     sdi        8:128  0 14000519643136  0 disk
     └─sdi1     8:129  0 14000519593472  0 part
     sdj        8:144  0 14000519643136  0 disk
     └─sdj1     8:145  0 14000519593472  0 part
     md1p1      9:1    0 14000519589888  0 md   /mnt/disk1
     md2p1      9:2    0 14000519589888  0 md   /mnt/disk2
     md3p1      9:3    0 14000519589888  0 md   /mnt/disk3
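     (For anyone debugging the same error, the sizes that actually matter can be compared directly from the console; sdb/sdb1/sdd match my layout above, adjust for yours.)
       blockdev --getsize64 /dev/sdb /dev/sdb1 /dev/sdd     # raw byte sizes of old disk, old partition, new disk
       zpool status -P cache                                # -P prints full device paths, i.e. whether the pool sits on partitions or whole disks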
  7. I am still facing this frustrating problem months later. The only solution I have is to remove the drives from the unraid box and TRIM them on another machine, which is a huge pain and just a ludicrously bad solution. Does anyone know if it's possible to build a custom kernel with UASP support? Considering that UASP is needed to TRIM both SSDs and SMR drives, and significantly enhances the performance of UASP-capable DAS enclosures, I'm really surprised that this relatively simple change (enabling UASP in the kernel build) has not been made.
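     (Before going down the custom-build road, it's worth confirming that the stock kernel really lacks the driver; CONFIG_USB_UAS is the relevant kernel option, though whether /proc/config.gz is exposed will depend on the Unraid release.)
       zcat /proc/config.gz | grep -i usb_uas   # shows whether the kernel was built with UAS at all
       modinfo uas                              # errors out if the module isn't shipped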
  8. To clarify what I meant by "negating all of their [usb SSDs] benefits" -- The USB SSDs I have see a 5-10X reduction in performance after they become fully 'dirty' as compared to an Ubuntu system that supports UASP and has been configured to properly TRIM them. While UASP support might not have been a big deal performance-wise for HDDs six years ago when this feature request was originally written, it's a huge deal for SSDs in the present era.
  9. Hi ... I'd love to see a feature that enables Turbo Write automatically once a configurable write-rate threshold is passed (i.e., if the overall write rate stays above X MB/sec for Y seconds), and disables it automatically when the write rate falls below a similar configurable threshold. I love turbo-write mode's performance for things like adding files to my Plex server, but at the same time I don't necessarily want to spin up all of my disks every time 2 KB is written to a log file somewhere. This way, if a large amount of data is coming into the server, it would automatically "accelerate" after the first few seconds of slow writing.
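     (To make the idea concrete, here's a rough shell sketch of the logic I have in mind. The thresholds are examples, it assumes the array devices show up under /sys/block/md*, and the md_write_method tunable (0 = read/modify/write, 1 = reconstruct/"turbo" write) is my understanding of how Unraid exposes this setting -- verify on your own version before using anything like it.)
       THRESH_MB=50    # enable turbo write above this sustained rate
       INTERVAL=10     # seconds between samples
       prev=$(awk '{s+=$7} END {print s}' /sys/block/md*/stat)   # field 7 = sectors written
       while sleep "$INTERVAL"; do
           cur=$(awk '{s+=$7} END {print s}' /sys/block/md*/stat)
           rate_mb=$(( (cur - prev) * 512 / 1024 / 1024 / INTERVAL ))
           prev=$cur
           if [ "$rate_mb" -ge "$THRESH_MB" ]; then
               /usr/local/sbin/mdcmd set md_write_method 1   # turbo write on
           else
               /usr/local/sbin/mdcmd set md_write_method 0   # back to read/modify/write
           fi
       done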
  10. Unraid doesn't support ARM. It's a no-go on an Orange Pi.
  11. Wondering if there has been any progress on UASP support? With USB speeds now surpassing SATA and several high-quality USB SSDs (and USB NVMe adapters) available, USB has become an interesting option for cache on NVMe- and PCIe-slot-constrained systems ... but without UASP, USB SSDs cannot be TRIM'd, which negates all of their benefits.
  12. Hi, I'm using a USB SSD as a cache and I'd like to get TRIM working on it (otherwise the performance falls off dramatically after a while). Unfortunately, it seems that in order for this to work, the drive must be connected via UASP. I'm hoping someone here has been down that path already and can let me know whether it's even possible to get UASP working on Unraid. Thanks!
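      (In case anyone else wants to check where their own setup stands before digging further, two quick read-only commands tell most of the story; sdX is a placeholder for the USB SSD.)
        lsusb -t            # the drive's port will show Driver=uas or Driver=usb-storage
        lsblk -D /dev/sdX   # DISC-GRAN/DISC-MAX of 0 means the kernel won't issue TRIM to it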
  13. For anyone who comes here via Google, I've looked into this myself a little and thought I'd share my conclusions. If you are going to do this, the way to go is definitely to virtualize Unraid and run both it and pfSense on Proxmox. Having said that, doing so may be pointless for users who are trying to reduce the energy consumption of their homelab (read on).

      The biggest issue I've found with running pfSense on Unraid is that it's impossible to have any VMs running if the array is not started, even if those VMs and their data are hosted entirely on non-array disks. This has a few implications: if the array is down, pfSense is down, and as a result:
      - Your DHCP server is down. Fine if you have configured static IPs on important LAN devices (as I have), but if you've instead allocated static IPs in the pfSense DHCP service, everything on your network will be inaccessible.
      - Your VPN server and router are down. This is the big one for me. If the Unraid box loses the array for some reason (or loses power and fails to restart the array), both your Internet connection and your VPN server are gone. No remote troubleshooting for you.
      - You will be unable to set any of this up in an Unraid trial -- Unraid trials will not start the array until Internet access is available. Conversely, Internet access will not be available until the array is started. Therefore, this setup is completely untestable without first buying a license.

      None of these issues exist if you virtualize Unraid on Proxmox and pass through your SATA controller(s) and/or HBA(s). Doing this actually works pretty well, but on my server it results in significantly higher idle CPU usage and idle power consumption (roughly +8-10 watts, on a server that otherwise draws only 15 watts at idle) versus running Unraid on the metal. A side benefit is that (at least on my system) Proxmox boots and starts pfSense very quickly, whereas the Unraid boot is glacially slow (and it would require still more time to start pfSense after that!).

      Since the power consumption hit for virtualizing Unraid is so large, at this point I'm planning to run Unraid on the hardware and run pfSense on a separate system that only draws ~10 watts idle in total (the same as the hit to my Unraid server). Yes, this situation is extraordinarily frustrating, but I guess I just have to keep in mind that Unraid is primarily a storage server, and it isn't reasonable to expect its VM management capabilities to be comparable to a dedicated virtualization product. I understand that "most" applications of virtualization would need the array up anyway ... but it's still disappointing that there's no way to run a router VM independent of the array status.
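      (For reference, the passthrough step itself is only a couple of commands on the Proxmox side; VM ID 100 and the PCI address are placeholders, IOMMU has to be enabled in the BIOS and on the kernel command line first, and pcie=1 assumes a q35 machine type.)
        lspci -nn | grep -iE 'sata|sas|raid'        # find the controller's PCI address
        qm set 100 --hostpci0 0000:01:00.0,pcie=1   # pass the whole controller through to the Unraid VM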
  14. I'd really like to see Unraid support maintaining copies of critical files on multiple drives (like Drivepool can) to address this. My media library? I'm good with just parity. My family photos? I'd like to have more redundancy for those. Right now I run dual parity ... I'd rather run single parity and have it (transparently) mirror my photos across several disks. I'd also like to see the ability to 'drain' a disk down to all zeros and then remove it from the array so that I can remove failing or old disks without ever putting the array in a degraded state.