aim60

Members
  • Posts: 88
  • Joined
  • Last visited


  1. I ran across an article whose author claimed that on recent versions of QEMU, discard=unmap is functional on virtio disks, so I did some testing. I have not run all permutations, but have tested:
     • Unraid 6.9.2
     • UEFI and SeaBIOS VMs
     • Q35-5.1 and i440fx-5.1 VMs
     • Windows 10 and Windows 11 with the latest virtio drivers, virtio-win-0.1.196.iso
     • Ubuntu 21.04
     • raw and qcow2 vdisks
     • vdisks on XFS and BTRFS disks
     • vdisks with and without the copy-on-write attribute set
     In all cases, the virtio disk was functionally equivalent to a virtio-scsi disk. On file deletion, Windows unmapped blocks immediately; Linux unmapped blocks after running fstrim. https://chrisirwin.ca/posts/discard-with-kvm-2020
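     A minimal sketch of the two pieces involved, assuming a libvirt-managed VM (the vdisk path and disk target below are hypothetical):
        # Per-disk discard setting in the VM's libvirt XML:
        #   <disk type='file' device='disk'>
        #     <driver name='qemu' type='raw' discard='unmap'/>
        #     <source file='/mnt/user/domains/test/vdisk1.img'/>
        #     <target dev='vda' bus='virtio'/>
        #   </disk>
        # Inside a Linux guest, unused blocks are then released on demand:
        fstrim -av    # trim all mounted filesystems that support discard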
  2. Missed it. Thanks. Changed status to Solved.
  3. ADD POOL using unformatted disks (blkdiscarded SSDs) fails with the following error messages:
     • A 2-device pool: emhttpd: /mnt/test mount error: No pool uuid
     • A 1-device pool: emhttpd: /mnt/test mount error: Unsupported partition layout
     The workaround is to preformat the drives using Unassigned Devices. Tested in Safe Mode, with the disks connected to on-board Intel controllers.
     Diagnostics - Create Pool Tests.zip
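     For reference, the disks were blanked with blkdiscard before the ADD POOL attempt; a sketch with a hypothetical device name (this wipes the whole SSD):
        blkdiscard /dev/sdX    # issue a TRIM over the entire device, leaving it unpartitioned and unformatted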
  4. @JorgeB, Isn't it possible to remove the drive from the pool without reconnecting it first (like if it was completely dead)? That way, I wouldn't have to worry about the small possibility of corruption to the few files that aren't cow.
  5. I am getting write_io_errs, read_io_errs, and flush_io_errs on one of the devices in my cache pool. The pool is raid1, with enough unused space that I can maintain redundancy without the problem drive. I would like to remove the drive with errors. Please confirm that this is the correct procedure. Thanks.
     tower7-diagnostics-20210217-1146-B.zip  BTRFS Info.txt
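     For context, those counters come from btrfs itself, and the generic btrfs-level removal looks like the sketch below (mountpoint and device name are hypothetical; on Unraid the supported route is the GUI, so this only illustrates what happens underneath):
        btrfs device stats /mnt/cache               # per-device write_io_errs / read_io_errs / flush_io_errs
        btrfs device remove /dev/sdX1 /mnt/cache    # migrates data off the device and drops it from the raid1 pool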
  6. Been using a CP1500PFCLCD for years with the Unraid apc daemon. No problems, except you should set “Turn off UPS after shutdown” to No; it doesn’t work correctly with the apc software. Tried with NUT several years ago and it didn’t work with it either.
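     A minimal apcupsd.conf sketch for a USB-attached UPS (the values are assumptions, not taken from the post); KILLDELAY 0 is the daemon-level way of leaving the UPS powered on after shutdown:
        UPSCABLE usb     # USB HID connection
        UPSTYPE usb
        DEVICE           # leave blank so the USB UPS is autodetected
        KILLDELAY 0      # 0 = never try to switch the UPS off after the server shuts down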
  7. Adding a container variable of type Path, with container path and host path of /usr/bin (and access mode of read-only for security), would substitute the docker's /usr/bin with Unraid's /usr/bin. However, this would probably have negative consequences: the files would likely be from different Linux distros. I tried it with a random docker, and it wouldn't even start.
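     Outside the Unraid template, that mapping is just a read-only bind mount; a sketch with a placeholder image name:
        # Mount the host's /usr/bin over the container's /usr/bin, read-only.
        # Expect breakage: the host binaries were built against a different distro's libraries.
        docker run --rm -it -v /usr/bin:/usr/bin:ro some-image sh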
  8. Install the netdata docker and take a look at the template variables, including what you see when you press the Edit buttons. Some of the system variables are defined read-only.
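     For comparison, the upstream netdata image documents read-only host mounts along these lines (a sketch, not the exact Unraid template):
        docker run -d --name=netdata -p 19999:19999 \
          -v /proc:/host/proc:ro \
          -v /sys:/host/sys:ro \
          -v /etc/os-release:/host/etc/os-release:ro \
          netdata/netdata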
  9. For those of us with removable drive bays, should we be doing anything after unmounting a drive, before physically removing it?
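     One common command-line approach after unmounting, before pulling the tray (device name hypothetical; this is generic Linux practice, not official Unraid guidance):
        sync                                       # flush any pending writes
        hdparm -y /dev/sdX                         # put the drive into standby (spin down)
        echo 1 > /sys/block/sdX/device/delete      # detach the device from the SCSI layer before removal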
  10. I'm running 6.8.1 with a 3-drive raid1 cache pool. I've been getting Errors on Cache Pool messages, and many of the following in the logs:
      BTRFS error (device sdf1): error writing primary super block to device 1
      kernel: sd 7:0:0:0: [sdf] tag#15 UNKNOWN(0x2003) Result: hostbyte=0x04 driverbyte=0x00
      kernel: sd 7:0:0:0: [sdf] tag#15 CDB: opcode=0x2a 2a 00 1d 9b af c0 00 00 58 00
      The device still shows a green ball on the Main tab, but smartctl returns:
      Short INQUIRY response, skip product id
      A mandatory SMART command failed: exiting. To continue, add one or more '-T permissive' options.
      I'm assuming that the drive needs replacement. My only use of cache is VMs and Dockers, and I have disabled both services, as well as temporarily disabling the mover job. I would like to remove the cache pool from the configuration until I can obtain a replacement. I assume that the procedure is to stop the array, unassign all of the cache drives, and start the array. Then, when the new drive arrives, reassign the 2 good drives and the replacement, and start the array. Please confirm my procedure. I only have remote access to the server for the next few days. Thanks.
      tower7-diagnostics-20200208-1324B.zip
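      As the smartctl output itself suggests, a report can still be attempted with the permissive flag (device name taken from the log):
        smartctl -a -T permissive /dev/sdf    # ignore the failed mandatory command and print whatever the drive returns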
  11. I just purchased two WD80's. To my surprise, what arrived were WDC_WD80EFAX-68LHPN0's. I thought all new stock was air-filled. Helium is a plus for me; they run a few degrees cooler.
  12. It's for this kind of issue that binhex isolated preclear in a docker.
  13. Once the Plex Pass is associated with your Plex account, connect to your Plex server using the web interface. Log out of your Plex account, then log in again. You're done; you now have all the features of Plex Pass. Binhex's plexpass docker gives you access to beta releases, not additional features.
  14. Was waiting for a 6.8.1 stable that fixed the problem: Settings --> VM Manager --> Disable VM function does not work without stopping the array. If a 6.8.1-RC seems stable, and it is the only way to fix the problem, I'll probably jump early.