draeh

Everything posted by draeh

  1. I'm not sure what you did, but the latest update dropped about 1.3GB in size. Again, I wasn't opposed to a container getting bigger. I was only trying to be diligent about the possibility of something having gotten added to one of the supporting images that may not have been above board. Thanks for looking into it.
2. Doesn't seem like any of those changes would lead to such an increase in the container. Most likely it's one of the other supporting images that changed. This is what I hate about docker: the lack of traceability. I mean, yes, it's traceable if I take each image that makes up the container and track them down manually, one by one. It's tedious, but when I see a size increase like this I want to know why.
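
     A rough sketch of chasing this down from the Unraid terminal; the image reference is a placeholder for whatever tag the container actually runs:

         # show every layer of the image with its size and the build step that created it
         docker history --no-trunc --format '{{.Size}}\t{{.CreatedBy}}' <repository>/<image>:latest

         # compare total sizes across everything cached locally
         docker images --format '{{.Repository}}:{{.Tag}}\t{{.Size}}'
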
3. Is there a reason why the container has roughly doubled in size? It may not have literally doubled, but it's gotten significantly bigger in the last 2 updates.
  4. I see that the luckybackup docker image was updated 6 hours ago. Is there a change log posted somewhere?
  5. I understand that. How do I make that password temporary? Is there a way to make Unraid forget that password? Is it forgotten when you press the 'remove' button on the 'Main' tab's Historical Devices?
  6. Is there a way to temporarily enter an encrypted drive's passphrase? Or is the only option to set the passphrase and when finished to 'forget' the drive?
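
     A minimal sketch of a purely manual, one-session unlock from the root terminal, so the passphrase is only held for that session; the device path and mount point are placeholders:

         # prompt for the passphrase once and map the device (nothing is written to disk)
         cryptsetup luksOpen /dev/sdX1 tempcrypt
         mount /dev/mapper/tempcrypt /mnt/tempcrypt
         # ... read what you need ...
         umount /mnt/tempcrypt
         cryptsetup luksClose tempcrypt

     (Forum guidance suggests the GUI-entered passphrase sits in /root/keyfile while the array is started, so removing that file after start is the commonly cited way to make Unraid 'forget' it.)
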
7. Hello. Searching the forums, I was able to pass through my attached Blu-ray drive to a VM by adding the following to the VM XML in the devices section:

         <hostdev mode='subsystem' type='scsi' managed='no' rawio='yes'>
           <source>
             <adapter name='scsi_host10'/>
             <address bus='0' target='0' unit='0'/>
           </source>
           <readonly/>
           <alias name='hostdev0'/>
           <address type='drive' controller='0' bus='0' target='0' unit='0'/>
         </hostdev>

     In general it works. The Ubuntu 20 VM sees the drive and can read from it. What it reads appears to be valid, but the read speed is only about 4MB/s, and the following errors are filling syslog:

         Jul 11 11:43:51 dev kernel: sr 0:0:0:0: [sr0] tag#122 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
         Jul 11 11:43:51 dev kernel: sr 0:0:0:0: [sr0] tag#122 Sense Key : Aborted Command [current]
         Jul 11 11:43:51 dev kernel: sr 0:0:0:0: [sr0] tag#122 Add. Sense: I/O process terminated
         Jul 11 11:43:51 dev kernel: sr 0:0:0:0: [sr0] tag#122 CDB: Read(10) 28 00 00 00 30 00 00 02 80 00
         Jul 11 11:43:51 dev kernel: blk_update_request: I/O error, dev sr0, sector 49152 op 0x0:(READ) flags 0x80700 phys_seg 23 prio class

     This goes on and on during the read. Any thoughts on how to correct this and make the read speeds better? EDIT: Stopping the VM and using the drive in the Unraid root terminal does not produce this error in the host logs, and the read speed reaches up to 20MB/s. This would appear to be some kind of VM/passthru issue.
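
     For reference, a sketch of confirming which scsi_hostN and address an optical drive hangs off, run on the Unraid host and assuming the drive shows up as sr0:

         # the resolved sysfs path encodes host/bus/target/unit, e.g. .../host10/target10:0:0/10:0:0:0
         readlink -f /sys/block/sr0/device

         # the kernel's inventory of attached SCSI devices, including the optical drive
         cat /proc/scsi/scsi
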
8. Sorry if I didn't make that clear. I have an existing apache server that my firewall pointed to. That server managed a letsencrypt certificate. I decided to employ the letsencrypt reverse proxy docker on my unraid server to manage the certificate, making it easier to host multiple named servers and subdomains. As a first step, I simply used the docker to reverse proxy the original server, which is working great, but I've lost the ability to audit my server the way I originally did. I would audit the apache access logs for undesired behavior and sometimes blacklist domains or IPs based on the addresses listed in those logs. Now the apache server's access logs only show the unraid server's IP address as the one making the requests. Is there somewhere within the reverse proxy docker where I can view a kind of access log that will show me what internet addresses are trying to access the proxy?
  9. Just started using this instead of having my server handle the SSL certificate directly. Now that this is running, my server's access log shows all requests as having come from the reverse proxy. Is there an access log on the reverse proxy where I can see the outside addresses using the server?
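
     A guess at where to look, assuming the linuxserver letsencrypt image with its /config volume mapped into appdata (container name and paths may differ on other setups):

         # follow the proxy's access log from inside the container
         docker exec letsencrypt tail -f /config/log/nginx/access.log

         # or straight from the host, via the mapped appdata share
         tail -f /mnt/user/appdata/letsencrypt/log/nginx/access.log
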
  10. Oh. Dear. God. I am blind. I never investigated that button. This is exactly the feature I was looking for. Thanks!
11. Is there a plugin that will list each drive's IO rate? I've found one that shows the IO rate for the array, but I would like to check and track the IO read/write rates for each drive in the array. Something like iostat, but on the main page.
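
     Until such a plugin turns up, a sketch of doing it by hand from the terminal with iostat (available through the Nerd Tools plugin):

         # per-device read/write throughput in MB/s, refreshed every 5 seconds;
         # the first report is the average since boot, later ones are live rates
         iostat -dm 5
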
12. Been almost a month (next week); sure would be nice to get that 'next article'... please.
13. This ^^^, and the need for granular permissions, are why I use NFS over Virt and SMB. I can't wait for the follow-up blog post where the discussion turns to the 'stale file handle' issue seen on some VMs. For my most active VM it seems to go in cycles: it will run without issue for weeks, and then sporadically it will have a day where it happens repeatedly. So far I haven't been able to pin down what is happening differently on those days.
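
     For what it's worth, the usual in-guest recovery once a handle goes stale is a forced lazy remount; the mount point is a placeholder and an fstab entry for the share is assumed:

         # detach the stale mount even while processes still hold it, then remount fresh
         sudo umount -f -l /mnt/share
         sudo mount /mnt/share
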
14. I also have Nerd Tools installed. The only thing I installed was perl, to do the system temperature devices scan. When perl installed, it also installed the newer tmux, or at least the Nerd Tools page shows tmux as installed. Either way, this is some new incompatibility with preclear, as the previous version was working fine before the update. EDIT: I spoke too soon. While tmux 3.1.0 was listed on the Nerd Tools page, my installed tmux is 3.0a.
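
     (The version actually on PATH is easy to confirm, whatever the plugin page claims:)

         # prints e.g. 'tmux 3.0a'
         tmux -V
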
15. Hello. I updated this morning to 2020.05.05a and now I am seeing this in the logs:

          May 6 10:32:14 legion preclear_disk[29247]: error encountered, exiting...
          May 6 10:34:49 legion preclear_disk[32017]: error encountered, exiting...
          May 6 10:35:45 legion preclear_disk[660]: error encountered, exiting...

      Since I had performed an update, I also tried completely removing the plugin and reinstalling it, but I am getting the same result.
16. Did you ever discover how to correct this? I installed the docker container today and this is happening on every WU. EDIT: They eventually clean themselves up. I did nothing, and after a few more WUs it was able to clean up after itself.
17. Thanks for the 9p confirmation. Must be common knowledge to most, as I hardly found it talked about. I do have my NFS shares currently set to private. I was wondering if there is something I could add to the rules that would limit the share to inside the unraid server so it isn't offered externally.
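
     A sketch of what that rule might look like, using exports(5) syntax as the per-share NFS 'Rule' field appears to accept it; the address is a placeholder for the VM's IP on the internal bridge:

         # inspect what is currently being exported, and to whom
         cat /etc/exports

         # a restrictive rule limiting the share to a single client, e.g.:
         # 192.168.122.10(sec=sys,rw)
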
18. Hi All. As a rather new Unraid user, I've come across what most of you have already solved. It seems that when using virtio to mount an unraid share in a guest VM, the throughput is rather limited (20-30MB/s). If mounted via CIFS or NFS, the throughput is only limited by the cache/disk being written to. Is this correct? Is there something else that can be tweaked to improve virtio? Some of my shares are specific to the VMs, and it feels a little hokey to share them by NFS/CIFS where another machine on the network could potentially gain access, whereas virtio is exclusive to the unraid host. So, as it says in the title, what am I missing? Thanks!
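
     If the virtio/9p route is kept, the commonly reported throughput lever is raising msize at mount time; the mount tag 'vmshare' is a placeholder for whatever tag the VM template assigns:

         # mount the 9p share with a larger message size (the small default throttles throughput)
         mount -t 9p -o trans=virtio,version=9p2000.L,msize=262144 vmshare /mnt/vmshare
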
19. Before I pull the trigger on some new hardware and an Unraid license, I had a few questions about my scenario and whether Unraid is the right fit. Currently I have two Dell OptiPlex 9020s running Debian 9 as servers. One is a dedicated Plex media server with a 120GB SSD for boot, a 250GB SSD for the Plex metadata, and two 8TB drives in RAID 0 for the media libraries. The second server is a jack-of-all-trades: it's a web server, database server, and OpenHAB/MQTT server, as well as a development test bed for various projects. What I would like to know is whether Unraid with Docker containers can be both the storage and VM solution for all these services within the same box. I think the answer is yes, but confirmation would be great. Also, what kind of drive configuration would I need to get started? Obviously once the data is migrated I'll have the drives listed above for additional storage, but what and how many do I need to start with? Thanks in advance!