surfshack66


Posts posted by surfshack66

  1. On 11/8/2020 at 11:57 PM, jbat66 said:

    I wanted to see if I "could" run a VM inside of Unraid while Unraid itself was running as a VM under Proxmox. There is no need to do this, as it is better to run your VMs under the hypervisor that talks directly to your hardware, in this case Proxmox. I could not get VMs in Unraid to work without passing all the CPU functions (such as virtualization) down to Unraid.

    While you can run an emulated CPU, the reason for doing that is to be able to migrate a running VM between dissimilar hosts. Since you can't migrate Unraid, you might as well pass all the CPU features through to Unraid. IMHO.

    Got it. Thanks for the explanation @jbat66!
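     

    For anyone else trying this: the setting jbat66 describes corresponds to the CPU type in the Proxmox VM config. A minimal sketch, assuming a placeholder VM ID of 100 (the comment lines are just annotation, not taken from a real config):

    # /etc/pve/qemu-server/100.conf  (100 is a placeholder VM ID)
    # "host" exposes the physical CPU's features, including the
    # virtualization extensions, to the Unraid guest; an emulated
    # model such as kvm64 hides most of them.
    cpu: host

    The same thing can be set in the Proxmox web UI under the VM's Hardware > Processors > Type.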

  2. 3 hours ago, jbat66 said:

    It passes all the CPU features to the VM; it does not lock the VM to the CPU. When it is not set to "host", it emulates a CPU.

    When you do pass all the CPU features to the VM, you cannot migrate the VM to another host while the VM is running. Since Unraid is locked to that host because of the physical USB key, it doesn't matter that you cannot migrate the VM.

    If this were VMware, think of it as disabling EVC (Enhanced vMotion Compatibility). EVC is a way to emulate a particular generation of CPU. You can have several hosts running different generations of Intel CPUs, and if all the hosts/VMs emulate the lowest common CPU, then you can live-migrate (vMotion) your VMs from one host to another.

    Got it. Thanks for the explanation.

     

    Out of curiosity, why does Unraid not work well with an emulated CPU?

    I seem to run into this issue frequently when adding containers that do not have templates created for Unraid. I've noticed many of the Unraid templates include PUID and PGID. However, there are quite a few containers on Docker Hub that do not list those parameters at all.

     

    The issue I run into is not being able to edit/access files created by the containers. Am I supposed to be adding PGID and PUID to containers even if they don't specify this in their documentation?
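
     

    For reference, this is the kind of thing I mean. My understanding is that PUID/PGID are plain environment variables that only take effect if the image's startup script actually reads them (the linuxserver.io images do). A rough example using Unraid's default nobody/users IDs and a placeholder container/image name:

    # 99/100 are Unraid's default nobody:users IDs;
    # "example" / "example/image" are placeholders, not a real container
    docker run -d \
      --name=example \
      -e PUID=99 \
      -e PGID=100 \
      -v /mnt/user/appdata/example:/config \
      example/image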

  4. Hello - I have two identical GPUs. One is being used for Plex transcoding; the other I would like to pass through to a VM. The issue is that KVM crashes when I try to pass it through. I was going to try stubbing the device, but I wasn't sure how after seeing this:

     

    IOMMU group 16:[10de:1c30] 01:00.0 VGA compatible controller: NVIDIA Corporation GP106GL [Quadro P2000] (rev a1)

    [10de:10f1] 01:00.1 Audio device: NVIDIA Corporation GP106 High Definition Audio Controller (rev a1)

    IOMMU group 17:[10de:1c30] 02:00.0 VGA compatible controller: NVIDIA Corporation GP106GL [Quadro P2000] (rev a1)

    [10de:10f1] 02:00.1 Audio device: NVIDIA Corporation GP106 High Definition Audio Controller (rev a1)

     

    They are both 10de:1c30. Is it possible to stub the second GPU?

     

     

    EDIT: I believe the fix is to add "BIND=02:00.0" to the file 'config/vfio-pci.cfg' on the USB flash boot device.
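
     

    In case anyone else lands here with two identical cards: binding by PCI address (rather than by the shared 10de:1c30 vendor:device ID) is what makes it possible to stub only one of them. A sketch of the single line in config/vfio-pci.cfg, assuming the space-separated address format matches your Unraid version and assuming the card's audio function at 02:00.1 should be stubbed along with the GPU (the first card at 01:00.x stays available for Plex):

    BIND=02:00.0 02:00.1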

  5. On 2/17/2020 at 1:19 PM, sjaak said:

    I have 3 GPUs and no problems at all: one for GUI boot (GT 710), one for Plex (1050 Ti), and a Vega 64 for the VMs (the reset bug is still there).
    Are you sure you didn't assign the wrong one?

    Interesting. Also, I'm sure I didn't assign the wrong one.

     

    EDIT: Both cards are the same. Not sure if that matters.

  6. 21 hours ago, trevormiller6 said:

    This is the Wazuh server; you would then install the Kibana app in your case, or the Splunk app if you're using Splunk. From the app you connect to the server using the API. The app serves as the UI for Wazuh.

    So, to answer my original question, it sounds like you're running their Elastic Stack as opposed to the official Kibana, Logstash, and Elasticsearch.

     

  7. Would someone mind taking a look at my diagnostics? I tried searching for the error but had no luck.

    Tower nginx: 2020/01/09 01:19:50 [alert] 7437#7437: worker process 13182 exited on signal 6

     

    Ultimately, my syslog fills up with these errors:

     

    Jan  9 01:24:04 Tower nginx: 2020/01/09 01:24:04 [error] 18528#18528: nchan: Out of shared memory while allocating channel /cpuload. Increase nchan_max_reserved_memory.
    Jan  9 01:24:04 Tower nginx: 2020/01/09 01:24:04 [error] 18528#18528: *1736007 nchan: error publishing message (HTTP status code 507), client: unix:, server: , request: "POST /pub/cpuload?buffer_length=1 HTTP/1.1", host: "localhost"
    Jan  9 01:24:04 Tower nginx: 2020/01/09 01:24:04 [crit] 18528#18528: ngx_slab_alloc() failed: no memory
    Jan  9 01:24:04 Tower nginx: 2020/01/09 01:24:04 [error] 18528#18528: shpool alloc failed

     

    tower-diagnostics-20200115-1611.zip
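
     

    For what it's worth, the directive the log message points at (nchan_max_reserved_memory) is an nginx/nchan setting that goes in the http block of the webGui nginx config. A sketch of what raising it might look like; the 512M value is an arbitrary guess, and Unraid regenerates this config, so a manual edit may not survive a reboot:

    # inside the http { } block of the webGui nginx config
    nchan_max_reserved_memory 512M;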

  8. On 6/17/2019 at 3:37 PM, deusxanime said:

    You might want to review all your settings if you are still using those old parameters. They were actually deprecated for quite a long time and finally had support removed entirely a few months back. I had the same problem with some other settings as well (I must have used an old guide to set it up originally, so I was using many of the older deprecated parameters) and went back through and realized I had to redo quite a few of them to get things working properly again. I don't remember which version that happened in, but it all just stopped working at once because of that.

    The "old" parameters are still in the default config for this container. Are there any plans to update the rtorrent.rc file to remove the deprecated settings?

     

    https://github.com/linuxserver/docker-rutorrent/blob/master/root/defaults/rtorrent.rc

     

    EDIT: The "new" parameters aren't working either, specifically max upload and download speed set to 0.
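
     

    For reference, these are the new-style lines I mean (0 = unlimited); the exact command names assume a reasonably current rtorrent build:

    # replaces the deprecated "upload_rate = 0"
    throttle.global_up.max_rate.set_kb = 0
    # replaces the deprecated "download_rate = 0"
    throttle.global_down.max_rate.set_kb = 0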

  9. 1 hour ago, gxs said:

    Same here. I reinstalled the Docker container and it's OK, but it's a pain to reinstall since I'm afraid I'll run into provisioning problems or something similar. I'm kind of afraid to reboot my USG now. :)

    Although I was surprised to see that my custom JSONs transferred over.

     

    Edit: Oh thank god! At least the restore function works like a miracle. I created a new container, restored the backup, and everything is back up without any problems. Now on to setting up WireGuard (which is how I noticed that UniFi was dead).

    That seems to have worked for me as well. I deleted the container, reinstalled with defaults, and during setup restored from the backup config (which was in a backup of the appdata folder).

  10. Hello - I have a SanDisk Cruzer Fit that I'm trying to pass through to a VM. The issue is that the USB device does not show up in the list of available devices to attach. It does show up in Unassigned Devices, though.

     

    Does anyone have any suggestions?

     

    Thanks.

  11. 1 hour ago, hawihoney said:

    It depends:

     

    If you are starting from scratch, I suggest going Nextcloud only.

     

    If you already have a filled document archive, I suggest going Nextcloud with external storage. If you also add/modify/delete files from outside of Nextcloud, I would go that way too.

     

    When we started with Nextcloud we already had thousands of documents in well-organized shares and folders. To this day some people still work with the shares directly, and we didn't want to stop that at first. External storage is perfect for that workflow, so we're still using external storage in Nextcloud.

     

    BTW, the only thing missing in Nextcloud is better notes support. There are 2-3 notes apps; some have weird formatting, and some didn't work.

     

    I don't have thousands of documents, maybe dozens. I don't mind uploading them to Nextcloud and removing the duplicates on the share.

     

    I use the default Notes app and the Android Notes app as well. They're just OK.

     

    The one thing I see missing in Nextcloud is being able to edit PDFs. Sometimes I have to fill out forms that are in PDF format.

  12. I'd like to organize my files on Unraid as well as move away from paper copies. How do you manage your files?

     

    1. SMB share on Unraid

    2. SMB share on Unraid with external storage support in Nextcloud

    3. Nextcloud only

     

    These are the three scenarios I can think of, but perhaps I am missing some. I'm interested to hear how you organize your personal files and why.

     

    EDIT: Also, do you use Paperless (or something similar)?