surfshack66
Posts posted by surfshack66
-
9 hours ago, uldise said:
It works just fine. My two Unraid VMs are working on an emulated CPU without any problems.
Thanks for clarifying. I was asking because @jbat66 mentions "make sure..."
Quote: Processors; Make sure you go to the bottom of the list of processors and pick the "host" type.
-
3 hours ago, jbat66 said:
It is passing all the CPU features to the VM; it is not locking the VM to the CPU. When not set as "host", it emulates a CPU.
When you pass all the CPU features to the VM, you cannot migrate the VM to another host while the VM is running. Since Unraid is locked to that host because of the physical USB key anyway, it doesn't matter that you cannot migrate the VM.
If this were VMware, think of it as disabling EVC (Enhanced vMotion Compatibility) processor support. EVC is a way to emulate a particular generation of CPU. You can have several hosts all running different generations of Intel CPUs, and if all the hosts/VMs emulate the lowest common CPU, then you can live-migrate your VMs from one host to another.
Got it. Thanks for the explanation.
Out of curiosity, why does unraid not work well with an emulated CPU?
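For anyone comparing the two modes later: the host-vs-emulated distinction above maps directly onto libvirt's <cpu> element. A config-fragment sketch (not tied to any particular VM):

```xml
<!-- "host" in the VM settings dropdown: pass every physical CPU feature
     through to the guest (fast, but not migration-safe) -->
<cpu mode='host-passthrough' check='none'/>

<!-- Emulated CPU: present a generic named model such as qemu64
     (migration-safe across different physical CPUs, but slower and
     missing newer instruction sets the guest may want) -->
<cpu mode='custom' match='exact'>
  <model fallback='allow'>qemu64</model>
</cpu>
```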
-
@jbat66 Thanks for this! I have some new server equipment arriving soon, so I'm planning on running Proxmox bare-metal and virtualizing Unraid.
Why do you have to pass through the host's processor to Unraid? In your example, the 4 cores you passed through would be inaccessible to Proxmox and other VMs, correct?
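For what it's worth, passing the host CPU type doesn't reserve those cores: Proxmox still time-shares physical cores among all guests unless you explicitly pin them. A sketch using the stock `qm` CLI (VM ID 100 is hypothetical):

```shell
# Expose the physical CPU's feature set to the guest ("host" type):
qm set 100 --cpu host
# Give the guest 4 vCPUs; these are scheduled, not exclusively reserved,
# so Proxmox and other VMs can still use the underlying physical cores:
qm set 100 --cores 4
```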
-
32 minutes ago, Squid said:
It's never been there on the startup pages. You can do all apps and then sort it accordingly
I could have sworn it was there before but sorting "all apps" works as well. Thanks!
-
15 hours ago, Squid said:
Same as above. Startup categories are limited to the 24. Other categories have no such limitation.
So this is by design? I liked the option to see more results for "New Apps"...
-
Hi - Any particular reason why the results are limited to 24? There used to be a "next page" button to show more results of the Categories.
-
-
I seem to run into this issue frequently when adding containers that do not have templates created for Unraid. I've noticed many of the templates for Unraid have PUID and PGID included. However, there are quite a few containers on dockerhub that do not have those parameters listed.
The issue I run into is not being able to edit/access files created by the containers. Am I supposed to be adding PGID and PUID to containers even if they don't specify this in their documentation?
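In case it helps anyone else: PUID/PGID are a convention of linuxserver.io-style images, not a Docker feature, so generic Docker Hub images simply ignore them. A sketch of both approaches (the image names are placeholders; 99/100 are Unraid's default nobody/users IDs):

```shell
# linuxserver.io-style image: an init script inside the container reads
# PUID/PGID and runs the app (and chowns its files) as that user:
docker run -d -e PUID=99 -e PGID=100 \
  -v /mnt/user/appdata/myapp:/config lscr.io/linuxserver/someimage

# Generic image: PUID/PGID do nothing; --user forces the process to run
# as that uid:gid instead (only works if the image doesn't need root):
docker run -d --user 99:100 \
  -v /mnt/user/appdata/myapp:/data someimage
```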
-
Hello - I have two identical GPUs. One is being used for Plex transcoding; the other I would like to pass through to a VM. The issue is that KVM crashes when I try to pass it through. I was going to try stubbing the device, but I'm not sure how after seeing this:
IOMMU group 16:[10de:1c30] 01:00.0 VGA compatible controller: NVIDIA Corporation GP106GL [Quadro P2000] (rev a1)
[10de:10f1] 01:00.1 Audio device: NVIDIA Corporation GP106 High Definition Audio Controller (rev a1)
IOMMU group 17:[10de:1c30] 02:00.0 VGA compatible controller: NVIDIA Corporation GP106GL [Quadro P2000] (rev a1)
[10de:10f1] 02:00.1 Audio device: NVIDIA Corporation GP106 High Definition Audio Controller (rev a1)
They are both 10de:1c30. Is it possible to stub the second GPU?
EDIT: I believe the fix is to add "BIND=02:00.0" to file 'config/vfio-pci.cfg' on the USB flash boot device.
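For anyone finding this later, a sketch of that approach (the exact file format varies by Unraid version; binding the card's audio function alongside it is usually necessary since they sit in the same IOMMU group):

```shell
# /boot/config/vfio-pci.cfg — bind the second card by PCI address, which
# works even though both cards share the same 10de:1c30 device ID:
# BIND=02:00.0 02:00.1

# After rebooting, confirm vfio-pci claimed the device instead of nouveau/nvidia:
lspci -nnk -s 02:00.0 | grep "driver in use"
```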
-
On 2/17/2020 at 1:19 PM, sjaak said:
i have 3 GPU and no problems at all. 1 for GUI boot (gt710), 1 for Plex (1050ti) and a Vega64 for the VM's (reset bug is still there).
Are you sure you didn't assign the wrong one?
Interesting. Also, I'm sure I didn't assign the wrong one.
EDIT: Both cards are the same. Not sure if that matters.
-
Hello - So I have 2 GPUs in my server. 1 is dedicated to plex transcoding. I tried assigning the other to a VM but it crashed KVM. Has anyone else experienced this issue?
-
21 hours ago, trevormiller6 said:
This is the wazuh server and then you would install the kibana app in your case or if using splunk you would install the splunk app. From the app you connect to the server using the API. The app serves as the UI for wazuh.
So to answer my original question, it sounds like you're running their Elastic stack as opposed to the official Kibana, Logstash, and Elasticsearch.
-
@trevormiller6 Are you running the other wazuh containers or just this? I have separate instances of elasticsearch, kibana, and logstash so I'm trying to integrate this container into my existing stack.
-
Would someone mind taking a look at my diagnostics? I tried searching for the error but no luck.
Tower nginx: 2020/01/09 01:19:50 [alert] 7437#7437: worker process 13182 exited on signal 6
Ultimately, my syslog fills up with these errors
Jan 9 01:24:04 Tower nginx: 2020/01/09 01:24:04 [error] 18528#18528: nchan: Out of shared memory while allocating channel /cpuload. Increase nchan_max_reserved_memory.
Jan 9 01:24:04 Tower nginx: 2020/01/09 01:24:04 [error] 18528#18528: *1736007 nchan: error publishing message (HTTP status code 507), client: unix:, server: , request: "POST /pub/cpuload?buffer_length=1 HTTP/1.1", host: "localhost"
Jan 9 01:24:04 Tower nginx: 2020/01/09 01:24:04 [crit] 18528#18528: ngx_slab_alloc() failed: no memory
Jan 9 01:24:04 Tower nginx: 2020/01/09 01:24:04 [error] 18528#18528: shpool alloc failed
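A workaround that seems to buy time until a real fix, assuming the stock Unraid nginx rc script: restarting nginx releases the exhausted nchan shared-memory pool.

```shell
# Restart the web UI's nginx; nchan's shared memory is reallocated fresh:
/etc/rc.d/rc.nginx restart
# Watch whether the errors start accumulating again:
tail -f /var/log/syslog | grep nchan
```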
-
I'm wondering if the issue is because I have the same usb device being used for the tower config.
-
On 6/17/2019 at 3:37 PM, deusxanime said:
You might want to review all your settings if you are still using those old parameters. They were deprecated for quite a long time and finally had support completely removed months back. I had the same problem with some other settings as well (I must have used an old guide to set it up originally, so I was using many of the older deprecated parameters) and went back through and realized I had to redo quite a few of them to get things working properly again. I don't remember what version that happened in, but it just kind of stopped working all at once because of that.
The "old" parameters are still in the default config for this container. Are there any plans to update the rtorrent.rc file to remove the deprecated settings?
https://github.com/linuxserver/docker-rutorrent/blob/master/root/defaults/rtorrent.rc
EDIT: "new" parameters aren't working, specifically max upload and download speed set to 0
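For reference, the modern equivalents of the removed rate settings look like this (a sketch; `RC_FILE` is a placeholder for the container's rtorrent.rc, typically under appdata):

```shell
# Append the throttle.* replacements for the removed
# upload_rate/download_rate parameters (0 = unlimited):
RC_FILE=./rtorrent.rc
cat >> "$RC_FILE" <<'EOF'
throttle.global_up.max_rate.set_kb = 0
throttle.global_down.max_rate.set_kb = 0
EOF
```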
-
1 hour ago, gxs said:
Same here. I reinstalled Docker and it's OK, but it's a pain to reinstall since I'm afraid I'll run into provisioning problems or something similar. I'm kind of afraid to reboot my USG now.
Although I was surprised to see that my custom JSONs transferred over.
Edit: Oh thank god! At least the restore function works like a miracle. Created a new Docker container, restored the backup, and everything is back up without any problems. Now on to setting up WireGuard (which is how I noticed that UniFi was dead).
That seems to have worked for me as well. I deleted the container, reinstalled with defaults, and during setup restored from the backup config (which was in a backup of the appdata folder).
-
On 12/11/2019 at 2:39 PM, yippy3000 said:
Seems like it was a corrupt DB. Don't know if it was related to the upgrade or not but the mongo DB got corrupt and restoring the app data folder from a backup fixed it.
I have the same issue but a restore did not resolve it.
-
Hello - I have a SanDisk Cruzer Fit that I'm trying to pass through to a VM. The issue is that the USB device does not show in the list of available devices to attach. It does show up in Unassigned Devices, though.
Does anyone have any suggestions?
Thanks.
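In case it's useful: when a USB stick is held by Unassigned Devices, unmounting it there first usually makes it selectable. Failing that, it can be attached by vendor/product ID; a sketch (the VM name "myvm" and the IDs are examples — check `lsusb` for the real values):

```shell
# Find the stick's vendor:product ID (0781:5571 is typical for a Cruzer Fit):
lsusb | grep -i sandisk

# Hypothetical hostdev definition, attached to a running VM named "myvm":
cat > /tmp/usb.xml <<'EOF'
<hostdev mode='subsystem' type='usb'>
  <source>
    <vendor id='0x0781'/>
    <product id='0x5571'/>
  </source>
</hostdev>
EOF
virsh attach-device myvm /tmp/usb.xml --live
```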
-
32 minutes ago, Squid said:
Thank you.
-
I have an existing container that I manually created the template for. I would like to install a second copy of that container and use the "official" template that is now available on CA.
How can I do that? I can't seem to figure it out...
-
1 hour ago, hawihoney said:
It depends:
If you start from scratch, I suggest going Nextcloud only.
If you already have a filled document archive, I suggest Nextcloud with external storage. If you add/modify/delete files from outside of Nextcloud as well, I would go that way too.
When we started with Nextcloud we already had thousands of documents in well-organized shares and folders. To this day some people work with the shares, and we didn't want to stop that at first. External storage is perfect for that workflow, so we're still using External Storage in Nextcloud.
BTW, the only thing that's missing in Nextcloud is better Notes support. There are 2-3 apps; some have weird formatting, some didn't work.
I don't have thousands of documents, but maybe in the dozens. I don't mind uploading them to Nextcloud and removing the duplicate on the share.
I use the default notes app and the android notes app as well. It's just ok.
The one thing I see missing in Nextcloud is being able to edit PDFs. Sometimes I have to fill out forms that are in PDF format.
-
I'd like to organize my files on unraid as well as move away from paper copies. How do you manage your files?
1. SMB Share on unraid
2. SMB share on unraid with external storage support on Nextcloud
3. Only Nextcloud
These are the three scenarios I can think of, but perhaps I am missing some. Interested to hear how you organize your personal files and why.
EDIT: Also, do you use Paperless (or something similar)?
-
Can you bake SNMP into the OS without the need for a separate plugin? Unraid is the only NAS without SNMP installed by default...
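Until then, the community SNMP plugin covers most of this; a sketch of querying it with stock net-snmp tools (the hostname and community string are examples):

```shell
# Basic system info:
snmpwalk -v 2c -c public tower.local system
# Per-interface traffic counters:
snmpwalk -v 2c -c public tower.local IF-MIB::ifInOctets
```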
-
-
On 5/24/2019 at 1:58 PM, bobbintb said:
You edited the rc file directly? If so, not sure why that wouldn't work.
Yes, I edited the rc file directly as well as made the changes in the GUI. Any time the container updates, those two settings revert to their defaults.
[GUIDE] Installing UnRaid (ver. 6.83) on ProxMox (ver. 6.2-4)
in Virtualizing Unraid
Posted
Got it. Thanks for the explanation @jbat66!