cracksilver's Achievements

Apprentice (3/14)

  1. Hi there. I made a Borg backup from my Nextcloud instance, which runs in a VM, to a share on Unraid. The NC instance runs on a separate hard disk. For about 1.4 TB it needed around 220 hours; the write speed was between 30 and 100 kB/s. Why is this system so slow? Attached you will find the file - maybe someone can spot the problem. I would really appreciate any help. Thanks in advance, Gregor
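Sequential write throughput to the share can be measured directly, which helps separate a slow storage path from a slow Borg run. A minimal sketch - the path below is a stand-in and would need to point at the actual mounted share:

```shell
# Write 256 MiB of zeros and let dd report the sustained throughput.
# /tmp/borg-speedtest is an example path; replace it with a file on the share.
dd if=/dev/zero of=/tmp/borg-speedtest bs=1M count=256 conv=fdatasync
rm -f /tmp/borg-speedtest
```

If dd already reports only a few hundred kB/s when writing to the share, the bottleneck is the storage path (for example the 9p mount visible in the VM configuration), not Borg itself.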
  2. Hi there. I realised that if I change the machine type in the VM configuration from Q35-5.1 to Q35-4.0, the VM starts normally. But why did it work before? I didn't change anything at all. And what is the difference between the two machine types? It would be nice if someone could advise me on that. Thank you. Greg
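The versioned machine types (Q35-5.1, Q35-4.0, ...) pin the virtual chipset layout that the guest sees, which is why switching the version can change boot behaviour. The versions a given QEMU build supports can be listed from QEMU itself - a sketch, assuming an x86-64 host:

```shell
# List every machine type this QEMU build supports, then filter for Q35 versions.
qemu-system-x86_64 -machine help | grep q35
```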
  3. Hi all. Since I restarted the Nextcloud VM, my browser only shows: "502 Bad Gateway openresty". I can't ping the IP of the instance, and if I look inside the VM via VNC there is no network configuration. At first I thought it was a problem with Nextcloud; however, it must be due to Unraid, because I tried to start the VM from a backup disk.img and got the same problem. That's why I think it is a problem with Unraid, not with Nextcloud. When I look at the log I see the following - I think the last 3 lines are not OK. But what can I do? I have also attached the diagnostics file. It would be very nice if someone could help me. This instance ran for 8 months without any problem, and now I can't work anymore... Thank you in advance, Greg
     -rtc base=utc,driftfix=slew \
     -global kvm-pit.lost_tick_policy=delay \
     -no-hpet \
     -no-shutdown \
     -boot strict=on \
     -device pcie-root-port,port=0x10,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x2 \
     -device pcie-root-port,port=0x11,chassis=2,id=pci.2,bus=pcie.0,addr=0x2.0x1 \
     -device pcie-root-port,port=0x12,chassis=3,id=pci.3,bus=pcie.0,addr=0x2.0x2 \
     -device pcie-root-port,port=0x1c,chassis=4,id=pci.4,bus=pcie.0,addr=0x3.0x4 \
     -device pcie-root-port,port=0x14,chassis=5,id=pci.5,bus=pcie.0,addr=0x2.0x4 \
     -device pcie-root-port,port=0x15,chassis=6,id=pci.6,bus=pcie.0,addr=0x2.0x5 \
     -device pcie-root-port,port=0x16,chassis=7,id=pci.7,bus=pcie.0,addr=0x2.0x6 \
     -device pcie-root-port,port=0x17,chassis=8,id=pci.8,bus=pcie.0,addr=0x2.0x7 \
     -device pcie-root-port,port=0x18,chassis=9,id=pci.9,bus=pcie.0,multifunction=on,addr=0x3 \
     -device ich9-usb-ehci1,id=usb,bus=pcie.0,addr=0x7.0x7 \
     -device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pcie.0,multifunction=on,addr=0x7 \
     -device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pcie.0,addr=0x7.0x1 \
     -device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pcie.0,addr=0x7.0x2 \
     -device virtio-serial-pci,id=virtio-serial0,bus=pci.3,addr=0x0 \
     -blockdev '{"driver":"file","filename":"/mnt/user/domains/Debian/vdisk1.img","node-name":"libvirt-1-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
     -blockdev '{"node-name":"libvirt-1-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-1-storage"}' \
     -device virtio-blk-pci,bus=pci.6,addr=0x0,drive=libvirt-1-format,id=virtio-disk2,bootindex=1,write-cache=on \
     -fsdev local,security_model=passthrough,id=fsdev-fs0,path=/mnt/user/nc_share_unraid/ \
     -device virtio-9p-pci,id=fs0,fsdev=fsdev-fs0,mount_tag=nc_share_unraid,bus=pci.1,addr=0x0 \
     -netdev tap,fd=33,id=hostnet0 \
     -device virtio-net,netdev=hostnet0,id=net0,mac=52:54:00:ef:82:8e,bus=pci.4,addr=0x0 \
     -chardev pty,id=charserial0 \
     -device isa-serial,chardev=charserial0,id=serial0 \
     -chardev socket,id=charchannel0,fd=34,server,nowait \
     -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 \
     -device usb-tablet,id=input0,bus=usb.0,port=1 \
     -vnc,websocket=5700 \
     -k de-ch \
     -device qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pcie.0,addr=0x1 \
     -device virtio-balloon-pci,id=balloon0,bus=pci.5,addr=0x0 \
     -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
     -msg timestamp=on
     2021-01-27 00:07:13.067+0000: Domain id=2 is tainted: high-privileges
     2021-01-27 00:07:13.067+0000: Domain id=2 is tainted: host-cpu
     char device redirected to /dev/pts/0 (label charserial0)
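From the VNC console inside the guest, the usual first checks are whether the virtio NIC is visible at all and whether it has simply lost its address. A rough sketch, assuming a Debian guest that gets its address via DHCP - the interface name "ens3" is a placeholder for whatever the first command actually shows:

```shell
# Show all interfaces: a missing virtio NIC points at the host/VM config,
# a present NIC with no address points at DHCP inside the guest.
ip -o link show
ip -o addr show
# If the NIC is there but unconfigured, bring it up and re-request a lease.
ip link set ens3 up && dhclient -v ens3
```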
  4. Thanks, and sorry for the late answer - I did not get any notification... OK, if I have to stop the VM for the mover to run correctly, then the cache option is not the best one for me: I can't stop the VM every night when the mover runs. Is there another way to stop the VM automatically? Otherwise I would decide against the cache option and leave things running as they are.
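Stopping the VM before the mover runs can be scripted rather than done by hand. A sketch using libvirt's CLI - "Debian" is the example domain name from the log above, and the scheduling (cron, Unraid's User Scripts plugin, etc.) is left out:

```shell
#!/bin/sh
# Hypothetical pre-mover script: ask the guest to shut down cleanly,
# wait until it has stopped, and start it again once the mover is done.
VM="Debian"
virsh shutdown "$VM"                     # sends an ACPI shutdown request to the guest
while virsh domstate "$VM" | grep -q running; do
    sleep 5                              # poll until the guest has powered off
done
# ... run the mover here ...
virsh start "$VM"
```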
  5. If you have some time to look inside, it would be greatly appreciated. Thank you.
  6. Attached - thanks for taking a look.
  7. Exactly what I have been looking for, thanks. You'll get the diagnostics zip tomorrow.
  8. Yes, my Nextcloud runs in a VM together with all the other components (MariaDB, OnlyOffice, etc.). I'm asking about the cache settings for the user share, the domains share and the appdata share. I can choose "Prefer" or "Yes" - what exactly is the difference? I'm looking for a fast and safe solution.
  9. Yes, replace 2 of my unused Unassigned HDDs with 2 new SSDs. Do you mean just the user share of Nextcloud (which is mounted in the Nextcloud VM), or also the appdata and domains shares? And how do I set the interval for the mover?
  10. Hello everybody. I have now finished my Unraid server. It is in a server room in the company, behind a Nethserver firewall; the only open ports are 80, 443 and one more for the VPN connection.
     Specification:
     - 128 GB RAM
     - 2 x Xeon E5645 CPU
     - SuperMicro X8DA3 motherboard
     - Gigabit LAN
     The following instances run on it:
     - Nextcloud VM
     - NginxProxyManager Docker
     - MariaDB Docker
     - Wordpress Docker
     The array:
     - 2 x 4 TB parity
     - 2 x 3 TB data disks
     Backups:
     - AppData, UsbData and Nextcloud data go to an unassigned 2 TB disk (every Monday)
     - the Nextcloud VM goes to a separate unassigned 1.5 TB disk (every night except Monday)
     Now I have 2 HD slots free for the two new Samsung 500 GB SSDs.
     1. How do I best proceed to configure this? Just shut down the server, take out the 2 old HDs and put the new ones in?
     2. Start up and add the 2 new SSDs to the array as cache drives?
     The aim is for the Nextcloud VM to benefit as much as possible from the SSD speed while taking maximum data security into account. What share settings do I have to configure? Could someone kindly give me some tips? I'm not really familiar with caching yet. I appreciate it very much. Thank you. Gregor
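After swapping the drives in, it is easy to confirm from a terminal that the system actually sees the new disks as SSDs before assigning them as a cache pool. A small sketch:

```shell
# ROTA=0 marks non-rotational devices (SSDs); ROTA=1 marks spinning disks.
lsblk -d -o NAME,SIZE,ROTA,MODEL
```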
  11. @bastl This is a good one. Thank you for this.
  12. Thanks for your detailed description. Then I got that right. I have 128 GB of RAM, and of course I would like to distribute these resources sensibly among the VMs and Dockers. greg
  13. Hi there. I am not sure how exactly the RAM settings in the VM configuration must be set. There are two values I can set - left: Initial Memory, right: Max Memory. What do they mean exactly? And can I see somewhere how much RAM a VM is currently using, like in the overview of the Dockers in advanced view? Thank you for any advice. Greg
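With libvirt, "Initial Memory" corresponds to the current balloon target the guest starts with, and "Max Memory" is the ceiling the balloon can grow to; current usage can be queried from the host. A sketch - "Debian" is an example domain name, and the size in the last command is an arbitrary illustration:

```shell
# Configured memory (current vs. maximum) for the domain.
virsh dominfo Debian | grep -i memory
# Live balloon and guest memory statistics (requires the guest balloon driver).
virsh dommemstat Debian
# Resize the balloon at runtime; the value is in KiB and must stay <= Max Memory.
virsh setmem Debian 8388608 --live    # 8 GiB
```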
  14. So I tested the blog a bit and everything runs well and smoothly. The only place I get a 504 server timeout is when I try to print as PDF; there is a plugin for that. Can I change some settings somehow? greg
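A 504 from the proxy usually means the upstream (here, the PDF generation) takes longer than the proxy's read timeout. In NginxProxyManager this kind of override can go into the proxy host's custom Nginx configuration; the directives are standard Nginx, but the 300-second value below is only an example, not a recommendation:

```nginx
# Allow slow upstream responses (e.g. PDF generation) for up to 5 minutes.
proxy_read_timeout 300s;
proxy_send_timeout 300s;
```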
  15. Thank you so much. Everything is working now. You made my day happier!! gregor