
Posts posted by rorton

  1. Interestingly, it may well be the VNC.

     

     I've just enabled Remote Desktop, disconnected my VNC session, tried to reconnect the VNC session, and it failed.

     

     Used Remote Desktop instead, and I'm straight in - so the VM is up and running.

     

     I'm using a Mac to connect - I tried Chicken of the VNC (didn't work at all), then VNC Viewer and iTeleport.

  2. Clean install of Win10 Pro - latest ISO from Microsoft.

     

     It all installs OK, activates, drivers go on correctly, and I've done a few reboots to make sure it's all good.

     

     Then, after about 10 minutes of leaving it, I can't reconnect to it - the VNC session disconnects, and I can't ping it or connect by RDP or anything - the only thing I can do is force restart it. It then stays OK for a bit, then carries on the same way.

     

     My VM log is below, and I keep getting these "qemu-system-x86_64: warning: guest updated active QH" messages, which I'm guessing is the problem.

     

     I thought there was a problem with how I built the VM, as this started when I built my first one yesterday, so I trashed it and rebuilt it, and it's still the same.

     

     Any ideas?

     Thanks

     

    -no-hpet \
    -no-shutdown \
    -boot strict=on \
    -device ich9-usb-ehci1,id=usb,bus=pci.0,addr=0x7.0x7 \
    -device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0x7 \
    -device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0x7.0x1 \
    -device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0x7.0x2 \
    -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 \
    -drive 'file=/mnt/cache/vms/Windows 10/vdisk1.img,format=raw,if=none,id=drive-virtio-disk2,cache=writeback' \
    -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk2,id=virtio-disk2,bootindex=1,write-cache=on \
    -drive file=/mnt/user/isos/Win10_1903_V2_English_x64.iso,format=raw,if=none,id=drive-ide0-0-0,readonly=on \
    -device ide-cd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=2 \
    -drive file=/mnt/user/isos/virtio-win-0.1.160-1.iso,format=raw,if=none,id=drive-ide0-0-1,readonly=on \
    -device ide-cd,bus=ide.0,unit=1,drive=drive-ide0-0-1,id=ide0-0-1 \
    -netdev tap,fd=28,id=hostnet0,vhost=on,vhostfd=29 \
    -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:1b:f5:e4,bus=pci.0,addr=0x3 \
    -chardev pty,id=charserial0 \
    -device isa-serial,chardev=charserial0,id=serial0 \
    -chardev socket,id=charchannel0,fd=30,server,nowait \
    -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 \
    -device usb-tablet,id=input0,bus=usb.0,port=1 \
    -vnc 0.0.0.0:1,websocket=5701 \
    -k en-us \
    -device qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pci.0,addr=0x2 \
    -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 \
    -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
    -msg timestamp=on
    2019-10-24 10:09:56.697+0000: Domain id=13 is tainted: high-privileges
    2019-10-24 10:09:56.697+0000: Domain id=13 is tainted: host-cpu
    char device redirected to /dev/pts/1 (label charserial0)
    2019-10-24T10:18:01.221797Z qemu-system-x86_64: warning: guest updated active QH
    2019-10-24T10:22:00.463133Z qemu-system-x86_64: warning: guest updated active QH
    2019-10-24T10:23:17.937838Z qemu-system-x86_64: warning: guest updated active QH
    2019-10-24T10:41:25.024489Z qemu-system-x86_64: warning: guest updated active QH
    2019-10-24T10:43:24.756554Z qemu-system-x86_64: warning: guest updated active QH
    2019-10-24T10:48:45.270993Z qemu-system-x86_64: warning: guest updated active QH
    2019-10-24T10:49:44.660416Z qemu-system-x86_64: warning: guest updated active QH
    2019-10-24T10:50:05.738320Z qemu-system-x86_64: warning: guest updated active QH
    2019-10-24T10:50:26.487107Z qemu-system-x86_64: warning: guest updated active QH
    2019-10-24T11:00:09.811881Z qemu-system-x86_64: warning: guest updated active QH

     

  3. Yeah, it's odd - after I renamed them all (and they all work) I just tried again to manually add a date like (23/10/2019) to the dir name, and I was able to do this (I'm on a Mac).

     

     Looking at the share, it displays as expected, but looking at what was created via the CLI, it no longer has the / char, but : instead.

     

     It's many years since I created some of these directories, and I'm sure it was on a much older version of Mac OS - no idea if at the time that just created the / char, or whether it was something to do with a different version of SMB, or whether it used AFP.

     

     Either way, it's all good - the rename fixed it, and if I try to enter /, it gets replaced with :.

     

     

     

  4. I have a number of directories with data in them, and I can't browse them via SMB. I can browse them via the CLI.

     

     I think the problem is that the directory names have a / in them (I put a date in the directory name of some of these).

     

     Loopmasters\ Laidback\ House\ (2642013)/

     The example above is what the directory name looks like from the CLI; the characters that render as square blocks / question marks there show up as / in an SMB browser.

     

     What's the best way to get around this? Is there a global way I can do a find in the share for any directory with a / in the name and replace it with a hyphen or similar - and should that fix it?
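     A minimal sketch of such a bulk rename, assuming (as noted in the follow-up post above) that the offending character actually appears as : on the server's CLI. The share path is a placeholder - substitute your own:

```shell
# Dry run: prints the mv commands it would run; delete 'echo' once
# the output looks right. /mnt/user/music is a hypothetical share path.
find /mnt/user/music -depth -type d -name '*:*' | while IFS= read -r dir; do
  parent=$(dirname "$dir")
  base=$(basename "$dir")
  # replace ':' with '-' in the directory's own name only
  echo mv -- "$dir" "$parent/$(printf '%s' "$base" | tr ':' '-')"
done
```

     Using -depth makes find process the deepest directories first, so renaming a child never invalidates a parent path that is still queued.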

  5. Ah, perfect, thanks.

     

     So:

     

    <devices>
        <emulator>/usr/local/sbin/qemu</emulator>
        <disk type='file' device='disk'>
          <driver name='qemu' type='vmdk' cache='writeback'/>
          <source file='/mnt/cache/vms/librenms/librenms.vmdk'/>
          <backingStore/>
          <target dev='hdc' bus='sata'/>
          <boot order='1'/>
          <alias name='sata0-0-2'/>

     

    So do I need to change my type to something other than vmdk? Or leave that, just rename the file to have a .img extension, and change the same in the source file path above?

  6. Does anyone have a copy script that will work like this...

     

     Existing share on an unassigned drive.

     Want to back this up to the array daily (so schedule a user script).

     Would like it to look at the initial backup that was made, and then just append to it to keep it up to date.

     

     I have all my MP3s on an SSD for fast reading from my Mac in iTunes, plus Sonos around the house, but would like them on the array too in case the SSD dies.

  7. Nah, they hardly use any resource - I got a Roku hooked up to try and set Emby up on it, and it started to transcode due to the audio, but the CPU hardly broke a sweat. The rest of my media client devices are Raspberry Pis, so they don't need to transcode.

     

     Should be fine - there are also sooo many fan positions in this case that if you did find problems, you can chuck some more fans in it.

  8. :) Cheers

     

     I run it as a NAS, my Mac backs up to it using Time Machine, and then I have 2 VMs: one Linux with LibreNMS running (a network monitor), and another Windows 10 VM in case I need to do anything Windows-based (certain MP3 tagging apps don't run on Mac).

     

     I'm planning on more VMs, hence the extra RAM! One for Pi-hole (currently have this running on a Raspberry Pi, wanted to consolidate).

     

     Docker-wise I run:

     

     SABnzbd

     Radarr

     Sonarr

     Emby Media Server (the Plex alternative!)

     UniFi for wireless and router controller

     TFTP server for flashing network kit

     Krusader for file management

     

     That's it for now, but my old server was so slow it was lagging, so I'll get to play with more stuff now too, I think :)

     

     What put you off doing it in an mATX case then - you mention the amount of dockers you run?

  9. Yeah, it took me ages to decide on the case for the same reasons :)

     

     Thermals-wise it's fine; there is so much space in the motherboard area once built that the CPU and board are fine. I moved the fans around a bit: I put 2 in the hard disk area, one at the bottom to bring cold air in, kept the existing exit at the top, then just left the CPU side with a single exit fan at the rear.

     

     Everything is ticking over really nicely.

     

    IMG_3319.jpeg

    IMG_3321.jpeg

  10. I've just done a similar thing - gone from an HP MicroServer (N40L) - and I went with the following spec:

      

      Ryzen 2700 (non-X)

      ASRock B450M Pro4 motherboard (MicroATX)

      32GB Crucial DDR4-2666

      500GB NVMe (cache)

      2GB Nvidia 710 (just really so I could build the thing)

      Fractal Design Node 804 case

      

      It's running great, really pleased with it.

      

      I also bought a Dell PERC H310 RAID card from eBay for cheap and flashed it with IT firmware.

      

      I have 4 WD Reds from my old system running on the Dell card, and an SSD from my old system connected to the onboard SATA.

  11. Thanks - looking at my backup, I have a share.cfg file, which contains:

     

    shareMoverSchedule="40 3 * * *"
    shareMoverLogging="no"

     

    I don't have config related just to the scheduler - is each 'schedule type' in its own separate config file, so would TRIM be in the disk config, etc.?

     

    If so, I don't see anything obvious that relates to TRIM or parity schedules.

     

    I've just done an index search of the files, looking for "parity", and in the dynamic.cfg I think I see the parity schedule -

     

    [parity]
    mode="3"
    day="0"
    hour="0 1"
    write="NOCORRECT"
    dotm="28-31"
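    For reference, the shareMoverSchedule value above is a standard five-field cron expression:

```shell
# shareMoverSchedule="40 3 * * *"
#
#  40  3  *  *  *
#  |   |  |  |  └─ day of week  (* = any)
#  |   |  |  └──── month        (* = any)
#  |   |  └─────── day of month (* = any)
#  |   └────────── hour   (3)
#  └────────────── minute (40)
#
# i.e. the mover runs every day at 03:40.
```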

  12. Recently moved to new hardware; looked at the system log today and saw this...

     

     

    Oct 20 22:52:27 Nas kernel: traps: ffdetect[3374] general protection ip:4042af sp:7ffeee428700 error:0 in ffdetect[403000+c000]
    Oct 20 22:52:27 Nas kernel: traps: ffdetect[3375] general protection ip:4042af sp:7ffeaae18af0 error:0 in ffdetect[403000+c000]

    It occurs when I start my Emby docker. If I don't start that, I don't get it.

     

    Nothing obvious in the actual Emby docker log.

     

    Should I be worried?
     

     

     
