
Posts posted by Divi

  1. Hey, I'm setting up a new Unraid server using an old Unraid Plus USB stick I used in the past. I read on the forums that I can wipe the old config by deleting everything in the config folder except Plus.key, so I did that. The CLI says Unraid is up, and it got an IP from my DHCP server.

    [screenshot]

     

    I can ping it:

    [screenshot]

     

    But I can't reach the GUI:

    [screenshot]

     

    I tried giving it a static IP in the router and rebooted, but with the new IP (192.168.5.14, the same one the server used before without problems) I get the exact same problem: ping is OK but the GUI is not.

     

    edit: uploaded diagnostics

    tower-diagnostics-20220715-0510.zip
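In case it helps the next person, here's roughly how I'd separate "host is up but the webGUI is down" from a browser problem. The check_gui helper is my own sketch, and the Unraid-specific commands in the comments are from memory, so they may differ by version:

```shell
#!/bin/bash
# Sketch: probe the webGUI port from another machine.
# check_gui is a hypothetical helper; Unraid paths in comments are from memory.
check_gui() {
  local host="$1" port="${2:-80}" code
  # print only the HTTP status code; 000 means nothing answered HTTP at all
  code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 "http://$host:$port/") || true
  echo "${code:-000}"
}

# e.g. check_gui 192.168.5.14   -> 200/302 = GUI alive, 000 = nothing listening
# on the server console itself I'd also try (paths from memory):
#   netstat -tlnp | grep ':80 '    # is anything listening on the GUI port?
#   /etc/rc.d/rc.nginx restart     # restart the webGUI webserver
```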

     

  2.  

    Hello friends! Here's my main rig, powered by Unraid. I'm running a Windows 10 gaming VM on it as my main PC. It handles many other tasks as well; you can find out what in the specs below. :) I used to run Unraid in my garage strictly as a NAS/server, but decided to also use it as my gaming PC since I don't have much time to game nowadays. It just made sense to use one set of hardware for everything.

     

    OS at time of building: Unraid 6.8.3 Basic
    CPU: AMD Ryzen 7 1700 @ 3.9GHz boost OC
    Motherboard: Gigabyte Aorus X470 Ultra Gaming
    RAM: 24GB Kingston HyperX Fury (2x4GB + 2x8GB) @ 1866MHz

    GPU 1: Nvidia GT520 (for host)

    GPU 2: Nvidia GTX970 (Passthru for VM)
    Case: BeQuiet Silent Base 801
    Power Supply: Seasonic Focus+ Gold 550w
    Fans: 3x BeQuiet stock 140mm, 2x Corona 120mm RGB, 1x 120mm Thermalright on Macho120 CPU cooler

    Parity Drive: 4TB Seagate Ironwolf
    Data Drives: 4TB Seagate Ironwolf (1pc)
    Cache Drive: 500GB Samsung 860 Evo (automatically backed up weekly to the array)

    Unassigned Drive 1: 500GB Samsung 860 Evo (game installation storage for the VM)

    Unassigned Drive 2: 1TB WD Green (storage for security camera footage)
    Total Drive Capacity: 10TB, 4TB of it as NAS, and it's plenty for me

    Primary Use: NAS, Gaming VM, AdBlocker, Home automation, security camera NVR, unifi controller, Media server
    Likes: It's fairly compact, stays cool and doesn't look like anything other than a regular gaming PC.
    Dislikes: Fans could still be quieter
    Add Ons Used: Plex Media Server, MQTT broker, UniFi Controller, Shinobi Pro, HassOS VM, Windows 10 VM, many plugins, scripts etc.
    Future Plans: Nextcloud is coming, and maybe a move from Shinobi to ZoneMinder, but we'll see. Hardware upgrades are endless, but after the fans I'd like to swap the gaming GPU for a new Ampere card, maybe an RTX 3070 or something. The CPU I will eventually update to a 3900X or a 4000-series equivalent. The memory needs to be swapped to a 2x16GB single-rank kit so I can run it faster. I guess the PSU also needs a bit more power for Ampere and a 12-core. After these upgrades it should be a long-term rig, good for multiple years... The motherboard I am super happy with; the IOMMU groups are perfect for my use case.

     

    [photo]

    [photo]

     

     

     

    Let me know what you think! :) 

     

  3. - I re-enabled C-states in the BIOS (the CPU is handling idle P-states normally)

    - Dropped memory speed to 1866MHz as suggested

    - Installed Python, ran the ZenStates.py script with 1700X settings and automated it with CA User Scripts (the script sets boost P-state P0 as commanded, resulting in a mild overclock)

     

    Now my Ryzen 7 1700 idles as it should, boosts to 3.9GHz like a 1700X, and seems completely stable, both in games and idling overnight. Note that in the pictures not all cores are allocated to the Windows VM I'm using to load the processor, so threads 0, 7, 8 and 15 are not boosting; they are used by the host and another VM. Perfect!
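A small aside on the numbers, since the screenshots don't show the math: ZenStates derives the P-state core clock as FID / DID × 200 MHz, so the 3.9GHz target corresponds to FID 0x9C (156) with DID 0x8. Everything below is an illustrative sketch; the zenstates.py flags in the comment are from the ZenStates-Linux readme as I remember it, so treat the exact invocation as an assumption:

```shell
#!/bin/bash
# P-state core frequency as ZenStates computes it: FID / DID * 200 MHz.
pstate_mhz() { echo $(( $1 * 200 / $2 )); }

pstate_mhz 156 8   # FID 0x9C (156), DID 0x8 -> 3900, i.e. the 3.9GHz boost
# the User Scripts entry would then look something like (flags from memory):
#   python /boot/config/zenstates.py -p 0 --fid 0x9C --did 0x8
```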

    [screenshot]

    [screenshot]

     

  4. ^ Thank you!

     

    Tried User Scripts and it seems like an awesome way to automate stuff around Unraid. It's really easy to make a new script: paste in your command and set a schedule for when it should run.

     

    I made these four separate scripts for the four shares on my cache drive, so I can schedule them differently if I want:

    [screenshot]

     

    Here's what my appdata backup script looks like:

    [screenshot]

  5. I took some time yesterday to try the rsync/crontab method.

    I easily managed to make a script with rsync that creates a copy of the cache on the array, but I haven't yet managed to automate the script with crontab. I'll repost when I get the automation working. rsync and crontab both come preinstalled on Unraid, so it's just a matter of writing the commands in the correct files to be run daily/weekly/whatever.

     

    My rsync command, which works fine when run in a terminal:

    rsync -r -av --delete /mnt/cache/domains/ /mnt/user/vault/Backups/Unraid/Cache/domains

     

    -r = recursive (redundant here, since -a already implies it)

    -a = archive mode: preserves permissions, timestamps, symlinks etc. while synchronizing

    -v = verbose

    --delete = delete files from the destination that are no longer found in the source (good for my use case, not for most backups)

    /mnt/cache/domains/ = source

    /mnt/user/vault/Backups/Unraid/Cache/domains = destination
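For the automation part I haven't cracked yet, my current understanding is that Unraid merges any *.cron file under /boot/config/plugins/dynamix/ into root's crontab when update_cron runs; treat the path and that behaviour as assumptions until I've verified them. The sketch writes to a temp dir by default so it can run anywhere:

```shell
#!/bin/bash
# Sketch: schedule the rsync via a dynamix cron snippet.
# Assumption: *.cron files in /boot/config/plugins/dynamix/ are merged
# into root's crontab by update_cron. CRON_DIR is overridable for testing.
CRON_DIR="${CRON_DIR:-$(mktemp -d)}"   # on Unraid: /boot/config/plugins/dynamix

cat > "$CRON_DIR/cache-backup.cron" <<'EOF'
# nightly 04:00 copy of the domains share to the array
0 4 * * * rsync -a --delete /mnt/cache/domains/ /mnt/user/vault/Backups/Unraid/Cache/domains &> /dev/null
EOF

# /usr/local/sbin/update_cron   # uncomment on a real Unraid box to load it
cat "$CRON_DIR/cache-backup.cron"
```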

  6. I would also like to create a daily backup of the shares on the cache disk (appdata, domains, isos, system) and store the backup on the array. That would be enough safety for me to run a single cache disk instead of a two-disk pool.

     

    I think setting up an rsync script with a cron job would be one option, but I'm not sure how to do it.

    Another possible way might be installing the DirSyncPro Docker, but I'm not familiar with that either.

    Both options sound really complicated for such a simple job.

     

    I'm sure this is a trivial job for more experienced Linux/Unraid users, so please share: what is the best way?

  7. Thanks for the link! It's possible that my previous problems were partly caused by memory speed, although the freeze would always happen when the machine was idle; usually it was frozen in the morning. Disabling C-states was the fix, and this thing has been rock solid since. Still, running 1st gen Ryzen at the moment with all 4 sticks, I should be using 1866MHz in the case of 2R sticks according to the spreadsheets in the link. Currently I'm running 2666MHz. I'm not sure if my Kingstons are single or dual rank, but I assume dual rank because they're the cheapest of the cheap, and already quite old.

     

    I've been eyeballing those 3900X and XT CPUs now, as they would allow me to run higher memory speeds and provide more cores for virtualization: just plain more power and cores to play with. I'm just not sure how the 8+3 phase power delivery of the Aorus X470 Ultra Gaming would stand up to feeding a 24/7 rig. Then again, most of the time the system is idling at low power consumption anyway. The CPU support list even says the 3950X is supported, but I'm still not sure.

     

    The display and GPU are waiting for an upgrade first though, and I need another SSD to install games on; I don't want to fill my cache drives too much with the VM. Maybe I'll drop the parity from the cache pool and just back up all the shares from cache to the array every now and then.

  8. Hello world!

     

    I've been using Unraid on Ryzen for quite some time now as a basic NAS unit with a few Dockers and a Hass.io VM. Just yesterday I decided to ditch my main PC, brought the server inside and configured a "gaming VM" to use as my main desktop. So far so good: I managed to pass through a GPU and a USB controller, and the input devices, DAC and GPU drivers are in and working.

    When first building the server about a year ago, I remember having stability issues that were solved by turning off some C-states and other Ryzen power-saving features. Just a general question: have there been any improvements in how Unraid handles Ryzen power plans? Is it possible to turn some of those back on? Currently my CPU idles at base clock on all cores 24/7, which was totally fine in the server rack, but now I would like it to really go idle if possible.

     

    AMD Ryzen 7 1700

    Gigabyte Aorus X470 Ultra Gaming

    24GB Kingston HyperX, regular basic non-ECC

    Nvidia GT520 for host

    Nvidia GTX970 for VM

    2x Samsung 860 Evo 500GB cache

    2x Seagate Ironwolf 4TB array

     

    [photo]

  9. Like I wrote, at first I had the problem with the power management of the Zen architecture: it crashed really quickly after going idle. That seemed to be solved by disabling Global C-State Control. This was over a month ago, and the server was 100% stable for a month before these two recent crashes. I didn't find "Power Supply Idle Control" or anything similar in the BIOS, other than Global C-State Control.

    [photo]

     

    I have now dropped the memory speed from 2400 to 1866, but again, the hardware was stable for a month, so I doubt this has anything to do with the topic.

    [photo]

     

    The BIOS version is also fairly recent: 5.80 from 2019-07-03. The newest is 6.30 from 2020-02-04. And again, it was stable for a month... nothing changed there recently.

    [photo]

     

  10. Unraid version: 6.8.2 official

     

    Installed plugins: Preclear, CA Backup/Restore Appdata, Community Applications, Dynamix S3 Sleep (not enabled though), Dynamix System Information, Dynamix System Statistics, Dynamix System Temperature, Fix Common Problems, Nerd Tools, Network UPS Tools (NUT), Tips and Tweaks, Unassigned Devices, Unassigned Devices Plus

    Dockers: MQTT, pihole-template, plex, shinobi_pro, UniFi

    VMs: Generic Linux running HassOS 3.11 (cores 7/15, and 4GB RAM assigned)

     

    Hardware:

    MoBo: AsRock Fatal1ty AB350 Gaming K4 (Global C-States Control = disabled)

    CPU: Ryzen 7 1700 @ stock, cooled by AMD Wraith Prizm

    RAM: 4x4GB of Chinese 2400MHz DDR4

    GPU: Asus GT520 Silent

    Disks: 2x4TB Seagate Ironwolf, no cache yet

    PSU: Corsair VS650

    Networking: Intel i350-T4 installed but not in use, as I couldn't make LACP work. Running a single gigabit connection from the integrated Realtek NIC with 4 VLANs.

     

    Problem:

    The server sometimes freezes; it has happened twice now in a couple of days, both times while idling. When it happens, networking is lost: it doesn't answer pings and doesn't allow SSH. I had a similar problem when it was first installed, but after disabling Global C-State Control it was stable until now.

    The last thing I installed before the problems was the CA Backup/Restore plugin, which I configured to back up to an external SMB share mounted via the Unassigned Devices plugin; the crash has never happened while that was scheduled to run. I have also messed around with Home Assistant Dockers and an MQTT Docker, and lately switched to Home Assistant running in a VM. The first crash (between February 17 and 18) happened before the switch from the Docker to the HassOS VM (February 19). I have the syslog from after the first crash up to the second crash, which I noticed last night, February 20, around 23:00. 192.168.5.69 is my main PC, which went offline at 17:27 as seen in the syslog.

    I'm curious what the mover was trying to do on Feb 20 03:40:01, as the mover should be disabled because no cache is installed.

    Oh, and when it has crashed, a hard power-off is the only way out. The power button doesn't do anything, even if held down for a long time; I need to switch the PSU off or pull the power cord from the UPS.

     

    Syslog between crashes:

    Feb 18 21:18:40 Tower rsyslogd: [origin software="rsyslogd" swVersion="8.1908.0" x-pid="8449" x-info="https://www.rsyslog.com"] start
    Feb 18 21:27:50 Tower webGUI: Successful login user root from 192.168.5.69
    Feb 18 21:28:08 Tower kernel: mdcmd (44): set md_num_stripes 1280
    Feb 18 21:28:08 Tower kernel: mdcmd (45): set md_queue_limit 80
    Feb 18 21:28:08 Tower kernel: mdcmd (46): set md_sync_limit 5
    Feb 18 21:28:08 Tower kernel: mdcmd (47): set md_write_method
    Feb 18 21:28:08 Tower kernel: mdcmd (48): set spinup_group 0 0
    Feb 18 21:28:08 Tower kernel: mdcmd (49): set spinup_group 1 0
    Feb 18 21:28:08 Tower kernel: mdcmd (50): start STOPPED
    Feb 18 21:28:08 Tower kernel: unraid: allocating 15750K for 1280 stripes (3 disks)
    Feb 18 21:28:08 Tower kernel: md1: running, size: 3907018532 blocks
    Feb 18 21:28:08 Tower emhttpd: shcmd (796): udevadm settle
    Feb 18 21:28:08 Tower root: Starting diskload
    Feb 18 21:28:09 Tower tips.and.tweaks: Tweaks Applied
    Feb 18 21:28:09 Tower emhttpd: Mounting disks...
    Feb 18 21:28:09 Tower emhttpd: shcmd (798): /sbin/btrfs device scan
    Feb 18 21:28:09 Tower root: Scanning for Btrfs filesystems
    Feb 18 21:28:09 Tower emhttpd: shcmd (799): mkdir -p /mnt/disk1
    Feb 18 21:28:09 Tower emhttpd: shcmd (800): mount -t xfs -o noatime,nodiratime /dev/md1 /mnt/disk1
    Feb 18 21:28:09 Tower kernel: XFS (md1): Mounting V5 Filesystem
    Feb 18 21:28:10 Tower kernel: XFS (md1): Ending clean mount
    Feb 18 21:28:10 Tower emhttpd: shcmd (801): xfs_growfs /mnt/disk1
    Feb 18 21:28:10 Tower root: meta-data=/dev/md1               isize=512    agcount=4, agsize=244188659 blks
    Feb 18 21:28:10 Tower root:          =                       sectsz=512   attr=2, projid32bit=1
    Feb 18 21:28:10 Tower root:          =                       crc=1        finobt=1, sparse=1, rmapbt=0
    Feb 18 21:28:10 Tower root:          =                       reflink=1
    Feb 18 21:28:10 Tower root: data     =                       bsize=4096   blocks=976754633, imaxpct=5
    Feb 18 21:28:10 Tower root:          =                       sunit=0      swidth=0 blks
    Feb 18 21:28:10 Tower root: naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
    Feb 18 21:28:10 Tower root: log      =internal log           bsize=4096   blocks=476930, version=2
    Feb 18 21:28:10 Tower root:          =                       sectsz=512   sunit=0 blks, lazy-count=1
    Feb 18 21:28:10 Tower root: realtime =none                   extsz=4096   blocks=0, rtextents=0
    Feb 18 21:28:10 Tower emhttpd: shcmd (802): sync
    Feb 18 21:28:10 Tower emhttpd: shcmd (803): mkdir /mnt/user
    Feb 18 21:28:10 Tower emhttpd: shcmd (804): /usr/local/sbin/shfs /mnt/user -disks 2 -o noatime,allow_other -o remember=0  |& logger
    Feb 18 21:28:10 Tower shfs: use_ino: 1
    Feb 18 21:28:10 Tower shfs: direct_io: 0
    Feb 18 21:28:10 Tower emhttpd: shcmd (806): /usr/local/sbin/update_cron
    Feb 18 21:28:11 Tower root: Delaying execution of fix common problems scan for 10 minutes
    Feb 18 21:28:11 Tower unassigned.devices: Mounting 'Auto Mount' Devices...
    Feb 18 21:28:11 Tower emhttpd: Starting services...
    Feb 18 21:28:11 Tower emhttpd: shcmd (808): /etc/rc.d/rc.samba restart
    Feb 18 21:28:12 Tower rsyslogd: [origin software="rsyslogd" swVersion="8.1908.0" x-pid="18333" x-info="https://www.rsyslog.com"] start
    Feb 18 21:28:13 Tower root: Starting Samba:  /usr/sbin/smbd -D
    Feb 18 21:28:13 Tower root:                  /usr/sbin/nmbd -D
    Feb 18 21:28:13 Tower root:                  /usr/sbin/wsdd 
    Feb 18 21:28:13 Tower root:                  /usr/sbin/winbindd -D
    Feb 18 21:28:13 Tower emhttpd: shcmd (822): /usr/local/sbin/mount_image '/mnt/user/system/docker/docker.img' /var/lib/docker 25
    Feb 18 21:28:13 Tower kernel: BTRFS info (device loop2): disk space caching is enabled
    Feb 18 21:28:13 Tower kernel: BTRFS info (device loop2): has skinny extents
    Feb 18 21:28:14 Tower kernel: BTRFS info (device loop2): new size for /dev/loop2 is 26843545600
    Feb 18 21:28:14 Tower root: Resize '/var/lib/docker' of 'max'
    Feb 18 21:28:14 Tower emhttpd: shcmd (824): /etc/rc.d/rc.docker start
    Feb 18 21:28:14 Tower root: starting dockerd ...
    Feb 18 21:28:17 Tower kernel: IPv6: ADDRCONF(NETDEV_UP): docker0: link is not ready
    Feb 18 21:28:19 Tower root: Fix Common Problems Version 2020.02.06
    Feb 18 21:28:20 Tower rc.docker: 150b3faf200b8ec6ced31e944297f7dbaede48e9b87890f53c2cbc95e5b693c6
    Feb 18 21:28:22 Tower rc.docker: 86a33cc6062a533bac58566ec1bf21267482041c7a7a69c7aa32079555d54d1f
    Feb 18 21:28:22 Tower rc.docker: bf0a0b72cc54d961ea6810f8e537b3b131412f844f877d3e71845323b4ffe935
    Feb 18 21:28:24 Tower rc.docker: 5a55816f0c8ca1900d9ea00debcfb48bfa5fc540ed269db8d1577d15088d493d
    Feb 18 21:28:25 Tower rc.docker: 6b4a99c56c928fa509a1fe4df5800b16cd74b3e6a37bcab4a3f3006a5036555f
    Feb 18 21:28:26 Tower kernel: mdcmd (51): check correct
    Feb 18 21:28:26 Tower kernel: md: recovery thread: check P ...
    Feb 18 21:28:28 Tower unassigned.devices: Mounting 'Auto Mount' Remote Shares...
    Feb 18 21:28:28 Tower unassigned.devices: Mount SMB share '//192.168.5.69/unraidbackup' using SMB3 protocol.
    Feb 18 21:28:28 Tower unassigned.devices: Mount SMB command: /sbin/mount -t cifs -o rw,nounix,iocharset=utf8,file_mode=0777,dir_mode=0777,uid=99,gid=100,vers=3.0,credentials='/tmp/unassigned.devices/credentials' '//192.168.5.69/unraidbackup' '/mnt/disks/192.168.5.69_unraidbackup'
    Feb 18 21:28:28 Tower unassigned.devices: Successfully mounted '//192.168.5.69/unraidbackup' on '/mnt/disks/192.168.5.69_unraidbackup'.
    Feb 18 21:28:28 Tower unassigned.devices: Adding SMB share '192.168.5.69_unraidbackup'.'
    Feb 18 21:28:30 Tower root: Fix Common Problems: Warning: Syslog mirrored to flash
    Feb 18 21:28:40 Tower kernel: eth0: renamed from vethcc69053
    Feb 18 21:28:40 Tower kernel: device br0.20 entered promiscuous mode
    Feb 18 21:29:07 Tower rc.docker: home-assistant: started succesfully!
    Feb 18 21:29:56 Tower kernel: eth0: renamed from veth774cc56
    Feb 18 21:30:42 Tower rc.docker: MQTT: started succesfully!
    Feb 18 21:31:37 Tower kernel: eth0: renamed from veth45dcefd
    Feb 18 21:31:37 Tower kernel: device br0.5 entered promiscuous mode
    Feb 18 21:32:05 Tower rc.docker: pihole-template: started succesfully!
    Feb 18 21:35:43 Tower kernel: eth0: renamed from veth3735a1e
    Feb 18 21:36:42 Tower rc.docker: plex: started succesfully!
    Feb 18 21:38:02 Tower root: Fix Common Problems Version 2020.02.06
    Feb 18 21:39:13 Tower kernel: eth0: renamed from veth2a7b730
    Feb 18 21:39:13 Tower kernel: device br0.15 entered promiscuous mode
    Feb 18 21:40:45 Tower rc.docker: shinobi_pro: started succesfully!
    Feb 18 21:40:52 Tower root: Fix Common Problems: Warning: Template URL for docker application  is missing.
    Feb 18 21:40:53 Tower root: Fix Common Problems: Warning: Syslog mirrored to flash
    Feb 18 21:42:46 Tower kernel: eth0: renamed from vethd578a9a
    Feb 18 21:42:46 Tower kernel: device br0 entered promiscuous mode
    Feb 18 21:44:20 Tower rc.docker: UniFi: started succesfully!
    Feb 18 23:01:56 Tower kernel: vethcc69053: renamed from eth0
    Feb 18 23:02:27 Tower kernel: eth0: renamed from vethf20d86d
    Feb 18 23:29:08 Tower kernel: CIFS VFS: Server 192.168.5.69 has not responded in 180 seconds. Reconnecting...
    Feb 19 03:40:01 Tower crond[2144]: exit status 3 from user root /usr/local/sbin/mover &> /dev/null
    Feb 19 05:12:34 Tower kernel: md: sync done. time=27848sec
    Feb 19 05:12:34 Tower kernel: md: recovery thread: exit status: 0
    Feb 19 19:32:27 Tower kernel: CIFS VFS: Server 192.168.5.69 has not responded in 180 seconds. Reconnecting...
    Feb 19 20:13:04 Tower emhttpd: Starting services...
    Feb 19 20:13:04 Tower emhttpd: shcmd (3557): /etc/rc.d/rc.samba restart
    Feb 19 20:13:07 Tower root: Starting Samba:  /usr/sbin/smbd -D
    Feb 19 20:13:07 Tower root:                  /usr/sbin/nmbd -D
    Feb 19 20:13:07 Tower root:                  /usr/sbin/wsdd 
    Feb 19 20:13:07 Tower root:                  /usr/sbin/winbindd -D
    Feb 19 20:13:07 Tower emhttpd: shcmd (3566): smbcontrol smbd close-share 'domains'
    Feb 19 20:14:22 Tower emhttpd: Starting services...
    Feb 19 20:14:22 Tower emhttpd: shcmd (3571): /etc/rc.d/rc.samba restart
    Feb 19 20:14:24 Tower root: Starting Samba:  /usr/sbin/smbd -D
    Feb 19 20:14:25 Tower root:                  /usr/sbin/nmbd -D
    Feb 19 20:14:25 Tower root:                  /usr/sbin/wsdd 
    Feb 19 20:14:25 Tower root:                  /usr/sbin/winbindd -D
    Feb 19 20:14:25 Tower emhttpd: shcmd (3580): smbcontrol smbd close-share 'domains'
    Feb 19 20:29:36 Tower login[15229]: ROOT LOGIN  on '/dev/pts/0'
    Feb 19 20:31:50 Tower sudo:     root : TTY=pts/0 ; PWD=/mnt/user/domains/HassOS ; USER=root ; COMMAND=/usr/bin/qemu-img resize hassos_ova-3.11.qcow2 +20G
    Feb 19 20:32:18 Tower ool www[18096]: /usr/local/emhttp/plugins/dynamix/scripts/emcmd 'cmdStatus=Apply'
    Feb 19 20:32:18 Tower emhttpd: Starting services...
    Feb 19 20:32:18 Tower emhttpd: shcmd (3618): /etc/rc.d/rc.samba restart
    Feb 19 20:32:20 Tower root: Starting Samba:  /usr/sbin/smbd -D
    Feb 19 20:32:20 Tower root:                  /usr/sbin/nmbd -D
    Feb 19 20:32:20 Tower root:                  /usr/sbin/wsdd 
    Feb 19 20:32:20 Tower root:                  /usr/sbin/winbindd -D
    Feb 19 20:32:20 Tower emhttpd: shcmd (3637): /usr/local/sbin/mount_image '/mnt/user/system/libvirt/libvirt.img' /etc/libvirt 1
    Feb 19 20:32:20 Tower kernel: BTRFS: device fsid 289b59d2-1e32-43f4-88f5-fa541d3fcb71 devid 1 transid 18 /dev/loop3
    Feb 19 20:32:21 Tower kernel: BTRFS info (device loop3): disk space caching is enabled
    Feb 19 20:32:21 Tower kernel: BTRFS info (device loop3): has skinny extents
    Feb 19 20:32:21 Tower root: Resize '/etc/libvirt' of 'max'
    Feb 19 20:32:21 Tower kernel: BTRFS info (device loop3): new size for /dev/loop3 is 1073741824
    Feb 19 20:32:21 Tower emhttpd: shcmd (3639): /etc/rc.d/rc.libvirt start
    Feb 19 20:32:21 Tower root: Starting virtlockd...
    Feb 19 20:32:21 Tower root: Starting virtlogd...
    Feb 19 20:32:21 Tower root: Starting libvirtd...
    Feb 19 20:32:21 Tower kernel: tun: Universal TUN/TAP device driver, 1.6
    Feb 19 20:32:21 Tower kernel: virbr0: port 1(virbr0-nic) entered blocking state
    Feb 19 20:32:21 Tower kernel: virbr0: port 1(virbr0-nic) entered disabled state
    Feb 19 20:32:21 Tower kernel: device virbr0-nic entered promiscuous mode
    Feb 19 20:32:21 Tower kernel: virbr0: port 1(virbr0-nic) entered blocking state
    Feb 19 20:32:21 Tower kernel: virbr0: port 1(virbr0-nic) entered listening state
    Feb 19 20:32:21 Tower dnsmasq[25551]: started, version 2.80 cachesize 150
    Feb 19 20:32:21 Tower dnsmasq[25551]: compile time options: IPv6 GNU-getopt no-DBus i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth no-DNSSEC loop-detect inotify dumpfile
    Feb 19 20:32:21 Tower dnsmasq-dhcp[25551]: DHCP, IP range 192.168.122.2 -- 192.168.122.254, lease time 1h
    Feb 19 20:32:21 Tower dnsmasq-dhcp[25551]: DHCP, sockets bound exclusively to interface virbr0
    Feb 19 20:32:21 Tower dnsmasq[25551]: reading /etc/resolv.conf
    Feb 19 20:32:21 Tower dnsmasq[25551]: using nameserver 192.168.1.1#53
    Feb 19 20:32:21 Tower dnsmasq[25551]: read /etc/hosts - 2 addresses
    Feb 19 20:32:21 Tower dnsmasq[25551]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses
    Feb 19 20:32:21 Tower dnsmasq-dhcp[25551]: read /var/lib/libvirt/dnsmasq/default.hostsfile
    Feb 19 20:32:21 Tower kernel: virbr0: port 1(virbr0-nic) entered disabled state
    Feb 19 20:33:42 Tower kernel: br0: port 2(vnet0) entered blocking state
    Feb 19 20:33:42 Tower kernel: br0: port 2(vnet0) entered disabled state
    Feb 19 20:33:42 Tower kernel: device vnet0 entered promiscuous mode
    Feb 19 20:33:42 Tower kernel: br0: port 2(vnet0) entered blocking state
    Feb 19 20:33:42 Tower kernel: br0: port 2(vnet0) entered forwarding state
    Feb 19 20:37:48 Tower kernel: br0: port 2(vnet0) entered disabled state
    Feb 19 20:37:48 Tower kernel: device vnet0 left promiscuous mode
    Feb 19 20:37:48 Tower kernel: br0: port 2(vnet0) entered disabled state
    Feb 19 20:38:40 Tower kernel: br0.20: port 2(vnet0) entered blocking state
    Feb 19 20:38:40 Tower kernel: br0.20: port 2(vnet0) entered disabled state
    Feb 19 20:38:40 Tower kernel: device vnet0 entered promiscuous mode
    Feb 19 20:38:40 Tower kernel: br0.20: port 2(vnet0) entered blocking state
    Feb 19 20:38:40 Tower kernel: br0.20: port 2(vnet0) entered forwarding state
    Feb 19 21:04:46 Tower kernel: vethf20d86d: renamed from eth0
    Feb 19 21:05:00 Tower kernel: device br0.20 left promiscuous mode
    Feb 19 21:05:00 Tower kernel: veth774cc56: renamed from eth0
    Feb 19 21:05:13 Tower kernel: eth0: renamed from vethc42cdd2
    Feb 19 21:05:13 Tower kernel: device br0.20 entered promiscuous mode
    Feb 19 21:24:27 Tower kernel: br0.20: port 2(vnet0) entered disabled state
    Feb 19 21:24:27 Tower kernel: device vnet0 left promiscuous mode
    Feb 19 21:24:27 Tower kernel: br0.20: port 2(vnet0) entered disabled state
    Feb 19 21:25:30 Tower kernel: br0.20: port 2(vnet0) entered blocking state
    Feb 19 21:25:30 Tower kernel: br0.20: port 2(vnet0) entered disabled state
    Feb 19 21:25:30 Tower kernel: device vnet0 entered promiscuous mode
    Feb 19 21:25:30 Tower kernel: br0.20: port 2(vnet0) entered blocking state
    Feb 19 21:25:30 Tower kernel: br0.20: port 2(vnet0) entered forwarding state
    Feb 19 21:50:29 Tower webGUI: Successful login user root from 192.168.5.69
    Feb 19 21:50:30 Tower login[13073]: ROOT LOGIN  on '/dev/pts/1'
    Feb 19 21:51:48 Tower CA Backup/Restore: #######################################
    Feb 19 21:51:48 Tower CA Backup/Restore: Community Applications appData Backup
    Feb 19 21:51:48 Tower CA Backup/Restore: Applications will be unavailable during
    Feb 19 21:51:48 Tower CA Backup/Restore: this process.  They will automatically
    Feb 19 21:51:48 Tower CA Backup/Restore: be restarted upon completion.
    Feb 19 21:51:48 Tower CA Backup/Restore: #######################################
    Feb 19 21:51:48 Tower CA Backup/Restore: Stopping MQTT
    Feb 19 21:51:49 Tower kernel: device br0.20 left promiscuous mode
    Feb 19 21:51:49 Tower kernel: vethc42cdd2: renamed from eth0
    Feb 19 21:51:51 Tower CA Backup/Restore: docker stop -t 60 MQTT
    Feb 19 21:51:51 Tower CA Backup/Restore: Stopping pihole-template
    Feb 19 21:51:55 Tower kernel: veth45dcefd: renamed from eth0
    Feb 19 21:51:57 Tower CA Backup/Restore: docker stop -t 60 pihole-template
    Feb 19 21:51:57 Tower CA Backup/Restore: Stopping plex
    Feb 19 21:52:01 Tower kernel: device br0.5 left promiscuous mode
    Feb 19 21:52:01 Tower kernel: veth3735a1e: renamed from eth0
    Feb 19 21:52:02 Tower CA Backup/Restore: docker stop -t 60 plex
    Feb 19 21:52:02 Tower CA Backup/Restore: Stopping shinobi_pro
    Feb 19 21:52:03 Tower kernel: device br0.15 left promiscuous mode
    Feb 19 21:52:03 Tower kernel: veth2a7b730: renamed from eth0
    Feb 19 21:52:05 Tower CA Backup/Restore: docker stop -t 60 shinobi_pro
    Feb 19 21:52:05 Tower CA Backup/Restore: Stopping UniFi
    Feb 19 21:52:18 Tower kernel: device br0 left promiscuous mode
    Feb 19 21:52:18 Tower kernel: vethd578a9a: renamed from eth0
    Feb 19 21:52:20 Tower CA Backup/Restore: docker stop -t 60 UniFi
    Feb 19 21:52:20 Tower CA Backup/Restore: Backing up USB Flash drive config folder to 
    Feb 19 21:52:20 Tower CA Backup/Restore: Using command: /usr/bin/rsync  -avXHq --delete  --log-file="/var/lib/docker/unraid/ca.backup2.datastore/appdata_backup.log" /boot/ "/mnt/disks/192.168.5.69_unraidbackup/USB/" > /dev/null 2>&1
    Feb 19 21:52:20 Tower CA Backup/Restore: Changing permissions on backup
    Feb 19 21:52:20 Tower CA Backup/Restore: Backing up libvirt.img to /mnt/disks/192.168.5.69_unraidbackup/libvirt/
    Feb 19 21:52:20 Tower CA Backup/Restore: Using Command: /usr/bin/rsync  -avXHq --delete  --log-file="/var/lib/docker/unraid/ca.backup2.datastore/appdata_backup.log" "/mnt/user/system/libvirt/libvirt.img" "/mnt/disks/192.168.5.69_unraidbackup/libvirt/" > /dev/null 2>&1
    Feb 19 21:52:33 Tower CA Backup/Restore: Backing Up appData from /mnt/user/appdata/ to /mnt/disks/192.168.5.69_unraidbackup/Appdata/[email protected]
    Feb 19 21:52:33 Tower CA Backup/Restore: Using command: cd '/mnt/user/appdata/' && /usr/bin/tar -cvaf '/mnt/disks/192.168.5.69_unraidbackup/Appdata/[email protected]/CA_backup.tar' --exclude 'docker.img'  * >> /var/lib/docker/unraid/ca.backup2.datastore/appdata_backup.log 2>&1 & echo $! > /tmp/ca.backup2/tempFiles/backupInProgress
    Feb 19 21:52:38 Tower CA Backup/Restore: Backup Complete
    Feb 19 21:52:38 Tower CA Backup/Restore: Verifying backup
    Feb 19 21:52:38 Tower CA Backup/Restore: Using command: cd '/mnt/user/appdata/' && /usr/bin/tar --diff -C '/mnt/user/appdata/' -af '/mnt/disks/192.168.5.69_unraidbackup/Appdata/[email protected]/CA_backup.tar' > /var/lib/docker/unraid/ca.backup2.datastore/appdata_backup.log & echo $! > /tmp/ca.backup2/tempFiles/verifyInProgress
    Feb 19 21:52:43 Tower kernel: eth0: renamed from veth3ab691e
    Feb 19 21:52:43 Tower kernel: device br0.20 entered promiscuous mode
    Feb 19 21:52:46 Tower kernel: eth0: renamed from veth755cd47
    Feb 19 21:52:46 Tower kernel: device br0.5 entered promiscuous mode
    Feb 19 21:52:48 Tower kernel: eth0: renamed from veth27dd72e
    Feb 19 21:52:52 Tower kernel: eth0: renamed from veth2505066
    Feb 19 21:52:52 Tower kernel: device br0.15 entered promiscuous mode
    Feb 19 21:52:56 Tower kernel: eth0: renamed from veth0be2d7d
    Feb 19 21:52:56 Tower kernel: device br0 entered promiscuous mode
    Feb 19 21:52:56 Tower CA Backup/Restore: #######################
    Feb 19 21:52:56 Tower CA Backup/Restore: appData Backup complete
    Feb 19 21:52:56 Tower CA Backup/Restore: #######################
    Feb 19 21:52:56 Tower CA Backup/Restore: Backup / Restore Completed
    Feb 19 22:50:49 Tower kernel: device br0.15 left promiscuous mode
    Feb 19 22:50:49 Tower kernel: veth2505066: renamed from eth0
    Feb 19 22:50:53 Tower kernel: eth0: renamed from vethd4660e8
    Feb 19 22:50:53 Tower kernel: device br0.15 entered promiscuous mode
    Feb 19 23:11:07 Tower kernel: CIFS VFS: Server 192.168.5.69 has not responded in 180 seconds. Reconnecting...
    Feb 20 03:40:01 Tower crond[2144]: exit status 3 from user root /usr/local/sbin/mover &> /dev/null
    Feb 20 17:27:19 Tower kernel: CIFS VFS: Server 192.168.5.69 has not responded in 180 seconds. Reconnecting...

    Diagnostics file (this is from after a reboot though, because I can't get it out while it's crashed):

    https://www.dropbox.com/s/u67zkfv3plorvat/tower-diagnostics-20200221-2021.zip?dl=0

     

    Screen after the second crash; I didn't find a way to scroll up to see the previous lines:

    [photo]

     

    HALP! 😵

     

    Best regards, Divi from Finland

  11. I have set up an Unraid server and it's working nicely.

    Now I want to have a cold copy of my files for extra safety, so I'm planning to build a second NAS out of an old Dell server. I don't want it up all the time, as it's power hungry and loud; it would be nice to use wake-on-LAN maybe once a week or once a month, update the backup if anything has changed, and shut the secondary NAS down again.

     

    How would you set something like this up? Would it be possible to configure the whole thing on Unraid (wake, wait for boot until SMB is visible, check files and update what's necessary, send a shutdown command)? That way pretty much any PC with a large enough SMB share could be used for this, and Unraid would do all the work.
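To make the question concrete, this is roughly the cycle I have in mind. The MAC, IP and share name are made-up placeholders, and the run wrapper only records the plan instead of executing anything, so it's a dry-run sketch, not a working backup:

```shell
#!/bin/bash
# Dry-run sketch of the wake -> wait -> sync -> shutdown cycle.
# All addresses and paths are hypothetical; run() logs each step to
# PLAN_LOG instead of executing it, so nothing touches the network.
PLAN_LOG="${PLAN_LOG:-$(mktemp)}"
run() { echo "$*" >> "$PLAN_LOG"; }

BACKUP_MAC="aa:bb:cc:dd:ee:ff"   # placeholder MAC of the Dell box
BACKUP_HOST="192.168.5.99"       # placeholder IP

run etherwake "$BACKUP_MAC"                  # 1) wake it up
for i in $(seq 1 60); do                     # 2) wait for SMB (port 445)
  run nc -z -w 5 "$BACKUP_HOST" 445 && break
done
run mount -t cifs "//$BACKUP_HOST/coldbackup" /mnt/disks/coldbackup   # 3) mount + sync
run rsync -a --delete /mnt/user/vault/ /mnt/disks/coldbackup/vault
run umount /mnt/disks/coldbackup
run ssh root@"$BACKUP_HOST" poweroff         # 4) power it back down
```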

     

    Thoughts? Ideas?

  12. Couldn't wait: I had to put some old spare HDD in, make an array and see how the passthrough goes. So far it's looking good.

     

    Got Windows installed over noVNC, configured some power management settings, and installed the VirtIO drivers. Then passthrough: the PCIe ACS override broke the native groups out really nicely. Had some freezes and other problems after installing the AMD drivers, but changing the machine type to Q35 seemed to fix it all. The Windows start menu also stopped working a couple of times, but that hasn't happened anymore.

     

    I don't have the 1TB NVMe drive in yet, so we'll see if that one messes up the passthrough.

     

    edit: Also tried passing through a USB controller. The chipset USB controller has 4x external USB 2.0, 4x internal USB 2.0, 2x internal USB 3.1 and an internal USB-C 3.1 Gen 2. It has [RESET] and sits in its own IOMMU group 13 = a perfect candidate for passthrough, no problems at all. The USB sound card (DacMagic 100) works fine now; the keyboard and mouse are passed as separate devices, so they won't eat two ports on the back, leaving more room for other stuff. There are also two more controllers, one in the CPU and one ASMedia. The ASMedia controller only has one external USB 3.1 Gen 2 and one external USB-C 3.1 Gen 2 port; I don't use those, so I put the boot stick there so I could also pass through the 4-port USB 3.1 Gen 1 controller in the CPU, but that's not needed atm. I might still want USB ports for the host, maybe for a UPS or something like that.
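For reference, the IOMMU grouping mentioned above can be inspected from `/sys`. The sketch below is a Python take on the common shell one-liner that walks `/sys/kernel/iommu_groups`; the root path is a parameter so it can be pointed at a copy for testing.

```python
from pathlib import Path

def iommu_groups(root: str = "/sys/kernel/iommu_groups") -> dict:
    """Map each IOMMU group number to the PCI addresses of its devices.

    A controller that sits alone in its group (like the chipset USB
    controller in group 13 above) can usually be passed through cleanly."""
    groups: dict = {}
    for dev in sorted(Path(root).glob("*/devices/*")):
        # Path layout: .../iommu_groups/<group>/devices/<pci-address>
        group = dev.parent.parent.name
        groups.setdefault(group, []).append(dev.name)
    return groups

# Example: print one line per group, like the usual shell one-liner.
# for group, devices in sorted(iommu_groups().items(), key=lambda g: int(g[0])):
#     print(f"IOMMU group {group}: {' '.join(devices)}")
```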

     

    I think I'll stop posting in this thread now, as everything seems to work great and the hardware is up to the task. When the build is finished I'll post it in the UCD section. If anyone else is looking to use a Gigabyte Aorus X470 Ultra Gaming for something like this, go for it! It's not bad at all!

     

    Big-time thanks to testdasi, your answer was a big encouragement for me to go on with Unraid!

     

    lGktSWo.jpg

     

    2018041110424516_big.thumb.png.8820682a9abcf7b43264b574154c0113.png

  13. Threw the parts on the test bench... RX580 in the first 16x slot, GT520 in the second and the Pro1000/PT in the last one. Seems like a nice setup, everything has room to breathe, and I managed to set the GT520 as primary in the BIOS. Time to update the BIOS, make the USB stick and get everything else ready. Hopefully next weekend I'll have the case to build in, and can start figuring out how to get the drives and data over from FreeNAS without screw-ups. I have a couple of offline backups (the WD Green in the picture and another 1TB USB HDD) that are luckily still able to hold all of my most precious data, so I should be fine... :)

     

    1002970677_2020-01-1113_43_22.thumb.jpg.7549fd3ee96044428e32c5947b707936.jpg

  14. 39 minutes ago, testdasi said:

    Let's start with the good news.

    You have a Gigabyte mobo which has a BIOS option for you to pick which PCIe slot as "Initial Display Output" i.e. for Unraid host.

    That will save you needing to plug the GT520 on the fast 1st PCIe slot. IIRC the 520 is a single-slot GPU so you may even be able to use the bottom PCIe slot for it.

    It will also allow your main GPU to be secondary GPU (despite being plugged in the fast 1st PCIe slot) which tends to make life a lot easier with PCIe passthrough.

     

    Now the not-so-good news.

    The RX580 is notoriously not happy with being passed through - I have seen many posts with issues on here.

    There is a "Solved" post which reported success with having the RX 580 as secondary GPU + vfio-stub it so you might still be ok.

    I would highly recommend watching SpaceInvaderOne's guides on YouTube, particularly the ones about GPU passthrough (I think he has 3 vids ranging from basic to advanced tweaks; you should watch them all)

     

    Now with regards to storage.

    • It is impossible to pass through the 660p via the PCIe method to a VM on a Linux host (e.g. Unraid) due to the Linux kernel not liking the controller. So the best you can do is mount it as an Unassigned Device and put a vdisk on it (or pass it through via the ata-id method).
      • The 660p real life performance varies from "as good as" to "worse than" a good SATA SSD. Keep that in mind if you are considering purchasing another SSD. QLC for SSD is like SMR for HDD i.e. cheap but not cheerful.
      • I personally prefer a vdisk instead of ata-id passthrough since I can somewhat benefit more from TRIM with a qcow2 vdisk + scsi device (see johnnie's guide in the VM FAQ of the VM forum).
    • You do NOT need cache for the array but you should still have at least a cache drive.
      • The original intent for the cache pool to serve as write cache came about before (a) reconstruct write aka turbo write was implemented and (b) HDD were still ultra slow. Nowadays with turbo write on, you can run pretty close to the max speed of your slowest HDD while writing.
      • The cache drive is now used for docker image, libvirt, appdata etc. They are pretty important for a smooth experience with Unraid if you are using it beyond a simple NAS.
    • Mirrored cache pool is only required if you have critical data to protect against drive failure. I used to run a pool but now go for single-drive cache and instead focus on making sure my backup strategy is up to par.
      • One of the main reasons is that I have found SSD (that is not Intel-based) incredibly resilient to catastrophic failure. They tend to fail very gracefully as dead cells are gradually replaced with reserved cells and eventually you just lose capacity as the reserve runs out.
      • Intel, in contrast, engages in the anti-consumer practice of locking your SSD in read-only mode if all reserves are used under the pretext of data protection. While it does take a relatively long time to use up all the reserve, it is still anti-consumer and the practice is even more concerning with QLC.

    Awesome information!

     

    I've watched a few SpaceInvaderOne videos but will look for more; there were 100+ on the playlist, so it's a bit challenging to find the relevant ones. :D

    I will set the thing up on the test bench and see what I can do with it. Maybe try the vdisk method to use the NVMe drive for VMs, and possibly one SATA SSD as cache. Everything critical stays on the array anyway, and I have an offline backup as well.
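The vdisk method mentioned above is typically a qcow2 image attached on a SCSI bus with discard enabled, so TRIM from the guest can punch holes in the image (the approach testdasi's quote refers to). As a sketch with made-up paths and device names, the libvirt `<disk>` element would look roughly like the string this helper builds:

```python
def scsi_vdisk_xml(image_path: str, target_dev: str = "sdb") -> str:
    """Build a libvirt <disk> element for a qcow2 vdisk on a SCSI bus
    with discard='unmap', so guest TRIM commands can shrink the image.
    Assumes the VM also has a virtio-scsi controller defined."""
    return (
        f'<disk type="file" device="disk">\n'
        f'  <driver name="qemu" type="qcow2" discard="unmap"/>\n'
        f'  <source file="{image_path}"/>\n'
        f'  <target dev="{target_dev}" bus="scsi"/>\n'
        f'</disk>'
    )

# The image itself would be created beforehand with something like:
#   qemu-img create -f qcow2 /mnt/disks/nvme/domains/win10/vdisk1.qcow2 200G
```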

  15. I've been looking through the Unraid documentation and the forums, but it's hard to put all the information together without hands-on experience with Unraid.

    Most sources point the user to just the array plus a cache (I guess the main use case for the OS is a simple NAS box). A cache pool of two drives seems to be the preferred way, and the default option for a pool is a mirror, if I got this correct. However, I found some posts that lead me to think other configurations with more SSDs are possible as well.

     

    In my use case, would it be possible and good practice to use the existing 120GB Kingston SSDs as cache for the array, mount the 1TB Intel NVMe drive separate from the array and cache, and put all the VMs, dockers etc. there? I found some information suggesting that backing up VMs should be quite a simple task, so I could get away with one cheap QLC SSD as the primary drive for VMs. For me it would be secure enough if I could automate backing up the VMs and dockers maybe once a week to the HDD array. That way I could use the cheap, fast NVMe drive for VMs and wouldn't need it to be mirrored. I'm sure it's not optimal, but I'd like to get some use out of a drive I already have.

     

    Another option I was thinking of... if I ditch those 120GB SSDs and use only the 1TB NVMe drive as cache for both the array and the VMs/dockers etc. I suppose I can pool it with a slower 1TB SATA SSD, but can I somehow choose the NVMe drive as the primary drive where everything gets written first, so I could benefit from its speed? Or will the pool limit the cache speed to its slowest member?

    I found a post saying that pool speed will be limited to the slowest member. I need to test whether I can use the second M.2 slot (M2B) for a similar NVMe drive and make a pool of two that way. It's only PCIe 2.0 x4, though; the M2A slot I'm using at the moment is 3.0 x4.

  16. So I currently have a separate Windows 10 gaming PC, and a FreeNAS server running Pi-hole, Syncthing etc. I'm looking to put it all in one machine, and so far Unraid looks like the best option for me.

     

    The current FreeNAS box is running a 2x4TB mirror, and that's what I would continue to use. I like the fact that expanding would be easy with Unraid. I want to run a Windows 10 VM and dedicate a GPU, USB, audio etc. to it; all other VMs and containers will run in the background and require only SSH/VNC or similar.

     

    Parts I already have at my disposal:

    AMD Ryzen 5 2600

    Gigabyte Aorus X470 Ultra Gaming

    Nvidia GT520 for the host

    AMD RX580 8GB to pass through to Windows

    24GB of RAM

    1TB Intel 660p NVMe M.2 SSD

    2x 120GB Kingston A400 SATA SSD

    2x 4TB Seagate Ironwolf

    Dual-port PCIe Intel NIC for 2x1GbE LACP with VLANs

    Silent Base 801 case

    Seasonic Focus Plus 550W PSU

     

    How would you suggest setting it all up?

    The 1TB SSD would be quite a fast cache, but will it hold up, being a cheap QLC drive? Would it be better to sell all the SSDs and get something like 2x 500GB Samsung Evo or similar? Those would be limited to SATA3, though... The problem is that the motherboard doesn't support two big NVMe SSDs, and they would also eat up a lot of PCIe lanes. The CPU only has 20, and the GT520 needs to be in the first 16x slot, I think?

     
