Leaderboard

Popular Content

Showing content with the highest reputation on 10/12/20 in all areas

  1. Something really cool happened. I woke up this morning and my profile had this green wording on it. I just officially became the newest UNRAID "Community Developer". I just wanted to say thanks to UNRAID and @SpencerJ for bestowing this honor. I am pleased to be able to add to the community overall. You guys rock! Honestly, I have been a part of many forums over the years, and I have never seen a community so eager to help, never condescending, always maintaining professional decorum, and overall just a great place to be. I'm proud to be a part of it!
    7 points
  2. Certainly well deserved with your great contributions.
    2 points
  3. You have just described how almost all software functions 🤣
    2 points
  4. Application Name: Nextcloud
     Application Site: https://nextcloud.com/
     Docker Hub: https://hub.docker.com/r/linuxserver/nextcloud/
     GitHub: https://github.com/linuxserver/docker-nextcloud
     Note: Requires MariaDB or MySQL; please note the issues with binlogging detailed here. This is a Nextcloud issue which we have no control over. https://info.linuxserver.io/issues/2023-06-25-nextcloud/
     For a setup guide please see the article on our website here.
     The image now upgrades Nextcloud internally. For upgrading the Nextcloud version there are 3 options:
     1. Update via the webui when the upgrade shows as available.
     2. Update from the terminal when the upgrade shows as available with:
        docker exec -it nextcloud updater.phar
     3. Manual upgrade using occ:
        ## Turn on maintenance mode
        docker exec -it nextcloud occ maintenance:mode --on
        ## Backup current nextcloud install
        docker exec -it nextcloud mv /config/www/nextcloud /config/www/nextcloud-backup
        ## Grab newest nextcloud release and unpack it
        docker exec -it nextcloud wget https://download.nextcloud.com/server/releases/latest.tar.bz2 -P /config
        docker exec -it nextcloud tar -xvf /config/latest.tar.bz2 -C /config/www
        ## Copy across old config.php from backup
        docker exec -it nextcloud cp /config/www/nextcloud-backup/config/config.php /config/www/nextcloud/config/config.php
        ## Now restart docker container
        docker restart nextcloud
        ## Perform upgrade
        docker exec -it nextcloud occ upgrade
        ## Turn off maintenance mode
        docker exec -it nextcloud occ maintenance:mode --off
        ## Now restart docker container
        docker restart nextcloud
        Once all is confirmed as working:
        ## Remove backup folder
        docker exec -it nextcloud rm -rf /config/www/nextcloud-backup
        ## Remove Nextcloud tar file
        docker exec -it nextcloud rm /config/latest.tar.bz2
     Please post any questions/issues relating to this docker in this thread. If you are not using Unraid (and you should be!) then please do not post here; instead head to linuxserver.io to see how to get support.
    1 point
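     A small optional check after any of the upgrade paths above, assuming the container is named nextcloud as in the commands quoted in the post (occ status reports the installed version and whether maintenance mode is still on):
        # verify the upgrade landed and maintenance mode is off
        docker exec -it nextcloud occ status
        # if maintenance mode is still reported as on, turn it off again
        docker exec -it nextcloud occ maintenance:mode --off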
  5. Changes vs. 6.9.0-beta29 include:
     Added a workaround for mpt3sas not recognizing devices with certain LSI chipsets. We created this file: /etc/modprobe.d/mpt3sas-workaround.conf, which contains this line:
        options mpt3sas max_queue_depth=10000
     When the mpt3sas module is loaded at boot, that option will be specified. If you added "mpt3sas.max_queue_depth=10000" to the syslinux kernel append line, you can remove it. Likewise, if you manually load the module via the 'go' file, you can also remove that. When/if the mpt3sas maintainer fixes the core issue in the driver we'll get rid of this workaround.
     Reverted libvirt to v6.5.0 in order to restore storage device passthrough to VMs.
     A handful of other bug fixes, including 'unblacklisting' the ast driver (Aspeed GPU driver). For those using that on-board graphics chip, primarily Supermicro boards, this should increase the speed and resolution of the local console webGUI.
     Version 6.9.0-beta30 2020-10-05 (vs -beta29)
     Base distro:
        libvirt: version 6.5.0 [revert from version 6.6.0]
        php: version 7.4.11 (CVE-2020-7070, CVE-2020-7069)
     Linux kernel:
        version 5.8.13
        ast: removed blacklisting from /etc/modprobe.d
        mpt3sas: added /etc/modprobe.d/mpt3sas-workaround.conf to set "max_queue_depth=10000"
     Management:
        at: suppress session open/close syslog messages
        emhttpd: correct 'Erase' logic for unRAID array devices
        emhttpd: wipefs encrypted device removed from multi-device pool
        emhttpd: yet another btrfs 'free/used' calculation method
        webGUI: Update statuscheck
        webGUI: Fix dockerupdate.php warnings
    1 point
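     A quick way to confirm the mpt3sas workaround described above; the modprobe.d path comes from the release notes, while the sysfs parameter path is the standard location that only exists once the module is actually loaded:
        # confirm the workaround file shipped with the release
        cat /etc/modprobe.d/mpt3sas-workaround.conf
        # confirm the option was actually applied to the loaded module
        cat /sys/module/mpt3sas/parameters/max_queue_depth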
  6. There is a tradition that the new guy has to buy the beer....
    1 point
  7. Thank you very much for this set of numbers. Here are my observations:
     Probably samba aio is not enabled at all, even if 'aio read size' or 'aio write size' is non-zero. It's very possible the differences you are seeing are just "noise". According to 'man smb.conf' there are a number of preconditions which must exist for it to be active; probably something is not met?
     Not surprised by the dismal shfs performance with 25K small files. The union introduces lots of overhead to ensure consistent metadata. Maybe some checks can be relaxed - I'll look into that.
     For the next release I put this into /etc/samba/smb.conf:
        # disable aio by default
        aio read size = 0
        aio write size = 0
     As you observed, it's easy to re-enable via the config/smb-extra.conf file.
    1 point
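     A minimal sketch of re-enabling AIO via the flash config file the post mentions (config/smb-extra.conf on the boot flash); the [global] header and the threshold value of 1 (enable AIO for all request sizes) are assumptions, not taken from the post:
        # /boot/config/smb-extra.conf -- merged into Samba's config at array start
        [global]
           aio read size = 1
           aio write size = 1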
  8. Well deserved! Thank you for all of your contributions to the Unraid community.
    1 point
  9. <os>
       <type arch='x86_64' machine='pc-q35-3.1'>hvm</type>
       <loader readonly='yes' type='pflash'>/mnt/user/domains/MacinaboxCatalina/ovmf/OVMF_CODE.fd</loader>
       <nvram>/mnt/user/domains/MacinaboxCatalina/ovmf/OVMF_VARS.fd</nvram>
     </os>
     This <os> tag is what you need to change.
    1 point
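     One way to make that change (a sketch; the domain name MacinaboxCatalina is inferred from the paths above and may differ on your system) is to edit the VM definition directly instead of using the form view:
        # open the VM's XML and adjust the <loader> and <nvram> paths inside the <os> block
        virsh edit MacinaboxCatalina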
  10. You were correct, that worked. Thanks!
    1 point
  11. Oh.... THANKS!!! It works! But why did it change by itself?
    1 point
  12. This is because you've chosen the wrong network type, I think... Use br0, not Custom. If you use Custom it has its own IP address (on port 8080), and then the port mappings don't matter because it has its own IP address.
    1 point
  13. No, it's true - my mistake. The problem is that the filesystem is fully allocated; you need to run a balance. See here for more details: https://forums.unraid.net/topic/62230-out-of-space-errors-on-cache-drive/?do=findComment&comment=610551
    1 point
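     A minimal sketch of checking allocation and running a filtered balance on the cache pool, assuming the default /mnt/cache mount point; the 75% usage filter is an example value, and the linked thread has the full details:
        # show how much of the pool is allocated vs. actually used
        btrfs filesystem usage /mnt/cache
        # rewrite data chunks that are at most 75% full so their space can be reclaimed
        btrfs balance start -dusage=75 /mnt/cache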
  14. It's passed differently in FUSE3, no longer as a mount option, but shfs will set that config bit when init happens as a result of mount. We added that option back when we were using FUSE2 because it helped performance with servers that had 10Gbit Ethernet. FUSE3 added several I/O improvements, among them eliminating the kernel->user->kernel data copies, so it doesn't surprise me if direct_io on/off makes no difference.
    1 point
  15. Email [email protected] if you still haven't received it.
    1 point
  16. Here is where we got the Linux driver: https://www.realtek.com/en/component/zoo/category/network-interface-controllers-10-100-1000m-gigabit-ethernet-usb-3-0-software Note the description "... for kernel up to 5.6". Unraid OS 6.9 is on 5.8 kernel. Realtek is notorious for lagging behind stable Linux kernels. My suggestion is to ask them to update their driver. Edit: to clarify: we are including that latest driver in Unraid OS because it does compile without error with the 5.8 kernel. But just because there are no compilation errors doesn't mean it will function properly. We've seen this with other drivers too, e.g., Highpoint r750.
    1 point
  17. I want to add one more thing about using only one GPU. In my case, with Unraid 6.8.3 and a GTX Titan Black, I'm able to start Unraid with the GPU output; my VM (macOS) is started automatically with that GPU passed through plus a vbios, and it switches video correctly. But when I shut down the VM the GPU doesn't re-attach to Unraid, so to shut down the server I need to short-press the power button (or log in with ssh from somewhere else); all OK, the server shuts down properly.
      I also have another system with Linux Manjaro (CLI only, no GNOME or KDE) running kernels 5.8 (latest stable) and 5.4 LTS: on both kernels the GPU is able to attach to Linux, to the VM, and back to Linux; in other words the GPU attaches to the nouveau driver (Linux), to vfio (when the VM is started), and back to nouveau. The problem is that sometimes the GPU hangs with DMAR errors in the logs and a long press of the power button is required to shut Linux down.
      So, managing GPU passthrough with only one GPU differs between GPUs - you may be lucky or not. I like the way Unraid manages this with my GPU, with no problems doing a proper shutdown.
    1 point
  18. Oh my goodness, thank you sooooo much!!! I had no idea this was causing it and after your fix, it seems to run perfectly! I greatly appreciate it man seriously, you have no idea how much I struggled with this!! Thank you!!
    1 point
  19. Thanks @binhex. Updated the kernel and it was good to go.
    1 point
  20. This tweak has nothing to do with your RAM usage. The source of your problem must be something else (or you used a path that targeted your RAM). Do you transcode to RAM using the /tmp or /shm folder? Then this is your problem. Next time you should investigate the RAM usage before restarting your server / containers. Use this command:
         ps aux | awk '{print $6/1024 " MB\t\t" $11}' | sort -n
      In addition, check the sizes of your ramdisks as follows:
         df -h -t tmpfs
      And finally you could check the size of your /tmp folder, which is also located in your RAM:
         du -sh /tmp
    1 point
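     If transcoding to RAM does turn out to be the culprit, one hedged mitigation (the path and the 4G cap are illustrative, not from the post) is to give the transcode directory its own size-limited tmpfs so a runaway transcode cannot consume all of the server's memory:
        # create a dedicated, size-capped ramdisk and point the container's transcode path at it
        mkdir -p /tmp/transcode
        mount -t tmpfs -o size=4g tmpfs /tmp/transcode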
  21. You probably had Cloudflare cache/proxy turned on, which we recommend against. It's explained in the docs article linked in the first post.
    1 point
  22. Well that certainly seems to be the case here. I started up a live image of Linux Mint on the server and found that the network connection wouldn't start. Moved the ethernet cable to a different port on the switch and it connected fine. So I went back to my Unraid console and found that my Unraid server now appears in Windows Network. Pinging 192.168.1.231 works and invader resolves to that IP address. Equally I can ping my Windows PC from the Unraid console. I've never had a port on a switch go like that, and it's not a cheap switch either (Draytek P2121 - went for something decent to power the home CCTV cameras). Thank you all for the help - I feel like a bit of a plonker!
    1 point
  23. I read another post about changing the split level setting. Mine was set to "1" in my share config file. I removed it and changed it to null (""). Now it appears to be working. Strange that my other Unraid server is set to 1 and working fine; not sure why this one needed to be changed.
    1 point
  24. Appreciate the details on the device limits -- I definitely misread that to be inclusive of cache. That'll allow me to put the other 5x disks to good use once I receive the additional drive caddies. With regard to the disk detection, it's definitely the 3.3V power pin. While I do not have Molex connectors to test, I was able to pick up a small roll of Kapton tape and mask it off. The fourth drive was detected on reboot and is now clearing! What doesn't make sense, however, is that the other 3x drives are the same and were detected without having to mask the pin. To be on the safe side I may very well pull the drives once the current clear finishes and the drive is added to the array, though this remains a mystery. Thank you @Squid and @JorgeB -- I appreciate the quick responses and the sanity check!
    1 point
  25. I found out I had a bad SATA controller on my motherboard (I have two on this one). I removed all the drives from the bad controller, then put everything on my LSI RAID card and the working controller. Now it's working flawlessly with high write speeds. Parity sync finished in 6 hours for an 18TB array. Thanks everyone for your help.
    1 point
  26. 1 point
  27. Thanks again for supporting this. WireGuard seems to be working great. Going to start hopping between the servers to see which works best long term.
    1 point
  28. It works, thank you! Running a full benchmark on everything now.
    1 point
  29. The NH-L12S and L9x65 are also good; you will find they use the same mounting method and support both Intel and AMD, just not the L9i.
    1 point
  30. Correct me if I'm wrong, but it could be this line: <rom file='/mnt/user/isos/1050ti.rom'/> should be <rom bar='on' file='/mnt/user/isos/1050ti.rom'/>
    1 point
  31. Hi, performance will be good for Plex/Emby. Unless size is really an issue, though, I'd go for at least an mATX build, as you will have more expansion options later, e.g. the Node 804 ... other brand cases are available. Space disappears quickly when you start adding media. Also, for HDDs, the parity size sets the maximum for the other drives. With only 4 SATA connectors I'd be looking at at least a 12TB parity and a second drive to start, then add the extra drives as required. The cost per TB is fairly similar; you just have to soak up the cost of the larger parity first off. While you can add a SATA card into the PCI-E slot, going with larger drives initially pushes that need out quite a way. If you did go for a build with more options (mATX), I'd still start with at least 8TB drives. There is no throughput advantage to more drives, and you can easily add additional drives to the array, unlike classic RAID systems. For power consumption, the Intel CPU will idle at a few watts; with drives spun down and a decent quality, not oversized PSU (as you have) I'd be expecting less than 30W idle for the full system. Low power quickly becomes a zero-sum game where the power savings don't cover the additional hardware cost in its lifetime.
    1 point
  32. @Naonak I will build them ASAP. Prebuilt images are now finished and ready to download.
    1 point
  33. The instant we do this, a lot of people using GPU passthrough to VMs may find their VMs don't start or run erratically until they go and mark them for stubbing on Tools/System Devices. There are a lot of changes already in 6.9 vs. 6.8, including multiple pools (and changes to the System Devices page), so our strategy is to move the community to 6.9 first, give people a chance to use the new stubbing feature, then produce a 6.10 where all the GPU drivers are included.
    1 point
  34. Any comment on whether write amplification on BTRFS cache drives is fixed yet? I am severely affected by this, but don't want to lose my redundant cache by being forced to switch to XFS. If you instead supported XFS cache RAID, that might work as well, though.
    1 point
  35. Supposed to be 6.10, or whatever it's called; they said that in the podcast interview. Make your own builds for now, or download ich777's builds when he posts them, instead of waiting for linuxserver.io builds.
    1 point
  36. Hi Dent_, nope, I didn't get back to troubleshooting as it wasn't a huge concern at the time, just something I wanted to enable to back up my parents' devices. At this point my best guess is something to do with DNS and the proxy? I have a Pi-hole container that is my DNS but not DHCP, and it doesn't seem to recognize CNAMEs no matter what I do (newbie, so I hope I'm using the term correctly; an example of this behavior is that I have to use the direct IP of my unRAID server instead of the given name to access the webGUI). I seem to remember thinking that I may have set the wrong, or not all of the necessary, ports in the proxy config file. So that may be something to look into if you're troubleshooting yourself! Let me know if you figure it out. If you do go this route, don't forget to set a password on your UrBackup installation!
    1 point
  37. @Jaster Sorry, I linked you the wrong thread. Here is the one I use for my snapshots. Method 2 is what I use. The initial snapshot I do by hand every 2-3 months; for that snapshot I shut all my VMs down to have them in a safe shutdown state.
      1. Create a read-only snapshot of my VMs share. This share isn't the default "domains" share created by Unraid; it is already a BTRFS subvolume on my cache, created as described in the thread from JorgeB, and hosts all my VMs.
         # create readonly snapshot
         btrfs subvolume snapshot -r /mnt/cache/VMs /mnt/cache/VMs_backup
         sync
      2. Send/receive the initial snapshot copy to the target drive mounted at "VMs_backup_hdd". This process will take some time, transferring all my vdisks.
         btrfs send /mnt/cache/VMs_backup | btrfs receive /mnt/disks/VMs_backup_hdd
         sync
      3. After that I have 2 scripts running. The first script runs every Sunday, checking if VMs are running and, if so, shutting them down and taking a snapshot named "VMs_backup_offline_" with the current date at the end.
         #!/bin/bash
         #backgroundOnly=false
         #arrayStarted=true
         cd /mnt/cache/VMs_backup
         sd=$(echo VMs_backup_off* | awk '{print $1}')
         ps=$(echo VMs_backup_off* | awk '{print $2}')
         if [ "$ps" == "VMs_backup_offline_$(date '+%Y%m%d')" ]
         then
             echo "There's already a snapshot from today"
         else
             for i in `virsh list | grep running | awk '{print $2}'`; do virsh shutdown $i; done
             # Wait until all domains are shut down or the timeout has been reached.
             END_TIME=$(date -d "300 seconds" +%s)
             while [ $(date +%s) -lt $END_TIME ]; do
                 # Break while loop when no domains are left.
                 test -z "`virsh list | grep running | awk '{print $2}'`" && break
                 # Wait a little, we don't want to DoS libvirt.
                 sleep 1
             done
             echo "shutdown completed"
             virsh list | grep running | awk '{print $2}'
             btrfs sub snap -r /mnt/cache/VMs /mnt/cache/VMs_backup_offline_$(date '+%Y%m%d')
             for i in `virsh list --all --autostart|awk '{print $2}'|grep -v Name`; do virsh start $i; done
             sync
             btrfs send -p /mnt/cache/VMs_backup/$ps /mnt/cache/VMs_backup_offline_$(date '+%Y%m%d') | btrfs receive /mnt/disks/VMs_backup_hdd
             if [[ $? -eq 0 ]]; then
                 /usr/local/emhttp/webGui/scripts/notify -i normal -s "BTRFS send/receive finished" -d "Script executed" -m "$(date '+%Y-%m-%d %H:%M') Information: BTRFS VM offline snapshot to HDD completed successfully"
                 btrfs sub del /mnt/cache/$sd
                 #btrfs sub del /mnt/disks/VMs_backup_HDD/VMs_backup/$sd
             else
                 /usr/local/emhttp/webGui/scripts/notify -i warning -s "BTRFS send/receive failed" -d "Script aborted" -m "$(date '+%Y-%m-%d %H:%M') Information: An offline snapshot has already been created today"
             fi
         fi
      4. The second script runs daily and snapshots the VMs as "VMs_backup_online_" with the date, no matter whether they are running or not. Keep in mind that if you have to restore snapshots of VMs which were running at the time the snapshot was taken, they will be in a "crashed" state. I have not had any issues with that so far, but there might be situations with databases running in a VM which could break because of this; that is why I take the weekly snapshots with all my VMs turned off, just in case.
         #!/bin/bash
         #description=
         #arrayStarted=true
         #backgroundOnly=false
         cd /mnt/cache/VMs_backup
         sd=$(echo VMs_backup_onl* | awk '{print $1}')
         ps=$(echo VMs_backup_onl* | awk '{print $2}')
         if [ "$ps" == "VMs_backup_online_$(date '+%Y%m%d')" ]
         then
             echo "There's already a snapshot from today"
         else
             btrfs sub snap -r /mnt/cache/VMs /mnt/cache/VMs_backup_online_$(date '+%Y%m%d')
             sync
             btrfs send -p /mnt/cache/VMs_backup/$ps /mnt/cache/VMs_backup_online_$(date '+%Y%m%d') | btrfs receive /mnt/disks/VMs_backup_hdd
             if [[ $? -eq 0 ]]; then
                 /usr/local/emhttp/webGui/scripts/notify -i normal -s "BTRFS send/receive finished" -d "Script executed" -m "$(date '+%Y-%m-%d %H:%M') Information: BTRFS VM online snapshot to HDD completed successfully"
                 btrfs sub del /mnt/cache/$sd
                 #btrfs sub del /mnt/disks/backup/$sd
             else
                 /usr/local/emhttp/webGui/scripts/notify -i warning -s "BTRFS send/receive failed" -d "Script aborted" -m "$(date '+%Y-%m-%d %H:%M') Information: An online snapshot has already been created today"
             fi
         fi
      I don't have it automated in a way that deletes old snapshots automatically; I monitor the target drive and, if it's getting full, I delete some old snapshots. The first command lists all the snapshots and the second deletes a specific one. Don't delete the initial read-only snapshot if you have differential snapshots building on top of it.
         btrfs sub list /mnt/disks/VMs_backup_hdd
         btrfs sub del /mnt/disks/VMs_backup_hdd/VMs_Offline_20181125
      If you have to restore a vdisk, simply go into the specific folder and copy the vdisk of the specific VM back to its original share on the cache. The XML and NVRAM files for the VMs aren't backed up by this, only the vdisks. To back up those files you can use the app "Backup/Restore Appdata" to back up libvirt.img, for example.
      EDIT: Forgot to mention, I use a single 1TB NVMe cache device formatted with BTRFS and a single old spinning-rust 1.5TB HDD as an unassigned device as the target for the snapshots. Nothing special, no BTRFS RAID involved.
    1 point
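     A minimal sketch of the restore step described at the end of the post above, using the snapshot naming from the scripts; the VM folder name "Win10", the vdisk filename, and the snapshot date are placeholders, not taken from the post:
        # copy a vdisk out of an offline snapshot back to the live VMs share on the cache
        cp /mnt/disks/VMs_backup_hdd/VMs_backup_offline_20201004/Win10/vdisk1.img \
           /mnt/cache/VMs/Win10/vdisk1.img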
  38. Running on a 3700X here. I can confirm this bug is fixed.
    1 point
  39. You don't quite need all those super-advanced techniques, as they only offer marginal improvement (if any). And you have to take into account that some of those tweaks were for older-gen CPUs (e.g. NUMA tuning was only required for Threadripper gen 1 + 2) and some were workarounds while waiting for the software to catch up with the hardware (e.g. the cpumode tweak to fake TR as Epyc so the cache is used correctly is no longer required; use Unraid 6.9.0-beta1 for the latest 5.5.8 kernel which supposedly works better with 3rd-gen Ryzen; compile your own 6.8.3 with the 5.5.8 kernel for the same reason, etc.)
      In terms of "best practice" for a gaming VM, I have these "rules of hand" (cuz there are 5 🧐):
      1. Pick all the VM cores from the same CCX and CCD (i.e. die); this improves fps consistency (i.e. less stutter). Note: this is specific to a gaming VM, for which maximum performance is less important than consistent performance. For a workstation VM (for which max performance is paramount), VM cores should be spread evenly across as many CCX/CCDs as possible, even if it means partially using a CCX/CCD.
      2. Isolate the VM cores in syslinux. The 2020 advice is to use isolcpus + nohz_full + rcu_nocbs (the old advice was to just use isolcpus).
      3. Pin the emulator to cores that are NOT the main VM cores. The advanced technique is to also pin iothreads, but this only applies if you use a vdisk / ata-id pass-through; from my own testing, iothread pinning makes no difference with NVMe PCIe pass-through.
      4. Do the MSI fix with msi_util to help with sound issues. The advanced technique is to put all devices from the GPU on the same bus with multifunction; to be honest though, I haven't found this to make any difference.
      5. Don't run parity sync or any heavy IO / CPU activities while gaming.
      In terms of where you can find these settings:
      1. A 3900X has 12 cores, which is 3x4 -> every 3 cores is a CCX, every 2 CCX is a die (and your 3900X has 2 dies + an IO die).
      2. Watch the SpaceInvader One tutorial on YouTube. Just remember to do what you do with isolcpus for nohz_full + rcu_nocbs as well.
      3. Watch the SpaceInvader One tutorial on YouTube. This is a VM xml edit.
      4. Watch the SpaceInvader One tutorial on YouTube. He has a link to download msi_util.
      5. No explanation needed.
      Note that due to the inherent CCX/CCD design of Ryzen, you can never match an Intel single-die CPU when it comes to consistent performance (i.e. less stutter). And this comes from someone currently running an AMD server, not an Intel fanboy. And of course, running a VM will always introduce some variability over bare metal.
    1 point
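     A hedged sketch of rule 2 above for a 3900X; the core ranges assume the second CCD (cores 6-11 with HT siblings 18-23) is dedicated to the VM, which is an example layout, not a recommendation from the post:
        # hypothetical append line for /boot/syslinux/syslinux.cfg -- adjust the ranges to your own pinning:
        #   append isolcpus=6-11,18-23 nohz_full=6-11,18-23 rcu_nocbs=6-11,18-23 initrd=/bzroot
        # after rebooting, confirm the parameters took effect:
        cat /proc/cmdline
        cat /sys/devices/system/cpu/isolated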